Sample records for density estimation methods

  1. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur at low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
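The change-in-ratio idea above builds on the classic two-sample CIR estimator. As a generic illustration (the Paulik-Robson form, not the survival-based variant the authors develop), pre-removal abundance is recovered from the shift in class proportions caused by a known removal; all numbers below are hypothetical.

```python
# Classic two-sample change-in-ratio (CIR) abundance estimator
# (Paulik-Robson form). A generic sketch of the CIR idea, NOT the
# survival-based variant developed in the paper above.

def cir_total_abundance(p1, p2, removed_x, removed_total):
    """Estimate total pre-removal population size.

    p1, p2        : proportion of class x in the population before/after removals
    removed_x     : number of class-x animals removed
    removed_total : total number of animals removed
    """
    if p1 == p2:
        raise ValueError("CIR is undefined when the ratio does not change")
    return (removed_x - p2 * removed_total) / (p1 - p2)

# Hypothetical survey: adults are 60% of the population before a removal
# of 100 animals (80 of them adults); afterwards adults are 40%.
n_before = cir_total_abundance(p1=0.6, p2=0.4, removed_x=80, removed_total=100)
```

With these numbers the estimator returns 200 animals before removal, which is internally consistent: 120 of 200 are adults (60%), and after removing 80 adults and 20 others, 40 of the remaining 100 are adults (40%).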

  2. Large Scale Density Estimation of Blue and Fin Whales: Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density

    DTIC Science & Technology

    2015-09-30

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Len Thomas & Danielle Harris, Centre... The goal of this project is to develop and implement a new method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope

  3. Investigation of Aerosol Surface Area Estimation from Number and Mass Concentration Measurements: Particle Density Effect.

    PubMed

    Ku, Bon Ki; Evans, Douglas E

    2012-04-01

    For nanoparticles with nonspherical morphologies, e.g., open agglomerates or fibrous particles, it is expected that the actual density of agglomerates may differ significantly from the bulk material density. It is further expected that using the material density may upset the relationship between surface area and mass when a method for estimating aerosol surface area from number and mass concentrations (referred to as "Maynard's estimation method") is used. It is therefore necessary to quantify how strongly Maynard's estimation method depends on particle morphology and density. In this study, aerosol surface area estimated from number and mass concentration measurements was evaluated and compared with values from two reference methods: a method proposed by Lall and Friedlander for agglomerates and a mobility-based method for compact nonspherical particles, using well-defined polydisperse aerosols with known particle densities. Polydisperse silver aerosol particles were generated by an aerosol generation facility. Generated aerosols had a range of morphologies, count median diameters (CMD) between 25 and 50 nm, and geometric standard deviations (GSD) between 1.5 and 1.8. The surface area estimates from number and mass concentration measurements correlated well with the two reference values when gravimetric mass was used. The aerosol surface area estimates from Maynard's estimation method were comparable to the reference method for all particle morphologies, within surface area ratios of 3.31 and 0.19 for assumed GSDs of 1.5 and 1.8, respectively, when the bulk material density of silver was used. The difference between Maynard's estimation method and the surface area measured by the reference method for fractal-like agglomerates decreased from 79% to 23% when the measured effective particle density was used, while the difference for nearly spherical particles decreased from 30% to 24%. The results indicate that using the particle density of agglomerates improves the accuracy of Maynard's estimation method and that an effective density should be taken into account, when known, when estimating the aerosol surface area of nonspherical aerosols such as open agglomerates and fibrous particles.
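The core conversion behind a Maynard-type estimate can be sketched with the standard Hatch-Choate relations for a lognormal size distribution: back out the count median diameter (CMD) from the measured mass concentration under an assumed GSD, then convert to surface area. This is an illustrative sketch of the idea under a spherical-particle assumption, not the published method; the example inputs are hypothetical.

```python
import math

# Sketch of the idea behind a Maynard-type surface-area estimate:
# assume a lognormal number distribution with an assumed GSD, recover
# the CMD from the measured mass concentration via the Hatch-Choate
# relations, then convert to surface area. Spherical particles assumed;
# numbers are illustrative only.

def surface_area_estimate(number_conc, mass_conc, gsd, density):
    """number_conc in particles/m^3, mass_conc in kg/m^3, density in kg/m^3.
    Returns estimated surface-area concentration in m^2/m^3."""
    ln2 = math.log(gsd) ** 2
    # Hatch-Choate: mass per particle = rho * (pi/6) * CMD^3 * exp(4.5 * ln^2 GSD)
    cmd = (6.0 * mass_conc / (math.pi * density * number_conc *
                              math.exp(4.5 * ln2))) ** (1.0 / 3.0)
    # Surface area: pi * N * CMD^2 * exp(2 * ln^2 GSD)
    return math.pi * number_conc * cmd ** 2 * math.exp(2.0 * ln2)

# Hypothetical silver aerosol (bulk rho ~ 10,490 kg/m^3), assumed GSD 1.6.
s_est = surface_area_estimate(number_conc=1.0e11, mass_conc=2.0e-8,
                              gsd=1.6, density=10490.0)
```

Substituting an effective agglomerate density for the bulk density in `density` is exactly the adjustment the study finds improves accuracy.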

  4. Investigation of Aerosol Surface Area Estimation from Number and Mass Concentration Measurements: Particle Density Effect

    PubMed Central

    Ku, Bon Ki; Evans, Douglas E.

    2015-01-01

    For nanoparticles with nonspherical morphologies, e.g., open agglomerates or fibrous particles, it is expected that the actual density of agglomerates may differ significantly from the bulk material density. It is further expected that using the material density may upset the relationship between surface area and mass when a method for estimating aerosol surface area from number and mass concentrations (referred to as “Maynard’s estimation method”) is used. It is therefore necessary to quantify how strongly Maynard’s estimation method depends on particle morphology and density. In this study, aerosol surface area estimated from number and mass concentration measurements was evaluated and compared with values from two reference methods: a method proposed by Lall and Friedlander for agglomerates and a mobility-based method for compact nonspherical particles, using well-defined polydisperse aerosols with known particle densities. Polydisperse silver aerosol particles were generated by an aerosol generation facility. Generated aerosols had a range of morphologies, count median diameters (CMD) between 25 and 50 nm, and geometric standard deviations (GSD) between 1.5 and 1.8. The surface area estimates from number and mass concentration measurements correlated well with the two reference values when gravimetric mass was used. The aerosol surface area estimates from Maynard’s estimation method were comparable to the reference method for all particle morphologies, within surface area ratios of 3.31 and 0.19 for assumed GSDs of 1.5 and 1.8, respectively, when the bulk material density of silver was used. The difference between Maynard’s estimation method and the surface area measured by the reference method for fractal-like agglomerates decreased from 79% to 23% when the measured effective particle density was used, while the difference for nearly spherical particles decreased from 30% to 24%. The results indicate that using the particle density of agglomerates improves the accuracy of Maynard’s estimation method and that an effective density should be taken into account, when known, when estimating the aerosol surface area of nonspherical aerosols such as open agglomerates and fibrous particles. PMID:26526560

  5. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    PubMed

    Kidney, Darren; Rawson, Benjamin M; Borchers, David L; Stevenson, Ben C; Marques, Tiago A; Thomas, Len

    2016-01-01

    Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. 
We anticipate that the low-tech field requirements will make this method an attractive option in many situations where populations can be surveyed acoustically by humans.

  6. Task-oriented comparison of power spectral density estimation methods for quantifying acoustic attenuation in diagnostic ultrasound using a reference phantom method.

    PubMed

    Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A

    2013-07-01

    Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those PSD values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0·f^β, was estimated using a reference phantom method. The PSD was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter-estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter-estimation regions, as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with those using the other techniques. The results provide guidance for choosing methods for estimating the PSD in quantitative ultrasound methods.
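Welch's periodogram, one of the three PSD estimators compared above, averages windowed periodograms over overlapping segments to trade frequency resolution for variance reduction. A minimal numpy-only sketch (in practice `scipy.signal.welch` is the standard implementation):

```python
import numpy as np

# Toy version of Welch's method: average Hann-windowed periodograms
# over 50%-overlapping segments. Minimal sketch for illustration;
# use scipy.signal.welch in real work.

def welch_psd(x, fs, nperseg=256):
    """Return (frequencies, averaged-periodogram PSD estimate)."""
    win = np.hanning(nperseg)
    step = nperseg // 2                     # 50% overlap
    scale = fs * np.sum(win ** 2)           # window power normalization
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psds = [np.abs(np.fft.rfft(win * s)) ** 2 / scale for s in segs]
    return np.fft.rfftfreq(nperseg, d=1.0 / fs), np.mean(psds, axis=0)

# Sanity check: unit-variance white noise should give a roughly flat PSD.
rng = np.random.default_rng(0)
freqs, psd = welch_psd(rng.standard_normal(8192), fs=1000.0)
```

The variance reduction from averaging segments is what makes Welch-type estimates usable in small parameter-estimation regions, at the cost of coarser frequency resolution.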

  7. Comparing methods to estimate Reineke’s maximum size-density relationship species boundary line slope

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2010-01-01

    Maximum size-density relationships (MSDR) provide natural resource managers useful information about the relationship between tree density and average tree size. Obtaining a valid estimate of how maximum tree density changes as average tree size changes is necessary to accurately describe these relationships. This paper examines three methods to estimate the slope of...

  8. Comparison of methods for estimating density of forest songbirds from point counts

    Treesearch

    Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey

    2011-01-01

    New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...

  9. Trunk density profile estimates from dual X-ray absorptiometry.

    PubMed

    Wicke, Jason; Dumas, Geneviève A; Costigan, Patrick A

    2008-01-01

    Accurate body segment parameters are necessary to estimate joint loads when using biomechanical models. Geometric methods can provide individualized data for these models but the accuracy of the geometric methods depends on accurate segment density estimates. The trunk, which is important in many biomechanical models, has the largest variability in density along its length. Therefore, the objectives of this study were to: (1) develop a new method for modeling trunk density profiles based on dual X-ray absorptiometry (DXA) and (2) develop a trunk density function for college-aged females and males that can be used in geometric methods. To this end, the density profiles of 25 females and 24 males were determined by combining the measurements from a photogrammetric method and DXA readings. A discrete Fourier transformation was then used to develop the density functions for each sex. The individual density and average density profiles compare well with the literature. There were distinct differences between the profiles of two of participants (one female and one male), and the average for their sex. It is believed that the variations in these two participants' density profiles were a result of the amount and distribution of fat they possessed. Further studies are needed to support this possibility. The new density functions eliminate the uniform density assumption associated with some geometric models thus providing more accurate trunk segment parameter estimates. In turn, more accurate moments and forces can be estimated for the kinetic analyses of certain human movements.
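The paper's sex-specific density functions come from a discrete Fourier transformation of measured profiles; its exact formulation is not reproduced here. The generic idea, fitting a truncated Fourier series to a sampled density profile by least squares so it can be evaluated anywhere along the trunk, can be sketched with synthetic data:

```python
import numpy as np

# Hypothetical illustration: fit a truncated Fourier series to a sampled
# density profile by least squares, mimicking the idea of a smooth trunk
# density function of normalized position. Data below are synthetic.

def fourier_fit(pos, dens, n_harmonics=3):
    """pos: positions normalized to [0, 1); dens: density samples.
    Returns a callable density-profile function."""
    cols = [np.ones_like(pos)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * k * pos), np.sin(2 * np.pi * k * pos)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, dens, rcond=None)

    def density(p):
        out = np.full_like(np.asarray(p, float), coef[0])
        for k in range(1, n_harmonics + 1):
            out += (coef[2 * k - 1] * np.cos(2 * np.pi * k * p)
                    + coef[2 * k] * np.sin(2 * np.pi * k * p))
        return out

    return density

pos = np.linspace(0, 1, 50, endpoint=False)
dens = 1.05 + 0.08 * np.cos(2 * np.pi * pos)   # synthetic profile, g/cm^3
model = fourier_fit(pos, dens)
```

Such a fitted function replaces the uniform-density assumption in geometric trunk models with a smooth position-dependent density.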

  10. Ring profiler: a new method for estimating tree-ring density for improved estimates of carbon storage

    Treesearch

    David W. Vahey; C. Tim Scott; J.Y. Zhu; Kenneth E. Skog

    2012-01-01

    Methods for estimating present and future carbon storage in trees and forests rely on measurements or estimates of tree volume or volume growth multiplied by specific gravity. Wood density can vary by tree ring and height in a tree. If data on density by tree ring could be obtained and linked to tree size and stand characteristics, it would be possible to more...

  11. Double sampling to estimate density and population trends in birds

    USGS Publications Warehouse

    Bart, Jonathan; Earnst, Susan L.

    2002-01-01

    We present a method for estimating density of nesting birds based on double sampling. The approach involves surveying a large sample of plots using a rapid method such as uncorrected point counts, variable circular plot counts, or the recently suggested double-observer method. A subsample of those plots is also surveyed using intensive methods to determine actual density. The ratio of the mean count on those plots (using the rapid method) to the mean actual density (as determined by the intensive searches) is used to adjust results from the rapid method. The approach works well when results from the rapid method are highly correlated with actual density. We illustrate the method with three years of shorebird surveys from the tundra in northern Alaska. In the rapid method, surveyors covered ~10 ha/h and surveyed each plot a single time. The intensive surveys involved three thorough searches, required ~3 h/ha, and took 20% of the study effort. Surveyors using the rapid method detected an average of 79% of birds present. That detection ratio was used to convert the index obtained in the rapid method into an essentially unbiased estimate of density. Trends estimated from several years of data would also be essentially unbiased. Other advantages of double sampling are that (1) the rapid method can be changed as new methods become available, (2) domains can be compared even if detection rates differ, (3) total population size can be estimated, and (4) valuable ancillary information (e.g. nest success) can be obtained on intensive plots with little additional effort. We suggest that double sampling be used to test the assumption that rapid methods, such as variable circular plot and double-observer methods, yield density estimates that are essentially unbiased. The feasibility of implementing double sampling in a range of habitats needs to be evaluated.
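The adjustment described above reduces to simple arithmetic: a detection ratio estimated on the intensively searched subsample rescales the rapid-survey index. A minimal sketch with hypothetical plot data:

```python
# Minimal sketch of the double-sampling adjustment, with hypothetical
# numbers: rapid counts on a subsample of plots are compared with true
# densities from intensive searches of the same plots, and the resulting
# detection ratio corrects the index from the full rapid survey.

rapid_counts_subsample = [12, 9, 15, 11]    # rapid-method counts, birds/plot
true_density_subsample = [15, 12, 19, 14]   # intensive-search densities, birds/plot

detection_ratio = (sum(rapid_counts_subsample)
                   / sum(true_density_subsample))     # fraction of birds detected

mean_rapid_count_all_plots = 10.4           # index from the full rapid survey
adjusted_density = mean_rapid_count_all_plots / detection_ratio
```

In the Alaska study the analogous ratio was 79%; here the hypothetical subsample gives a ratio of about 0.78, so an index of 10.4 birds/plot adjusts to roughly 13.3 birds/plot.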

  12. Estimation of Wheat Plant Density at Early Stages Using High Resolution Imagery

    PubMed Central

    Liu, Shouyang; Baret, Fred; Andrieu, Bruno; Burger, Philippe; Hemmerlé, Matthieu

    2017-01-01

    Crop density is a key agronomical trait used to manage wheat crops and estimate yield. Visual counting of plants in the field is currently the most common method used. However, it is tedious and time consuming. The main objective of this work is to develop a machine-vision-based method to automate the density survey of wheat at early stages. RGB images taken with a high resolution RGB camera are classified to identify the green pixels corresponding to the plants. Crop rows are extracted and the connected components (objects) are identified. A neural network is then trained to estimate the number of plants in the objects using the object features. The method was evaluated over three experiments showing contrasted conditions with sowing densities ranging from 100 to 600 seeds·m⁻². Results demonstrate that the density is accurately estimated with an average relative error of 12%. The pipeline developed here provides an efficient and accurate estimate of wheat plant density at early stages. PMID:28559901
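The first step of the pipeline above is classifying green plant pixels in RGB images. One common vegetation-segmentation approach (a generic sketch, not necessarily the classifier the authors used) is thresholding the excess-green index on chromaticity-normalized channels:

```python
import numpy as np

# Generic vegetation segmentation via the excess-green index
# ExG = 2g - r - b on chromaticity-normalized channels. Illustrative
# sketch only; the paper's own pixel classifier may differ.

def green_mask(rgb, threshold=0.1):
    """rgb: HxWx3 float array in [0, 1]. Returns a boolean plant mask."""
    total = rgb.sum(axis=2) + 1e-9                  # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2.0 * g - r - b
    return exg > threshold

# Tiny synthetic image: one green (plant) pixel, one soil-colored pixel.
img = np.array([[[0.1, 0.6, 0.1], [0.5, 0.4, 0.3]]])
mask = green_mask(img)
```

Connected components of such a mask within extracted crop rows would then form the "objects" whose features feed the plant-count regressor.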

  13. The "Tracked Roaming Transect" and distance sampling methods increase the efficiency of underwater visual censuses.

    PubMed

    Irigoyen, Alejo J; Rojo, Irene; Calò, Antonio; Trobbiani, Gastón; Sánchez-Carnero, Noela; García-Charton, José A

    2018-01-01

    Underwater visual census (UVC) is the most common approach for estimating diversity, abundance and size of reef fishes in shallow and clear waters. Abundance estimation through UVC is particularly problematic in species occurring at low densities and/or highly aggregated because of their high variability at both spatial and temporal scales. The statistical power of experiments involving UVC techniques may be increased by augmenting the number of replicates or the area surveyed. In this work we present and test the efficiency of a UVC method based on diver-towed GPS, the Tracked Roaming Transect (TRT), designed to maximize transect length (and thus the surveyed area) with respect to diving time invested in monitoring, as compared to Conventional Strip Transects (CST). Additionally, we analyze the effect of increasing transect width and length on the precision of density estimates by comparing TRT vs. CST methods using different fixed widths of 6 and 20 m (FW3 and FW10, respectively) and the Distance Sampling (DS) method, in which perpendicular distance of each fish or group of fishes to the transect line is estimated by divers up to 20 m from the transect line. The TRT was 74% more time- and cost-efficient than the CST (all transect widths considered together) and, for a given time, the use of TRT and/or increasing the transect width increased the precision of density estimates. In addition, since with the DS method distances of fishes to the transect line have to be estimated, and not measured directly as in terrestrial environments, errors in estimations of perpendicular distances can seriously affect DS density estimations. To assess the occurrence of distance estimation errors and their dependence on the observer's experience, a field experiment using wooden fish models was performed. We tested the precision and accuracy of density estimators based on fixed widths and the DS method. 
The accuracy of the estimates was measured comparing the actual total abundance with those estimated by divers using FW3, FW10, and DS estimators. Density estimates differed by 13% (range 0.1-31%) from the actual values (average = 13.09%; median = 14.16%). Based on our results we encourage the use of the Tracked Roaming Transect with Distance Sampling (TRT+DS) method for improving density estimates of species occurring at low densities and/or highly aggregated, as well as for exploratory rapid-assessment surveys in which divers could gather spatial ecological and ecosystem information on large areas during UVC.
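Conventional line-transect distance sampling, the DS method compared above, fits a detection function to the perpendicular distances and divides the count by the effective area surveyed. A minimal sketch with the half-normal detection function, for which the maximum-likelihood fit has a closed form; distances below are hypothetical:

```python
import math

# Minimal line-transect distance-sampling sketch with a half-normal
# detection function g(x) = exp(-x^2 / (2 sigma^2)). For this model the
# MLE is sigma^2 = mean(x^2), the effective strip half-width is
# mu = sigma * sqrt(pi/2), and density is D = n / (2 * L * mu).
# Distances are hypothetical.

def halfnormal_density(perp_distances_m, transect_length_m):
    n = len(perp_distances_m)
    sigma2 = sum(x * x for x in perp_distances_m) / n
    mu = math.sqrt(sigma2) * math.sqrt(math.pi / 2.0)  # effective half-width, m
    return n / (2.0 * transect_length_m * mu)          # fish per m^2

dists = [1.0, 2.5, 0.5, 4.0, 3.0, 1.5]   # metres from the transect line
d = halfnormal_density(dists, transect_length_m=500.0)
```

Because the fitted density depends on the recorded distances, the perpendicular-distance estimation errors the authors measured with wooden fish models propagate directly into D.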

  14. The "Tracked Roaming Transect" and distance sampling methods increase the efficiency of underwater visual censuses

    PubMed Central

    2018-01-01

    Underwater visual census (UVC) is the most common approach for estimating diversity, abundance and size of reef fishes in shallow and clear waters. Abundance estimation through UVC is particularly problematic in species occurring at low densities and/or highly aggregated because of their high variability at both spatial and temporal scales. The statistical power of experiments involving UVC techniques may be increased by augmenting the number of replicates or the area surveyed. In this work we present and test the efficiency of a UVC method based on diver-towed GPS, the Tracked Roaming Transect (TRT), designed to maximize transect length (and thus the surveyed area) with respect to diving time invested in monitoring, as compared to Conventional Strip Transects (CST). Additionally, we analyze the effect of increasing transect width and length on the precision of density estimates by comparing TRT vs. CST methods using different fixed widths of 6 and 20 m (FW3 and FW10, respectively) and the Distance Sampling (DS) method, in which perpendicular distance of each fish or group of fishes to the transect line is estimated by divers up to 20 m from the transect line. The TRT was 74% more time- and cost-efficient than the CST (all transect widths considered together) and, for a given time, the use of TRT and/or increasing the transect width increased the precision of density estimates. In addition, since with the DS method distances of fishes to the transect line have to be estimated, and not measured directly as in terrestrial environments, errors in estimations of perpendicular distances can seriously affect DS density estimations. To assess the occurrence of distance estimation errors and their dependence on the observer’s experience, a field experiment using wooden fish models was performed. We tested the precision and accuracy of density estimators based on fixed widths and the DS method. 
The accuracy of the estimates was measured comparing the actual total abundance with those estimated by divers using FW3, FW10, and DS estimators. Density estimates differed by 13% (range 0.1–31%) from the actual values (average = 13.09%; median = 14.16%). Based on our results we encourage the use of the Tracked Roaming Transect with Distance Sampling (TRT+DS) method for improving density estimates of species occurring at low densities and/or highly aggregated, as well as for exploratory rapid-assessment surveys in which divers could gather spatial ecological and ecosystem information on large areas during UVC. PMID:29324887

  15. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE PAGES

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...

    2017-08-25

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  16. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  17. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    PubMed

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
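The "traditional" calibration the paper critiques can be sketched in a few lines: fit a straight line to assay readout versus log10 density of known standards, then invert the line for unknown samples. The standards and readouts below are hypothetical; the paper's Bayesian mixed-model alternative instead pools information across replicate assays and lets precision vary with density.

```python
import numpy as np

# Sketch of traditional statistical calibration for a quantitative
# molecular method: linear fit of assay signal (e.g. cycle threshold)
# against log10 standard density, inverted for unknowns. Hypothetical data.

log10_density_std = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # standards (per uL)
assay_signal = np.array([33.1, 29.8, 26.2, 23.0, 19.7])   # measured readouts

slope, intercept = np.polyfit(log10_density_std, assay_signal, 1)

def estimate_density(signal):
    """Invert the calibration line to get pathogen density (per uL)."""
    return float(10 ** ((signal - intercept) / slope))

est = estimate_density(26.2)   # ~1000 per uL for this synthetic curve
```

A single fitted line like this cannot express inter-assay variability, which is precisely the limitation the Bayesian mixed-model calibration addresses.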

  18. A unified framework for constructing, tuning and assessing photometric redshift density estimates in a selection bias setting

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Izbicki, R.; Lee, A. B.

    2017-07-01

    Photometric redshift estimation is an indispensable tool of precision cosmology. One problem that plagues the use of this tool in the era of large-scale sky surveys is that the bright galaxies that are selected for spectroscopic observation do not have properties that match those of (far more numerous) dimmer galaxies; thus, ill-designed empirical methods that produce accurate and precise redshift estimates for the former generally will not produce good estimates for the latter. In this paper, we provide a principled framework for generating conditional density estimates (i.e. photometric redshift PDFs) that takes into account selection bias and the covariate shift that this bias induces. We base our approach on the assumption that the probability that astronomers label a galaxy (i.e. determine its spectroscopic redshift) depends only on its measured (photometric and perhaps other) properties x and not on its true redshift. With this assumption, we can explicitly write down risk functions that allow us to both tune and compare methods for estimating importance weights (i.e. the ratio of densities of unlabelled and labelled galaxies for different values of x) and conditional densities. We also provide a method for combining multiple conditional density estimates for the same galaxy into a single estimate with better properties. We apply our risk functions to an analysis of ≈10^6 galaxies, mostly observed by the Sloan Digital Sky Survey, and demonstrate through multiple diagnostic tests that our method achieves good conditional density estimates for the unlabelled galaxies.
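The importance weights at the heart of this framework are the density ratio w(x) = f_unlabelled(x) / f_labelled(x). A deliberately crude toy version, using 1D histogram density estimates over a synthetic "magnitude" covariate, illustrates why faint (unlabelled-like) objects get up-weighted; the paper tunes far better weight estimators with its risk functions.

```python
import numpy as np

# Toy importance weights w(x) = f_unlabelled(x) / f_labelled(x) from 1D
# histogram density estimates. The "magnitude" samples are synthetic and
# this estimator is far cruder than those the paper tunes and compares.

rng = np.random.default_rng(1)
labelled = rng.normal(19.0, 1.0, 5000)      # bright, spectroscopically labelled
unlabelled = rng.normal(21.0, 1.5, 5000)    # fainter photometric sample

bins = np.linspace(15, 27, 40)
f_lab, _ = np.histogram(labelled, bins=bins, density=True)
f_unl, _ = np.histogram(unlabelled, bins=bins, density=True)

def weight(x):
    """Histogram-ratio importance weight at covariate value x."""
    i = np.clip(np.digitize(x, bins) - 1, 0, len(f_lab) - 1)
    return float(f_unl[i] / np.maximum(f_lab[i], 1e-12))

w_bright, w_faint = weight(19.0), weight(22.0)
```

Reweighting labelled training galaxies by such weights makes empirical redshift estimators representative of the unlabelled population despite the selection bias.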

  19. Robust Estimation of Electron Density From Anatomic Magnetic Resonance Imaging of the Brain Using a Unifying Multi-Atlas Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Shangjie (Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California); Hara, Wendy

    Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
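The Bayesian fusion step above combines, per voxel, an intensity-based and a location-based conditional probability over electron density. A simplified sketch over discretized Hounsfield-unit values, assuming conditional independence of the two cues and a flat prior on density (both simplifying assumptions for illustration):

```python
import numpy as np

# Sketch of the per-voxel Bayesian fusion idea: multiply an
# intensity-based and an atlas-location-based conditional probability
# vector over discretized HU values and renormalize. Assumes conditional
# independence and a flat prior; the probability values are illustrative.

hu_bins = np.array([-1000.0, -100.0, 0.0, 60.0, 700.0])   # air .. bone

p_given_intensity = np.array([0.05, 0.10, 0.40, 0.40, 0.05])
p_given_location = np.array([0.01, 0.04, 0.15, 0.30, 0.50])

posterior = p_given_intensity * p_given_location
posterior /= posterior.sum()

# Posterior-mean HU as the voxel's electron-density surrogate.
hu_estimate = float(np.dot(posterior, hu_bins))
```

Here the ambiguous intensity cue (soft tissue vs. bone) is resolved by the atlas location term, pulling the posterior toward the denser bins, which is the intended effect of combining the two conditionals.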

  20. Workshop on the Detection, Classification, Localization and Density Estimation of Marine Mammals Using Passive Acoustics - 2015

    DTIC Science & Technology

    2015-09-30

    LONG-TERM GOALS: The goal of this project was to bring together the community of researchers working on marine mammal acoustics to discuss methods for the detection, classification, localization and density estimation of marine mammals using passive acoustics. (John A. Hildebrand, Scripps Institution of Oceanography, UCSD, La Jolla.)

  1. Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.

    PubMed

    Joshi, Niranjan; Kadir, Timor; Brady, Michael

    2011-08-01

    Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.

  2. Breast density estimation from high spectral and spatial resolution MRI

    PubMed Central

    Li, Hui; Weiss, William A.; Medved, Milica; Abe, Hiroyuki; Newstead, Gillian M.; Karczmar, Gregory S.; Giger, Maryellen L.

    2016-01-01

    Abstract. A three-dimensional breast density estimation method is presented for high spectral and spatial resolution (HiSS) MR imaging. Twenty-two patients were recruited (under an Institutional Review Board-approved, Health Insurance Portability and Accountability Act-compliant protocol) for high-risk breast cancer screening. Each patient received standard-of-care clinical digital x-ray mammograms and MR scans, as well as HiSS scans. The algorithm for breast density estimation includes breast mask generation, breast skin removal, and breast percentage density calculation. The inter- and intra-user variabilities of the HiSS-based density estimation were determined using correlation analysis and limits of agreement. Correlation analysis was also performed between the HiSS-based density estimation and radiologists’ Breast Imaging-Reporting and Data System (BI-RADS) density ratings. A correlation coefficient of 0.91 (p<0.0001) was obtained between left and right breast density estimations. An intraclass correlation coefficient of 0.99 (p<0.0001) indicated high reliability for the inter-user variability of the HiSS-based breast density estimations. A moderate correlation coefficient of 0.55 (p=0.0076) was observed between HiSS-based breast density estimations and radiologists’ BI-RADS density ratings. In summary, an objective density estimation method using HiSS spectral data from breast MRI was developed. The high reproducibility with low inter- and intra-user variabilities shown in this preliminary study suggests that such a HiSS-based density metric may be beneficial in programs requiring breast density, such as breast cancer risk assessment and monitoring the effects of therapy. PMID:28042590

  3. Evaluating analytical approaches for estimating pelagic fish biomass using simulated fish communities

    USGS Publications Warehouse

    Yule, Daniel L.; Adams, Jean V.; Warner, David M.; Hrabik, Thomas R.; Kocovsky, Patrick M.; Weidel, Brian C.; Rudstam, Lars G.; Sullivan, Patrick J.

    2013-01-01

    Pelagic fish assessments often combine large amounts of acoustic-based fish density data and limited midwater trawl information to estimate species-specific biomass density. We compared the accuracy of five apportionment methods for estimating pelagic fish biomass density using simulated communities with known fish numbers that mimic Lakes Superior, Michigan, and Ontario, representing a range of fish community complexities. Across all apportionment methods, the error in the estimated biomass generally declined with increasing effort, but methods that accounted for community composition changes with water column depth performed best. Correlations between trawl catch and the true species composition were highest when more fish were caught, highlighting the benefits of targeted trawling in locations of high fish density. Pelagic fish surveys should incorporate geographic and water column depth stratification in the survey design, use apportionment methods that account for species-specific depth differences, target midwater trawling effort in areas of high fish density, and include at least 15 midwater trawls. With relatively basic biological information, simulations of fish communities and sampling programs can optimize effort allocation and reduce error in biomass estimates.

  4. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
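The Monte Carlo step described above can be sketched as follows: draw source levels from an assumed distribution, apply a transmission-loss model, and pass the resulting SNR through a detector characterization. The spreading law, logistic detector curve, and all parameter values below are illustrative assumptions, not the study's inputs.

```python
import math
import random

random.seed(42)

def transmission_loss(r_m):
    # Spherical spreading plus a small absorption term (assumed model).
    return 20.0 * math.log10(r_m) + 0.03 * r_m / 1000.0

def p_detect_given_snr(snr_db):
    # Logistic detector characterization centered at 10 dB (assumed shape).
    return 1.0 / (1.0 + math.exp(-(snr_db - 10.0)))

def mc_detection_probability(r_m, noise_db=70.0, trials=10000):
    # Average detection probability over random source levels.
    hits = 0.0
    for _ in range(trials):
        sl = random.gauss(150.0, 10.0)  # source level, dB (hypothetical)
        snr = sl - transmission_loss(r_m) - noise_db
        hits += p_detect_given_snr(snr)
    return hits / trials

p_near = mc_detection_probability(1000.0)
p_far = mc_detection_probability(20000.0)
print(p_near > p_far)  # detectability declines with range
```

In the actual method, the average detection probability as a function of range feeds into the density estimator alongside call rate and false positive rate.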

  5. Large Scale Density Estimation of Blue and Fin Whales (LSD)

    DTIC Science & Technology

    2015-09-30

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. The goal of this research is to develop and implement a new method for estimating blue and fin whale density that is effective over large spatial scales: a density estimation methodology for quantifying blue and fin whale abundance from passive acoustic data recorded on sparse sensor arrays.

  6. A method for the estimation of the significance of cross-correlations in unevenly sampled red-noise time series

    NASA Astrophysics Data System (ADS)

    Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.

    2014-11-01

    We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
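The central demonstration above, that unrelated red-noise light curves frequently show high sample cross-correlations, can be sketched cheaply. The paper simulates light curves from power-law power spectral densities; as a simpler stand-in, this sketch uses AR(1) noise with strong autocorrelation, which is only an illustrative substitute for the paper's simulation machinery.

```python
import random

random.seed(7)

def ar1_series(n, phi=0.98):
    # Strongly autocorrelated "red" noise as a stand-in for a
    # steep power-law PSD light curve (illustrative assumption).
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, 1.0)
        out.append(x)
    return out

def correlation(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    da = sum((u - ma) ** 2 for u in a) ** 0.5
    db = sum((v - mb) ** 2 for v in b) ** 0.5
    return num / (da * db)

# Fraction of independent red-noise pairs whose |correlation| > 0.5:
high = sum(abs(correlation(ar1_series(200), ar1_series(200))) > 0.5
           for _ in range(200)) / 200
print(high)  # a substantial fraction, despite the series being unrelated
```

The strong autocorrelation shrinks the effective sample size, which is exactly why high cross-correlations between unrelated red-noise series must not be read as evidence of a physical connection.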

  7. Estimation of tiger densities in India using photographic captures and recaptures

    USGS Publications Warehouse

    Karanth, U.; Nichols, J.D.

    1998-01-01

    Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns, with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology for estimating abundances, survival rates and other population parameters in tigers and other low-density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75-1.00. The estimated mean tiger densities ranged from 4.1 (SE = 1.31) to 11.7 (SE = 1.93) tigers/100 km². The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
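The abundance-to-density step above can be illustrated with the simplest closed-population capture-recapture estimator, Chapman's modification of the Lincoln-Petersen index. The study fits richer closed-population models; this two-session sketch with hypothetical numbers only shows the mechanics.

```python
# Chapman's bias-corrected two-session capture-recapture estimator,
# used here only to illustrate the abundance-to-density conversion.

def chapman_estimate(n1, n2, m2):
    """n1: animals photo-captured in session 1; n2: in session 2;
    m2: session-2 animals already identified in session 1."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

n_hat = chapman_estimate(n1=10, n2=12, m2=8)   # -> ~14.9 tigers
effective_area_km2 = 200.0                      # assumed trapping area
density_per_100km2 = 100.0 * n_hat / effective_area_km2
print(round(density_per_100km2, 1))  # -> 7.4 tigers/100 km^2
```

In practice the effective trapping area itself must be estimated (e.g. by buffering the camera-trap array), which is one reason spatially explicit methods were later developed.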

  8. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.

  9. Characterization of a maximum-likelihood nonparametric density estimator of kernel type

    NASA Technical Reports Server (NTRS)

    Geman, S.; Mcclure, D. E.

    1982-01-01

    Kernel-type density estimators are calculated by the method of sieves. Proofs are presented for the characterization theorem: let x(1), x(2), ..., x(n) be a random sample from a population with density f(0), let sigma > 0, and consider estimators f of f(0) defined by equation (1) of the paper.

  10. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
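The fit-and-select workflow above can be sketched with two candidate probability density functions fitted by maximum likelihood and compared by an information criterion. AIC is used here as an example selection criterion (the paper describes quantitative selection among candidates without this sketch committing it to AIC), and the depth values are hypothetical.

```python
import math

# Hypothetical depth-use observations (metres), right-skewed.
depths = [0.4, 0.5, 0.6, 0.6, 0.7, 0.8, 0.9, 1.1, 1.4, 2.0]

def normal_aic(x):
    # Closed-form Gaussian MLE log-likelihood; k = 2 parameters.
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    ll = -0.5 * n * (math.log(2 * math.pi * var) + 1)
    return 2 * 2 - 2 * ll

def lognormal_aic(x):
    # Lognormal MLE: fit a Gaussian to the logs, adjust by the Jacobian.
    logs = [math.log(v) for v in x]
    n = len(x)
    mu = sum(logs) / n
    var = sum((v - mu) ** 2 for v in logs) / n
    ll = -0.5 * n * (math.log(2 * math.pi * var) + 1) - sum(logs)
    return 2 * 2 - 2 * ll

best = min([("normal", normal_aic(depths)),
            ("lognormal", lognormal_aic(depths))], key=lambda t: t[1])
print(best[0])  # the right-skewed data favour the lognormal here
```

Estimation uncertainty in the fitted parameters (the paper's third benefit) would then propagate directly into confidence bands on the HSC curve.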

  11. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    NASA Astrophysics Data System (ADS)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly on a functional linked to the second-order derivative of the pdf. As we introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. These two kinds of algorithms are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to better characterise the neutrality of Tunisian Berber populations.
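The plug-in idea can be illustrated with its simplest analytical instance, the normal reference rule h = 1.06·σ·n^(-1/5), which replaces the unknown second-derivative functional with its value under a Gaussian reference density. This is only the textbook baseline, not the paper's faster procedure, and the sample values are hypothetical.

```python
import math
import statistics

def normal_reference_bandwidth(x):
    # Normal rule-of-thumb plug-in bandwidth.
    n = len(x)
    sigma = statistics.stdev(x)
    return 1.06 * sigma * n ** (-0.2)

sample = [1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7, 1.0]
h = normal_reference_bandwidth(sample)

def kde(t, x, h):
    # Gaussian kernel density estimate at point t.
    return sum(math.exp(-0.5 * ((t - xi) / h) ** 2)
               for xi in x) / (len(x) * h * math.sqrt(2 * math.pi))

print(round(kde(1.0, sample, h), 3))  # density estimate near the data centre
```

More refined plug-ins, like the one in the record above, iterate on better approximations of the second-derivative functional instead of assuming a Gaussian reference.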

  12. An analytical framework for estimating aquatic species density from environmental DNA

    USGS Publications Warehouse

    Chambert, Thierry; Pilliod, David S.; Goldberg, Caren S.; Doi, Hideyuki; Takahara, Teruhiko

    2018-01-01

    Environmental DNA (eDNA) analysis of water samples is on the brink of becoming a standard monitoring method for aquatic species. This method has improved detection rates over conventional survey methods and thus has demonstrated effectiveness for estimation of site occupancy and species distribution. The frontier of eDNA applications, however, is to infer species density. Building upon previous studies, we present and assess a modeling approach that aims at inferring animal density from eDNA. The modeling combines eDNA and animal count data from a subset of sites to estimate species density (and associated uncertainties) at other sites where only eDNA data are available. As a proof of concept, we first perform a cross-validation study using experimental data on carp in mesocosms. In these data, fish densities are known without error, which allows us to test the performance of the method with known data. We then evaluate the model using field data from a study on a stream salamander species to assess the potential of this method to work in natural settings, where density can never be known with absolute certainty. Two alternative distributions (Normal and Negative Binomial) to model variability in eDNA concentration data are assessed. Assessment based on the proof of concept data (carp) revealed that the Negative Binomial model provided much more accurate estimates than the model based on a Normal distribution, likely because eDNA data tend to be overdispersed. Greater imprecision was found when we applied the method to the field data, but the Negative Binomial model still provided useful density estimates. We call for further model development in this direction, as well as further research targeted at sampling design optimization. It will be important to assess these approaches on a broad range of study systems.
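The calibration idea above can be sketched as follows: at sites with both counts and eDNA, fit a relation between eDNA concentration and density, then invert it at eDNA-only sites. The paper uses a hierarchical model with a Negative Binomial observation distribution; ordinary least squares is used here purely as an illustration, with hypothetical data.

```python
# (eDNA copies/L, animal count) at hypothetical calibration sites.
paired = [(2.0, 10.0), (4.1, 20.0), (6.2, 30.0), (7.9, 40.0)]

def fit_line(pairs):
    # Ordinary least squares for a single predictor.
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, y in pairs)
    b = sxy / sxx
    return my - b * mx, b                      # intercept, slope

a, b = fit_line(paired)

def predict_density(edna):
    # Invert the calibration at a site with eDNA data only.
    return a + b * edna

print(round(predict_density(5.0), 1))  # predicted density at an eDNA-only site
```

The paper's finding that eDNA concentrations are overdispersed is precisely why a Negative Binomial error model outperformed the Normal one; a least-squares fit like this would understate the prediction uncertainty.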

  13. A method of estimating log weights.

    Treesearch

    Charles N. Mann; Hilton H. Lysons

    1972-01-01

    This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...
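The arithmetic behind the method above is simple: weighed and measured truckloads give a local density index (pounds per cubic foot), which is then applied to the measured volume of an unyarded log. The figures and the use of Smalian's volume formula below are illustrative assumptions, not values from the paper.

```python
import math

def density_index(load_weight_lb, load_volume_ft3):
    # Local density index from a weighed, measured truckload.
    return load_weight_lb / load_volume_ft3

def smalian_volume_ft3(d_small_ft, d_large_ft, length_ft):
    # Smalian's formula: average of the two end cross-sections times length.
    area = lambda d: math.pi * (d / 2.0) ** 2
    return 0.5 * (area(d_small_ft) + area(d_large_ft)) * length_ft

idx = density_index(80000.0, 2000.0)            # 40 lb/ft^3
vol = smalian_volume_ft3(1.0, 1.5, 32.0)        # one log, ~40.8 ft^3
print(round(idx * vol))                          # estimated log weight, lb
```

Summing such per-log estimates lets the operator predict payloads before yarding and avoid overloading equipment.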

  14. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
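For contrast with the nonparametric estimator above, the classical parametric counterpart is worth sketching. Under complete spatial randomness, squared point-to-nearest-plant distances are exponential with mean 1/(πD), giving the bias-corrected estimator D̂ = (n − 1)/(π·Σ rᵢ²). This Poisson-based estimator is not the paper's method, and the distances are hypothetical.

```python
import math

def poisson_density_estimate(distances):
    # Bias-corrected MLE of plant density under a Poisson (random)
    # spatial pattern, from point-to-nearest-plant distances.
    n = len(distances)
    return (n - 1) / (math.pi * sum(r * r for r in distances))

r = [0.8, 1.2, 0.5, 1.0, 0.9, 1.4, 0.7, 1.1]   # metres, n = 8
print(round(poisson_density_estimate(r), 3))    # plants per m^2
```

The paper's contribution is precisely to drop the Poisson assumption, since this parametric estimator is biased for the regular and aggregated populations examined in the simulations.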

  15. Monte Carlo Approach for Estimating Density and Atomic Number From Dual-Energy Computed Tomography Images of Carbonate Rocks

    NASA Astrophysics Data System (ADS)

    Victor, Rodolfo A.; Prodanović, Maša.; Torres-Verdín, Carlos

    2017-12-01

    We develop a new Monte Carlo-based inversion method for estimating electron density and effective atomic number from 3-D dual-energy computed tomography (CT) core scans. The method accounts for uncertainties in X-ray attenuation coefficients resulting from the polychromatic nature of X-ray beam sources of medical and industrial scanners, in addition to delivering uncertainty estimates of inversion products. Estimation of electron density and effective atomic number from CT core scans enables direct deterministic or statistical correlations with salient rock properties for improved petrophysical evaluation; this condition is specifically important in media such as vuggy carbonates where CT resolution better captures core heterogeneity that dominates fluid flow properties. Verification tests of the inversion method performed on a set of highly heterogeneous carbonate cores yield very good agreement with in situ borehole measurements of density and photoelectric factor.

  16. Method for Estimating the Charge Density Distribution on a Dielectric Surface.

    PubMed

    Nakashima, Takuya; Suhara, Hiroyuki; Murata, Hidekazu; Shimoyama, Hiroshi

    2017-06-01

    High-quality color output from digital photocopiers and laser printers is in strong demand, motivating attempts to achieve fine dot reproducibility and stability. The resolution of a digital photocopier depends on the charge density distribution on the organic photoconductor surface; however, directly measuring the charge density distribution is impossible. In this study, we propose a new electron optical instrument that can rapidly measure the electrostatic latent image on an organic photoconductor surface, which is a dielectric surface, as well as a novel method to quantitatively estimate the charge density distribution on a dielectric surface by combining experimental data obtained from the apparatus with a computer simulation. In the computer simulation, an improved three-dimensional boundary charge density method (BCM) is used for electric field analysis in the vicinity of the dielectric material with a charge density distribution. This method enables us to estimate the profile and quantity of the charge density distribution on a dielectric surface with a resolution of the order of microns. Furthermore, the surface potential on the dielectric surface can be immediately calculated from the obtained charge density. This method enables the relation between the charge pattern on the organic photoconductor surface and toner particle behavior to be studied; such an understanding may lead to the development of a new generation of higher-resolution photocopiers.

  17. Methods for estimating population density in data-limited areas: evaluating regression and tree-based models in Peru.

    PubMed

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies.

  18. Methods for Estimating Population Density in Data-Limited Areas: Evaluating Regression and Tree-Based Models in Peru

    PubMed Central

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657

  19. DS — Software for analyzing data collected using double sampling

    USGS Publications Warehouse

    Bart, Jonathan; Hartley, Dana

    2011-01-01

    DS analyzes count data to estimate density or relative density and population size when appropriate. The software is available at http://iwcbm.dev4.fsr.com/IWCBM/default.asp?PageID=126. The software was designed to analyze data collected using double sampling, but it also can be used to analyze index data. DS is not currently configured to apply distance methods or methods based on capture-recapture theory. Double sampling for the purpose of this report means surveying a sample of locations with a rapid method of unknown accuracy and surveying a subset of these locations using a more intensive method assumed to yield unbiased estimates. "Detection ratios" are calculated as the ratio of results from rapid surveys on intensive plots to the number actually present as determined from the intensive surveys. The detection ratios are used to adjust results from the rapid surveys. The formula for density is (results from rapid survey)/(estimated detection ratio from intensive surveys). Population sizes are estimated as (density)(area). Double sampling is well-established in the survey sampling literature—see Cochran (1977) for the basic theory, Smith (1995) for applications of double sampling in waterfowl surveys, Bart and Earnst (2002, 2005) for discussions of its use in wildlife studies, and Bart and others (in press) for a detailed account of how the method was used to survey shorebirds across the arctic region of North America. Indices are surveys that do not involve complete counts of well-defined plots or recording information to estimate detection rates (Thompson and others, 1998). In most cases, such data should not be used to estimate density or population size but, under some circumstances, may be used to compare two densities or estimate how density changes through time or across space (Williams and others, 2005). The Breeding Bird Survey (Sauer and others, 2008) provides a good example of an index survey. 
Surveyors record all birds detected but do not record any information, such as distance or whether each bird is recorded in subperiods, that could be used to estimate detection rates. Nonetheless, the data are widely used to estimate temporal trends and spatial patterns in abundance (Sauer and others, 2008). DS produces estimates of density (or relative density for indices) by species and stratum. Strata are usually defined using region and habitat but other variables may be used, and the entire study area may be classified as a single stratum. Population size in each stratum and for the entire study area also is estimated for each species. For indices, the estimated totals generally are only useful if (a) plots are surveyed so that densities can be calculated and extrapolated to the entire study area and (b) if the detection rates are close to 1.0. All estimates are accompanied by standard errors (SE) and coefficients of variation (CV, that is, SE/estimate).
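The double-sampling estimator stated above ("results from rapid survey / estimated detection ratio", times area) can be sketched directly. All counts and areas below are hypothetical.

```python
# Double-sampling adjustment: intensive surveys on a subset of plots
# yield a detection ratio that corrects the rapid counts.

def detection_ratio(rapid_on_intensive, true_on_intensive):
    # Ratio of rapid-survey counts to true counts on intensive plots.
    return sum(rapid_on_intensive) / sum(true_on_intensive)

def adjusted_density(rapid_count, plot_area, ratio):
    # (results from rapid survey) / (estimated detection ratio).
    return (rapid_count / plot_area) / ratio

ratio = detection_ratio([6, 8, 5], [10, 10, 10])   # ~0.63 detected
density = adjusted_density(rapid_count=19, plot_area=4.0, ratio=ratio)
population = density * 120.0                        # study area, km^2
print(round(population))  # -> 900
```

As the record notes, standard errors for such estimates must also account for the sampling variability in the detection ratio itself, which DS handles for the user.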

  20. Population density estimated from locations of individuals on a passive detector array

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.

    2009-01-01

    The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.

  1. Estimating animal population density using passive acoustics.

    PubMed

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-05-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. 
We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. © 2012 The Authors. Biological Reviews © 2012 Cambridge Philosophical Society.
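A cue-counting density estimator of the kind this framework covers can be sketched as: detected cues, corrected for false positives, scaled by the monitored area, detection probability, recording time, and cue (vocalization) rate. The functional form follows the general fixed-sensor approach described above, but every number and the circular-area assumption are hypothetical.

```python
import math

def cue_density(n_cues, false_pos, n_sensors, radius_km,
                p_detect, hours, cues_per_animal_hour):
    # Animals per km^2 from cue counts on fixed sensors.
    area = math.pi * radius_km ** 2          # monitored area per sensor
    return (n_cues * (1.0 - false_pos)) / (
        n_sensors * area * p_detect * hours * cues_per_animal_hour)

d = cue_density(n_cues=12000, false_pos=0.05, n_sensors=4,
                radius_km=5.0, p_detect=0.4, hours=240.0,
                cues_per_animal_hour=15.0)
print(round(d, 4))  # animals per km^2
```

The review's emphasis on vocalization-rate research is visible here: the cue rate sits in the denominator, so any bias in it propagates directly into the density estimate.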

  2. Estimating animal population density using passive acoustics

    PubMed Central

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-01-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. 
We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. PMID:23190144

  3. How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.

    PubMed

    Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J

    2014-09-01

    Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PsycINFO Database Record (c) 2014 APA, all rights reserved.
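Two of the rules of thumb compared above can be sketched directly. The following example (synthetic data, not the study's simulations) computes the normal rule-of-thumb and Silverman bandwidths on a positively skewed sample and scores each against the known density by integrated squared error:

```python
import numpy as np
from scipy.stats import gaussian_kde, norm
from scipy.integrate import trapezoid

# Illustrative comparison of two rule-of-thumb bandwidths on skewed data.
rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.5, size=200)  # positively skewed sample

n, sd = len(x), np.std(x, ddof=1)
iqr = np.subtract(*np.percentile(x, [75, 25]))

# Normal rule of thumb: h = 1.06 * sd * n^(-1/5)
h_normal = 1.06 * sd * n ** (-1 / 5)
# Silverman's rule: h = 0.9 * min(sd, IQR/1.34) * n^(-1/5)
h_silverman = 0.9 * min(sd, iqr / 1.34) * n ** (-1 / 5)

grid = np.linspace(0.01, 6, 500)
true_pdf = norm.pdf(np.log(grid), 0.0, 0.5) / grid  # lognormal density

for name, h in [("normal", h_normal), ("silverman", h_silverman)]:
    # gaussian_kde's scalar bw_method is a factor on the sample SD,
    # so pass h / sd to impose an absolute bandwidth h.
    kde = gaussian_kde(x, bw_method=h / sd)
    ise = trapezoid((kde(grid) - true_pdf) ** 2, grid)
    print(f"{name}: h = {h:.3f}, ISE = {ise:.4f}")
```

The cross-validation and adaptive selectors studied in the article require iterative optimization and are omitted here for brevity.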

  4. Evaluation of line transect sampling based on remotely sensed data from underwater video

    USGS Publications Warehouse

    Bergstedt, R.A.; Anderson, D.R.

    1990-01-01

    We used underwater video in conjunction with the line transect method and a Fourier series estimator to make 13 independent estimates of the density of known populations of bricks lying on the bottom in shallows of Lake Huron. The pooled estimate of density (95.5 bricks per hectare) was close to the true density (89.8 per hectare), and there was no evidence of bias. Confidence intervals for the individual estimates included the true density 85% of the time instead of the nominal 95%. Our results suggest that reliable estimates of the density of objects on a lake bed can be obtained by the use of remote sensing and line transect sampling theory.
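The study used a Fourier series estimator; as a simpler illustrative stand-in, the sketch below applies the same basic line-transect identity D = n·f(0)/(2L), fitting a half-normal detection function to simulated perpendicular distances (all values hypothetical, not the brick data):

```python
import numpy as np

# Basic line-transect density estimator with a half-normal detection
# function (an assumption; the study fitted a Fourier series instead).
rng = np.random.default_rng(1)

L = 5000.0          # total transect length (m), hypothetical
sigma_true = 10.0   # half-normal detection scale (m), hypothetical
# Simulated perpendicular detection distances.
d = np.abs(rng.normal(0.0, sigma_true, size=120))

# MLE of sigma for a half-normal: sigma^2 = mean(d^2)
sigma_hat = np.sqrt(np.mean(d ** 2))

# f(0) is the detection-distance pdf at zero distance:
# for a half-normal, f(0) = 1 / (sigma * sqrt(pi / 2))
f0 = 1.0 / (sigma_hat * np.sqrt(np.pi / 2))

n = len(d)
density = n * f0 / (2 * L)          # objects per square metre
print(f"estimated density: {density * 1e4:.2f} per hectare")
```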

  5. Breast Density Estimation with Fully Automated Volumetric Method: Comparison to Radiologists' Assessment by BI-RADS Categories.

    PubMed

    Singh, Tulika; Sharma, Madhurima; Singla, Veenu; Khandelwal, Niranjan

    2016-01-01

    The objective of our study was to calculate mammographic breast density with a fully automated volumetric breast density measurement method and to compare it to Breast Imaging Reporting and Data System (BI-RADS) breast density categories assigned by two radiologists. A total of 476 full-field digital mammography examinations with standard mediolateral oblique and craniocaudal views were evaluated by two blinded radiologists and BI-RADS density categories were assigned. Using fully automated software, mean fibroglandular tissue volume, mean breast volume, and mean volumetric breast density were calculated. Based on percentage volumetric breast density, a volumetric density grade was assigned from 1 to 4. The weighted overall kappa was 0.895 (almost perfect agreement) for the two radiologists' BI-RADS density estimates. A statistically significant difference was seen in mean volumetric breast density among the BI-RADS density categories. With increasing BI-RADS density category, an increase in mean volumetric breast density was also seen (P < 0.001). A significant positive correlation was found between BI-RADS categories and volumetric density grading by the fully automated software (ρ = 0.728, P < 0.001 for the first radiologist and ρ = 0.725, P < 0.001 for the second radiologist). Pairwise estimates of the weighted kappa between Volpara density grade and BI-RADS density category by the two observers showed fair agreement (κ = 0.398 and 0.388, respectively). In our study, a good correlation was seen between density grading using the fully automated volumetric method and density grading using BI-RADS density categories assigned by the two radiologists. Thus, the fully automated volumetric method may be used to quantify breast density on routine mammography. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
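The weighted kappa statistics reported above can be computed directly from two raters' category assignments. The sketch below uses synthetic ratings (not the study data) and quadratic disagreement weights:

```python
import numpy as np

# Weighted kappa between two raters assigning ordinal categories,
# e.g. BI-RADS density grades. Ratings here are synthetic.
def weighted_kappa(r1, r2, n_cat, scheme="quadratic"):
    r1, r2 = np.asarray(r1), np.asarray(r2)
    # Observed agreement matrix (proportions).
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()
    # Expected matrix under independence of the two raters.
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Disagreement weights: 0 on the diagonal, growing with distance.
    i, j = np.indices((n_cat, n_cat))
    if scheme == "quadratic":
        w = (i - j) ** 2 / (n_cat - 1) ** 2
    else:  # linear
        w = np.abs(i - j) / (n_cat - 1)
    return 1.0 - (w * obs).sum() / (w * exp).sum()

# Two raters assigning categories 0-3 (BI-RADS a-d), mostly agreeing.
rater1 = [0, 0, 1, 1, 2, 2, 3, 3, 1, 2, 3, 0]
rater2 = [0, 1, 1, 1, 2, 2, 3, 2, 1, 2, 3, 0]
print(f"weighted kappa = {weighted_kappa(rater1, rater2, 4):.3f}")
```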

  6. Density estimation in wildlife surveys

    USGS Publications Warehouse

    Bart, Jonathan; Droege, Sam; Geissler, Paul E.; Peterjohn, Bruce G.; Ralph, C. John

    2004-01-01

    Several authors have recently discussed the problems with using index methods to estimate trends in population size. Some have expressed the view that index methods should virtually never be used. Others have responded by defending index methods and questioning whether better alternatives exist. We suggest that index methods are often a cost-effective component of valid wildlife monitoring but that double-sampling or another procedure that corrects for bias or establishes bounds on bias is essential. The common assertion that index methods require constant detection rates for trend estimation is mathematically incorrect; the requirement is no long-term trend in detection "ratios" (index result/parameter of interest), a requirement that is probably approximately met by many well-designed index surveys. We urge that more attention be given to defining bird density rigorously and in ways useful to managers. Once this is done, 4 sources of bias in density estimates may be distinguished: coverage, closure, surplus birds, and detection rates. Distance, double-observer, and removal methods do not reduce bias due to coverage, closure, or surplus birds. These methods may yield unbiased estimates of the number of birds present at the time of the survey, but only if their required assumptions are met, which we doubt occurs very often in practice. Double-sampling, in contrast, produces unbiased density estimates if the plots are randomly selected and estimates on the intensive surveys are unbiased. More work is needed, however, to determine the feasibility of double-sampling in different populations and habitats. We believe the tension that has developed over appropriate survey methods can best be resolved through increased appreciation of the mathematical aspects of indices, especially the effects of bias, and through studies in which candidate methods are evaluated against known numbers determined through intensive surveys.

  7. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  8. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  9. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Rényi entropy, which is useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy, which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
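A minimal sketch of the quadratic (Rényi order-2) entropy via a Gaussian-kernel plug-in estimate. For a Gaussian kernel, the integral of the squared KDE has a closed form: a double sum of normal densities at pairwise differences with variance 2h². The data are synthetic and the bandwidth uses the normal rule of thumb, an assumption rather than the optimal choices derived in the paper:

```python
import numpy as np
from scipy.stats import norm

# Plug-in estimate of the quadratic entropy H2 = -log( integral p(x)^2 dx ).
def quadratic_entropy(x, h):
    x = np.asarray(x)
    diffs = x[:, None] - x[None, :]
    # Closed-form integral of the squared Gaussian KDE ("information potential").
    info_potential = norm.pdf(diffs, scale=np.sqrt(2) * h).mean()
    return -np.log(info_potential)

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=300)
h = 1.06 * np.std(x, ddof=1) * len(x) ** (-1 / 5)  # normal rule of thumb
print(f"estimated H2 = {quadratic_entropy(x, h):.3f}")
# True H2 for a standard normal is log(2*sqrt(pi)) ~ 1.266
```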

  10. Toward accurate and precise estimates of lion density.

    PubMed

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2017-08-01

    Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km², and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability.
We therefore call for a unified framework to assess lion numbers in key populations to improve management and policy decisions. © 2016 Society for Conservation Biology.

  11. Estimation of density of mongooses with capture-recapture and distance sampling

    USGS Publications Warehouse

    Corn, J.L.; Conroy, M.J.

    1998-01-01

    We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (x̄ = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (x̄ = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.

  12. Passive acoustic monitoring of beaked whale densities in the Gulf of Mexico.

    PubMed

    Hildebrand, John A; Baumann-Pickering, Simone; Frasier, Kaitlin E; Trickey, Jennifer S; Merkens, Karlina P; Wiggins, Sean M; McDonald, Mark A; Garrison, Lance P; Harris, Danielle; Marques, Tiago A; Thomas, Len

    2015-11-12

    Beaked whales are deep-diving, elusive animals that are difficult to census with conventional visual surveys. Methods are presented for the density estimation of beaked whales, using passive acoustic monitoring data collected at sites in the Gulf of Mexico (GOM) during and following the Deepwater Horizon oil spill (2010-2013). Beaked whale species detected include Gervais' (Mesoplodon europaeus), Cuvier's (Ziphius cavirostris), Blainville's (Mesoplodon densirostris) and an unknown species of Mesoplodon (designated Beaked Whale Gulf - BWG). For Gervais' and Cuvier's beaked whales, we estimated weekly animal density using two methods, one based on the number of echolocation clicks, and another based on the detection of animal groups during 5-min time-bins. Density estimates derived from these two methods were in good general agreement. At two sites in the western GOM, Gervais' beaked whales were present throughout the monitoring period, but Cuvier's beaked whales were present only seasonally, with periods of low density during the summer and higher density in the winter. At an eastern GOM site, both Gervais' and Cuvier's beaked whales had a high density throughout the monitoring period.
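A sketch of the generic click-counting (cue-count) density estimator that underlies passive acoustic methods like these. All numbers are hypothetical placeholders, not values from the study:

```python
# Cue-counting density estimator:
#   D = n * (1 - c) / (a * T * P * r)
# where n is the number of detected echolocation clicks, c the estimated
# false-positive proportion, a the area effectively monitored, T the
# recording time, P the probability of detecting a click produced in the
# area, and r the click production rate per animal. All values hypothetical.
n_clicks = 250_000        # clicks detected in one week
c = 0.05                  # false-positive proportion
P = 0.4                   # detection probability within the monitored area
a = 1500.0                # monitored area (km^2)
T = 7 * 24 * 3600.0       # monitoring duration (s): one week
r = 2.0                   # clicks per second per animal

density = n_clicks * (1 - c) / (a * T * P * r)
print(f"estimated density: {density * 1000:.3f} animals per 1000 km^2")
```

Note that P, c, and r must themselves be estimated (e.g. from tag data and validated detectors), which is where most of the practical difficulty lies.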

  13. Estimating historical snag density in dry forests east of the Cascade Range

    Treesearch

    Richy J. Harrod; William L. Gaines; William E. Hartl; Ann. Camp

    1998-01-01

    Estimating snag densities in pre-European settlement landscapes (i.e., historical conditions) provides land managers with baseline information for comparing current snag densities. We propose a method for determining historical snag densities in the dry forests east of the Cascade Range. Basal area increase was calculated from tree ring measurements of old ponderosa...

  14. A fast and objective multidimensional kernel density estimation method: fastKDE

    DOE PAGES

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.; ...

    2016-03-07

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
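As a library-agnostic sketch of the conditional-PDF idea (deliberately not the fastKDE implementation), one can slice a joint 2D Gaussian KDE at a fixed x and renormalize the slice:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.integrate import trapezoid

# Conditional PDF p(y | x) from a 2D Gaussian KDE: evaluate the joint
# density along a y-grid at fixed x, then normalize. Synthetic data.
rng = np.random.default_rng(3)
x = rng.normal(0, 1, 2000)
y = 0.8 * x + rng.normal(0, 0.5, 2000)   # y depends linearly on x

kde = gaussian_kde(np.vstack([x, y]))

# p(y | x = 1): joint density slice at x = 1, normalized over y.
ygrid = np.linspace(-3, 4, 400)
joint_slice = kde(np.vstack([np.full_like(ygrid, 1.0), ygrid]))
cond = joint_slice / trapezoid(joint_slice, ygrid)

mean_y_given_x1 = trapezoid(ygrid * cond, ygrid)
print(f"E[y | x = 1] ~ {mean_y_given_x1:.2f}  (generating model: 0.8)")
```

This brute-force approach scales poorly with sample size, which is exactly the cost that FFT-based methods like fastKDE avoid.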

  15. A citizen science based survey method for estimating the density of urban carnivores.

    PubMed

    Scott, Dawn M; Baker, Rowenna; Charman, Naomi; Karlsson, Heidi; Yarnell, Richard W; Mill, Aileen C; Smith, Graham C; Tolhurst, Bryony A

    2018-01-01

    Globally there are many examples of synanthropic carnivores exploiting growth in urbanisation. As carnivores can come into conflict with humans and are potential vectors of zoonotic disease, assessing densities in suburban areas and identifying factors that influence them are necessary to aid management and mitigation. However, fragmented, privately owned land restricts the use of conventional carnivore surveying techniques in these areas, requiring development of novel methods. We present a method that combines questionnaire distribution to residents with field surveys and GIS, to determine relative density of two urban carnivores in England, Great Britain. We determined the density of red fox (Vulpes vulpes) social groups in 14 suburban areas of approximately 1 km² in 8 different towns and cities, and of Eurasian badger (Meles meles) social groups in three suburban areas of one city. Average relative fox group density (FGD) was 3.72 km⁻², double the estimates for cities with resident foxes in the 1980s. Density was comparable to an alternative estimate derived from trapping and GPS-tracking, indicating the validity of the method. However, FGD did not correlate with a national dataset based on fox sightings, indicating the unreliability of the national data for determining actual densities or extrapolating a national population estimate. Using species-specific clustering units that reflect social organisation, the method was additionally applied to suburban badgers to derive relative badger group density (BGD) for one city (Brighton, 2.41 km⁻²). We demonstrate that citizen science approaches can effectively obtain data to assess suburban carnivore density; however, publicly derived national data sets need to be locally validated before extrapolations can be undertaken. The method we present for assessing densities of foxes and badgers in British towns and cities is also adaptable to other urban carnivores elsewhere. However, this transferability is contingent on species traits meeting particular criteria, and on resident responsiveness.

  16. A citizen science based survey method for estimating the density of urban carnivores

    PubMed Central

    Baker, Rowenna; Charman, Naomi; Karlsson, Heidi; Yarnell, Richard W.; Mill, Aileen C.; Smith, Graham C.; Tolhurst, Bryony A.

    2018-01-01

    Globally there are many examples of synanthropic carnivores exploiting growth in urbanisation. As carnivores can come into conflict with humans and are potential vectors of zoonotic disease, assessing densities in suburban areas and identifying factors that influence them are necessary to aid management and mitigation. However, fragmented, privately owned land restricts the use of conventional carnivore surveying techniques in these areas, requiring development of novel methods. We present a method that combines questionnaire distribution to residents with field surveys and GIS, to determine relative density of two urban carnivores in England, Great Britain. We determined the density of red fox (Vulpes vulpes) social groups in 14 suburban areas of approximately 1 km² in 8 different towns and cities, and of Eurasian badger (Meles meles) social groups in three suburban areas of one city. Average relative fox group density (FGD) was 3.72 km⁻², double the estimates for cities with resident foxes in the 1980s. Density was comparable to an alternative estimate derived from trapping and GPS-tracking, indicating the validity of the method. However, FGD did not correlate with a national dataset based on fox sightings, indicating the unreliability of the national data for determining actual densities or extrapolating a national population estimate. Using species-specific clustering units that reflect social organisation, the method was additionally applied to suburban badgers to derive relative badger group density (BGD) for one city (Brighton, 2.41 km⁻²). We demonstrate that citizen science approaches can effectively obtain data to assess suburban carnivore density; however, publicly derived national data sets need to be locally validated before extrapolations can be undertaken. The method we present for assessing densities of foxes and badgers in British towns and cities is also adaptable to other urban carnivores elsewhere. However, this transferability is contingent on species traits meeting particular criteria, and on resident responsiveness. PMID:29787598

  17. Estimating Small-Body Gravity Field from Shape Model and Navigation Data

    NASA Technical Reports Server (NTRS)

    Park, Ryan S.; Werner, Robert A.; Bhaskaran, Shyam

    2008-01-01

    This paper presents a method to model the external gravity field and to estimate the internal density variation of a small body. We first discuss the modeling problem, where we assume the polyhedral shape and internal density distribution are given, and model the body interior using finite-element definitions, such as cubes and spheres. The gravitational attractions computed from these approaches are compared with the true uniform-density polyhedral attraction, and the levels of accuracy are presented. We then discuss the inverse problem, where we assume the body shape, radiometric measurements, and a priori density constraints are given, and estimate the internal density variation by estimating the density of each finite element. The results show that the accuracy of the estimated density variation depends strongly on the orbit altitude, finite-element resolution, and measurement accuracy.
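A toy sketch of the forward (modeling) step described above: summing point-mass attractions of uniform-density cubic elements and checking the result against the monopole term G·M/r². All dimensions and densities are hypothetical, and point masses are a cruder element model than the paper's:

```python
import numpy as np

# Finite-element forward gravity model: the body is a uniform cube
# discretized into n^3 cubic elements, each treated as a point mass.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def cube_grid_accel(field_point, half_extent, n, rho):
    """Acceleration at field_point from a cube of half-width half_extent (m),
    density rho (kg/m^3), discretized into n^3 equal cubic elements."""
    edge = 2 * half_extent / n
    centers = np.linspace(-half_extent + edge / 2, half_extent - edge / 2, n)
    xs, ys, zs = np.meshgrid(centers, centers, centers, indexing="ij")
    cells = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])
    dm = rho * edge ** 3                # mass of each element
    r = field_point - cells             # vectors element -> field point
    d = np.linalg.norm(r, axis=1)
    # Newtonian point-mass attraction, summed over all elements.
    return -(G * dm * r / d[:, None] ** 3).sum(axis=0)

p = np.array([2000.0, 0.0, 0.0])        # field point 2 km from the center
a_coarse = cube_grid_accel(p, 500.0, 4, 2000.0)
a_fine = cube_grid_accel(p, 500.0, 12, 2000.0)
# At this exterior distance both should approach the monopole term G*M/r^2.
M = 2000.0 * (1000.0) ** 3              # total mass of the 1 km cube
print(a_coarse, a_fine, G * M / 2000.0 ** 2)
```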

  18. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments is reviewed, along with some of its problems and the conditions under which it fails. In later sections, the functional form of the maximum entropy method of moments probability distribution is incorporated into Bayesian probability theory. It is shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments: one gets posterior probabilities for the Lagrange multipliers and, finally, one can put error bars on the resulting estimated density function.
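A minimal numerical sketch of the maximum entropy method of moments on a bounded grid: the maxent density has the form p(x) ∝ exp(-λ₁x - λ₂x²), and the Lagrange multipliers are tuned so the first two moments match their targets. This is a toy setup, not the Bayesian treatment developed in the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import trapezoid

# Maximum entropy density on [-4, 4] constrained to match two moments.
xs = np.linspace(-4, 4, 801)
targets = np.array([0.0, 1.0])                 # target E[x], E[x^2]

def density(lams):
    # Maxent form p(x) ~ exp(-lam1*x - lam2*x^2), normalized on the grid.
    logp = -(lams[0] * xs + lams[1] * xs ** 2)
    logp -= logp.max()                         # numerical stability
    p = np.exp(logp)
    return p / trapezoid(p, xs)

def moment_gap(lams):
    # Squared mismatch between fitted and target moments.
    p = density(lams)
    m1 = trapezoid(xs * p, xs)
    m2 = trapezoid(xs ** 2 * p, xs)
    return (m1 - targets[0]) ** 2 + (m2 - targets[1]) ** 2

res = minimize(moment_gap, x0=np.zeros(2), method="Nelder-Mead")
p = density(res.x)
print("fitted moments:", trapezoid(xs * p, xs), trapezoid(xs ** 2 * p, xs))
```

With these two constraints the solution is (a truncated) Gaussian, illustrating why a Gaussian is the maxent density for fixed mean and variance.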

  19. MPN estimation of qPCR target sequence recoveries from whole cell calibrator samples

    EPA Science Inventory

    DNA extracts from enumerated target organism cells (calibrator samples) have been used for estimating Enterococcus cell equivalent densities in surface waters by a comparative cycle threshold (Ct) qPCR analysis method. To compare surface water Enterococcus density estimates from ...

  20. A simple method for estimating the size of nuclei on fractal surfaces

    NASA Astrophysics Data System (ADS)

    Zeng, Qiang

    2017-10-01

    Determining the size of nuclei on complex surfaces remains a major challenge in biological, materials, and chemical engineering. Here the author reports a simple method to estimate the size of nuclei in contact with complex (fractal) surfaces. The approach rests on two assumptions: contact-area proportionality, used to determine the nucleation density, and scaling congruence between nuclei and surfaces, used to identify the contact regimes. Three different regimes govern the equations for estimating the nucleation site density. Nuclei large enough eliminate the effect of the fractal structure, while nuclei small enough render the nucleation site density independent of the fractal parameters. Only when the nuclei match the fractal scales is the nucleation site density coupled to both the fractal parameters and the nucleus size. The method was validated against experimental data reported in the literature and may provide an effective way to estimate the size of nuclei on fractal surfaces, with a number of promising applications in related fields.

  1. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGES

    Smallwood, D. O.

    1996-01-01

    It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
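The ordinary coherence, the simplest of the three functions mentioned, can be estimated directly from cross- and auto-spectral densities: γ²(f) = |Sxy(f)|² / (Sxx(f)·Syy(f)). A sketch with synthetic signals sharing a common 50 Hz component (the partial and multiple coherences then follow from the full cross-spectral density matrix):

```python
import numpy as np
from scipy.signal import csd, welch

# Ordinary coherence between two noisy signals with a shared 50 Hz tone.
rng = np.random.default_rng(4)
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + rng.normal(0, 1, t.size)
y = 0.5 * np.sin(2 * np.pi * 50 * t + 0.3) + rng.normal(0, 1, t.size)

# Welch-averaged cross- and auto-spectral density estimates.
f, Sxy = csd(x, y, fs=fs, nperseg=1024)
_, Sxx = welch(x, fs=fs, nperseg=1024)
_, Syy = welch(y, fs=fs, nperseg=1024)
coh = np.abs(Sxy) ** 2 / (Sxx * Syy)

i50 = np.argmin(np.abs(f - 50.0))
print(f"coherence near 50 Hz: {coh[i50]:.2f}; "
      f"median elsewhere: {np.median(coh):.2f}")
```

Segment averaging is essential here: with a single segment the coherence estimate is identically 1 at every frequency.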

  2. Examining Temporal Sample Scale and Model Choice with Spatial Capture-Recapture Models in the Common Leopard Panthera pardus.

    PubMed

    Goldberg, Joshua F; Tempa, Tshering; Norbu, Nawang; Hebblewhite, Mark; Mills, L Scott; Wangchuk, Tshewang R; Lukacs, Paul

    2015-01-01

    Many large carnivores occupy wide geographic distributions and face threats from habitat loss and fragmentation, poaching, prey depletion, and human-wildlife conflict. Conservation requires robust techniques for estimating population densities and trends, but the elusive nature and low densities of many large carnivores make them difficult to detect. Spatial capture-recapture (SCR) models provide a means for handling imperfect detectability, while linking population estimates to individual movement patterns to provide more accurate estimates than standard approaches. Within this framework, we investigate the effect of different sample interval lengths on density estimates, using simulations and a common leopard (Panthera pardus) model system. We apply Bayesian SCR methods to 89 simulated datasets and camera-trapping data from 22 leopards captured 82 times during winter 2010-2011 in Royal Manas National Park, Bhutan. We show that sample interval length from daily, weekly, monthly or quarterly periods did not appreciably affect median abundance or density, but did influence precision. We observed the largest gains in precision when moving from quarterly to shorter intervals. We therefore recommend daily sampling intervals for monitoring rare or elusive species where practicable, but note that monthly or quarterly sample periods can have similar informative value. We further develop a novel application of Bayes factors to select models where multiple ecological factors are integrated into density estimation. Our simulations demonstrate that these methods can help identify the "true" explanatory mechanisms underlying the data. Using this method, we found strong evidence for sex-specific movement distributions in leopards, suggesting that sexual patterns of space-use influence density. This model estimated a density of 10.0 leopards/100 km² (95% credibility interval: 6.25-15.93), comparable to contemporary estimates in Asia. These SCR methods provide a means to monitor the effect of management interventions on leopards and other species of conservation interest.

  3. Examining Temporal Sample Scale and Model Choice with Spatial Capture-Recapture Models in the Common Leopard Panthera pardus

    PubMed Central

    Goldberg, Joshua F.; Tempa, Tshering; Norbu, Nawang; Hebblewhite, Mark; Mills, L. Scott; Wangchuk, Tshewang R.; Lukacs, Paul

    2015-01-01

    Many large carnivores occupy a wide geographic distribution and face threats from habitat loss and fragmentation, poaching, prey depletion, and human–wildlife conflict. Conservation requires robust techniques for estimating population densities and trends, but the elusive nature and low densities of many large carnivores make them difficult to detect. Spatial capture-recapture (SCR) models provide a means for handling imperfect detectability, while linking population estimates to individual movement patterns to provide more accurate estimates than standard approaches. Within this framework, we investigate the effect of different sample interval lengths on density estimates, using simulations and a common leopard (Panthera pardus) model system. We apply Bayesian SCR methods to 89 simulated datasets and camera-trapping data from 22 leopards captured 82 times during winter 2010–2011 in Royal Manas National Park, Bhutan. We show that sample interval lengths of daily, weekly, monthly or quarterly periods did not appreciably affect median abundance or density, but did influence precision. We observed the largest gains in precision when moving from quarterly to shorter intervals. We therefore recommend daily sampling intervals for monitoring rare or elusive species where practicable, but note that monthly or quarterly sample periods can have similar informative value. We further develop a novel application of Bayes factors to select models where multiple ecological factors are integrated into density estimation. Our simulations demonstrate that these methods can help identify the “true” explanatory mechanisms underlying the data. Using this method, we found strong evidence for sex-specific movement distributions in leopards, suggesting that sexual patterns of space use influence density. This model estimated a density of 10.0 leopards/100 km2 (95% credibility interval: 6.25–15.93), comparable to contemporary estimates in Asia. These SCR methods provide a guide to monitor and observe the effect of management interventions on leopards and other species of conservation interest. PMID:26536231

  4. Technical Note: Cortical thickness and density estimation from clinical CT using a prior thickness-density relationship

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, Ludovic, E-mail: ludohumberto@gmail.com; Hazrati Marangalou, Javad; Rietbergen, Bert van

    Purpose: Cortical thickness and density are critical components in determining the strength of bony structures. Computed tomography (CT) is one possible modality for analyzing the cortex in 3D. In this paper, a model-based approach for measuring the cortical bone thickness and density from clinical CT images is proposed. Methods: Density variations across the cortex were modeled as a function of the cortical thickness and density, location of the cortex, density of surrounding tissues, and imaging blur. High resolution micro-CT data of cadaver proximal femurs were analyzed to determine a relationship between cortical thickness and density. This thickness-density relationship was used as prior information to be incorporated in the model to obtain accurate measurements of cortical thickness and density from clinical CT volumes. The method was validated using micro-CT scans of 23 cadaver proximal femurs. Simulated clinical CT images with different voxel sizes were generated from the micro-CT data. Cortical thickness and density were estimated from the simulated images using the proposed method and compared with measurements obtained using the micro-CT images to evaluate the effect of voxel size on the accuracy of the method. Then, 19 of the 23 specimens were imaged using a clinical CT scanner. Cortical thickness and density were estimated from the clinical CT images using the proposed method and compared with the micro-CT measurements. Finally, a case-control study including 20 patients with osteoporosis and 20 age-matched controls with normal bone density was performed to evaluate the proposed method in a clinical context. Results: Cortical thickness (density) estimation errors were 0.07 ± 0.19 mm (−18 ± 92 mg/cm³) using the simulated clinical CT volumes with the smallest voxel size (0.33 × 0.33 × 0.5 mm³), and 0.10 ± 0.24 mm (−10 ± 115 mg/cm³) using the volumes with the largest voxel size (1.0 × 1.0 × 3.0 mm³). A trend for the cortical thickness and density estimation errors to increase with voxel size was observed and was more pronounced for thin cortices. Using clinical CT data for 19 of the 23 samples, mean errors of 0.18 ± 0.24 mm for the cortical thickness and 15 ± 106 mg/cm³ for the density were found. The case-control study showed that osteoporotic patients had a thinner cortex and a lower cortical density, with average differences of −0.8 mm and −58.6 mg/cm³ at the proximal femur in comparison with age-matched controls (p-value < 0.001). Conclusions: This method might be a promising approach for the quantification of cortical bone thickness and density using clinical routine imaging techniques. Future work will concentrate on investigating how this approach can improve the estimation of mechanical strength of bony structures, the prevention of fracture, and the management of osteoporosis.

  5. Computation of mass-density images from x-ray refraction-angle images.

    PubMed

    Wernick, Miles N; Yang, Yongyi; Mondal, Indrasis; Chapman, Dean; Hasnah, Moumen; Parham, Christopher; Pisano, Etta; Zhong, Zhong

    2006-04-07

    In this paper, we investigate the possibility of computing quantitatively accurate images of mass density variations in soft tissue. This is a challenging task, because density variations in soft tissue, such as the breast, can be very subtle. Beginning from an image of refraction angle created by either diffraction-enhanced imaging (DEI) or multiple-image radiography (MIR), we estimate the mass-density image using a constrained least squares (CLS) method. The CLS algorithm yields accurate density estimates while effectively suppressing noise. Our method improves on an analytical method proposed by Hasnah et al (2005 Med. Phys. 32 549-52), which can produce significant artefacts when even a modest level of noise is present. We present a quantitative evaluation study to determine the accuracy with which mass density can be determined in the presence of noise. Based on computer simulations, we find that the mass-density estimation error can be as low as a few per cent for typical density variations found in the breast. Example images computed from less-noisy real data are also shown to illustrate the feasibility of the technique. We anticipate that density imaging may have application in assessment of water content of cartilage resulting from osteoarthritis, in evaluation of bone density, and in mammographic interpretation.

  6. Estimation of effective x-ray tissue attenuation differences for volumetric breast density measurement

    NASA Astrophysics Data System (ADS)

    Chen, Biao; Ruth, Chris; Jing, Zhenxue; Ren, Baorui; Smith, Andrew; Kshirsagar, Ashwini

    2014-03-01

    Breast density has been identified as a risk factor for developing breast cancer and as an indicator of diagnostic obstruction of lesions due to the masking effect. Volumetric density measurement evaluates fibro-glandular volume, breast volume, and breast volume density, measures that have potential advantages over area density measurement in risk assessment. One class of volumetric density computing methods is based on finding the relative fibro-glandular tissue attenuation with regard to the reference fat tissue, and estimating the effective x-ray attenuation difference between fibro-glandular and fat tissue is key to volumetric breast density computation. We have modeled the effective attenuation difference as a function of the actual x-ray skin entrance spectrum, breast thickness, fibro-glandular tissue thickness distribution, and detector efficiency. Compared to other approaches, our method has threefold advantages: (1) it avoids the system-calibration-based creation of effective attenuation differences, which may introduce tedious calibrations for each imaging system and may not reflect spectrum changes or scatter-induced overestimation or underestimation of breast density; (2) it obtains system-specific, separate, and differential attenuation values of fibro-glandular and fat tissue for each mammographic image; and (3) it further reduces the impact of breast thickness accuracy on volumetric breast density. A quantitative breast volume phantom with a set of equivalent fibro-glandular thicknesses was used to evaluate the volumetric breast density measurement with the proposed method. The experimental results show that the method significantly improves the accuracy of breast density estimation.

  7. Evaluation of trapping-web designs

    USGS Publications Warehouse

    Lukacs, P.M.; Anderson, D.R.; Burnham, K.P.

    2005-01-01

    The trapping web is a method for estimating the density and abundance of animal populations. A Monte Carlo simulation study is performed to explore the performance of the trapping web for estimating animal density under a variety of web designs and animal behaviours. The trapping web performs well when animals have home ranges, even if the home ranges are large relative to trap spacing. Webs should contain at least 90 traps. Trapping should continue for 5-7 occasions. Movement rates have little impact on density estimates when animals are confined to home ranges. Estimation is poor when animals do not have home ranges and movement rates are rapid. The trapping web is useful for estimating the density of animals that are hard to detect and occur at potentially low densities. © CSIRO 2005.

  8. Unfolding sphere size distributions with a density estimator based on Tikhonov regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weese, J.; Korat, E.; Maier, D.

    1997-12-01

    This report proposes a method for unfolding sphere size distributions, given a sample of radii, that combines the advantages of a density estimator with those of Tikhonov regularization methods. The following topics are discussed in this report to develop the method: the relation between the profile and the sphere size distribution; the method for unfolding sphere size distributions; the results based on simulations; and the comparison with experimental data.
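
    The trade-off behind such Tikhonov-regularized unfolding can be sketched in a few lines. The Gaussian smearing matrix below is an invented stand-in for the report's actual profile/size-distribution relation, so this is only an illustration of why a naive inversion fails while the regularized one does not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretised toy problem: observed profile g = A f + noise, where A is an invented
# Gaussian smearing kernel (an assumption for illustration, not the report's relation).
n = 60
r = np.linspace(0.0, 1.0, n)
A = np.exp(-0.5 * ((r[:, None] - r[None, :]) / 0.05) ** 2)
A /= A.sum(axis=1, keepdims=True)                # row-normalised folding matrix

f_true = np.exp(-0.5 * ((r - 0.5) / 0.1) ** 2)   # "true" size distribution
g = A @ f_true + rng.normal(0.0, 0.01, n)        # noisy folded data

def tikhonov_unfold(A, g, lam):
    """Minimise ||A f - g||^2 + lam ||f||^2, i.e. f = (A^T A + lam I)^(-1) A^T g."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ g)

f_reg = tikhonov_unfold(A, g, lam=1e-3)
f_naive = np.linalg.solve(A, g)                  # direct inversion, no regularisation

err_reg = np.linalg.norm(f_reg - f_true) / np.linalg.norm(f_true)
err_naive = np.linalg.norm(f_naive - f_true) / np.linalg.norm(f_true)
print(err_reg < err_naive)                       # regularisation tames noise amplification
```

    The penalty term damps the small singular values of the folding kernel that would otherwise amplify the measurement noise without bound.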

  9. Crowd density estimation based on convolutional neural networks with mixed pooling

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Zheng, Hong; Zhang, Ying; Zhang, Dongming

    2017-09-01

    Crowd density estimation is an important topic in the fields of machine learning and video surveillance. Existing methods do not provide satisfactory classification accuracy; moreover, they have difficulty in adapting to complex scenes. Therefore, we propose a method based on convolutional neural networks (CNNs). The proposed method improves the performance of crowd density estimation in two key ways. First, we propose a feature pooling method named mixed pooling to regularize the CNNs; it replaces deterministic pooling operations with a learned parameter that combines the conventional max pooling and average pooling methods. Second, we present a classification strategy in which an image is divided into two cells that are categorized separately. The proposed approach was evaluated on three datasets: two ground-truth image sequences and the University of California, San Diego, anomaly detection dataset. The results demonstrate that the proposed approach performs more effectively, and is easier to apply, than existing methods.
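
    The mixed-pooling operator itself is compact enough to sketch. In the paper the combination weight is learned during training; the minimal NumPy version below fixes it, so this is an illustration of the operator only, not the paper's CNN:

```python
import numpy as np

def mixed_pool2d(x, size=2, alpha=0.5):
    """Mixed pooling on a 2-D feature map: alpha * max-pool + (1 - alpha) * average-pool.
    In the paper alpha is a learned parameter; here it is fixed for illustration."""
    h, w = x.shape
    assert h % size == 0 and w % size == 0
    # Gather non-overlapping size x size windows.
    windows = x.reshape(h // size, size, w // size, size).transpose(0, 2, 1, 3)
    windows = windows.reshape(h // size, w // size, size * size)
    return alpha * windows.max(axis=2) + (1.0 - alpha) * windows.mean(axis=2)

x = np.arange(16, dtype=float).reshape(4, 4)
out = mixed_pool2d(x, size=2, alpha=0.5)
print(out.shape)  # (2, 2)
```

    With alpha=1 the operator reduces to max pooling and with alpha=0 to average pooling; learning alpha lets the network interpolate between the two regimes.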

  10. Estimations of population density for selected periods between the Neolithic and AD 1800.

    PubMed

    Zimmermann, Andreas; Hilpert, Johanna; Wendt, Karl Peter

    2009-04-01

    We describe a combination of methods applied to obtain reliable estimations of population density using archaeological data. The combination is based on a hierarchical model of scale levels. The necessary data and methods used to obtain the results are chosen so as to define transfer functions from one scale level to another. We apply our method to data sets from western Germany that cover early Neolithic, Iron Age, Roman, and Merovingian times as well as historical data from AD 1800. Error margins and natural and historical variability are discussed. Our results for nonstate societies are always lower than conventional estimations compiled from the literature, and we discuss the reasons for this finding. At the end, we compare the calculated local and global population densities with other estimations from different parts of the world.

  11. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system, taking into account uncertainties in the design variables; common results are estimates of a response density, which also imply estimates of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which results in one value of the response out of the many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both are among the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer analyses are only a few of the possibilities with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method.
The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS, because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when both were used to estimate the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. LHS also required fewer calculations than MC to obtain low-error answers with a high degree of confidence. NESSUS is therefore an important reliability tool that offers a variety of sound probabilistic methods, and the new LHS module is a valuable enhancement of the program.
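
    The variance advantage of Latin hypercube sampling over plain Monte Carlo that these test cases probe can be reproduced on a toy problem. The quadratic-plus-sine function below is an arbitrary stand-in for a finite-element response, not one of the SAE test cases:

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, dim, rng):
    """n points in [0,1)^dim, one jittered point per equal-probability stratum
    in each dimension, with random pairing across dimensions."""
    strata = (rng.random((n, dim)) + np.arange(n)[:, None]) / n
    cols = [strata[rng.permutation(n), j] for j in range(dim)]
    return np.stack(cols, axis=1)

def response(u):
    # Toy smooth response standing in for a finite-element output (illustration only).
    return u[:, 0] ** 2 + np.sin(2.0 * np.pi * u[:, 1])

true_mean = 1.0 / 3.0      # E[u1^2] = 1/3 and E[sin(2*pi*u2)] = 0 on [0,1)

n, reps = 64, 200
mc_means = [response(rng.random((n, 2))).mean() for _ in range(reps)]
lhs_means = [response(latin_hypercube(n, 2, rng)).mean() for _ in range(reps)]

mc_rmse = float(np.sqrt(np.mean((np.array(mc_means) - true_mean) ** 2)))
lhs_rmse = float(np.sqrt(np.mean((np.array(lhs_means) - true_mean) ** 2)))
print(lhs_rmse < mc_rmse)  # stratification cuts the variance of the mean estimate
```

    Stratifying each input dimension removes most of the variance contributed by the additive parts of the response, which is why LHS needs fewer samples than MC for the same confidence.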

  12. Application of Density Estimation Methods to Datasets from a Glider

    DTIC Science & Technology

    2013-09-30

    sperm whales as well as different dolphin species. OBJECTIVES The objective of this research is to extend existing methods for cetacean...a cue counting approach, where a cue has been defined as a clicking event (Küsel et al., 2011), to density estimation from data recorded by single

  13. The Ecology and Acoustic Behavior of Minke Whales in the Hawaiian and other Pacific Islands

    DTIC Science & Technology

    2012-09-30

    the SECR density estimation methods (developed by project partners Len Thomas from St. Andrews and Steve Martin from SPAWAR Systems San Diego...PROJECTS Related projects were conducted by Len Thomas, Vincent Janik, and Steve Martin. These projects are using density estimates derived from...Martin, D.K. Mellinger, S. Jarvis, R.P. Morrissey, C. Ciminello, and N. DiMarzio, 2010. Spatially explicit capture recapture methods to estimate minke

  14. A Balanced Approach to Adaptive Probability Density Estimation.

    PubMed

    Kovacs, Julio A; Helmick, Cailee; Wriggers, Willy

    2017-01-01

    Our development of a Fast (Mutual) Information Matching (FIM) of molecular dynamics time series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially in cases of very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search which results in good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density estimation problem in statistics. We have applied it on molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical application domains such as bioinformatics, signal processing and econometrics.
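
    A minimal sketch of the adaptive-smoothing idea, assuming a nearest-neighbour ("balloon") bandwidth rule; BADE's actual smoothing optimization and visual criterion are not reproduced here:

```python
import math, random

random.seed(0)
# Very uneven sample: a dense narrow component plus a sparse broad one.
sample = [random.gauss(0.0, 0.1) for _ in range(500)] + \
         [random.gauss(3.0, 1.0) for _ in range(50)]

def knn_density(x, data, k=20):
    """Balloon-style adaptive kernel density estimate: the Gaussian bandwidth at the
    evaluation point x is the distance to its k-th nearest sample point, so smoothing
    adapts to the local sampling density (illustrative; not the BADE algorithm itself)."""
    dists = sorted(abs(s - x) for s in data)
    h = max(dists[k - 1], 1e-12)          # adaptive, nearest-neighbour-based bandwidth
    norm = len(data) * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in data) / norm

dense = knn_density(0.0, sample)    # inside the heavily sampled component
sparse = knn_density(3.0, sample)   # inside the sparsely sampled component
print(dense > sparse)
```

    A fixed global bandwidth would either oversmooth the narrow component or undersmooth the sparse one; tying the bandwidth to the local inter-point distance handles both at once, which is the problem BADE addresses with a more careful balance.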

  15. Gradient-based stochastic estimation of the density matrix

    NASA Astrophysics Data System (ADS)

    Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton

    2018-03-01

    Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)_ij decay rapidly with the distance r_ij between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales like S^(-(d+2)/(2d)), where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.
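
    The probing idea underlying such methods can be illustrated with the classic stochastic estimator of a matrix diagonal, which the paper's gradient-based scheme refines; the symmetric positive-definite matrix below is synthetic, not a physical f(H):

```python
import numpy as np

rng = np.random.default_rng(2)

# Classic stochastic probing for the diagonal of a matrix (the building block that
# gradient-based probing improves on). A is a synthetic SPD stand-in for f(H).
n = 200
G = rng.normal(size=(n, n))
A = G @ G.T / n + np.eye(n)          # symmetric positive-definite target

S = 2000                              # number of probe vectors (cost prefactor)
acc = np.zeros(n)
for _ in range(S):
    z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
    acc += z * (A @ z)                    # only matrix-vector products are required
diag_est = acc / S                        # E[z * (A z)] = diag(A)

rel_err = float(np.linalg.norm(diag_est - np.diag(A)) / np.linalg.norm(np.diag(A)))
print(rel_err)    # stochastic error shrinks like 1/sqrt(S)
```

    Because only matrix-vector products are needed, the cost per probe is linear in system size for sparse H, which is what makes this family of estimators attractive at scale.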

  16. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  17. Three statistical models for estimating length of stay.

    PubMed Central

    Selvin, S

    1977-01-01

    The probability density functions implied by three methods of collecting data on the length of stay in an institution are derived. The expected values associated with these density functions are used to calculate unbiased estimates of the expected length of stay. Two of the methods require an assumption about the form of the underlying distribution of length of stay; the third method does not. The three methods are illustrated with hypothetical data exhibiting the Poisson distribution, and the third (distribution-independent) method is used to estimate the length of stay in a skilled nursing facility and in an intermediate care facility for patients enrolled in California's MediCal program. PMID:914532

  19. Body Density Estimates from Upper-Body Skinfold Thicknesses Compared to Air-Displacement Plethysmography

    USDA-ARS?s Scientific Manuscript database

    Technical Summary Objectives: Determine the effect of body mass index (BMI) on the accuracy of body density (Db) estimated with skinfold thickness (SFT) measurements compared to air displacement plethysmography (ADP) in adults. Subjects/Methods: We estimated Db with SFT and ADP in 131 healthy men an...

  20. A method to estimate statistical errors of properties derived from charge-density modelling

    PubMed Central

    Lecomte, Claude

    2018-01-01

    Estimating uncertainties of property values derived from a charge-density model is not straightforward. A methodology, based on calculation of sample standard deviations (SSD) of properties using randomly deviating charge-density models, is proposed with the MoPro software. The parameter shifts applied in the deviating models are generated in order to respect the variance–covariance matrix issued from the least-squares refinement. This ‘SSD methodology’ procedure can be applied to estimate uncertainties of any property related to a charge-density model obtained by least-squares fitting. This includes topological properties such as critical point coordinates, electron density, Laplacian and ellipticity at critical points and charges integrated over atomic basins. Errors on electrostatic potentials and interaction energies are also available now through this procedure. The method is exemplified with the charge density of compound (E)-5-phenylpent-1-enylboronic acid, refined at 0.45 Å resolution. The procedure is implemented in the freely available MoPro program dedicated to charge-density refinement and modelling. PMID:29724964
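
    A stripped-down sketch of the SSD procedure, using a hypothetical two-parameter model in place of a real charge-density refinement:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy illustration of the SSD idea: perturb fitted parameters according to the
# least-squares variance-covariance matrix, recompute a derived property for each
# perturbed model, and take the sample standard deviation as its uncertainty.
# The two-parameter "model" and its covariance are hypothetical.
p_fit = np.array([1.5, 0.8])                 # fitted parameter values
cov = np.array([[0.04, 0.01],
                [0.01, 0.02]])               # variance-covariance from the refinement

def derived_property(p):
    return p[0] * np.exp(-p[1])              # any nonlinear function of the parameters

draws = rng.multivariate_normal(p_fit, cov, size=5000)   # randomly deviating models
values = derived_property(draws.T)
ssd = float(values.std(ddof=1))              # sample standard deviation of the property

# First-order (delta-method) cross-check: sqrt(grad^T C grad) should be close to the SSD.
grad = np.array([np.exp(-p_fit[1]), -p_fit[0] * np.exp(-p_fit[1])])
delta = float(np.sqrt(grad @ cov @ grad))
print(ssd, delta)
```

    The appeal of the sampling route over the delta method is that it needs no analytic derivatives of the property with respect to the refined parameters, so it applies to topological quantities such as critical-point positions just as easily as to simple algebraic ones.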

  1. Simple Form of MMSE Estimator for Super-Gaussian Prior Densities

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-04-01

    The denoising methods that have become popular in recent years for additive white Gaussian noise (AWGN) are Bayesian estimation techniques, e.g., maximum a posteriori (MAP) and minimum mean square error (MMSE) estimation. For super-Gaussian prior densities, it is well known that the MMSE estimator has a complicated form. In this work, we derive the MMSE estimator using a Taylor series. We show that the proposed estimator leads to a simple formula. An extension of this estimator to the Pearson type VII prior density is also offered. The experimental results show that the proposed estimator approximates the original MMSE nonlinearity reasonably well.
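
    For intuition, the MMSE estimator for one common super-Gaussian prior (the Laplacian) can be evaluated by brute-force quadrature; the paper instead derives a simple closed form via a Taylor series, which this sketch does not reproduce:

```python
import math

def mmse_laplacian(y, sigma_n=1.0, b=1.0, n_pts=2000, span=20.0):
    """Numerical MMSE estimate E[x | y] for y = x + n, n ~ N(0, sigma_n^2), with a
    Laplacian (super-Gaussian) prior p(x) = exp(-|x|/b) / (2b). Brute-force
    quadrature on [-span, span]; purely illustrative."""
    num = den = 0.0
    for i in range(n_pts + 1):
        x = -span + 2.0 * span * i / n_pts
        w = math.exp(-abs(x) / b) * math.exp(-0.5 * ((y - x) / sigma_n) ** 2)
        num += x * w
        den += w
    return num / den

# The MMSE nonlinearity shrinks noisy observations toward zero, most strongly for small y.
print(mmse_laplacian(0.5), mmse_laplacian(3.0))
```

    Plotting this nonlinearity against y shows the familiar soft-shrinkage shape; a closed-form approximation to it is exactly what makes such estimators cheap enough for practical denoising.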

  2. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
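
    The standard EM update that the paper modifies can be sketched for a one-dimensional two-component mixture (this is textbook EM, not the feature-space variant):

```python
import math, random

random.seed(4)
# Synthetic two-component sample for fitting.
data = [random.gauss(-2.0, 0.5) for _ in range(300)] + \
       [random.gauss(2.0, 0.5) for _ in range(300)]

mu = [-1.0, 1.0]          # initial means
var = [1.0, 1.0]          # initial variances
mix = [0.5, 0.5]          # initial mixing weights

for _ in range(50):
    # E-step: responsibility of each component for each point.
    resp = []
    for x in data:
        w = [mix[k] * math.exp(-0.5 * (x - mu[k]) ** 2 / var[k])
             / math.sqrt(2.0 * math.pi * var[k]) for k in range(2)]
        s = w[0] + w[1]
        resp.append([w[0] / s, w[1] / s])
    # M-step: re-estimate weights, means and variances from the responsibilities.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        mix[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk

print(sorted(mu))   # means converge near the true values -2 and 2
```

    The Mercer-kernel method replaces the raw coordinates with feature-space images of the data, but the alternation between responsibilities and parameter updates is the same.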

  3. Statistics of some atmospheric turbulence records relevant to aircraft response calculations

    NASA Technical Reports Server (NTRS)

    Mark, W. D.; Fischer, R. W.

    1981-01-01

    Methods for characterizing atmospheric turbulence are described. The methods illustrated include maximum likelihood estimation of the integral scale and intensity of records obeying the von Karman transverse power spectral form, constrained least-squares estimation of the parameters of a parametric representation of autocorrelation functions, estimation of the power spectral density of the instantaneous variance of a record with temporally fluctuating variance, and estimation of the probability density functions of various turbulence components. Descriptions of the computer programs used in the computations are given, and a full listing of these programs is included.

  4. SU-G-JeP2-02: A Unifying Multi-Atlas Approach to Electron Density Mapping Using Multi-Parametric MRI for Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, S; Tianjin University, Tianjin; Hara, W

    Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem remains: the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1- and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU > 200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
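
    The fusion step has a simple closed form when both conditionals are modelled as Gaussians: the posterior mean is the precision-weighted average of the intensity-based and location-based estimates. The numbers below are hypothetical, purely to show the mechanics:

```python
# Toy version of the Bayesian fusion step: combine an intensity-based estimate and an
# atlas (location)-based estimate, each modelled as a Gaussian conditional. With
# Gaussian factors the posterior is Gaussian; its mean is the precision-weighted
# average and its variance is the inverse summed precision. Values are hypothetical.
def fuse(mu_intensity, var_intensity, mu_atlas, var_atlas):
    w1, w2 = 1.0 / var_intensity, 1.0 / var_atlas
    mean = (w1 * mu_intensity + w2 * mu_atlas) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mean, var

# MRI intensity suggests ~300 HU with large uncertainty; the atlas location suggests
# ~800 HU (bone) with a tighter uncertainty, so the fused value leans toward the atlas.
mean, var = fuse(300.0, 200.0 ** 2, 800.0, 100.0 ** 2)
print(mean, var)
```

    The same precision weighting explains the paper's error reductions: wherever one source of evidence is uninformative (e.g. ambiguous MR intensity in bone), the other dominates the posterior.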

  5. Passive acoustic monitoring of beaked whale densities in the Gulf of Mexico

    PubMed Central

    Hildebrand, John A.; Baumann-Pickering, Simone; Frasier, Kaitlin E.; Trickey, Jennifer S.; Merkens, Karlina P.; Wiggins, Sean M.; McDonald, Mark A.; Garrison, Lance P.; Harris, Danielle; Marques, Tiago A.; Thomas, Len

    2015-01-01

    Beaked whales are deep diving elusive animals, difficult to census with conventional visual surveys. Methods are presented for the density estimation of beaked whales, using passive acoustic monitoring data collected at sites in the Gulf of Mexico (GOM) from the period during and following the Deepwater Horizon oil spill (2010–2013). Beaked whale species detected include: Gervais’ (Mesoplodon europaeus), Cuvier’s (Ziphius cavirostris), Blainville’s (Mesoplodon densirostris) and an unknown species of Mesoplodon sp. (designated as Beaked Whale Gulf — BWG). For Gervais’ and Cuvier’s beaked whales, we estimated weekly animal density using two methods, one based on the number of echolocation clicks, and another based on the detection of animal groups during 5 min time-bins. Density estimates derived from these two methods were in good general agreement. At two sites in the western GOM, Gervais’ beaked whales were present throughout the monitoring period, but Cuvier’s beaked whales were present only seasonally, with periods of low density during the summer and higher density in the winter. At an eastern GOM site, both Gervais’ and Cuvier’s beaked whales had a high density throughout the monitoring period. PMID:26559743

  6. Titan Density Reconstruction Using Radiometric and Cassini Attitude Control Flight Data

    NASA Technical Reports Server (NTRS)

    Andrade, Luis G., Jr.; Burk, Thomas A.

    2015-01-01

    This paper compares three different methods of Titan atmospheric density reconstruction for the Titan 87 Cassini flyby. T87 was a unique flyby that provided independent Doppler radiometric measurements on the ground throughout the flyby including at Titan closest approach. At the same time, the onboard accelerometer provided an independent estimate of atmospheric drag force and density during the flyby. These results are compared with the normal method of reconstructing atmospheric density using thruster on-time and angular momentum accumulation. Differences between the estimates are analyzed and a possible explanation for the differences is evaluated.
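
    All three reconstruction routes ultimately rest on the aerodynamic drag relation F = ½ρCdAv², solved for density. The spacecraft numbers below are illustrative assumptions, not Cassini flight values:

```python
# Back-of-the-envelope density reconstruction from a measured drag deceleration:
#   F_drag = 0.5 * rho * Cd * A * v^2   =>   rho = 2 * m * a_drag / (Cd * A * v^2)
# All inputs are hypothetical placeholders, not actual Cassini/T87 quantities.
def density_from_drag(mass_kg, a_drag_m_s2, cd, area_m2, speed_m_s):
    return 2.0 * mass_kg * a_drag_m_s2 / (cd * area_m2 * speed_m_s ** 2)

rho = density_from_drag(mass_kg=2000.0, a_drag_m_s2=1.0e-4,
                        cd=2.1, area_m2=20.0, speed_m_s=6000.0)
print(rho)  # atmospheric density in kg/m^3
```

    The three methods in the paper differ only in how the drag acceleration is inferred: from onboard accelerometer data, from Doppler radiometric tracking, or from thruster on-time and accumulated angular momentum.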

  7. A comparative study of volumetric breast density estimation in digital mammography and magnetic resonance imaging: results from a high-risk population

    NASA Astrophysics Data System (ADS)

    Kontos, Despina; Xing, Ye; Bakic, Predrag R.; Conant, Emily F.; Maidment, Andrew D. A.

    2010-03-01

    We performed a study to compare methods for volumetric breast density estimation in digital mammography (DM) and magnetic resonance imaging (MRI) for a high-risk population of women. DM and MRI images of the unaffected breast from 32 women with recently detected abnormalities and/or previously diagnosed breast cancer (age range 31-78 yrs, mean 50.3 yrs) were retrospectively analyzed. DM images were analyzed using QuantraTM (Hologic Inc). The MRI images were analyzed using a fuzzy-C-means segmentation algorithm on the T1 map. Both methods were compared to Cumulus (Univ. Toronto). Volumetric breast density estimates from DM and MRI are highly correlated (r=0.90, p<=0.001). The correlation between the volumetric and the area-based density measures is lower and depends on the training background of the Cumulus software user (r=0.73-84, p<=0.001). In terms of absolute values, MRI provides the lowest volumetric estimates (mean=14.63%), followed by the DM volumetric (mean=22.72%) and area-based measures (mean=29.35%). The MRI estimates of the fibroglandular volume are statistically significantly lower than the DM estimates for women with very low-density breasts (p<=0.001). We attribute these differences to potential partial volume effects in MRI and differences in the computational aspects of the image analysis methods in MRI and DM. The good correlation between the volumetric and the area-based measures, shown to correlate with breast cancer risk, suggests that both DM and MRI volumetric breast density measures can aid in breast cancer risk assessment. Further work is underway to fully-investigate the association between volumetric breast density measures and breast cancer risk.

  8. Numerical simulation of inductive method for determining spatial distribution of critical current density

    NASA Astrophysics Data System (ADS)

    Kamitani, A.; Takayama, T.; Tanaka, A.; Ikuno, S.

    2010-11-01

    The inductive method for measuring the critical current density jC in a high-temperature superconducting (HTS) thin film has been investigated numerically. In order to simulate the method, a non-axisymmetric numerical code has been developed for analyzing the time evolution of the shielding current density. In the code, the governing equation of the shielding current density is spatially discretized with the finite element method and the resulting first-order ordinary differential system is solved by using the 5th-order Runge-Kutta method with an adaptive step-size control algorithm. By using the code, the threshold current IT is evaluated for various positions of a coil. The results of computations show that, near a film edge, the accuracy of the estimating formula for jC is remarkably degraded. Moreover, even the proportional relationship between jC and IT will be lost there. Hence, the critical current density near a film edge cannot be estimated by using the inductive method.
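
    The time integration described (a first-order ODE system advanced by a 5th-order Runge-Kutta scheme with adaptive step-size control) corresponds to what SciPy's RK45 solver provides. The toy relaxation equation below merely stands in for the actual finite-element shielding-current system, which is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stand-in for the discretized shielding-current system: a linear
# relaxation dI/dt = -(I - I_ext(t)) / tau toward a sinusoidal coil drive.
tau = 0.1

def rhs(t, y):
    i_ext = np.sin(2 * np.pi * t)   # driving coil current (illustrative)
    return -(y - i_ext) / tau

# RK45 = Dormand-Prince 5(4): 5th-order steps with adaptive step-size
# control governed by the rtol/atol error tolerances.
sol = solve_ivp(rhs, (0.0, 1.0), [0.0], method="RK45",
                rtol=1e-8, atol=1e-10)
```

    For this linear test equation the exact solution is available, so the adaptive solver's accuracy can be checked directly; the paper's system replaces the scalar right-hand side with the FEM-discretized governing equation.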

  9. A Spatio-Temporally Explicit Random Encounter Model for Large-Scale Population Surveys

    PubMed Central

    Jousimo, Jussi; Ovaskainen, Otso

    2016-01-01

    Random encounter models can be used to estimate population abundance from indirect data collected by non-invasive sampling methods, such as track counts or camera-trap data. The classical Formozov–Malyshev–Pereleshin (FMP) estimator converts track counts into an estimate of mean population density, assuming that data on the daily movement distances of the animals are available. We utilize generalized linear models with spatio-temporal error structures to extend the FMP estimator into a flexible Bayesian modelling approach that estimates not only total population size, but also spatio-temporal variation in population density. We also introduce a weighting scheme to estimate density on habitats that are not covered by survey transects, assuming that movement data on a subset of individuals is available. We test the performance of spatio-temporal and temporal approaches by a simulation study mimicking the Finnish winter track count survey. The results illustrate how the spatio-temporal modelling approach is able to borrow information from observations made on neighboring locations and times when estimating population density, and that spatio-temporal and temporal smoothing models can provide improved estimates of total population size compared to the FMP method. PMID:27611683
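
    The classical FMP estimator that the Bayesian model extends has a closed form: density is (pi/2) times the number of track crossings, divided by transect length times the daily movement distance. A direct sketch, with made-up numbers:

```python
import math

def fmp_density(crossings, transect_km, day_range_km, days=1.0):
    """Formozov-Malyshev-Pereleshin track-count density (animals/km^2):
    D = (pi/2) * x / (L * d * t), with x track crossings accumulated over
    t days, transect length L (km), and daily movement distance d (km)."""
    return (math.pi / 2.0) * crossings / (transect_km * day_range_km * days)

# Illustrative: 24 crossings on a 10 km transect, 3 km daily movement
d = fmp_density(crossings=24, transect_km=10.0, day_range_km=3.0)
```

    The paper's contribution is to embed this conversion in a generalized linear model with spatio-temporal error structure rather than to apply it once per survey.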

  10. Bird population density estimated from acoustic signals

    USGS Publications Warehouse

    Dawson, D.K.; Efford, M.G.

    2009-01-01

    Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts: distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that jointly addresses the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha-1, SE 0.03 ha-1) were consistent with estimates of the adult male population density from mist-netting (0.36 ha-1, SE 0.12 ha-1). The fitted model predicts sound attenuation of 0.11 dB m-1 (SE 0.01 dB m-1) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and that all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known. We believe these requirements can be met, with suitable field methods, for a significant number of songbird species. © 2009 British Ecological Society.
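
    The excess-attenuation model above (spherical spreading plus a linear dB/m term) is linear in the unknowns once the spreading loss is moved to the left-hand side, so both source level and the attenuation coefficient can be recovered by least squares. Synthetic data below, not the ovenbird measurements:

```python
import numpy as np

# Received level model: RL = SL - 20*log10(r) - beta*r, with beta the
# excess attenuation in dB/m beyond spherical spreading. Synthetic data
# generated with beta = 0.11 dB/m (the value the fitted model predicted).
rng = np.random.default_rng(0)
r = rng.uniform(20.0, 120.0, size=200)          # source-microphone ranges, m
sl_true, beta_true = 95.0, 0.11
rl = sl_true - 20 * np.log10(r) - beta_true * r + rng.normal(0, 0.5, r.size)

# Move the known spreading term left: rl + 20*log10(r) = SL - beta*r,
# which is linear in (SL, beta).
y = rl + 20 * np.log10(r)
A = np.column_stack([np.ones_like(r), -r])
(sl_hat, beta_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
```

    The SECR extension in the paper estimates attenuation jointly with density rather than in a separate regression; this sketch only shows why the attenuation parameter is identifiable from received levels at known ranges.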

  11. Regression-assisted deconvolution.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-06-30

    We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.
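
    A minimal sketch of the idea, using a linear mean model with homoscedastic normal errors and a known measurement-error variance (simplifications relative to the paper's mean-variance function regression model): regressing the error-prone W = X + U on the covariate Z still recovers the mean function, because U is independent noise, and the density of X is then a normal mixture centered at the fitted values.

```python
import numpy as np

# Simulate: X = 1 + 2*Z + eps, eps ~ N(0, s2); observed W = X + U with
# U ~ N(0, s2_u). s2_u is assumed known here (an assumption of the sketch).
rng = np.random.default_rng(1)
n, s2, s2_u = 2000, 0.25, 0.5
z = rng.normal(size=n)
x = 1.0 + 2.0 * z + rng.normal(0, np.sqrt(s2), n)
w = x + rng.normal(0, np.sqrt(s2_u), n)

b, a = np.polyfit(z, w, 1)                 # slope, intercept of W on Z
resid_var = np.var(w - (a + b * z), ddof=2)
s2_hat = max(resid_var - s2_u, 1e-6)       # subtract known error variance

def f_x(grid):
    """Estimated density of X: average of N(a + b*z_i, s2_hat) kernels."""
    mu = a + b * z
    g = np.subtract.outer(grid, mu)
    return np.exp(-0.5 * g**2 / s2_hat).mean(axis=1) / np.sqrt(2 * np.pi * s2_hat)
```

    The gain over a naive kernel density estimate of W is that the mixture uses the (measurement-error-free) regression structure, so its spread reflects s2 rather than s2 + s2_u.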

  12. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.

    PubMed

    Han, Qiyang; Wellner, Jon A

    2016-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature of the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related to the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.


  14. Item Response Theory with Estimation of the Latent Density Using Davidian Curves

    ERIC Educational Resources Information Center

    Woods, Carol M.; Lin, Nan

    2009-01-01

    Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…

  15. Stochastic sediment property inversion in Shallow Water 06.

    PubMed

    Michalopoulou, Zoi-Heleni

    2017-11-01

    Time series received at a short distance from the source allow the identification of distinct paths; four of these are the direct path, the surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used to estimate the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed, along with linearization, to estimate source range and depth, water column depth, and sound speed in the water. Propagating the arrival-time densities through the linearized inverse problem yields densities for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because those densities express the uncertainty in the inversion for sediment properties.

  16. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation☆

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
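
    The pdf-estimation step above relies on multivariate kernel density estimators. SciPy's `gaussian_kde` illustrates the mechanics on a synthetic bimodal sample; the data here are synthetic, not MEG source estimates.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Clearly non-Gaussian (bimodal) 2-D sample: two clusters at x = -2, +2
rng = np.random.default_rng(2)
a = rng.normal([-2.0, 0.0], 0.5, size=(500, 2))
b = rng.normal([2.0, 0.0], 0.5, size=(500, 2))
data = np.vstack([a, b]).T          # gaussian_kde expects shape (d, n)

kde = gaussian_kde(data)            # bandwidth from Scott's rule by default

# A Gaussian fit to these data would peak at the saddle (0, 0); the
# kernel estimate instead recovers the two modes.
p_mode = kde([[-2.0], [0.0]])[0]
p_saddle = kde([[0.0], [0.0]])[0]
```

    In the paper, a density estimated this way over source amplitudes replaces the second-order (Gaussian) model underlying the LCMV beamformer.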

  17. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.

  18. Calculation of the time resolution of the J-PET tomograph using kernel density estimation

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    2017-06-01

    In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.

  19. Fast clustering using adaptive density peak detection.

    PubMed

    Wang, Xiao-Feng; Xu, Yifan

    2017-12-01

    Common limitations of clustering methods include slow algorithm convergence, instability in the pre-specification of a number of intrinsic parameters, and lack of robustness to outliers. A recent clustering approach proposed a fast search algorithm for cluster centers based on their local densities. However, the selection of the key intrinsic parameters in the algorithm was not systematically investigated. It is relatively difficult to estimate the "optimal" parameters since the original definition of the local density in the algorithm is based on a truncated counting measure. In this paper, we propose a clustering procedure with adaptive density peak detection, where the local density is estimated through nonparametric multivariate kernel estimation. The model parameter can then be calculated from equations with statistical theoretical justification. We also develop an automatic cluster centroid selection method through maximizing an average silhouette index. The advantage and flexibility of the proposed method are demonstrated through simulation studies and the analysis of a few benchmark gene expression data sets. The method needs to perform only a single step without any iteration and thus is fast, with great potential for application to big-data analysis. A user-friendly R package, ADPclust, is developed for public use.
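
    The decision-graph idea behind density-peak clustering: pair each point's local density (here kernel-estimated, as the paper proposes, instead of the original truncated count) with its distance to the nearest point of higher density, and take points large in both as centers. A compact sketch with synthetic data; the final nearest-center assignment is a simplification of the published allocation rule.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Two well-separated synthetic clusters
rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(-3.0, 0.4, (100, 2)),
                 rng.normal(3.0, 0.4, (100, 2))])

dens = gaussian_kde(pts.T)(pts.T)         # smooth kernel density per point
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)

# delta_i: distance to the nearest point of higher density (the global
# density maximum gets the largest distance by convention)
delta = np.empty(len(pts))
for i in range(len(pts)):
    higher = dens > dens[i]
    delta[i] = dist[i, higher].min() if higher.any() else dist[i].max()

centers = np.argsort(dens * delta)[-2:]   # two points largest in both
labels = np.argmin(dist[:, centers], axis=1)   # simplified assignment
```

    One pass, no iteration: densities, deltas, and assignments are each computed once, which is the source of the speed claim in the abstract.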

  20. Inventory-based estimates of forest biomass carbon stocks in China: A comparison of three methods

    Treesearch

    Zhaodi Guo; Jingyun Fang; Yude Pan; Richard Birdsey

    2010-01-01

    Several studies have reported different estimates for forest biomass carbon (C) stocks in China. The discrepancy among these estimates may be largely attributed to the methods used. In this study, we used three methods [mean biomass density method (MBM), mean ratio method (MRM), and continuous biomass expansion factor (BEF) method (abbreviated as CBM)] applied to...

  1. Measuring atmospheric density using GPS-LEO tracking data

    NASA Astrophysics Data System (ADS)

    Kuang, D.; Desai, S.; Sibthorpe, A.; Pi, X.

    2014-01-01

    We present a method to estimate the total neutral atmospheric density from precise orbit determination of Low Earth Orbit (LEO) satellites. We derive the total atmospheric density by determining the drag force acting on the LEOs through centimeter-level reduced-dynamic precise orbit determination (POD) using onboard Global Positioning System (GPS) tracking data. The precision of the estimated drag accelerations is assessed using various metrics, including differences between estimated along-track accelerations from consecutive 30-h POD solutions that overlap by 6 h, comparison of the resulting accelerations with accelerometer measurements, and comparison against an existing atmospheric density model, DTM-2000. We apply the method to GPS tracking data from the CHAMP, GRACE, SAC-C, Jason-2, TerraSAR-X and COSMIC satellites, spanning 12 years (2001-2012) and covering orbital heights from 400 km to 1300 km. Errors in the estimates, including those introduced by deficiencies in other modeled forces (such as solar radiation pressure and Earth radiation pressure), are evaluated, and the signal and noise levels for each satellite are analyzed. The estimated density data from CHAMP, GRACE, SAC-C and TerraSAR-X are identified as having high signal and low noise levels. These data all have high correlations with a nominal atmospheric density model and show common features in relative residuals with respect to the nominal model in the related parameter space. By contrast, the estimated density data from COSMIC and Jason-2 show errors larger than the actual signal at the corresponding altitudes and thus have little practical value for this study. The results demonstrate that this method is applicable to data from a variety of missions and can provide useful total neutral density measurements for atmospheric studies up to altitudes as high as 715 km, with precision and resolution between those derived from traditional special orbital perturbation analysis and those obtained from onboard accelerometers.

  2. A log-linear model approach to estimation of population size using the line-transect sampling method

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Crain, B.R.

    1978-01-01

    The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nestling sites in marshes, and is being used currently in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.
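
    For comparison with the log-linear approach, the basic line-transect identity: with a fitted detection function g(x) of perpendicular distance x, density is the number of detections divided by twice the transect length times the effective strip half-width, the integral of g. A half-normal g is assumed here purely for illustration; it is not the estimator the paper develops.

```python
import math

def halfnormal_density(n_detections, transect_km, sigma_km):
    """Line-transect density D = n / (2 * L * mu), animals per km^2,
    with a half-normal detection function g(x) = exp(-x^2 / (2 sigma^2))
    whose effective strip half-width is mu = sigma * sqrt(pi / 2)."""
    mu = sigma_km * math.sqrt(math.pi / 2.0)
    return n_detections / (2.0 * transect_km * mu)

# Illustrative: 60 detections on 20 km of transect, sigma = 50 m
d = halfnormal_density(n_detections=60, transect_km=20.0, sigma_km=0.05)
```

    Log-linear models of the kind the paper proposes replace the parametric g with a model fitted to grouped distance data, but the final conversion to density has this same form.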

  3. Camera traps and activity signs to estimate wild boar density and derive abundance indices.

    PubMed

    Massei, Giovanna; Coats, Julia; Lambert, Mark Simon; Pietravalle, Stephane; Gill, Robin; Cowan, Dave

    2018-04-01

    Populations of wild boar and feral pigs are increasing worldwide, in parallel with their significant environmental and economic impact. Reliable methods of monitoring trends and estimating abundance are needed to measure the effects of interventions on population size. The main aims of this study, carried out in five English woodlands, were: (i) to compare wild boar abundance indices obtained from camera trap surveys and from activity signs; and (ii) to assess the precision of density estimates in relation to different densities of camera traps. For each woodland, we calculated a passive activity index (PAI) based on camera trap surveys, rooting activity and wild boar trails on transects, and estimated absolute densities based on camera trap surveys. PAIs obtained using different methods showed similar patterns. We found significant between-year differences in abundance of wild boar using PAIs based on camera trap surveys and on trails on transects, but not on signs of rooting on transects. The density of wild boar from camera trap surveys varied between 0.7 and 7 animals/km2. Increasing the density of camera traps above nine per km2 did not increase the precision of the estimate of wild boar density. PAIs based on number of wild boar trails and on camera trap data appear to be more sensitive to changes in population size than PAIs based on signs of rooting. For wild boar densities similar to those recorded in this study, nine camera traps per km2 are sufficient to estimate the mean density of wild boar. © 2017 Crown copyright. Pest Management Science © 2017 Society of Chemical Industry.
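
    Camera-trap densities of the kind reported here are commonly derived with the random encounter model of Rowcliffe et al.; the abstract does not state which formula this study used, so the sketch below, with made-up parameter values, is illustrative of the general approach only.

```python
import math

def rem_density(detections, camera_days, day_range_km,
                det_radius_km, det_angle_rad):
    """Random encounter model density (animals/km^2):
    D = (y / t) * pi / (v * r * (2 + theta)), with detection rate y/t
    (per camera-day), daily movement distance v (km/day), and the
    camera's detection radius r (km) and angle theta (radians)."""
    rate = detections / camera_days
    return rate * math.pi / (day_range_km * det_radius_km *
                             (2.0 + det_angle_rad))

# Hypothetical survey: 120 detections over 900 camera-days, 4 km/day
# movement, 12 m detection radius, 40-degree (0.7 rad) detection angle
d = rem_density(detections=120, camera_days=900, day_range_km=4.0,
                det_radius_km=0.012, det_angle_rad=0.7)
```

    Note that the movement parameter enters the denominator, so REM estimates inherit any bias in the assumed day range, one reason the study compares camera-based estimates against independent activity indices.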

  4. Item Response Theory with Estimation of the Latent Population Distribution Using Spline-Based Densities

    ERIC Educational Resources Information Center

    Woods, Carol M.; Thissen, David

    2006-01-01

    The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

  5. Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions.

    PubMed

    Najibi, Seyed Morteza; Maadooliat, Mehdi; Zhou, Lan; Huang, Jianhua Z; Gao, Xin

    2017-01-01

    Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric splines, which are more efficient than existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective on two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.

  6. Empirical methods in the evaluation of estimators

    Treesearch

    Gerald S. Walton; C.J. DeMars; C.J. DeMars

    1973-01-01

    The authors discuss the problem of selecting estimators of density and survival by making use of data on a forest-defoliating larva, the spruce budworm. Various estimators are compared. The results show that, among the estimators considered, ratio-type estimators are superior in terms of bias and variance. The methods used in making comparisons, particularly simulation...

  7. Multivariate Density Estimation and Remote Sensing

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1983-01-01

    Current efforts to develop methods and computer algorithms to effectively represent multivariate data commonly encountered in remote sensing applications are described. While this may involve scatter diagrams, multivariate representations of nonparametric probability density estimates are emphasized. The density function provides a useful graphical tool for looking at data and a useful theoretical tool for classification. This approach is called a thunderstorm data analysis.

  8. 3D depth-to-basement and density contrast estimates using gravity and borehole data

    NASA Astrophysics Data System (ADS)

    Barbosa, V. C.; Martins, C. M.; Silva, J. B.

    2009-05-01

    We present a gravity inversion method for simultaneously estimating the 3D basement relief of a sedimentary basin and the parameters defining the parabolic decay of the density contrast with depth in a sedimentary pack assuming the prior knowledge about the basement depth at a few points. The sedimentary pack is approximated by a grid of 3D vertical prisms juxtaposed in both horizontal directions, x and y, of a right-handed coordinate system. The prisms' thicknesses represent the depths to the basement and are the parameters to be estimated from the gravity data. To produce stable depth-to-basement estimates we impose smoothness on the basement depths through minimization of the spatial derivatives of the parameters in the x and y directions. To estimate the parameters defining the parabolic decay of the density contrast with depth we mapped a functional containing prior information about the basement depths at a few points. We apply our method to synthetic data from a simulated complex 3D basement relief with two sedimentary sections having distinct parabolic laws describing the density contrast variation with depth. Our method retrieves the true parameters of the parabolic law of density contrast decay with depth and produces good estimates of the basement relief if the number and the distribution of boreholes are sufficient. We also applied our method to real gravity data from the onshore and part of the shallow offshore Almada Basin, on Brazil's northeastern coast. The estimated 3D Almada's basement shows geologic structures that cannot be easily inferred just from the inspection of the gravity anomaly. The estimated Almada relief presents steep borders evidencing the presence of gravity faults. Also, we note the existence of three terraces separating two local subbasins. 
These geologic features are consistent with Almada's geodynamic origin (the Mesozoic breakup of Gondwana and the opening of the South Atlantic Ocean) and they are important in understanding the basin evolution and in detecting structural oil traps.

  9. Estimates of evapotranspiration in alkaline scrub and meadow communities of Owens Valley, California, using the Bowen-ratio, eddy-correlation, and Penman-combination methods

    USGS Publications Warehouse

    Duell, L. F. W.

    1988-01-01

    In Owens Valley, evapotranspiration (ET) is one of the largest components of outflow in the hydrologic budget and the least understood. ET estimates for December 1983 through October 1985 were made for seven representative locations selected on the basis of geohydrology and the characteristics of phreatophytic alkaline scrub and meadow communities. The Bowen-ratio, eddy-correlation, and Penman-combination methods were used to estimate ET. The results of the analyses appear satisfactory when compared to other estimates of ET. Results from the eddy-correlation method are for a direct and a residual latent-heat flux that is based on sensible-heat flux and energy-budget measurements. Penman-combination potential-ET estimates were determined to be unusable because they overestimated actual ET. Modification of the psychrometer constant in this method to account for differences between heat-diffusion resistance and vapor-diffusion resistance permitted actual ET to be estimated. The methods may be used for studies in similar semiarid and arid rangeland areas in the Western United States. Meteorological data for three field sites are included in the appendix. Simple linear regression analysis indicates that ET estimates are correlated with air temperature, vapor-density deficit, and net radiation. Estimates of annual ET range from 300 mm at a low-density scrub site to 1,100 mm at a high-density meadow site. The monthly percentage of annual ET was determined to be similar for all sites studied. (Author's abstract)
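
    The Bowen-ratio method partitions available energy (net radiation minus soil heat flux) between sensible and latent heat using the ratio B = gamma * dT/de measured from gradients between two heights; ET follows from the latent heat flux. The function is a standard textbook form and the values below are illustrative, not Owens Valley measurements.

```python
GAMMA = 0.066   # psychrometric constant, kPa per degC (near sea level)
LV = 2.45e6     # latent heat of vaporization, J/kg

def bowen_ratio_et(rn, g, dtemp, dvap):
    """Instantaneous ET rate (kg/m^2/s, i.e. mm/s of water) from net
    radiation rn and soil heat flux g (W/m^2), and the air-temperature
    (degC) and vapor-pressure (kPa) differences between two heights."""
    bowen = GAMMA * dtemp / dvap          # Bowen ratio B = H / LE
    le = (rn - g) / (1.0 + bowen)         # latent heat flux, W/m^2
    return le / LV

# Illustrative midday conditions over a meadow site
et = bowen_ratio_et(rn=450.0, g=50.0, dtemp=1.2, dvap=0.4)
```

    Integrating such instantaneous rates through the day and year is what yields annual totals of the order reported in the abstract (hundreds of mm).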

  10. A novel technique for real-time estimation of edge pedestal density gradients via reflectometer time delay data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.

    2016-11-15

    A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary-mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and the Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.

  11. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection.

    PubMed

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.
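    As a rough illustration of the core idea, piecewise-constant density estimates from fully randomized space trees averaged over a forest, here is a toy one-dimensional sketch. It is not the authors' RS-Forest implementation and omits the streaming-update and dual-node-profile machinery the abstract describes:

    ```python
    import random

    # Toy 1-D analogue of a randomized-space-tree density forest (illustrative
    # only; RS-Forest itself handles multivariate streams and online updates).

    def build_tree(lo, hi, depth, rng):
        """Fully randomized tree over [lo, hi): random cut points, data-independent."""
        if depth == 0:
            return ("leaf", lo, hi)
        cut = rng.uniform(lo, hi)
        return ("split", cut,
                build_tree(lo, cut, depth - 1, rng),
                build_tree(cut, hi, depth - 1, rng))

    def leaf_interval(tree, x):
        """Descend to the leaf interval containing x."""
        while tree[0] == "split":
            _, cut, left, right = tree
            tree = left if x < cut else right
        return tree[1], tree[2]

    def forest_density(forest, data, x):
        """Average the per-tree piecewise-constant estimates count / (n * width)."""
        n = len(data)
        total = 0.0
        for tree in forest:
            lo, hi = leaf_interval(tree, x)
            count = sum(lo <= v < hi for v in data)
            total += count / (n * (hi - lo))
        return total / len(forest)

    rng = random.Random(42)
    data = [rng.random() for _ in range(1000)]            # uniform on [0, 1)
    forest = [build_tree(0.0, 1.0, 5, rng) for _ in range(30)]
    estimate = forest_density(forest, data, 0.5)          # expected near 1.0
    ```

    Scoring each incoming instance by such an averaged density, and flagging low-density instances, is the anomaly-detection step the paper builds on top of this estimator.
    
    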

  12. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection

    PubMed Central

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.

    2015-01-01

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request. PMID:25685112

  13. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates

    PubMed Central

    Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke

    2016-01-01

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a ‘control’ and ‘treatment’ survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p > 0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28–9.28 leopards/100km2) were considerably higher than estimates from spatially explicit methods (3.40–3.65 leopards/100km2). The precision of estimates from the control and treatment surveys was also comparable, for both non-spatial and spatial methods of estimation. Our findings suggest that, at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816

  14. Impact of density information on Rayleigh surface wave inversion results

    NASA Astrophysics Data System (ADS)

    Ivanov, Julian; Tsoflias, Georgios; Miller, Richard D.; Peterie, Shelby; Morton, Sarah; Xia, Jianghai

    2016-12-01

    We assessed the impact of density on the estimation of inverted shear-wave velocity (Vs) using the multi-channel analysis of surface waves (MASW) method. We considered the forward modeling theory, evaluated model sensitivity, and tested the effect of density information on the inversion of seismic data acquired in the Arctic. Theoretical review, numerical modeling and inversion of modeled and real data indicated that the density ratios between layers, not the actual density values, impact the determination of surface-wave phase velocities. Application to real data compared surface-wave inversion results using: a) constant density, the most common approach in practice; b) indirect density estimates derived from refraction compressional-wave velocity observations; and c) direct density measurements in a borehole. The use of indirect density estimates reduced the final shear-wave velocity (Vs) results typically by 6-7%, and the use of densities from a borehole reduced the final Vs estimates by 10-11%, compared to those from assumed constant density. In addition to the improved absolute Vs accuracy, the resulting overall Vs changes were unevenly distributed laterally when viewed on a 2-D section, leading to an overall Vs model structure that was more representative of the subsurface environment. It was observed that the use of constant density instead of density increasing with depth can not only lead to Vs overestimation but also create inaccurate model structures, such as a spurious low-velocity layer. Thus, optimal Vs estimation is best achieved using field estimates of subsurface density ratios.

  15. Seasonal Variability in Global Eddy Diffusion and the Effect on Thermospheric Neutral Density

    NASA Astrophysics Data System (ADS)

    Pilinski, M.; Crowley, G.

    2014-12-01

    We describe a method for making single-satellite estimates of the seasonal variability in global-average eddy diffusion coefficients. Eddy diffusion values as a function of time between January 2004 and January 2008 were estimated from residuals of neutral density measurements made by the CHAllenging Minisatellite Payload (CHAMP) and simulations made using the Thermosphere-Ionosphere-Mesosphere Electrodynamics General Circulation Model (TIME-GCM). The eddy diffusion coefficient results are quantitatively consistent with previous estimates based on satellite drag observations and are qualitatively consistent with other measurement methods such as sodium lidar observations and eddy-diffusivity models. The eddy diffusion coefficient values estimated between January 2004 and January 2008 were then used to generate new TIME-GCM results. Based on these results, the RMS difference between the TIME-GCM model and density data from a variety of satellites is reduced by an average of 5%. This result indicates that global thermospheric density modeling can be improved by using data from a single satellite like CHAMP. This approach also demonstrates how eddy diffusion could be estimated in near real-time from satellite observations and used to drive a global circulation model like TIME-GCM. Although the use of global values improves modeled neutral densities, there are some limitations of this method, which are discussed, including that the latitude dependence of the seasonal neutral-density signal is not completely captured by a global variation of eddy diffusion coefficients. This demonstrates the need for a latitude-dependent specification of eddy diffusion consistent with diffusion observations made by other techniques.

  16. Seasonal variability in global eddy diffusion and the effect on neutral density

    NASA Astrophysics Data System (ADS)

    Pilinski, M. D.; Crowley, G.

    2015-04-01

    We describe a method for making single-satellite estimates of the seasonal variability in global-average eddy diffusion coefficients. Eddy diffusion values as a function of time were estimated from residuals of neutral density measurements made by the Challenging Minisatellite Payload (CHAMP) and simulations made using the thermosphere-ionosphere-mesosphere electrodynamics global circulation model (TIME-GCM). The eddy diffusion coefficient results are quantitatively consistent with previous estimates based on satellite drag observations and are qualitatively consistent with other measurement methods such as sodium lidar observations and eddy diffusivity models. Eddy diffusion coefficient values estimated between January 2004 and January 2008 were then used to generate new TIME-GCM results. Based on these results, the root-mean-square sum for the TIME-GCM model is reduced by an average of 5% when compared to density data from a variety of satellites, indicating that the fidelity of global density modeling can be improved by using data from a single satellite like CHAMP. This approach also demonstrates that eddy diffusion could be estimated in near real-time from satellite observations and used to drive a global circulation model like TIME-GCM. Although the use of global values improves modeled neutral densities, there are limitations to this method, which are discussed, including that the latitude dependence of the seasonal neutral-density signal is not completely captured by a global variation of eddy diffusion coefficients. This demonstrates the need for a latitude-dependent specification of eddy diffusion which is also consistent with diffusion observations made by other techniques.

  17. Mark-recapture using tetracycline and genetics reveal record-high bear density

    USGS Publications Warehouse

    Peacock, E.; Titus, K.; Garshelis, D.L.; Peacock, M.M.; Kuc, M.

    2011-01-01

    We used tetracycline biomarking, augmented with genetic methods to estimate the size of an American black bear (Ursus americanus) population on an island in Southeast Alaska. We marked 132 and 189 bears that consumed remote, tetracycline-laced baits in 2 different years, respectively, and observed 39 marks in 692 bone samples subsequently collected from hunters. We genetically analyzed hair samples from bait sites to determine the sex of marked bears, facilitating derivation of sex-specific population estimates. We obtained harvest samples from beyond the study area to correct for emigration. We estimated a density of 155 independent bears/100 km2, which is equivalent to the highest recorded for this species. This high density appears to be maintained by abundant, accessible natural food. Our population estimate (approx. 1,000 bears) could be used as a baseline and to set hunting quotas. The refined biomarking method for abundance estimation is a useful alternative where physical captures or DNA-based estimates are precluded by cost or logistics. Copyright © 2011 The Wildlife Society.
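    The biomarking design above is a mark-recapture study at heart. The simplest abundance estimator in this family is Chapman's bias-corrected Lincoln-Petersen formula, sketched below with hypothetical counts; the study's actual estimator was sex-specific and corrected for emigration, so this sketch does not reproduce its approx. 1,000-bear result:

    ```python
    # Chapman's bias-corrected Lincoln-Petersen estimator (standard formula);
    # the counts used below are invented for illustration.

    def chapman_estimate(n_marked, n_sampled, n_recaptured):
        """Abundance estimate N_hat = (M + 1)(C + 1) / (R + 1) - 1,
        where M animals were marked, C were later sampled, and R of the
        sampled animals carried marks."""
        return (n_marked + 1) * (n_sampled + 1) / (n_recaptured + 1) - 1

    # Hypothetical counts: 200 marked, 500 sampled later, 40 carried marks.
    n_hat = chapman_estimate(200, 500, 40)   # about 2455 animals
    ```

    The +1 terms reduce the small-sample bias of the plain M*C/R ratio, which matters when recapture counts are low, as with the 39 marks recovered here.
    
    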

  18. A method to estimate the population level of Aceria litchii (Prostigmata: Eriophyidae) and a study of the population dynamics of this species and its predators on litchi trees in southern Brazil.

    PubMed

    De Azevedo, Letícia Henrique; Maeda, Enzo Yuji; Inomoto, Mário Massayuki; De Moraes, Gilberto José

    2014-02-01

    Litchi (Litchi chinensis Sonnerat) is native to Southeast Asia, where most of the world cultivation of this crop is done. Its commercial cultivation in Brazil is recent and concentrated in the state of São Paulo. This crop has been severely damaged in Asia and Brazil by the litchi erineum mite, Aceria litchii (Keifer) (Eriophyidae). The objectives of this study were the adaptation of a method to estimate the density of A. litchii, an evaluation of the population dynamics of this pest and of its associated predators in the state of São Paulo, and an estimation of its injury levels to litchi trees. To estimate the density of A. litchii, an adaptation of a method commonly used to evaluate nematode densities in plant roots was performed. This method was shown to be adequate for the estimation of the number of A. litchii, and it might also be useful for similar evaluations of other erineum forming mites. Field samples to determine the pest population dynamics were collected monthly from August 2011 to July 2012. Sampled leaves were examined under a stereomicroscope for removal of predators and subsequent extraction of A. litchii by the adapted method. A. litchii reached maximum densities in November 2011 and June 2012, being found at low densities between January and March 2012. The pattern of variation of A. litchii injury levels was similar to that of the density of A. litchii. The main predatory mite co-occurring with A. litchii was the phytoseiid Phytoseius intermedius Evans and McFarlane. However, high injury levels due to A. litchii suggest that the predator was unable to prevent visible damages to the trees, indicating that control activities should be adopted by growers.

  19. Spatial Estimation of Sub-Hour Global Horizontal Irradiance Based on Official Observations and Remote Sensors

    PubMed Central

    Gutierrez-Corea, Federico-Vladimir; Manso-Callejo, Miguel-Angel; Moreno-Regidor, María-Pilar; Velasco-Gómez, Jesús

    2014-01-01

    This study was motivated by the need to improve densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity and examining the methods of spatial GHI estimation (by interpolation) with that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. By contrast, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations). PMID:24732102

  20. Spatial estimation of sub-hour Global Horizontal Irradiance based on official observations and remote sensors.

    PubMed

    Gutierrez-Corea, Federico-Vladimir; Manso-Callejo, Miguel-Angel; Moreno-Regidor, María-Pilar; Velasco-Gómez, Jesús

    2014-04-11

    This study was motivated by the need to improve densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity and examining the methods of spatial GHI estimation (by interpolation) with that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. By contrast, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations).

  1. Influence of the volume and density functions within geometric models for estimating trunk inertial parameters.

    PubMed

    Wicke, Jason; Dumas, Genevieve A

    2010-02-01

    The geometric method combines a volume and a density function to estimate body segment parameters and has the best opportunity for developing the most accurate models. In the trunk, there are many different tissues that greatly differ in density (e.g., bone versus lung). Thus, the density function for the trunk must be particularly sensitive to capture this diversity, such that accurate inertial estimates are possible. Three different models were used to test this hypothesis by estimating trunk inertial parameters of 25 female and 24 male college-aged participants. The outcome of this study indicates that the inertial estimates for the upper and lower trunk are most sensitive to the volume function and not very sensitive to the density function. Although it appears that the uniform density function has a greater influence on inertial estimates in the lower trunk region than in the upper trunk region, this is likely due to the (overestimated) density value used. When geometric models are used to estimate body segment parameters, care must be taken in choosing a model that can accurately estimate segment volumes. Researchers wanting to develop accurate geometric models should focus on the volume function, especially in unique populations (e.g., pregnant or obese individuals).
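    The geometric method the abstract evaluates combines a volume function with a density function. A minimal sketch of that combination follows, modeling a trunk segment as stacked elliptical slices; all dimensions and densities below are invented for illustration, and real models use anthropometric volume and density functions:

    ```python
    import math

    # Hedged sketch: segment mass and center of mass from a volume function
    # (stacked elliptical slices) times a density function (per-slice rho).
    # All numeric inputs are hypothetical.

    def slice_mass(width, depth, thickness, rho):
        """Mass of one elliptical slice: ellipse area * thickness * density."""
        return math.pi * (width / 2.0) * (depth / 2.0) * thickness * rho

    def segment_mass_and_com(widths, depths, densities, slice_h):
        """Total mass and center-of-mass height of a stack of slices."""
        total = moment = 0.0
        for i, (w, d, rho) in enumerate(zip(widths, depths, densities)):
            m = slice_mass(w, d, slice_h, rho)
            z = (i + 0.5) * slice_h          # mid-height of slice i
            total += m
            moment += m * z
        return total, moment / total

    # Hypothetical segment: 10 slices of 5 cm, uniform cross-section and density.
    mass, com = segment_mass_and_com([0.30] * 10, [0.25] * 10, [1000.0] * 10, 0.05)
    ```

    The abstract's finding maps directly onto this structure: errors in the per-slice widths and depths (the volume function) propagate into mass and moment estimates more strongly than errors in the per-slice rho values (the density function).
    
    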

  2. Variability of dental cone beam CT grey values for density estimations

    PubMed Central

    Pauwels, R; Nackaerts, O; Bellaiche, N; Stamatakis, H; Tsiklakis, K; Walker, A; Bosmans, H; Bogaerts, R; Jacobs, R; Horner, K

    2013-01-01

    Objective The aim of this study was to investigate the use of dental cone beam CT (CBCT) grey values for density estimations by calculating the correlation with multislice CT (MSCT) values and the grey value error after recalibration. Methods A polymethyl methacrylate (PMMA) phantom was developed containing inserts of different density: air, PMMA, hydroxyapatite (HA) 50 mg cm−3, HA 100, HA 200 and aluminium. The phantom was scanned on 13 CBCT devices and 1 MSCT device. Correlation between CBCT grey values and CT numbers was calculated, and the average error of the CBCT values was estimated in the medium-density range after recalibration. Results Pearson correlation coefficients ranged between 0.7014 and 0.9996 in the full-density range and between 0.5620 and 0.9991 in the medium-density range. The average error of CBCT voxel values in the medium-density range was between 35 and 1562. Conclusion Even though most CBCT devices showed a good overall correlation with CT numbers, large errors can be seen when using the grey values in a quantitative way. Although it could be possible to obtain pseudo-Hounsfield units from certain CBCTs, alternative methods of assessing bone tissue should be further investigated. Advances in knowledge The suitability of dental CBCT for density estimations was assessed, involving a large number of devices and protocols. The possibility for grey value calibration was thoroughly investigated. PMID:23255537

  3. A Simultaneous Density-Integral System for Estimating Stem Profile and Biomass: Slash Pine and Willow Oak

    Treesearch

    Bernard R. Parresol; Charles E. Thomas

    1996-01-01

    In the wood utilization industry, both stem profile and biomass are important quantities. The two have traditionally been estimated separately. The introduction of a density-integral method allows for coincident estimation of stem profile and biomass, based on the calculus of mass theory, and provides an alternative to weight-ratio methodology. In the initial...

  4. Inferential determination of various properties of a gas mixture

    DOEpatents

    Morrow, Thomas B.; Behring, II, Kendricks A.

    2007-03-27

    Methods for inferentially determining various properties of a gas mixture, when the speed of sound in the gas is known at an arbitrary temperature and pressure. The method can be applied to natural gas mixtures, where the known parameters are the sound speed, temperature, pressure, and concentrations of any dilute components of the gas. The method uses a set of reference gases and their calculated density and speed of sound values to estimate the density of the subject gas. Additional calculations can be made to estimate the molecular weight of the subject gas, which can then be used as the basis for heating value calculations. The method may also be applied to inferentially determine density and molecular weight for gas mixtures other than natural gases.
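    A sketch of the inferential idea only: estimate density by interpolating among reference gases whose sound speed and density at the measurement temperature and pressure are known. The reference table below is entirely hypothetical, and the patented procedure is more elaborate (it also uses dilute-component concentrations and extends to molecular weight and heating value):

    ```python
    # Hypothetical reference gases at one temperature and pressure:
    # (speed of sound in m/s, density in kg/m^3). Values are illustrative.
    REFERENCE = [
        (390.0, 0.95),
        (420.0, 0.80),
        (450.0, 0.68),
    ]

    def infer_density(c_measured):
        """Linearly interpolate density between the bracketing reference gases."""
        pts = sorted(REFERENCE)
        for (c0, r0), (c1, r1) in zip(pts, pts[1:]):
            if c0 <= c_measured <= c1:
                t = (c_measured - c0) / (c1 - c0)
                return r0 + t * (r1 - r0)
        raise ValueError("measured sound speed outside reference range")

    rho = infer_density(405.0)   # 0.875 kg/m^3 with this table
    ```

    An inferred molecular weight could then feed a heating-value calculation, which is the chain of estimates the patent describes.
    
    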

  5. A photometric method for the estimation of the oil yield of oil shale

    USGS Publications Warehouse

    Cuttitta, Frank

    1951-01-01

    A method is presented for the distillation and photometric estimation of the oil yield of oil-bearing shales. The oil shale is distilled in a closed test tube and the oil extracted with toluene. The optical density of the toluene extract is used in the estimation of oil content and is converted to percentage of oil by reference to a standard curve. This curve is obtained by relating the oil yields determined by the Fischer assay method to the optical density of the toluene extract of the oil evolved by the new procedure. The new method gives results similar to those obtained by the Fischer assay method in a much shorter time. The applicability of the new method to oil-bearing shale and phosphatic shale has been tested.
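    The standard-curve step described above is an ordinary calibration fit: optical densities of extracts from standards with known Fischer-assay yields define a curve, which then converts new optical densities to oil percentages. A sketch with invented calibration pairs (the real curve comes from Fischer-assay data and need not be linear):

    ```python
    # Hedged sketch of a photometric standard curve; the calibration pairs
    # below are hypothetical and a straight line is assumed for illustration.

    def fit_line(xs, ys):
        """Ordinary least-squares slope and intercept."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    od  = [0.10, 0.25, 0.40, 0.55]   # optical density of toluene extract
    oil = [2.0, 5.0, 8.0, 11.0]      # Fischer-assay oil yield, percent

    slope, intercept = fit_line(od, oil)

    def oil_percent(optical_density):
        """Convert a measured optical density to estimated oil yield."""
        return slope * optical_density + intercept
    ```

    With the curve in hand, each new shale sample needs only a distillation, an extraction, and one photometer reading, which is why the method is faster than a full Fischer assay.
    
    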

  6. Wood density-moisture profiles in old-growth Douglas-fir and western hemlock.

    Treesearch

    W.Y. Pong; Dale R. Waddell; Michael B. Lambert

    1986-01-01

    Accurate estimation of the weight of each load of logs is necessary for safe and efficient aerial logging operations. The prediction of green density (lb/ft3) as a function of height is a critical element in the accurate estimation of tree bole and log weights. Two sampling methods, disk and increment core (Bergstrom xylodensimeter), were used to measure the density-...

  7. Counting Cats: Spatially Explicit Population Estimates of Cheetah (Acinonyx jubatus) Using Unstructured Sampling Data

    PubMed Central

    Broekhuis, Femke; Gopalaswamy, Arjun M.

    2016-01-01

    Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed ‘hotspots’ of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species. PMID:27135614

  8. Counting Cats: Spatially Explicit Population Estimates of Cheetah (Acinonyx jubatus) Using Unstructured Sampling Data.

    PubMed

    Broekhuis, Femke; Gopalaswamy, Arjun M

    2016-01-01

    Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed 'hotspots' of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species.

  9. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  10. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  11. DENSITY: software for analysing capture-recapture data from passive detector arrays

    USGS Publications Warehouse

    Efford, M.G.; Dawson, D.K.; Robbins, C.S.

    2004-01-01

    A general computer-intensive method is described for fitting spatial detection functions to capture-recapture data from arrays of passive detectors such as live traps and mist nets. The method is used to estimate the population density of 10 species of breeding birds sampled by mist-netting in deciduous forest at Patuxent Research Refuge, Laurel, Maryland, U.S.A., from 1961 to 1972. Total density (9.9 ± 0.6 ha-1, mean ± SE) appeared to decline over time (slope -0.41 ± 0.15 ha-1 y-1). The mean precision of annual estimates for all 10 species pooled was acceptable (CV(D) = 14%). Spatial analysis of closed-population capture-recapture data highlighted deficiencies in non-spatial methodologies. For example, effective trapping area cannot be assumed constant when detection probability is variable. Simulation may be used to evaluate alternative designs for mist net arrays where density estimation is a study goal.
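    The spatial detection functions fitted above imply an "effective" area sampled around each detector, which replaces the constant trapping area that non-spatial methods must assume. A hedged sketch of that relationship for a half-normal detection curve (parameter values are invented; DENSITY itself estimates the detection parameters from the capture-recapture data):

    ```python
    import math

    # Illustrative effective-sampling-area calculation for a half-normal
    # detection function; g0 and sigma values below are hypothetical.

    def effective_area_m2(g0, sigma_m):
        """Effective sampling area of one point detector for
        g(r) = g0 * exp(-r^2 / (2 * sigma^2)):
        integral of 2*pi*r*g(r) dr over r >= 0  =  2*pi*g0*sigma^2."""
        return 2.0 * math.pi * g0 * sigma_m ** 2

    def density_per_ha(n_detected, g0, sigma_m):
        """Individuals per hectare given detections within one effective area."""
        return n_detected / (effective_area_m2(g0, sigma_m) / 10_000.0)

    d = density_per_ha(n_detected=12, g0=0.8, sigma_m=60.0)
    ```

    Because sigma (and hence the effective area) is estimated rather than assumed, the density estimate adjusts automatically when detection probability varies, which is the deficiency of non-spatial methods the abstract points out.
    
    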

  12. Conditional Density Estimation with HMM Based Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang

    Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models carry a latent assumption that the probability density is Gaussian, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the Input-Output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each node, this model can be applied not only to regression but to classification as well. We applied this model to denoise ECG data. The proposed method has the potential to apply to other time series such as stock market return predictions.

  13. Rapid assessment of rice seed availability for wildlife in harvested fields

    USGS Publications Warehouse

    Halstead, B.J.; Miller, M.R.; Casazza, Michael L.; Coates, P.S.; Farinha, M.A.; Benjamin, Gustafson K.; Yee, J.L.; Fleskes, J.P.

    2011-01-01

    Rice seed remaining in commercial fields after harvest (waste rice) is a critical food resource for wintering waterfowl in rice-growing regions of North America. Accurate and precise estimates of the seed mass density of waste rice are essential for planning waterfowl wintering habitat extents and management. In the Sacramento Valley of California, USA, the existing method for obtaining estimates of availability of waste rice in harvested fields produces relatively precise estimates, but the labor-, time-, and machinery-intensive process is not practical for routine assessments needed to examine long-term trends in waste rice availability. We tested several experimental methods designed to rapidly derive estimates that would not be burdened with disadvantages of the existing method. We first conducted a simulation study of the efficiency of each method and then conducted field tests. For each approach, methods did not vary in root mean squared error, although some methods did exhibit bias for both simulations and field tests. Methods also varied substantially in the time to conduct each sample and in the number of samples required to detect a standard trend. Overall, modified line-intercept methods performed well for estimating the density of rice seeds. Waste rice in the straw, although not measured directly, can be accounted for by a positive relationship with density of rice on the ground. Rapid assessment of food availability is a useful tool to help waterfowl managers establish and implement wetland restoration and agricultural habitat-enhancement goals for wintering waterfowl. © 2011 The Wildlife Society.

  14. Thermospheric density estimation from SLR observations of LEO satellites - A case study with the ANDE-Pollux satellite

    NASA Astrophysics Data System (ADS)

    Blossfeld, M.; Schmidt, M.; Erdogan, E.

    2016-12-01

    The thermospheric neutral density plays a crucial role within the equation of motion of Earth orbiting objects, since drag, lift and side forces are among the largest non-gravitational perturbations acting on a satellite. Precise Orbit Determination (POD) methods can be used to estimate thermospheric density variations from the determined orbits. One method that provides highly accurate measurements of the satellite position is Satellite Laser Ranging (SLR). Within the POD process, scaling factors are estimated frequently. These scaling factors can be applied either to the so-called satellite-specific drag (ballistic) coefficients or to the integrated thermospheric neutral density. We present a method to model the drag coefficient analytically based on a few physical assumptions and key parameters. In this paper, we investigate the possibility of using SLR observations of the very low Earth orbiting satellite ANDE-Pollux (at approximately 350 km altitude) to determine scaling factors for different a priori thermospheric density models. We perform a POD for ANDE-Pollux covering 49 days between August and September 2009, the time span containing the largest number of observations during the short lifetime of the satellite. Finally, we compare the obtained scaled thermospheric densities w.r.t. each other.

  15. Estimates of evapotranspiration in alkaline scrub and meadow communities of Owens Valley, California, using the Bowen-ratio, eddy-correlation, and penman-combination methods

    USGS Publications Warehouse

    Duell, Lowell F. W.

    1990-01-01

    In Owens Valley, evapotranspiration (ET) is one of the largest components of outflow in the hydrologic budget and the least understood. ET estimates for December 1983 through October 1985 were made for seven representative locations selected on the basis of geohydrology and the characteristics of phreatophytic alkaline scrub and meadow communities. The Bowen-ratio, eddy-correlation, and Penman-combination methods were used to estimate ET. The results of the analyses appear satisfactory when compared with other estimates of ET. Results by the eddy-correlation method are for a direct and a residual latent-heat flux that is based on sensible-heat flux and energy-budget measurements. Penman-combination potential-ET estimates were determined to be unusable because they overestimated actual ET. Modification of the psychrometer constant of this method to account for differences between heat-diffusion resistance and vapor-diffusion resistance permitted actual ET to be estimated. The methods described in this report may be used for studies in similar semiarid and arid rangeland areas in the Western United States. Meteorological data for three field sites are included in the appendix of this report. Simple linear regression analysis indicates that ET estimates are correlated to air temperature, vapor-density deficit, and net radiation. Estimates of annual ET range from 301 millimeters at a low-density scrub site to 1,137 millimeters at a high-density meadow site. The monthly percentage of annual ET was determined to be similar for all sites studied.

  16. Density estimation of Yangtze finless porpoises using passive acoustic sensors and automated click train detection.

    PubMed

    Kimura, Satoko; Akamatsu, Tomonari; Li, Songhai; Dong, Shouyue; Dong, Lijun; Wang, Kexiong; Wang, Ding; Arai, Nobuaki

    2010-09-01

    A method is presented to estimate the density of finless porpoises using stationed passive acoustic monitoring. The number of click trains detected by stereo acoustic data loggers (A-tag) was converted to an estimate of the density of porpoises. First, an automated off-line filter was developed to detect a click train among noise, and the detection and false-alarm rates were calculated. Second, a density estimation model was proposed. The cue-production rate was measured by biologging experiments. The probability of detecting a cue and the area size were calculated from the source level, beam patterns, and a sound-propagation model. The effect of group size on the cue-detection rate was examined. Third, the proposed model was applied to estimate the density of finless porpoises at four locations from the Yangtze River to the inside of Poyang Lake. The estimated daily mean density of porpoises decreased from the main stream to the lake. Long-term monitoring during 466 days from June 2007 to May 2009 showed variation in the density of 0-4.79 porpoises/km^2. However, the density was fewer than 1 porpoise/km^2 during 94% of the period. These results suggest a potential gap and seasonal migration of the population in the bottleneck of Poyang Lake.
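
    The cue-count-to-density conversion this record describes reduces to simple arithmetic: discount false alarms, then divide by the expected number of cues one animal contributes and by the monitored area. The sketch below uses entirely hypothetical numbers, not values from the study:

```python
def cue_count_density(n_detected, false_alarm_rate, hours, cue_rate_per_hour,
                      detection_prob, area_km2):
    """Convert a click-train count from a fixed sensor into animals/km^2:
    discount false alarms, divide by the expected cues one animal would
    contribute over the monitoring period, and by the monitored area."""
    true_cues = n_detected * (1.0 - false_alarm_rate)
    cues_per_animal = hours * cue_rate_per_hour * detection_prob
    return true_cues / (cues_per_animal * area_km2)

# Illustrative numbers only (not from the study):
d = cue_count_density(n_detected=480, false_alarm_rate=0.1, hours=24,
                      cue_rate_per_hour=30, detection_prob=0.5, area_km2=1.2)
print(round(d, 3))  # animals per km^2
```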

  17. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.

  18. Estimating abundance of mountain lions from unstructured spatial sampling

    USGS Publications Warehouse

    Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.

    2012-01-01

    Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km^2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection, and hair samples resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability.
Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km^2 (95% CI 2.3–5.7) under the distance only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance × sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km^2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.
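
    Spatial capture–recapture models of this kind typically model detection with a half-normal function of the distance between trap and activity center. A minimal sketch with made-up parameter values, using a crude single-trap "effective sampled area" in place of the study's likelihood-based estimator:

```python
import math

def halfnormal_p(d, p0, sigma):
    """Half-normal encounter probability commonly used in SCR models:
    p(d) = p0 * exp(-d^2 / (2 sigma^2))."""
    return p0 * math.exp(-d * d / (2.0 * sigma * sigma))

def effective_sampled_area(p0, sigma):
    """Integral of p(d) over the plane for a single detector:
    2 * pi * sigma^2 * p0 -- a crude 'effective area' approximation."""
    return 2.0 * math.pi * sigma * sigma * p0

# Hypothetical values: p0 = 0.4 per occasion, sigma = 2 km, 12 animals detected.
esa = effective_sampled_area(0.4, 2.0)
print(round(12 / esa * 100, 1))  # rough animals per 100 km^2
```

    The real SCR fit maximizes a likelihood over all traps, occasions, and latent activity centers; this single-trap approximation only illustrates why density falls out of detection parameters rather than an assumed trapping area.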

  19. Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed

    USGS Publications Warehouse

    Balk, B.; Elder, K.; Baron, Jill S.

    1998-01-01

    Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff.  In April 1997, peak accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km^2), Rocky Mountain National Park, Colorado.  Geostatistics and classical statistics were used to estimate SWE distribution across the watershed.  Snow depths were spatially distributed across the watershed through kriging interpolation methods, which provide unbiased estimates that have minimum variances.  Snow densities were spatially modeled through regression analysis.  Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE.  The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths.  Steep slopes, variably strong winds, and complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.
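
    Combining modeled depth and density into SWE is a one-line calculation per grid cell. A minimal sketch, with the snow-covered-area mask reduced to a boolean flag:

```python
WATER_DENSITY = 1000.0  # kg/m^3

def swe_mm(depth_m, snow_density_kg_m3, snow_covered=True):
    """Snow water equivalence: SWE = depth * (rho_snow / rho_water),
    expressed in millimetres of water; zero outside the snow-covered area."""
    if not snow_covered:
        return 0.0
    return depth_m * snow_density_kg_m3 / WATER_DENSITY * 1000.0  # mm

# 1.2 m of snow at 350 kg/m^3 gives about 420 mm of water.
print(swe_mm(1.2, 350.0))
```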

  20. Comparison of Fatigue Life Estimation Using Equivalent Linearization and Time Domain Simulation Methods

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Dhainaut, Jean-Michel

    2000-01-01

    The Monte Carlo simulation method, in conjunction with the finite element large deflection modal formulation, is used to estimate the fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases varying from 106 dB to 160 dB OASPL with bandwidth 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDF), power spectral densities and higher statistical moments of the maximum deflection and stress/strain. The method of moments of PSD with Dirlik's approach is employed to estimate the panel fatigue life.

  1. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probability that each model is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.

  2. Motor unit number estimation based on high-density surface electromyography decomposition.

    PubMed

    Peng, Yun; He, Jinbao; Yao, Bo; Li, Sheng; Zhou, Ping; Zhang, Yingchun

    2016-09-01

    To advance the motor unit number estimation (MUNE) technique using high-density surface electromyography (EMG) decomposition. The K-means clustering convolution kernel compensation algorithm was employed to detect single motor unit potentials (SMUPs) from high-density surface EMG recordings of the biceps brachii muscles in eight healthy subjects. Contraction forces were controlled at 10%, 20% and 30% of the maximal voluntary contraction (MVC). The MUNE results and the representativeness of the SMUP pools were evaluated using a high-density weighted-average method. Mean numbers of motor units were estimated as 288±132, 155±87, 107±99 and 132±61 by the new MUNE method at 10%, 20%, 30% and 10-30% MVCs, respectively. Over 20 SMUPs were obtained at each contraction level, and the mean residual variances were lower than 10%. The new MUNE method allows convenient and non-invasive collection of a large and representative SMUP pool. It provides a useful tool for estimating the motor unit number of proximal muscles. The new MUNE method avoids the intramuscular electrodes or multiple electrical stimuli required by currently available MUNE techniques; as such, it can minimize patient discomfort during MUNE tests. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
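
    The arithmetic at the core of any MUNE-style estimate is a ratio: maximal compound response divided by mean single-unit response. The sketch below shows only that basic division with illustrative values, not the decomposition and weighted-average machinery the study develops:

```python
def motor_unit_number_estimate(cmap_amplitude, smup_amplitudes):
    """Classic MUNE arithmetic: the maximal compound muscle action
    potential (CMAP) divided by the mean single motor unit potential
    (SMUP) amplitude, in the same units (e.g. mV)."""
    mean_smup = sum(smup_amplitudes) / len(smup_amplitudes)
    return cmap_amplitude / mean_smup

# Illustrative amplitudes only:
mune = motor_unit_number_estimate(12.0, [0.05, 0.04, 0.06])
print(round(mune))
```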

  3. Nonparametric Density Estimation Based on Self-Organizing Incremental Neural Network for Large Noisy Data.

    PubMed

    Nakamura, Yoshihiro; Hasegawa, Osamu

    2017-01-01

    With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. Beforehand, prediction of a distribution underlying such data is difficult; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns about the given data as networks of prototype of data; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
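
    The kernel density estimation that KDESOINN extends can be written compactly. A minimal univariate Gaussian-kernel sketch with a fixed, hand-picked bandwidth:

```python
import math

def gaussian_kde(sample, bandwidth):
    """Classical kernel density estimate: the average of Gaussian bumps
    of width `bandwidth` centred on each sample point."""
    n = len(sample)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
                          for xi in sample)
    return pdf

pdf = gaussian_kde([-1.0, 0.0, 0.2, 1.1], bandwidth=0.5)
print(round(pdf(0.1), 3))
```

    KDESOINN replaces the raw sample with SOINN prototype nodes, so the sum runs over far fewer centres when the data are massive.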

  4. Camera trapping estimates of density and survival of fishers (Martes pennanti)

    Treesearch

    Mark J. Jordan; Reginald H. Barrett; Kathryn L. Purcell

    2011-01-01

    Developing efficient monitoring strategies for species of conservation concern is critical to ensuring their persistence. We have developed a method using camera traps to estimate density and survival in mesocarnivores and tested it on a population of fishers Martes pennanti in an area of approximately 300 km2 of the southern...

  5. Statistical properties of a filtered Poisson process with additive random noise: distributions, correlations and moment estimation

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; E Garcia, O.; Rypdal, M.

    2017-05-01

    Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimations of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type.
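
    A filtered Poisson process of this kind is straightforward to synthesize. A brute-force sketch with exponentially distributed pulse amplitudes, one-sided exponential pulse shapes, and optional additive Gaussian noise as in the extended model (parameter values are illustrative):

```python
import math
import random

def shot_noise(duration, rate, amplitude_mean, tau, dt, noise_sd=0.0, seed=1):
    """Filtered Poisson process: pulses arriving at Poisson times with
    exponentially distributed amplitudes, each decaying as exp(-t/tau);
    optional additive Gaussian noise as in the extended model."""
    rng = random.Random(seed)
    n_steps = int(duration / dt)
    signal = [0.0] * n_steps
    t = rng.expovariate(rate)
    while t < duration:
        a = rng.expovariate(1.0 / amplitude_mean)
        for k in range(int(t / dt), n_steps):
            signal[k] += a * math.exp(-(k * dt - t) / tau)
        t += rng.expovariate(rate)
    if noise_sd > 0.0:
        signal = [s + rng.gauss(0.0, noise_sd) for s in signal]
    return signal

sig = shot_noise(duration=200.0, rate=1.0, amplitude_mean=1.0, tau=1.0, dt=0.1)
mean = sum(sig) / len(sig)
print(round(mean, 2))  # theoretical mean is rate * amplitude_mean * tau = 1.0
```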

  6. A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times

    PubMed Central

    Heath, Tracy A.

    2012-01-01

    In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343
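
    The clustering behavior that the Dirichlet process hyperprior induces on calibration rate parameters can be illustrated with the Chinese restaurant process construction. A minimal sketch of that sampling step only, not the paper's MCMC machinery:

```python
import random

def chinese_restaurant_process(n, alpha, seed=0):
    """Sample a partition of n items under a Dirichlet process prior:
    item i joins an existing cluster with probability proportional to the
    cluster's size, or opens a new cluster with probability proportional
    to the concentration parameter alpha."""
    rng = random.Random(seed)
    sizes = []
    for i in range(n):
        r = rng.uniform(0.0, i + alpha)
        acc = 0.0
        for j, s in enumerate(sizes):
            acc += s
            if r < acc:
                sizes[j] += 1
                break
        else:
            sizes.append(1)
    return sizes

sizes = chinese_restaurant_process(100, alpha=2.0)
print(len(sizes), sum(sizes))  # number of clusters, total items
```

    In the paper's setting each cluster would share one exponential rate parameter, so calibrated node ages end up drawn from a mixture of exponentials.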

  7. Method to estimate the electron temperature and neutral density in a plasma from spectroscopic measurements using argon atom and ion collisional-radiative models.

    PubMed

    Sciamma, Ella M; Bengtson, Roger D; Rowan, W L; Keesee, Amy; Lee, Charles A; Berisford, Dan; Lee, Kevin; Gentle, K W

    2008-10-01

    We present a method to infer the electron temperature in argon plasmas using a collisional-radiative model for argon ions and measurements of electron density to interpret absolutely calibrated spectroscopic measurements of argon ion (Ar II) line intensities. The neutral density, and hence the degree of ionization of this plasma, can then be estimated using argon atom (Ar I) line intensities and a collisional-radiative model for argon atoms. This method has been tested for plasmas generated on two different devices at the University of Texas at Austin: the helicon experiment and the helimak experiment. We present results that show good correlation with other measurements in the plasma.

  8. Capture-recapture of white-tailed deer using DNA from fecal pellet-groups

    USGS Publications Warehouse

    Goode, Matthew J; Beaver, Jared T; Muller, Lisa I; Clark, Joseph D.; van Manen, Frank T.; Harper, Craig T; Basinger, P Seth

    2014-01-01

    Traditional methods for estimating white-tailed deer population size and density are affected by behavioral biases, poor detection in densely forested areas, and invalid techniques for estimating effective trapping area. We evaluated a noninvasive method of capture—recapture for white-tailed deer (Odocoileus virginianus) density estimation using DNA extracted from fecal pellets as an individual marker and for gender determination, coupled with a spatial detection function to estimate density (spatially explicit capture—recapture, SECR). We collected pellet groups from 11 to 22 January 2010 at randomly selected sites within a 1-km2 area located on Arnold Air Force Base in Coffee and Franklin counties, Tennessee. We searched 703 10-m radius plots and collected 352 pellet-group samples from 197 plots over five two-day sampling intervals. Using only the freshest pellets we recorded 140 captures of 33 different animals (15M:18F). Male and female densities were 1.9 (SE = 0.8) and 3.8 (SE = 1.3) deer km-2, or a total density of 5.8 deer km-2 (14.9 deer mile-2). Population size was 20.8 (SE = 7.6) over a 360-ha area, and sex ratio was 1.0 M: 2.0 F (SE = 0.71). We found DNA sampling from pellet groups improved deer abundance, density and sex ratio estimates in contiguous landscapes which could be used to track responses to harvest or other management actions.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples takes only 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
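
    For contrast with the objective, self-consistent bandwidth selection discussed in this record, the common rule-of-thumb alternative takes only a few lines. A sketch of Silverman's rule:

```python
import math
import random

def silverman_bandwidth(sample):
    """Silverman's rule of thumb, a cheap (not fully objective) bandwidth:
    h = 0.9 * min(sd, IQR/1.34) * n^(-1/5)."""
    n = len(sample)
    xs = sorted(sample)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    iqr = xs[(3 * n) // 4] - xs[n // 4]  # approximate quartiles
    return 0.9 * min(sd, iqr / 1.34) * n ** (-0.2)

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(1000)]
h = silverman_bandwidth(data)
print(round(h, 3))
```

    Rules of thumb like this assume near-Gaussian data; the appeal of objective methods such as fastKDE is precisely that they drop that assumption.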

  10. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  11. Spatial capture--recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture--recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture-recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture-recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
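
    The "ecological distance" idea reduces to a shortest-path computation over a resistance surface. A minimal Dijkstra sketch on a toy 4-neighbour grid; the step-cost convention (mean resistance of the two cells joined) is an assumption, not necessarily the authors' parameterization:

```python
import heapq

def least_cost_distance(resistance, start, goal):
    """Least-cost ('ecological') distance between two cells of a resistance
    grid: Dijkstra over 4-neighbour moves, each step costing the mean
    resistance of the two cells it connects."""
    rows, cols = len(resistance), len(resistance[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 0.5 * (resistance[r][c] + resistance[nr][nc])
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# A ridge of high resistance makes the least-cost path longer than Euclidean:
grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
print(least_cost_distance(grid, (0, 0), (0, 2)))
```

    In the SCR encounter model, this distance replaces the Euclidean distance between trap and activity center inside the detection function.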

  12. The effect of different methods to compute N on estimates of mixing in stratified flows

    NASA Astrophysics Data System (ADS)

    Fringer, Oliver; Arthur, Robert; Venayagamoorthy, Subhas; Koseff, Jeffrey

    2017-11-01

    The background stratification is typically well defined in idealized numerical models of stratified flows, although it is more difficult to define in observations. This may have important ramifications for estimates of mixing which rely on knowledge of the background stratification against which turbulence must work to mix the density field. Using direct numerical simulation data of breaking internal waves on slopes, we demonstrate a discrepancy in ocean mixing estimates depending on the method in which the background stratification is computed. Two common methods are employed to calculate the buoyancy frequency N, namely a three-dimensionally resorted density field (often used in numerical models) and a locally-resorted vertical density profile (often used in the field). We show that how N is calculated has a significant effect on the flux Richardson number Rf, which is often used to parameterize turbulent mixing, and the turbulence activity number Gi, which leads to errors when estimating the mixing efficiency using Gi-based parameterizations. Supported by ONR Grant N00014-08-1-0904 and LLNL Contract DE-AC52-07NA27344.
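
    The two ways of computing N differ only in how the background density profile is built before differentiating. A sketch of the resorted-profile variant for a single column, with g and a reference density assumed:

```python
GRAV = 9.81    # m/s^2 (assumed)
RHO0 = 1000.0  # reference density, kg/m^3 (assumed)

def n_squared_sorted(density, dz):
    """Background N^2 from a resorted density profile: restratify stably by
    sorting densest-at-bottom, then N^2 = -(g/rho0) * d(rho_sorted)/dz,
    with z increasing upward (index 0 = bottom)."""
    rho_sorted = sorted(density, reverse=True)  # densest at the bottom
    return [-(GRAV / RHO0) * (rho_sorted[k + 1] - rho_sorted[k]) / dz
            for k in range(len(rho_sorted) - 1)]

# An overturned (unstable) profile still yields a non-negative background N^2:
prof = [1000.0, 1002.0, 1001.0, 1003.0]  # kg/m^3, bottom to top
ns = n_squared_sorted(prof, dz=10.0)
print(ns)
```

    A 3-D resort (as in the numerical models) would pool every grid cell before sorting; a field profile can only be resorted locally, which is the source of the discrepancy the abstract describes.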

  13. Estimating the densities of benzene-derived explosives using atomic volumes.

    PubMed

    Ghule, Vikas D; Nirwan, Ayushi; Devi, Alka

    2018-02-09

    The application of average atomic volumes to predict the crystal densities of benzene-derived energetic compounds of general formula CaHbNcOd is presented, along with the reliability of this method. The densities of 119 neutral nitrobenzenes, energetic salts, and cocrystals with diverse compositions were estimated and compared with experimental data. Of the 74 nitrobenzenes for which direct comparisons could be made, the % error in the estimated density was within 0-3% for 54 compounds, 3-5% for 12 compounds, and 5-8% for the remaining 8 compounds. Among 45 energetic salts and cocrystals, the % error in the estimated density was within 0-3% for 25 compounds, 3-5% for 13 compounds, and 5-7.4% for 7 compounds. The absolute error surpassed 0.05 g/cm3 for 27 of the 119 compounds (22%). The largest errors occurred for compounds containing fused rings and for compounds with three -NH2 or -OH groups. Overall, the present approach for estimating the densities of benzene-derived explosives with different functional groups was found to be reliable. Graphical abstract: application and reliability of average atomic volumes in predicting the crystal density of energetic compounds containing a benzene ring.
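    The underlying arithmetic is simply molar mass divided by the summed atomic volumes, converted to g/cm3. A sketch with placeholder volumes (the paper's fitted average atomic volumes are not reproduced here; these numbers are assumptions for illustration only):

```python
# Hypothetical average atomic volumes in cubic angstroms -- the paper fits
# such values to experimental crystal data; these are placeholders only.
ATOMIC_VOLUME = {"C": 13.0, "H": 5.0, "N": 10.0, "O": 11.0}
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}  # g/mol
AVOGADRO = 6.02214e23

def estimated_density(formula):
    """Crystal density (g/cm^3) of a CaHbNcOd compound as M / (NA * sum V).

    formula maps element symbol -> atom count; 1 A^3 = 1e-24 cm^3.
    """
    mass = sum(ATOMIC_MASS[el] * n for el, n in formula.items())      # g/mol
    volume = sum(ATOMIC_VOLUME[el] * n for el, n in formula.items())  # A^3
    return mass / (AVOGADRO * volume * 1e-24)                         # g/cm^3
```

    With fitted (rather than placeholder) volumes, this one-line formula is what yields the 0-3% errors reported for most of the 119 compounds.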

  14. Photometric redshift estimation via deep learning. Generalized and pre-classification-less, image based, fully probabilistic redshifts

    NASA Astrophysics Data System (ADS)

    D'Isanto, A.; Polsterer, K. L.

    2018-01-01

    Context. The need to analyze the available large synoptic multi-band surveys drives the development of new data-analysis methods. Photometric redshift estimation is one field of application where such new methods have improved the results substantially. Up to now, the vast majority of applied redshift estimation methods have utilized photometric features. Aims: We aim to develop a method to derive probabilistic photometric redshifts directly from multi-band imaging data, rendering pre-classification of objects and feature extraction obsolete. Methods: A modified version of a deep convolutional network was combined with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) were applied as performance criteria. We adopted a feature-based random forest and a plain mixture density network to compare performances in experiments with data from SDSS (DR9). Results: We show that the proposed method is able to predict redshift PDFs independently of the type of source, for example galaxies, quasars or stars. The prediction performance is better than both of the presented reference methods and is comparable to results from the literature. Conclusions: The presented method is extremely general and allows us to solve any kind of probabilistic regression problem based on imaging data, for example estimating the metallicity or star formation rate of galaxies. This kind of methodology is tremendously important for the next generation of surveys.
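    For a single Gaussian component the CRPS has a well-known closed form (for a Gaussian mixture it is evaluated with a similar, componentwise closed form); a minimal sketch of the single-Gaussian case:

```python
import math

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) against outcome y.

    CRPS = sigma * [ z(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi) ], z = (y-mu)/sigma.
    Smaller is better; unlike a point-error score it rewards well-calibrated
    predictive spread, which is why it suits PDF-producing estimators.
    """
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)          # phi(z)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))                   # Phi(z)
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))
```

    The score is minimized when the outcome falls at the center of the predicted PDF and grows with the miss distance, in units of the target variable.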

  15. Density estimation in wildlife surveys

    Treesearch

    Jonathan Bart; Sam Droege; Paul Geissler; Bruce Peterjohn; C. John Ralph

    2004-01-01

    Several authors have recently discussed the problems with using index methods to estimate trends in population size. Some have expressed the view that index methods should virtually never be used. Others have responded by defending index methods and questioning whether better alternatives exist. We suggest that index methods are often a cost-effective component of valid...

  16. Shell productivity of the large benthic foraminifer Baculogypsina sphaerulata, based on the population dynamics in a tropical reef environment

    NASA Astrophysics Data System (ADS)

    Fujita, Kazuhiko; Otomaru, Maki; Lopati, Paeniu; Hosono, Takashi; Kayanne, Hajime

    2016-03-01

    Carbonate production by large benthic foraminifers is sometimes comparable to that of corals and coralline algae, and contributes to sedimentation on reef islands and beaches in the tropical Pacific. Population dynamic data, such as population density and size structure (size-frequency distribution), are vital for an accurate estimation of shell production of foraminifers. However, previous production estimates in tropical environments were based on a limited sampling period with no consideration of seasonality. In addition, no comparisons were made of various estimation methods to determine more accurate estimates. Here we present the annual gross shell production rate of Baculogypsina sphaerulata, estimated based on population dynamics studied over a 2-yr period on an ocean reef flat of Funafuti Atoll (Tuvalu, tropical South Pacific). The population density of B. sphaerulata increased from January to March, when northwest winds predominated and the study site was on the leeward side of reef islands, compared to other seasons when southeast trade winds predominated and the study site was on the windward side. This result suggested that wind-driven flows controlled the population density at the study site. The B. sphaerulata population had a relatively stationary size-frequency distribution throughout the study period, indicating no definite intensive reproductive period in the tropical population. Four methods were applied to estimate the annual gross shell production rates of B. sphaerulata. The production rates estimated by three of the four methods (using monthly biomass, life tables and growth increment rates) were on the order of hundreds of g CaCO3 m-2 yr-1 or cm3 m-2 yr-1, and the simple method using turnover rates overestimated the values. This study suggests that seasonal surveys of population density and size structure should be undertaken, as these can produce more accurate estimates of shell productivity of large benthic foraminifers.

  17. Dual energy approach for cone beam artifacts correction

    NASA Astrophysics Data System (ADS)

    Han, Chulhee; Choi, Shinkook; Lee, Changwoo; Baek, Jongduk

    2017-03-01

    Cone beam computed tomography systems generate 3D volumetric images, which provide further morphological information compared to radiography and tomosynthesis systems. However, images reconstructed by the FDK algorithm contain cone beam artifacts when the cone angle is large. To reduce these artifacts, a two-pass algorithm has been proposed. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by high-density materials, and provides an effective method to estimate the error images (i.e., cone beam artifact images) generated by the high-density materials. While this approach is simple and effective for a small cone angle (i.e., 5-7 degrees), the correction performance degrades as the cone angle increases. In this work, we propose a new method to reduce the cone beam artifacts using a dual energy technique. The basic idea of the proposed method is to estimate the error images generated by the high-density materials more reliably. To do this, projection data of the high-density materials are extracted from dual energy CT projection data using a material decomposition technique, and then reconstructed by iterative reconstruction with total-variation regularization. The reconstructed high-density materials are used to estimate the error images from the original FDK images. The performance of the proposed method is compared with the two-pass algorithm using root mean square errors. The results show that the proposed method reduces the cone beam artifacts more effectively, especially at large cone angles.
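    At its core, two-material decomposition inverts a small linear system relating the measured line integrals at two energies to the basis-material thicknesses. A minimal sketch (the attenuation coefficients and the "soft"/"metal" labels are illustrative assumptions, not values from the paper):

```python
def decompose_two_materials(mu, line_integrals):
    """Solve [L_low, L_high] = A @ [t_soft, t_metal] for material thicknesses.

    mu[(energy, material)] are linear attenuation coefficients (1/cm);
    the 2x2 system is inverted directly via Cramer's rule.
    """
    a, b = mu[("low", "soft")], mu[("low", "metal")]
    c, d = mu[("high", "soft")], mu[("high", "metal")]
    det = a * d - b * c  # nonzero when the two spectra separate the materials
    L_low, L_high = line_integrals
    t_soft = (d * L_low - b * L_high) / det
    t_metal = (a * L_high - c * L_low) / det
    return t_soft, t_metal
```

    Per detector pixel, this recovers the high-density (metal) component, whose projections can then be reconstructed separately, as in the proposed correction pipeline.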

  18. Density estimates of monarch butterflies overwintering in central Mexico

    PubMed Central

    Diffendorfer, Jay E.; López-Hoffman, Laura; Oberhauser, Karen; Pleasants, John; Semmens, Brice X.; Semmens, Darius; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9 to 60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems are needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations. PMID:28462031
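    The gap between the reported mean (∼27.9 million ha−1) and median (21.1 million ha−1) follows directly from the right-skewed log-normal shape. Assuming an exactly log-normal density matching those two published summaries (an assumption; the abstract says only "approximately log-normal"):

```python
import math

# Reported summaries, in millions of butterflies per hectare.
median, mean = 21.1, 27.9

# For a log-normal: median = exp(mu) and mean = exp(mu + sigma^2 / 2),
# so the two summaries imply the location and spread parameters directly.
mu = math.log(median)
sigma = math.sqrt(2 * math.log(mean / median))

# Recovering the mean from (mu, sigma) closes the loop and shows why the
# mean sits above the median for any right-skewed log-normal.
implied_mean = math.exp(mu + sigma ** 2 / 2)
```

    This is why the authors prefer the median as the point summary: the long upper tail of the density estimates pulls the mean upward without representing a typical colony.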

  19. Density estimates of monarch butterflies overwintering in central Mexico

    USGS Publications Warehouse

    Thogmartin, Wayne E.; Diffendorfer, James E.; Lopez-Hoffman, Laura; Oberhauser, Karen; Pleasants, John M.; Semmens, Brice X.; Semmens, Darius J.; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9 to 60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems are needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.

  20. Atmospheric turbulence profiling with unknown power spectral density

    NASA Astrophysics Data System (ADS)

    Helin, Tapio; Kindermann, Stefan; Lehtonen, Jonatan; Ramlau, Ronny

    2018-04-01

    Adaptive optics (AO) is a technology used in modern ground-based optical telescopes to compensate for the wavefront distortions caused by atmospheric turbulence. One method that allows information about the atmosphere to be retrieved from telescope data is the so-called SLODAR approach, in which the atmospheric turbulence profile is estimated from correlation data of Shack-Hartmann wavefront measurements. This approach relies on a layered Kolmogorov turbulence model. In this article, we propose a novel extension of the SLODAR concept by including a general non-Kolmogorov turbulence layer close to the ground with an unknown power spectral density. We prove that the problem of jointly estimating the turbulence profile above the ground and the unknown power spectral density at the ground is ill-posed, and we propose three numerical reconstruction methods. We demonstrate by numerical simulations that our methods lead to substantial improvements in the turbulence profile reconstruction compared to the standard SLODAR-type approach. Our methods can also accurately locate local perturbations in non-Kolmogorov power spectral densities.

  1. Ore Reserve Estimation of Saprolite Nickel Using Inverse Distance Method in PIT Block 3A Banggai Area Central Sulawesi

    NASA Astrophysics Data System (ADS)

    Khaidir Noor, Muhammad

    2018-03-01

    Reserve estimation is one of the most important tasks in evaluating a mining project: it estimates the quality and quantity of minerals of economic value. The reserve calculation method plays an important role in determining the efficiency of commercial exploitation of a deposit. This study was intended to calculate the ore reserves contained in the study area, specifically Pit Block 3A. The nickel ore reserve was estimated from detailed exploration data, processed in Surpac 6.2 using the inverse distance weighting (squared power) estimation method. The estimate obtained from 30 drill holes was 76,453.5 tons of saprolite with a density of 1.5 ton/m3 and a cut-off grade (COG) of Ni ≥ 1.6%, while the overburden was 112,570.8 tons with a waste rock density of 1.2 ton/m3. The stripping ratio (SR) was 1.47:1, smaller than the planned stripping ratio of 1.60:1.
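    The squared-power inverse distance weighting behind the block model can be sketched as follows (a 2D toy version; Surpac performs this in 3D with search ellipsoids, octant limits, and other controls not shown here):

```python
def idw_estimate(samples, x, y, power=2, eps=1e-12):
    """Inverse-distance-weighted grade estimate at (x, y).

    samples: list of (xi, yi, grade) drill-hole composites. power=2 gives
    the squared-power weighting used for the Pit Block 3A model.
    """
    num = den = 0.0
    for xi, yi, grade in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 < eps:  # estimation point coincides with a drill hole
            return grade
        w = 1.0 / d2 ** (power / 2)  # weight = 1 / distance^power
        num += w * grade
        den += w
    return num / den
```

    Because the weights are positive and normalized, the estimate is always bounded by the minimum and maximum sample grades, and higher powers localize the estimate more strongly around the nearest drill holes.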

  2. Soil Bulk Density by Soil Type, Land Use and Data Source: Putting the Error in SOC Estimates

    NASA Astrophysics Data System (ADS)

    Wills, S. A.; Rossi, A.; Loecke, T.; Ramcharan, A. M.; Roecker, S.; Mishra, U.; Waltman, S.; Nave, L. E.; Williams, C. O.; Beaudette, D.; Libohova, Z.; Vasilas, L.

    2017-12-01

    An important part of SOC stock and pool assessment is the estimation and application of bulk density. While the concept of bulk density is relatively simple (the mass of soil in a given volume), bulk density can be difficult to measure in soils due to logistical and methodological constraints. While many estimates of SOC pools use legacy data, few concerted efforts have been made to assess the process used to convert laboratory carbon concentration measurements and bulk density collection into volumetrically based SOC estimates. The methodologies used are particularly sensitive in wetlands and organic soils with high amounts of carbon and very low bulk densities. We will present an analysis across four datasets: NCSS - the National Cooperative Soil Survey Characterization dataset, RaCA - the Rapid Carbon Assessment sample dataset, NWCA - the National Wetland Condition Assessment, and ISCN - the International Soil Carbon Network. The relationship between bulk density and soil organic carbon will be evaluated by dataset and land use/land cover information. Prediction methods (both regression and machine learning) will be compared and contrasted across datasets and available input information. The assessment and application of bulk density, including modeling, aggregation and error propagation, will be evaluated. Finally, recommendations will be made about both the use of new data in soil survey products (such as SSURGO) and the use of that information as legacy data in SOC pool estimates.
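    The conversion from concentration to a volumetric stock, and why bulk density error matters, can be sketched with first-order error propagation (the horizon values in the test are invented; function names are illustrative):

```python
import math

def soc_stock(bulk_density, depth_cm, soc_pct):
    """Soil organic carbon stock (kg C m^-2) for one horizon.

    bulk_density in g cm^-3, depth in cm, soc_pct as percent carbon by mass.
    1 g cm^-2 of carbon equals 10 kg m^-2, hence the factor of 10.
    """
    return bulk_density * depth_cm * (soc_pct / 100.0) * 10.0

def soc_stock_error(bulk_density, bd_sd, depth_cm, soc_pct, soc_sd):
    """First-order (relative-error) propagation for the product above,
    assuming independent errors in bulk density and carbon concentration."""
    stock = soc_stock(bulk_density, depth_cm, soc_pct)
    rel = math.sqrt((bd_sd / bulk_density) ** 2 + (soc_sd / soc_pct) ** 2)
    return stock, stock * rel
```

    Because the stock is a product, relative errors in bulk density pass straight through to the stock; in organic soils, where bulk density is both tiny and hard to measure, its relative error often dominates the SOC uncertainty.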

  3. Cetacean Density Estimation from Novel Acoustic Datasets by Acoustic Propagation Modeling

    DTIC Science & Technology

    2014-09-30

    hydrophone, to estimate the population density of false killer whales (Pseudorca crassidens) off of the Kona coast of the Island of Hawai’i... killer whale, suffers from interaction with the fisheries industry and its population has been reported to have declined in the past 20 years. Studies...of abundance estimate of false killer whales in Hawai’i through mark-recapture methods will provide comparable results to the ones obtained by this

  4. Cetacean Density Estimation from Novel Acoustic Datasets by Acoustic Propagation Modeling

    DTIC Science & Technology

    2013-09-30

    hydrophone, to estimate the population density of false killer whales (Pseudorca crassidens) off of the Kona coast of the Island of Hawai’i. OBJECTIVES...propagation due to the complexities of its environment. Moreover, the target species chosen for the proposed work, the false killer whale, suffers...estimate of false killer whales in Hawai’i through mark-recapture methods will provide comparable results to the ones obtained by this project. The ultimate

  5. Sensitivity of fish density estimates to standard analytical procedures applied to Great Lakes hydroacoustic data

    USGS Publications Warehouse

    Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.

    2013-01-01

    Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.
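    Noise subtraction of the kind recommended above must be done in the linear domain, not on the dB values themselves; a minimal sketch (the function name and the example values are illustrative, not taken from the GLSOP):

```python
import math

def subtract_noise_db(sv_db, noise_db):
    """Subtract a noise estimate from volume backscattering strength.

    Both arguments are in dB re 1 m^-1. The values are converted to the
    linear domain, differenced, and converted back; a cell at or below
    the noise floor has no defined backscatter and returns -inf.
    """
    linear = 10 ** (sv_db / 10) - 10 ** (noise_db / 10)
    if linear <= 0:
        return float("-inf")
    return 10 * math.log10(linear)
```

    When the signal is far above the noise floor the correction is negligible, but near the floor it removes the upward bias that inflates density estimates in deep, quiet layers.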

  6. Real-time reflectometry measurement validation in H-mode regimes for plasma position control.

    PubMed

    Santos, J; Guimarais, L; Manso, M

    2010-10-01

    It has been shown that in H-mode regimes, reflectometry electron density profiles and an estimate for the density at the separatrix can be jointly used to track the separatrix within the precision required for plasma position control on ITER. We present a method to automatically remove, from the position estimation procedure, measurements performed during the collapse and recovery phases of edge localized modes (ELMs). Based on the rejection mechanism, the method also produces a confidence value for the estimate, to be fed to the position feedback controller. Preliminary results show that the method improves the real-time experimental separatrix tracking capabilities and has the potential to eliminate the need for an external online source of ELM event signaling during control feedback operation.

  7. New density estimation methods for charged particle beams with applications to microbunching instability

    NASA Astrophysics Data System (ADS)

    Terzić, Balša; Bassi, Gabriele

    2011-07-01

    In this paper we discuss representations of charged particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009); G. Bassi and B. Terzić, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043], designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform; and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009)], and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
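    The Monte Carlo cosine expansion, and the effect of truncating it, can be illustrated in 1D: coefficients of an orthogonal cosine series are estimated as sample averages, and truncation suppresses the noisy high-order modes. A toy version on [0, 1] (the paper's codes work with 2D gridded data and fast transforms; this direct summation is for illustration only):

```python
import math
import random

def cosine_density_estimator(samples, n_modes):
    """Orthogonal cosine-series density estimate on [0, 1].

    Each coefficient a_k = E[sqrt(2) * cos(k*pi*X)] is estimated as a sample
    average; keeping only the first n_modes terms filters out the
    high-frequency sampling noise, the same idea as truncating a fast
    cosine transform of a binned density.
    """
    n = len(samples)
    coeff = [sum(math.sqrt(2) * math.cos(k * math.pi * x) for x in samples) / n
             for k in range(1, n_modes + 1)]

    def density(x):
        # f(x) = 1 + sum_k a_k * sqrt(2) * cos(k*pi*x)
        return 1.0 + sum(a * math.sqrt(2) * math.cos((k + 1) * math.pi * x)
                         for k, a in enumerate(coeff))

    return density
```

    For uniform samples every true coefficient is zero, so the estimate should hover near the flat density 1; the residual wiggle is exactly the intrinsic sampling noise the paper's truncation and wavelet thresholding are designed to remove.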

  8. Estimating forest canopy bulk density using six indirect methods

    Treesearch

    Robert E. Keane; Elizabeth D. Reinhardt; Joe Scott; Kathy Gray; James Reardon

    2005-01-01

    Canopy bulk density (CBD) is an important crown characteristic needed to predict crown fire spread, yet it is difficult to measure in the field. Presented here is a comprehensive research effort to evaluate six indirect sampling techniques for estimating CBD. As reference data, detailed crown fuel biomass measurements were taken on each tree within fixed-area plots...

  9. Nonlinear PP and PS joint inversion based on the exact Zoeppritz equations: a two-stage procedure

    NASA Astrophysics Data System (ADS)

    Zhi, Lixia; Chen, Shuangquan; Song, Baoshan; Li, Xiang-yang

    2018-04-01

    S-velocity and density are very important parameters for distinguishing lithology and estimating other petrophysical properties. A reliable estimate of S-velocity and density is very difficult to obtain, even from long-offset gather data. Joint inversion of PP and PS data provides a promising strategy for stabilizing and improving the results of inversion when estimating elastic parameters and density. For 2D or 3D inversion, the trace-by-trace strategy is still the most widely used method because of its high efficiency, which is due to parallel computing, although its results often suffer from a lack of clarity. This paper describes a two-stage inversion method for nonlinear PP and PS joint inversion based on the exact Zoeppritz equations. Our proposed method has several advantages: (1) thanks to the exact Zoeppritz equations, our joint inversion method is applicable to wide-angle amplitude-versus-angle inversion; (2) the use of both P- and S-wave information further enhances the stability and accuracy of parameter estimation, especially for S-velocity and density; (3) the two-stage inversion procedure achieves a good compromise between efficiency and precision. On the one hand, the trace-by-trace strategy used in the first stage can be processed in parallel, so it has high computational efficiency. On the other hand, to deal with the indistinctness of, and undesired disturbances to, the inversion results obtained from the first stage, we apply a second stage: total variation (TV) regularization. By enforcing spatial and temporal constraints, the TV regularization stage deblurs the inversion results and leads to parameter estimation with greater precision. Notably, the computational cost of the TV regularization stage is negligible compared to the first stage because it is solved using fast split Bregman iterations.
    Numerical examples using a well log and the Marmousi II model show that the proposed joint inversion is a reliable method capable of accurately estimating the density parameter as well as P-wave velocity and S-wave velocity, even when the seismic data are noisy, with a signal-to-noise ratio of 5.
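    The paper solves the TV stage with fast split Bregman iterations; as a compact stand-in, the same 1D objective (data misfit plus total variation) can be minimized by plain gradient descent on a smoothed absolute value. This is only a sketch of the regularization concept, not the paper's solver, and the step size and smoothing constant are ad hoc choices:

```python
import math

def tv_denoise_1d(signal, lam=0.2, step=0.02, iters=3000, eps=1e-4):
    """Approximately minimize 0.5*||u - f||^2 + lam * sum |u[i+1] - u[i]|.

    The absolute value is smoothed as sqrt(d^2 + eps) so plain gradient
    descent applies (the split Bregman solver in the paper is far faster;
    this loop only illustrates what TV regularization does to a profile).
    """
    u = list(signal)
    n = len(u)
    for _ in range(iters):
        grad = [u[i] - signal[i] for i in range(n)]  # data-misfit gradient
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            g = d / math.sqrt(d * d + eps)  # smoothed sign of the jump
            grad[i] -= lam * g
            grad[i + 1] += lam * g
        u = [u[i] - step * grad[i] for i in range(n)]
    return u
```

    Applied to a noisy piecewise-constant profile, the penalty flattens small oscillations while largely preserving the genuine jump, which is the "deblurring without over-smoothing" behavior the second stage provides.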

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonneville, Alain H.; Kouzes, Richard T.

    Imaging subsurface geological formations, oil and gas reservoirs, mineral deposits, cavities or magma chambers under active volcanoes has been for many years a major quest of geophysicists and geologists. Since these objects cannot be observed directly, different indirect geophysical methods have been developed. They are all based on variations of certain physical properties of the subsurface that can be detected from the ground surface or from boreholes. Electrical resistivity, seismic wave velocities and density are certainly the most used properties. Looking at density, indirect estimates of density distributions are currently performed by seismic reflection methods - since the velocity of seismic waves also depends on density - but they are expensive and discontinuous in time. Direct estimates of density are performed using gravimetric data, looking at variations of the gravity field induced by density variations at depth, but this is not sufficiently accurate. A new imaging technique using cosmic-ray muon detectors has emerged during the last decade, and muon tomography - or muography - promises to provide, for the first time, a complete and precise image of the density distribution in the subsurface. Further, this novel approach has the potential to become a direct, real-time, and low-cost method for monitoring fluid displacement in subsurface reservoirs.

  11. A bias-corrected estimator in multiple imputation for missing data.

    PubMed

    Tomita, Hiroaki; Fujisawa, Hironori; Henmi, Masayuki

    2018-05-29

    Multiple imputation (MI) is one of the most popular methods for dealing with missing data, and its use has been rapidly increasing in medical studies. Although MI is appealing in practice, since ordinary statistical methods can be applied to the complete data set once the missing values are fully imputed, the method of imputation remains problematic. If the missing values are imputed from some parametric model, the validity of the imputation is not necessarily ensured, and the final estimate for a parameter of interest can be biased unless the parametric model is correctly specified. Nonparametric methods have also been proposed for MI, but it is not straightforward to produce imputation values from nonparametrically estimated distributions. In this paper, we propose a new method for MI to obtain a consistent (or asymptotically unbiased) final estimate even if the imputation model is misspecified. The key idea is to use an imputation model from which the imputation values are easily produced and to make a proper correction in the likelihood function after the imputation, using the density ratio between the imputation model and the true conditional density function for the missing variable as a weight. Although the conditional density must be estimated nonparametrically, it is not used for the imputation. The performance of our method is evaluated by both theory and simulation studies. A real data analysis is also conducted to illustrate our method using the Duke Cardiac Catheterization Coronary Artery Disease Diagnostic Dataset. Copyright © 2018 John Wiley & Sons, Ltd.
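    The flavor of the density-ratio correction can be shown with a toy importance-weighting example: imputations are drawn from a misspecified model, and the bias is removed by weighting each draw by the ratio of the true density to the imputation density. Here both densities are known Gaussians, whereas in the paper the true conditional is estimated nonparametrically and the correction enters the likelihood; all numbers below are invented:

```python
import math
import random

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def weighted_mean_of_imputations(n=50000, seed=1):
    """Draw imputations from a misspecified N(0, 2^2) model, then correct
    the mean with the density ratio against the 'true' N(1, 1) conditional.
    Returns (naive mean, ratio-corrected mean)."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 2.0) for _ in range(n)]            # misspecified draws
    ws = [normal_pdf(x, 1.0, 1.0) / normal_pdf(x, 0.0, 2.0) for x in xs]
    naive = sum(xs) / n                                      # biased toward 0
    corrected = sum(w * x for w, x in zip(ws, xs)) / sum(ws)  # recovers 1
    return naive, corrected
```

    The naive average of the imputations centers on the wrong model's mean, while the self-normalized weighted average recovers the true mean, which is the consistency property the proposed estimator generalizes.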

  12. Three methods for estimating a range of vehicular interactions

    NASA Astrophysics Data System (ADS)

    Krbálek, Milan; Apeltauer, Jiří; Apeltauer, Tomáš; Szabová, Zuzana

    2018-02-01

    We present three different approaches to estimating the number of preceding cars that influence the decision-making of a given driver moving in saturated traffic flows. The first method is based on correlation analysis, the second evaluates (quantitatively) deviations from the main assumption in the convolution theorem for probability, and the third operates with advanced instruments of the theory of counting processes (statistical rigidity). We demonstrate that the universally accepted premise of short-ranged traffic interactions may not be correct. All of the methods introduced reveal that the minimum number of actively-followed vehicles is two. This supports the idea that vehicular interactions are, in fact, middle-ranged. Furthermore, the consistency among the estimates is surprisingly good. In all cases we have found that the interaction range (the number of actively-followed vehicles) drops with traffic density. Whereas drivers moving in congested regimes with lower density (around 30 vehicles per kilometer) react to four or five neighbors, drivers moving in high-density flows respond to only two predecessors.

  13. Effects of social organization, trap arrangement and density, sampling scale, and population density on bias in population size estimation using some common mark-recapture estimators.

    PubMed

    Gupta, Manan; Joshi, Amitabh; Vidya, T N C


  14. Effects of social organization, trap arrangement and density, sampling scale, and population density on bias in population size estimation using some common mark-recapture estimators

    PubMed Central

    Joshi, Amitabh; Vidya, T. N. C.

    2017-01-01

    Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for those parameter combinations at which POPAN gave more or less unbiased population size estimates. Therefore, the effect of social organization on bias in population estimation could be removed by using POPAN with specific parameter combinations, to obtain population size estimates in a social species. PMID:28306735
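    The failure mode these simulations probe, correlated captures within social groups combined with heterogeneous trappability, can be illustrated with a minimal Monte Carlo sketch. All parameters below are hypothetical, and a simple Chapman/Lincoln-Petersen estimator stands in for the POPAN and Robust Design fits, which require specialized CMR software:

```python
import random

def lp_with_social_groups(n_groups=100, group_size=5, trials=2000, seed=1):
    """Monte Carlo sketch: two-occasion Chapman/Lincoln-Petersen estimates
    when whole social groups are captured together and groups differ in
    trappability, i.e. the heterogeneity that biases closed-population models."""
    rng = random.Random(seed)
    N = n_groups * group_size                      # true population size
    # half the groups are trap-happy, half trap-shy (hypothetical values)
    p = [0.5] * (n_groups // 2) + [0.1] * (n_groups - n_groups // 2)
    estimates = []
    for _ in range(trials):
        occ1 = {g for g in range(n_groups) if rng.random() < p[g]}
        occ2 = {g for g in range(n_groups) if rng.random() < p[g]}
        n1 = len(occ1) * group_size                # marked on occasion 1
        n2 = len(occ2) * group_size                # examined on occasion 2
        m2 = len(occ1 & occ2) * group_size         # marked recaptures
        # Chapman's bias-corrected form of the Lincoln-Petersen estimator
        estimates.append((n1 + 1) * (n2 + 1) / (m2 + 1) - 1)
    return N, sum(estimates) / trials

true_N, mean_est = lp_with_social_groups()
# mean_est falls well below true_N: heterogeneity drives underestimation
```

    With homogeneous capture probabilities the same estimator is close to unbiased, which mirrors the abstract's point that the damage comes from the interaction of social structure with other design factors.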

  15. Estimating the mediating effect of different biomarkers on the relation of alcohol consumption with the risk of type 2 diabetes.

    PubMed

    Beulens, Joline W J; van der Schouw, Yvonne T; Moons, Karel G M; Boshuizen, Hendriek C; van der A, Daphne L; Groenwold, Rolf H H

    2013-04-01

    Moderate alcohol consumption is associated with a reduced type 2 diabetes risk, but the biomarkers that explain this relation are unknown. The most commonly used method to estimate the proportion explained by a biomarker is the difference method; however, the influence of alcohol-biomarker interaction on its results is unclear. The G-estimation method has been proposed to assess the proportion explained more accurately, but how this method compares with the difference method is unknown. In a case-cohort study of 2498 controls and 919 incident diabetes cases, we estimated the proportion of the relation between alcohol consumption and diabetes explained by different biomarkers using the difference method and the sequential G-estimation method. Using the difference method, high-density lipoprotein cholesterol explained the relation between alcohol and diabetes by 78% (95% confidence interval [CI], 41-243), whereas high-sensitivity C-reactive protein (-7.5%; -36.4 to 1.8) and blood pressure (-6.9%; -26.3 to -0.6) did not explain the relation. Interaction between alcohol and liver enzymes led to bias in the proportion explained, with different outcomes for different levels of liver enzymes. The G-estimation method showed comparable results, but the proportions explained were lower. The relation between alcohol consumption and diabetes may be largely explained by increased high-density lipoprotein cholesterol but not by other biomarkers. Ignoring exposure-mediator interactions may result in bias. The difference and G-estimation methods provide similar results. Copyright © 2013 Elsevier Inc. All rights reserved.
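    The difference method itself is simple: compare the exposure coefficient before and after adjusting for the mediator. Below is a hedged sketch using a simulated continuous outcome and ordinary least squares, a linear-model stand-in for the paper's case-cohort logistic analysis; all effect sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
alcohol = rng.normal(size=n)                            # exposure
hdl = 0.5 * alcohol + rng.normal(size=n)                # mediator (HDL-like)
risk = 0.8 * hdl + 0.2 * alcohol + rng.normal(size=n)   # continuous outcome proxy

def ols_slopes(cols, y):
    """Least-squares slopes; an intercept column is added then discarded."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total = ols_slopes([alcohol], risk)[0]          # crude exposure effect (~0.6)
direct = ols_slopes([alcohol, hdl], risk)[0]    # mediator-adjusted effect (~0.2)
prop_explained = (total - direct) / total       # difference method (~2/3)
```

    The exposure-mediator interaction problem the abstract describes arises exactly when the `risk` model also contains an `alcohol * hdl` term, which this simple decomposition ignores.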

  16. Standardization of enterococci density estimates by EPA qPCR methods and comparison of beach action value exceedances in river waters with culture methods

    EPA Science Inventory

    The U.S. EPA has published recommendations for calibrator cell equivalent (CCE) densities of enterococci in recreational waters determined by a qPCR method in its 2012 Recreational Water Quality Criteria (RWQC). The CCE quantification unit stems from the calibration model used to ...

  17. A generalised random encounter model for estimating animal density with remote sensor data.

    PubMed

    Lucas, Tim C D; Moorcroft, Elizabeth A; Freeman, Robin; Rowcliffe, J Marcus; Jones, Kate E

    2015-05-01

    Wildlife monitoring technology is advancing rapidly and the use of remote sensors such as camera traps and acoustic detectors is becoming common in both terrestrial and marine environments. Current methods to estimate abundance or density require individual recognition of animals or knowing the distance of the animal from the sensor, which is often difficult. A method without these requirements, the random encounter model (REM), has been successfully applied to estimate animal densities from count data generated from camera traps. However, count data from acoustic detectors do not fit the assumptions of the REM due to the directionality of animal signals. We developed a generalised REM (gREM) to estimate absolute animal density from count data from both camera traps and acoustic detectors. We derived the gREM for different combinations of sensor detection widths and animal signal widths (a measure of directionality). We tested the accuracy and precision of this model using simulations of different combinations of sensor detection widths and animal signal widths, numbers of captures and models of animal movement. We find that the gREM produces accurate estimates of absolute animal density for all combinations of sensor detection widths and animal signal widths, although larger sensor detection and animal signal widths yield more precise estimates. While the model is accurate for all capture efforts tested, the precision of the estimate increases with the number of captures. We found no effect of different animal movement models on the accuracy and precision of the gREM. We conclude that the gREM provides an effective method to estimate absolute animal densities from remote sensor count data over a range of sensor and animal signal widths. The gREM is applicable for count data obtained in both marine and terrestrial environments, visually or acoustically (e.g. big cats, sharks, birds, echolocating bats and cetaceans). As sensors such as camera traps and acoustic detectors become more ubiquitous, the gREM will be increasingly useful for monitoring unmarked animal populations across broad spatial, temporal and taxonomic scales.
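    For the camera-trap special case that the gREM generalises, the original REM of Rowcliffe et al. (2008) gives density as the trap rate times π/(v r (2 + θ)), where v is day range, r the detection radius and θ the detection arc. A small sketch with hypothetical survey numbers:

```python
import math

def rem_density(counts, effort_days, speed, radius, theta):
    """Random encounter model (Rowcliffe et al. 2008), the camera-trap
    special case the gREM generalises. counts/effort_days is the trap rate,
    speed the day range (km/day), radius (km) and theta (radians) the
    sensor detection zone."""
    trap_rate = counts / effort_days
    return trap_rate * math.pi / (speed * radius * (2 + theta))

# hypothetical survey: 120 detections over 1000 camera-days, animals
# travelling 4 km/day, 10 m detection radius, 40 degree detection arc
d = rem_density(120, 1000, speed=4.0, radius=0.01, theta=math.radians(40))
# d is in animals per km^2 (about 3.5 here)
```

    The gREM's contribution is to replace the fixed detection-zone geometry with one parameterised by both the sensor width and the animal's signal width.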

  18. Inferring animal densities from tracking data using Markov chains.

    PubMed

    Whitehead, Hal; Jonsen, Ian D

    2013-01-01

    The distributions and relative densities of species are key to ecology. Large amounts of tracking data are being collected on a wide variety of animal species using several methods, especially electronic tags that record location. These tracking data are used effectively for many purposes, but generally provide biased measures of distribution, because the starts of the tracks are not randomly distributed among the locations used by the animals. We introduce a simple Markov-chain method that produces unbiased measures of relative density from tracking data. The density estimates can be over a geographical grid, and/or relative to environmental measures. The method assumes that the tracked animals are a random subset of the population in respect to how they move through the habitat cells, and that the movements of the animals among the habitat cells form a time-homogeneous Markov chain. We illustrate the method using simulated data as well as real data on the movements of sperm whales. The simulations illustrate the bias introduced when the initial tracking locations are not randomly distributed, as well as the lack of bias when the Markov method is used. We believe that this method will be important in giving unbiased estimates of density from the growing corpus of animal tracking data.
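    A minimal sketch of the idea, assuming tracks are already discretised into habitat-cell sequences: count cell-to-cell transitions, row-normalise, and take the stationary distribution of the resulting chain as the relative density (toy data, not the sperm whale analysis):

```python
import numpy as np

def stationary_density(tracks, n_cells):
    """Relative density via a time-homogeneous Markov chain: count transitions
    between habitat cells, row-normalise, and take the stationary distribution,
    which does not depend on where the tracks happened to start. Assumes every
    cell appears as a departure point, so no row is all zero."""
    T = np.zeros((n_cells, n_cells))
    for track in tracks:                      # each track: a sequence of cell ids
        for a, b in zip(track[:-1], track[1:]):
            T[a, b] += 1
    T /= T.sum(axis=1, keepdims=True)         # transition probabilities
    pi = np.full(n_cells, 1.0 / n_cells)
    for _ in range(500):                      # power iteration to the fixed point
        pi = pi @ T
    return pi / pi.sum()

# toy tracks over 2 cells: animals leave cell 0 immediately but linger in cell 1
tracks = [[0, 1, 1, 1, 0, 1, 1, 1, 1, 0], [1, 1, 0, 1, 1, 1, 1, 1, 1, 1]]
dens = stationary_density(tracks, 2)          # heavily favours cell 1
```

    Even if every toy track had started in cell 0, the stationary distribution would be the same; that start-point invariance is exactly the bias correction the abstract describes.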

  19. Segmentation of 3D microPET images of the rat brain via the hybrid gaussian mixture method with kernel density estimation.

    PubMed

    Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing

    2012-01-01

    Segmentation of positron emission tomography (PET) images is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method needs a prior estimation of parameters such as the number of clusters and appropriate initialized values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial for registering disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnoses and for estimating standardized uptake values (SUVs) of regions of interest (ROIs) in PET images. Simulation studies using spherical targets are therefore conducted to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. PET images of a rat brain are used to compare the segmented shape and area of the cerebral cortex obtained by the K-Means method and the proposed method, using volume rendering. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than the K-Means method.
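    Tanimoto's similarity, used here to score segmentation accuracy, is simply intersection over union of binary masks. A small self-contained sketch (toy masks, not microPET data):

```python
import numpy as np

def tanimoto(mask_a, mask_b):
    """Tanimoto (Jaccard) similarity between two binary segmentation masks:
    |A & B| / |A | B|; 1.0 means a perfect match."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True  # 16-pixel target
seg = np.zeros((8, 8), dtype=bool); seg[3:7, 2:6] = True      # shifted by one row
score = tanimoto(truth, seg)                                  # 12/20 = 0.6
```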

  20. Determining mutation density using Restriction Enzyme Sequence Comparative Analysis (RESCAN)

    USDA-ARS?s Scientific Manuscript database

    The average mutation density of a mutant population is a major consideration when developing resources for the efficient, cost-effective implementation of reverse genetics methods such as Targeting Induced Local Lesions IN Genomes (TILLING). Reliable estimates of mutation density can be achieved ...

  1. Inverse modeling of Asian (222)Rn flux using surface air (222)Rn concentration.

    PubMed

    Hirao, Shigekazu; Yamazawa, Hiromi; Moriizumi, Jun

    2010-11-01

    When used with an atmospheric transport model, the (222)Rn flux distribution estimated in our previous study using soil transport theory caused underestimation of atmospheric (222)Rn concentrations as compared with measurements in East Asia. In this study, we applied a Bayesian synthesis inverse method to produce revised estimates of the annual (222)Rn flux density in Asia by using atmospheric (222)Rn concentrations measured at seven sites in East Asia. The Bayesian synthesis inverse method requires a prior estimate of the flux distribution and its uncertainties. The atmospheric transport model MM5/HIRAT and our previous estimate of the (222)Rn flux distribution as the prior value were used to generate new flux estimates for the eastern half of the Eurasian continent, divided into 10 regions. The (222)Rn flux densities estimated using the Bayesian inversion technique were generally higher than the prior flux densities. The area-weighted average (222)Rn flux density for Asia was estimated to be 33.0 mBq m(-2) s(-1), which is substantially higher than the prior value (16.7 mBq m(-2) s(-1)). The estimated (222)Rn flux densities decrease with increasing latitude as follows: Southeast Asia (36.7 mBq m(-2) s(-1)); East Asia (28.6 mBq m(-2) s(-1)) including China, the Korean Peninsula and Japan; and Siberia (14.1 mBq m(-2) s(-1)). The increases of the newly estimated fluxes in Southeast Asia, China, Japan, and the southern part of Eastern Siberia over the prior ones contributed most significantly to the improved agreement of the model-calculated concentrations with the atmospheric measurements. The sensitivity analysis of prior flux errors and effects of locally exhaled (222)Rn showed that the estimated fluxes in Northern and Central China, Korea, Japan, and the southern part of Eastern Siberia were robust, but that in Central Asia had a large uncertainty.

  2. Estimating snowpack density from Albedo measurement

    Treesearch

    James L. Smith; Howard G. Halverson

    1979-01-01

    Snow is a major source of water in Western United States. Data on snow depth and average snowpack density are used in mathematical models to predict water supply. In California, about 75 percent of the snow survey sites above 2750-meter elevation now used to collect data are in statutory wilderness areas. There is need for a method of estimating the water content of a...

  3. Lodgepole pine bole wood density 1 and 11 years after felling in central Montana

    Treesearch

    Duncan C. Lutes; Colin C. Hardy

    2013-01-01

    Estimates of large dead and down woody material biomass are used for evaluating ecological processes and making ecological assessments, such as for nutrient cycling, wildlife habitat, fire effects, and climate change science. Many methods are used to assess the abundance (volume) of woody material, which ultimately require an estimate of wood density to convert volume...

  4. Uncertainty Quantification Techniques for Population Density Estimates Derived from Sparse Open Source Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Robert N; White, Devin A; Urban, Marie L

    2013-01-01

    The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity-based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach, knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort, which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
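    The paper's bivariate-Gaussian encoding is specific to its elicitation questions, but the underlying idea of turning non-statistical answers into a Beta prior can be sketched with the standard mean/effective-sample-size parameterisation. This is a generic stand-in, not the algorithm described above:

```python
def beta_from_elicitation(mean, effective_n):
    """Encode an elicitation into a Beta(a, b) prior: the contributor states
    a typical proportion and how many 'observations' their experience is
    worth. Larger effective_n means a tighter, more confident prior."""
    if not 0.0 < mean < 1.0 or effective_n <= 0:
        raise ValueError("mean must be in (0,1) and effective_n positive")
    return mean * effective_n, (1.0 - mean) * effective_n

# e.g. "about a quarter of the building is occupied; I'd stake ~20 observations"
a, b = beta_from_elicitation(0.25, 20)          # Beta(5, 15)
prior_mean = a / (a + b)                        # recovers 0.25
prior_var = a * b / ((a + b) ** 2 * (a + b + 1))
```

    The contributor can then appraise the result by looking at the implied spread, which addresses the "appraise whether their understanding was well captured" requirement in a crude way.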

  5. Improving the Accuracy of Mapping Urban Vegetation Carbon Density by Combining Shadow Remove, Spectral Unmixing Analysis and Spatial Modeling

    NASA Astrophysics Data System (ADS)

    Qie, G.; Wang, G.; Wang, M.

    2016-12-01

    Mixed pixels and shadows due to buildings in urban areas impede accurate estimation and mapping of city vegetation carbon density. In most previous studies these factors were ignored, resulting in underestimation of city vegetation carbon density. In this study we present an integrated methodology to improve the accuracy of mapping city vegetation carbon density. Firstly, we applied a linear shadow removal analysis (LSRA) to remotely sensed Landsat 8 images to reduce the shadow effects on carbon estimation. Secondly, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), and applied and compared the integrated models on shadow-removed images to map vegetation carbon density. This methodology was examined in Shenzhen City, Southeast China. A data set from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables that contributed statistically significantly to improving the fit of the models and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from LSUA was then added to the models as an important independent variable. The estimates obtained were evaluated using a cross-validation method. Our results showed that the integrated models yielded higher accuracies than traditional methods that ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: Urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images
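    Linear spectral unmixing solves pixel ≈ E·f for non-negative endmember fractions f. Below is a hedged sketch using projected gradient descent with hypothetical two-endmember spectra, a simple stand-in for the constrained least-squares solvers normally used:

```python
import numpy as np

def unmix(pixel, endmembers, iters=5000, lr=0.01):
    """Linear spectral unmixing sketch: find non-negative fractions f with
    pixel ~= E @ f via projected gradient descent, then normalise so the
    fractions sum to one."""
    E = np.asarray(endmembers, dtype=float)   # bands x endmembers
    f = np.full(E.shape[1], 1.0 / E.shape[1])
    for _ in range(iters):
        grad = E.T @ (E @ f - pixel)          # least-squares gradient
        f = np.clip(f - lr * grad, 0, None)   # project onto f >= 0
    return f / f.sum()

# hypothetical endmember spectra (vegetation, soil) over 4 bands
E = np.array([[0.05, 0.30],
              [0.08, 0.35],
              [0.45, 0.25],
              [0.50, 0.20]])
pixel = 0.7 * E[:, 0] + 0.3 * E[:, 1]         # a 70% vegetation pixel
frac = unmix(pixel, E)                        # recovers ~[0.7, 0.3]
```

    The recovered vegetation fraction is exactly the kind of LSUA output the abstract describes feeding into the regression and kNN models as an extra predictor.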

  6. Critical thresholds in sea lice epidemics: evidence, sensitivity and subcritical estimation

    PubMed Central

    Frazer, L. Neil; Morton, Alexandra; Krkošek, Martin

    2012-01-01

    Host density thresholds are a fundamental component of the population dynamics of pathogens, but empirical evidence and estimates are lacking. We studied host density thresholds in the dynamics of ectoparasitic sea lice (Lepeophtheirus salmonis) on salmon farms. Empirical examples include a 1994 epidemic in Atlantic Canada and a 2001 epidemic in Pacific Canada. A mathematical model suggests dynamics of lice are governed by a stable endemic equilibrium until the critical host density threshold drops owing to environmental change, or is exceeded by stocking, causing epidemics that require rapid harvest or treatment. Sensitivity analysis of the critical threshold suggests variation in dependence on biotic parameters and high sensitivity to temperature and salinity. We provide a method for estimating the critical threshold from parasite abundances at subcritical host densities and estimate the critical threshold and transmission coefficient for the two epidemics. Host density thresholds may be a fundamental component of disease dynamics in coastal seas where salmon farming occurs. PMID:22217721

  7. An optimally weighted estimator of the linear power spectrum disentangling the growth of density perturbations across galaxy surveys

    NASA Astrophysics Data System (ADS)

    Sorini, D.

    2017-04-01

    Measuring the clustering of galaxies from surveys allows us to estimate the power spectrum of matter density fluctuations, thus constraining cosmological models. This requires careful modelling of observational effects to avoid misinterpretation of data. In particular, signals coming from different distances encode information from different epochs. This is known as the "light-cone effect" and will have a greater impact as upcoming galaxy surveys probe larger redshift ranges. Generalising the method by Feldman, Kaiser and Peacock (1994) [1], I define a minimum-variance estimator of the linear power spectrum at a fixed time, properly taking into account the light-cone effect. An analytic expression for the estimator is provided, which is consistent with the findings of previous works in the literature. I test the method within the context of the Halofit model, assuming Planck 2014 cosmological parameters [2]. I show that the estimator presented recovers the fiducial linear power spectrum at present time within 5% accuracy up to k ~ 0.80 h Mpc^-1 and within 10% up to k ~ 0.94 h Mpc^-1, well into the non-linear regime of the growth of density perturbations. As such, the method could be useful in the analysis of data from future large-scale surveys, like Euclid.
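    The Feldman-Kaiser-Peacock scheme being generalised weights each galaxy by w = 1/(1 + n(z) P0), down-weighting dense regions where sample variance dominates over shot noise. A minimal sketch with hypothetical number densities:

```python
import numpy as np

def fkp_weights(n_density, P0=1e4):
    """Feldman-Kaiser-Peacock minimum-variance weights w = 1 / (1 + n(z) P0).
    n_density is the expected galaxy number density at each object's redshift
    (per (Mpc/h)^3), P0 a fiducial power-spectrum amplitude; both hypothetical."""
    n_density = np.asarray(n_density, dtype=float)
    return 1.0 / (1.0 + n_density * P0)

# dense nearby samples are down-weighted relative to sparse distant ones
w = fkp_weights([1e-3, 1e-4, 1e-5])
```

    The light-cone generalisation in this paper amounts to making such weights, and the estimator built from them, consistently account for the epoch-dependence of the clustering signal along the line of sight.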

  8. Estimation of dislocations density and distribution of dislocations during ECAP-Conform process

    NASA Astrophysics Data System (ADS)

    Derakhshan, Jaber Fakhimi; Parsa, Mohammad Habibi; Ayati, Vahid; Jafarian, Hamidreza

    2018-01-01

    The dislocation density of a coarse-grained aluminum AA1100 alloy (140 µm grain size) severely deformed by Equal Channel Angular Pressing-Conform (ECAP-Conform) was studied at various stages of the process by the electron backscattering diffraction (EBSD) method. The geometrically necessary dislocation (GND) and statistically stored dislocation (SSD) densities were estimated; the total dislocation densities were then calculated and the dislocation distributions presented as contour maps. The estimated average dislocation density increases from about 2×10^12 m^-2 in the annealed state to 4×10^13 m^-2 at the middle of the groove (135° from the entrance), reaching 6.4×10^13 m^-2 at the end of the groove just before the ECAP region. The calculated average dislocation density for an Al sample severely deformed by one pass reached 6.2×10^14 m^-2. At the micrometer scale the behavior of metals, especially their mechanical properties, depends largely on dislocation density and distribution. Yield stresses at different conditions were therefore estimated from the calculated dislocation densities, and the estimates agreed well with experimental results. Although the grain size of the material did not change appreciably, the yield stress increased markedly owing to the development of a cell structure. The considerable increase in dislocation density during this process supports the formation of subgrains and cell structures, which can explain the increase in yield stress.
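    Estimating yield stress from dislocation density, the final step above, classically uses the Taylor hardening relation sigma = sigma0 + M·alpha·G·b·sqrt(rho). A sketch with generic textbook aluminium constants (hypothetical, not the paper's fitted values), fed the densities quoted in the abstract:

```python
import math

def taylor_stress(rho, sigma0=10e6, M=3.06, alpha=0.3, G=26e9, b=0.286e-9):
    """Taylor hardening: sigma = sigma0 + M*alpha*G*b*sqrt(rho), with rho the
    total dislocation density (m^-2). Constants are generic textbook values
    for aluminium: Taylor factor M, alpha ~ 0.3, shear modulus G, Burgers
    vector b, and a friction stress sigma0."""
    return sigma0 + M * alpha * G * b * math.sqrt(rho)

annealed = taylor_stress(2e12)    # annealed density from the abstract, in Pa
one_pass = taylor_stress(6.2e14)  # density after one ECAP-Conform pass, in Pa
```

    The roughly order-of-magnitude rise in predicted yield stress, despite an unchanged grain size, illustrates the abstract's point that the strengthening comes from the dislocation cell structure rather than grain refinement.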

  9. Estimating carbon and nitrogen pools in a forest soil: Influence of soil bulk density methods and rock content

    Treesearch

    Martin F. Jurgensen; Deborah S. Page-Dumroese; Robert E. Brown; Joanne M. Tirocke; Chris A. Miller; James B. Pickens; Min Wang

    2017-01-01

    Soils with high rock content are common in many US forests, and contain large amounts of stored C. Accurate measurements of soil bulk density and rock content are critical for calculating and assessing changes in both C and nutrient pool size, but bulk density sampling methods have limitations and sources of variability. Therefore, we evaluated the use of small-...

  10. Mixed effects modelling for glass category estimation from glass refractive indices.

    PubMed

    Lucy, David; Zadora, Grzegorz

    2011-10-10

    In total, 520 glass fragments were taken from 105 glass items. Each item was either a container, a window, or glass from an automobile; each of these three classes of use is defined as a glass category. Refractive indices were measured both before and after a programme of re-annealing. Because the refractive index of each fragment could not itself be observed before and after re-annealing, a model-based approach was used to estimate the change in refractive index for each glass category. It was found that less complex estimation methods were equivalent to the full model, and these were subsequently used. The change in refractive index was then used to calculate a measure of the evidential value for each item belonging to each glass category. The distributions of refractive index change were considered for each glass category, and it was found that, possibly due to small samples, members of the normal family would not adequately model the refractive index changes within two of the use types considered here. Two alternative approaches to modelling the change in refractive index were therefore used: one employed the more established kernel density estimates, the other a newer approach called log-concave estimation. Either method, when applied to the change in refractive index, gave good estimates of glass category; however, on all performance metrics kernel density estimates were found to be slightly better than log-concave estimates, although the log-concave estimates possessed properties with some qualitative appeal not captured by the selected measures of performance. These results and the implications of these two methods of estimating probability densities for glass refractive indices are discussed. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
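    A kernel density estimate of the kind compared here is easy to sketch: a Gaussian kernel with Silverman's rule-of-thumb bandwidth (hypothetical delta-RI values; the log-concave alternative needs a dedicated solver and is omitted):

```python
import math

def gaussian_kde(data, bandwidth=None):
    """1-D Gaussian kernel density estimate with Silverman's rule-of-thumb
    bandwidth h = 1.06 * sd * n^(-1/5) when none is supplied."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    h = bandwidth or 1.06 * sd * n ** -0.2
    def pdf(x):
        z = sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)
        return z / (n * h * math.sqrt(2 * math.pi))
    return pdf

# hypothetical refractive-index changes for one glass category
d_ri = [1.2e-3, 1.5e-3, 1.1e-3, 1.6e-3, 1.4e-3, 1.3e-3]
pdf = gaussian_kde(d_ri)
```

    Evaluating such a per-category density at a questioned fragment's delta-RI is the basic ingredient of the evidential-value calculation the abstract describes.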

  11. Evaluation of methods to estimate lake herring spawner abundance in Lake Superior

    USGS Publications Warehouse

    Yule, D.L.; Stockwell, J.D.; Cholwek, G.A.; Evrard, L.M.; Schram, S.; Seider, M.; Symbal, M.

    2006-01-01

    Historically, commercial fishers harvested Lake Superior lake herring Coregonus artedi for their flesh, but recently operators have targeted lake herring for roe. Because no surveys have estimated spawning female abundance, direct estimates of fishing mortality are lacking. The primary objective of this study was to determine the feasibility of using acoustic techniques in combination with midwater trawling to estimate spawning female lake herring densities in a Lake Superior statistical grid (i.e., a 10′ latitude × 10′ longitude area over which annual commercial harvest statistics are compiled). Midwater trawling showed that mature female lake herring were largely pelagic during the night in late November, accounting for 94.5% of all fish caught exceeding 250 mm total length. When calculating acoustic estimates of mature female lake herring, we excluded backscattering from smaller pelagic fishes like immature lake herring and rainbow smelt Osmerus mordax by applying an empirically derived threshold of −35.6 dB. We estimated the average density of mature females in statistical grid 1409 at 13.3 fish/ha and the total number of spawning females at 227,600 (95% confidence interval = 172,500–282,700). Using information on mature female densities, size structure, and fecundity, we estimate that females deposited 3.027 billion (10^9) eggs in grid 1409 (95% confidence interval = 2.356–3.778 billion). The relative estimation error of the mature female density estimate derived using a geostatistical model-based approach was low (12.3%), suggesting that the employed method was robust. Fishing mortality rates of all mature females and their eggs were estimated at 2.3% and 3.8%, respectively. The techniques described for enumerating spawning female lake herring could be used to develop a more accurate stock–recruitment model for Lake Superior lake herring.
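    The reported figures can be cross-checked with simple arithmetic. The grid area and mean fecundity below are implied by, not stated in, the abstract, so treat them as approximate:

```python
# Back-of-envelope check of the reported lake herring figures.
density = 13.3                              # mature females per hectare
total_females = 227_600                     # point estimate for grid 1409
area_ha = total_females / density           # ~17,100 ha implied for the grid
eggs_per_female = 3.027e9 / total_females   # ~13,300 eggs: implied mean fecundity
harvest_mortality = 0.023                   # reported 2.3% mortality of females
females_removed = harvest_mortality * total_females  # ~5,200 females
```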

  12. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006.

    PubMed

    Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina

    2016-09-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76-0.92, P < .05). Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. © RSNA, 2016 Online supplemental material is available for this article.
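    Bland-Altman limits of agreement, one of the agreement measures used here, are just the mean paired difference plus or minus 1.96 standard deviations of the differences. A sketch on hypothetical paired percent-density readings:

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bland-Altman limits of agreement between paired measurements:
    mean difference +/- 1.96 * SD of the differences."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    spread = 1.96 * d.std(ddof=1)
    return bias - spread, bias, bias + spread

# hypothetical paired percent-density readings (standard vs low dose)
std_dose = [22.1, 35.4, 18.9, 41.2, 27.5]
low_dose = [21.8, 36.0, 18.5, 40.6, 27.9]
lo, bias, hi = bland_altman_limits(std_dose, low_dose)
```

    A bias near zero with narrow limits, as in this toy example, is the pattern the study reports when comparing dose protocols.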

  13. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006

    PubMed Central

    Chen, Lin; Ray, Shonket; Keller, Brad M.; Pertuz, Said; McDonald, Elizabeth S.; Conant, Emily F.

    2016-01-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88–0.95; weighted κ = 0.83–0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76–0.92, P < .05). Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. © RSNA, 2016 Online supplemental material is available for this article. PMID:27002418

  14. A Projection and Density Estimation Method for Knowledge Discovery

    PubMed Central

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675
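The claim that all estimation happens in 1d-space rests on ordinary one-dimensional density estimators. A standard Gaussian kernel density estimator with Silverman's bandwidth rule gives a minimal sketch of that primitive (illustrative code, not the authors' software; `kde_1d` and the bandwidth rule are assumptions):

```python
import numpy as np

def kde_1d(samples, xs, bandwidth=None):
    """Gaussian kernel density estimate of 1-D `samples`, evaluated at `xs`."""
    samples = np.asarray(samples, dtype=float)
    if bandwidth is None:
        # Silverman's rule of thumb for a Gaussian kernel
        bandwidth = 1.06 * samples.std() * len(samples) ** (-1 / 5)
    z = (xs[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (
        len(samples) * bandwidth * np.sqrt(2 * np.pi)
    )
```

Because the estimate lives in one dimension, its cost grows only with the number of samples and evaluation points, which is the property the framework exploits.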

  15. Hybrid reconstruction of quantum density matrix: when low-rank meets sparsity

    NASA Astrophysics Data System (ADS)

    Li, Kezhi; Zheng, Kai; Yang, Jingbei; Cong, Shuang; Liu, Xiaomei; Li, Zhaokai

    2017-12-01

    Both mathematical theory and experiments have verified that quantum state tomography based on compressive sensing is an efficient framework for the reconstruction of quantum density states. In recent physical experiments, we found that many unknown density matrices of interest are low-rank as well as sparse. Bearing this information in mind, in this paper we propose a reconstruction algorithm that combines the low-rank and sparsity properties of density matrices, and we further prove theoretically that the solution of the optimization function can be, and only be, the true density matrix satisfying the model with overwhelming probability, provided a sufficient number of measurements is taken. The solver leverages a fixed-point equation technique in which a step-by-step strategy is developed using an extended soft-threshold operator that copes with complex values. Numerical experiments on density matrix estimation for real nuclear magnetic resonance devices reveal that the proposed method achieves better accuracy than some existing methods. We believe that the proposed method could serve as a generalized approach and be widely implemented in quantum state estimation.
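A soft-threshold operator extended to complex values can be sketched as magnitude shrinkage with the phase preserved (an illustrative sketch under that interpretation, not the authors' solver; the function name is assumed):

```python
import numpy as np

def soft_threshold_complex(M, tau):
    """Elementwise soft-thresholding for complex entries:
    shrink each entry's magnitude by tau, keep its phase."""
    mag = np.abs(M)
    # scale factor max(|m| - tau, 0) / |m|, with 0 entries left at 0
    scale = np.maximum(mag - tau, 0.0) / np.where(mag > 0, mag, 1.0)
    return M * scale
```

Applied iteratively inside a fixed-point scheme, this promotes sparsity while remaining well defined for the complex entries of a density matrix.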

  16. The density of apical cells of dark-grown protonemata of the moss Ceratodon purpureus

    NASA Technical Reports Server (NTRS)

    Schwuchow, J. M.; Kern, V. D.; Wagner, T.; Sack, F. D.

    2000-01-01

    Determinations of plant or algal cell density (cell mass divided by volume) have rarely accounted for the extracellular matrix or shrinkage during isolation. Three techniques were used to indirectly estimate the density of intact apical cells from protonemata of the moss Ceratodon purpureus. First, the volume fraction of each cell component was determined by stereology, and published values for component density were used to extrapolate to the entire cell. Second, protonemal tips were immersed in bovine serum albumin solutions of different densities, and then the equilibrium density was corrected for the mass of the cell wall. Third, apical cell protoplasts were centrifuged in low-osmolarity gradients, and values were corrected for shrinkage during protoplast isolation. Values from centrifugation (1.004 to 1.015 g/cm3) were considerably lower than from other methods (1.046 to 1.085 g/cm3). This work appears to provide the first corrected estimates of the density of any plant cell. It also documents a method for the isolation of protoplasts specifically from apical cells of protonemal filaments.

  17. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshifts directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently of the type of source, e.g. galaxies, quasars or stars, and renders pre-classification of objects and feature extraction unnecessary. The method is extremely general and allows any kind of probabilistic regression problem based on imaging data, such as estimating metallicity or star formation rate in galaxies, to be solved.
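The CRPS mentioned above has a closed form for a single Gaussian forecast, shown here as an illustrative sketch (the paper scores full Gaussian mixtures, whose closed form is more involved; the function name is assumed):

```python
import math

def crps_gaussian(mu, sigma, x):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2)
    against an observation x (Gneiting & Raftery's formula)."""
    z = (x - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)       # standard normal pdf
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))                # standard normal cdf
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))
```

Lower values are better; unlike a point-error metric, the score penalizes both miscalibration and needlessly wide PDFs.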

  18. Direct sampling for stand density index

    Treesearch

    Mark J. Ducey; Harry T. Valentine

    2008-01-01

    A direct method of estimating stand density index in the field, without complex calculations, would be useful in a variety of silvicultural situations. We present just such a method. The approach uses an ordinary prism or other angle gauge, but it involves deliberately "pushing the point" or, in some cases, "pulling the point." This adjusts the...

  19. Estimation of vegetation cover at subpixel resolution using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Eagleson, Peter S.

    1986-01-01

    The present report summarizes the various approaches relevant to estimating canopy cover at subpixel resolution. The approaches are based on physical models of radiative transfer in non-homogeneous canopies and on empirical methods. The effects of vegetation shadows and topography are examined. Simple versions of the model are tested, using the Taos, New Mexico Study Area database. Emphasis has been placed on using relatively simple models requiring only one or two bands. Although most methods require some degree of ground truth, a two-band method is investigated whereby the percent cover can be estimated without ground truth by examining the limits of the data space. Future work is proposed which will incorporate additional surface parameters into the canopy cover algorithm, such as topography, leaf area, or shadows. The method involves deriving a probability density function for the percent canopy cover based on the joint probability density function of the observed radiances.

  20. Improving the Navy’s Passive Underwater Acoustic Monitoring of Marine Mammal Populations

    DTIC Science & Technology

    2013-09-30

    passive acoustic monitoring: Correcting humpback whale call detections for site-specific and time-dependent environmental characteristics ,” JASA Exp...marine mammal species using passive acoustic monitoring, with application to obtaining density estimates of transiting humpback whale populations in...minimize the variance of the density estimates, 3) to apply the numerical modeling methods for humpback whale vocalizations to understand distortions

  1. Integrating resource selection information with spatial capture–recapture

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.

    2013-01-01

    4. Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.

  2. Improving the Navy’s Passive Underwater Acoustic Monitoring of Marine Mammal Populations

    DTIC Science & Technology

    2014-09-30

    species using passive acoustic monitoring, with application to obtaining density estimates of transiting humpback whale populations in the Southern...of the density estimates, 3) to apply the numerical modeling methods for humpback whale vocalizations to understand distortions caused by...obtained. The specific approach being followed to accomplish objectives 1-4 above is listed below. 1) Detailed numerical modeling of humpback whale

  3. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Washeleski, Robert L.; Meyer, Edmond J. IV; King, Lyon B.

    2013-10-15

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.

  4. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas.

    PubMed

    Washeleski, Robert L; Meyer, Edmond J; King, Lyon B

    2013-10-01

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.
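Why non-arrival events carry information can be illustrated with a toy Poisson model: if each laser shot either registers a photon or not, the fraction of empty shots alone pins down the mean photon rate (a deliberately simplified sketch, not the authors' full maximum-likelihood fit of the Thomson spectrum; the function name and binary detect/no-detect model are assumptions):

```python
import math

def poisson_rate_from_nonarrivals(n_shots, n_empty):
    """MLE of the mean photon count per shot when the detector only
    reports photon / no-photon. Under a Poisson arrival model,
    P(no photon) = exp(-lam), so lam_hat = -ln(n_empty / n_shots)."""
    if n_empty == 0:
        raise ValueError("rate unbounded: every shot detected a photon")
    return -math.log(n_empty / n_shots)
```

Discarding the empty shots would throw away exactly the statistic this estimator uses, which is why counting non-arrivals matters at low signal levels.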

  5. Estimating black bear density in New Mexico using noninvasive genetic sampling coupled with spatially explicit capture-recapture methods

    USGS Publications Warehouse

    Gould, Matthew J.; Cain, James W.; Roemer, Gary W.; Gould, William R.

    2016-01-01

    During the 2004–2005 to 2015–2016 hunting seasons, the New Mexico Department of Game and Fish (NMDGF) estimated black bear (Ursus americanus) abundance across the state by coupling density estimates with the distribution of primary habitat generated by Costello et al. (2001). These estimates have been used to set harvest limits. For example, densities of 17 bears/100 km2 for the Sangre de Cristo and Sacramento Mountains and 13.2 bears/100 km2 for the Sandia Mountains were used to set harvest levels. The advancement and widespread acceptance of non-invasive sampling and mark-recapture methods prompted the NMDGF to collaborate with the New Mexico Cooperative Fish and Wildlife Research Unit and New Mexico State University to update their density estimates for black bear populations in select mountain ranges across the state. We established 5 study areas in 3 mountain ranges: the northern (NSC; sampled in 2012) and southern Sangre de Cristo Mountains (SSC; sampled in 2013), the Sandia Mountains (Sandias; sampled in 2014), and the northern (NSacs) and southern Sacramento Mountains (SSacs; both sampled in 2014). We collected hair samples from black bears using two concurrent non-invasive sampling methods, hair traps and bear rubs. We used a gender marker and a suite of microsatellite loci to determine the individual identification of hair samples that were suitable for genetic analysis. We used these data to generate mark-recapture encounter histories for each bear and estimated density in a spatially explicit capture-recapture framework (SECR). We constructed a suite of SECR candidate models using sex, elevation, land cover type, and time to model heterogeneity in detection probability and the spatial scale over which detection probability declines. 
We used Akaike's Information Criterion corrected for small sample size (AICc) to rank and select the most supported model from which we estimated density. We set 554 hair traps and 117 bear rubs and collected 4,083 hair samples. We identified 725 (367 M, 358 F) individuals; the sex ratio for each study area was approximately equal. Our density estimates varied within and among mountain ranges, with an estimated density of 21.86 bears/100 km2 (95% CI: 17.83 – 26.80) for the NSC, 19.74 bears/100 km2 (95% CI: 13.77 – 28.30) in the SSC, 25.75 bears/100 km2 (95% CI: 13.22 – 50.14) in the Sandias, 21.86 bears/100 km2 (95% CI: 17.83 – 26.80) in the NSacs, and 16.55 bears/100 km2 (95% CI: 11.64 – 23.53) in the SSacs. Overall detection probability for hair traps and bear rubs combined was low across all study areas and ranged from 0.00001 to 0.02. We speculate that detection probabilities were affected by failure of some hair samples to produce a complete genotype due to UV degradation of DNA, and by our inability to set and check some sampling devices due to wildfires in the SSC. Ultraviolet radiation levels are particularly high in New Mexico compared to other states where NGS methods have been used because New Mexico receives substantial amounts of sunshine, is relatively high in elevation (1,200 m – 4,000 m), and is at a lower latitude. Despite these sampling difficulties, we were able to produce density estimates for New Mexico black bear populations with levels of precision comparable to black bear density estimates made elsewhere in the U.S. Our ability to generate reliable black bear density estimates for 3 New Mexico mountain ranges is attributable to our use of a statistically robust study design and analytical method. There are multiple factors that need to be considered when developing future SECR-based density estimation projects. 
First, the spatial extent of the population of interest and the smallest average home range size must be determined; these will dictate the size of the trapping array and the spacing necessary between hair traps. The number of technicians needed and access to the study areas will also influence the configuration of the trapping array. We believe shorter sampling occasions could be implemented to reduce degradation of DNA due to UV radiation; this might help increase amplification rates and thereby increase both the number of unique individuals identified and the number of recaptures, improving the precision of the density estimates. A pilot study may be useful to determine the length of time hair samples can remain in the field prior to collection. In addition, researchers may consider setting hair traps and bear rubs in more shaded areas (e.g., north-facing slopes) to help reduce exposure to UV radiation. To reduce the sampling interval, it will be necessary to either hire more field personnel or decrease the number of hair traps per sampling session. Both of these will enhance detection of long-range movement events by individual bears, increase initial capture and recapture rates, and improve precision of the parameter estimates. We recognize that all studies are constrained by limited resources; however, increasing field personnel would also allow a larger study area to be sampled or enable higher trap density. In conclusion, we estimated the density of black bears in 5 study areas within 3 mountain ranges of New Mexico. Our estimates will aid the NMDGF in setting sustainable harvest limits. Along with estimates of density, information on additional demographic rates (e.g., survival rates and reproduction) and the potential effects that climate change and future land use may have on the demography of black bears may also help inform management of black bears in New Mexico, and may be considered as future areas for research.
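The model-ranking step above relies on AICc, which is simple to state in code (a generic sketch; the function name and example values are illustrative, not the study's actual models):

```python
def aicc(log_lik, k, n):
    """Akaike's Information Criterion corrected for small sample size:
    AICc = -2*logL + 2k + 2k(k+1)/(n - k - 1),
    where k is the number of parameters and n the sample size."""
    aic = -2.0 * log_lik + 2.0 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)
```

Candidate models are ranked by AICc, lowest first; the correction term penalizes parameter-heavy models more strongly when the effective sample size is small, as with sparse recapture data.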

  6. Alternating steady state free precession for estimation of current-induced magnetic flux density: A feasibility study.

    PubMed

    Lee, Hyunyeol; Jeong, Woo Chul; Kim, Hyung Joong; Woo, Eung Je; Park, Jaeseok

    2016-05-01

    To develop a novel, current-controlled alternating steady-state free precession (SSFP)-based conductivity imaging method and corresponding MR signal models to estimate current-induced magnetic flux density (Bz ) and conductivity distribution. In the proposed method, an SSFP pulse sequence, which is in sync with alternating current pulses, produces dual oscillating steady states while yielding nonlinear relation between signal phase and Bz . A ratiometric signal model between the states was analytically derived using the Bloch equation, wherein Bz was estimated by solving a nonlinear inverse problem for conductivity estimation. A theoretical analysis on the signal-to-noise ratio of Bz was given. Numerical and experimental studies were performed using SSFP-FID and SSFP-ECHO with current pulses positioned either before or after signal encoding to investigate the feasibility of the proposed method in conductivity estimation. Given all SSFP variants herein, SSFP-FID with alternating current pulses applied before signal encoding exhibits the highest Bz signal-to-noise ratio and conductivity contrast. Additionally, compared with conventional conductivity imaging, the proposed method benefits from rapid SSFP acquisition without apparent loss of conductivity contrast. We successfully demonstrated the feasibility of the proposed method in estimating current-induced Bz and conductivity distribution. It can be a promising, rapid imaging strategy for quantitative conductivity imaging. © 2015 Wiley Periodicals, Inc.

  7. A pdf-Free Change Detection Test Based on Density Difference Estimation.

    PubMed

    Bu, Li; Alippi, Cesare; Zhao, Dongbin

    2018-02-01

    The ability to detect online changes in stationarity or time variance in a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability density function-free change detection test, which is based on the least squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured by adopting a reservoir sampling mechanism. Thresholds requested to detect a change are automatically derived once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness in detection of the proposed method both in terms of detection promptness and accuracy.
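The least squares density-difference idea can be sketched with a Gaussian-kernel model whose fit has a closed form (illustrative Python under assumed kernel width and regularization; this is a batch sketch, not the authors' online, reservoir-sampled test):

```python
import numpy as np

def lsdd(X1, X2, sigma=1.0, lam=1e-3):
    """Least squares density-difference: model f(x) ~ p1(x) - p2(x) as a
    sum of Gaussian kernels centered on the data, fit the coefficients in
    closed form, and return an estimate of the squared L2 distance
    between the two densities."""
    X1, X2 = np.asarray(X1, float), np.asarray(X2, float)
    centers = np.vstack([X1, X2])
    d = centers.shape[1]

    def K(A, B):  # Gaussian kernel matrix between rows of A and B
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-sq / (2 * sigma**2))

    # H[l, l'] = integral of k(x, c_l) k(x, c_l') dx (Gaussian closed form)
    sq_cc = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H = (np.pi * sigma**2) ** (d / 2) * np.exp(-sq_cc / (4 * sigma**2))
    h = K(X1, centers).mean(axis=0) - K(X2, centers).mean(axis=0)
    theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    return 2 * h @ theta - theta @ H @ theta
```

A change test thresholds this distance between a reference window and the current window; the closed-form fit is what makes the approach cheap enough to run online.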

  8. A strategy for analysis of (molecular) equilibrium simulations: Configuration space density estimation, clustering, and visualization

    NASA Astrophysics Data System (ADS)

    Hamprecht, Fred A.; Peter, Christine; Daura, Xavier; Thiel, Walter; van Gunsteren, Wilfred F.

    2001-02-01

    We propose an approach for summarizing the output of long simulations of complex systems, affording a rapid overview and interpretation. First, multidimensional scaling techniques are used in conjunction with dimension reduction methods to obtain a low-dimensional representation of the configuration space explored by the system. A nonparametric estimate of the density of states in this subspace is then obtained using kernel methods. The free energy surface is calculated from that density, and the configurations produced in the simulation are then clustered according to the topography of that surface, such that all configurations belonging to one local free energy minimum form one class. This topographical cluster analysis is performed using basin spanning trees which we introduce as subgraphs of Delaunay triangulations. Free energy surfaces obtained in dimensions lower than four can be visualized directly using iso-contours and -surfaces. Basin spanning trees also afford a glimpse of higher-dimensional topographies. The procedure is illustrated using molecular dynamics simulations on the reversible folding of peptide analoga. Finally, we emphasize the intimate relation of density estimation techniques to modern enhanced sampling algorithms.
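The density-to-free-energy step described above is just F = -kT ln(rho) applied to the density-of-states estimate (a minimal sketch in assumed units of kT; `free_energy_surface` is an illustrative name, not the authors' code):

```python
import numpy as np

def free_energy_surface(density, kT=1.0):
    """Free energy from a density-of-states estimate on a grid:
    F = -kT * ln(rho), shifted so the global minimum is zero.
    Grid cells with zero estimated density get F = +inf."""
    density = np.asarray(density, dtype=float)
    F = np.full_like(density, np.inf)
    nz = density > 0
    F[nz] = -kT * np.log(density[nz])
    return F - F[nz].min()
```

Local minima of this surface are the basins into which the configurations are subsequently clustered.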

  9. The use of photographic rates to estimate densities of tigers and other cryptic mammals: a comment on misleading conclusions

    USGS Publications Warehouse

    Jennelle, C.S.; Runge, M.C.; MacKenzie, D.I.

    2002-01-01

    The search for easy-to-use indices that substitute for direct estimation of animal density is a common theme in wildlife and conservation science, but one fraught with well-known perils (Nichols & Conroy, 1996; Yoccoz, Nichols & Boulinier, 2001; Pollock et al., 2002). To establish the utility of an index as a substitute for an estimate of density, one must: (1) demonstrate a functional relationship between the index and density that is invariant over the desired scope of inference; (2) calibrate the functional relationship by obtaining independent measures of the index and the animal density; (3) evaluate the precision of the calibration (Diefenbach et al., 1994). Carbone et al. (2001) argue that the number of camera-days per photograph is a useful index of density for large, cryptic, forest-dwelling animals, and proceed to calibrate this index for tigers (Panthera tigris). We agree that a properly calibrated index may be useful for rapid assessments in conservation planning. However, Carbone et al. (2001), who desire to use their index as a substitute for density, do not adequately address the three elements noted above. Thus, we are concerned that others may view their methods as justification for not attempting directly to estimate animal densities, without due regard for the shortcomings of their approach.

  10. A New Monte Carlo Method for Estimating Marginal Likelihoods.

    PubMed

    Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O

    2018-06-01

    Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
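For contrast with the proposed partition-weighted estimator, the classical harmonic mean estimator it generalizes can be written in a few lines (an illustrative, numerically stabilized sketch; the harmonic mean estimator's known instability is part of the paper's motivation):

```python
import math

def log_marginal_harmonic_mean(log_likelihoods):
    """Harmonic mean estimator of the log marginal likelihood from
    posterior-sample log-likelihoods: m_hat = n / sum_i(1 / L_i),
    computed in log space via log-sum-exp for stability."""
    n = len(log_likelihoods)
    m = max(-l for l in log_likelihoods)
    s = sum(math.exp(-l - m) for l in log_likelihoods)
    return math.log(n) - (m + math.log(s))
```

Because the reciprocal likelihoods can have very heavy tails, this estimator can be dominated by a few low-likelihood draws, which is the weakness the inflated density ratio and partition-weighted estimators are designed to address.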

  11. An Evaluation of the Plant Density Estimator the Point-Centred Quarter Method (PCQM) Using Monte Carlo Simulation.

    PubMed

    Khan, Md Nabiul Islam; Hijbeek, Renske; Berger, Uta; Koedam, Nico; Grueters, Uwe; Islam, S M Zahirul; Hasan, Md Asadul; Dahdouh-Guebas, Farid

    2016-01-01

    In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distances of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated (having 'random', 'aggregated' and 'regular' spatial patterns) plant populations and empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N − 1)/(π∑R²) but not 12N/(π∑R²), of PCQM2 is 4(8N − 1)/(π∑R²) but not 28N/(π∑R²), and of PCQM3 is 4(12N − 1)/(π∑R²) but not 44N/(π∑R²) as published. If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process. 
Since, in practice, the spatial pattern of a plant association is unknown before starting a vegetation survey, the use of PCQM3 together with the corrected estimator is recommended for field applications. However, for sparse plant populations, where PCQM3 may pose practical limitations, PCQM2 or PCQM1 may be applied instead. When applying PCQM in the field, care should be taken to summarize the distance data based on 'the inverse summation of squared distances' and not 'the summation of inverse squared distances' as erroneously published.
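The corrected estimators quoted above translate directly into code (a hedged sketch; `pcqm_density` and its argument layout are assumptions, with `order` = 1, 2, 3 corresponding to PCQM1, PCQM2, and PCQM3):

```python
import math

def pcqm_density(distances, n_points, order=1):
    """Corrected PCQM estimator: density = 4*(4*order*N - 1) / (pi * sum(R^2)),
    where N = n_points is the number of random sample points and `distances`
    holds the 4*N quadrant distances to the order-th nearest plant."""
    # the 'inverse summation of squared distances', not the summation of inverses
    sum_sq = sum(r * r for r in distances)
    return 4 * (4 * order * n_points - 1) / (math.pi * sum_sq)
```

Distances in metres give densities in plants per square metre; multiply by 10,000 for plants per hectare.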

  12. Relationship and Variation of qPCR and Culturable Enterococci Estimates in Ambient Surface Waters Are Predictable

    EPA Science Inventory

    The quantitative polymerase chain reaction (qPCR) method provides rapid estimates of fecal indicator bacteria densities that have been indicated to be useful in the assessment of water quality. Primarily because this method provides faster results than standard culture-based meth...

  13. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error.

    PubMed

    Carroll, Raymond J; Delaigle, Aurore; Hall, Peter

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  14. Snag densities in relation to human access and associated management factors in forests of Northeastern Oregon, USA

    Treesearch

    Lisa J. Bate; Michael J. Wisdom; Barbara C. Wales

    2007-01-01

    A key element of forest management is the maintenance of sufficient densities of snags (standing dead trees) to support associated wildlife. Management factors that influence snag densities, however, are numerous and complex. Consequently, accurate methods to estimate and model snag densities are needed. Using data collected in 2002 and Current Vegetation Survey (CVS)...

  15. Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, Brad M.; Nathan, Diane L.; Wang Yan

    Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., 'FOR PROCESSING') and vendor postprocessed (i.e., 'FOR PRESENTATION'), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. 
Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then aggregated into a final dense tissue segmentation that is used to compute breast PD%. Our method is validated on a group of 81 women for whom bilateral, mediolateral oblique, raw and processed screening digital mammograms were available, and agreement is assessed with both continuous and categorical density estimates made by a trained breast-imaging radiologist. Results: Strong association between algorithm-estimated and radiologist-provided breast PD% was detected for both raw (r = 0.82, p < 0.001) and processed (r = 0.85, p < 0.001) digital mammograms on a per-breast basis. Stronger agreement was found when overall breast density was assessed on a per-woman basis for both raw (r = 0.85, p < 0.001) and processed (0.89, p < 0.001) mammograms. Strong agreement between categorical density estimates was also seen (weighted Cohen's κ ≥ 0.79). Repeated measures analysis of variance demonstrated no statistically significant differences between the PD% estimates (p > 0.1) due to either presentation of the image (raw vs processed) or method of PD% assessment (radiologist vs algorithm). Conclusions: The proposed fully automated algorithm was successful in estimating breast percent density from both raw and processed digital mammographic images. Accurate assessment of a woman's breast density is critical in order for the estimate to be incorporated into risk assessment models. These results show promise for the clinical application of the algorithm in quantifying breast density in a repeatable manner, both at time of imaging as well as in retrospective studies.

  16. Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation

    PubMed Central

    Keller, Brad M.; Nathan, Diane L.; Wang, Yan; Zheng, Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina

    2012-01-01

    Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., “FOR PROCESSING”) and vendor postprocessed (i.e., “FOR PRESENTATION”), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. 
Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then aggregated into a final dense tissue segmentation that is used to compute breast PD%. Our method is validated on a group of 81 women for whom bilateral, mediolateral oblique, raw and processed screening digital mammograms were available, and agreement is assessed with both continuous and categorical density estimates made by a trained breast-imaging radiologist. Results: Strong association between algorithm-estimated and radiologist-provided breast PD% was detected for both raw (r = 0.82, p < 0.001) and processed (r = 0.85, p < 0.001) digital mammograms on a per-breast basis. Stronger agreement was found when overall breast density was assessed on a per-woman basis for both raw (r = 0.85, p < 0.001) and processed (0.89, p < 0.001) mammograms. Strong agreement between categorical density estimates was also seen (weighted Cohen's κ ≥ 0.79). Repeated measures analysis of variance demonstrated no statistically significant differences between the PD% estimates (p > 0.1) due to either presentation of the image (raw vs processed) or method of PD% assessment (radiologist vs algorithm). Conclusions: The proposed fully automated algorithm was successful in estimating breast percent density from both raw and processed digital mammographic images. Accurate assessment of a woman's breast density is critical in order for the estimate to be incorporated into risk assessment models. These results show promise for the clinical application of the algorithm in quantifying breast density in a repeatable manner, both at time of imaging as well as in retrospective studies. PMID:22894417
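    The clustering stage described above can be illustrated with a minimal fuzzy c-means implementation. This is a NumPy-only sketch on 1-D gray levels under invented data: the paper's adaptive variant derives the number of clusters per mammogram and feeds the clusters to an SVM, neither of which is reproduced here, and the `fcm` helper below is this sketch's own, not the authors' code.

```python
import numpy as np

def fcm(x, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means on 1-D samples x: returns cluster centers
    and the n-by-c membership matrix. (Hypothetical helper.)"""
    rng = np.random.default_rng(seed)
    u = rng.random((x.size, c))
    u /= u.sum(axis=1, keepdims=True)              # memberships sum to 1
    for _ in range(iters):
        um = u ** m                                # fuzzified memberships
        centers = (um * x[:, None]).sum(0) / um.sum(0)
        d = np.abs(x[:, None] - centers) + 1e-12   # sample-center distances
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)   # standard FCM update
    return centers, u

# Two synthetic gray-level populations (e.g., fatty vs dense tissue)
x = np.concatenate([np.full(200, 50.0), np.full(200, 200.0)])
centers, u = fcm(x, c=2)
print(np.sort(centers))  # centers settle near 50 and 200
```

    In the full pipeline, each sample's hardened cluster label (the column of `u` with maximal membership) would become a feature region for the SVM stage.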

  17. Numerical Study on Density Gradient Carbon-Carbon Composite for Vertical Launching System

    NASA Astrophysics Data System (ADS)

    Yoon, Jin-Young; Kim, Chun-Gon; Lim, Juhwan

    2018-04-01

    This study presents a new carbon-carbon (C/C) composite that has a density gradient within a single material, and estimates its heat conduction performance by a numerical method. To address the high heat conduction of high-density C/C, which can cause adhesion separation in the steel structures of vertical launching systems, a density gradient carbon-carbon (DGCC) composite is proposed that exhibits low thermal conductivity as well as excellent ablative resistance. DGCC is manufactured by hybridizing two different carbonization processes into a single carbon preform. One part exhibits a low density, using phenolic resin carbonization to reduce heat conduction, and the other exhibits a high density, using thermal gradient chemical vapor infiltration for excellent ablative resistance. Numerical analysis of DGCC is performed on a heat conduction problem, and internal temperature distributions are estimated by the forward finite difference method. Material properties of the transition density layer, which is inevitably formed during DGCC manufacturing, are assumed to be a combination of those of the two density layers for the numerical analysis. By comparing numerical results with experimental data, we validate that DGCC exhibits a low thermal conductivity and can serve as a highly effective ablative material for vertical launching systems.
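    The forward (explicit) finite-difference approach for the temperature estimate can be sketched on a 1-D two-layer slab. All geometry and material values below are invented stand-ins for the low-density and high-density DGCC layers, and this simple scheme ignores flux-matching refinements at the layer interface:

```python
import numpy as np

# FTCS (forward-time, centered-space) heat conduction through a
# two-layer slab; values are illustrative only, not from the paper.
nx, L = 101, 0.02                 # nodes, slab thickness [m]
dx = L / (nx - 1)
# front half: low diffusivity (low-density layer); back half: high
alpha = np.where(np.arange(nx) < nx // 2, 1e-7, 1e-6)   # [m^2/s]
dt = 0.4 * dx**2 / alpha.max()    # satisfies the FTCS stability limit
T = np.full(nx, 300.0)            # initial temperature [K]
T[0] = 1500.0                     # heated face (held fixed)
for _ in range(20000):
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += alpha[1:-1] * dt * lap[1:-1]   # interior update
    T[-1] = T[-2]                 # insulated back face
print(T[0], T[-1])                # back face stays far cooler
```

    The low-diffusivity front layer throttles heat transport, which is exactly the effect the DGCC design exploits.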

  18. Bone mass density estimation: Archimedes' principle versus automatic X-ray histogram and edge detection technique in ovariectomized rats treated with germinated brown rice bioactives

    PubMed Central

    Muhammad, Sani Ismaila; Maznah, Ismail; Mahmud, Rozi Binti; Esmaile, Maher Faik; Zuki, Abu Bakar Zakaria

    2013-01-01

    Background Bone mass density is an important parameter used in the estimation of the severity and depth of lesions in osteoporosis. Estimation of bone density using existing methods in experimental models has its advantages as well as drawbacks. Materials and methods In this study, the X-ray histogram edge detection technique was used to estimate the bone mass density in ovariectomized rats treated orally with germinated brown rice (GBR) bioactives, and the results were compared with estimated results obtained using Archimedes' principle. New bone cell proliferation was assessed by histology and immunohistochemical reaction using polyclonal nuclear antigen. Additionally, serum alkaline phosphatase activity, serum and bone calcium and zinc concentrations were detected using a chemistry analyzer and atomic absorption spectroscopy. Rats were divided into groups of six as follows: sham (nonovariectomized, nontreated); ovariectomized, nontreated; and ovariectomized and treated with estrogen, or Remifemin®, GBR-phenolics, acylated steryl glucosides, gamma oryzanol, and gamma amino-butyric acid extracted from GBR at different doses. Results Our results indicate a significant increase in alkaline phosphatase activity, serum and bone calcium, and zinc and ash content in the treated groups compared with the ovariectomized nontreated group (P < 0.05). Bone density increased significantly (P < 0.05) in groups treated with estrogen, GBR, Remifemin®, and gamma oryzanol compared to the ovariectomized nontreated group. Histological sections revealed more osteoblasts in the treated groups when compared with the untreated groups. A polyclonal nuclear antigen reaction showing proliferating new cells was observed in groups treated with estrogen, Remifemin®, GBR, acylated steryl glucosides, and gamma oryzanol.
There was a good correlation between bone mass densities estimated using Archimedes' principle and the edge detection technique across the treated groups (r2 = 0.737, P = 0.004). Conclusion Our study shows that GBR bioactives increase bone density, possibly via the activation of zinc formation and increased calcium content, and that the X-ray edge detection technique is effective in the measurement of bone density and can be employed effectively in this respect. PMID:24187491
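    The Archimedes'-principle estimate reduces to simple arithmetic: the mass of displaced water gives the bone volume, and mass over volume gives density. The masses below are invented for illustration only:

```python
# Archimedes' principle sketch with made-up numbers.
m_air = 1.20       # bone mass weighed in air [g]
m_water = 0.45     # apparent mass suspended in water [g]
rho_water = 1.0    # density of water [g/cm^3]
volume = (m_air - m_water) / rho_water   # displaced-water volume [cm^3]
density = m_air / volume                 # bone mass density [g/cm^3]
print(round(density, 2))  # 1.6
```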

  19. A method to estimate the neutral atmospheric density near the ionospheric main peak of Mars

    NASA Astrophysics Data System (ADS)

    Zou, Hong; Ye, Yu Guang; Wang, Jin Song; Nielsen, Erling; Cui, Jun; Wang, Xiao Dong

    2016-04-01

    A method to estimate the neutral atmospheric density near the ionospheric main peak of Mars is introduced in this study. The neutral densities at 130 km can be derived from the ionospheric and atmospheric measurements of the Radio Science experiment on board Mars Global Surveyor (MGS). The derived neutral densities cover a large longitude range in northern high latitudes from summer to late autumn during 3 Martian years, which fills the gap of the previous observations for the upper atmosphere of Mars. The simulations of the Laboratoire de Météorologie Dynamique Mars global circulation model can be corrected with a simple linear equation to fit the neutral densities derived from the first MGS/RS (Radio Science) data sets (EDS1). The corrected simulations with the same correction parameters as for EDS1 match the derived neutral densities from two other MGS/RS data sets (EDS2 and EDS3) very well. The derived neutral density from EDS3 shows a dust storm effect, which is in accord with the Mars Express (MEX) Spectroscopy for Investigation of Characteristics of the Atmosphere of Mars measurement. The neutral density derived from the MGS/RS measurements can be used to validate the Martian atmospheric models. The method presented in this study can be applied to other radio occultation measurements, such as the result of the Radio Science experiment on board MEX.
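    The "simple linear equation" correction amounts to fitting a least-squares line between model and derived densities and then applying it to the model output. A hedged sketch with synthetic numbers (the actual fit related LMD Mars GCM densities to the MGS/RS-derived values):

```python
import numpy as np

# Synthetic stand-ins for model vs derived neutral densities.
n_model = np.array([1.0, 1.5, 2.0, 2.5, 3.0])   # model densities (arb.)
n_derived = 1.3 * n_model + 0.2                 # "derived" densities
a, b = np.polyfit(n_model, n_derived, 1)        # least-squares slope, offset
n_corrected = a * n_model + b                   # corrected simulation
print(round(a, 2), round(b, 2))  # 1.3 0.2
```

    The key test in the paper is that the same (a, b) pair, fit on EDS1, also matches the EDS2 and EDS3 densities.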

  20. Low-mode internal tides and balanced dynamics disentanglement in altimetric observations: Synergy with surface density observations

    NASA Astrophysics Data System (ADS)

    Ponte, Aurélien L.; Klein, Patrice; Dunphy, Michael; Le Gentil, Sylvie

    2017-03-01

    The performance of a tentative method that disentangles the contribution of a low-mode internal tide to sea level from that of the balanced mesoscale eddies is examined using an idealized high-resolution numerical simulation. This disentanglement is essential for the proper estimation, from sea level, of the ocean circulation related to balanced motions. The method relies on an independent observation of the sea surface water density, whose variations (1) are dominated by the balanced dynamics and (2) correlate with variations of potential vorticity at depth for the chosen regime of surface-intensified turbulence. The surface density therefore leads, via potential vorticity inversion, to an estimate of the balanced contribution to sea level fluctuations. The difference between the instantaneous sea level (presumably observed with altimetry) and the balanced estimate compares moderately well with the contribution from the low-mode tide. Application to realistic configurations remains to be tested. These results aim at motivating further development of methods for reconstructing the ocean dynamics based on potential vorticity arguments. In that context, they are particularly relevant for the upcoming wide-swath high resolution altimetric missions (SWOT).

  1. [Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].

    PubMed

    Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong

    2015-11-01

    With the fast development of remote sensing technology, combining forest inventory sample plot data and remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraint, and nonlinear spectral mixture analysis were compared to derive the fractions of different land use and land cover (LULC) types. Then a sequential Gaussian co-simulation algorithm with and without the fraction images from spectral mixture analyses was employed to estimate the forest carbon density of Hunan Province. Results showed that 1) linear spectral mixture analysis with constraint, leading to a mean RMSE of 0.002, estimated the fractions of LULC types more accurately than the unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model and the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density from 74.1% to 81.5%, and decreased the RMSE from 7.26 to 5.18; and 3) the mean forest carbon density for the province was 30.06 t·hm⁻², ranging from 0.00 to 67.35 t·hm⁻². This implies that spectral mixture analysis has great potential to increase the estimation accuracy of forest carbon density at regional and global levels.
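    The constrained linear unmixing at the heart of the comparison can be sketched for the two-endmember case, where the sum-to-one constraint leaves a single fraction to solve by least squares (clipping keeps it nonnegative). The endmember spectra below are invented:

```python
import numpy as np

# Invented 3-band endmember spectra for two LULC classes.
E1 = np.array([0.10, 0.20, 0.30])
E2 = np.array([0.60, 0.50, 0.40])
pixel = 0.7 * E1 + 0.3 * E2        # synthetic mixed pixel
# With fractions constrained to sum to one, f*E1 + (1-f)*E2 = pixel
# reduces to one-parameter least squares.
d = E1 - E2
f = float(np.clip(np.dot(pixel - E2, d) / np.dot(d, d), 0.0, 1.0))
print(round(f, 3), round(1.0 - f, 3))  # 0.7 0.3
```

    With more endmembers, the same idea is usually solved with nonnegative least squares plus a weighted sum-to-one row.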

  2. Estimating the Occupational Morbidity for Migrant and Seasonal Farmworkers in New York State: a Comparison of Two Methods

    PubMed Central

    Earle-Richardson, Giulia B.; Brower, Melissa A.; Jones, Amanda M.; May, John J.; Jenkins, Paul L.

    2008-01-01

    Purpose To compare occupational morbidity estimates for migrant and seasonal farmworkers obtained from survey methods versus chart review methods, and to estimate the proportion of morbidity treated at federally recognized migrant health centers (MHCs) in a highly agricultural region of New York. Methods Researchers simultaneously conducted: a) an occupational injury and illness survey among agricultural workers; b) MHC chart review; and c) hospital emergency room (ER) chart reviews. Results Of the 24 injuries reported by 550 survey subjects, 54.2% received treatment at MHCs, 16.7% at ERs, 16.7% at some other facility, and 12.5% were untreated. For injuries treated at MHCs or ERs, the incidence density based on survey methods was 29.3 injuries per 10,000 worker-weeks versus 27.4 by chart review. The standardized morbidity ratio (SMR) for this comparison was 1.07 (95% CI = 0.65-1.77). Conclusion Survey data indicate that 71% of agricultural injury and illness can be captured with MHC and ER chart review. MHC and ER incidence density estimates show strong correspondence between the two methods. A chart review-based surveillance system, in conjunction with a correction factor based on periodic worker surveys, would provide a cost-effective estimate of the occupational illness and injury rate in this population. PMID:18063238
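    The reported SMR is simply the ratio of the two incidence densities, which can be checked directly from the figures in the abstract:

```python
# Incidence densities (injuries per 10,000 worker-weeks) from the abstract.
survey_rate = 29.3   # survey-based estimate
chart_rate = 27.4    # chart-review estimate
smr = survey_rate / chart_rate   # standardized morbidity ratio
print(round(smr, 2))  # 1.07, matching the reported SMR
```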

  3. A conceptual guide to detection probability for point counts and other count-based survey methods

    Treesearch

    D. Archibald McCallum

    2005-01-01

    Accurate and precise estimates of numbers of animals are vitally needed both to assess population status and to evaluate management decisions. Various methods exist for counting birds, but most of those used with territorial landbirds yield only indices, not true estimates of population size. The need for valid density estimates has spawned a number of models for...

  4. Density and abundance of Wilson's snipe Gallinago delicata in winter in the Lower Mississippi Flyway, USA

    USGS Publications Warehouse

    Carroll, James M.; Krementz, David G.

    2014-01-01

    Wilson's snipe Gallinago delicata is one of the least studied North American game birds, and information on snipe populations and abundance is largely lacking. We conducted roadside surveys stratified at the township level in the lower Mississippi Alluvial Valley (LMAV) in Arkansas, Mississippi and Louisiana, as well as the Red River Region, and the Gulf Coastal Plain of Louisiana during winters of 2009 and 2010. We identified observer, vegetation cover, and water cover as important covariates in estimating snipe densities. We detected 2915 snipe along 814 line transects (1450 km) for 2009 and 2010 combined. We estimated snipe densities of 8.05 individuals km⁻² (95% CI: 4.57-14.17) in 2009, and 2.13 individuals km⁻² (95% CI: 1.47-3.08) in 2010. We used the resulting snipe density estimates within the study area to calculate abundance estimates of 1 026 431 (95% CI: 582 707-1 806 774) in 2009, and 271 590 (95% CI: 187 435-392 722) in 2010 for the LMAV. Our data indicate that a road transect survey method is effective for estimating wintering snipe density and abundance in the lower Mississippi Flyway.
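    A hedged sketch of the basic line-transect estimator, D = n / (2·L·w_eff), using the pooled counts from the abstract. The effective strip half-width and stratum area are hypothetical, and the study itself fit detection covariates (observer, vegetation, water) rather than using this naive form, so D and N here are not the paper's estimates:

```python
# Pooled counts from the abstract; w_eff and area are HYPOTHETICAL.
n = 2915          # snipe detected, 2009 + 2010 combined
L_km = 1450.0     # total transect length [km]
w_eff = 0.1       # assumed effective strip half-width [km]
D = n / (2 * L_km * w_eff)   # birds per km^2
area_km2 = 100000.0          # hypothetical stratum area [km^2]
N = D * area_km2             # abundance for the stratum
print(round(D, 2))  # 10.05
```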

  5. Estimation of electrical conductivity distribution within the human head from magnetic flux density measurement.

    PubMed

    Gao, Nuo; Zhu, S A; He, Bin

    2005-06-07

    We have developed a new algorithm for magnetic resonance electrical impedance tomography (MREIT), which uses only one component of the magnetic flux density to reconstruct the electrical conductivity distribution within the body. The radial basis function (RBF) network and simplex method are used in the present approach to estimate the conductivity distribution by minimizing the errors between the 'measured' and model-predicted magnetic flux densities. Computer simulations were conducted in a realistic-geometry head model to test the feasibility of the proposed approach. Single-variable and three-variable simulations were performed to estimate the brain-skull conductivity ratio and the conductivity values of the brain, skull and scalp layers. When SNR = 15 for magnetic flux density measurements with the target skull-to-brain conductivity ratio being 1/15, the relative error (RE) between the target and estimated conductivity was 0.0737 +/- 0.0746 in the single-variable simulations. In the three-variable simulations, the RE was 0.1676 +/- 0.0317. Effects of electrode position uncertainty were also assessed by computer simulations. The present promising results suggest the feasibility of estimating important conductivity values within the head from noninvasive magnetic flux density measurements.
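    The fitting loop, minimizing the misfit between "measured" and model-predicted flux density with the simplex (Nelder-Mead) method, can be sketched with a toy linear forward model standing in for the paper's RBF-network surrogate; the matrix, starting point, and conductivity values below are invented:

```python
import numpy as np
from scipy.optimize import minimize

def forward(sigma):
    # Toy linear forward model mapping (brain, skull, scalp)
    # conductivities to three B-field samples -- purely illustrative.
    A = np.array([[1.0, 0.2, 0.1],
                  [0.3, 1.5, 0.2],
                  [0.1, 0.4, 2.0]])
    return A @ sigma

sigma_true = np.array([0.33, 0.022, 0.33])  # illustrative values [S/m]
B_meas = forward(sigma_true)                # "measured" flux density

loss = lambda s: np.sum((forward(s) - B_meas) ** 2)
res = minimize(loss, x0=[0.2, 0.05, 0.2], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-14, "maxiter": 5000})
print(np.round(res.x, 3))  # recovers values near [0.33, 0.022, 0.33]
```

    In the real problem the forward step is a full bioelectromagnetic solver, so each loss evaluation is expensive; the derivative-free simplex search is what makes the three-variable fit practical.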

  6. Estimating population density for disease risk assessment: The importance of understanding the area of influence of traps using wild pigs as an example.

    PubMed

    Davis, Amy J; Leland, Bruce; Bodenchuk, Michael; VerCauteren, Kurt C; Pepin, Kim M

    2017-06-01

    Population density is a key driver of disease dynamics in wildlife populations. Accurate disease risk assessment and determination of management impacts on wildlife populations requires an ability to estimate population density alongside management actions. A common management technique for controlling wildlife populations to monitor and mitigate disease transmission risk is trapping (e.g., box traps, corral traps, drop nets). Although abundance can be estimated from trapping actions using a variety of analytical approaches, inference is limited by the spatial extent to which a trap attracts animals on the landscape. If the "area of influence" were known, abundance estimates could be converted to densities. In addition to being an important predictor of contact rate and thus disease spread, density is more informative because it is comparable across sites of different sizes. The goal of our study is to demonstrate the importance of determining the area sampled by traps (area of influence) so that density can be estimated from management-based trapping designs which do not employ a trapping grid. To provide one example of how area of influence could be calculated alongside management, we conducted a small pilot study on wild pigs (Sus scrofa) using two removal methods, (1) trapping followed by (2) aerial gunning, at three sites in northeast Texas in 2015. We estimated abundance from trapping data with a removal model. We calculated empirical densities as aerial counts divided by the area searched by air (based on aerial flight tracks). We inferred the area of influence of traps by assuming consistent densities across the larger spatial scale and then solving for the area impacted by the traps. Based on our pilot study we estimated the area of influence for corral traps in late summer in Texas to be ∼8.6 km².
Future work showing the effects of behavioral and environmental factors on area of influence will help managers obtain estimates of density from management data, and determine conditions where trap attraction is strongest. The ability to estimate density alongside population control activities will improve risk assessment and response operations against disease outbreaks. Published by Elsevier B.V.
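    The area-of-influence logic can be sketched in a few lines: a removal-model abundance at the trap site, an independent aerial density, and area = abundance / density. The counts and density below are invented, and the simple two-pass estimator stands in for the paper's formal removal-model fit:

```python
# Invented counts; the paper fit a formal removal model, and the aerial
# density came from counts over the area searched by air.
c1, c2 = 30, 18                    # pigs removed in passes 1 and 2
N_hat = c1 ** 2 / (c1 - c2)        # two-pass removal estimate of abundance
aerial_density = 8.7               # pigs per km^2 from aerial counts
area_of_influence = N_hat / aerial_density   # km^2 sampled by the traps
print(round(N_hat, 1), round(area_of_influence, 2))  # 75.0 8.62
```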

  7. Hierarchical models for estimating density from DNA mark-recapture studies

    USGS Publications Warehouse

    Gardner, B.; Royle, J. Andrew; Wegan, M.T.

    2009-01-01

    Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps (e.g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS.
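    The distance-dependent detection at the core of such models can be illustrated with the common half-normal form, p(d) = p0·exp(−d²/2σ²). The parameter values here are illustrative, not the black-bear estimates:

```python
import numpy as np

p0, sigma = 0.3, 1.5               # illustrative baseline prob., scale [km]
d = np.array([0.0, 1.0, 3.0])      # activity-center-to-trap distances [km]
p = p0 * np.exp(-d ** 2 / (2 * sigma ** 2))   # half-normal detection prob.
print(np.round(p, 3))              # detection decays with distance
```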

  8. An optimally weighted estimator of the linear power spectrum disentangling the growth of density perturbations across galaxy surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorini, D., E-mail: sorini@mpia-hd.mpg.de

    2017-04-01

    Measuring the clustering of galaxies from surveys allows us to estimate the power spectrum of matter density fluctuations, thus constraining cosmological models. This requires careful modelling of observational effects to avoid misinterpretation of data. In particular, signals coming from different distances encode information from different epochs. This is known as the "light-cone effect" and will have a greater impact as upcoming galaxy surveys probe larger redshift ranges. Generalising the method by Feldman, Kaiser and Peacock (1994) [1], I define a minimum-variance estimator of the linear power spectrum at a fixed time, properly taking into account the light-cone effect. An analytic expression for the estimator is provided, and it is consistent with the findings of previous works in the literature. I test the method within the context of the Halofit model, assuming Planck 2014 cosmological parameters [2]. I show that the estimator presented recovers the fiducial linear power spectrum at present time within 5% accuracy up to k ∼ 0.80 h Mpc⁻¹ and within 10% up to k ∼ 0.94 h Mpc⁻¹, well into the non-linear regime of the growth of density perturbations. As such, the method could be useful in the analysis of the data from future large-scale surveys, like Euclid.

  9. Possibilities for Estimating Horizontal Electrical Currents in Active Regions on the Sun

    NASA Astrophysics Data System (ADS)

    Fursyak, Yu. A.; Abramenko, V. I.

    2017-12-01

    Part of the "free" magnetic energy associated with electrical current systems in the active region (AR) is released during solar flares. This proposition is widely accepted and it has stimulated interest in detecting electrical currents in active regions. The vertical component of an electric current in the photosphere can be found by observing the transverse magnetic field. At present, however, there are no direct methods for calculating transverse electric currents based on these observations. These calculations require information on the field vector measured simultaneously at several levels in the photosphere, which has not yet been done with solar instrumentation. In this paper we examine an approach to calculating the structure of the square of the density of a transverse electrical current based on a magnetogram of the vertical component of the magnetic field in the AR. Data obtained with the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) for NOAA AR 11283 are used. It is shown that (1) the observed variations in the magnetic field of a sunspot and the proposed estimate of the density of an annular horizontal current around the spot are consistent with Faraday's law and (2) the resulting estimates of the magnitude of the square of the density of the horizontal current, j⊥² = (0.002-0.004) A²/m⁴, are consistent with previously obtained values of the density of a vertical current in the photosphere. Thus, the proposed estimate is physically significant and this method can be used to estimate the density and structure of transverse electrical currents in the photosphere.

  10. Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals

    USGS Publications Warehouse

    Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew

    2011-01-01

    Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km2 (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.

  11. Nonparametric analysis of Minnesota spruce and aspen tree data and LANDSAT data

    NASA Technical Reports Server (NTRS)

    Scott, D. W.; Jee, R.

    1984-01-01

    The application of nonparametric methods in data-intensive problems faced by NASA is described. The theoretical development of efficient multivariate density estimators and the novel use of color graphics workstations are reviewed. The use of nonparametric density estimates for data representation and for Bayesian classification are described and illustrated. Progress in building a data analysis system in a workstation environment is reviewed and preliminary runs presented.
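    A basic univariate Gaussian kernel density estimate illustrates the nonparametric estimators discussed; the report concerns efficient multivariate variants, and the bandwidth here is an arbitrary choice:

```python
import numpy as np

def kde(x, grid, h):
    # average of Gaussian bumps of bandwidth h centered on the samples
    z = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (x.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 2000)      # synthetic standard-normal sample
grid = np.linspace(-4.0, 4.0, 81)
f = kde(x, grid, h=0.3)             # bandwidth h chosen arbitrarily
dx = grid[1] - grid[0]
print(round(float(f.sum() * dx), 2))  # the estimate integrates to ~1
```

    For Bayesian classification, one such density estimate per class is combined with class priors to form the posterior class probabilities.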

  12. Tigers and their prey: Predicting carnivore densities from prey abundance

    USGS Publications Warehouse

    Karanth, K.U.; Nichols, J.D.; Kumar, N.S.; Link, W.A.; Hines, J.E.

    2004-01-01

    The goal of ecology is to understand interactions that determine the distribution and abundance of organisms. In principle, ecologists should be able to identify a small number of limiting resources for a species of interest, estimate densities of these resources at different locations across the landscape, and then use these estimates to predict the density of the focal species at these locations. In practice, however, development of functional relationships between abundances of species and their resources has proven extremely difficult, and examples of such predictive ability are very rare. Ecological studies of prey requirements of tigers Panthera tigris led us to develop a simple mechanistic model for predicting tiger density as a function of prey density. We tested our model using data from a landscape-scale long-term (1995-2003) field study that estimated tiger and prey densities in 11 ecologically diverse sites across India. We used field techniques and analytical methods that specifically addressed sampling and detectability, two issues that frequently present problems in macroecological studies of animal populations. Estimated densities of ungulate prey ranged between 5.3 and 63.8 animals per km2. Estimated tiger densities (3.2-16.8 tigers per 100 km2) were reasonably consistent with model predictions. The results provide evidence of a functional relationship between abundances of large carnivores and their prey under a wide range of ecological conditions. In addition to generating important insights into carnivore ecology and conservation, the study provides a potentially useful model for the rigorous conduct of macroecological science.

  13. A hierarchical model for spatial capture-recapture data

    USGS Publications Warehouse

    Royle, J. Andrew; Young, K.V.

    2008-01-01

    Estimating density is a fundamental objective of many animal population studies. Application of methods for estimating population size from ostensibly closed populations is widespread, but ineffective for estimating absolute density because most populations are subject to short-term movements or so-called temporary emigration. This phenomenon invalidates the resulting estimates because the effective sample area is unknown. A number of methods involving the adjustment of estimates based on heuristic considerations are in widespread use. In this paper, a hierarchical model of spatially indexed capture-recapture data is proposed for sampling based on area searches of spatial sample units subject to uniform sampling intensity. The hierarchical model contains explicit models for the distribution of individuals and their movements, in addition to an observation model that is conditional on the location of individuals during sampling. Bayesian analysis of the hierarchical model is achieved by the use of data augmentation, which allows for a straightforward implementation in the freely available software WinBUGS. We present results of a simulation study that was carried out to evaluate the operating characteristics of the Bayesian estimator under variable densities and movement patterns of individuals. An application of the model is presented for survey data on the flat-tailed horned lizard (Phrynosoma mcallii) in Arizona, USA.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terzic, Balsa; Bassi, Gabriele

    In this paper we discuss representations of charged-particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for its removal. We devise two alternative estimation methods for the charged-particle distribution, each a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The distribution is first binned onto a finite grid, after which two grid-based methods are employed to approximate it: (i) a truncated fast cosine transform (TFCT); and (ii) a thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a substantial upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy, achieved through a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
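
    A truncated cosine-transform density estimate of the TFCT type can be sketched as follows: bin the sampled particles onto a grid, discard high-frequency DCT modes (which carry mostly sampling noise), and invert. The grid size and cutoff below are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)

# Bin a sampled 2D particle distribution onto a grid.
particles = rng.normal(0.5, 0.15, size=(100_000, 2))
hist, _, _ = np.histogram2d(particles[:, 0], particles[:, 1],
                            bins=64, range=[[0, 1], [0, 1]], density=True)

# Keep only the lowest-frequency cosine modes, then invert.
coeffs = dctn(hist, norm="ortho")
cutoff = 16                       # keep a 16x16 block of low modes
mask = np.zeros_like(coeffs)
mask[:cutoff, :cutoff] = 1.0
smooth = idctn(coeffs * mask, norm="ortho")   # denoised density estimate
```

Because the DC coefficient is retained, the total integral of the binned density is preserved by the truncation.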

  15. Micro CT based truth estimation of nodule volume

    NASA Astrophysics Data System (ADS)

    Kinnard, L. M.; Gavrielides, M. A.; Myers, K. J.; Zeng, R.; Whiting, B.; Lin-Gibson, S.; Petrick, N.

    2010-03-01

    With the advent of high-resolution CT, three-dimensional (3D) methods for nodule volumetry have been introduced, with the hope that such methods will be more accurate and consistent than currently used planar measures of size. However, the error associated with volume estimation methods still needs to be quantified. Volume estimation error is multi-faceted in the sense that there is variability associated with the patient, the software tool and the CT system. A primary goal of our current research efforts is to quantify the various sources of measurement error and, when possible, minimize their effects. In order to assess the bias of an estimate, the actual value, or "truth," must be known. In this work we investigate the reliability of micro CT to determine the "true" volume of synthetic nodules. The advantage of micro CT over other truthing methods is that it can provide both absolute volume and shape information in a single measurement. In the current study we compare micro CT volume truth to weight-density truth for spherical, elliptical, spiculated and lobulated nodules with diameters from 5 to 40 mm, and densities of -630 and +100 HU. The percent differences between micro CT and weight-density volume for -630 HU nodules range from [-21.7%, -0.6%] (mean= -11.9%) and the differences for +100 HU nodules range from [-0.9%, 3.0%] (mean=1.7%).
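
    The weight-density "truth" and its comparison with a micro CT estimate reduce to simple arithmetic; the mass, density, and micro CT volume below are hypothetical, not the study's measurements.

```python
# Weight-density truth: nodule volume from measured mass and material
# density, and the percent difference of a test estimate from it.

def volume_from_weight_density(mass_mg, density_mg_per_mm3):
    """Reference volume in mm^3 from mass and material density."""
    return mass_mg / density_mg_per_mm3

def percent_difference(v_test, v_ref):
    return 100.0 * (v_test - v_ref) / v_ref

v_wd = volume_from_weight_density(65.4, 1.0)   # hypothetical reference, mm^3
v_uct = 62.1                                   # hypothetical micro CT volume
diff = percent_difference(v_uct, v_wd)         # about -5.0%
```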

  16. Estimations of electron densities and temperatures in He-3 dominated plasmas. [in nuclear pumped lasers

    NASA Technical Reports Server (NTRS)

    Depaola, B. D.; Marcum, S. D.; Wrench, H. K.; Whitten, B. L.; Wells, W. E.

    1979-01-01

    Because direct measurements of electron temperature and density in nuclear pumped plasmas are very difficult, a method for estimating these quantities is highly useful. This paper describes such a method, based on rate equation analysis of the ionized species in the plasma and the electron energy balance. In addition to the ionized species, certain neutral species must also be calculated. Examples are given for pure helium and a mixture of helium and argon. In the He-Ar case, densities of He(+), He2(+), He(2 3S), Ar(+), Ar2(+), and excited Ar are evaluated.

  17. Efficient 3D movement-based kernel density estimator and application to wildlife ecology

    USGS Publications Warehouse

    Tracey, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.

    2014-01-01

    We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
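
    The 3D density-estimation step can be illustrated with a standard (non-movement-based) Gaussian KDE from SciPy; the telemetry relocations here are simulated, and this sketch omits the movement model that distinguishes the paper's estimator.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Simulated (x, y, z) relocations as a 3 x n array, e.g. GPS fixes
# with a vertical coordinate for a diving or climbing species.
relocations = rng.normal(size=(3, 500))
kde = gaussian_kde(relocations)               # 3D kernel density estimate

# Evaluate the utilization density at a query point (here, the origin).
density_at_origin = kde(np.zeros((3, 1)))[0]
```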

  18. Comparison of estimation accuracy of body density between different hydrostatics weighing methods without head submersion.

    PubMed

    Demura, Shinichi; Sato, Susumu; Nakada, Masakatsu; Minami, Masaki; Kitabayashi, Tamotsu

    2003-07-01

    This study compared the accuracy of body density (Db) estimation methods using hydrostatic weighing without complete head submersion (HW(withoutHS)) of Donnelly et al. (1988) and Donnelly and Sintek (1984), as referenced to Goldman and Buskirk's approach (1961). Donnelly et al.'s method estimates Db from a regression equation using HW(withoutHS), whereas Donnelly and Sintek's method estimates it from HW(withoutHS) and head anthropometric variables. Fifteen Japanese males (173.8+/-4.5 cm, 63.6+/-5.4 kg, 21.2+/-2.8 years) and fifteen females (161.4+/-5.4 cm, 53.8+/-4.8 kg, 21.0+/-1.4 years) participated in this study. All subjects were measured for head length, head width, and HW under two conditions: with and without head submersion. To examine the consistency of the Db estimates, correlation coefficients between the estimated values and the reference (Goldman and Buskirk, 1961) were calculated. Standard errors of estimation (SEE) were calculated by regression analysis using the reference value as the dependent variable and the estimated values as independent variables. In addition, the systematic errors of the two estimation methods were investigated with the Bland-Altman technique (Bland and Altman, 1986). Donnelly and Sintek's equation correlated highly with the reference (r=0.960, p<0.01), but showed larger differences from the reference than Donnelly et al.'s equation. Further studies are needed to develop new prediction equations for Japanese subjects that account for sex and individual differences in head anthropometry.
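
    The Bland-Altman check for systematic error reduces to the mean difference between methods and its 95% limits of agreement; the paired Db values below are illustrative, not the study's measurements.

```python
import numpy as np

# Hypothetical paired body-density values (g/cm^3) from a reference
# method and an estimation method, for a Bland-Altman analysis.
ref = np.array([1.060, 1.055, 1.048, 1.072, 1.041, 1.066])
est = np.array([1.058, 1.057, 1.045, 1.070, 1.044, 1.063])

diff = est - ref
bias = diff.mean()                    # mean difference (systematic error)
half_width = 1.96 * diff.std(ddof=1)  # 95% limits-of-agreement half-width
limits = (bias - half_width, bias + half_width)
```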

  19. Predicting Grizzly Bear Density in Western North America

    PubMed Central

    Mowat, Garth; Heard, Douglas C.; Schwarz, Carl J.

    2013-01-01

    Conservation of grizzly bears (Ursus arctos) is often controversial and the disagreement often is focused on the estimates of density used to calculate allowable kill. Many recent estimates of grizzly bear density are now available but field-based estimates will never be available for more than a small portion of hunted populations. Current methods of predicting density in areas of management interest are subjective and untested. Objective methods have been proposed, but these statistical models are so dependent on results from individual study areas that the models do not generalize well. We built regression models to relate grizzly bear density to ultimate measures of ecosystem productivity and mortality for interior and coastal ecosystems in North America. We used 90 measures of grizzly bear density in interior ecosystems, of which 14 were currently known to be unoccupied by grizzly bears. In coastal areas, we used 17 measures of density including 2 unoccupied areas. Our best model for coastal areas included a negative relationship with tree cover and positive relationships with the proportion of salmon in the diet and topographic ruggedness, which was correlated with precipitation. Our best interior model included 3 variables that indexed terrestrial productivity, 1 describing vegetation cover, 2 indices of human use of the landscape, and an index of topographic ruggedness. We used our models to predict current population sizes across Canada and present these as alternatives to current population estimates. Our models predict fewer grizzly bears in British Columbia but more bears in Canada than in the latest status review. These predictions can be used to assess population status, set limits for total human-caused mortality, and for conservation planning, but because our predictions are static, they cannot be used to assess population trend. PMID:24367552

  1. Determination of low-frequency normal modes and structure coefficients using optimal sequence stacking method and autoregressive method in frequency domain

    NASA Astrophysics Data System (ADS)

    Majstorovic, J.; Rosat, S.; Lambotte, S.; Rogister, Y. J. G.

    2017-12-01

    Although numerous 3D density models of the Earth exist, building an accurate one remains a challenge. One way to refine global 3D density models is through unambiguous measurements of the Earth's normal-mode eigenfrequencies. Unbiased eigenfrequency measurements require dealing with time records of variable quality and, especially, with different noise sources; standard approaches usually rely on signal-processing methods such as the Fourier transform. Here we present estimates of complex eigenfrequencies and structure coefficients for several modes below 1 mHz (0S2, 2S1, etc.). Our analysis proceeds in three steps. First, we use stacking methods to raise specific modes of interest above the observed noise level; of the three methods tried, optimal sequence estimation outperformed both spherical harmonic stacking and the receiver strip method. Second, we apply an autoregressive method in the frequency domain to estimate the complex eigenfrequencies of the target modes. Third, we apply the phasor walkout method to test and confirm our eigenfrequency estimates. Before analyzing the time records, we evaluate how station distribution and noise levels affect the estimates of eigenfrequencies and structure coefficients, using synthetic seismograms computed for a realistic 3D Earth model that includes the Earth's ellipticity and lateral heterogeneity. The synthetic seismograms are computed by normal-mode summation using self-coupling and cross-coupling of modes up to 1 mHz. Finally, the methods tested on synthetic data are applied to long-period seismometer and superconducting gravimeter records of six mega-earthquakes of magnitude greater than 8.3. From these we propose new estimates of structure coefficients that depend on the density variations.

  2. Developing accurate survey methods for estimating population sizes and trends of the critically endangered Nihoa Millerbird and Nihoa Finch.

    USGS Publications Warehouse

    Gorresen, P. Marcos; Camp, Richard J.; Brinck, Kevin W.; Farmer, Chris

    2012-01-01

    Point-transect surveys indicated that millerbirds were more abundant than shown by the strip-transect method, and were estimated at 802 birds in 2010 (95%CI = 652 – 964) and 704 birds in 2011 (95%CI = 579 – 837). Point-transect surveys yielded population estimates with improved precision, which will permit trends to be detected in shorter time periods and with greater statistical power than is available from strip-transect survey methods. Mean finch population estimates and associated uncertainty were not markedly different among the three survey methods, but the performance of the models used to estimate density and population size is expected to improve as data from additional surveys are incorporated. Using the point-transect survey, the mean finch population size was estimated at 2,917 birds in 2010 (95%CI = 2,037 – 3,965) and 2,461 birds in 2011 (95%CI = 1,682 – 3,348). Preliminary testing of the line-transect method in 2011 showed that it would not generate sufficient detections to effectively model bird density and, consequently, to produce relatively precise population size estimates. Both species were fairly evenly distributed across Nihoa and appear to occur in all or nearly all available habitat. The time expended and area traversed by observers were similar among survey methods; however, point-transect surveys do not require that observers walk a straight transect line, thereby allowing them to avoid culturally or biologically sensitive areas and minimize the adverse effects of recurrent travel to any particular area. In general, point-transect surveys detect more birds than strip-survey methods, thereby improving the precision of the resulting population size and trend estimates. The method is also better suited to the steep and uneven terrain of Nihoa.
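
    A minimal point-transect density estimator with a half-normal detection function g(r) = exp(-r²/(2σ²)) can be sketched as follows: with unbounded radial detection distances, detected distances are Rayleigh-distributed, giving a closed-form MLE for σ² and an effective detection area of 2πσ². The distances and number of survey points below are invented for illustration.

```python
import numpy as np

# Hypothetical radial detection distances (metres) and survey points.
r = np.array([12.0, 25.0, 8.0, 40.0, 31.0, 18.0, 22.0, 55.0])
k = 30                                   # number of point-count stations

sigma2 = np.sum(r**2) / (2 * len(r))     # MLE of sigma^2 (Rayleigh)
nu = 2 * np.pi * sigma2                  # effective detection area (m^2)
density = len(r) / (k * nu)              # birds per m^2
per_hectare = density * 1e4              # birds per hectare
```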

  3. Integrating resource selection into spatial capture-recapture models for large carnivores

    USGS Publications Warehouse

    Proffitt, Kelly M.; Goldberg, Joshua; Hebblewhite, Mark; Russell, Robin E.; Jimenez, Ben; Robinson, Hugh S.; Pilgrim, Kristine; Schwartz, Michael K.

    2015-01-01

    Wildlife managers need reliable methods to estimate large carnivore densities and population trends; yet large carnivores are elusive, difficult to detect, and occur at low densities making traditional approaches intractable. Recent advances in spatial capture-recapture (SCR) models have provided new approaches for monitoring trends in wildlife abundance and these methods are particularly applicable to large carnivores. We applied SCR models in a Bayesian framework to estimate mountain lion densities in the Bitterroot Mountains of west central Montana. We incorporate an existing resource selection function (RSF) as a density covariate to account for heterogeneity in habitat use across the study area and include data collected from harvested lions. We identify individuals through DNA samples collected by (1) biopsy darting mountain lions detected in systematic surveys of the study area, (2) opportunistically collecting hair and scat samples, and (3) sampling all harvested mountain lions. We included 80 DNA samples collected from 62 individuals in the analysis. Including information on predicted habitat use as a covariate on the distribution of activity centers reduced the median estimated density by 44%, the standard deviation by 7%, and the width of 95% credible intervals by 10% as compared to standard SCR models. Within the two management units of interest, we estimated a median mountain lion density of 4.5 mountain lions/100 km2 (95% CI = 2.9, 7.7) and 5.2 mountain lions/100 km2 (95% CI = 3.4, 9.1). Including harvested individuals (dead recovery) did not create a significant bias in the detection process by introducing individuals that could not be detected after removal. However, the dead recovery component of the model did have a substantial effect on results by increasing sample size. The ability to account for heterogeneity in habitat use provides a useful extension to SCR models, and will enhance the ability of wildlife managers to reliably and economically estimate density of wildlife populations, particularly large carnivores.

  4. Estimation of shrub leaf biomass available to white-tailed deer.

    Treesearch

    Lynn L. Rogers; Ronald E. McRoberts

    1992-01-01

    Describes an objective method for using shrub height to estimate leaf biomass within reach of deer. The method can be used in conjunction with surveys of shrub height, shrub density, and shrub species composition to evaluate deer habitat over large areas and to predict trends in forage availability with further forest growth.

  5. Permissible Home Range Estimation (PHRE) in restricted habitats: A new algorithm and an evaluation for sea otters

    USGS Publications Warehouse

    Tarjan, Lily M; Tinker, M. Tim

    2016-01-01

    Parametric and nonparametric kernel methods dominate studies of animal home ranges and space use. Most existing methods are unable to incorporate information about the underlying physical environment, leading to poor performance in excluding areas that are not used. Using radio-telemetry data from sea otters, we developed and evaluated a new algorithm for estimating home ranges (hereafter Permissible Home Range Estimation, or “PHRE”) that reflects habitat suitability. We began by transforming sighting locations into relevant landscape features (for sea otters, coastal position and distance from shore). Then, we generated a bivariate kernel probability density function in landscape space and back-transformed this to geographic space in order to define a permissible home range. Compared to two commonly used home range estimation methods, kernel densities and local convex hulls, PHRE better excluded unused areas and required a smaller sample size. Our PHRE method is applicable to species whose ranges are restricted by complex physical boundaries or environmental gradients and will improve understanding of habitat-use requirements and, ultimately, aid in conservation efforts.
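
    The PHRE workflow described above (transform sightings to landscape coordinates, fit a bivariate KDE there, exclude impermissible locations) might look roughly like the sketch below; the coordinate transforms, data, and permissibility rule are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Hypothetical sightings already transformed to landscape coordinates:
# position along the coastline (km) and distance from shore (km).
coastal_pos = rng.uniform(0, 20, 200)
dist_shore = np.abs(rng.normal(0.3, 0.2, 200))

# Bivariate KDE in landscape space.
kde = gaussian_kde(np.vstack([coastal_pos, dist_shore]))

def range_density(pos_km, offshore_km):
    """Utilization density at a location; land (negative offshore
    distance) is excluded from the permissible home range."""
    if offshore_km < 0:
        return 0.0
    return float(kde([[pos_km], [offshore_km]])[0])
```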

  6. An evaluation of a bioelectrical impedance analyser for the estimation of body fat content.

    PubMed Central

    Maughan, R J

    1993-01-01

    Measurement of body composition is an important part of any assessment of health or fitness. Hydrostatic weighing is generally accepted as the most reliable method for the measurement of body fat content, but is inconvenient. Electrical impedance analysers have recently been proposed as an alternative to the measurement of skinfold thickness. Both these latter methods are convenient, but give values based on estimates obtained from population studies. This study compared values of body fat content obtained by hydrostatic weighing, skinfold thickness measurement and electrical impedance on 50 (28 women, 22 men) healthy volunteers. Mean(s.e.m.) values obtained by the three methods were: hydrostatic weighing, 20.5(1.2)%; skinfold thickness, 21.8(1.0)%; impedance, 20.8(0.9)%. The results indicate that the correlation between the skinfold method and hydrostatic weighing (0.931) is somewhat higher than that between the impedance method and hydrostatic weighing (0.830). This is, perhaps, not surprising given the fact that the impedance method is based on an estimate of total body water which is then used to calculate body fat content. The skinfold method gives an estimate of body density, and the assumptions involved in the conversion from body density to body fat content are the same for both methods. PMID:8457817
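
    Both the skinfold and impedance approaches ultimately convert a body-density estimate to percent fat. One widely used conversion, not necessarily the one applied in this study, is the Siri (1961) equation:

```python
# Siri (1961) equation: percent body fat from whole-body density
# Db in g/cm^3. A standard population-based conversion.

def siri_percent_fat(db):
    """Percent body fat from body density via Siri's equation."""
    return 495.0 / db - 450.0

fat = siri_percent_fat(1.055)   # about 19.2% for a typical Db
```

Brozek's equation is a common alternative with slightly different constants.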

  7. Analysis of percent density estimates from digital breast tomosynthesis projection images

    NASA Astrophysics Data System (ADS)

    Bakic, Predrag R.; Kontos, Despina; Zhang, Cuiping; Yaffe, Martin J.; Maidment, Andrew D. A.

    2007-03-01

    Women with dense breasts have an increased risk of breast cancer. Breast density is typically measured as the percent density (PD), the percentage of non-fatty (i.e., dense) tissue in breast images. Mammographic PD estimates vary, in part, due to the projective nature of mammograms. Digital breast tomosynthesis (DBT) is a novel radiographic method in which 3D images of the breast are reconstructed from a small number of projection (source) images, acquired at different positions of the x-ray focus. DBT provides superior visualization of breast tissue and has improved sensitivity and specificity as compared to mammography. Our long-term goal is to test the hypothesis that PD obtained from DBT is superior in estimating cancer risk compared with other modalities. As a first step, we have analyzed the PD estimates from DBT source projections since the results would be independent of the reconstruction method. We estimated PD from MLO mammograms (PDM) and from individual DBT projections (PDT). We observed good agreement between PDM and PDT from the central projection images of 40 women. This suggests that variations in breast positioning, dose, and scatter between mammography and DBT do not negatively affect PD estimation. The PDT estimated from individual DBT projections of nine women varied with the angle between the projections. This variation is caused by the 3D arrangement of the breast dense tissue and the acquisition geometry.
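
    The PD computation itself, dense-tissue pixels as a percentage of the segmented breast area, can be sketched with a simple intensity threshold on a synthetic image; the threshold, segmentation mask, and image are illustrative, not the study's method.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "mammogram" and a hypothetical breast segmentation mask.
image = rng.uniform(0, 1, size=(256, 256))
breast_mask = np.zeros((256, 256), dtype=bool)
breast_mask[32:224, 32:224] = True

# Classify pixels above an illustrative threshold as dense tissue.
dense = (image > 0.7) & breast_mask
pd = 100.0 * dense.sum() / breast_mask.sum()   # percent density
```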

  8. Estimating the number of people in crowded scenes

    NASA Astrophysics Data System (ADS)

    Kim, Minjin; Kim, Wonjun; Kim, Changick

    2011-01-01

    This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps as follows: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on space-time interest points, and (3) estimating the crowd density based on the multiple regression. In experimental results, the efficiency and robustness of our proposed method are demonstrated by using PETS 2009 dataset.
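
    Step (3), regressing people counts on region-level features, can be sketched with ordinary least squares; the features and counts below are simulated for illustration, not PETS 2009 data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated per-region features: number of space-time interest points
# and region area, with counts generated from a known linear model.
n_regions = 40
interest_points = rng.integers(10, 200, n_regions).astype(float)
area = rng.uniform(50, 500, n_regions)
counts = 0.12 * interest_points + 0.01 * area + rng.normal(0, 1, n_regions)

# Fit a multiple regression (with intercept) by least squares.
X = np.column_stack([interest_points, area, np.ones(n_regions)])
coef, *_ = np.linalg.lstsq(X, counts, rcond=None)

def estimate_count(n_points, region_area):
    """Predicted number of people in a region from its features."""
    return coef @ np.array([n_points, region_area, 1.0])
```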

  9. Variation in center of mass estimates for extant sauropsids and its importance for reconstructing inertial properties of extinct archosaurs.

    PubMed

    Allen, Vivian; Paxton, Heather; Hutchinson, John R

    2009-09-01

    Inertial properties of animal bodies and segments are critical input parameters for biomechanical analysis of standing and moving, and thus are important for paleobiological inquiries into the broader behaviors, ecology and evolution of extinct taxa such as dinosaurs. But how accurately can these be estimated? Computational modeling was used to estimate inertial properties including mass, density, and center of mass (COM) for extant crocodiles (adult and juvenile Crocodylus johnstoni) and birds (Gallus gallus; junglefowl and broiler chickens), to identify the chief sources of variation and methodological error, and their significance. High-resolution computed tomography scans were segmented into 3D objects and imported into inertial property estimation software that allowed for the examination of variable body segment densities (e.g., air spaces such as lungs, and deformable body outlines). Considerable biological variation in inertial properties was found within groups due to ontogenetic changes, as well as evolutionary changes between chicken groups. COM positions shift in variable directions during ontogeny in different groups. Our method was repeatable and the resolution was sufficient for accurate estimation of mass and density in particular. However, we also found considerable potential methodological errors for COM related to (1) assumed body segment orientation, (2) the frames of reference used to normalize COM for size-independent comparisons among animals, and (3) assumptions about tail shape. Methods and assumptions are suggested to minimize these errors in the future and thereby improve estimation of inertial properties for extant and extinct animals. In the best cases, 10%-15% errors in these estimates are unavoidable, and for extinct taxa errors closer to 50% should be expected and interpreted cautiously. Nonetheless, in the best cases these methods allow rigorous estimation of inertial properties. (c) 2009 Wiley-Liss, Inc.
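
    Given per-segment masses and centers of mass from segmented 3D models, the whole-body COM is simply the mass-weighted mean of the segment COMs; the segment values below are illustrative, not measurements from the study.

```python
import numpy as np

# Hypothetical segment masses (kg) and COM coordinates (m),
# e.g. trunk, tail, and two limbs from a segmented CT model.
masses = np.array([4.2, 1.1, 0.8, 0.9])
segment_com = np.array([[0.00, 0.0, 0.10],
                        [-0.35, 0.0, 0.08],
                        [0.05, 0.12, 0.02],
                        [0.05, -0.12, 0.02]])

# Whole-body COM as the mass-weighted mean of segment COMs.
body_com = (masses[:, None] * segment_com).sum(axis=0) / masses.sum()
```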

  10. Assessing prey fish populations in Lake Michigan: Comparison of simultaneous acoustic-midwater trawling with bottom trawling

    USGS Publications Warehouse

    Fabrizio, Mary C.; Adams, Jean V.; Curtis, Gary L.

    1997-01-01

    The Lake Michigan fish community has been monitored since the 1960s with bottom trawls, and since the late 1980s with acoustics and midwater trawls. These sampling tools are limited to different habitats: bottom trawls sample fish near bottom in areas with smooth substrates, and acoustic methods sample fish throughout the water column above all substrate types. We compared estimates of fish densities and species richness from daytime bottom trawling with those estimated from night-time acoustic and midwater trawling at a range of depths in northeastern Lake Michigan in summer 1995. We examined estimates of total fish density as well as densities of alewife Alosa pseudoharengus (Wilson), bloater Coregonus hoyi (Gill), and rainbow smelt Osmerus mordax (Mitchill) because these three species are the dominant forage of large piscivores in Lake Michigan. In shallow water (18 m), we detected more species but fewer fish (in fish/ha and kg/ha) with bottom trawls than with acoustic-midwater trawling. Large aggregations of rainbow smelt were detected by acoustic-midwater trawling at 18 m and contributed to the differences in total fish density estimates between gears at this depth. Numerical and biomass densities of bloaters from all depths were significantly higher when based on bottom trawl samples than on acoustic-midwater trawling, and this probably contributed to the observed significant difference between methods for total fish densities (kg/ha) at 55 m. Significantly fewer alewives per ha were estimated from bottom trawling than from acoustic-midwater trawling at 55 m, and in deeper waters, no alewives were taken by bottom trawling. The differences detected between gears resulted from alewife, bloater, and rainbow smelt vertical distributions, which varied with lake depth and time of day. Because Lake Michigan fishes are both demersal and pelagic, a single sampling method cannot be used to completely describe characteristics of the fish community.

  11. Predicting Intra-Urban Population Densities in Africa using SAR and Optical Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Linard, C.; Steele, J.; Forget, Y.; Lopez, J.; Shimoni, M.

    2017-12-01

    The population of Africa is predicted to double over the next 40 years, driving profound social, environmental and epidemiological changes within rapidly growing cities. Estimates of within-city variation in population density must be improved in order to take urban heterogeneities into account and better support urban research and decision making, especially for vulnerability and health assessments. Satellite remote sensing offers an effective solution for mapping settlements and monitoring urbanization at different spatial and temporal scales. In Africa, the urban landscape is dominated by slums and small houses, where heterogeneity is high and where many structures are built from natural materials. Innovative methods that combine optical and SAR data are therefore necessary for improving settlement mapping and population density predictions. An automatic method was developed to estimate built-up densities using recent and archived optical and SAR data, and a multi-temporal database of built-up densities was produced for 48 African cities. Geo-statistical methods were then used to study the relationships between census-derived population densities and satellite-derived built-up attributes. The best predictors were combined in a Random Forest framework in order to predict intra-urban variations in population density in any large African city. The models substantially improve our spatial understanding of urbanization and urban population distribution in Africa compared with the state of the art.

  12. A novel deep learning-based approach to high accuracy breast density estimation in digital mammography

    NASA Astrophysics Data System (ADS)

    Ahn, Chul Kyun; Heo, Changyong; Jin, Heongmin; Kim, Jong Hyo

    2017-03-01

    Mammographic breast density is a well-established marker for breast cancer risk. However, accurate measurement of dense tissue is a difficult task due to faint contrast and significant variations in background fatty tissue. This study presents a novel method for automated mammographic density estimation based on a Convolutional Neural Network (CNN). A total of 397 full-field digital mammograms were selected from Seoul National University Hospital. Of these, 297 mammograms were randomly selected as a training set and the remaining 100 were used as a test set. We designed a CNN architecture suited to learning the imaging characteristics from a multitude of sub-images and classifying them into dense and fatty tissue. To train the CNN, we used not only local statistics but also global statistics extracted from an image set composed of the original mammogram and an eigen-image able to capture the X-ray characteristics, complementing the features the CNN extracts from the original image. The 100 test images, which were not used in training, were used to validate performance. The correlation coefficient between the breast density estimates produced by the CNN and those from the expert's manual measurement was 0.96. Our study demonstrates the feasibility of incorporating deep learning technology into radiology practice, especially for breast density estimation. The proposed method has the potential to be used as an automated, quantitative assessment tool for mammographic breast density in routine practice.

  13. Statins: MedlinePlus Health Topic

    MedlinePlus

    References and abstracts from MEDLINE/PubMed (National Library of Medicine). Article: Novel method versus the Friedewald method for estimating low-density ...

  14. High population density of black-handed spider monkeys (Ateles geoffroyi) in Costa Rican lowland wet forest.

    PubMed

    Weghorst, Jennifer A

    2007-04-01

    The main objective of this study was to estimate the population density and demographic structure of spider monkeys living in wet forest in the vicinity of Sirena Biological Station, Corcovado National Park, Costa Rica. Results of a 14-month line-transect survey showed that spider monkeys of Sirena have one of the highest population densities ever recorded for this genus. Density estimates varied, however, depending on the method chosen to estimate transect width. Data from behavioral monitoring were available to compare density estimates derived from the survey, providing a check of the survey's accuracy. A combination of factors has most probably contributed to the high density of Ateles, including habitat protection within a national park and high diversity of trees of the fig family, Moraceae. Although natural densities of spider monkeys at Sirena are substantially higher than those recorded at most other sites and in previous studies at this site, mean subgroup size and age ratios were similar to those determined in previous studies. Sex ratios were similar to those of other sites with high productivity. Although high densities of preferred fruit trees in the wet, productive forests of Sirena may support a dense population of spider monkeys, other demographic traits recorded at Sirena fall well within the range of values recorded elsewhere for the species.

  15. A tool for the estimation of the distribution of landslide area in R

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.

    2012-04-01

    We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result due to convergence problems. The two tested models (Double Pareto and Inverse Gamma) gave very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases.
A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
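    The two non-parametric estimators the tool implements, HDE and KDE, can be illustrated in a few lines. The original tool is in R; the sketch below uses Python, and the synthetic log-normal "landslide areas" are an assumption for demonstration only.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic, heavy-tailed "landslide areas" (m^2), roughly log-normal.
areas = rng.lognormal(mean=7.0, sigma=1.2, size=500)
log_a = np.log(areas)

# Kernel Density Estimation (KDE) of the probability density of log-area.
kde = gaussian_kde(log_a)

# Histogram Density Estimation (HDE) on the same variable.
hist, edges = np.histogram(log_a, bins=30, density=True)

# Both estimates should carry (approximately) unit probability mass.
grid = np.linspace(log_a.min(), log_a.max(), 1000)
kde_mass = float(np.sum(kde(grid)) * (grid[1] - grid[0]))
hde_mass = float(np.sum(hist * np.diff(edges)))
print(round(kde_mass, 2), round(hde_mass, 2))
```

    A parametric MLE fit (Double Pareto or Inverse Gamma) would replace the kernel with a likelihood maximization over the model parameters, which is where the convergence failures mentioned above can arise on small datasets.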

  16. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.

    PubMed

    Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-09-13

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.

  17. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter

    PubMed Central

    Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-01-01

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected. PMID:28902154
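    The generalised cross-correlation step underlying both records can be sketched as follows. A magnitude-regularised (PHAT-style) whitening weight stands in for the paper's modified ML prefilter, whose exact form uses the estimated PSDs and CSD; here `eps` plays the role of the regularisation factor, and all signal parameters are illustrative.

```python
import numpy as np

def gcc_delay(x1, x2, eps=1e-3):
    """Estimate the delay of x2 relative to x1 (in samples) by
    generalised cross-correlation with a regularised whitening prefilter."""
    n = 2 * len(x1)                      # zero-pad for linear correlation
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X2 * np.conj(X1)
    w = 1.0 / (np.abs(cross) + eps)      # regularised prefilter weight
    r = np.fft.irfft(w * cross, n)
    lag = int(np.argmax(np.abs(r)))
    return lag if lag < n // 2 else lag - n

# Simulated pipe-vibration signals: sensor 2 sees the leak 25 samples later.
rng = np.random.default_rng(1)
s = rng.standard_normal(4096)
x1 = s + 0.1 * rng.standard_normal(s.size)
x2 = np.roll(s, 25) + 0.1 * rng.standard_normal(s.size)
print(gcc_delay(x1, x2))  # → 25
```

    The time-difference estimate, together with the propagation speeds of the connected pipe sections, then gives the leak location along the pipe.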

  18. Deterministic annealing for density estimation by multivariate normal mixtures

    NASA Astrophysics Data System (ADS)

    Kloppenburg, Martin; Tavan, Paul

    1997-03-01

    An approach to maximum-likelihood density estimation by mixtures of multivariate normal distributions for large high-dimensional data sets is presented. Conventionally that problem is tackled by notoriously unstable expectation-maximization (EM) algorithms. We remove these instabilities by the introduction of soft constraints, enabling deterministic annealing. Our developments are motivated by the proof that algorithmically stable fuzzy clustering methods that are derived from statistical physics analogs are special cases of EM procedures.

  19. Daniell method for power spectral density estimation in atomic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Labuda, Aleksander

    An alternative method for power spectral density (PSD) estimation—the Daniell method—is revisited and compared to the most prevalent method used in the field of atomic force microscopy for quantifying cantilever thermal motion—the Bartlett method. Both methods are shown to underestimate the Q factor of a simple harmonic oscillator (SHO) by a predictable, and therefore correctable, amount in the absence of spurious deterministic noise sources. However, the Bartlett method is much more prone to spectral leakage, which can obscure the thermal spectrum in the presence of deterministic noise. By significantly reducing spectral leakage, the Daniell method leads to a more accurate representation of the true PSD and enables clear identification and rejection of deterministic noise peaks. This benefit is especially valuable for the development of automated PSD fitting algorithms for robust and accurate estimation of SHO parameters from a thermal spectrum.
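    The two estimators can be compared directly: `welch` with a boxcar window and no overlap reproduces the Bartlett method (averaged periodograms of non-overlapping segments), while a moving-average smoothing of the full-length periodogram gives the Daniell method. Segment length and smoothing half-width below are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.signal import periodogram, welch

rng = np.random.default_rng(2)
fs = 1000.0
t = np.arange(8192) / fs
# White "thermal" noise plus a deterministic spurious tone at 100 Hz.
x = rng.standard_normal(t.size) + 0.5 * np.sin(2 * np.pi * 100.0 * t)

# Bartlett method: average periodograms of non-overlapping segments.
f_b, psd_bartlett = welch(x, fs, window='boxcar', nperseg=512, noverlap=0)

# Daniell method: moving-average smoothing of the full-length periodogram.
f_d, pxx = periodogram(x, fs)
m = 8                                    # smoothing half-width (illustrative)
kernel = np.ones(2 * m + 1) / (2 * m + 1)
psd_daniell = np.convolve(pxx, kernel, mode='same')

# The one-sided white-noise floor for unit-variance noise is 2/fs.
floor_daniell = float(np.median(psd_daniell)) * fs
floor_bartlett = float(np.median(psd_bartlett)) * fs
print(round(floor_daniell, 1), round(floor_bartlett, 1))
```

    Both estimators recover the same noise floor; the difference the paper quantifies is how much energy from the deterministic tone leaks into neighbouring bins under each scheme.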

  20. SU-F-T-687: Comparison of SPECT/CT-Based Methodologies for Estimating Lung Dose from Y-90 Radioembolization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kost, S; Yu, N; Lin, S

    2016-06-15

    Purpose: To compare mean lung dose (MLD) estimates from 99mTc macroaggregated albumin (MAA) SPECT/CT using two published methodologies for patients treated with 90Y radioembolization for liver cancer. Methods: MLD was estimated retrospectively using two methodologies for 40 patients from SPECT/CT images of 99mTc-MAA administered prior to radioembolization. In these two methods, lung shunt fractions (LSFs) were calculated as the ratio of scanned lung activity to the activity in the entire scan volume, or to the sum of activity in the lung and liver, respectively. Misregistration of liver activity into the lungs during SPECT acquisition was overcome by excluding lung counts within either 2 or 1.5 cm of the diaphragm apex, respectively. Patient lung density was assumed to be 0.3 g/cm3 or derived from CT densitovolumetry, respectively. Results from both approaches were compared to MLD determined by planar scintigraphy (PS). The effect of patient size on the difference between MLD from PS and SPECT/CT was also investigated. Results: Lung density from CT densitovolumetry is not different from the reference density (p = 0.68). The second method resulted in a lung dose on average 1.5 times larger than that of the first method; however, the difference between the means of the two estimates was not significant (p = 0.07). Lung doses from both methods were statistically different from those estimated from 2D PS (p < 0.001). There was no correlation between patient size and the difference between MLD from PS and both SPECT/CT methods (r < 0.22, p > 0.17). Conclusion: There is no statistically significant difference between MLD estimated from the two techniques. Both methods are statistically different from conventional PS, with PS overestimating dose by a factor of three or larger. The difference between lung doses estimated from 2D planar or 3D SPECT/CT is not dependent on patient size.

  1. La Terra: esperimenti a scuola [The Earth: Experiments at School]

    NASA Astrophysics Data System (ADS)

    Roselli, Alessandra; D'Amico, Angelalucia; Pisegna, Daniela; Palma, Francesco; di Nardo, Giustino; Cofini, Marika; Cerasani, Paolo; Cerratti, Valentina

    2006-02-01

    Simple but effective methods used in past centuries allow rediscovery and good knowledge of the planet Earth. The station latitude and the planetary radius were measured with Eratosthenes' method. The gravitational acceleration obtained from the pendulum period was used to calculate the terrestrial mass and the density of the internal planetary layers. Finally, the estimates of the atmosphere's density and geometrical thickness complete the view of the planet's properties.
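    The chain of estimates described above can be reproduced numerically: the pendulum period gives g, which together with the radius yields the planet's mass and mean density (the layered-density model refines this mean value). The measured period below is an illustrative number, not data from the schools.

```python
import math

# Pendulum: g from the small-oscillation period T = 2*pi*sqrt(L/g).
L = 1.0          # pendulum length (m)
T = 2.006        # measured period (s), illustrative value
g = 4 * math.pi**2 * L / T**2

# With an Eratosthenes-style radius, Newton's law gives mass and density.
R = 6.371e6      # Earth radius (m)
G = 6.674e-11    # gravitational constant (m^3 kg^-1 s^-2)
M = g * R**2 / G
rho = M / (4.0 / 3.0 * math.pi * R**3)   # mean density (kg/m^3)

print(round(g, 2), f"{M:.2e}", round(rho))  # → 9.81 5.97e+24 5509
```

    The mean density of about 5.5 g/cm3, well above that of surface rocks, is itself the classic argument for a dense metallic interior.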

  2. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The low-density parity-check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio between the signal amplitude and the noise variance. Accurate estimation of this ratio has been shown to yield as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a simulation-based look-up table. In the Pilot-Guided estimation method, the maximum-likelihood estimate of the signal amplitude is the mean inner product of the received sequence and a known sequence, the attached synchronization marker (ASM); the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum-likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which suits faster-changing channels better than the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine the signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided method due to the gain-control circuitry, but it does not have the real-time computational complexity of the Blind estimation method. Each of these methods can provide an accurate estimate of the combining ratio, and the final selection of the estimation method depends on other design constraints.
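    The Blind method's root search can be sketched as follows. This is one plausible reading of the estimator described above: after normalising the received sequence to unit power, solve a = E[r·tanh(a·r/σ²)] with σ² = 1 − a² by binary search on a in (0, 1). All signal parameters are illustrative.

```python
import numpy as np

def blind_amplitude(r, iters=50):
    """Blind estimate of the BPSK amplitude `a` from a unit-power
    received sequence r, via binary search on the fixed-point equation
    a = mean(r * tanh(a * r / sigma^2)), sigma^2 = 1 - a^2."""
    lo, hi = 1e-3, 1.0 - 1e-3
    for _ in range(iters):
        a = 0.5 * (lo + hi)
        g = float(np.mean(r * np.tanh(a * r / (1.0 - a * a))))
        if g > a:
            lo = a      # estimator exceeds a: true amplitude is larger
        else:
            hi = a
    return 0.5 * (lo + hi)

rng = np.random.default_rng(3)
bits = rng.choice([-1.0, 1.0], size=20000)
a_true, sigma = 0.9, 0.436               # chosen so a^2 + sigma^2 ~= 1
r = a_true * bits + sigma * rng.standard_normal(bits.size)
r = r / np.sqrt(np.mean(r * r))          # normalise to unit power
print(round(blind_amplitude(r), 2))
```

    With the amplitude in hand, the combining ratio follows as 2a/σ² with σ² = 1 − a², which is the LLR scaling the decoder needs.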

  3. Validation of the Martin Method for Estimating Low-Density Lipoprotein Cholesterol Levels in Korean Adults: Findings from the Korea National Health and Nutrition Examination Survey, 2009-2011

    PubMed Central

    Lee, Jongseok; Jang, Sungok; Son, Heejeong

    2016-01-01

    Despite the importance of accurate assessment of low-density lipoprotein cholesterol (LDL-C), the Friedewald formula has primarily been used as a cost-effective method to estimate LDL-C when triglycerides are less than 400 mg/dL. In a recent study, an alternative to the formula was proposed to improve estimation of LDL-C. We evaluated the performance of the novel method versus the Friedewald formula using a sample of 5,642 Korean adults with LDL-C measured by an enzymatic homogeneous assay (LDL-CD). Friedewald LDL-C (LDL-CF) was estimated using a fixed factor of 5 for the ratio of triglycerides to very-low-density lipoprotein cholesterol (TG:VLDL-C ratio). The novel LDL-C (LDL-CN) estimates, by contrast, were calculated using N-strata-specific median TG:VLDL-C ratios: LDL-C5 and LDL-C25 from the respective ratios derived from our data set, and LDL-C180 from the 180-cell table reported by the original study. Compared with LDL-CF, each LDL-CN estimate exhibited a significantly higher overall concordance with LDL-CD in the NCEP-ATP III guideline classification (p < 0.001 for each comparison). Overall concordance was 78.2% for LDL-CF, 81.6% for LDL-C5, 82.3% for LDL-C25, and 82.0% for LDL-C180. Compared to LDL-C5, LDL-C25 slightly but significantly improved overall concordance (p = 0.008). LDL-C25 and LDL-C180 provided almost the same overall concordance; however, LDL-C180 achieved superior improvement in classifying LDL-C < 70 mg/dL compared to the other estimates. In subjects with triglycerides of 200 to 399 mg/dL, each LDL-CN estimate showed a significantly higher concordance than that of LDL-CF (p < 0.001 for each comparison). The novel method offers a significant improvement in LDL-C estimation compared with the Friedewald formula. However, it requires further modification and validation considering racial differences as well as the specific character of the applied measuring method. PMID:26824910
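    The two estimators differ only in the TG:VLDL-C divisor: Friedewald fixes it at 5, while the novel (Martin-style) method looks up a strata-specific median ratio. A minimal sketch, where the looked-up ratio of 6.0 is illustrative, not a value from the published 180-cell table:

```python
def ldl_friedewald(tc, hdl, tg):
    """Friedewald: LDL-C = TC - HDL-C - TG/5 (mg/dL), fixed TG:VLDL-C = 5."""
    return tc - hdl - tg / 5.0

def ldl_novel(tc, hdl, tg, tg_vldl_ratio):
    """Novel estimate with a strata-specific median TG:VLDL-C ratio,
    selected from the patient's non-HDL-C and TG strata (ratio here
    is illustrative)."""
    return tc - hdl - tg / tg_vldl_ratio

tc, hdl, tg = 200.0, 50.0, 150.0          # total, HDL cholesterol, TG (mg/dL)
print(ldl_friedewald(tc, hdl, tg))        # → 120.0
print(ldl_novel(tc, hdl, tg, 6.0))        # → 125.0
```

    The 5 mg/dL shift in this example is exactly the kind of difference that moves patients across NCEP-ATP III classification boundaries, which is what the concordance figures above measure.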

  4. A review of models and micrometeorological methods used to estimate wetland evapotranspiration

    USGS Publications Warehouse

    Drexler, J.Z.; Snyder, R.L.; Spano, D.; Paw, U.K.T.

    2004-01-01

    Within the past decade or so, the accuracy of evapotranspiration (ET) estimates has improved due to new and increasingly sophisticated methods. Yet despite a plethora of choices concerning methods, estimation of wetland ET remains insufficiently characterized due to the complexity of surface characteristics and the diversity of wetland types. In this review, we present models and micrometeorological methods that have been used to estimate wetland ET and discuss their suitability for particular wetland types. Hydrological, soil monitoring and lysimetric methods to determine ET are not discussed. Our review shows that, due to the variability and complexity of wetlands, there is no single approach that is the best for estimating wetland ET. Furthermore, there is no single foolproof method to obtain an accurate, independent measure of wetland ET. Because all of the methods reviewed, with the exception of eddy covariance and LIDAR, require measurements of net radiation (Rn) and soil heat flux (G), highly accurate measurements of these energy components are key to improving measurements of wetland ET. Many of the major methods used to determine ET can be applied successfully to wetlands of uniform vegetation and adequate fetch, however, certain caveats apply. For example, with accurate Rn and G data and small Bowen ratio (β) values, the Bowen ratio energy balance method can give accurate estimates of wetland ET. However, large errors in latent heat flux density can occur near sunrise and sunset when the Bowen ratio β → -1.0. The eddy covariance method provides a direct measurement of latent heat flux density (λE) and sensible heat flux density (H), yet this method requires considerable expertise and expensive instrumentation to implement. A clear advantage of using the eddy covariance method is that λE can be compared with Rn - G - H, thereby allowing for an independent test of accuracy.
The surface renewal method is inexpensive to replicate and, therefore, shows particular promise for characterizing variability in ET as a result of spatial heterogeneity. LIDAR is another method that has special utility in a heterogeneous wetland environment, because it provides an integrated value for ET from a surface. The main drawback of LIDAR is the high cost of equipment and the need for an independent ET measure to assess accuracy. If Rn and G are measured accurately, the Priestley-Taylor equation can be used successfully with site-specific calibration factors to estimate wetland ET. The 'crop' cover coefficient (Kc) method can provide accurate wetland ET estimates if calibrated for the environmental and climatic characteristics of a particular area. More complicated equations such as the Penman and Penman-Monteith equations also can be used to estimate wetland ET, but surface variability and lack of information on aerodynamic and surface resistances make use of such equations somewhat questionable. © 2004 John Wiley and Sons, Ltd.
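    The Bowen ratio energy balance calculation mentioned above reduces to partitioning the available energy Rn - G by the factor 1 + β, which is why the method blows up as β approaches -1 near sunrise and sunset. A minimal sketch; the cutoff guarding the singularity is an assumed value, and the fluxes are illustrative midday numbers.

```python
def bowen_ratio_le(rn, g, beta):
    """Latent heat flux density (W m-2) from the Bowen Ratio Energy
    Balance: LE = (Rn - G) / (1 + beta). Unreliable as beta -> -1."""
    if abs(1.0 + beta) < 0.25:       # assumed guard near the singularity
        raise ValueError("BREB unreliable for Bowen ratio near -1")
    return (rn - g) / (1.0 + beta)

# Midday wetland example: Rn = 500, G = 50 W m-2, small beta = 0.2.
le = bowen_ratio_le(500.0, 50.0, 0.2)
h = 500.0 - 50.0 - le                # closure gives sensible heat flux H
print(round(le, 1), round(h, 1))     # → 375.0 75.0
```

    The closure identity LE = Rn - G - H is the same relation the review cites as the independent accuracy check for eddy covariance measurements.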

  5. The effect of dynamic topography and gravity on lithospheric effective elastic thickness estimation: a case study

    NASA Astrophysics Data System (ADS)

    Bai, Yongliang; Dong, Dongdong; Kirby, Jon F.; Williams, Simon E.; Wang, Zhenjie

    2018-04-01

    Lithospheric effective elastic thickness (Te), a proxy for plate strength, is helpful for the understanding of subduction characteristics. Affected by curvature, faulting and magma activity, lithospheric strength near trenches should be weakened but some regional inversion studies have shown much higher Te values along some trenches than in their surroundings. In order to improve Te estimation accuracy, here we discuss the long-wavelength effect of dynamic topography and gravity on Te estimation by taking the Izu-Bonin-Mariana (IBM) Trench as a case study area. We estimate the long-wavelength influence of the density and negative buoyancy of the subducting slab on observed gravity anomalies and seafloor topography. The residual topography and gravity are used to map Te using the fan-wavelet coherence method. Maps of Te, both with and without the effects of dynamic topography and slab gravity anomaly, contain a band of high-Te values along the IBM Trench, though these values and their errors are lower when slab effects are accounted for. Nevertheless, tests show that the Te map is relatively insensitive to the choice of slab-density modelling method, even though the dynamic topography and slab-induced gravity anomaly vary considerably when the slab density is modelled by different methods. The continued presence of a high-Te band along the trench after application of dynamic corrections shows that, before using 2D inversion methods to estimate Te variations in subduction zones, there are other factors that should be considered besides the slab dynamic effects on the overriding plate.

  6. The effect of dynamic topography and gravity on lithospheric effective elastic thickness estimation: a case study

    NASA Astrophysics Data System (ADS)

    Bai, Yongliang; Dong, Dongdong; Kirby, Jon F.; Williams, Simon E.; Wang, Zhenjie

    2018-07-01

    Lithospheric effective elastic thickness (Te), a proxy for plate strength, is helpful for the understanding of subduction characteristics. Affected by curvature, faulting and magma activity, lithospheric strength near trenches should be weakened but some regional inversion studies have shown much higher Te values along some trenches than in their surroundings. In order to improve Te estimation accuracy, here we discuss the long-wavelength effect of dynamic topography and gravity on Te estimation by taking the Izu-Bonin-Mariana (IBM) Trench as a case study area. We estimate the long-wavelength influence of the density and negative buoyancy of the subducting slab on observed gravity anomalies and seafloor topography. The residual topography and gravity are used to map Te using the fan-wavelet coherence method. Maps of Te, both with and without the effects of dynamic topography and slab gravity anomaly, contain a band of high-Te values along the IBM Trench, though these values and their errors are lower when slab effects are accounted for. Nevertheless, tests show that the Te map is relatively insensitive to the choice of slab-density modelling method, even though the dynamic topography and slab-induced gravity anomaly vary considerably when the slab density is modelled by different methods. The continued presence of a high-Te band along the trench after application of dynamic corrections shows that, before using 2-D inversion methods to estimate Te variations in subduction zones, there are other factors that should be considered besides the slab dynamic effects on the overriding plate.

  7. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.

  8. Comparison of Drive Counts and Mark-Resight As Methods of Population Size Estimation of Highly Dense Sika Deer (Cervus nippon) Populations.

    PubMed

    Takeshita, Kazutaka; Ikeda, Takashi; Takahashi, Hiroshi; Yoshida, Tsuyoshi; Igota, Hiromasa; Matsuura, Yukiko; Kaji, Koichi

    2016-01-01

    Assessing temporal changes in abundance indices is an important issue in the management of large herbivore populations. The drive counts method has been frequently used as a deer abundance index in mountainous regions. However, despite an inherent risk for observation errors in drive counts, which increase with deer density, evaluations of the utility of drive counts at a high deer density remain scarce. We compared the drive counts and mark-resight (MR) methods in the evaluation of a highly dense sika deer population (MR estimates ranged between 11 and 53 individuals/km2) on Nakanoshima Island, Hokkaido, Japan, between 1999 and 2006. This deer population experienced two large reductions in density; approximately 200 animals in total were taken from the population through a large-scale population removal and a separate winter mass mortality event. Although the drive counts tracked temporal changes in deer abundance on the island, they overestimated the counts for all years in comparison to the MR method. Increased overestimation in drive count estimates after the winter mass mortality event may be due to a double count derived from increased deer movement and recovery of body condition secondary to the mitigation of density-dependent food limitations. Drive counts are unreliable because they are affected by unfavorable factors such as bad weather, and they are cost-prohibitive to repeat, which precludes the calculation of confidence intervals. Therefore, the use of drive counts to infer the deer abundance needs to be reconsidered.

  9. A Continuous Method for Gene Flow

    PubMed Central

    Palczewski, Michal; Beerli, Peter

    2013-01-01

    Most modern population genetics inference methods are based on the coalescence framework. Methods that allow estimating parameters of structured populations commonly insert migration events into the genealogies. For these methods the calculation of the coalescence probability density of a genealogy requires a product over all time periods between events. Data sets that contain populations with high rates of gene flow among them require an enormous number of calculations. A new method, transition probability-structured coalescence (TPSC), replaces the discrete migration events with probability statements. Because the speed of calculation is independent of the amount of gene flow, this method allows calculating the coalescence densities efficiently. The current implementation of TPSC uses an approximation simplifying the interaction among lineages. Simulations and coverage comparisons of TPSC vs. MIGRATE show that TPSC allows estimation of high migration rates more precisely, but because of the approximation the estimation of low migration rates is biased. The implementation of TPSC into programs that calculate quantities on phylogenetic tree structures is straightforward, so the TPSC approach will facilitate more general inferences in many computer programs. PMID:23666937

  10. Estimating occupancy and abundance of stream amphibians using environmental DNA from filtered water samples

    USGS Publications Warehouse

    Pilliod, David S.; Goldberg, Caren S.; Arkle, Robert S.; Waits, Lisette P.

    2013-01-01

    Environmental DNA (eDNA) methods for detecting aquatic species are advancing rapidly, but with little evaluation of field protocols or precision of resulting estimates. We compared sampling results from traditional field methods with eDNA methods for two amphibians in 13 streams in central Idaho, USA. We also evaluated three water collection protocols and the influence of sampling location, time of day, and distance from animals on eDNA concentration in the water. We found no difference in detection or amount of eDNA among water collection protocols. eDNA methods had slightly higher detection rates than traditional field methods, particularly when species occurred at low densities. eDNA concentration was positively related to field-measured density, biomass, and proportion of transects occupied. Precision of eDNA-based abundance estimates increased with the amount of eDNA in the water and the number of replicate subsamples collected. eDNA concentration did not vary significantly with sample location in the stream, time of day, or distance downstream from animals. Our results further advance the implementation of eDNA methods for monitoring aquatic vertebrates in stream habitats.

  11. Rapid assessment of above-ground biomass of Giant Reed using visibility estimates

    USDA-ARS?s Scientific Manuscript database

    A method for the rapid estimation of biomass and density of giant reed (Arundo donax L.) was developed using estimates of visibility as a predictive tool. Visibility estimates were derived by capturing digital images of a 0.25 m2 polystyrene whiteboard placed a set distance (1m) from the edge of gia...

  12. Scattered image artifacts from cone beam computed tomography and its clinical potential in bone mineral density estimation.

    PubMed

    Ko, Hoon; Jeong, Kwanmoon; Lee, Chang-Hoon; Jun, Hong Young; Jeong, Changwon; Lee, Myeung Su; Nam, Yunyoung; Yoon, Kwon-Ha; Lee, Jinseok

    2016-01-01

    Image artifacts affect the quality of medical images and may obscure anatomic structure and pathology. Numerous methods for suppression and correction of scattered image artifacts have been suggested in the past three decades. In this paper, we assessed the feasibility of using information on scattered artifacts for estimation of bone mineral density (BMD) without dual-energy X-ray absorptiometry (DXA) or quantitative computed tomographic imaging (QCT). To investigate the relationship between scattered image artifacts and BMD, we first used a forearm phantom and cone-beam computed tomography. In the phantom, we considered two regions of interest, bone-equivalent solid material containing 50 mg HA per cm3 and water, to represent low- and high-density trabecular bone, respectively. We compared the scattered image artifacts in the high-density material with those in the low-density material. The technique was then applied to osteoporosis patients and healthy subjects to assess its feasibility for BMD estimation. The high-density material produced a greater number of scattered image artifacts than the low-density material. Moreover, the radius and ulna of healthy subjects produced a greater number of scattered image artifacts than those from osteoporosis patients. Although other parameters, such as bone thickness and X-ray incidence, should be considered, our technique facilitated BMD estimation directly without DXA or QCT. We believe that BMD estimation based on assessment of scattered image artifacts may benefit the prevention, early treatment and management of osteoporosis.

  13. Estimation of Nanodiamond Surface Charge Density from Zeta Potential and Molecular Dynamics Simulations.

    PubMed

    Ge, Zhenpeng; Wang, Yi

    2017-04-20

    Molecular dynamics simulations of nanoparticles (NPs) are increasingly used to study their interactions with various biological macromolecules. Such simulations generally require detailed knowledge of the surface composition of the NP under investigation. Even for some well-characterized nanoparticles, however, this knowledge is not always available. An example is nanodiamond, a nanoscale diamond particle with a surface dominated by oxygen-containing functional groups. In this work, we explore using the harmonic restraint method developed by Venable et al. to estimate the surface charge density (σ) of nanodiamonds. Based on the Gouy-Chapman theory, we convert the experimentally determined zeta potential of a nanodiamond to an effective charge density (σeff), and then use the latter to estimate σ via molecular dynamics simulations. By scanning a series of nanodiamond models, we show that the above method provides a straightforward protocol to determine the surface charge density of relatively large (> ∼100 nm) NPs. Overall, our results suggest that despite certain limitations, the above protocol can be readily employed to guide the model construction for MD simulations, which is particularly useful when only limited experimental information on the NP surface composition is available to a modeler.
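    The zeta-potential-to-σeff conversion via Gouy-Chapman theory can be sketched as follows for a symmetric z:z electrolyte. The electrolyte concentration, temperature and zeta value below are illustrative inputs, not the paper's conditions.

```python
import math

def gouy_chapman_sigma(zeta_mV, c_molar, T=298.15, z=1, eps_r=78.5):
    """Effective surface charge density (C/m^2) from the zeta potential:
    sigma = sqrt(8*c*eps*eps0*kB*T) * sinh(z*e*zeta / (2*kB*T)),
    with c the ion number density of a z:z electrolyte."""
    e = 1.602176634e-19        # elementary charge (C)
    kB = 1.380649e-23          # Boltzmann constant (J/K)
    NA = 6.02214076e23         # Avogadro number (1/mol)
    eps0 = 8.8541878128e-12    # vacuum permittivity (F/m)
    c = c_molar * 1000.0 * NA  # ions per m^3
    zeta = zeta_mV * 1e-3      # mV -> V
    return math.sqrt(8.0 * c * eps_r * eps0 * kB * T) \
        * math.sinh(z * e * zeta / (2.0 * kB * T))

# Illustrative nanodiamond: zeta = -40 mV in a 10 mM 1:1 electrolyte.
sigma = gouy_chapman_sigma(-40.0, 0.010)
print(round(sigma, 4))  # ≈ -0.0101 C/m^2
```

    This σeff then sets the target for the harmonic-restraint simulations that refine the actual surface charge density σ.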

  14. Comparison of capture-recapture and visual count indices of prairie dog densities in black-footed ferret habitat

    USGS Publications Warehouse

    Fagerstone, Kathleen A.; Biggins, Dean E.

    1986-01-01

    Black-footed ferrets (Mustela nigripes) are dependent on prairie dogs (Cynomys spp.) for food and on their burrows for shelter and rearing young. A stable prairie dog population may therefore be the most important factor determining the survival of ferrets. A rapid method of determining prairie dog density would be useful for assessing prairie dog density in colonies currently occupied by ferrets and for selecting prairie dog colonies in other areas for ferret translocation. This study showed that visual counts can provide a rapid density estimate. Visual counts of white-tailed prairie dogs (Cynomys leucurus) were significantly correlated (r = 0.95) with mark-recapture population density estimates on two study areas near Meeteetse, Wyoming. Suggestions are given for use of visual counts.

  15. A sparse matrix-vector multiplication based algorithm for accurate density matrix computations on systems of millions of atoms

    NASA Astrophysics Data System (ADS)

    Ghale, Purnima; Johnson, Harley T.

    2018-06-01

    We present an efficient sparse matrix-vector (SpMV) based method to compute the density matrix P from a given Hamiltonian in electronic structure computations. Our method is a hybrid approach based on Chebyshev-Jackson approximation theory and matrix purification methods like the second order spectral projection purification (SP2). Recent methods to compute the density matrix scale as O(N) in the number of floating point operations but are accompanied by large memory and communication overhead, and they are based on iterative use of the sparse matrix-matrix multiplication kernel (SpGEMM), which is known to be computationally irregular. In addition to irregularity in the sparse Hamiltonian H, the nonzero structure of intermediate estimates of P depends on products of H and evolves over the course of computation. On the other hand, an expansion of the density matrix P in terms of Chebyshev polynomials is straightforward and SpMV based; however, the resulting density matrix may not satisfy the required constraints exactly. In this paper, we analyze the strengths and weaknesses of the Chebyshev-Jackson polynomials and the second order spectral projection purification (SP2) method, and propose to combine them so that the accurate density matrix can be computed using the SpMV computational kernel only, and without having to store the density matrix P. Our method accomplishes these objectives by using the Chebyshev polynomial estimate as the initial guess for SP2, which is followed by using sparse matrix-vector multiplications (SpMVs) to replicate the behavior of the SP2 algorithm for purification. We demonstrate the method on a tight-binding model system of an oxide material containing more than 3 million atoms. In addition, we also present the predicted behavior of our method when applied to near-metallic Hamiltonians with a wide energy spectrum.
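    The Chebyshev-Jackson half of the hybrid scheme (applying an approximate Fermi projector to a vector using only matrix-vector products, which the SP2 iterations would then purify) can be sketched on a dense toy Hamiltonian. The polynomial degree, quadrature, matrix size and spectrum are all illustrative choices, not the paper's settings.

```python
import numpy as np

def jackson(N):
    """Jackson damping factors for a degree-N Chebyshev expansion."""
    k = np.arange(N + 1)
    return ((N - k + 1) * np.cos(np.pi * k / (N + 1))
            + np.sin(np.pi * k / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)

def step_coeffs(mu, N, q=2000):
    """Chebyshev coefficients of the projector theta(mu - x) on [-1, 1]."""
    theta = (np.arange(q) + 0.5) * np.pi / q
    f = (np.cos(theta) < mu).astype(float)
    c = np.array([2.0 / q * np.sum(f * np.cos(k * theta))
                  for k in range(N + 1)])
    c[0] *= 0.5
    return c

def apply_projector(H, v, mu, N=200):
    """P v = theta(mu*I - H) v via the Chebyshev recurrence: SpMV-style,
    never forming P. H must have its spectrum scaled into [-1, 1]."""
    c = step_coeffs(mu, N) * jackson(N)
    t0, t1 = v, H @ v
    out = c[0] * t0 + c[1] * t1
    for k in range(2, N + 1):
        t0, t1 = t1, 2.0 * (H @ t1) - t0   # T_{k+1} = 2 H T_k - T_{k-1}
        out += c[k] * t1
    return out

# Toy Hamiltonian with eigenvalues spread in (-1, 1); mu = 0 half-fills it.
rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
H = Q @ np.diag(np.linspace(-0.8, 0.8, 8)) @ Q.T
trace = sum(apply_projector(H, e, mu=0.0)[i]
            for i, e in enumerate(np.eye(8)))
print(round(float(trace), 1))  # ≈ 4 occupied states
```

    In the paper's setting the resulting vector is only the initial guess: SP2 purification then sharpens the approximate projector so that the density matrix satisfies idempotency and the electron count exactly.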

  16. Breast percent density estimation from 3D reconstructed digital breast tomosynthesis images

    NASA Astrophysics Data System (ADS)

    Bakic, Predrag R.; Kontos, Despina; Carton, Ann-Katherine; Maidment, Andrew D. A.

    2008-03-01

    Breast density is an independent factor of breast cancer risk. In mammograms breast density is quantitatively measured as percent density (PD), the percentage of dense (non-fatty) tissue. To date, clinical estimates of PD have varied significantly, in part due to the projective nature of mammography. Digital breast tomosynthesis (DBT) is a 3D imaging modality in which cross-sectional images are reconstructed from a small number of projections acquired at different x-ray tube angles. Preliminary studies suggest that DBT is superior to mammography in tissue visualization, since superimposed anatomical structures present in mammograms are filtered out. We hypothesize that DBT could also provide a more accurate breast density estimation. In this paper, we propose to estimate PD from reconstructed DBT images using a semi-automated thresholding technique. Preprocessing is performed to exclude the image background and the area of the pectoral muscle. Threshold values are selected manually from a small number of reconstructed slices; a combination of these thresholds is applied to each slice throughout the entire reconstructed DBT volume. The proposed method was validated using images of women with recently detected abnormalities or with biopsy-proven cancers; only contralateral breasts were analyzed. The Pearson correlation and kappa coefficients between the breast density estimates from DBT and the corresponding digital mammogram indicate moderate agreement between the two modalities, comparable with our previous results from 2D DBT projections. Percent density appears to be a robust measure for breast density assessment in both 2D and 3D x-ray breast imaging modalities using thresholding.
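At its core a thresholding PD estimate is a voxel count over the reconstructed volume. A minimal sketch (our illustration, with a single fixed threshold standing in for the paper's manually selected, per-slice-combined thresholds):

```python
import numpy as np

def percent_density(volume, breast_mask, threshold):
    """Percent density: share of breast-tissue voxels above the dense-tissue
    threshold, pooled over the whole reconstructed DBT volume."""
    breast = volume[breast_mask]          # background and pectoral muscle excluded
    return 100.0 * (breast > threshold).sum() / breast.size
```

`breast_mask` is the boolean result of the preprocessing step that excludes the image background and pectoral muscle.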

  17. Estimation of Metal Acceleration by an SF5 Containing Explosive

    DTIC Science & Technology

    1991-06-30

cylinder wall acceleration, the wall energies for the baseline composition and for the composition CW2 were calculated by three different methods: KSM, 6...computational method to calculate the cylinder energies needed for comparison with experimental data, it was decided to compare data from three known methods ...detonation pressure given for the GAB method in Reference 8. They are E(6mm) = 0.1272 (Isp × density)^1.5 - 0.021, and E(19mm) = 0.1580 (Isp × density)^1.5

  18. Radiographic absorptiometry method in measurement of localized alveolar bone density changes.

    PubMed

    Kuhl, E D; Nummikoski, P V

    2000-03-01

    The objective of this study was to measure the accuracy and precision of a radiographic absorptiometry method by using an occlusal density reference wedge in quantification of localized alveolar bone density changes. Twenty-two volunteer subjects had baseline and follow-up radiographs taken of mandibular premolar-molar regions with an occlusal density reference wedge in both films and added bone chips in the baseline films. The absolute bone equivalent densities were calculated in the areas that contained bone chips from the baseline and follow-up radiographs. The differences in densities described the masses of the added bone chips that were then compared with the true masses by using regression analysis. The correlation between the estimated and true bone-chip masses ranged from R = 0.82 to 0.94, depending on the background bone density. There was an average 22% overestimation of the mass of the bone chips when they were in low-density background, and up to 69% overestimation when in high-density background. The precision error of the method, which was calculated from duplicate bone density measurements of non-changing areas in both films, was 4.5%. The accuracy of the intraoral radiographic absorptiometry method is low when used for absolute quantification of bone density. However, the precision of the method is good and the correlation is linear, indicating that the method can be used for serial assessment of bone density changes at individual sites.

  19. Estimating Soil Organic Carbon Stocks and Spatial Patterns with Statistical and GIS-Based Methods

    PubMed Central

    Zhi, Junjun; Jing, Changwei; Lin, Shengpan; Zhang, Cao; Liu, Qiankun; DeGloria, Stephen D.; Wu, Jiaping

    2014-01-01

Accurately quantifying soil organic carbon (SOC) is considered fundamental to studying soil quality, modeling the global carbon cycle, and assessing global climate change. This study evaluated the uncertainties caused by up-scaling of soil properties from the county scale to the provincial scale and from lower-level classification of Soil Species to Soil Group, using four methods: the mean, median, Soil Profile Statistics (SPS), and pedological professional knowledge-based (PKB) methods. For the SPS method, SOC stock is calculated at the county scale by multiplying the mean SOC density value of each soil type in a county by its corresponding area. For the mean or median method, SOC density value of each soil type is calculated using provincial arithmetic mean or median. For the PKB method, SOC density value of each soil type is calculated at the county scale considering soil parent materials and spatial locations of all soil profiles. A newly constructed 1:50,000 soil survey geographic database of Zhejiang Province, China, was used for evaluation. Results indicated that with soil classification levels up-scaling from Soil Species to Soil Group, the variation of estimated SOC stocks among different soil classification levels was obviously lower than that among different methods. The difference in the estimated SOC stocks among the four methods was lowest at the Soil Species level. The differences in SOC stocks among the mean, median, and PKB methods for different Soil Groups resulted from the differences in the procedure of aggregating soil profile properties to represent the attributes of one soil type. Compared with the other three estimation methods (i.e., the SPS, mean and median methods), the PKB method holds significant promise for characterizing spatial differences in SOC distribution because spatial locations of all soil profiles are considered during the aggregation procedure. PMID:24840890
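The SPS aggregation in particular reduces to a sum of products over soil types. A minimal sketch (hypothetical record layout; units assumed):

```python
def soc_stock_sps(county_records):
    """SPS method: county-scale SOC stock as the sum over soil types of
    mean SOC density (kg C per m^2) times that soil type's area (m^2)."""
    return sum(r["soc_density"] * r["area"] for r in county_records)
```

The mean and median methods differ only in computing `soc_density` from provincial rather than county-level profile statistics.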

  20. Nonmechanistic forecasts of seasonal influenza with iterative one-week-ahead distributions.

    PubMed

    Brooks, Logan C; Farrow, David C; Hyun, Sangwon; Tibshirani, Ryan J; Rosenfeld, Roni

    2018-06-15

    Accurate and reliable forecasts of seasonal epidemics of infectious disease can assist in the design of countermeasures and increase public awareness and preparedness. This article describes two main contributions we made recently toward this goal: a novel approach to probabilistic modeling of surveillance time series based on "delta densities", and an optimization scheme for combining output from multiple forecasting methods into an adaptively weighted ensemble. Delta densities describe the probability distribution of the change between one observation and the next, conditioned on available data; chaining together nonparametric estimates of these distributions yields a model for an entire trajectory. Corresponding distributional forecasts cover more observed events than alternatives that treat the whole season as a unit, and improve upon multiple evaluation metrics when extracting key targets of interest to public health officials. Adaptively weighted ensembles integrate the results of multiple forecasting methods, such as delta density, using weights that can change from situation to situation. We treat selection of optimal weightings across forecasting methods as a separate estimation task, and describe an estimation procedure based on optimizing cross-validation performance. We consider some details of the data generation process, including data revisions and holiday effects, both in the construction of these forecasting methods and when performing retrospective evaluation. The delta density method and an adaptively weighted ensemble of other forecasting methods each improve significantly on the next best ensemble component when applied separately, and achieve even better cross-validated performance when used in conjunction. We submitted real-time forecasts based on these contributions as part of CDC's 2015/2016 FluSight Collaborative Comparison. Among the fourteen submissions that season, this system was ranked by CDC as the most accurate.
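The chaining idea can be sketched as repeated sampling from a kernel estimate of one-step deltas. This is an unconditional toy version of our own devising; the paper conditions each delta distribution on the data available at that point in the season.

```python
import numpy as np

rng = np.random.default_rng(0)

def delta_density_sample(history, horizon, n_paths=1000):
    """Chain draws from a kernel estimate of week-to-week deltas to simulate
    whole trajectories (simplified: deltas are treated as i.i.d.)."""
    deltas = np.diff(history)
    h = 1.06 * deltas.std() * len(deltas) ** (-1 / 5)   # Silverman bandwidth
    paths = np.full(n_paths, history[-1], dtype=float)
    out = []
    for _ in range(horizon):
        # KDE sampling: pick an observed delta, then add Gaussian kernel noise
        paths = paths + rng.choice(deltas, size=n_paths) + rng.normal(0, h, n_paths)
        out.append(paths.copy())
    return np.array(out)                                 # shape (horizon, n_paths)
```

Each row of the output is an empirical one-week-ahead distribution; quantiles across paths give the distributional forecast.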

  1. Stereological estimation of cell wall density of DR12 tomato mutant using three-dimensional confocal imaging

    PubMed Central

    Legland, David; Guillon, Fabienne; Kiêu, Kiên; Bouchet, Brigitte; Devaux, Marie-Françoise

    2010-01-01

    Background and Aims The cellular structure of fleshy fruits is of interest to study fruit shape, size, mechanical behaviour or sensory texture. The cellular structure is usually not observed in the whole fruit but, instead, in a sample of limited size and volume. It is therefore difficult to extend measurements to the whole fruit and/or to a specific genotype, or to describe the cellular structure heterogeneity within the fruit. Methods An integrated method is presented to describe the cellular structure of the whole fruit from partial three-dimensional (3D) observations, involving the following steps: (1) fruit sampling, (2) 3D image acquisition and processing and (3) measurement and estimation of relevant 3D morphological parameters. This method was applied to characterize DR12 mutant and wild-type tomatoes (Solanum lycopersicum). Key Results The cellular structure was described using the total volume of the pericarp, the surface area of the cell walls and the ratio of cell-wall surface area to pericarp volume, referred to as the cell-wall surface density. The heterogeneity of cellular structure within the fruit was investigated by estimating variations in the cell-wall surface density with distance to the epidermis. Conclusions The DR12 mutant presents a greater pericarp volume and an increase of cell-wall surface density under the epidermis. PMID:19952012

  2. A System And Method To Determine Thermophysical Properties Of A Multi-Component Gas At Arbitrary Temperature And Pressure

    DOEpatents

    Morrow, Thomas E.; Behring, II, Kendricks A.

    2004-03-09

    A method to determine thermodynamic properties of a natural gas hydrocarbon, when the speed of sound in the gas is known at an arbitrary temperature and pressure. Thus, the known parameters are the sound speed, temperature, pressure, and concentrations of any dilute components of the gas. The method uses a set of reference gases and their calculated density and speed of sound values to estimate the density of the subject gas. Additional calculations can be made to estimate the molecular weight of the subject gas, which can then be used as the basis for mass flow calculations, to determine the speed of sound at standard pressure and temperature, and to determine various thermophysical characteristics of the gas.

  3. Response to Comment on "Plant diversity increases with the strength of negative density dependence at the global scale".

    PubMed

    LaManna, Joseph A; Mangan, Scott A; Alonso, Alfonso; Bourg, Norman A; Brockelman, Warren Y; Bunyavejchewin, Sarayudh; Chang, Li-Wan; Chiang, Jyh-Min; Chuyong, George B; Clay, Keith; Cordell, Susan; Davies, Stuart J; Furniss, Tucker J; Giardina, Christian P; Gunatilleke, I A U Nimal; Gunatilleke, C V Savitri; He, Fangliang; Howe, Robert W; Hubbell, Stephen P; Hsieh, Chang-Fu; Inman-Narahari, Faith M; Janík, David; Johnson, Daniel J; Kenfack, David; Korte, Lisa; Král, Kamil; Larson, Andrew J; Lutz, James A; McMahon, Sean M; McShea, William J; Memiaghe, Hervé R; Nathalang, Anuttara; Novotny, Vojtech; Ong, Perry S; Orwig, David A; Ostertag, Rebecca; Parker, Geoffrey G; Phillips, Richard P; Sack, Lawren; Sun, I-Fang; Tello, J Sebastián; Thomas, Duncan W; Turner, Benjamin L; Vela Díaz, Dilys M; Vrška, Tomáš; Weiblen, George D; Wolf, Amy; Yap, Sandra; Myers, Jonathan A

    2018-05-25

    Chisholm and Fung claim that our method of estimating conspecific negative density dependence (CNDD) in recruitment is systematically biased, and present an alternative method that shows no latitudinal pattern in CNDD. We demonstrate that their approach produces strongly biased estimates of CNDD, explaining why they do not detect a latitudinal pattern. We also address their methodological concerns using an alternative distance-weighted approach, which supports our original findings of a latitudinal gradient in CNDD and a latitudinal shift in the relationship between CNDD and species abundance. Copyright © 2018, American Association for the Advancement of Science.

  4. Effect of Non-speckle Echo Signals on Tissue Characteristics for Liver Fibrosis using Probability Density Function of Ultrasonic B-mode image

    NASA Astrophysics Data System (ADS)

    Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki

    To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses a probability density function of echo signals from liver fibrosis, has been proposed. In this paper, an effect of non-speckle echo signals on tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were determined and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated with the removal of non-speckle signals.
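The multi-Rayleigh model itself is a finite mixture of Rayleigh amplitude distributions, one component per tissue class. A sketch in our own parameterization:

```python
import numpy as np

def multi_rayleigh_pdf(x, weights, sigmas):
    """Multi-Rayleigh model: weighted mixture of Rayleigh pdfs, e.g. one
    component for normal liver and one for fibrotic tissue."""
    x = np.asarray(x, dtype=float)[..., None]
    w = np.asarray(weights, dtype=float)
    s2 = np.asarray(sigmas, dtype=float) ** 2
    comp = (x / s2) * np.exp(-x**2 / (2 * s2))   # Rayleigh pdf per component
    return comp @ w
```

Fitting the weights and scale parameters to observed echo amplitudes, and flagging signals with large modeling error as non-speckle, is the estimation step the abstract describes.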

  5. Grizzly bear density in Glacier National Park, Montana

    USGS Publications Warehouse

    Kendall, K.C.; Stetz, J.B.; Roon, David A.; Waits, L.P.; Boulanger, J.B.; Paetkau, David

    2008-01-01

    We present the first rigorous estimate of grizzly bear (Ursus arctos) population density and distribution in and around Glacier National Park (GNP), Montana, USA. We used genetic analysis to identify individual bears from hair samples collected via 2 concurrent sampling methods: 1) systematically distributed, baited, barbed-wire hair traps and 2) unbaited bear rub trees found along trails. We used Huggins closed mixture models in Program MARK to estimate total population size and developed a method to account for heterogeneity caused by unequal access to rub trees. We corrected our estimate for lack of geographic closure using a new method that utilizes information from radiocollared bears and the distribution of bears captured with DNA sampling. Adjusted for closure, the average number of grizzly bears in our study area was 240.7 (95% CI = 202–303) in 1998 and 240.6 (95% CI = 205–304) in 2000. Average grizzly bear density was 30 bears/1,000 km2, with 2.4 times more bears detected per hair trap inside than outside GNP. We provide baseline information important for managing one of the few remaining populations of grizzlies in the contiguous United States.

  6. Modeling utilization distributions in space and time

    USGS Publications Warehouse

    Keating, K.A.; Cherry, S.

    2009-01-01

W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed. © 2009 by the Ecological Society of America.
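The wrapped Cauchy kernel for circular temporal covariates has a simple closed form. A sketch (our implementation; ρ is an assumed concentration/smoothing parameter in (0, 1)):

```python
import numpy as np

def wrapped_cauchy_kernel(theta, theta_i, rho):
    """Wrapped Cauchy density centered at theta_i, for circular covariates
    such as day of year mapped to radians."""
    return (1 - rho**2) / (2 * np.pi * (1 + rho**2 - 2 * rho * np.cos(theta - theta_i)))

def circular_kde(theta, samples, rho=0.9):
    """Kernel density estimate on the circle: average of wrapped Cauchy kernels."""
    return np.mean([wrapped_cauchy_kernel(theta, t, rho) for t in samples], axis=0)
```

In the product kernel model this circular kernel is multiplied by ordinary planar kernels in x and y to give the four-dimensional UD estimate.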

  7. A geographic analysis of population density thresholds in the influenza pandemic of 1918–19

    PubMed Central

    2013-01-01

    Background Geographic variables play an important role in the study of epidemics. The role of one such variable, population density, in the spread of influenza is controversial. Prior studies have tested for such a role using arbitrary thresholds for population density above or below which places are hypothesized to have higher or lower mortality. The results of such studies are mixed. The objective of this study is to estimate, rather than assume, a threshold level of population density that separates low-density regions from high-density regions on the basis of population loss during an influenza pandemic. We study the case of the influenza pandemic of 1918–19 in India, where over 15 million people died in the short span of less than one year. Methods Using data from six censuses for 199 districts of India (n=1194), the country with the largest number of deaths from the influenza of 1918–19, we use a sample-splitting method embedded within a population growth model that explicitly quantifies population loss from the pandemic to estimate a threshold level of population density that separates low-density districts from high-density districts. Results The results demonstrate a threshold level of population density of 175 people per square mile. A concurrent finding is that districts on the low side of the threshold experienced rates of population loss (3.72%) that were lower than districts on the high side of the threshold (4.69%). Conclusions This paper introduces a useful analytic tool to the health geographic literature. It illustrates an application of the tool to demonstrate that it can be useful for pandemic awareness and preparedness efforts. Specifically, it estimates a level of population density above which policies to socially distance, redistribute or quarantine populations are likely to be more effective than they are for areas with population densities that lie below the threshold. PMID:23425498

  8. Community assessment techniques and the implications for rarefaction and extrapolation with Hill numbers.

    PubMed

    Cox, Kieran D; Black, Morgan J; Filip, Natalia; Miller, Matthew R; Mohns, Kayla; Mortimor, James; Freitas, Thaise R; Greiter Loerzer, Raquel; Gerwing, Travis G; Juanes, Francis; Dudas, Sarah E

    2017-12-01

    Diversity estimates play a key role in ecological assessments. Species richness and abundance are commonly used to generate complex diversity indices that are dependent on the quality of these estimates. As such, there is a long-standing interest in the development of monitoring techniques, their ability to adequately assess species diversity, and the implications for generated indices. To determine the ability of substratum community assessment methods to capture species diversity, we evaluated four methods: photo quadrat, point intercept, random subsampling, and full quadrat assessments. Species density, abundance, richness, Shannon diversity, and Simpson diversity were then calculated for each method. We then conducted a method validation at a subset of locations to serve as an indication for how well each method captured the totality of the diversity present. Density, richness, Shannon diversity, and Simpson diversity estimates varied between methods, despite assessments occurring at the same locations, with photo quadrats detecting the lowest estimates and full quadrat assessments the highest. Abundance estimates were consistent among methods. Sample-based rarefaction and extrapolation curves indicated that differences between Hill numbers (richness, Shannon diversity, and Simpson diversity) were significant in the majority of cases, and coverage-based rarefaction and extrapolation curves confirmed that these dissimilarities were due to differences between the methods, not the sample completeness. Method validation highlighted the inability of the tested methods to capture the totality of the diversity present, while further supporting the notion of extrapolating abundances. Our results highlight the need for consistency across research methods, the advantages of utilizing multiple diversity indices, and potential concerns and considerations when comparing data from multiple sources.
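The three Hill numbers compared here (richness, Shannon diversity, Simpson diversity) come from a single formula, with the Shannon case taken as a limit. A sketch:

```python
import numpy as np

def hill_number(counts, q):
    """Hill number of order q: q=0 gives richness, q=1 exp(Shannon entropy),
    q=2 inverse Simpson concentration."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    if q == 1:
        return np.exp(-np.sum(p * np.log(p)))       # limit as q -> 1
    return np.sum(p ** q) ** (1.0 / (1.0 - q))
```

For a perfectly even community all orders agree; increasing q downweights rare species, which is why the methods' differing detection of rare taxa shifts the three indices differently.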

  9. Estimating Brownian motion dispersal rate, longevity and population density from spatially explicit mark-recapture data on tropical butterflies.

    PubMed

    Tufto, Jarle; Lande, Russell; Ringsby, Thor-Harald; Engen, Steinar; Saether, Bernt-Erik; Walla, Thomas R; DeVries, Philip J

    2012-07-01

1. We develop a Bayesian method for analysing mark-recapture data in continuous habitat using a model in which individuals' movement paths are Brownian motions, life spans are exponentially distributed and capture events occur at given instants in time if individuals are within a certain attractive distance of the traps. 2. The joint posterior distribution of the dispersal rate, longevity, trap attraction distances and a number of latent variables representing the unobserved movement paths and time of death of all individuals is computed using Gibbs sampling. 3. An estimate of absolute local population density is obtained simply by dividing the Poisson counts of individuals captured at given points in time by the estimated total attraction area of all traps. Our approach for estimating population density in continuous habitat avoids the need to define an arbitrary effective trapping area that characterized previous mark-recapture methods in continuous habitat. 4. We applied our method to estimate spatial demography parameters in nine species of neotropical butterflies. Path analysis of interspecific variation in demographic parameters and mean wing length revealed a simple network of strong causation. Larger wing length increases dispersal rate, which in turn increases trap attraction distance. However, higher dispersal rate also decreases longevity, thus explaining the surprising observation of a negative correlation between wing length and longevity. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
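The density estimate in point 3 is a one-liner once the attraction distance has been inferred. A sketch (assuming non-overlapping circular attraction areas):

```python
import math

def absolute_density(count, n_traps, attraction_radius):
    """Absolute local density: individuals captured at an instant divided by
    the summed attraction area of all traps."""
    total_area = n_traps * math.pi * attraction_radius ** 2
    return count / total_area
```

In the full model the radius is a posterior quantity, so density inherits its uncertainty; here it is a fixed input for illustration.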

  10. Estimating abundance

    USGS Publications Warehouse

    Sutherland, Chris; Royle, Andy

    2016-01-01

    This chapter provides a non-technical overview of ‘closed population capture–recapture’ models, a class of well-established models that are widely applied in ecology, such as removal sampling, covariate models, and distance sampling. These methods are regularly adopted for studies of reptiles, in order to estimate abundance from counts of marked individuals while accounting for imperfect detection. Thus, the chapter describes some classic closed population models for estimating abundance, with considerations for some recent extensions that provide a spatial context for the estimation of abundance, and therefore density. Finally, the chapter suggests some software for use in data analysis, such as the Windows-based program MARK, and provides an example of estimating abundance and density of reptiles using an artificial cover object survey of Slow Worms (Anguis fragilis).
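As a concrete instance of the closed-population idea (a classic textbook estimator, not necessarily the chapter's Program MARK example), Chapman's bias-corrected form of the Lincoln-Petersen estimator uses two capture occasions:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate:
    n1 animals marked on occasion 1, n2 captured on occasion 2,
    m2 of which were recaptures."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
```

Dividing the abundance estimate by the effectively sampled area yields density; the spatial extensions mentioned in the chapter estimate that area formally rather than ad hoc.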

  11. Estimating abundance: Chapter 27

    USGS Publications Warehouse

    Royle, J. Andrew

    2016-01-01

    This chapter provides a non-technical overview of ‘closed population capture–recapture’ models, a class of well-established models that are widely applied in ecology, such as removal sampling, covariate models, and distance sampling. These methods are regularly adopted for studies of reptiles, in order to estimate abundance from counts of marked individuals while accounting for imperfect detection. Thus, the chapter describes some classic closed population models for estimating abundance, with considerations for some recent extensions that provide a spatial context for the estimation of abundance, and therefore density. Finally, the chapter suggests some software for use in data analysis, such as the Windows-based program MARK, and provides an example of estimating abundance and density of reptiles using an artificial cover object survey of Slow Worms (Anguis fragilis).

  12. Nearest neighbor density ratio estimation for large-scale applications in astronomy

    NASA Astrophysics Data System (ADS)

    Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.

    2015-09-01

    In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
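One simple variant of a nearest neighbor density ratio estimate can be sketched as follows (our illustration; the paper's estimator and its covariate-shift model selection differ in detail). Taking the ratio of two k-NN density estimates makes the unit-ball volume constant cancel, leaving only sample sizes and k-th neighbour distances:

```python
import numpy as np

def knn_density_ratio(x, train, test, k=5):
    """Estimate w(x) = p_test(x) / p_train(x) from k-th nearest neighbour
    distances; rows of `train`/`test` are d-dimensional points."""
    d = train.shape[1]
    r_tr = np.sort(np.linalg.norm(train - x, axis=1))[k - 1]
    r_te = np.sort(np.linalg.norm(test - x, axis=1))[k - 1]
    # p_hat(x) = k / (n * c_d * r^d); the constant c_d cancels in the ratio
    return (len(train) / len(test)) * (r_tr / r_te) ** d
```

The weights `w(x)` evaluated at labeled training points then re-weight the training loss to match the unlabeled target distribution.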

  13. MPN estimation of qPCR target sequence recoveries from whole cell calibrator samples.

    PubMed

    Sivaganesan, Mano; Siefring, Shawn; Varma, Manju; Haugland, Richard A

    2011-12-01

DNA extracts from enumerated target organism cells (calibrator samples) have been used for estimating Enterococcus cell equivalent densities in surface waters by a comparative cycle threshold (Ct) qPCR analysis method. To compare surface water Enterococcus density estimates from different studies by this approach, either a consistent source of calibrator cells must be used or the estimates must account for any differences in target sequence recoveries from different sources of calibrator cells. In this report we describe two methods for estimating target sequence recoveries from whole cell calibrator samples based on qPCR analyses of their serially diluted DNA extracts and most probable number (MPN) calculation. The first method employed a traditional MPN calculation approach. The second method employed a Bayesian hierarchical statistical modeling approach and a Monte Carlo Markov Chain (MCMC) simulation method to account for the uncertainty in these estimates associated with different individual samples of the cell preparations, different dilutions of the DNA extracts and different qPCR analytical runs. The two methods were applied to estimate mean target sequence recoveries per cell from two different lots of a commercially available source of enumerated Enterococcus cell preparations. The mean target sequence recovery estimates (and standard errors) per cell from Lot A and B cell preparations by the Bayesian method were 22.73 (3.4) and 11.76 (2.4), respectively, when the data were adjusted for potential false positive results. Means were similar for the traditional MPN approach, which cannot comparably assess uncertainty in the estimates. Cell numbers and estimates of recoverable target sequences in calibrator samples prepared from the two cell sources were also used to estimate cell equivalent and target sequence quantities recovered from surface water samples in a comparative Ct method. Our results illustrate the utility of the Bayesian method in accounting for uncertainty, the high degree of precision attainable by the MPN approach and the need to account for the differences in target sequence recoveries from different calibrator sample cell sources when they are used in the comparative Ct method. Published by Elsevier B.V.
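The traditional MPN calculation is a one-parameter maximum-likelihood fit to presence/absence outcomes across dilutions. A sketch by grid search (our simplification; the Bayesian hierarchical version adds levels for cell samples, dilutions, and qPCR runs):

```python
import math

def mpn_estimate(volumes, positives, totals):
    """Traditional MPN: maximum-likelihood concentration lambda (targets per
    unit volume) given, per dilution, the volume assayed, the number of
    positive replicates, and the total replicates."""
    def loglik(lam):
        ll = 0.0
        for v, pos, tot in zip(volumes, positives, totals):
            p = 1.0 - math.exp(-lam * v)       # Poisson chance of >= 1 target
            if pos > 0 and p == 0.0:
                return float("-inf")
            ll += pos * math.log(p) if pos else 0.0
            ll += (tot - pos) * (-lam * v)     # log(1 - p) for negatives
        return ll
    grid = [10 ** (i / 200.0) for i in range(-600, 601)]   # 1e-3 .. 1e3
    return max(grid, key=loglik)
```

For a single dilution with 3 of 5 replicates positive at unit volume, the MLE solves 1 − exp(−λ) = 3/5, i.e. λ = −ln(0.4) ≈ 0.916.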

  14. Extracting galactic structure parameters from multivariated density estimation

    NASA Technical Reports Server (NTRS)

    Chen, B.; Creze, M.; Robin, A.; Bienayme, O.

    1992-01-01

Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification), and principal component analysis (a dimensionality-reduction method), together with nonparametric density estimation, has been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure otherwise unrecognizable, and to place important constraints on the space distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparametric density estimation to substitute for the true densities of both the simulated sample and the real sample in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a set of n equations (where n is the total number of observed points) in m unknown parameters (where m is the number of predefined groups). A least-squares estimation then allows us to determine the density law of the different groups and components in the Galaxy. The output from our software, which can be used in many research fields, also quantifies the systematic error between the model and the observation by a Bayes rule.
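The least-squares step can be sketched directly (hypothetical names; column j of D holds group j's model-predicted density evaluated at each of the n observed points):

```python
import numpy as np

def fit_group_weights(model_densities, observed_density):
    """Least-squares weights of predefined population groups: solve
    min_w || D w - f ||^2, where D is n x m and f is the observed
    (e.g. kernel-estimated) density at the n observed points."""
    w, *_ = np.linalg.lstsq(model_densities, observed_density, rcond=None)
    return w
```

Both D and f would come from nonparametric density estimates in the 5-dimensional observable space, as the abstract describes.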

  15. Estimation of transient heat flux density during the heat supply of a catalytic wall steam methane reformer

    NASA Astrophysics Data System (ADS)

    Settar, Abdelhakim; Abboudi, Saïd; Madani, Brahim; Nebbali, Rachid

    2018-02-01

Due to the endothermic nature of the steam methane reforming reaction, the process is often limited by the heat transfer behavior in the reactors. Poor thermal behavior sometimes leads to slow reaction kinetics, which is characterized by the presence of cold spots in the catalytic zones. Within this framework, the present work consists of a numerical investigation, in conjunction with an experimental one, of the one-dimensional heat transfer phenomenon during the heat supply of a catalytic-wall reactor designed for hydrogen production. The studied reactor is inserted in an electric furnace, where the heat requirement of the endothermic reaction is supplied by an electric heating system. During the heat supply, the unknown heat flux density received by the reactive flow is estimated using inverse methods. On the basis of the catalytic-wall reactor model, an experimental setup is engineered in situ to measure the temperature distribution. The measurements are then injected into the numerical heat flux estimation procedure, which is based on the Function Specification Method (FSM). The measured and estimated temperatures are compared, and the heat flux density crossing the reactor wall is determined.

  16. Stand density guides for predicting growth of forest trees of southwest Idaho

    Treesearch

    Douglas D. Basford; John Sloan; Joy Roberts

    2010-01-01

    This paper presents a method for estimating stand growth from stand density and average diameter in stands of pure and mixed species in Southwest Idaho. The methods are adapted from a model developed for Douglas-fir, ponderosa pine, and lodgepole pine on the Salmon National Forest. Growth data were derived from ponderosa pine increment cores taken from sample plots on...

  17. Uncertain Photometric Redshifts with Deep Learning Methods

    NASA Astrophysics Data System (ADS)

    D'Isanto, A.

    2017-06-01

    Accurate photometric redshift estimation is of fundamental importance in astronomy, owing to the need to obtain redshift information efficiently without spectroscopic analysis. We propose a method for determining accurate multi-modal photo-z probability density functions (PDFs) using Mixture Density Networks (MDN) and Deep Convolutional Networks (DCN). A comparison with a Random Forest (RF) is performed.
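    An MDN's output layer parameterizes a weighted sum of Gaussians; evaluating such a multi-modal photo-z PDF is straightforward. The weights, means, and widths below stand in for the outputs of a trained network and are purely hypothetical:

```python
import numpy as np

def mixture_pdf(z, weights, means, sigmas):
    """Multi-modal photo-z PDF as a weighted sum of Gaussian components,
    the form an MDN's output layer parameterizes."""
    z = np.asarray(z)[..., None]
    comps = np.exp(-0.5 * ((z - means) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return (weights * comps).sum(axis=-1)

# hypothetical MDN outputs for one galaxy: a dominant low-z solution
# plus a secondary high-z mode
w = np.array([0.7, 0.3])
mu = np.array([0.4, 1.1])
sig = np.array([0.05, 0.08])

zgrid = np.linspace(0.0, 2.0, 2001)
pdf = mixture_pdf(zgrid, w, mu, sig)
```

    Because the weights sum to one, the PDF integrates to unity over the redshift grid.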

  18. Direct Importance Estimation with Gaussian Mixture Models

    NASA Astrophysics Data System (ADS)

    Yamada, Makoto; Sugiyama, Masashi

    The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention recently since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
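    GM-KLIEP fits the mixture by maximizing the KLIEP objective directly, without estimating the two densities separately. As a deliberately naive contrast, the quantity being estimated, the importance w(x) = p_nu(x)/p_de(x), can be illustrated with a single-Gaussian plug-in estimate (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
x_nu = rng.normal(0.5, 1.0, 4000)   # sample from the numerator density
x_de = rng.normal(0.0, 1.0, 4000)   # sample from the denominator density

def gauss_pdf(x, mu, s):
    """Density of N(mu, s^2) at x."""
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# naive plug-in stand-in for the importance at x = 0: fit a one-component
# "mixture" (a single Gaussian) to each sample and take the ratio
w_hat = gauss_pdf(0.0, x_nu.mean(), x_nu.std()) / gauss_pdf(0.0, x_de.mean(), x_de.std())
w_true = gauss_pdf(0.0, 0.5, 1.0) / gauss_pdf(0.0, 0.0, 1.0)
```

    Plug-in ratios like this are exactly what KLIEP-style methods avoid, since small errors in the denominator density blow up the ratio; here the contrast simply shows what the importance is.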

  19. Simulated maximum likelihood method for estimating kinetic rates in gene expression.

    PubMed

    Tian, Tianhai; Xu, Songlin; Gao, Junbin; Burrage, Kevin

    2007-01-01

    Kinetic rates in gene expression are key measures of the stability of gene products and provide important information for the reconstruction of genetic regulatory networks. Recent developments in experimental technologies have made it possible to measure the numbers of transcripts and protein molecules in single cells. Although estimation methods based on deterministic models have been proposed aimed at evaluating kinetic rates from experimental observations, these methods cannot tackle noise in gene expression that may arise from the discrete processes of gene expression, small numbers of mRNA transcripts, fluctuations in the activity of transcription factors and variability in the experimental environment. In this paper, we develop effective methods for estimating kinetic rates in genetic regulatory networks. The simulated maximum likelihood method is used to evaluate parameters in stochastic models described by either stochastic differential equations or discrete biochemical reactions. Different types of non-parametric density functions are used to measure the transitional probability of experimental observations. For stochastic models described by biochemical reactions, we propose to use the simulated frequency distribution to evaluate the transitional density based on the discrete nature of stochastic simulations. The genetic optimization algorithm is used as an efficient tool to search for optimal reaction rates. Numerical results indicate that the proposed methods can give robust estimations of kinetic rates with good accuracy.
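    The core of simulated maximum likelihood is to replace an intractable transition density with a non-parametric estimate built from an ensemble of simulated transitions. A minimal sketch on a toy degradation SDE (the model, rates, and kernel bandwidth are hypothetical, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_step(x0, rate, dt, n_sims):
    """One Euler-Maruyama step of a toy degradation SDE
    dX = -rate*X dt + sqrt(X) dW, simulated n_sims times."""
    return x0 - rate * x0 * dt + np.sqrt(np.maximum(x0, 0.0) * dt) * rng.normal(0, 1, n_sims)

def transition_loglik(x0, x1, rate, dt, n_sims=5000, h=0.5):
    """Simulated likelihood: Gaussian-kernel density estimate of
    p(x1 | x0; rate) from the ensemble of simulated transitions."""
    sims = simulate_step(x0, rate, dt, n_sims)
    dens = np.mean(np.exp(-0.5 * ((x1 - sims) / h) ** 2)) / (h * np.sqrt(2 * np.pi))
    return np.log(dens + 1e-300)

# the simulated likelihood should peak near the rate that produced the data
obs0, dt, true_rate = 100.0, 0.1, 0.5
obs1 = obs0 - true_rate * obs0 * dt        # noise-free expected transition
lls = {r: transition_loglik(obs0, obs1, r, dt) for r in (0.1, 0.5, 0.9)}
```

    In the paper this likelihood is then maximized over the rate parameters, there by a genetic optimization algorithm.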

  20. Computing mammographic density from a multiple regression model constructed with image-acquisition parameters from a full-field digital mammographic unit

    PubMed Central

    Lu, Lee-Jane W.; Nishino, Thomas K.; Khamapirad, Tuenchit; Grady, James J; Leonard, Morton H.; Brunder, Donald G.

    2009-01-01

    Breast density (the percentage of fibroglandular tissue in the breast) has been suggested to be a useful surrogate marker for breast cancer risk. It is conventionally measured using screen-film mammographic images by a labor-intensive histogram segmentation method (HSM). We have adapted and modified the HSM for measuring breast density from raw digital mammograms acquired by full-field digital mammography. Multiple regression model analyses showed that many of the instrument parameters for acquiring the screening mammograms (e.g., breast compression thickness, radiological thickness, radiation dose, and compression force) and image pixel intensity statistics of the imaged breasts were strong predictors of the observed threshold values (model R2=0.93) and %-density (R2=0.84). The intra-class correlation coefficient of the %-density for duplicate images was estimated to be 0.80, using the regression model-derived threshold values, and 0.94 if estimated directly from the parameter estimates of the %-density prediction regression model. Therefore, with additional research, these mathematical models could be used to compute breast density objectively, automatically bypassing the HSM step, and could greatly facilitate breast cancer research studies. PMID:17671343

  1. Estimating cavity tree abundance using nearest neighbor imputation methods for western Oregon and Washington forests

    Treesearch

    Hailemariam Temesgen; Tara M. Barrett; Greg Latta

    2008-01-01

    Cavity trees contribute to diverse forest structure and wildlife habitat. For a given stand, the size and density of cavity trees indicate its diversity, complexity, and suitability for wildlife habitat. Size and density of cavity trees vary with stand age, density, and structure. Using Forest Inventory and Analysis (FIA) data collected in western Oregon and western...

  2. Sampling western spruce budworm larvae by frequency of occurrence on lower crown branches.

    Treesearch

    R.R. Mason; R.C. Beckwith

    1990-01-01

    A sampling method was derived whereby budworm density can be estimated from the frequency of occurrence of larvae over a given threshold number instead of from direct counts on branch samples. The model used for converting frequencies to mean densities is appropriate for nonrandom as well as random distributions and, therefore, is applicable to all population densities of...
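    The model above also handles nonrandom (clumped) distributions; in the simplest random case, where counts per branch follow a Poisson distribution, the frequency-to-density conversion with threshold zero reduces to m̂ = -ln(1 - f), which can be checked numerically (the mean and sample size are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
m_true = 1.8                          # true mean larvae per branch (hypothetical)
counts = rng.poisson(m_true, 2000)    # larval counts on sampled branches

# frequency of occurrence: proportion of branches with at least one larva;
# under Poisson sampling, P(zero larvae) = exp(-m), so m = -ln(1 - f)
f = np.mean(counts > 0)
m_hat = -np.log(1.0 - f)
```

    The appeal of the approach is that field crews only need presence/absence above a threshold, not full counts.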

  3. Extension of biomass estimates to pre-assessment periods using density dependent surplus production approach.

    PubMed

    Horbowy, Jan; Tomczak, Maciej T

    2017-01-01

    Biomass reconstructions to pre-assessment periods for commercially important and exploitable fish species are important tools for understanding long-term processes and fluctuations at the stock and ecosystem levels. For some stocks, only fisheries statistics and fishery-dependent data are available for periods before surveys were conducted. In this paper, methods were developed and tested for the backward extension of analytical biomass assessments to years for which only total catch volumes are available. Two of the approaches developed apply the concept of the surplus production rate (SPR), which is shown to be stock density dependent if stock dynamics is governed by classical stock-production models. The other approach used a modified form of the Schaefer production model that allows for backward biomass estimation. The performance of the methods was tested on the Arctic cod and North Sea herring stocks, for which analytical biomass estimates extend back to the late 1940s. Next, the methods were applied to extend biomass estimates of the North-east Atlantic mackerel from the 1970s (analytical biomass estimates available) to the 1950s, for which only total catch volumes were available. For comparison, a method that employs a constant SPR, estimated as an average of the observed values, was also applied. The analyses showed that the performance of the methods is stock and data specific; the methods that work well for one stock may fail for others. The constant SPR method is not recommended in those cases when the SPR is relatively high and the catch volumes in the reconstructed period are low.
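    The backward use of a Schaefer-type model can be illustrated by inverting the forward surplus-production update B(t+1) = B(t) + r·B(t)·(1 - B(t)/K) - C(t), which reduces to solving a quadratic for B(t); the parameter values and catch series below are hypothetical:

```python
import math

def forward(B, r, K, C):
    """One Schaefer surplus-production step: growth minus catch."""
    return B + r * B * (1.0 - B / K) - C

def backward(B_next, r, K, C):
    """Invert the Schaefer step: the smaller root of
    (r/K)*B^2 - (1+r)*B + (C + B_next) = 0 recovers the prior biomass."""
    a, b, c = r / K, -(1.0 + r), C + B_next
    return (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

r, K = 0.4, 1000.0                # hypothetical growth rate and carrying capacity
catches = [50.0, 60.0, 40.0]      # hypothetical total catch volumes
B = [800.0]
for C in catches:                 # simulate forward from a known biomass
    B.append(forward(B[-1], r, K, C))

B_rec = B[-1]                     # reconstruct backwards from final biomass and catches
for C in reversed(catches):
    B_rec = backward(B_rec, r, K, C)
```

    The smaller quadratic root is taken because biomass below the carrying capacity lies on the increasing branch of the forward map.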

  4. Extension of biomass estimates to pre-assessment periods using density dependent surplus production approach

    PubMed Central

    Horbowy, Jan

    2017-01-01

    Biomass reconstructions to pre-assessment periods for commercially important and exploitable fish species are important tools for understanding long-term processes and fluctuations at the stock and ecosystem levels. For some stocks, only fisheries statistics and fishery-dependent data are available for periods before surveys were conducted. In this paper, methods were developed and tested for the backward extension of analytical biomass assessments to years for which only total catch volumes are available. Two of the approaches developed apply the concept of the surplus production rate (SPR), which is shown to be stock density dependent if stock dynamics is governed by classical stock-production models. The other approach used a modified form of the Schaefer production model that allows for backward biomass estimation. The performance of the methods was tested on the Arctic cod and North Sea herring stocks, for which analytical biomass estimates extend back to the late 1940s. Next, the methods were applied to extend biomass estimates of the North-east Atlantic mackerel from the 1970s (analytical biomass estimates available) to the 1950s, for which only total catch volumes were available. For comparison, a method that employs a constant SPR, estimated as an average of the observed values, was also applied. The analyses showed that the performance of the methods is stock and data specific; the methods that work well for one stock may fail for others. The constant SPR method is not recommended in those cases when the SPR is relatively high and the catch volumes in the reconstructed period are low. PMID:29131850

  5. An investigation of methods for updating ionospheric scintillation models using topside in-situ plasma density measurements

    NASA Astrophysics Data System (ADS)

    Secan, James A.

    1991-05-01

    Modern military communication, navigation, and surveillance systems depend on reliable, noise-free transionospheric radio-frequency channels. They can be severely impacted by small-scale electron-density irregularities in the ionosphere, which cause both phase and amplitude scintillation. The basic tools used in planning and mitigation schemes are climatological in nature and thus may greatly over- or under-estimate the effects of scintillation in a given scenario. This report summarizes the results of the first year of a three-year investigation into methods for updating ionospheric scintillation models using observations of ionospheric plasma-density irregularities measured by the DMSP Scintillation Meter (SM) sensor. Results are reported from the analysis of data from a campaign conducted in January 1990 near Tromso, Norway, in which near-coincident in-situ plasma-density and transionospheric scintillation measurements were made. Estimates of the intensity and phase scintillation levels on a transionospheric UHF radio link in the early-evening auroral zone were calculated from DMSP SM data and compared to the levels actually observed.

  6. Comparison of accelerometer data calibration methods used in thermospheric neutral density estimation

    NASA Astrophysics Data System (ADS)

    Vielberg, Kristin; Forootan, Ehsan; Lück, Christina; Löcher, Anno; Kusche, Jürgen; Börger, Klaus

    2018-05-01

    Ultra-sensitive space-borne accelerometers on board low Earth orbit (LEO) satellites are used to measure non-gravitational forces acting on the surface of these satellites. These forces consist of the Earth radiation pressure, the solar radiation pressure and the atmospheric drag, where the first two are caused by the radiation emitted from the Earth and the Sun, respectively, and the latter is related to the thermospheric density. On-board accelerometer measurements contain systematic errors, which need to be mitigated by applying a calibration before their use in gravity recovery or thermospheric neutral density estimations. Therefore, we improve, apply and compare three calibration procedures: (1) a multi-step numerical estimation approach, which is based on the numerical differentiation of the kinematic orbits of LEO satellites; (2) a calibration of accelerometer observations within the dynamic precise orbit determination procedure; and (3) a comparison of observed to modeled forces acting on the surface of LEO satellites. Here, accelerometer measurements obtained by the Gravity Recovery And Climate Experiment (GRACE) are used. Time series of bias and scale factor derived from the three calibration procedures are found to differ at timescales of a few days to months. Results are more similar (statistically significant) when considering longer timescales, with the results of approaches (1) and (2) agreeing better with those of approach (3) during medium and high solar activity. Calibrated accelerometer observations are then applied to estimate thermospheric neutral densities. Differences between accelerometer-based density estimations and those from empirical neutral density models, e.g., NRLMSISE-00, are observed to be significant: on average 22 % of the simulated densities during quiet periods (low solar activity), and up to 28 % during high solar activity.
Therefore, daily corrections are estimated for neutral densities derived from NRLMSISE-00. Our results indicate that these corrections improve model-based density simulations in order to provide density estimates at locations outside the vicinity of the GRACE satellites, in particular during the period of high solar/magnetic activity, e.g., during the St. Patrick's Day storm on 17 March 2015.
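    The bias-and-scale estimation common to all three calibration procedures can be sketched as a least-squares fit of raw accelerometer readings to modeled accelerations; this is a minimal stand-in (the actual procedures involve orbit dynamics, and all numbers below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical "truth": modeled non-gravitational accelerations, m/s^2
a_model = rng.normal(0.0, 1e-7, 200)

# raw accelerometer output with an unknown scale factor and bias,
# plus instrument noise (all values hypothetical)
scale_true, bias_true = 0.96, 2.5e-8
a_raw = (a_model - bias_true) / scale_true + rng.normal(0.0, 1e-9, 200)

# least-squares estimate of scale and bias: a_model ~ scale * a_raw + bias
A = np.column_stack([a_raw, np.ones_like(a_raw)])
(scale, bias), *_ = np.linalg.lstsq(A, a_model, rcond=None)
a_cal = scale * a_raw + bias          # calibrated accelerations
```

    In practice the record's point is precisely that such bias and scale series differ between procedures at short timescales.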

  7. Using spatial capture–recapture to elucidate population processes and space-use in herpetological studies

    USGS Publications Warehouse

    Muñoz, David J.; Miller, David A.W.; Sutherland, Chris; Grant, Evan H. Campbell

    2016-01-01

    The cryptic behavior and ecology of herpetofauna make estimating the impacts of environmental change on demography difficult; yet, the ability to measure demographic relationships is essential for elucidating mechanisms leading to the population declines reported for herpetofauna worldwide. Recently developed spatial capture–recapture (SCR) methods are well suited to standard herpetofauna monitoring approaches. Individually identifying animals and their locations allows accurate estimates of population densities and survival. Spatial capture–recapture methods also allow estimation of parameters describing space-use and movement, which generally are expensive or difficult to obtain using other methods. In this paper, we discuss the basic components of SCR models, the available software for conducting analyses, and the experimental designs based on common herpetological survey methods. We then apply SCR models to Red-backed Salamander (Plethodon cinereus), to determine differences in density, survival, dispersal, and space-use between adult male and female salamanders. By highlighting the capabilities of SCR, and its advantages compared to traditional methods, we hope to give herpetologists the resource they need to apply SCR in their own systems.
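    The core of an SCR model is a detection function linking capture probability to the distance between a trap and an individual's activity center; a common choice is the half-normal form, sketched here with hypothetical parameter values:

```python
import math

def p_detect(dist, p0, sigma):
    """Half-normal SCR detection model: baseline capture probability p0
    at the activity center, decaying with trap-to-center distance dist;
    sigma is the spatial scale of movement."""
    return p0 * math.exp(-dist**2 / (2.0 * sigma**2))

# hypothetical salamander-scale values: p0 = 0.5, sigma = 10 m
p_at_center = p_detect(0.0, 0.5, 10.0)
p_far = p_detect(30.0, 0.5, 10.0)      # three sigma away: rarely captured
```

    Fitting p0 and sigma across a trap array is what lets SCR return density together with the space-use parameters the record mentions.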

  8. An analysis and implications of alternative methods of deriving the density (WPL) terms for eddy covariance flux measurements

    Treesearch

    W. J. Massman; J. -P. Tuovinen

    2006-01-01

    We explore some of the underlying assumptions used to derive the density or WPL terms (Webb et al. (1980) Quart J Roy Meteorol Soc 106:85-100) required for estimating the surface exchange fluxes by eddy covariance. As part of this effort we recast the origin of the density terms as an assumption regarding the density fluctuations rather than as a (dry air) flux...

  9. Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation.

    PubMed

    Keller, Brad M; Nathan, Diane L; Wang, Yan; Zheng, Yuanjie; Gee, James C; Conant, Emily F; Kontos, Despina

    2012-08-01

    The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., "FOR PROCESSING") and vendor postprocessed (i.e., "FOR PRESENTATION"), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. 
Finally, an SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then aggregated into a final dense tissue segmentation that is used to compute breast PD%. Our method is validated on a group of 81 women for whom bilateral, mediolateral oblique, raw and processed screening digital mammograms were available, and agreement is assessed with both continuous and categorical density estimates made by a trained breast-imaging radiologist. Strong association between algorithm-estimated and radiologist-provided breast PD% was detected for both raw (r = 0.82, p < 0.001) and processed (r = 0.85, p < 0.001) digital mammograms on a per-breast basis. Stronger agreement was found when overall breast density was assessed on a per-woman basis for both raw (r = 0.85, p < 0.001) and processed (r = 0.89, p < 0.001) mammograms. Strong agreement between categorical density estimates was also seen (weighted Cohen's κ ≥ 0.79). Repeated measures analysis of variance demonstrated no statistically significant differences between the PD% estimates (p > 0.1) due to either presentation of the image (raw vs processed) or method of PD% assessment (radiologist vs algorithm). The proposed fully automated algorithm was successful in estimating breast percent density from both raw and processed digital mammographic images. Accurate assessment of a woman's breast density is critical in order for the estimate to be incorporated into risk assessment models. These results show promise for the clinical application of the algorithm in quantifying breast density in a repeatable manner, both at time of imaging as well as in retrospective studies.
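    The fuzzy c-means step at the core of the pipeline can be sketched on 1-D gray levels; this minimal version fixes two clusters rather than choosing the number adaptively, omits the segmentation and SVM stages, and uses hypothetical tissue intensities:

```python
import numpy as np

def fcm_1d(x, n_clusters=2, m=2.0, iters=50):
    """Minimal fuzzy c-means on 1-D gray levels: soft memberships u and
    cluster centers c are updated alternately (fuzzifier m = 2)."""
    c = np.percentile(x, np.linspace(25, 75, n_clusters))  # spread-out init
    for _ in range(iters):
        d = np.abs(x[:, None] - c[None, :]) + 1e-12        # pixel-center distances
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                  # memberships sum to 1
        c = (u**m * x[:, None]).sum(axis=0) / (u**m).sum(axis=0)
    return np.sort(c)

rng = np.random.default_rng(8)
# hypothetical gray levels: darker fatty vs brighter fibroglandular tissue
pix = np.concatenate([rng.normal(60.0, 8.0, 3000), rng.normal(160.0, 12.0, 1500)])
centers = fcm_1d(pix)
```

    The soft memberships, rather than hard labels, are what let the subsequent classifier weigh ambiguous pixels in the hazy fat-to-gland transition.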

  10. A population-based tissue probability map-driven level set method for fully automated mammographic density estimations.

    PubMed

    Kim, Youngwoo; Hong, Byung Woo; Kim, Seung Ja; Kim, Jong Hyo

    2014-07-01

    A major challenge when distinguishing glandular tissues on mammograms, especially for area-based estimations, lies in determining a boundary within the hazy transition zone from adipose to glandular tissues. This stems from the nature of mammography, which is a projection of superimposed tissues consisting of different structures. In this paper, the authors present a novel segmentation scheme which incorporates the learned prior knowledge of experts into a level set framework for fully automated mammographic density estimations. The authors modeled the learned knowledge as a population-based tissue probability map (PTPM) that was designed to capture the classification of experts' visual systems. The PTPM was constructed using an image database of a selected population consisting of 297 cases. Three mammogram experts extracted regions of dense and fatty tissue on digital mammograms from an independent subset, which was used to create a tissue probability map for each ROI based on its local statistics. This tissue class probability was taken as a prior in the Bayesian formulation and was incorporated into a level set framework as an additional term controlling the evolution, so that the contour followed an energy surface designed to reflect the experts' knowledge as well as the regional statistics inside and outside the evolving contour. A subset of 100 digital mammograms, which was not used in constructing the PTPM, was used to validate the performance. The energy was minimized when the initial contour reached the boundary of the dense and fatty tissues, as defined by experts. The correlation coefficient between mammographic density measurements made by experts and measurements by the proposed method was 0.93, while that with the conventional level set was 0.47. The proposed method showed a marked improvement over the conventional level set method in terms of accuracy and reliability.
This result suggests that the proposed method successfully incorporated the learned knowledge of the experts' visual systems and has potential to be used as an automated and quantitative tool for estimations of mammographic breast density levels.

  11. Individualized statistical learning from medical image databases: application to identification of brain lesions.

    PubMed

    Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos

    2014-04-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
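    The idea of scoring deviations from a learned normative subspace can be sketched with a single PCA model; this strips out the iterative subspace sampling and target-specific feature selection, and all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
# synthetic normative data: 200 subjects, 50 correlated features
X = rng.normal(0, 1, (200, 50)) @ rng.normal(0, 1, (50, 50))
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 10                # dimensionality capped, in the spirit of the
P = Vt[:k]            # paper's "estimability" criterion; top-k directions

def abnormality(x):
    """Residual norm after projecting onto the learned normative subspace;
    large values flag deviations from normal variation."""
    r = (x - mu) - P.T @ (P @ (x - mu))
    return float(np.linalg.norm(r))

score_normal = abnormality(X[0])
score_lesion = abnormality(X[0] + 20.0)   # simulated gross deviation
```

    Capping k by the available sample size is what keeps the estimated probability densities from overfitting in high-dimensional image spaces.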

  12. How many fish? Comparison of two underwater visual sampling methods for monitoring fish communities

    PubMed Central

    Sini, Maria; Vatikiotis, Konstantinos; Katsoupis, Christos

    2018-01-01

    Background Underwater visual surveys (UVSs) for monitoring fish communities are preferred over fishing surveys in certain habitats, such as rocky or coral reefs and seagrass beds, and are the standard monitoring tool in many cases, especially in protected areas. However, despite their wide application, there are potential biases, mainly due to imperfect detectability and the behavioral responses of fish to the observers. Methods The performance of two methods of UVSs was compared to test whether they give similar results in terms of fish population density, occupancy, species richness, and community composition. Distance sampling (line transects) and plot sampling (strip transects) were conducted at 31 rocky reef sites in the Aegean Sea (Greece) using SCUBA diving. Results Line transects generated significantly higher values of occupancy, species richness, and total fish density compared to strip transects. For most species, density estimates differed significantly between the two sampling methods. For secretive species and species avoiding the observers, the line transect method yielded higher estimates, as it accounted for imperfect detectability and utilized a larger survey area compared to the strip transect method. On the other hand, large-scale spatial patterns of species composition were similar for both methods. Discussion Overall, both methods presented a number of advantages and limitations, which should be considered in survey design. Line transects appear to be more suitable for surveying secretive species, while strip transects should be preferred at high fish densities and for species of high mobility. PMID:29942703
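    Why the two estimators differ can be sketched directly: the strip estimator divides the count by the full nominal strip area, while distance sampling replaces the nominal half-width with an effective half-width derived from the fitted detection function. The counts, transect length, and detection scale below are hypothetical:

```python
import math

def strip_density(n, length, half_width):
    """Plot (strip) transect: assumes every fish within the strip is seen,
    so density = n / (2 * half_width * length)."""
    return n / (2.0 * half_width * length)

def line_density(n, length, sigma):
    """Distance sampling with a half-normal detection function
    g(x) = exp(-x^2 / (2 sigma^2)): the effective strip half-width
    ESW = sigma * sqrt(pi/2) replaces the nominal half-width."""
    esw = sigma * math.sqrt(math.pi / 2.0)
    return n / (2.0 * esw * length)

# same count over the same transect length: correcting for imperfect
# detectability (sigma smaller than the nominal half-width) yields a
# higher density, mirroring the pattern reported for secretive species
d_strip = strip_density(100, 500.0, 5.0)
d_line = line_density(100, 500.0, 2.0)
```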

  13. Determination of CME 3D parameters based on a new full ice-cream cone model

    NASA Astrophysics Data System (ADS)

    Na, Hyeonock; Moon, Yong-Jae

    2017-08-01

    In space weather forecasting, it is important to determine the three-dimensional properties of CMEs. Using 29 limb CMEs, we examine which cone type is close to a CME three-dimensional structure. We find that most CMEs have a near-full ice-cream cone structure, which is a symmetrical circular cone combined with a hemisphere. We develop a full ice-cream cone model based on a new methodology in which the full ice-cream cone consists of many flat cones with different heights and angular widths. By applying this model to 12 SOHO/LASCO halo CMEs, we find that the 3D parameters from our method are similar to those from other stereoscopic methods (i.e., a triangulation method and a Graduated Cylindrical Shell model). In addition, we derive the CME mean density (ρ_mean = M_total/V_cone) based on the full ice-cream cone structure. For several limb events, we determine the CME mass by applying the Solarsoft procedure (e.g., cme_mass.pro) to SOHO/LASCO C3 images. CME volumes are estimated from the full ice-cream cone structure. From the power-law relationship between CME mean density and height, we estimate CME mean densities at 20 solar radii (Rs). We will compare the CME densities at 20 Rs with their corresponding ICME densities.
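    The mean-density definition ρ_mean = M_total/V_cone can be made concrete with the full ice-cream cone geometry (a circular cone capped by a hemisphere of the same radius); the dimensions and mass below are hypothetical order-of-magnitude values, with the mass in practice coming from the Solarsoft procedure:

```python
import math

R_SUN = 6.96e10  # solar radius, cm

def full_cone_volume(r, h_cone):
    """Volume of a full ice-cream cone: a circular cone of height h_cone
    capped by a hemisphere of the same radius r."""
    return math.pi * r**2 * h_cone / 3.0 + 2.0 * math.pi * r**3 / 3.0

# hypothetical cone dimensions and a typical CME mass order of magnitude
r = 3.0 * R_SUN
h = 5.0 * R_SUN
mass_g = 1.0e15
rho_mean = mass_g / full_cone_volume(r, h)   # mean density, g/cm^3
```

    The resulting mean densities fall many orders of magnitude below laboratory scales, which is why the height power law is needed to extrapolate them outward to 20 Rs.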

  14. Density and population estimate of gibbons (Hylobates albibarbis) in the Sabangau catchment, Central Kalimantan, Indonesia.

    PubMed

    Cheyne, Susan M; Thompson, Claire J H; Phillips, Abigail C; Hill, Robyn M C; Limin, Suwido H

    2008-01-01

    We demonstrate that although auditory sampling is a useful tool, this method alone will not provide a truly accurate indication of population size, density and distribution of gibbons in an area. If auditory sampling alone is employed, we show that data collection must take place over a sufficient period to account for variation in calling patterns across seasons. The population of Hylobates albibarbis in the Sabangau catchment, Central Kalimantan, Indonesia, was surveyed from July to December 2005 using methods established previously. In addition, auditory sampling was complemented by detailed behavioural data on six habituated groups within the study area. Here we compare results from this study to those of a 1-month study conducted in 2004. The total population of the Sabangau catchment is estimated to be in the tens of thousands, though numbers, distribution and density for the different forest subtypes vary considerably. We propose that future density surveys of gibbons must include data from all forest subtypes where gibbons are found and that extrapolating from one forest subtype is likely to yield inaccurate density and population estimates. We also propose that auditory census be carried out using at least three listening posts (LPs) in order to increase the area sampled and the chances of hearing groups. Our results suggest that the Sabangau catchment contains one of the largest remaining contiguous populations of Bornean agile gibbon.

  15. Light intensity related to stand density in mature stands of the western white pine type

    Treesearch

    C. A. Wellner

    1948-01-01

    Where tolerance of forest trees or subordinate vegetation is a factor in management, the forester needs a simple field method of estimating or forecasting light intensities in forest stands. The following article describes a method developed for estimating light intensity beneath the canopy in western white pine forests which may have application in other forest types.

  16. Detection limit for rate fluctuations in inhomogeneous Poisson processes

    NASA Astrophysics Data System (ADS)

    Shintani, Toshiaki; Shinomoto, Shigeru

    2012-04-01

    Estimations of an underlying rate from data points are inevitably disturbed by the irregular occurrence of events. Proper estimation methods are designed to avoid overfitting by discounting the irregular occurrence of data, and to determine a constant rate from irregular data derived from a constant probability distribution. However, it can occur that rapid or small fluctuations in the underlying density are undetectable when the data are sparse. For an estimation method, the maximum degree of undetectable rate fluctuations is uniquely determined as a phase transition, when considering an infinitely long series of events drawn from a fluctuating density. In this study, we analytically examine an optimized histogram and a Bayesian rate estimator with respect to their detectability of rate fluctuation, and determine whether their detectable-undetectable phase transition points are given by an identical formula defining a degree of fluctuation in an underlying rate. In addition, we numerically examine the variational Bayes hidden Markov model in its detectability of rate fluctuation, and determine whether the numerically obtained transition point is comparable to those of the other two methods. Such consistency among these three principled methods suggests the presence of a theoretical limit for detecting rate fluctuations.
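    The optimized histogram analyzed here selects its bin width by minimizing a cost of the form C(Δ) = (2k̄ - v)/Δ², where k̄ and v are the mean and variance of the bin counts (the Shimazaki-Shinomoto rule). A minimal sketch on constant-rate data, for which coarse binning should win, with hypothetical event counts:

```python
import numpy as np

def histogram_cost(events, n_bins, t_total):
    """Shimazaki-Shinomoto cost for histogram rate estimation:
    C = (2*mean - var) / width^2, to be minimized over the bin width."""
    counts, _ = np.histogram(events, bins=n_bins, range=(0.0, t_total))
    width = t_total / n_bins
    k, v = counts.mean(), counts.var()
    return (2.0 * k - v) / width**2

rng = np.random.default_rng(3)
# events drawn at a constant rate: any apparent rate fluctuation is noise,
# so the cost should favor wide bins (few of them)
events = rng.uniform(0.0, 100.0, 1000)
costs = {n: histogram_cost(events, n, 100.0) for n in (2, 5, 10, 20, 50)}
best = min(costs, key=costs.get)
```

    When the underlying rate genuinely fluctuates faster than this cost can resolve, the optimizer still returns a flat estimate, which is the detection limit the record characterizes as a phase transition.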

  17. Detection limit for rate fluctuations in inhomogeneous Poisson processes.

    PubMed

    Shintani, Toshiaki; Shinomoto, Shigeru

    2012-04-01

    Estimations of an underlying rate from data points are inevitably disturbed by the irregular occurrence of events. Proper estimation methods are designed to avoid overfitting by discounting the irregular occurrence of data, and to determine a constant rate from irregular data derived from a constant probability distribution. However, it can occur that rapid or small fluctuations in the underlying density are undetectable when the data are sparse. For an estimation method, the maximum degree of undetectable rate fluctuations is uniquely determined as a phase transition, when considering an infinitely long series of events drawn from a fluctuating density. In this study, we analytically examine an optimized histogram and a Bayesian rate estimator with respect to their detectability of rate fluctuation, and determine whether their detectable-undetectable phase transition points are given by an identical formula defining a degree of fluctuation in an underlying rate. In addition, we numerically examine the variational Bayes hidden Markov model in its detectability of rate fluctuation, and determine whether the numerically obtained transition point is comparable to those of the other two methods. Such consistency among these three principled methods suggests the presence of a theoretical limit for detecting rate fluctuations.

  18. Remote sensing of Myriophyllum spicatum L. in a shallow, eutrophic lake

    NASA Technical Reports Server (NTRS)

    Gustafson, T. D.; Adams, M. S.

    1973-01-01

    An aerial 35 mm system was used for the acquisition of vertical color and color infrared imagery of the submergent aquatic macrophytes of Lake Wingra, Wisconsin. A method of photographic interpretation of stem density classes is tested for its ability to make standing crop biomass estimates of Myriophyllum spicatum. The results of film image density analysis are significantly correlated with stem densities and standing crop biomass of Myriophyllum and with the biomass of Oedogonium mats. Photographic methods are contrasted with conventional harvest procedures for efficiency and accuracy.

  19. Density of Jatropha curcas Seed Oil and its Methyl Esters: Measurement and Estimations

    NASA Astrophysics Data System (ADS)

    Veny, Harumi; Baroutian, Saeid; Aroua, Mohamed Kheireddine; Hasan, Masitah; Raman, Abdul Aziz; Sulaiman, Nik Meriam Nik

    2009-04-01

Density data as a function of temperature have been measured for Jatropha curcas seed oil, as well as for biodiesel jatropha methyl esters, at temperatures from above their melting points to 90 °C. The data obtained were used to validate the method proposed by Spencer and Danner using a modified Rackett equation. Densities estimated with the modified Rackett equation agreed closely with the experimental values, with average absolute percent deviations of less than 0.03% for the jatropha oil and 0.04% for the jatropha methyl esters. The Janarthanan empirical equation was also employed to predict jatropha biodiesel densities and performed equally well, with average absolute percent deviations within 0.05%. Two simple linear equations for the densities of jatropha oil and its methyl esters are also proposed in this study.
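The Spencer-Danner modified Rackett form referenced in this abstract can be sketched directly. The pseudo-critical constants below are hypothetical placeholders for a methyl-ester blend, not values from the paper:

```python
R = 8.314  # J/(mol K)

def rackett_molar_volume(T, Tc, Pc, Zra):
    """Modified Rackett (Spencer-Danner) saturated-liquid molar volume:
    Vs = (R*Tc/Pc) * Zra**(1 + (1 - T/Tc)**(2/7)), with T, Tc in K, Pc in Pa."""
    return (R * Tc / Pc) * Zra ** (1.0 + (1.0 - T / Tc) ** (2.0 / 7.0))

def liquid_density(T, Tc, Pc, Zra, molar_mass):
    # density in kg/m^3, molar mass in kg/mol
    return molar_mass / rackett_molar_volume(T, Tc, Pc, Zra)

# Illustrative (hypothetical) pseudo-critical constants:
rho_40 = liquid_density(313.15, Tc=785.0, Pc=1.2e6, Zra=0.24, molar_mass=0.292)
rho_90 = liquid_density(363.15, Tc=785.0, Pc=1.2e6, Zra=0.24, molar_mass=0.292)
```

The fitted Rackett parameter Zra is what the validation against measured densities effectively constrains; density decreases monotonically with temperature, as the measurements show.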

  20. A Simulation Approach to Assessing Sampling Strategies for Insect Pests: An Example with the Balsam Gall Midge

    PubMed Central

    Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.

    2013-01-01

    Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
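The pre-sampling simulation idea above can be sketched with a negative binomial per-tree count drawn as a gamma-Poisson mixture (the standard construction with clumping parameter k). Parameter values here are illustrative only, not fitted to the midge data:

```python
import random

def poisson(lam, rng):
    # Knuth's algorithm; adequate for the small rates used here
    threshold, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def nb_tree_count(mean, k, rng):
    """Negative binomial gall count per tree via a gamma-Poisson mixture
    (clumping parameter k; smaller k means stronger aggregation)."""
    return poisson(rng.gammavariate(k, mean / k), rng)

def simulate_mean_estimates(true_mean=3.0, k=0.8, n_trees=40, n_reps=500, seed=1):
    # distribution of sample-mean density estimates at a preset sample size
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_reps):
        sample = [nb_tree_count(true_mean, k, rng) for _ in range(n_trees)]
        estimates.append(sum(sample) / n_trees)
    return estimates
```

Repeating this over a grid of n values shows how quickly the sample mean converges on the true density, which is the kind of assessment the authors perform with pre-sampling data (their simulations additionally handle spatial autocorrelation, which this sketch ignores).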

  1. Trapezium Bone Density-A Comparison of Measurements by DXA and CT.

    PubMed

    Breddam Mosegaard, Sebastian; Breddam Mosegaard, Kamille; Bouteldja, Nadia; Bæk Hansen, Torben; Stilling, Maiken

    2018-01-18

Bone density may influence the primary fixation of cementless implants, and poor bone density may increase the risk of implant failure. Before deciding on total joint replacement as treatment for osteoarthritis of the trapeziometacarpal joint, it is valuable to determine the trapezium bone density. The aims of this study were to: (1) determine the correlation between measurements of bone mineral density of the trapezium obtained by dual-energy X-ray absorptiometry (DXA) scans using a circumference method and a new inner-ellipse method; and (2) compare those to measurements of bone density obtained by computed tomography (CT) scans in Hounsfield units (HU). We included 71 hands from 59 patients with a mean age of 59 years (range 43-77). All patients had Eaton-Glickel stage II-IV trapeziometacarpal (TM) joint osteoarthritis, were under evaluation for trapeziometacarpal total joint replacement, and underwent DXA and CT wrist scans. There was an excellent correlation (r = 0.94) between DXA bone mineral density measures using the circumference and the inner-ellipse methods. There was a moderate correlation between bone density measures obtained by DXA and CT scans, with r = 0.49 for the circumference method and r = 0.55 for the inner-ellipse method. DXA may be used in pre-operative evaluation of trapezium bone quality, and the simpler DXA inner-ellipse measurement method can replace the DXA circumference method in estimating the bone density of the trapezium.

  2. Shape information from a critical point analysis of calculated electron density maps: application to DNA-drug systems

    NASA Astrophysics Data System (ADS)

    Leherte, L.; Allen, F. H.; Vercauteren, D. P.

    1995-04-01

A computational method is described for mapping the volume within the DNA double helix accessible to a groove-binding antibiotic, netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to the local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to give a good representation of the electron density function at various resolutions, while at the atomic level the ellipsoid method gives results in close agreement with those from the conventional, spherical, van der Waals approach.

  3. Shape information from a critical point analysis of calculated electron density maps: Application to DNA-drug systems

    NASA Astrophysics Data System (ADS)

    Leherte, Laurence; Allen, Frank H.

    1994-06-01

    A computational method is described for mapping the volume within the DNA double helix accessible to the groove-binding antibiotic netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to give a good representation of the electron density function at various resolutions. At the atomic level, the ellipsoid method gives results which are in close agreement with those from the conventional spherical van der Waals approach.

  4. How Much Water is in That Snowpack? Improving Basin-wide Snow Water Equivalent Estimates from the Airborne Snow Observatory

    NASA Astrophysics Data System (ADS)

    Bormann, K.; Painter, T. H.; Marks, D. G.; Kirchner, P. B.; Winstral, A. H.; Ramirez, P.; Goodale, C. E.; Richardson, M.; Berisford, D. F.

    2014-12-01

In the western US, snowmelt from the mountains contributes the vast majority of the fresh water supply in an otherwise dry region. With much of California currently experiencing extreme drought, it is critical for water managers to have accurate basin-wide estimates of snow water content during the spring melt season. At the forefront of basin-scale snow monitoring is the Jet Propulsion Laboratory's Airborne Snow Observatory (ASO). With combined LiDAR/spectrometer instruments and weekly flights over key basins throughout California, the ASO suite is capable of retrieving high-resolution basin-wide snow depth and albedo observations. To make best use of these high-resolution snow depths, spatially distributed snow density data are required to convert the measured depths to snow water equivalent (SWE). Snow density is a spatially and temporally variable property and is difficult to estimate at basin scales. Currently, ASO uses a physically based snow model (iSnobal) to resolve distributed snow density dynamics across the basin. However, there are issues with the density algorithms in iSnobal, particularly for snow depths below 0.50 m. This shortcoming limited the use of snow density fields from iSnobal during the poor snowfall year of 2014 in the Sierra Nevada, where snow depths were generally low. A deeper understanding of iSnobal model performance and uncertainty for snow density estimation is required. In this study, the model is compared to an existing climate-based statistical method for basin-wide snow density estimation in the Tuolumne basin in the Sierra Nevada, and to sparse field density measurements. The objective of this study is to improve the water resource information provided to water managers during future ASO operations by reducing the uncertainty introduced during the snow depth to SWE conversion.
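The depth-to-SWE conversion at the heart of this abstract is a simple scaling, SWE = depth x (snow density / water density), applied per grid cell. A minimal sketch (cell geometry and values are illustrative):

```python
def swe_m(depth_m, snow_density_kg_m3, water_density_kg_m3=1000.0):
    """Snow water equivalent in meters of water: SWE = depth * rho_snow / rho_water."""
    return depth_m * snow_density_kg_m3 / water_density_kg_m3

def basin_swe_km3(cell_depths_m, cell_densities_kg_m3, cell_area_m2):
    # total water volume over a grid of equal-area cells, in km^3
    total_m3 = sum(swe_m(d, r) * cell_area_m2
                   for d, r in zip(cell_depths_m, cell_densities_kg_m3))
    return total_m3 / 1e9
```

The difficulty the study addresses is entirely in supplying the per-cell density field (from iSnobal or a statistical model); once density is known, the conversion itself introduces no further uncertainty.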

  5. Estimating cosmic velocity fields from density fields and tidal tensors

    NASA Astrophysics Data System (ADS)

    Kitaura, Francisco-Shu; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan

    2012-10-01

In this work we investigate the non-linear and non-local relation between cosmological density and peculiar velocity fields. Our goal is to provide an algorithm for the reconstruction of the non-linear velocity field from the fully non-linear density. We find that including the gravitational tidal field tensor, using second-order Lagrangian perturbation theory based upon an estimate of the linear component of the non-linear density field, significantly improves the estimate of the cosmic flow in comparison to linear theory, not only in the low-density but also, and more dramatically, in the high-density regions. In particular we test two estimates of the linear component: the lognormal model and iterative Lagrangian linearization. The present approach relies on a rigorous higher-order Lagrangian perturbation theory analysis which incorporates a non-local relation. It does not require additional fitting from simulations and is in this sense parameter-free; it is independent of statistical-geometrical optimization and is straightforward and efficient to compute. The method is demonstrated to yield an unbiased estimator of the velocity field on scales ≳5 h-1 Mpc with closely Gaussian distributed errors. Moreover, the statistics of the divergence of the peculiar velocity field are extremely well recovered, showing good agreement with the true statistics from N-body simulations. The typical errors of about 10 km s-1 (1σ confidence intervals) are reduced by more than 80 per cent with respect to linear theory in the scale range between 5 and 10 h-1 Mpc in high-density regions (δ > 2). We also find that iterative Lagrangian linearization is significantly superior to the lognormal model in the low-density regime.
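The linear-theory baseline the authors improve upon follows from the linearized continuity equation, div v = -aHf * delta, solved in Fourier space as v(k) = i aHf delta(k) k/k^2. A minimal one-dimensional sketch (illustrative values; the paper works in 3-D with the full tidal tensor):

```python
import numpy as np

def linear_velocity_1d(delta, box_size, aHf):
    """Linear-theory peculiar velocity from a 1-D density contrast field:
    continuity  dv/dx = -aHf * delta  =>  v_k = 1j * aHf * delta_k / k."""
    n = delta.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    delta_k = np.fft.fft(delta)
    v_k = np.zeros_like(delta_k)
    nonzero = k != 0.0
    v_k[nonzero] = 1j * aHf * delta_k[nonzero] / k[nonzero]
    return np.fft.ifft(v_k).real

# sanity check against the analytic solution for a single plane wave
L, n, aHf, eps = 100.0, 256, 50.0, 0.1
x = np.linspace(0.0, L, n, endpoint=False)
k0 = 2.0 * np.pi / L
delta = eps * np.cos(k0 * x)
v = linear_velocity_1d(delta, L, aHf)
v_exact = -(aHf * eps / k0) * np.sin(k0 * x)
```

The paper's point is that feeding the fully non-linear delta into this linear relation is biased in dense regions; substituting an estimated linear component and adding the second-order tidal term largely removes that bias.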

  6. Comparison of Drive Counts and Mark-Resight As Methods of Population Size Estimation of Highly Dense Sika Deer (Cervus nippon) Populations

    PubMed Central

    Takeshita, Kazutaka; Yoshida, Tsuyoshi; Igota, Hiromasa; Matsuura, Yukiko

    2016-01-01

Assessing temporal changes in abundance indices is an important issue in the management of large herbivore populations. The drive counts method has been frequently used as a deer abundance index in mountainous regions. However, despite an inherent risk of observation errors in drive counts, which increase with deer density, evaluations of the utility of drive counts at high deer densities remain scarce. We compared the drive counts and mark-resight (MR) methods in the evaluation of a highly dense sika deer population (MR estimates ranged between 11 and 53 individuals/km2) on Nakanoshima Island, Hokkaido, Japan, between 1999 and 2006. This deer population experienced two large reductions in density; approximately 200 animals in total were taken from the population through a large-scale population removal and a separate winter mass mortality event. Although the drive counts tracked temporal changes in deer abundance on the island, they overestimated abundance in all years in comparison to the MR method. Increased overestimation in drive count estimates after the winter mass mortality event may be due to double counting caused by increased deer movement and recovery of body condition after the mitigation of density-dependent food limitations. Drive counts are unreliable because they are affected by unfavorable factors such as bad weather, and they are too costly to repeat, which precludes the calculation of confidence intervals. Therefore, the use of drive counts to infer deer abundance needs to be reconsidered. PMID:27711181
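As a simple illustration of the mark-resight idea (not the specific model used in this study), Chapman's bias-corrected Lincoln-Petersen estimator scales the marked fraction seen during a resight survey up to a population size; the counts below are hypothetical:

```python
def chapman_estimate(marked, resighted_total, resighted_marked):
    """Chapman's bias-corrected Lincoln-Petersen estimator:
    N_hat = (M+1)(C+1)/(R+1) - 1, where M animals are marked, C animals are
    seen in the resight survey, and R of those seen carry marks."""
    return ((marked + 1) * (resighted_total + 1)) / (resighted_marked + 1) - 1

def density_per_km2(n_hat, area_km2):
    # abundance estimate divided by surveyed area gives density
    return n_hat / area_km2

n_hat = chapman_estimate(marked=50, resighted_total=60, resighted_marked=20)
```

Because the estimator has a closed form and a known variance, repeated resight surveys yield confidence intervals cheaply, which is precisely the advantage over drive counts noted in the abstract.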

  7. Ionospheric tomography by gradient-enhanced kriging with STEC measurements and ionosonde characteristics

    NASA Astrophysics Data System (ADS)

    Minkwitz, David; van den Boogaart, Karl Gerald; Gerzen, Tatjana; Hoque, Mainul; Hernández-Pajares, Manuel

    2016-11-01

    The estimation of the ionospheric electron density by kriging is based on the optimization of a parametric measurement covariance model. First, the extension of kriging with slant total electron content (STEC) measurements based on a spatial covariance to kriging with a spatial-temporal covariance model, assimilating STEC data of a sliding window, is presented. Secondly, a novel tomography approach by gradient-enhanced kriging (GEK) is developed. Beyond the ingestion of STEC measurements, GEK assimilates ionosonde characteristics, providing peak electron density measurements as well as gradient information. Both approaches deploy the 3-D electron density model NeQuick as a priori information and estimate the covariance parameter vector within a maximum likelihood estimation for the dedicated tomography time stamp. The methods are validated in the European region for two periods covering quiet and active ionospheric conditions. The kriging with spatial and spatial-temporal covariance model is analysed regarding its capability to reproduce STEC, differential STEC and foF2. Therefore, the estimates are compared to the NeQuick model results, the 2-D TEC maps of the International GNSS Service and the DLR's Ionospheric Monitoring and Prediction Center, and in the case of foF2 to two independent ionosonde stations. Moreover, simulated STEC and ionosonde measurements are used to investigate the electron density profiles estimated by the GEK in comparison to a kriging with STEC only. The results indicate a crucial improvement in the initial guess by the developed methods and point out the potential compensation for a bias in the peak height hmF2 by means of GEK.
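Kriging, as used above, interpolates by solving for weights under a parametric covariance model. A minimal ordinary-kriging sketch in one dimension with an exponential covariance (the paper's spatial-temporal covariance and gradient-enhanced extension are considerably more elaborate; values here are illustrative):

```python
import numpy as np

def ordinary_krige(xs, ys, x0, sill=1.0, range_len=10.0):
    """Ordinary kriging of scattered 1-D data under an exponential covariance
    C(h) = sill * exp(-|h| / range_len). Solves the standard OK system with a
    Lagrange multiplier enforcing that the weights sum to one."""
    n = len(xs)
    cov = lambda h: sill * np.exp(-np.abs(h) / range_len)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(xs[:, None] - xs[None, :])  # data-data covariances
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(xs - x0)                        # data-target covariances
    weights = np.linalg.solve(A, b)[:n]
    return float(weights @ ys)

xs = np.array([0.0, 2.0, 5.0, 7.0, 11.0])
ys = np.array([1.0, 3.0, 2.0, 4.0, 3.5])
```

With zero nugget the predictor honors the data exactly, which is why the covariance parameters (here sill and range, estimated in the paper by maximum likelihood) control all of the behavior between observations.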

  8. Using spatial mark-recapture for conservation monitoring of grizzly bear populations in Alberta.

    PubMed

    Boulanger, John; Nielsen, Scott E; Stenhouse, Gordon B

    2018-03-26

One of the challenges in conservation is determining patterns and responses in population density and distribution as they relate to habitat and changes in anthropogenic activities. We applied spatially explicit capture-recapture (SECR) methods, combined with density surface modelling, to five grizzly bear (Ursus arctos) management areas (BMAs) in Alberta, Canada, to assess SECR methods and to explore factors influencing bear distribution. Here we used models of grizzly bear habitat and mortality risk to test local density associations using density surface modelling. Results demonstrated that BMA-specific factors influenced density, as did the effects of habitat and topography on detections and movements of bears. Estimates from SECR were similar to those from closed population models and telemetry data, but with similar or higher levels of precision. Habitat was most associated with areas of higher bear density in the north, whereas mortality risk was most (negatively) associated with density of bears in the south. Comparisons of the distribution of mortality risk and habitat revealed differences by BMA that in turn influenced local abundance of bears. Combining SECR methods with density surface modelling increases the resolution of mark-recapture methods by directly inferring the effect of spatial factors on regulating local densities of animals.

  9. Radiomic modeling of BI-RADS density categories

    NASA Astrophysics Data System (ADS)

    Wei, Jun; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Zhou, Chuan; Hadjiiski, Lubomir

    2017-03-01

Screening mammography is the most effective and low-cost method to date for early cancer detection. Mammographic breast density has been shown to be highly correlated with breast cancer risk. We are developing a radiomic model for BI-RADS density categorization on full-field digital mammography (FFDM) with a supervised machine learning approach. With IRB approval, we retrospectively collected 478 FFDMs from 478 women. As a gold standard, breast density was assessed by an MQSA radiologist based on BI-RADS categories. The raw FFDMs were used for computerized density assessment. The raw FFDM first underwent log-transform to approximate the x-ray sensitometric response, followed by multiscale processing to enhance the fibroglandular densities and parenchymal patterns. Three ROIs were automatically identified based on the keypoint distribution, where the keypoints were obtained as the extrema in the image Gaussian scale-space. A total of 73 features, including intensity and texture features that describe the density and the parenchymal pattern, were extracted from each breast. Our BI-RADS density estimator was constructed using a random forest classifier. We used a 10-fold cross validation resampling approach to estimate the errors. With the random forest classifier, computerized density categories for 412 of the 478 cases agreed with the radiologist's assessment (weighted kappa = 0.93). The machine learning method with radiomic features as predictors demonstrated high accuracy in classifying FFDMs into BI-RADS density categories. Further work is underway to improve system performance and to perform independent testing using a large unseen FFDM set.
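The agreement statistic reported above, weighted kappa, penalizes disagreements between ordinal BI-RADS categories by their squared distance. A generic quadratic-weighted Cohen's kappa (a standard formulation, not the authors' code):

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, n_categories):
    """Cohen's kappa with quadratic weights w_ij = (i-j)^2 / (k-1)^2 for
    ordinal labels 0..k-1. Returns 1.0 for perfect agreement, 0.0 for
    chance-level agreement, and negative values for systematic disagreement."""
    n = len(rater_a)
    observed = Counter(zip(rater_a, rater_b))   # joint label counts
    pa = Counter(rater_a)                       # marginal counts, rater A
    pb = Counter(rater_b)                       # marginal counts, rater B
    k = n_categories
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = (i - j) ** 2 / (k - 1) ** 2
            num += w * observed[(i, j)]         # observed weighted disagreement
            den += w * pa[i] * pb[j] / n        # expected under independence
    return 1.0 - num / den
```

Applied to the classifier's four BI-RADS categories against the radiologist's labels, a value of 0.93 indicates near-perfect ordinal agreement.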

  10. Multi-Paradigm Multi-Scale Simulations for Fuel Cell Catalysts and Membranes

    DTIC Science & Technology

    2006-01-01

transfer studies on model systems. Applying newly developed density functionals QM (X3LYP) for estimating the thermodynamics and kinetic energy... Density functional theory methods: We have used many QM methods to probe chemical reaction mechanisms and find that the B3LYP and X3LYP [6] flavors of DFT... carried out QM calculations on the surface reactivity of the Pt and PtRu anode catalysts. This QM uses a new ab initio DFT-GGA method (X3LYP) [6

  11. Comparison of multiple methods to measure maternal fat mass in late gestation12

    PubMed Central

    Marshall, Nicole E; Murphy, Elizabeth J; King, Janet C; Haas, E Kate; Lim, Jeong Y; Wiedrick, Jack; Thornburg, Kent L; Purnell, Jonathan Q

    2016-01-01

Background: Measurements of maternal fat mass (FM) are important for studies of maternal and fetal health. Common methods of estimating FM have not previously been compared in pregnancy with measurements using more complete body composition models. Objectives: The goal of this pilot study was to compare multiple methods that estimate FM, including 2-, 3-, and 4-compartment models, in pregnant women at term, and to determine how these measures compare with FM by dual-energy X-ray absorptiometry (DXA) 2 wk postpartum. Design: Forty-one healthy pregnant women with prepregnancy body mass index (in kg/m²) of 19 to 46 underwent skinfold thickness (SFT), bioelectrical impedance analysis (BIA), body density (Db) via air displacement plethysmography (ADP), and deuterium dilution of total body water (TBW), with and without adjustments for gestational age using van Raaij (VRJ) equations, at 37–38 wk of gestation and 2 wk postpartum to derive 8 estimates of maternal FM. Deming regression analysis and Bland-Altman plots were used to compare methods of FM assessment. Results: Systematic differences in FM estimates were found. Methods, ordered from lowest to highest FM estimate, were 4-compartment, DXA, TBW(VRJ), 3-compartment, Db(VRJ), BIA, ADP body density, and SFT, ranging from a mean ± SD of 29.5 ± 13.2 kg via 4-compartment to 39.1 ± 11.7 kg via SFT. Compared with postpartum DXA values, Deming regressions revealed no substantial departures from trend lines in maternal FM in late pregnancy for any of the methods. The 4-compartment method showed substantial negative (underestimating) constant bias, and the ADP body density and SFT methods showed positive (overestimating) constant bias. ADP via Db(VRJ) and 3-compartment methods had the highest precision; BIA had the lowest. Conclusions: ADP using gestational age-specific equations may provide a reasonable and practical measurement of maternal FM across a spectrum of body weights in late pregnancy. SFT would be acceptable for use in larger studies. This trial was registered at clinicaltrials.gov as NCT02586714. PMID:26888714

  12. Productivity and population density estimates of the dengue vector mosquito Aedes aegypti (Stegomyia aegypti) in Australia.

    PubMed

    Williams, C R; Johnson, P H; Ball, T S; Ritchie, S A

    2013-09-01

New mosquito control strategies centred on modifying populations require knowledge of existing population densities at release sites and an understanding of breeding site ecology. Using a quantitative pupal survey method, we investigated production of the dengue vector Aedes aegypti (L.) (Stegomyia aegypti) (Diptera: Culicidae) in Cairns, Queensland, Australia, and found that garden accoutrements represented the most common container type. Deliberately placed 'sentinel' containers were set at seven houses and sampled for pupae over 10 weeks during the wet season. Pupal production was approximately constant; tyres and buckets represented the most productive container types. Sentinel tyres produced the largest female mosquitoes, but were relatively rare in the field survey. We then used field-collected data to make estimates of per premises population density using three different approaches. Estimates of female Ae. aegypti abundance per premises made using the container-inhabiting mosquito simulation (CIMSiM) model [95% confidence interval (CI) 18.5-29.1 females] concorded reasonably well with estimates obtained using a standing crop calculation based on pupal collections (95% CI 8.8-22.5) and using BG-Sentinel traps and a sampling rate correction factor (95% CI 6.2-35.2). By first describing local Ae. aegypti productivity, we were able to compare three separate population density estimates which provided similar results. We anticipate that this will provide researchers and health officials with several tools with which to make estimates of population densities. © 2012 The Royal Entomological Society.

  13. Influence of entanglements on glass transition temperature of polystyrene

    NASA Astrophysics Data System (ADS)

    Ougizawa, Toshiaki; Kinugasa, Yoshinori

    2013-03-01

Chain entanglement is an essential behavior of polymeric molecules and appears to affect many physical properties, including not only melt viscosity but also the glass transition temperature (Tg). However, quantitative estimation has not been attained, because the entanglement density is considered an intrinsic property of the polymer in the melt state that depends on its chemical structure. The freeze-drying method is known as one of the few ways to prepare samples with different entanglement densities from dilute solution. In this study, the influence of entanglements on the Tg of polystyrene obtained by the freeze-drying method was estimated quantitatively. The freeze-dried samples showed a Tg depression with decreasing concentration of the precursor solution, owing to the lower entanglement density, and the depressed Tg saturated when almost no intermolecular entanglements were formed. The molecular weight dependence of the maximum Tg depression is discussed.

  14. AXIALLY ORIENTED SECTIONS OF NUMMULITIDS: A TOOL TO INTERPRET LARGER BENTHIC FORAMINIFERAL DEPOSITS

    PubMed Central

    Hohenegger, Johann; Briguglio, Antonino

    2015-01-01

    The “critical shear velocity” and “settling velocity” of foraminiferal shells are important parameters for determining hydrodynamic conditions during deposition of Nummulites banks. These can be estimated by determining the size, shape, and density of nummulitid shells examined in axial sections cut perpendicular to the bedding plane. Shell size and shape can be determined directly from the shell diameter and thickness, but density must be calculated indirectly from the thin section. Calculations using the half-tori method approximate shell densities by equalizing the chamber volume of each half whorl, based on the half whorl’s lumen area and its center of gravity. Results from this method yield the same lumen volumes produced empirically by micro-computed tomography. The derived hydrodynamic parameters help estimate the minimum flow velocities needed to entrain nummulitid tests and provide a potential tool to account for the nature of their accumulations. PMID:26166914

  15. AXIALLY ORIENTED SECTIONS OF NUMMULITIDS: A TOOL TO INTERPRET LARGER BENTHIC FORAMINIFERAL DEPOSITS.

    PubMed

    Hohenegger, Johann; Briguglio, Antonino

    2012-04-01

    The "critical shear velocity" and "settling velocity" of foraminiferal shells are important parameters for determining hydrodynamic conditions during deposition of Nummulites banks. These can be estimated by determining the size, shape, and density of nummulitid shells examined in axial sections cut perpendicular to the bedding plane. Shell size and shape can be determined directly from the shell diameter and thickness, but density must be calculated indirectly from the thin section. Calculations using the half-tori method approximate shell densities by equalizing the chamber volume of each half whorl, based on the half whorl's lumen area and its center of gravity. Results from this method yield the same lumen volumes produced empirically by micro-computed tomography. The derived hydrodynamic parameters help estimate the minimum flow velocities needed to entrain nummulitid tests and provide a potential tool to account for the nature of their accumulations.

  16. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. 
We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  17. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls, while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. This flaw size is denoted α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on minimum required PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
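The binomial reasoning behind the classical 29-of-29 criterion can be checked directly: if the true POD at the demonstrated flaw size were only 0.90, a flawless run of 29 detections would occur with probability 0.9^29 ≈ 0.047 < 0.05, which is what supports the 90% POD / 95% confidence claim. A small sketch:

```python
from math import comb

def pass_probability(pod, n_flaws=29, max_misses=0):
    """Probability of passing a demonstration that allows at most max_misses
    missed detections out of n_flaws, with each flaw detected independently
    with probability pod (binomial model)."""
    return sum(comb(n_flaws, m) * (1 - pod) ** m * pod ** (n_flaws - m)
               for m in range(max_misses + 1))

# At true POD = 0.90, the 29/29 demonstration passes less than 5% of the time,
# so passing it demonstrates 90% POD at 95% confidence.
p_pass_at_090 = pass_probability(0.90)
```

Varying `pod` upward shows how much true detectability the NDE technique needs for an acceptable PPD, which is the trade-off the paper's optimization formalizes.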

  18. Temporal variations of potential fecundity of southern blue whiting (Micromesistius australis australis) in the Southeast Pacific

    NASA Astrophysics Data System (ADS)

    Flores, Andrés; Wiff, Rodrigo; Díaz, Eduardo; Carvajal, Bernardita

    2017-08-01

    Fecundity is a key aspect of the reproductive biology of fish because it relates directly to total egg production. Despite this importance, fecundity estimates are lacking or scarce for several fish species. The gravimetric method is the most widely used estimator of fecundity: the oocyte density in a weighed ovary subsample is scaled up to the whole ovary weight. It is relatively simple and precise, but also time consuming because it requires counting all oocytes in the subsample. The auto-diametric method, by contrast, is a relatively new and rapid alternative, because it requires only an estimate of mean oocyte density obtained from mean oocyte diameter. Using the extensive database available from the commercial fishery and designed surveys for southern blue whiting Micromesistius australis australis in the Southeast Pacific, we compared fecundity estimates from the gravimetric and auto-diametric methods. Temporal variations in potential fecundity from the auto-diametric method were evaluated using generalised linear models with maternal characteristics (female size, condition factor, oocyte size, and gonadosomatic index) as predictors. A global, time-invariant auto-diametric equation was evaluated using a simulation procedure based on a non-parametric bootstrap. Results indicated no significant differences between fecundity estimates from the gravimetric and auto-diametric methods (p > 0.05). Simulation showed that applying a global equation is unbiased and sufficiently precise to estimate time-invariant fecundity for this species. Temporal variations in fecundity were explained by maternal characteristics, revealing signals of fecundity down-regulation. We discuss how oocyte size and nutritional condition (measured as condition factor) are among the important factors determining fecundity. We also highlight the relevance of choosing an appropriate sampling period for maturity studies, to ensure precise estimates of fecundity for this species.
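
    The contrast between the two estimators can be sketched as follows (a toy illustration; the power-law calibration and its constants a and b are hypothetical, not values from this study):

```python
def gravimetric_fecundity(oocytes_counted, subsample_wt_g, ovary_wt_g):
    """Gravimetric method: count all oocytes in a weighed ovary subsample,
    then scale the count up to the whole ovary weight."""
    return oocytes_counted / subsample_wt_g * ovary_wt_g

def autodiametric_fecundity(mean_diam_um, ovary_wt_g, a=5e7, b=-2.0):
    """Auto-diametric method: predict oocyte density (oocytes/g) from the
    mean oocyte diameter via a previously fitted calibration, here a
    power law a * diameter**b with hypothetical constants, then scale
    by ovary weight; no counting is required."""
    oocyte_density = a * mean_diam_um ** b
    return oocyte_density * ovary_wt_g
```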

  19. Validation of Spherically Symmetric Inversion by Use of a Tomographically Reconstructed Three-Dimensional Electron Density of the Solar Corona

    NASA Technical Reports Server (NTRS)

    Wang, Tongjiang; Davila, Joseph M.

    2014-01-01

    Determining the coronal electron density by inversion of white-light polarized brightness (pB) measurements from coronagraphs is a classic problem in solar physics. An inversion technique based on spherically symmetric geometry (spherically symmetric inversion, SSI) was developed in the 1950s and has been widely applied to interpret various observations. To date, however, there has been no study of the uncertainty of this method. Here we present a detailed assessment of the method using as a model a three-dimensional (3D) electron density of the corona from 1.5 to 4 solar radii, reconstructed by tomography from STEREO/COR1 observations during the solar minimum in February 2008 (Carrington Rotation, CR 2066). We first show, in theory and observation, that the spherically symmetric polynomial approximation (SSPA) method and the Van de Hulst inversion technique are equivalent. We then assess the SSPA method using pB images synthesized from the 3D density model, and find that the SSPA densities are close to the model inputs for the streamer core near the plane of the sky (POS), with differences generally smaller than about a factor of two; the SSPA profile has a lower peak but extends further in both longitude and latitude than the model input. We estimate that the SSPA method may resolve coronal density structure near the POS with an angular resolution in longitude of about 50 deg. Our results confirm the suggestion of some previous studies that the SSI method is applicable to the solar-minimum streamer belt. In addition, we demonstrate that the SSPA method can be used to reconstruct the 3D coronal density in rough agreement with the tomographic reconstruction for a period of low solar activity (CR 2066). We suggest that the SSI method is complementary to 3D tomography in some cases, given that the development of the latter is still ongoing.

  20. An Evaluation of the Pea Pod System for Assessing Body Composition of Moderately Premature Infants.

    PubMed

    Forsum, Elisabet; Olhager, Elisabeth; Törnqvist, Caroline

    2016-04-22

    (1) BACKGROUND: Assessing the quality of growth in premature infants is important in order to be able to provide them with optimal nutrition. The Pea Pod device, based on air displacement plethysmography, is able to assess body composition of infants. However, this method has not been sufficiently evaluated in premature infants; (2) METHODS: In 14 infants in an age range of 3-7 days, born after 32-35 completed weeks of gestation, body weight, body volume, fat-free mass density (predicted by the Pea Pod software), and total body water (isotope dilution) were assessed. Reference estimates of fat-free mass density and body composition were obtained using a three-component model; (3) RESULTS: Fat-free mass density values, predicted using Pea Pod, were biased but not significantly (p > 0.05) different from reference estimates. Body fat (%), assessed using Pea Pod, was not significantly different from reference estimates. The biological variability of fat-free mass density was 0.55% of the average value (1.0627 g/mL); (4) CONCLUSION: The results indicate that the Pea Pod system is accurate for groups of newborn, moderately premature infants. However, more studies where this system is used for premature infants are needed, and we provide suggestions regarding how to develop this area.
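
    The density-based arithmetic can be illustrated with a two-component sketch (the study's reference method is a three-component model; the fat density below is a commonly assumed literature value, not a number from this abstract):

```python
def fat_fraction(body_density, d_ffm=1.0627, d_fat=0.9007):
    """Two-component model: solve 1/Db = f/d_fat + (1 - f)/d_ffm for the
    fat mass fraction f. d_ffm = 1.0627 g/mL is the average fat-free-mass
    density reported in the abstract; d_fat = 0.9007 g/mL is an assumed
    fat density."""
    return d_fat * (d_ffm - body_density) / (body_density * (d_ffm - d_fat))
```

    The formula gives f = 0 when body density equals the fat-free-mass density and f = 1 when it equals the fat density, so errors in the predicted fat-free-mass density propagate directly into percent-fat estimates, which is why its bias matters.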

  1. Modelisations et inversions tri-dimensionnelles en prospections gravimetrique et electrique (Three-dimensional modelling and inversion in gravity and electrical prospecting)

    NASA Astrophysics Data System (ADS)

    Boulanger, Olivier

    The aim of this thesis is the application of gravity and resistivity methods to mining prospecting. The objectives of the study are: (1) to build a fast gravity inversion method to interpret surface data; (2) to develop a tool for modelling the electrical potential acquired at the surface and in boreholes when the resistivity distribution is heterogeneous; and (3) to define and implement a stochastic inversion scheme for estimating subsurface resistivity from electrical data. The first contribution is a three-dimensional (3D) inversion program that interprets gravity data using a selection of constraints such as minimum distance, flatness, smoothness, and compactness, integrated in a Lagrangian formulation. A multi-grid technique is also implemented to resolve large and short gravity wavelengths separately. The subsurface in the survey area is divided into juxtaposed rectangular prismatic blocks, and the problem is solved by calculating the model parameters, i.e. the density of each block. Weights are assigned to each block depending on depth, a priori information on density, and the density range allowed for the region under investigation. The code is tested on synthetic data, and the advantages and behaviour of each method are compared in the 3D reconstruction. Recovery of the geometry (depth, size) and density distribution of the original model depends on the set of constraints used; the best combination of constraints found for multiple bodies is flatness together with minimum volume. The inversion method is also tested on real gravity data. The second tool developed in this thesis is a three-dimensional electrical resistivity modelling code for interpreting surface and subsurface data. Based on the integral equation, it calculates the charge density caused by conductivity gradients at each interface of the mesh, allowing an exact estimation of the potential. The modelling generates a huge matrix of Green's functions, which is stored using a pyramidal compression method. The third contribution interprets electrical potential measurements with a non-linear geostatistical approach including new constraints; this method estimates an analytical covariance model for the resistivity parameters from the potential data. (Abstract shortened by UMI.)
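
    The block parameterization of the gravity problem can be sketched with a minimal forward model; for brevity each block is collapsed to a point mass rather than using the rectangular-prism formula:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz(stations, blocks):
    """Vertical gravity (m/s^2) at surface stations (x, y) due to buried
    blocks (x, y, depth, mass), each treated as a point mass."""
    out = []
    for sx, sy in stations:
        g = 0.0
        for bx, by, depth, mass in blocks:
            r2 = (sx - bx) ** 2 + (sy - by) ** 2 + depth ** 2
            g += G * mass * depth / r2 ** 1.5
        out.append(g)
    return out
```

    Inversion then amounts to finding the block densities (masses) whose predicted gz matches the observed data, subject to the chosen constraints.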

  2. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: A postmortem study

    PubMed Central

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee

    2013-01-01

    Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, field inhomogeneity can severely challenge computerized image segmentation. In this work, the effect of the bias field on breast density quantification was investigated in a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify volumetric breast density. First, standard fuzzy c-means (FCM) clustering was applied to raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during its iterative tissue segmentation. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left–right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification, and the breast densities measured with the three methods were compared to gold-standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left–right comparisons, the CLIC method significantly improved the slope and correlation coefficient of the linear fit for glandular volume estimation, and the left–right breast density correlation increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation; Pearson's r increased from 0.86 to 0.92 with bias field correction. Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification from breast MRI by effectively correcting the bias field. A fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications. PMID:24320536
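
    The FCM step shared by both pipelines is the textbook fuzzy c-means iteration, sketched here in one dimension on raw intensities (the CLIC bias-field model is not reproduced):

```python
def fcm(xs, c=2, m=2.0, iters=50):
    """Bare-bones 1-D fuzzy c-means: alternate the standard membership and
    centroid updates. Returns the final cluster centers."""
    centers = [min(xs), max(xs)][:c]  # simple initialisation for c = 2
    for _ in range(iters):
        # u[i][k]: membership of point i in cluster k
        u = []
        for x in xs:
            d = [abs(x - ck) + 1e-12 for ck in centers]
            u.append([1.0 / sum((d[k] / dj) ** (2.0 / (m - 1.0)) for dj in d)
                      for k in range(c)])
        centers = [sum(u[i][k] ** m * xs[i] for i in range(len(xs)))
                   / sum(u[i][k] ** m for i in range(len(xs)))
                   for k in range(c)]
    return centers
```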

  3. A shock-capturing SPH scheme based on adaptive kernel estimation

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime

    2006-02-01

    Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
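
    The kernel summation for the density that the scheme builds on can be sketched with the standard cubic-spline kernel in one dimension (fixed smoothing length h; the paper's contribution is precisely an adaptive choice of this smoothing):

```python
def cubic_spline_w(q, h):
    """Standard 1-D cubic-spline SPH kernel with q = |x_i - x_j| / h;
    normalisation 2/(3h), compact support q < 2."""
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(x_i, particles, h):
    """Usual SPH kernel summation rho_i = sum_j m_j W(|x_i - x_j|/h, h);
    particles is a list of (position, mass) pairs."""
    return sum(m * cubic_spline_w(abs(x_i - x) / h, h) for x, m in particles)
```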

  4. Developing population models with data from marked individuals

    USGS Publications Warehouse

    Ryu, Hae Yeong; Shoemaker, Kevin T.; Kneip, Eva; Pidgeon, Anna; Heglund, Patricia; Bateman, Brooke; Thogmartin, Wayne E.; Akçakaya, Reşit

    2016-01-01

    Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirements for fully specified population models such as demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources – notably, mark–recapture studies – remains complicated due to lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark–recapture dataset. Unlike standard mark–recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark–recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. 
This method is expected to greatly enhance our understanding of the processes underlying population dynamics and our ability to analyze viability and project trends for species of conservation concern.
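
    Once survival and fecundity are estimated, the core of the resulting model is an ordinary matrix projection; a deterministic sketch (the paper's model adds temporal stochasticity and density dependence on top of this, and the matrix below is hypothetical):

```python
def project(A, n, steps):
    """Project stage abundances forward with n_{t+1} = A n_t and return
    the total abundance after each step."""
    totals = []
    for _ in range(steps):
        n = [sum(a_ij * n_j for a_ij, n_j in zip(row, n)) for row in A]
        totals.append(sum(n))
    return totals

# Hypothetical two-stage (juvenile, adult) matrix: 2 recruits per adult,
# juvenile survival 0.5, adult survival 0.5.
A = [[0.0, 2.0],
     [0.5, 0.5]]
print(project(A, [10.0, 10.0], 3))
```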

  5. Fast nonlinear gravity inversion in spherical coordinates with application to the South American Moho

    NASA Astrophysics Data System (ADS)

    Uieda, Leonardo; Barbosa, Valéria C. F.

    2017-01-01

    Estimating the relief of the Moho from gravity data is a computationally intensive nonlinear inverse problem. Moreover, the modelling must take the Earth's curvature into account when the study area is of regional scale or greater. We present a regularized nonlinear gravity inversion method that has a low computational footprint and employs a spherical Earth approximation. To achieve this, we combine the highly efficient Bott's method with smoothness regularization and a discretization of the anomalous Moho into tesseroids (spherical prisms). The computational efficiency of our method is attained by harnessing the fact that all matrices involved are sparse. The inversion results are controlled by three hyperparameters: the regularization parameter, the anomalous Moho density contrast, and the reference Moho depth. We estimate the regularization parameter using hold-out cross-validation, and the density contrast and reference depth using knowledge of the Moho depth at certain points. We apply the proposed method to estimate the Moho depth for the South American continent using satellite gravity data and seismological data. The final Moho model is in accordance with previous gravity-derived models and seismological data. The misfit to the gravity and seismological data is largest in the Andes and smallest in oceanic areas, central Brazil and Patagonia, and along the Atlantic coast. Similarly to previous results, the model suggests a thinner crust of 30-35 km under the Andean foreland basins. Discrepancies with the seismological data are greatest in the Guyana Shield, the central Solimões and Amazonas Basins, the Paraná Basin, and the Borborema province. These differences suggest the existence of crustal or mantle density anomalies that were unaccounted for during gravity data processing.
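
    The core of Bott's method is a cheap fixed-point update: each interface depth is corrected by the local gravity residual divided by the Bouguer-slab factor. A sketch of one iteration (sign convention and convergence control omitted, and the paper's regularization not included):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bott_update(depths_m, residuals_ms2, delta_rho):
    """One Bott iteration: depth correction = residual / (2*pi*G*delta_rho),
    the thickness of an infinite slab of density contrast delta_rho (kg/m^3)
    that would produce the residual gravity."""
    k = 2.0 * math.pi * G * delta_rho
    return [h + r / k for h, r in zip(depths_m, residuals_ms2)]
```

    Because no linear system is solved, each iteration is trivially cheap, which is the efficiency the paper combines with smoothness regularization.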

  6. Electronic polarizability of light crude oil from optical and dielectric studies

    NASA Astrophysics Data System (ADS)

    George, A. K.; Singh, R. N.

    2017-07-01

    In the present paper we report the temperature dependence of the density, refractive index, and dielectric constant of three samples of crude oil. The API gravity estimated from the temperature-dependent density measurements shows that all three samples fall in the category of light oil. The measured refractive index and density are used to evaluate the polarizability of these fluids, and the molar refraction and molar volume are evaluated through the Lorentz-Lorenz equation. The refractive-index function FRI divided by the mass density ρ is approximately equal to one-third and invariant with temperature for all samples. The measured dielectric constant decreases linearly with increasing temperature for all samples, and the dielectric constant estimated from the refractive-index measurements using the Lorentz-Lorenz equation agrees well with the measured values. The results are promising because the three measured properties complement each other and offer a simple and reliable method for estimating crude oil properties in the absence of sufficient data.
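
    The quantities involved follow directly from the Lorentz-Lorenz relation; a brief sketch (the illustrative n and density values are typical of light oils, not measurements from this paper):

```python
def fri(n):
    """Refractive-index function F_RI = (n^2 - 1) / (n^2 + 2)."""
    return (n * n - 1.0) / (n * n + 2.0)

def specific_refraction(n, rho):
    """F_RI / rho, reported in the paper to be roughly one-third and
    temperature-invariant for these oils."""
    return fri(n) / rho

def dielectric_from_n(n):
    """At optical frequencies for a non-polar fluid, eps ~ n^2, which is
    the basis for estimating eps from refractive-index data."""
    return n * n

print(specific_refraction(1.50, 0.85))  # close to 1/3 for a light oil
```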

  7. Face Value: Towards Robust Estimates of Snow Leopard Densities.

    PubMed

    Alexander, Justine S; Gopalaswamy, Arjun M; Shi, Kun; Riordan, Philip

    2015-01-01

    When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge because they occur at low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the spatial capture-recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge of achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally optimising effective camera capture and photographic data quality.
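
    The reported rates are simple ratios, and comparing them with a naive density shows what the SECR model adds (it accounts for detection probability and an effective sampled area larger than the camera grid, which the naive figure ignores):

```python
captures, trap_days = 76, 2906
capture_success = 100.0 * captures / trap_days  # captures per 100 trap-days
print(round(capture_success, 2))  # 2.62, as reported

individuals, area_km2 = 20, 480
naive_density = 100.0 * individuals / area_km2  # individuals per 100 km^2,
# ignoring detection; compare with the SECR estimate of 3.31 (SE = 1.01)
print(round(naive_density, 2))
```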

  8. Face Value: Towards Robust Estimates of Snow Leopard Densities

    PubMed Central

    2015-01-01

    When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge because they occur at low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the spatial capture-recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge of achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally optimising effective camera capture and photographic data quality. PMID:26322682

  9. Estimating abundance and density of Amur tigers along the Sino-Russian border.

    PubMed

    Xiao, Wenhong; Feng, Limin; Mou, Pu; Miquelle, Dale G; Hebblewhite, Mark; Goldberg, Joshua F; Robinson, Hugh S; Zhao, Xiaodan; Zhou, Bo; Wang, Tianming; Ge, Jianping

    2016-07-01

    As an apex predator, the Amur tiger (Panthera tigris altaica) could play a pivotal role in maintaining the integrity of forest ecosystems in Northeast Asia. Due to habitat loss and harvest over the past century, tigers declined rapidly in China and are now restricted to the Russian Far East and bordering habitat in nearby China. To facilitate restoration of the tiger in its historical range, reliable estimates of population size are essential for assessing the effectiveness of conservation interventions. Here we used camera-trap data collected in Hunchun National Nature Reserve from April to June 2013 and 2014 to estimate tiger density and abundance using both maximum likelihood and Bayesian spatially explicit capture-recapture (SECR) methods. A minimum of 8 individuals were detected in both sample periods, and the documentation of marking behavior and reproduction suggests the presence of a resident population. Using Bayesian SECR modeling within the 11,400 km(2) state space, density estimates were 0.33 and 0.40 individuals/100 km(2) in 2013 and 2014, respectively, corresponding to an estimated abundance of 38 and 45 animals for this transboundary Sino-Russian population. In a maximum likelihood framework, we estimated densities of 0.30 and 0.24 individuals/100 km(2), corresponding to abundances of 34 and 27, in 2013 and 2014, respectively. These density estimates are comparable to other published estimates for resident Amur tiger populations in the Russian Far East. This study reveals promising signs of tiger recovery in Northeast China, and demonstrates the importance of connectivity between the Russian and Chinese populations for recovering tigers in Northeast China. © 2016 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
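
    Converting an SECR density to abundance is a direct scaling over the state space; the reported figures can be checked (published densities are rounded, so the products reproduce the abundances only approximately):

```python
def abundance(density_per_100km2, state_space_km2):
    """N = D * A, with the density expressed per 100 km^2."""
    return density_per_100km2 * state_space_km2 / 100.0

print(abundance(0.33, 11400))  # ~38 reported for 2013
print(abundance(0.40, 11400))  # ~45 reported for 2014
```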

  10. Voxel-wise prostate cell density prediction using multiparametric magnetic resonance imaging and machine learning.

    PubMed

    Sun, Yu; Reynolds, Hayley M; Wraith, Darren; Williams, Scott; Finnegan, Mary E; Mitchell, Catherine; Murphy, Declan; Haworth, Annette

    2018-04-26

    There are currently no methods to estimate cell density in the prostate. This study aimed to develop predictive models to estimate prostate cell density from multiparametric magnetic resonance imaging (mpMRI) data at the voxel level using machine learning techniques. In vivo mpMRI data were collected from 30 patients before radical prostatectomy. Sequences included T2-weighted imaging, diffusion-weighted imaging, and dynamic contrast-enhanced imaging. Ground-truth cell density maps were computed from histology and co-registered with mpMRI. Feature extraction and selection were performed on the mpMRI data. Final models were fitted using three regression algorithms: multivariate adaptive regression splines (MARS), polynomial regression (PR), and a generalised additive model (GAM). Model parameters were optimised using leave-one-out cross-validation on the training data, and model performance was evaluated on test data using root mean square error (RMSE). Predictive models to estimate voxel-wise prostate cell density were successfully trained and tested using the three algorithms. The best model (GAM) achieved an RMSE of 1.06 (± 0.06) × 10³ cells/mm² and a relative deviation of 13.3 ± 0.8%. Prostate cell density can thus be estimated non-invasively from mpMRI data using high-quality co-registered data at the voxel level. These cell density predictions could be used for tissue classification, treatment response evaluation, and personalised radiotherapy.
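
    The evaluation loop (leave-one-out fitting plus RMSE on the held-out data) is generic; a minimal sketch with a trivial mean predictor standing in for the MARS/PR/GAM regressors:

```python
def rmse(pred, truth):
    """Root mean square error between paired predictions and truths."""
    return (sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred)) ** 0.5

def loocv_rmse(xs, ys, fit, predict):
    """Fit on all-but-one sample, predict the held-out one, score overall."""
    preds = []
    for i in range(len(xs)):
        model = fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        preds.append(predict(model, xs[i]))
    return rmse(preds, ys)

# Trivial stand-in model: always predict the training mean.
fit_mean = lambda xs, ys: sum(ys) / len(ys)
predict_mean = lambda model, x: model
```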

  11. Using the Opposition Effect in Remotely Sensed Data to Assist in the Retrieval of Bulk Density

    NASA Astrophysics Data System (ADS)

    Ambeau, Brittany L.

    Bulk density is an important geophysical property that affects the mobility of military vehicles and personnel. Accurate retrieval of bulk density from remotely sensed data is therefore needed to estimate mobility on off-road terrain. For a particulate surface, the functional form of the opposition effect can provide valuable information about composition and structure. In this research, we examine the relationship between bulk density and the angular width of the opposition effect for a controlled set of laboratory experiments. Given a sample with a known bulk density, we collect reflectance measurements on a spherical grid for various illumination and view geometries, increasing the number of reflectance measurements collected at small phase angles near the opposition direction. Bulk densities are varied using a custom-made pluviation device, samples are measured using the Goniometer of the Rochester Institute of Technology-Two (GRIT-T), and observations are fit to the Hapke model using a grid-search method. The selected method allows the direct estimation of five parameters: the single-scattering albedo, the amplitude of the opposition effect, the angular width of the opposition effect, and the two parameters that describe the single-particle phase function. As a test of the Hapke model, the retrieved bulk densities are compared to the known bulk densities. Results show that, with an increase in the availability of multi-angular reflectance measurements, the prospects for retrieving the spatial distribution of bulk density from satellite and airborne sensors are promising.

  12. A comparison of approaches for estimating bottom-sediment mass in large reservoirs

    USGS Publications Warehouse

    Juracek, Kyle E.

    2006-01-01

    Estimates of sediment and sediment-associated constituent loads and yields from drainage basins are necessary for the management of reservoir-basin systems to address important issues such as reservoir sedimentation and eutrophication. One method for estimating loads and yields requires a determination of the total mass of sediment deposited in a reservoir, which involves a sediment volume-to-mass conversion using bulk-density information. A comparison of four computational approaches (partition, mean, midpoint, strategic) for using bulk-density information to estimate total bottom-sediment mass in four large reservoirs indicated that the differences among the approaches were not statistically significant. However, the lack of statistical significance may be a result of the small sample size. Compared to the partition approach, which was presumed to provide the most accurate estimates of bottom-sediment mass, the results achieved using the strategic, mean, and midpoint approaches differed by as much as ±4, ±20, and ±44 percent, respectively. It was concluded that the strategic approach may merit further investigation as a less time-consuming and less costly alternative to the partition approach.
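
    Every approach reduces to a volume-to-mass conversion; two of them are sketched here (the abstract names the four approaches but not their formulas, so the definitions below are our reading of "partition" and "mean"):

```python
def mass_mean_approach(total_volume_m3, bulk_densities_kg_m3):
    """Apply the mean of all bulk-density samples to the whole volume."""
    mean_density = sum(bulk_densities_kg_m3) / len(bulk_densities_kg_m3)
    return total_volume_m3 * mean_density

def mass_partition_approach(volumes_m3, bulk_densities_kg_m3):
    """Partition the deposit and apply each density to its own sub-volume."""
    return sum(v * d for v, d in zip(volumes_m3, bulk_densities_kg_m3))
```

    When bulk density correlates with sub-volume size the two answers diverge, which is the kind of difference the comparison quantifies.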

  13. New method for estimating low-earth-orbit collision probabilities

    NASA Technical Reports Server (NTRS)

    Vedder, John D.; Tabor, Jill L.

    1991-01-01

    An unconventional but general method is described for estimating the probability of collision between an earth-orbiting spacecraft and orbital debris. The method uses a Monte Carlo simulation of the orbital motion of the target spacecraft and each discrete debris object to generate an empirical set of distances, each representing the separation between the spacecraft and the nearest debris object at a random time. Using concepts from the asymptotic theory of extreme order statistics, an analytical density function is fitted to this set of minimum distances; from this function, realistic collision estimates can be generated for the spacecraft.
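
    The first stage (sampling nearest-debris distances at random times) can be mimicked with a toy uniform-position model; the paper then fits an asymptotic extreme-value density to such minima rather than using the raw empirical fraction computed here:

```python
import random

def min_distance_samples(n_times, n_debris, box=1000.0, seed=1):
    """At each sampled time, place the spacecraft and every debris object
    uniformly in a cube (a stand-in for propagated orbital positions) and
    record the distance to the nearest debris object."""
    rng = random.Random(seed)
    mins = []
    for _ in range(n_times):
        s = [rng.uniform(0.0, box) for _ in range(3)]
        d2 = min(sum((rng.uniform(0.0, box) - c) ** 2 for c in s)
                 for _ in range(n_debris))
        mins.append(d2 ** 0.5)
    return mins

samples = min_distance_samples(2000, 50)
keep_out = 50.0
p_close = sum(d < keep_out for d in samples) / len(samples)  # crude proxy
```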

  14. Dynamics of a low-density tiger population in Southeast Asia in the context of improved law enforcement.

    PubMed

    Duangchantrasiri, Somphot; Umponjan, Mayuree; Simcharoen, Saksit; Pattanavibool, Anak; Chaiwattana, Soontorn; Maneerat, Sompoch; Kumar, N Samba; Jathanna, Devcharan; Srivathsa, Arjun; Karanth, K Ullas

    2016-06-01

    Recovering small populations of threatened species is an important global conservation strategy. Monitoring the anticipated recovery, however, often relies on uncertain abundance indices rather than on rigorous demographic estimates. To counter the severe threat from poaching of wild tigers (Panthera tigris), the Government of Thailand established an intensive patrolling system in 2005 to protect and recover its largest source population in Huai Kha Khaeng Wildlife Sanctuary. Concurrently, we assessed the dynamics of this tiger population over the next 8 years with rigorous photographic capture-recapture methods. From 2006 to 2012, we sampled across 624-1026 km(2) with 137-200 camera traps. Cameras deployed for 21,359 trap-days yielded photographic records of 90 distinct individuals. We used closed-model Bayesian spatial capture-recapture methods to estimate tiger abundance annually. Abundance estimates were integrated with likelihood-based open-model analyses to estimate annual and overall rates of survival, recruitment, and changes in abundance. Estimates of demographic parameters fluctuated widely: annual density ranged from 1.25 to 2.01 tigers/100 km(2), abundance from 35 to 58 tigers, survival from 79.6% to 95.5%, and annual recruitment from 0 to 25 tigers. The number of distinct individuals photographed demonstrates the value of photographic capture-recapture methods for assessing population dynamics in rare and elusive species that are identifiable from natural markings. Possibly because of poaching pressure, overall tiger densities at Huai Kha Khaeng were 82-90% lower than in ecologically comparable sites in India. However, intensified patrolling after 2006 appeared to reduce poaching and was correlated with marginal improvement in tiger survival and recruitment.
Our results suggest that population recovery of low-density tiger populations may be slower than anticipated by current global strategies aimed at doubling the number of wild tigers in a decade. © 2015 Society for Conservation Biology.
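The abundance side of a capture-recapture analysis can be illustrated with the classic two-sample case. The sketch below uses Chapman's bias-corrected Lincoln-Petersen estimator, a much simpler relative of the spatial Bayesian models used in the record above; the capture counts and survey area are invented for illustration.

```python
# Hypothetical two-sample closed-population abundance estimate using
# Chapman's bias-corrected Lincoln-Petersen estimator. Counts and area
# are illustrative only, not from the Huai Kha Khaeng study.
def chapman_abundance(n1, n2, m2):
    """n1: animals caught in session 1; n2: caught in session 2;
    m2: marked animals recaptured in session 2."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

def density_per_100km2(abundance, area_km2):
    """Convert an abundance estimate to animals per 100 km^2."""
    return 100.0 * abundance / area_km2

N = chapman_abundance(n1=29, n2=19, m2=9)   # -> 59.0
D = density_per_100km2(N, area_km2=1000.0)  # -> 5.9 animals / 100 km^2
print(N, D)
```

Spatial capture-recapture models replace the single closed-population count with a spatial detection process, but the abundance-to-density conversion above is the same final step.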

  15. A simple model to predict the biodiesel blend density as simultaneous function of blend percent and temperature.

    PubMed

    Gaonkar, Narayan; Vaidya, R G

    2016-05-01

A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and theoretically investigated the density of different vegetable-oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values for the components of the blend at any two different temperatures. We observe that the blend density decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard error of estimate (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15%) obtained with the proposed model indicate its predictive capability. The predicted values are in good agreement with recently available experimental data.
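The two ingredients of this model, a linear temperature dependence fixed by two reference measurements per component and a volume-fraction mixing rule in the spirit of Kay's rule, can be sketched as follows. The reference densities below are assumed values, not the paper's data.

```python
# Sketch of a blend-density model: each component density is taken as linear
# in temperature (anchored by two reference measurements), and the blend
# density follows a volume-fraction mixing rule (Kay's rule). Reference
# densities (kg/m^3) are illustrative assumptions.
def linear_density(T, T1, rho1, T2, rho2):
    """Interpolate/extrapolate density linearly from two (T, rho) points."""
    return rho1 + (rho2 - rho1) * (T - T1) / (T2 - T1)

def blend_density(T, v_bio, bio_refs, diesel_refs):
    """v_bio: biodiesel volume fraction (0-1); refs: (T1, rho1, T2, rho2)."""
    rho_b = linear_density(T, *bio_refs)
    rho_d = linear_density(T, *diesel_refs)
    return v_bio * rho_b + (1.0 - v_bio) * rho_d

# A B20 blend at 40 degC from assumed reference data:
rho = blend_density(40.0, 0.20,
                    (15.0, 880.0, 60.0, 847.0),   # biodiesel references
                    (15.0, 840.0, 60.0, 807.0))   # diesel references
print(rho)   # -> ~829.7 kg/m^3
```

Only four density numbers per component pair are needed, which is the "single set of density values at any two temperatures" advantage the abstract emphasizes.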

  16. High Precision 2-D Grating Groove Density Measurement

    NASA Astrophysics Data System (ADS)

    Zhang, Ningxiao; McEntaffer, Randall; Tedesco, Ross

    2017-08-01

Our research group at Penn State University is working on producing X-ray reflection gratings with high spectral resolving power and high diffraction efficiency. To assess our fabrication accuracy, we apply a precise 2-D grating groove density measurement to map groove density distributions of gratings on 6-inch wafers. In addition to mapping a fixed groove density distribution, the method simultaneously measures variations in groove density. The system can reach a measurement accuracy (ΔN/N) of 10⁻³. Here we present this groove density measurement and some applications.

  17. A geostatistical state-space model of animal densities for stream networks.

    PubMed

    Hocking, Daniel J; Thorson, James T; O'Neil, Kyle; Letcher, Benjamin H

    2018-06-21

Population dynamics are often correlated in space and time due to correlations in environmental drivers as well as synchrony induced by individual dispersal. Many statistical analyses of populations ignore potential autocorrelations and assume that survey methods (distance and time between samples) eliminate these correlations, allowing samples to be treated independently. If these assumptions are incorrect, results and therefore inference may be biased and uncertainty under-estimated. We developed a novel statistical method to account for spatio-temporal correlations within dendritic stream networks, while accounting for imperfect detection in the surveys. Through simulations, we found that this model decreased predictive error relative to standard statistical methods when data were spatially correlated based on stream distance and performed similarly when data were not correlated. We found that increasing the number of years surveyed substantially improved model accuracy when estimating spatial and temporal correlation coefficients, especially from 10 to 15 years. Increasing the number of survey sites within the network improved the performance of the non-spatial model but only marginally improved the density estimates in the spatio-temporal model. We applied this model to Brook Trout data from the West Susquehanna Watershed in Pennsylvania collected over 34 years, from 1981 to 2014. We found that the model including temporal and spatio-temporal autocorrelation best described young-of-the-year (YOY) and adult density patterns. YOY densities were positively related to forest cover and negatively related to spring temperatures with low temporal autocorrelation and moderately high spatio-temporal correlation. Adult densities were less strongly affected by climatic conditions and less temporally variable than YOY but with similar spatio-temporal correlation and higher temporal autocorrelation. This article is protected by copyright. All rights reserved.

  18. Determinants of the reliability of ultrasound tomography sound speed estimates as a surrogate for volumetric breast density

    PubMed Central

    Khodr, Zeina G.; Sak, Mark A.; Pfeiffer, Ruth M.; Duric, Nebojsa; Littrup, Peter; Bey-Knight, Lisa; Ali, Haythem; Vallieres, Patricia; Sherman, Mark E.; Gierach, Gretchen L.

    2015-01-01

    Purpose: High breast density, as measured by mammography, is associated with increased breast cancer risk, but standard methods of assessment have limitations including 2D representation of breast tissue, distortion due to breast compression, and use of ionizing radiation. Ultrasound tomography (UST) is a novel imaging method that averts these limitations and uses sound speed measures rather than x-ray imaging to estimate breast density. The authors evaluated the reproducibility of measures of speed of sound and changes in this parameter using UST. Methods: One experienced and five newly trained raters measured sound speed in serial UST scans for 22 women (two scans per person) to assess inter-rater reliability. Intrarater reliability was assessed for four raters. A random effects model was used to calculate the percent variation in sound speed and change in sound speed attributable to subject, scan, rater, and repeat reads. The authors estimated the intraclass correlation coefficients (ICCs) for these measures based on data from the authors’ experienced rater. Results: Median (range) time between baseline and follow-up UST scans was five (1–13) months. Contributions of factors to sound speed variance were differences between subjects (86.0%), baseline versus follow-up scans (7.5%), inter-rater evaluations (1.1%), and intrarater reproducibility (∼0%). When evaluating change in sound speed between scans, 2.7% and ∼0% of variation were attributed to inter- and intrarater variation, respectively. For the experienced rater’s repeat reads, agreement for sound speed was excellent (ICC = 93.4%) and for change in sound speed substantial (ICC = 70.4%), indicating very good reproducibility of these measures. Conclusions: UST provided highly reproducible sound speed measurements, which reflect breast density, suggesting that UST has utility in sensitively assessing change in density. PMID:26429241
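The reliability metric at the heart of this record, the intraclass correlation coefficient, can be computed from repeated reads with a one-way random-effects ANOVA. The sketch below is a minimal ICC(1) for balanced data; the toy sound-speed reads are invented, not the study's measurements.

```python
# Minimal one-way random-effects ICC(1) for rater-reliability data:
# ICC = (MS_between - MS_within) / (MS_between + (k-1) * MS_within).
# The toy repeated reads below are illustrative, not the study's data.
def icc_oneway(groups):
    """groups: one list of k repeated reads per subject (balanced design)."""
    k = len(groups[0])                      # reads per subject
    n = len(groups)                         # subjects
    grand = sum(sum(g) for g in groups) / (n * k)
    ms_between = k * sum((sum(g) / k - grand) ** 2 for g in groups) / (n - 1)
    ms_within = sum((x - sum(g) / k) ** 2
                    for g in groups for x in g) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Three subjects, two reads each (m/s); within-subject spread is tiny,
# so the ICC is close to 1, i.e. excellent reproducibility:
reads = [[1500.1, 1500.3], [1510.2, 1510.0], [1495.0, 1495.4]]
print(icc_oneway(reads))
```

High ICC values like the study's 93.4% for sound speed arise exactly this way: between-subject variance dominates the repeat-read variance.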

  19. Determinants of the reliability of ultrasound tomography sound speed estimates as a surrogate for volumetric breast density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khodr, Zeina G.; Pfeiffer, Ruth M.; Gierach, Gretchen L., E-mail: GierachG@mail.nih.gov

Purpose: High breast density, as measured by mammography, is associated with increased breast cancer risk, but standard methods of assessment have limitations including 2D representation of breast tissue, distortion due to breast compression, and use of ionizing radiation. Ultrasound tomography (UST) is a novel imaging method that averts these limitations and uses sound speed measures rather than x-ray imaging to estimate breast density. The authors evaluated the reproducibility of measures of speed of sound and changes in this parameter using UST. Methods: One experienced and five newly trained raters measured sound speed in serial UST scans for 22 women (two scans per person) to assess inter-rater reliability. Intrarater reliability was assessed for four raters. A random effects model was used to calculate the percent variation in sound speed and change in sound speed attributable to subject, scan, rater, and repeat reads. The authors estimated the intraclass correlation coefficients (ICCs) for these measures based on data from the authors’ experienced rater. Results: Median (range) time between baseline and follow-up UST scans was five (1–13) months. Contributions of factors to sound speed variance were differences between subjects (86.0%), baseline versus follow-up scans (7.5%), inter-rater evaluations (1.1%), and intrarater reproducibility (∼0%). When evaluating change in sound speed between scans, 2.7% and ∼0% of variation were attributed to inter- and intrarater variation, respectively. For the experienced rater’s repeat reads, agreement for sound speed was excellent (ICC = 93.4%) and for change in sound speed substantial (ICC = 70.4%), indicating very good reproducibility of these measures. Conclusions: UST provided highly reproducible sound speed measurements, which reflect breast density, suggesting that UST has utility in sensitively assessing change in density.

  20. Biological dose estimation for charged-particle therapy using an improved PHITS code coupled with a microdosimetric kinetic model.

    PubMed

    Sato, Tatsuhiko; Kase, Yuki; Watanabe, Ritsuko; Niita, Koji; Sihver, Lembit

    2009-01-01

Microdosimetric quantities such as lineal energy, y, are better indices than LET for expressing the RBE of HZE particles. However, the use of microdosimetric quantities in computational dosimetry is severely limited because of the difficulty of calculating their probability densities in macroscopic matter. We therefore improved the particle transport simulation code PHITS, providing it with the capability of estimating microdosimetric probability densities in a macroscopic framework by incorporating a mathematical function that can instantaneously calculate the probability densities around the trajectory of HZE particles with a precision equivalent to that of a microscopic track-structure simulation. A new method for estimating biological dose, the product of physical dose and RBE, from charged-particle therapy was established using the improved PHITS coupled with a microdosimetric kinetic model. The accuracy of the biological dose estimated by this method was tested by comparing the calculated physical doses and RBE values with the corresponding data measured in a slab phantom irradiated with several kinds of HZE particles. The simulation technique established in this study will help to optimize the treatment planning of charged-particle therapy, thereby maximizing the therapeutic effect on tumors while minimizing unintended harmful effects on surrounding normal tissues.

  1. Wind power error estimation in resource assessments.

    PubMed

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment payback time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
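Propagating a speed error through a power curve can be done numerically by evaluating the curve at the perturbed speeds. The sketch below uses a generic cubic-with-rated-cap curve, not one of the paper's 28 Lagrange-fitted curves, so the numbers are only indicative.

```python
# Numerical propagation of a wind-speed measurement error through a power
# curve. The curve is a generic cubic-region/rated-cap sketch with assumed
# parameters, not one of the paper's fitted turbine curves.
def power_kw(v, rated_kw=2000.0, v_rated=12.0, v_cut_in=3.0, v_cut_out=25.0):
    """Idealized turbine power curve: cubic below rated, capped above."""
    if v < v_cut_in or v > v_cut_out:
        return 0.0
    return min(rated_kw, rated_kw * (v / v_rated) ** 3)

def power_error(v, rel_speed_err):
    """Half-width of the power interval implied by a +/- relative speed error."""
    lo = power_kw(v * (1.0 - rel_speed_err))
    hi = power_kw(v * (1.0 + rel_speed_err))
    return 0.5 * (hi - lo)

# In the cubic region a 10% speed error maps to roughly 30% power error;
# at and above rated speed the curve flattens and the propagated error vanishes.
print(power_error(8.0, 0.10), power_error(14.0, 0.10))
```

The paper's much smaller 5% aggregate figure reflects averaging over the measured speed distribution and real (flatter) power curves, which is why curve shape matters for the final error budget.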

  2. Wind Power Error Estimation in Resource Assessments

    PubMed Central

    Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel

    2015-01-01

Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment payback time. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444

  3. EEG source localization: Sensor density and head surface coverage.

    PubMed

    Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don

    2015-12-30

The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using commonly adopted head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  4. Thermodynamically constrained correction to ab initio equations of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    French, Martin; Mattsson, Thomas R.

    2014-07-07

We show how equations of state generated by density functional theory methods can be augmented to match experimental data without distorting the correct behavior in the high- and low-density limits. The technique is thermodynamically consistent and relies on knowledge of the density and bulk modulus at a reference state and an estimation of the critical density of the liquid phase. We apply the method to four materials representing different classes of solids: carbon, molybdenum, lithium, and lithium fluoride. It is demonstrated that the corrected equations of state for both the liquid and solid phases show a significantly reduced dependence on the exchange-correlation functional used.

  5. Vertical Scale Height of the Topside Ionosphere Around the Korean Peninsula: Estimates from Ionosondes and the Swarm Constellation

    NASA Astrophysics Data System (ADS)

    Park, Jaeheung; Kwak, Young-Sil; Mun, Jun-Chul; Min, Kyoung-Wook

    2015-12-01

In this study, we estimated the topside scale height of plasma density (Hm) using the Swarm constellation and ionosondes in Korea. Hm above the Korean Peninsula is generally around 50 km. Statistical distributions of the topside scale height exhibit a complex dependence on local time and season. The results are in general agreement with those of Tulasi Ram et al. (2009), who used the same method to calculate the topside scale height in a mid-latitude region. In contrast, our results do not fully coincide with those obtained by Liu et al. (2007), who used electron density profiles from the Arecibo Incoherent Scatter Radar (ISR) between 1966 and 2002. The disagreement may result from limitations in our approximation method and in the data coverage used for the estimates, as well as from the inherent dependence of Hm on Geographic LONgitude (GLON).
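One simple way to get a topside scale height from a ground peak measurement and a single in-situ sample, assuming an exponential topside profile n(h) = NmF2·exp(-(h - hmF2)/H), is to solve for H directly. This is a hedged simplification of the record's approach; the densities and altitudes below are illustrative.

```python
# Hedged sketch: topside scale height from an assumed exponential profile
# n(h) = NmF2 * exp(-(h - hmF2)/H), using an ionosonde F2 peak and one
# in-situ density at satellite altitude. All values are illustrative.
import math

def scale_height_km(nm, hm_km, n_sat, h_sat_km):
    """nm: peak density; hm_km: peak height; n_sat: density at h_sat_km."""
    return (h_sat_km - hm_km) / math.log(nm / n_sat)

H = scale_height_km(nm=1.0e6, hm_km=300.0, n_sat=2.0e5, h_sat_km=460.0)
print(H)   # -> ~99 km
```

Real topside profiles are not strictly exponential, which is one of the "approximation method" limitations the abstract acknowledges.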

  6. Multiparametric evaluation of hindlimb ischemia using time-series indocyanine green fluorescence imaging.

    PubMed

    Guang, Huizhi; Cai, Chuangjian; Zuo, Simin; Cai, Wenjuan; Zhang, Jiulou; Luo, Jianwen

    2017-03-01

Peripheral arterial disease (PAD) can cause lower-limb ischemia. Quantitative evaluation of vascular perfusion in the ischemic limb contributes to the diagnosis of PAD and to the preclinical development of new drugs. In vivo time-series indocyanine green (ICG) fluorescence imaging can noninvasively monitor blood flow and has deep tissue penetration. The perfusion rate estimated from time-series ICG images is not sufficient on its own for the evaluation of hindlimb ischemia; information on vascular density is also important, because angiogenesis is an essential mechanism of post-ischemic recovery. In this paper, a multiparametric evaluation method is proposed for the simultaneous estimation of multiple vascular perfusion parameters, including not only the perfusion rate but also the vascular perfusion density and the time-varying ICG concentration in veins. The proposed method is based on a mathematical model of ICG pharmacokinetics in the mouse hindlimb. Regression analysis was performed on time-series ICG images obtained from a dynamic reflectance fluorescence imaging system. The results demonstrate that the estimated parameters are effective for quantitatively evaluating vascular perfusion and distinguishing hypo-perfused from well-perfused tissues in the mouse hindlimb. The proposed multiparametric evaluation method could be useful for PAD diagnosis. [Graphical abstract: estimated perfusion rate and vascular perfusion density maps (left) and time-varying ICG concentration in veins of the ankle region (right) for normal and ischemic hindlimbs.] © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
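The regression step can be illustrated with the simplest possible pharmacokinetic fit: assume a mono-exponential washout I(t) = A·exp(-k·t) and recover the rate constant k by linear regression on log(I). This is a deliberate simplification of the paper's multi-parameter model; the synthetic signal below is invented.

```python
# Sketch of extracting a perfusion-related rate constant from a time-series
# ICG signal: assume mono-exponential washout I(t) = A*exp(-k*t) and fit k
# by linear regression on log(I). This simplifies the paper's full
# multi-parameter pharmacokinetic model; the data are synthetic.
import math

def washout_rate(times, intensities):
    """Least-squares slope of log(intensity) vs time, negated to give k."""
    logs = [math.log(i) for i in intensities]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(logs) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, logs))
             / sum((t - tbar) ** 2 for t in times))
    return -slope   # decay rate k (per unit time)

t = [0.0, 10.0, 20.0, 30.0]
signal = [math.exp(-0.05 * ti) for ti in t]   # noiseless synthetic data, k = 0.05
print(washout_rate(t, signal))               # -> 0.05
```

Per-pixel fits of this kind produce the parameter maps (perfusion rate, density) that the study uses to separate hypo- from well-perfused tissue.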

  7. KERNELHR: A program for estimating animal home ranges

    USGS Publications Warehouse

    Seaman, D.E.; Griffith, B.; Powell, R.A.

    1998-01-01

    Kernel methods are state of the art for estimating animal home-range area and utilization distribution (UD). The KERNELHR program was developed to provide researchers and managers a tool to implement this extremely flexible set of methods with many variants. KERNELHR runs interactively or from the command line on any personal computer (PC) running DOS. KERNELHR provides output of fixed and adaptive kernel home-range estimates, as well as density values in a format suitable for in-depth statistical and spatial analyses. An additional package of programs creates contour files for plotting in geographic information systems (GIS) and estimates core areas of ranges.
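The fixed-kernel estimate at the core of such tools can be sketched with a plain 2-D Gaussian kernel density estimate on a grid, from which a 95% home-range area is the smallest set of cells holding 95% of the probability mass. The locations, bandwidth, and grid below are invented; KERNELHR itself also offers adaptive kernels and automated bandwidth selection not shown here.

```python
# Minimal fixed-kernel utilization-distribution sketch: 2-D Gaussian KDE
# evaluated on a grid, plus a 95% isopleth area. Animal locations,
# bandwidth h, and grid are illustrative assumptions.
import math

def kde_grid(points, h, xs, ys):
    """Evaluate a 2-D Gaussian KDE with bandwidth h at every grid cell."""
    norm = 1.0 / (2.0 * math.pi * h * h * len(points))
    return [[norm * sum(math.exp(-((x - px) ** 2 + (y - py) ** 2)
                                 / (2.0 * h * h))
                        for px, py in points)
             for x in xs] for y in ys]

def area_95(grid, cell_area):
    """Area of the smallest set of cells containing 95% of the mass."""
    vals = sorted((v for row in grid for v in row), reverse=True)
    total = sum(vals) * cell_area          # ~1 if the grid covers the data
    mass, ncells = 0.0, 0
    for v in vals:
        mass += v * cell_area
        ncells += 1
        if mass >= 0.95 * total:
            break
    return ncells * cell_area

pts = [(0.0, 0.0), (1.0, 0.5), (0.5, 1.0), (1.5, 1.5)]
xs = [i * 0.1 - 3.0 for i in range(90)]
ys = [i * 0.1 - 3.0 for i in range(90)]
ud = kde_grid(pts, h=0.5, xs=xs, ys=ys)
print(area_95(ud, cell_area=0.01))
```

The grid of density values is exactly the kind of output KERNELHR exports for contouring in a GIS.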

  8. Dynamic dual-tracer MRI-guided fluorescence tomography to quantify receptor density in vivo

    PubMed Central

    Davis, Scott C.; Samkoe, Kimberley S.; Tichauer, Kenneth M.; Sexton, Kristian J.; Gunn, Jason R.; Deharvengt, Sophie J.; Hasan, Tayyaba; Pogue, Brian W.

    2013-01-01

    The up-regulation of cell surface receptors has become a central focus in personalized cancer treatment; however, because of the complex nature of contrast agent pharmacokinetics in tumor tissue, methods to quantify receptor binding in vivo remain elusive. Here, we present a dual-tracer optical technique for noninvasive estimation of specific receptor binding in cancer. A multispectral MRI-coupled fluorescence molecular tomography system was used to image the uptake kinetics of two fluorescent tracers injected simultaneously, one tracer targeted to the receptor of interest and the other tracer a nontargeted reference. These dynamic tracer data were then fit to a dual-tracer compartmental model to estimate the density of receptors available for binding in the tissue. Applying this approach to mice with deep-seated gliomas that overexpress the EGF receptor produced an estimate of available receptor density of 2.3 ± 0.5 nM (n = 5), consistent with values estimated in comparative invasive imaging and ex vivo studies. PMID:23671066

  9. Accurate bulk density determination of irregularly shaped translucent and opaque aerogels

    NASA Astrophysics Data System (ADS)

    Petkov, M. P.; Jones, S. M.

    2016-05-01

We present a volumetric method for accurate determination of the bulk density of aerogels, calculated from the extrapolated weight of the dry pure solid and volume estimates based on Archimedes' principle of volume displacement, using packed 100 μm monodispersed glass spheres as a "quasi-fluid" medium. Hard-particle packing theory is invoked to demonstrate the reproducibility of the apparent density of the quasi-fluid. Accuracy rivaling that of the refractive index method is demonstrated for both translucent and opaque aerogels with different absorptive properties, as well as for aerogels with regular and irregular shapes.

  10. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model of turbulence.

  11. Aging adult skull remains through radiological density estimates: A comparison of different computed tomography systems and the use of computer simulations to judge the accuracy of results.

    PubMed

    Obert, Martin; Kubelt, Carolin; Schaaf, Thomas; Dassinger, Benjamin; Grams, Astrid; Gizewski, Elke R; Krombach, Gabriele A; Verhoff, Marcel A

    2013-05-10

The objective of this article was to explore age-at-death estimates in forensic medicine, methodologically based on age-dependent, radiologically defined bone-density (HC) decay and investigated with a standard clinical computed tomography (CT) system. Such density decay was formerly discovered with a high-resolution flat-panel CT in the skulls of adult females. The development of a standard CT methodology for age estimations--with thousands of installations--would have the advantage of being applicable everywhere, whereas only a few flat-panel prototype CT systems are in use worldwide. A Multi-Slice CT scanner (MSCT) was used to obtain 22,773 images from 173 European human skulls (89 male, 84 female), taken from a population of patients of the Department of Neuroradiology at the University Hospital Giessen and Marburg during 2010 and 2011. An automated image analysis was carried out to evaluate HC in all images. The age dependence of HC was studied by correlation analysis. The prediction accuracy of age-at-death estimates was calculated. Computer simulations were carried out to explore the influence of noise on the accuracy of age predictions. Human skull HC values scatter strongly as a function of age for both sexes. Adult male skull bone density remains constant during lifetime. Adult female HC decays during lifetime, as indicated by a correlation coefficient (CC) of -0.53. Prediction errors for age-at-death estimates for both of the scanners used are in the range of ±18 years at a 75% confidence interval (CI). Computer simulations indicate that this is the best that can be expected for such noisy data. Our results indicate that HC decay is indeed present in adult females and that it can be demonstrated both by standard and by high-resolution CT methods, applied to different subject groups of an identical population. The weak correlation between HC and age found by both CT methods only enables a method to estimate age-at-death with limited practical relevance, since the errors of the estimates are large. Computer simulations clearly indicate that data with less noise and CCs on the order of -0.97 or lower would be necessary to enable age-at-death estimates with an accuracy of ±5 years at a 75% CI. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
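The link between correlation strength and prediction error can be made explicit with a back-of-envelope version of the simulation argument: for an approximately bivariate-normal age/density relation, the residual age spread is sd_age·sqrt(1 - r²), and a 75% confidence half-width is about 1.15 times that. The age spread used below is an assumption chosen to reproduce the reported figures, not a value from the paper.

```python
# Back-of-envelope check of the correlation-vs-accuracy argument: under a
# bivariate-normal model, residual age sd = sd_age * sqrt(1 - r^2), and the
# 75% CI half-width is ~1.15 residual sd. sd_age here is an assumed value.
import math

def age_ci_halfwidth(sd_age_years, r, z75=1.15):
    """Half-width of a 75% prediction interval for age given correlation r."""
    return z75 * sd_age_years * math.sqrt(1.0 - r * r)

# With an age spread of ~18.5 y and the observed r = -0.53,
# errors stay near +/-18 y:
print(age_ci_halfwidth(18.5, -0.53))   # -> ~18
# Reaching +/-5 y at the same spread needs |r| ~ 0.97:
print(age_ci_halfwidth(18.5, -0.97))   # -> ~5
```

This reproduces the paper's headline numbers: a CC of -0.53 cannot deliver forensically useful precision, whatever the sample size.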

  12. Neural network evaluation of reflectometry density profiles for control purposes

    NASA Astrophysics Data System (ADS)

    Santos, J.; Nunes, F.; Manso, M.; Nunes, I.

    1999-01-01

Broadband reflectometry is a diagnostic that can measure the density profile with high spatial and temporal resolution; it can therefore be used to improve the performance of advanced tokamak operation modes and to supplement or correct the magnetic diagnostics used for plasma position control. Real-time processing is needed to perform these tasks. Here we present a method that uses a neural network to make a fast evaluation of the radial positions of selected density layers. Typical ASDEX Upgrade density profiles were used to generate the simulated network training and test sets. It is shown that the method has the potential to meet the tight timing requirements of control applications with the required accuracy. The network can also provide an accurate estimate of the position of density layers below the first layer probed by the O-mode reflectometer, provided that it is trained with a realistic density profile model.

  13. Sampling methods, dispersion patterns, and fixed precision sequential sampling plans for western flower thrips (Thysanoptera: Thripidae) and cotton fleahoppers (Hemiptera: Miridae) in cotton.

    PubMed

    Parajulee, M N; Shrestha, R B; Leser, J F

    2006-04-01

A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave results similar to the visual method in detecting adult thrips, but it detected a significantly higher number of thrips larvae than visual sampling. Visual sampling detected the highest number of fleahoppers, followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between the vacuum and sweep net methods. However, based on fixed-precision cost reliability, sweep net sampling was the most cost-effective method, followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's Power Law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decisions based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with increasing fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
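Fixed-precision sample-size rules of the kind described here are commonly built on Taylor's power law, s² = a·mᵇ, which gives a required sample size n = a·m^(b-2)/D² for precision D (standard error over mean). The a and b values below are placeholders, not the paper's fitted parameters.

```python
# Sketch of a fixed-precision (Green-style) sample-size rule from Taylor's
# power law s^2 = a * m^b: n = a * m^(b-2) / D^2 for precision D = SE/mean.
# The a and b values are illustrative placeholders, not the fitted values.
def sample_size(mean_density, a, b, precision):
    """Minimum samples for a target precision at a given mean density."""
    return a * mean_density ** (b - 2.0) / precision ** 2

# With aggregation (1 < b < 2), required n falls as density rises,
# echoing the 16-plants-down-to-4-plants pattern in the abstract:
for m in (1.0, 5.0, 10.0):
    print(m, sample_size(m, a=1.0, b=1.5, precision=0.25))
```

With these placeholder parameters, n drops from 16 at one insect per plant to about 5 at ten per plant, mirroring the qualitative behavior the study reports.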

  14. Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

    NASA Technical Reports Server (NTRS)

    Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

    2010-01-01

This document provides a summary of the current methods developed by Metron Aviation for the estimation of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turnaround times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular, this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities, while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace, along with projected improvements to airframe, engine, and navigational equipment.

  15. Determining Core Plasmaspheric Electron Densities with the Van Allen Probes

    NASA Astrophysics Data System (ADS)

    De Pascuale, S.; Hartley, D.; Kurth, W. S.; Kletzing, C.; Thaller, S. A.; Wygant, J. R.

    2016-12-01

We survey three methods for obtaining electron densities in the core plasmasphere region (L < 4) down to the perigee of the Van Allen Probes (L ~ 1.1) from September 2012 to December 2014. Using the EMFISIS instrument on board the Van Allen Probes, electron densities are extracted from the upper hybrid resonance with an uncertainty of 10%. Some measurements are subject to larger errors owing to interpretational issues, especially at the low densities (L > 4) resulting from geomagnetic activity. At high densities, EMFISIS is restricted by an upper observable limit near 3000 cm⁻³. As this limit is encountered above perigee, we employ two additional methods, validated against EMFISIS measurements, to determine electron densities deep within the plasmasphere (L < 2). EMFISIS can extend density estimates to lower L by calculating high densities from plasma wave properties, in good agreement with the upper hybrid technique where the two overlap. Calibrated measurements from the Van Allen Probes EFW potential instrument also extend into this range. In comparison with the published EMFISIS database, we provide a metric for the validity of core plasmaspheric density measurements obtained from these methods and an empirical density model for use in wave and particle simulations.
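The upper hybrid technique rests on a standard plasma physics identity: f_uh² = f_pe² + f_ce², so the electron plasma frequency, and hence density, follows once the electron cyclotron frequency is known. The frequencies in the example are illustrative, not EMFISIS measurements.

```python
# Standard conversion from an upper-hybrid resonance frequency to electron
# density: f_pe^2 = f_uh^2 - f_ce^2 and n_e = (2*pi*f_pe)^2 * eps0 * m_e / e^2
# (equivalently n_e[cm^-3] ~ (f_pe[kHz]/8.98)^2). Input frequencies are
# illustrative, not instrument data.
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity (F/m)
M_E = 9.1093837015e-31    # electron mass (kg)
Q_E = 1.602176634e-19     # elementary charge (C)

def electron_density_cm3(f_uh_hz, f_ce_hz):
    """Electron density (cm^-3) from upper-hybrid and cyclotron frequencies."""
    f_pe_sq = f_uh_hz ** 2 - f_ce_hz ** 2
    n_m3 = 4.0 * math.pi ** 2 * f_pe_sq * EPS0 * M_E / Q_E ** 2
    return n_m3 * 1e-6

print(electron_density_cm3(500e3, 100e3))   # a few thousand cm^-3
```

The 3000 cm⁻³ observable ceiling quoted in the record corresponds, through this same relation, to an upper hybrid line moving out of the instrument's frequency band.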

  16. Comparing Visually Assessed BI-RADS Breast Density and Automated Volumetric Breast Density Software: A Cross-Sectional Study in a Breast Cancer Screening Setting

    PubMed Central

    van der Waal, Daniëlle; den Heeten, Gerard J.; Pijnappel, Ruud M.; Schuur, Klaas H.; Timmers, Johanna M. H.; Verbeek, André L. M.; Broeders, Mireille J. M.

    2015-01-01

    Introduction The objective of this study is to compare different methods for measuring breast density, both visual assessments and automated volumetric density, in a breast cancer screening setting. These measures could potentially be implemented in future screening programmes, in the context of personalised screening or screening evaluation. Materials and Methods Digital mammographic exams (N = 992) of women participating in the Dutch breast cancer screening programme (age 50–75y) in 2013 were included. Breast density was measured in three different ways: BI-RADS density (5th edition) and with two commercially available automated software programs (Quantra and Volpara volumetric density). BI-RADS density (ordinal scale) was assessed by three radiologists. Quantra (v1.3) and Volpara (v1.5.0) provide continuous estimates. Different comparison methods were used, including Bland-Altman plots and correlation coefficients (e.g., intraclass correlation coefficient [ICC]). Results Based on the BI-RADS classification, 40.8% of the women had ‘heterogeneously or extremely dense’ breasts. The median volumetric percent density was 12.1% (IQR: 9.6–16.5) for Quantra, which was higher than the Volpara estimate (median 6.6%, IQR: 4.4–10.9). The mean difference between Quantra and Volpara was 5.19% (95% CI: 5.04–5.34) (ICC: 0.64). There was a clear increase in volumetric percent dense volume as BI-RADS density increased. The highest accuracy for predicting the presence of BI-RADS c+d (heterogeneously or extremely dense) was observed with a cut-off value of 8.0% for Volpara and 13.8% for Quantra. Conclusion Although there was no perfect agreement, there appeared to be a strong association between all three measures. Both volumetric density measures seem to be usable in breast cancer screening programmes, provided that the required data flow can be realized. PMID:26335569
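
    The Bland-Altman comparison used in this study can be sketched with synthetic data; the numbers below are illustrative stand-ins for the two volumetric readings, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic percent-density readings from two hypothetical automated programs;
# the second is given a systematic +5% offset, mimicking a calibration difference.
reader_a = rng.gamma(shape=3.0, scale=2.5, size=500)
reader_b = reader_a + 5.0 + rng.normal(0.0, 1.5, size=500)

diff = reader_b - reader_a
bias = diff.mean()                          # Bland-Altman mean difference
loa = (bias - 1.96 * diff.std(ddof=1),      # 95% limits of agreement
       bias + 1.96 * diff.std(ddof=1))
r = np.corrcoef(reader_a, reader_b)[0, 1]   # simple correlation between methods
```

    As in the paper, a high correlation can coexist with a large mean difference: the offset shows up in the Bland-Altman bias, not in r.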

  17. Population estimate of Chinese mystery snail (Bellamya chinensis) in a Nebraska reservoir

    USGS Publications Warehouse

    Chaine, Noelle M.; Allen, Craig R.; Fricke, Kent A.; Haak, Danielle M.; Hellman, Michelle L.; Kill, Robert A.; Nemec, Kristine T.; Pope, Kevin L.; Smeenk, Nicholas A.; Stephen, Bruce J.; Uden, Daniel R.; Unstad, Kody M.; VanderHam, Ashley E.

    2012-01-01

    The Chinese mystery snail (Bellamya chinensis) is an aquatic invasive species in North America. Little is known regarding this species' impacts on freshwater ecosystems. It is believed that population densities can be high, yet no population estimates have been reported. We utilized a mark-recapture approach to generate a population estimate for Chinese mystery snail in Wild Plum Lake, a 6.47-ha reservoir in southeast Nebraska. We calculated, using bias-adjusted Lincoln-Petersen estimation, that there were approximately 664 adult snails within a 127 m2 transect (5.2 snails/m2). If this density were consistent throughout the littoral zone (<3 m in depth) of the reservoir, then the total adult population in this impoundment is estimated to be 253,570 snails, and the total Chinese mystery snail wet biomass is estimated to be 3,119 kg (643 kg/ha). If this density is confined to the depth sampled in this study (1.46 m), then the adult population is estimated to be 169,400 snails, and wet biomass is estimated to be 2,084 kg (643 kg/ha). Additional research is warranted to further test the utility of mark-recapture methods for aquatic snails and to better understand Chinese mystery snail distributions within reservoirs.
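
    The bias-adjusted Lincoln-Petersen (Chapman) estimator used here is a one-line formula; the capture counts below are hypothetical, chosen only to illustrate the arithmetic, not the study's field data:

```python
def chapman_estimate(n1, n2, m2):
    """Bias-adjusted Lincoln-Petersen (Chapman) abundance estimate.

    n1: animals marked and released in the first sample
    n2: animals captured in the second sample
    m2: marked recaptures among the second sample
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical transect counts; density then follows by dividing the abundance
# estimate by the sampled area (127 m^2 in the study).
n_hat = chapman_estimate(150, 180, 40)
density = n_hat / 127.0   # snails per m^2
```

    Extrapolation to the whole reservoir, as in the abstract, is simply density times the assumed occupied area (littoral zone or sampled-depth zone).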

  18. HISTORICAL ANALYSIS OF THE RELATIONSHIP OF STREAMFLOW FLASHINESS WITH POPULATION DENSITY, IMPERVIOUSNESS, AND PERCENT URBAN LAND COVER IN THE MID-ATLANTIC REGION

    EPA Science Inventory

    Methods: This study is an examination of the relationship between stream flashiness and watershed-scale estimates of percent imperviousness, degree of urban development, and population density for 150 watersheds with long-term USGS National Water Information System (NWIS) histori...

  19. Integrating resource selection into spatial capture-recapture models for large carnivores

    Treesearch

    K. M. Proffitt; J. F. Goldberg; M. Hebblewhite; R. Russell; B. S. Jimenez; H. S. Robinson; Kristine Pilgrim; Michael Schwartz

    2015-01-01

    Wildlife managers need reliable methods to estimate large carnivore densities and population trends; yet large carnivores are elusive, difficult to detect, and occur at low densities, making traditional approaches intractable. Recent advances in spatial capture-recapture (SCR) models have provided new approaches for monitoring trends in wildlife abundance and...

  20. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: a postmortem study.

    PubMed

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q; Ducote, Justin L; Su, Min-Ying; Molloi, Sabee

    2013-12-01

    Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left-right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. The breast densities measured with the three methods were then compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of bias field. The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left-right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left-right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications.
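
    The standard FCM step the study starts from can be sketched in one dimension. This toy deliberately omits the CLIC bias-field term and uses synthetic intensities standing in for fatty vs. fibroglandular voxels:

```python
import numpy as np

def fcm_1d(x, c=2, m=2.0, n_iter=100):
    """Minimal standard fuzzy c-means for 1-D intensities (no bias-field model)."""
    x = np.asarray(x, dtype=float)
    centers = np.linspace(x.min(), x.max(), c)             # deterministic init
    for _ in range(n_iter):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12  # (c, n) distances
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0, keepdims=True)                  # fuzzy memberships
        um = u ** m
        centers = (um * x).sum(axis=1) / um.sum(axis=1)    # weighted means
    return np.sort(centers), u

# Two hypothetical intensity populations (arbitrary units, no inhomogeneity).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(1.0, 0.1, 200), rng.normal(5.0, 0.3, 100)])
centers, u = fcm_1d(x)
```

    A bias field would smear the two intensity clusters together, which is exactly why the CLIC correction is applied before (or jointly with) this clustering step.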

  1. Enhanced local tomography

    DOEpatents

    Katsevich, Alexander J.; Ramm, Alexander G.

    1996-01-01

    Local tomography is enhanced to determine the location and value of a discontinuity between a first internal density of an object and a second density of a region within the object. A beam of radiation is directed in a predetermined pattern through the region of the object containing the discontinuity. Relative attenuation data of the beam is determined within the predetermined pattern having a first data component that includes attenuation data through the region. In a first method for evaluating the value of the discontinuity, the relative attenuation data is input to a local tomography function f_Λ to define the location S of the density discontinuity. The asymptotic behavior of f_Λ is determined in a neighborhood of S, and the value of the discontinuity is estimated from this asymptotic behavior. In a second method for evaluating the value of the discontinuity, a gradient value of a mollified local tomography function, ∇f_Λε(x_ij), is determined along the discontinuity, and the value of the jump of the density across the discontinuity curve (or surface) S is estimated from the gradient values.
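
    The idea behind the second method (reading a jump height off the gradient of a mollified function) can be sketched in one dimension. The profile, mollifier width, and Gaussian mollifier choice below are illustrative assumptions, not the patent's construction:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# A 1-D density profile with a single jump of known height.
h_true = 2.0                      # density jump across the interface
x = np.zeros(2000)
x[1000:] = h_true                 # step discontinuity at index 1000

# Mollify with a Gaussian of width eps (in samples) and differentiate.
eps = 8.0
grad = np.gradient(gaussian_filter1d(x, sigma=eps))

# For a Gaussian mollifier, max|grad| = h / (eps * sqrt(2*pi)), so the
# jump height can be recovered from the peak gradient value.
h_est = grad.max() * eps * np.sqrt(2.0 * np.pi)
```

    In the tomographic setting the same reasoning is applied along the detected discontinuity surface S, with the mollified local tomography function playing the role of the smoothed profile.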

  2. Use of burrow entrances to indicate densities of Townsend's ground squirrels

    USGS Publications Warehouse

    Van Horne, Beatrice; Schooley, Robert L.; Knick, Steven T.; Olson, G.S.; Burnham, K.P.

    1997-01-01

    Counts of burrow entrances have been positively correlated with densities of semi-fossorial rodents and used as an index of densities. We evaluated their effectiveness in indexing densities of Townsend's ground squirrels (Spermophilus townsendii) in the Snake River Birds of Prey National Conservation Area (SRBOPNCA), Idaho, by comparing burrow entrance densities to densities of ground squirrels estimated from livetrapping in 2 consecutive years over which squirrel populations declined by >75%. We did not detect a consistent relation between burrow entrance counts and ground squirrel density estimates within or among habitat types. Scatter plots indicated that burrow entrances had little predictive power at intermediate densities. Burrow entrance counts did not reflect the magnitude of a between-year density decline. Repeated counts of entrances late in the squirrels' active season varied in a manner that would be difficult to use for calibration of transects sampled only once during this period. Annual persistence of burrow entrances varied between habitats. Trained observers were inconsistent in assigning active-inactive status to entrances. We recommend that burrow entrance counts not be used as measures or indices of ground squirrel densities in shrubsteppe habitats, and that the method be verified thoroughly before being used in other habitats.

  3. A variable circular-plot method for estimating bird numbers

    USGS Publications Warehouse

    Reynolds, R.T.; Scott, J.M.; Nussbaum, R.A.

    1980-01-01

    A bird census method is presented that is designed for tall, structurally complex vegetation types, and rugged terrain. With this method the observer counts all birds seen or heard around a station, and estimates the horizontal distance from the station to each bird. Count periods at stations vary according to the avian community and structural complexity of the vegetation. The density of each species is determined by inspecting a histogram of the number of individuals per unit area in concentric bands of predetermined widths about the stations, choosing the band (with outside radius x) where the density begins to decline, and summing the number of individuals counted within the circle of radius x and dividing by the area (πx²). Although all observations beyond radius x are rejected with this procedure, coefficients of maximum distance.
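
    The band-inspection rule can be sketched as follows. The band width and the "decline" criterion (here, a 50% drop in per-area density) are illustrative choices of ours, not the paper's:

```python
import numpy as np

def circular_plot_density(distances, band_width=10.0):
    """Density (birds per m^2) from a variable circular-plot count.

    distances: horizontal distances (m) from the station to each detected bird.
    Finds the concentric band where per-area density starts to drop, then
    divides the count inside that radius x by the circle area pi * x**2.
    """
    distances = np.asarray(distances, dtype=float)
    n_bands = int(np.ceil(distances.max() / band_width))
    prev = None
    for b in range(n_bands):
        lo, hi = b * band_width, (b + 1) * band_width
        dens = np.sum((distances >= lo) & (distances < hi)) / (np.pi * (hi**2 - lo**2))
        if prev is not None and dens < 0.5 * prev:   # density begins to decline
            inside = np.sum(distances < lo)
            return inside / (np.pi * lo**2)
        prev = dens
    return distances.size / (np.pi * (n_bands * band_width) ** 2)

# Hypothetical detections: roughly uniform density out to 30 m, then a drop-off.
dists = [3, 5, 8,
         11, 12, 14, 15, 16, 17, 18, 19, 19.5,
         20.5, 21, 22, 22.5, 23, 24, 24.5, 25, 26, 26.5, 27, 27.5, 28, 28.5, 29, 29.5,
         38]
d_hat = circular_plot_density(dists)
```

    The decline in apparent per-area density marks where detectability falls off, so only the fully detected inner circle contributes to the estimate.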

  4. Log sampling methods and software for stand and landscape analyses.

    Treesearch

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe methods for efficient, accurate sampling of logs at landscape and stand scales to estimate density, total length, cover, volume, and weight. Our methods focus on optimizing the sampling effort by choosing an appropriate sampling method and transect length for specific forest conditions and objectives. Sampling methods include the line-intersect method and...

  5. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to do density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.

  6. Individualized Statistical Learning from Medical Image Databases: Application to Identification of Brain Lesions

    PubMed Central

    Erus, Guray; Zacharaki, Evangelia I.; Davatzikos, Christos

    2014-01-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a “target-specific” feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject’s images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an “estimability” criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. PMID:24607564

  7. Numerical approach for ECT by using boundary element method with Laplace transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enokizono, M.; Todaka, T.; Shibao, K.

    1997-03-01

    This paper presents an inverse analysis by using BEM with Laplace transform. The method is applied to a simple problem in the eddy current testing (ECT). Some crack shapes in a conductive specimen are estimated from distributions of the transient eddy current on its sensing surface and magnetic flux density in the liftoff space. Because the transient behavior includes information on various frequency components, the method is applicable to the shape estimation of a comparatively small crack.

  8. Modeling the Response of Anopheles gambiae (Diptera: Culicidae) Populations in the Kenya Highlands to a Rise in Mean Annual Temperature.

    PubMed

    Wallace, Dorothy; Prosper, Olivia; Savos, Jacob; Dunham, Ann M; Chipman, Jonathan W; Shi, Xun; Ndenga, Bryson; Githeko, Andrew

    2017-03-01

    A dynamical model of Anopheles gambiae larval and adult populations is constructed that matches temperature-dependent maturation times and mortality measured experimentally as well as larval instar and adult mosquito emergence data from field studies in the Kenya Highlands. Spectral classification of high-resolution satellite imagery is used to estimate household density. Indoor resting densities collected over a period of one year combined with predictions of the dynamical model give estimates of both aquatic habitat and total adult mosquito densities. Temperature and precipitation patterns are derived from monthly records. Precipitation patterns are compared with average and extreme habitat estimates to estimate available aquatic habitat in an annual cycle. These estimates are coupled with the original model to produce estimates of adult and larval populations dependent on changing aquatic carrying capacity for larvae and changing maturation and mortality dependent on temperature. This paper offers a general method for estimating the total area of aquatic habitat in a given region, based on larval counts, emergence rates, indoor resting density data, and number of households. Altering the average daily temperature and the average daily rainfall simulates the effect of climate change on annual cycles of prevalence of An. gambiae adults. We show that small increases in average annual temperature have a large impact on adult mosquito density, whether measured at model equilibrium values for a single square meter of habitat or tracked over the course of a year of varying habitat availability and temperature. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Flow Cytometry Pulse Width Data Enables Rapid and Sensitive Estimation of Biomass Dry Weight in the Microalgae Chlamydomonas reinhardtii and Chlorella vulgaris

    PubMed Central

    Chioccioli, Maurizio; Hankamer, Ben; Ross, Ian L.

    2014-01-01

    Dry weight biomass is an important parameter in algaculture. Direct measurement requires weighing milligram quantities of dried biomass, which is problematic for small volume systems containing few cells, such as laboratory studies and high throughput assays in microwell plates. In these cases indirect methods must be used, inducing measurement artefacts which vary in severity with the cell type and conditions employed. Here, we utilise flow cytometry pulse width data for the estimation of cell density and biomass, using Chlorella vulgaris and Chlamydomonas reinhardtii as model algae, and compare it to optical density methods. Measurement of cell concentration by flow cytometry was shown to be more sensitive than optical density at 750 nm (OD750) for monitoring culture growth. However, neither cell concentration nor optical density correlates well to biomass when growth conditions vary. Compared to the growth of C. vulgaris in TAP (tris-acetate-phosphate) medium, cells grown in TAP + glucose displayed a slowed cell division rate and a 2-fold increased dry biomass accumulation compared to growth without glucose. This was accompanied by increased cellular volume. Laser scattering characteristics during flow cytometry were used to estimate cell diameters, and an empirical but nonlinear relationship was demonstrated between flow cytometric pulse width and dry weight biomass per cell. This relationship could be linearised by the use of hypertonic conditions (1 M NaCl) to dehydrate the cells, as shown by density gradient centrifugation. Flow cytometry for biomass estimation is easy to perform, sensitive and offers more comprehensive information than optical density measurements. In addition, periodic flow cytometry measurements can be used to calibrate OD750 measurements for both convenience and accuracy. This approach is particularly useful for small samples and where cellular characteristics, especially cell size, are expected to vary during growth. 
PMID:24832156

  10. On-site monitoring of atomic density number for an all-optical atomic magnetometer based on atomic spin exchange relaxation.

    PubMed

    Zhang, Hong; Zou, Sheng; Chen, Xiyuan; Ding, Ming; Shan, Guangcun; Hu, Zhaohui; Quan, Wei

    2016-07-25

    We present a method for monitoring the atomic density number on site based on atomic spin exchange relaxation. When the spin polarization P ≪ 1, the atomic density numbers could be estimated by measuring magnetic resonance linewidth in an applied DC magnetic field by using an all-optical atomic magnetometer. The measurements showed that the experimental results and the theoretical predictions were consistent in trend over the investigated temperature range from 413 K to 463 K, although the experimental values were approximately 1.5 ∼ 2 times lower than the theoretical predictions estimated from the saturated vapor pressure curve. These deviations were mainly induced by radiative heat transfer, which inevitably led to a lower temperature in the cell than the set temperature.

  11. Comparison of visual survey and seining methods for estimating abundance of an endangered, benthic stream fish

    USGS Publications Warehouse

    Jordan, F.; Jelks, H.L.; Bortone, S.A.; Dorazio, R.M.

    2008-01-01

    We compared visual survey and seining methods for estimating abundance of endangered Okaloosa darters, Etheostoma okaloosae, in 12 replicate stream reaches during August 2001. For each 20-m stream reach, two divers systematically located and marked the position of darters and then a second crew of three to five people came through with a small-mesh seine and exhaustively sampled the same area. Visual surveys required little extra time to complete. Visual counts (24.2 ± 12.0; mean ± one SD) considerably exceeded seine captures (7.4 ± 4.8), and counts from the two methods were uncorrelated. Visual surveys, but not seines, detected the presence of Okaloosa darters at one site with low population densities. In 2003, we performed a depletion removal study in 10 replicate stream reaches to assess the accuracy of the visual survey method. Visual surveys detected 59% of Okaloosa darters present, and visual counts and removal estimates were positively correlated. Taken together, our comparisons indicate that visual surveys more accurately and precisely estimate abundance of Okaloosa darters than seining and more reliably detect presence at low population densities. We recommend evaluation of visual survey methods when designing programs to monitor abundance of benthic fishes in clear streams, especially for threatened and endangered species that may be sensitive to handling and habitat disturbance. © 2007 Springer Science+Business Media, Inc.

  12. Waste rice seed in conventional and stripper-head harvested fields in California: Implications for wintering waterfowl

    USGS Publications Warehouse

    Fleskes, Joseph P.; Halstead, Brian J.; Casazza, Michael L.; Coates, Peter S.; Kohl, Jeffrey D.; Skalos, Daniel A.

    2012-01-01

    Waste rice seed is an important food for wintering waterfowl and current estimates of its availability are needed to determine the carrying capacity of rice fields and guide habitat conservation. We used a line-intercept method to estimate mass-density of rice seed remaining after harvest during 2010 in the Sacramento Valley (SACV) of California and compared results with estimates from previous studies in the SACV and Mississippi Alluvial Valley (MAV). Posterior mean (95% credible interval) estimates of total waste rice seed mass-density for the SACV in 2010 were 388 (336–449) kg/ha in conventionally harvested fields and 245 (198–307) kg/ha in stripper-head harvested fields; the 2010 mass-density is nearly identical to the mid-1980s estimate for conventionally harvested fields but 36% lower than the mid-1990s estimate for stripped fields. About 18% of SACV fields were stripper-head harvested in 2010 vs. 9–15% in the mid-1990s and 0% in the mid-1980s; but due to a 50% increase in planted rice area, total mass of waste rice seed in SACV remaining after harvest in 2010 was 43% greater than in the mid-1980s. However, total mass of seed-eating waterfowl also increased 82%, and the ratio of waste rice seed to seed-eating waterfowl mass was 21% smaller in 2010 than in the mid-1980s. Mass-densities of waste rice remaining after harvest in SACV fields are within the range reported for MAV fields. However, because there is a lag between harvest and waterfowl use in the MAV but not in the SACV, seed loss is greater in the MAV and estimated waste seed mass-density available to wintering waterfowl in SACV fields is about 5–30 times recent MAV estimates. Waste rice seed remains an abundant food source for waterfowl wintering in the SACV, but increased use of stripper-head harvesters would reduce this food. 
To provide accurate data on carrying capacities of rice fields necessary for conservation planning, trends in planted rice area, harvest method, and postharvest field treatment should be tracked and impacts of postharvest field treatment and other farming practices on waste rice seed availability should be investigated.

  13. Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.

    PubMed

    Rad, Kamiar Rahnama; Paninski, Liam

    2010-01-01

    Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural error bars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
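
    The core idea can be sketched with off-the-shelf GP regression on a synthetic 2-D place field. This is a simple stand-in using a Gaussian likelihood, not the authors' efficient Poisson-likelihood implementation; the environment, peak rate, and kernel choices are hypothetical:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Hypothetical place-field data: spike counts over a unit-square environment,
# with a single firing field in the middle (peak rate 10).
xy = rng.uniform(0.0, 1.0, size=(400, 2))
rate = 10.0 * np.exp(-((xy[:, 0] - 0.5) ** 2 + (xy[:, 1] - 0.5) ** 2) / 0.05)
counts = rng.poisson(rate).astype(float)

# GP regression: the RBF length scale (prior smoothness of the rate map) is
# fit by maximizing the marginal likelihood, as the abstract describes.
kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(xy, counts)
mu, sd = gp.predict(np.array([[0.5, 0.5], [0.0, 0.0]]), return_std=True)
```

    The predictive standard deviation sd is the "natural error bar": it grows where observations are sparse and shrinks where the data are dense and informative.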

  14. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
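
    After discretization, the Fredholm integral of the first kind becomes an ill-conditioned linear system, and Tikhonov regularization reduces to a regularized least-squares solve. A minimal numerical sketch with a toy smoothing kernel (the kernel, grid, and regularization parameter below are illustrative, not the paper's v sin I kernel):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularized least squares: argmin ||A x - b||^2 + lam ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy deconvolution: recover a density from its smoothed (ill-posed) image.
n = 100
t = np.linspace(-3.0, 3.0, n)
A = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 0.3 ** 2)  # smoothing kernel
A /= A.sum(axis=1, keepdims=True)
x_true = np.exp(-0.5 * t ** 2)                                # "true" pdf shape
b = A @ x_true + np.random.default_rng(0).normal(0.0, 1e-3, n)
x_hat = tikhonov_solve(A, b, lam=1e-3)
```

    Choosing lam trades data fidelity against smoothness of the recovered distribution; the paper proposes its own simple procedure for fixing this parameter.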

  15. Joint constraints on galaxy bias and σ_8 through the N-pdf of the galaxy number density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnalte-Mur, Pablo; Martínez, Vicent J.; Vielva, Patricio

    We present a full description of the N-probability density function (N-pdf) of the galaxy number density fluctuations. This N-pdf is given in terms, on the one hand, of the cold dark matter correlations and, on the other hand, of the galaxy bias parameter. The method relies on the commonly adopted assumption that the dark matter density fluctuations follow a local non-linear transformation of the initial energy density perturbations. The N-pdf of the galaxy number density fluctuations allows for an optimal estimation of the bias parameter (e.g., via maximum-likelihood estimation, or Bayesian inference if there exists any a priori information on the bias parameter), and of those parameters defining the dark matter correlations, in particular its amplitude (σ_8). It also provides the proper framework to perform model selection between two competitive hypotheses. The parameter estimation capabilities of the N-pdf are proved by SDSS-like simulations (both ideal log-normal simulations and mocks obtained from the Las Damas simulations), showing that our estimator is unbiased. We apply our formalism to the 7th release of the SDSS main sample (for a volume-limited subset with absolute magnitudes M_r ≤ −20). We obtain b̂ = 1.193 ± 0.074 and σ̄_8 = 0.862 ± 0.080, for galaxy number density fluctuations in cells of the size of 30 h^−1 Mpc. Different model selection criteria show that galaxy biasing is clearly favoured.

  16. Comparison of point counts and territory mapping for detecting effects of forest management on songbirds

    USGS Publications Warehouse

    Newell, Felicity L.; Sheehan, James; Wood, Petra Bohall; Rodewald, Amanda D.; Buehler, David A.; Keyser, Patrick D.; Larkin, Jeffrey L.; Beachy, Tiffany A.; Bakermans, Marja H.; Boves, Than J.; Evans, Andrea; George, Gregory A.; McDermott, Molly E.; Perkins, Kelly A.; White, Matthew; Wigley, T. Bently

    2013-01-01

    Point counts are commonly used to assess changes in bird abundance, including analytical approaches such as distance sampling that estimate density. Point-count methods have come under increasing scrutiny because effects of detection probability and field error are difficult to quantify. For seven forest songbirds, we compared fixed-radii counts (50 m and 100 m) and density estimates obtained from distance sampling to known numbers of birds determined by territory mapping. We applied point-count analytic approaches to a typical forest management question and compared results to those obtained by territory mapping. We used a before–after control impact (BACI) analysis with a data set collected across seven study areas in the central Appalachians from 2006 to 2010. Using a 50-m fixed radius, variance in error was at least 1.5 times that of the other methods, whereas a 100-m fixed radius underestimated actual density by >3 territories per 10 ha for the most abundant species. Distance sampling improved accuracy and precision compared to fixed-radius counts, although estimates were affected by birds counted outside 10-ha units. In the BACI analysis, territory mapping detected an overall treatment effect for five of the seven species, and effects were generally consistent each year. In contrast, all point-count methods failed to detect two treatment effects due to variance and error in annual estimates. Overall, our results highlight the need for adequate sample sizes to reduce variance, and skilled observers to reduce the level of error in point-count data. Ultimately, the advantages and disadvantages of different survey methods should be considered in the context of overall study design and objectives, allowing for trade-offs among effort, accuracy, and power to detect treatment effects.

  17. Nowcasting Cloud Fields for U.S. Air Force Special Operations

    DTIC Science & Technology

    2017-03-01

    application of Bayes' Rule offers many advantages over Kernel Density Estimation (KDE) and other commonly used statistical post-processing methods...reflectance and probability of cloud. A statistical post-processing technique is applied using Bayesian estimation to train the system from a set of past...nowcasting, low cloud forecasting, cloud reflectance, ISR, Bayesian estimation, statistical post-processing, machine learning

  18. Volumes and bulk densities of forty asteroids from ADAM shape modeling

    NASA Astrophysics Data System (ADS)

    Hanuš, J.; Viikinkoski, M.; Marchis, F.; Ďurech, J.; Kaasalainen, M.; Delbo', M.; Herald, D.; Frappa, E.; Hayamizu, T.; Kerr, S.; Preston, S.; Timerson, B.; Dunham, D.; Talbot, J.

    2017-05-01

    Context. Disk-integrated photometric data of asteroids do not contain accurate information on shape details or size scale. Additional data such as disk-resolved images or stellar occultation measurements further constrain asteroid shapes and allow size estimates. Aims: We aim to use all the available disk-resolved images of approximately forty asteroids obtained by the Near-InfraRed Camera (Nirc2) mounted on the W.M. Keck II telescope together with the disk-integrated photometry and stellar occultation measurements to determine their volumes. We can then use the volume, in combination with the known mass, to derive the bulk density. Methods: We downloaded and processed all the asteroid disk-resolved images obtained by the Nirc2 that are available in the Keck Observatory Archive (KOA). We combined optical disk-integrated data and stellar occultation profiles with the disk-resolved images and use the All-Data Asteroid Modeling (ADAM) algorithm for the shape and size modeling. Our approach provides constraints on the expected uncertainty in the volume and size as well. Results: We present shape models and volume for 41 asteroids. For 35 of these asteroids, the knowledge of their mass estimates from the literature allowed us to derive their bulk densities. We see a clear trend of lower bulk densities for primitive objects (C-complex) and higher bulk densities for S-complex asteroids. The range of densities in the X-complex is large, suggesting various compositions. We also identified a few objects with rather peculiar bulk densities, which is likely a hint of their poor mass estimates. Asteroid masses determined from the Gaia astrometric observations should further refine most of the density estimates.

  19. A microwave method for measuring moisture content, density, and grain angle of wood

    Treesearch

    W. L. James; Y.-H. Yen; R. J. King

    1985-01-01

    The attenuation, phase shift and depolarization of a polarized 4.81-gigahertz wave as it is transmitted through a wood specimen can provide estimates of the moisture content (MC), density, and grain angle of the specimen. Calibrations are empirical, and computations are complicated, with considerable interaction between parameters. Measured dielectric parameters,...

  20. Moments of the phase-space density, coincidence probabilities, and entropies of a multiparticle system

    NASA Astrophysics Data System (ADS)

    Bialas, A.

    2006-04-01

A method to estimate moments of the phase-space density from event-by-event fluctuations is reviewed and its accuracy analyzed. The relation of these measurements to the determination of the entropy of the system is discussed. This is a summary of results obtained recently together with W. Czyz and K. Zalewski.

  1. Study of compact radio sources using interplanetary scintillations at 111 MHz. The Pearson-Readhead sample

    NASA Astrophysics Data System (ADS)

    Tyul'Bashev, S. A.

    2009-01-01

    A complete sample of radio sources has been studied using the interplanetary scintillation method. In total, 32 sources were observed, with scintillations detected in 12 of them. The remaining sources have upper limits for the flux densities of their compact components. Integrated flux densities are estimated for 18 sources.

  2. Robust statistical reconstruction for charged particle tomography

    DOEpatents

    Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W

    2013-10-08

Systems and methods for charged particle detection, including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data. The probability distribution of charged particle scattering is determined using a statistical multiple scattering model, and a substantially maximum likelihood estimate of the object volume scattering density is obtained using an expectation maximization (ML/EM) algorithm. The presence and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles, or cargo. The method can be implemented using a computer program which is executable on a computer.

  3. Hybrid asymptotic-numerical approach for estimating first-passage-time densities of the two-dimensional narrow capture problem.

    PubMed

    Lindsay, A E; Spoonmore, R T; Tzou, J C

    2016-10-01

A hybrid asymptotic-numerical method is presented for obtaining an asymptotic estimate for the full probability distribution of capture times of a random walker by multiple small traps located inside a bounded two-dimensional domain with a reflecting boundary. As motivation for this study, we calculate the variance in the capture time of a random walker by a single interior trap and determine this quantity to be comparable in magnitude to the mean. This implies that the mean is not necessarily reflective of typical capture times and that the full density must be determined. To solve the underlying diffusion equation, the method of Laplace transforms is used to obtain an elliptic problem of modified Helmholtz type. In the limit of vanishing trap sizes, each trap is represented as a Dirac point source that permits the solution of the transform equation to be represented as a superposition of Helmholtz Green's functions. Using this solution, we construct asymptotic short-time solutions of the first-passage-time density, which capture the peaks associated with rapid capture by the absorbing traps. When numerical evaluation of the Helmholtz Green's function is employed, followed by numerical inversion of the Laplace transform, the method reproduces the density for larger times. We demonstrate the accuracy of our solution technique with a comparison to statistics obtained from a time-dependent solution of the diffusion equation and discrete particle simulations. In particular, we demonstrate that the method is capable of capturing the multimodal behavior in the capture time density that arises when the traps are strategically arranged. The hybrid method presented can be applied to scenarios involving both arbitrary domains and trap shapes.

  4. Analytical methods for measuring the parameters of interstellar gas using methanol observations

    NASA Astrophysics Data System (ADS)

    Kalenskii, S. V.; Kurtz, S.

    2016-08-01

The excitation of methanol in the absence of external radiation is analyzed, and LTE methods for probing interstellar gas are considered. It is shown that rotation diagrams correctly estimate the gas kinetic temperature only if they are constructed using lines whose upper levels are located in the same K-ladders, such as the J0-J-1 E lines at 157 GHz, the J1-J0 E lines at 165 GHz, and the J2-J1 E lines at 25 GHz. The gas density must be no less than 10^7 cm^-3. Rotation diagrams constructed from lines with different K values for their upper levels (e.g., 2K-1K at 96 GHz, 3K-2K at 145 GHz, 5K-4K at 241 GHz) significantly underestimate the temperature, but enable estimation of the density. In addition, diagrams based on the 2K-1K lines can be used to estimate the methanol column density within a factor of about two to five. It is suggested that rotation diagrams should be used in the following manner. First, two rotation diagrams should be constructed, one from the lines at 96, 145, or 241 GHz, and another from the lines at 157, 165, or 25 GHz. The former diagram is used to estimate the gas density. If the density is about 10^7 cm^-3 or higher, the latter diagram reproduces the temperature fairly well. If the density is around 10^6 cm^-3, the temperature obtained from the latter diagram should be multiplied by a factor of 1.5-2. If the density is about 10^5 cm^-3 or lower, then the latter diagram yields a temperature that is lower than the kinetic temperature by a factor of three or more, and should be used only as a lower limit for the kinetic temperature. The errors in the methanol column density determined from the integrated intensity of a single line can be more than an order of magnitude, even when the gas temperature is well known. However, if the J0-(J-1)0 E lines, as well as the J1-(J-1)1 A+ or A- lines are used, the relative error in the column density is no more than a factor of a few.
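
    The rotation-diagram temperature estimate discussed above amounts to a straight-line fit of ln(N_u/g_u) against upper-level energy, with slope -1/T_rot. The sketch below shows that generic technique only (it is not the authors' code, and the level data are synthetic):

    ```python
    # Generic rotation-diagram sketch (illustrative, synthetic data):
    # fit ln(N_u/g_u) versus E_u/k with a straight line; the slope is -1/T_rot.

    def rotation_temperature(e_upper_k, ln_nu_gu):
        """Least-squares line fit; returns (T_rot, intercept)."""
        n = len(e_upper_k)
        mx = sum(e_upper_k) / n
        my = sum(ln_nu_gu) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(e_upper_k, ln_nu_gu))
        sxx = sum((x - mx) ** 2 for x in e_upper_k)
        slope = sxy / sxx
        return -1.0 / slope, my - slope * mx

    # Synthetic levels at T = 40 K: ln(N_u/g_u) = 20 - E_u/40
    e_u = [10.0, 20.0, 35.0, 50.0]
    lnn = [20.0 - e / 40.0 for e in e_u]
    t_rot, _ = rotation_temperature(e_u, lnn)   # recovers 40 K
    ```

    As the abstract stresses, such a fit only returns the kinetic temperature when the lines share a K-ladder and the density is high enough for LTE.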

  5. Postmortem validation of breast density using dual-energy mammography

    PubMed Central

    Molloi, Sabee; Ducote, Justin L.; Ding, Huanjun; Feig, Stephen A.

    2014-01-01

    Purpose: Mammographic density has been shown to be an indicator of breast cancer risk and also reduces the sensitivity of screening mammography. Currently, there is no accepted standard for measuring breast density. Dual energy mammography has been proposed as a technique for accurate measurement of breast density. The purpose of this study is to validate its accuracy in postmortem breasts and compare it with other existing techniques. Methods: Forty postmortem breasts were imaged using a dual energy mammography system. Glandular and adipose equivalent phantoms of uniform thickness were used to calibrate a dual energy basis decomposition algorithm. Dual energy decomposition was applied after scatter correction to calculate breast density. Breast density was also estimated using radiologist reader assessment, standard histogram thresholding and a fuzzy C-mean algorithm. Chemical analysis was used as the reference standard to assess the accuracy of different techniques to measure breast composition. Results: Breast density measurements using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean algorithm, and dual energy were in good agreement with the measured fibroglandular volume fraction using chemical analysis. The standard error estimates using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean, and dual energy were 9.9%, 8.6%, 7.2%, and 4.7%, respectively. Conclusions: The results indicate that dual energy mammography can be used to accurately measure breast density. The variability in breast density estimation using dual energy mammography was lower than reader assessment rankings, standard histogram thresholding, and fuzzy C-mean algorithm. Improved quantification of breast density is expected to further enhance its utility as a risk factor for breast cancer. PMID:25086548
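
    The basis-decomposition step described above can be illustrated as a per-pixel 2x2 linear solve: each log-attenuation measurement is a linear mix of glandular and adipose thicknesses, so two energies determine both. The coefficients below are hypothetical placeholders, not the phantom calibration from the study:

    ```python
    # Hedged sketch of dual-energy basis decomposition (illustrative coefficients).

    def decompose(l_low, l_high, mu_g, mu_a):
        """Solve l_low = mu_g[0]*t_g + mu_a[0]*t_a and
        l_high = mu_g[1]*t_g + mu_a[1]*t_a for (t_g, t_a).
        mu_g, mu_a are (low, high) attenuation coefficient pairs,
        assumed known from phantom calibration."""
        det = mu_g[0] * mu_a[1] - mu_g[1] * mu_a[0]
        t_g = (l_low * mu_a[1] - l_high * mu_a[0]) / det
        t_a = (mu_g[0] * l_high - mu_g[1] * l_low) / det
        return t_g, t_a

    # Hypothetical coefficients (cm^-1); pixel with 2 cm glandular, 3 cm adipose
    mu_g, mu_a = (0.80, 0.50), (0.45, 0.30)
    l_low = mu_g[0] * 2.0 + mu_a[0] * 3.0
    l_high = mu_g[1] * 2.0 + mu_a[1] * 3.0
    t_g, t_a = decompose(l_low, l_high, mu_g, mu_a)
    density = t_g / (t_g + t_a)   # fibroglandular fraction by thickness
    ```

    The study additionally applies scatter correction before decomposition; that step is omitted here.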

  6. Postmortem validation of breast density using dual-energy mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molloi, Sabee, E-mail: symolloi@uci.edu; Ducote, Justin L.; Ding, Huanjun

    2014-08-15

Purpose: Mammographic density has been shown to be an indicator of breast cancer risk and also reduces the sensitivity of screening mammography. Currently, there is no accepted standard for measuring breast density. Dual energy mammography has been proposed as a technique for accurate measurement of breast density. The purpose of this study is to validate its accuracy in postmortem breasts and compare it with other existing techniques. Methods: Forty postmortem breasts were imaged using a dual energy mammography system. Glandular and adipose equivalent phantoms of uniform thickness were used to calibrate a dual energy basis decomposition algorithm. Dual energy decomposition was applied after scatter correction to calculate breast density. Breast density was also estimated using radiologist reader assessment, standard histogram thresholding and a fuzzy C-mean algorithm. Chemical analysis was used as the reference standard to assess the accuracy of different techniques to measure breast composition. Results: Breast density measurements using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean algorithm, and dual energy were in good agreement with the measured fibroglandular volume fraction using chemical analysis. The standard error estimates using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean, and dual energy were 9.9%, 8.6%, 7.2%, and 4.7%, respectively. Conclusions: The results indicate that dual energy mammography can be used to accurately measure breast density. The variability in breast density estimation using dual energy mammography was lower than reader assessment rankings, standard histogram thresholding, and fuzzy C-mean algorithm. Improved quantification of breast density is expected to further enhance its utility as a risk factor for breast cancer.

  7. High-resolution proxies for wood density variations in Terminalia superba

    PubMed Central

    De Ridder, Maaike; Van den Bulcke, Jan; Vansteenkiste, Dries; Van Loo, Denis; Dierick, Manuel; Masschaele, Bert; De Witte, Yoni; Mannes, David; Lehmann, Eberhard; Beeckman, Hans; Van Hoorebeke, Luc; Van Acker, Joris

    2011-01-01

Background and Aims Density is a crucial variable in forest and wood science and is evaluated by a multitude of methods. Direct gravimetric methods are mostly destructive and time-consuming. Therefore, faster and semi- to non-destructive indirect methods have been developed. Methods Profiles of wood density variations with a resolution of approx. 50 µm were derived from one-dimensional resistance drillings, two-dimensional neutron scans, and three-dimensional neutron and X-ray scans. All methods were applied on Terminalia superba Engl. & Diels, an African pioneer species which sometimes exhibits a brown heart (limba noir). Key Results The use of X-ray tomography combined with a reference material permitted direct estimates of wood density. These X-ray-derived densities overestimated gravimetrically determined densities non-significantly and showed high correlation (linear regression, R2 = 0.995). When comparing X-ray densities with the attenuation coefficients of neutron scans and the amplitude of drilling resistance, a significant linear relation was found with the neutron attenuation coefficient (R2 = 0.986) yet a weak relation with drilling resistance (R2 = 0.243). When density patterns are compared, all three methods are capable of revealing the same trends. Differences are mainly due to the orientation of tree rings and the different characteristics of the indirect methods. Conclusions High-resolution X-ray computed tomography is a promising technique for research on wood cores and will be explored further on other temperate and tropical species. Further study on limba noir is necessary to reveal the causes of density variations and to determine how resistance drillings can be further refined. PMID:21131386

  8. Gaussian windows: A tool for exploring multivariate data

    NASA Technical Reports Server (NTRS)

    Jaeckel, Louis A.

    1990-01-01

    Presented here is a method for interactively exploring a large set of quantitative multivariate data, in order to estimate the shape of the underlying density function. It is assumed that the density function is more or less smooth, but no other specific assumptions are made concerning its structure. The local structure of the data in a given region may be examined by viewing the data through a Gaussian window, whose location and shape are chosen by the user. A Gaussian window is defined by giving each data point a weight based on a multivariate Gaussian function. The weighted sample mean and sample covariance matrix are then computed, using the weights attached to the data points. These quantities are used to compute an estimate of the shape of the density function in the window region. The local structure of the data is described by a method similar to the method of principal components. By taking many such local views of the data, we can form an idea of the structure of the data set. The method is applicable in any number of dimensions. The method can be used to find and describe simple structural features such as peaks, valleys, and saddle points in the density function, and also extended structures in higher dimensions. With some practice, we can apply our geometrical intuition to these structural features in any number of dimensions, so that we can think about and describe the structure of the data. Since the computations involved are relatively simple, the method can easily be implemented on a small computer.
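
    A one-dimensional sketch of the windowing idea above (the paper's method is multivariate and goes on to form a local density-shape estimate, which is omitted here): each data point gets a Gaussian weight, and the weighted mean and variance summarize the local structure inside the window.

    ```python
    # 1-D "Gaussian window": weighted sample mean and variance of the data,
    # with weights from a Gaussian centered on the chosen window location.
    import math

    def gaussian_window_stats(data, center, width):
        weights = [math.exp(-0.5 * ((x - center) / width) ** 2) for x in data]
        wsum = sum(weights)
        mean = sum(w * x for w, x in zip(weights, data)) / wsum
        var = sum(w * (x - mean) ** 2 for w, x in zip(weights, data)) / wsum
        return mean, var

    data = [0.1, 0.2, 0.25, 0.9, 1.0, 1.1]
    mean, var = gaussian_window_stats(data, center=1.0, width=0.2)
    # the window centered at 1.0 mostly "sees" the cluster near 1.0
    ```

    In the multivariate case the same weights feed a weighted covariance matrix, whose principal components describe the local shape of the density.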

  9. State of charge monitoring of vanadium redox flow batteries using half cell potentials and electrolyte density

    NASA Astrophysics Data System (ADS)

    Ressel, Simon; Bill, Florian; Holtz, Lucas; Janshen, Niklas; Chica, Antonio; Flower, Thomas; Weidlich, Claudia; Struckmann, Thorsten

    2018-02-01

    The operation of vanadium redox flow batteries requires reliable in situ state of charge (SOC) monitoring. In this study, two SOC estimation approaches for the negative half cell are investigated. First, in situ open circuit potential measurements are combined with Coulomb counting in a one-step calibration of SOC and Nernst potential which doesn't need additional reference SOCs. In-sample and out-of-sample SOCs are estimated and analyzed, estimation errors ≤ 0.04 are obtained. In the second approach, temperature corrected in situ electrolyte density measurements are used for the first time in vanadium redox flow batteries for SOC estimation. In-sample and out-of-sample SOC estimation errors ≤ 0.04 demonstrate the feasibility of this approach. Both methods allow recalibration during battery operation. The actual capacity obtained from SOC calibration can be used in a state of health model.

  10. Estimation of beam material random field properties via sensitivity-based model updating using experimental frequency response functions

    NASA Astrophysics Data System (ADS)

    Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.

    2018-03-01

    Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.

  11. Adaptive channel estimation for soft decision decoding over non-Gaussian optical channel

    NASA Astrophysics Data System (ADS)

    Xiang, Jing-song; Miao, Tao-tao; Huang, Sheng; Liu, Huan-lin

    2016-10-01

An adaptive a priori log-likelihood ratio (LLR) estimation method is proposed for non-Gaussian channels in intensity modulation/direct detection (IM/DD) optical communication systems. Using a nonparametric histogram and weighted least-squares linear fitting in the tail regions, the LLR is estimated and used for the soft decision decoding of low-density parity-check (LDPC) codes. This method adapts well to the three main kinds of IM/DD optical channel, i.e., the chi-square channel, the Webb-Gaussian channel, and the additive white Gaussian noise (AWGN) channel. The performance penalty of channel estimation is negligible.
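
    The histogram part of such an LLR estimate can be sketched as below; the paper's additional weighted least-squares fit of the histogram tails is omitted, and the Gaussian test channels are my illustration, not the paper's chi-square or Webb-Gaussian models:

    ```python
    # Nonparametric LLR table (illustrative): histogram the received samples
    # conditioned on the transmitted bit, then take the log ratio of the two
    # empirical densities per bin.
    import math, random

    def llr_table(samples0, samples1, lo, hi, nbins):
        def hist(samples):
            counts = [1e-9] * nbins          # tiny floor avoids log(0)
            for s in samples:
                b = min(nbins - 1, max(0, int((s - lo) / (hi - lo) * nbins)))
                counts[b] += 1
            total = sum(counts)
            return [c / total for c in counts]
        p0, p1 = hist(samples0), hist(samples1)
        return [math.log(a / b) for a, b in zip(p0, p1)]

    random.seed(1)
    s0 = [random.gauss(0.0, 1.0) for _ in range(20000)]   # bit 0: low intensity
    s1 = [random.gauss(2.0, 1.0) for _ in range(20000)]   # bit 1: high intensity
    table = llr_table(s0, s1, -4.0, 6.0, 25)
    # low-intensity bins favor bit 0 (positive LLR), high-intensity bins bit 1
    ```

    The tail fit matters in practice because empty histogram bins otherwise produce unreliable LLRs exactly where the decoder needs them most.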

  12. Indirect measurement of lung density and air volume from electrical impedance tomography (EIT) data.

    PubMed

    Nebuya, Satoru; Mills, Gary H; Milnes, Peter; Brown, Brian H

    2011-12-01

This paper describes a method for estimating lung density, air volume and changes in fluid content from a non-invasive measurement of the electrical resistivity of the lungs. Resistivity in Ω m was found by fitting measured electrical impedance tomography (EIT) data to a finite difference model of the thorax. Lung density was determined by comparing the resistivity of the lungs, measured at a relatively high frequency, with values predicted from a published model of lung structure. Lung air volume can then be calculated if total lung weight is also known. Temporal changes in lung fluid content will produce proportional changes in lung density. The method was implemented on EIT data, collected using eight electrodes placed in a single plane around the thorax, from 46 adult male subjects and 36 adult female subjects. Mean lung densities (±SD) of 246 ± 67 and 239 ± 64 kg m^-3, respectively, were obtained. In seven adult male subjects, estimates of residual volume, functional residual capacity and vital capacity of 1.68 ± 0.30, 3.42 ± 0.49 and 4.40 ± 0.53 l, respectively, were obtained. Sources of error are discussed. It is concluded that absolute differences in lung density of about 30% and changes over time of less than 30% should be detectable using the current technology in normal subjects. These changes would result from an approximately 300 ml increase in lung fluid. The method proposed could be used for non-invasive monitoring of total lung air and fluid content in normal subjects but needs to be assessed in patients with lung disease.

  13. Euphausiid distribution along the Western Antarctic Peninsula—Part A: Development of robust multi-frequency acoustic techniques to identify euphausiid aggregations and quantify euphausiid size, abundance, and biomass

    NASA Astrophysics Data System (ADS)

    Lawson, Gareth L.; Wiebe, Peter H.; Stanton, Timothy K.; Ashjian, Carin J.

    2008-02-01

Methods were refined and tested for identifying the aggregations of Antarctic euphausiids (Euphausia spp.) and then estimating euphausiid size, abundance, and biomass, based on multi-frequency acoustic survey data. A threshold level of volume backscattering strength for distinguishing euphausiid aggregations from other zooplankton was derived on the basis of published measurements of euphausiid visual acuity and estimates of the minimum density of animals over which an individual can maintain visual contact with its nearest neighbor. Differences in mean volume backscattering strength at 120 and 43 kHz further served to distinguish euphausiids from other sources of scattering. An inversion method was then developed to estimate simultaneously the mean length and density of euphausiids in these acoustically identified aggregations based on measurements of mean volume backscattering strength at four frequencies (43, 120, 200, and 420 kHz). The methods were tested at certain locations within an acoustically surveyed continental shelf region in and around Marguerite Bay, west of the Antarctic Peninsula, where independent evidence was also available from net and video systems. Inversion results at these test sites were similar to net samples for estimated length, but acoustic estimates of euphausiid density exceeded those from nets by one to two orders of magnitude, likely due primarily to avoidance and to a lesser extent to differences in the volumes sampled by the two systems. In a companion study, these methods were applied to the full acoustic survey data in order to examine the distribution of euphausiids in relation to aspects of the physical and biological environment [Lawson, G.L., Wiebe, P.H., Ashjian, C.J., Stanton, T.K., 2008. Euphausiid distribution along the Western Antarctic Peninsula—Part B: Distribution of euphausiid aggregations and biomass, and associations with environmental features. Deep-Sea Research II, this issue, doi:10.1016/j.dsr2.2007.11.014].

  14. Stochastic differential equation (SDE) model of opening gold share price of bursa saham malaysia

    NASA Astrophysics Data System (ADS)

    Hussin, F. N.; Rahman, H. A.; Bahar, A.

    2017-09-01

The Black-Scholes option pricing model is one of the most recognized stochastic differential equation models in mathematical finance. Two parameter estimation methods have been utilized for the geometric Brownian motion (GBM) model: the historical method and the discrete method. The historical method is a statistical method which uses the independence and normality of logarithmic returns, giving the simplest parameter estimates. Meanwhile, the discrete method uses the transition density function of the lognormal diffusion process, derived via the maximum likelihood method. These two methods are used to estimate parameters from Malaysian gold share price data, namely Financial Times and Stock Exchange (FTSE) Bursa Malaysia Emas and Financial Times and Stock Exchange (FTSE) Bursa Malaysia Emas Shariah. Modelling of gold share prices is essential, since gold price fluctuations affect the worldwide economy, including Malaysia. It is found that the discrete method gives better parameter estimates than the historical method, as it yields the smallest Root Mean Square Error (RMSE).
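
    A hedged sketch of the historical method mentioned above (synthetic prices, not the FTSE Bursa Malaysia series; the drift convention for dS = μS dt + σS dW is an assumption on my part): log returns of a GBM are i.i.d. normal, so their sample mean and variance give the drift and volatility per time step.

    ```python
    # "Historical" GBM parameter estimation from a price series (illustrative).
    import math

    def gbm_historical(prices, dt=1.0):
        rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
        n = len(rets)
        mean = sum(rets) / n
        var = sum((r - mean) ** 2 for r in rets) / (n - 1)
        sigma = math.sqrt(var / dt)                 # volatility per unit time
        mu = mean / dt + 0.5 * sigma ** 2           # drift of dS = mu*S dt + sigma*S dW
        return mu, sigma

    prices = [100.0, 101.0, 99.5, 102.0, 103.5, 102.8]   # synthetic daily closes
    mu, sigma = gbm_historical(prices)
    ```

    The discrete (maximum-likelihood) method of the paper instead maximizes the lognormal transition density; for equally spaced data the two approaches give closely related estimates.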

  15. A cost-efficient method to assess carbon stocks in tropical peat soil

    NASA Astrophysics Data System (ADS)

    Warren, M. W.; Kauffman, J. B.; Murdiyarso, D.; Anshari, G.; Hergoualc'h, K.; Kurnianto, S.; Purbopuspito, J.; Gusmayanti, E.; Afifudin, M.; Rahajoe, J.; Alhamd, L.; Limin, S.; Iswandi, A.

    2012-11-01

Estimation of belowground carbon stocks in tropical wetland forests requires funding for laboratory analyses and suitable facilities, which are often lacking in developing nations where most tropical wetlands are found. It is therefore beneficial to develop simple analytical tools to assist belowground carbon estimation where financial and technical limitations are common. Here we use published and original data to describe soil carbon density (kg C m^-3; Cd) as a function of bulk density (g cm^-3; Bd), which can be used to rapidly estimate belowground carbon storage using Bd measurements only. Predicted carbon densities and stocks are compared with those obtained from direct carbon analysis for ten peat swamp forest stands in three national parks of Indonesia. Analysis of soil carbon density and bulk density from the literature indicated a strong linear relationship (Cd = Bd × 495.14 + 5.41, R2 = 0.93, n = 151) for soils with organic C content > 40%. As organic C content decreases, the relationship between Cd and Bd becomes less predictable as soil texture becomes an important determinant of Cd. The equation predicted belowground C stocks to within 0.92% to 9.57% of observed values. Average bulk density of collected peat samples was 0.127 g cm^-3, which is in the upper range of previous reports for Southeast Asian peatlands. When original data were included, the revised equation Cd = Bd × 468.76 + 5.82, with R2 = 0.95 and n = 712, was slightly below the lower 95% confidence interval of the original equation, and tended to decrease Cd estimates. We recommend this last equation for rapid estimation of soil C stocks for well-developed peat soils where C content > 40%.
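
    The recommended equation can be applied directly; a minimal helper using the coefficients quoted in the abstract (valid only for well-developed peats with organic C content > 40%):

    ```python
    # Soil C density (kg C m^-3) from bulk density (g cm^-3), per the
    # revised equation quoted in the abstract: Cd = Bd * 468.76 + 5.82.
    def soil_carbon_density(bulk_density_g_cm3):
        return bulk_density_g_cm3 * 468.76 + 5.82

    # average reported bulk density of the collected peat samples
    cd = soil_carbon_density(0.127)   # ~65.4 kg C m^-3
    ```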

  16. Estimating loop length from CryoEM images at medium resolutions.

    PubMed

    McKnight, Andrew; Si, Dong; Al Nasr, Kamal; Chernikov, Andrey; Chrisochoides, Nikos; He, Jing

    2013-01-01

De novo protein modeling approaches utilize 3-dimensional (3D) images derived from electron cryomicroscopy (CryoEM) experiments. The skeleton connecting two secondary structures such as α-helices represents the loop in the 3D image. The accuracy of the skeleton and of the detected secondary structures is critical in de novo modeling. It is important to measure the length along the skeleton accurately, since the length can be used as a constraint in modeling the protein. We have developed a novel computational geometric approach to derive a simplified curve in order to estimate the loop length along the skeleton. The method was tested using fifty simulated density images of helix-loop-helix segments of atomic structures and eighteen experimentally derived density maps from the Electron Microscopy Data Bank (EMDB). The test using simulated density maps shows that it is possible to estimate within 0.5 Å of the expected length for 48 of the 50 cases. The experiments involving the eighteen experimentally derived CryoEM images show that twelve cases have error within 2 Å. The tests using both simulated and experimentally derived images show that our proposed method can estimate the loop length along the skeleton if the secondary structure elements, such as α-helices, are detected accurately and there is a continuous skeleton linking the α-helices.
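
    Once a simplified curve is available as an ordered list of 3D points, the length measurement itself reduces to summing segment lengths. A minimal sketch (the curve-simplification step, which is the paper's actual contribution, is not shown):

    ```python
    # Length of a polyline skeleton given as ordered 3D points (Å).
    import math

    def polyline_length(points):
        return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

    skeleton = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 2.0, 0.0), (1.0, 2.0, 2.0)]
    length = polyline_length(skeleton)   # 1 + 2 + 2 = 5.0
    ```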

  17. SOMKE: kernel density estimation over data streams by sequences of self-organizing maps.

    PubMed

    Cao, Yuan; He, Haibo; Man, Hong

    2012-08-01

In this paper, we propose a novel method, SOMKE, for kernel density estimation (KDE) over data streams based on sequences of self-organizing maps (SOMs). In many stream data mining applications, traditional KDE methods are infeasible because of their high computational cost, processing time, and memory requirements. To reduce the time and space complexity, we propose a SOM structure to obtain well-defined data clusters that estimate the underlying probability distributions of incoming data streams. The main idea is to build a series of SOMs over the data streams via two operations, namely creating and merging the SOM sequences. The creation phase produces SOM sequence entries for windows of the data, which capture clustering information of the incoming data streams. The size of the SOM sequences can be further reduced by combining consecutive entries in the sequence based on the Kullback-Leibler divergence. Finally, the probability density functions over arbitrary time periods along the data streams can be estimated using such SOM sequences. We compare SOMKE with two other KDE methods for data streams, the M-kernel approach and the cluster kernel approach, in terms of accuracy and processing time for various stationary data streams. Furthermore, we also investigate the use of SOMKE over nonstationary (evolving) data streams, including a synthetic nonstationary data stream, a real-world financial data stream and a group of network traffic data streams. The simulation results illustrate the effectiveness and efficiency of the proposed approach.
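
    The final estimation step of such an approach can be sketched as a weighted kernel mixture over prototypes rather than over all raw samples; the assumption that each prototype carries a hit count is my illustration, and the SOM training and divergence-based merging are not shown:

    ```python
    # Weighted Gaussian KDE over prototype locations (illustrative sketch).
    import math

    def weighted_kde(x, prototypes, counts, bandwidth):
        total = sum(counts)
        norm = bandwidth * math.sqrt(2.0 * math.pi)
        return sum(
            c * math.exp(-0.5 * ((x - p) / bandwidth) ** 2) / norm
            for p, c in zip(prototypes, counts)
        ) / total

    prototypes = [0.0, 1.0, 2.0]
    counts = [100, 300, 100]          # stream points absorbed per prototype
    d = weighted_kde(1.0, prototypes, counts, bandwidth=0.5)
    ```

    The memory saving is the point: the stream may contain millions of samples, but the density is evaluated from a handful of prototypes.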

  18. When bulk density methods matter: Implications for estimating soil organic carbon pools in rocky soils

    USDA-ARS?s Scientific Manuscript database

    Resolving uncertainty in the carbon cycle is paramount to refining climate predictions. Soil organic carbon (SOC) is a major component of terrestrial C pools, and accuracy of SOC estimates are only as good as the measurements and assumptions used to obtain them. Dryland soils account for a substanti...

  19. Assessing housing growth when census boundaries change

    Treesearch

    Alexandra D. Syphard; Susan I. Stewart; Jason McKeefry; Roger B. Hammer; Jeremy S. Fried; Sherry Holcomb; Volker C. Radeloff

    2009-01-01

    The US Census provides the primary source of spatially explicit social data, but changing block boundaries complicate analyses of housing growth over time. We compared procedures for reconciling housing density data between 1990 and 2000 census block boundaries in order to assess the sensitivity of analytical methods to estimates of housing growth in Oregon. Estimates...

  20. Unbiased estimators for spatial distribution functions of classical fluids

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.; Jarzynski, Christopher

    2005-01-01

We use a statistical-mechanical identity, closely related to the familiar virial theorem, to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.

  1. Estimation and simulation of multi-beam sonar noise.

    PubMed

    Holmin, Arne Johannes; Korneliussen, Rolf J; Tjøstheim, Dag

    2016-02-01

Methods for the estimation and modeling of noise present in multi-beam sonar data, including the magnitude, probability distribution, and spatial correlation of the noise, are developed. The methods consider individual acoustic samples and facilitate compensation of highly localized noise as well as subtraction of noise estimates averaged over time. The modeled noise is included in an existing multi-beam sonar simulation model [Holmin, Handegard, Korneliussen, and Tjøstheim, J. Acoust. Soc. Am. 132, 3720-3734 (2012)], resulting in an improved model that can be used to strengthen interpretation of data collected in situ at any signal-to-noise ratio. Two experiments from the former study, in which multi-beam sonar data of herring schools were simulated, are repeated with the inclusion of noise. These experiments demonstrate (1) the potentially large effect of changes in fish orientation on the backscatter from a school, and (2) the estimation of behavioral characteristics such as the polarization and packing density of fish schools. The latter is achieved by comparing real data with simulated data for different polarizations and packing densities.
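
    An illustrative version of the time-averaged noise subtraction mentioned above (real pings are beam-formed acoustic samples; here each "ping" is just a list of numbers): average pings assumed to contain no targets to estimate the background, then subtract that estimate from a measured ping, clamping at zero.

    ```python
    # Time-averaged background noise estimate and per-sample subtraction.
    def average_noise(noise_pings):
        n = len(noise_pings)
        return [sum(samples) / n for samples in zip(*noise_pings)]

    def subtract_noise(ping, noise_estimate):
        return [max(0.0, s - n) for s, n in zip(ping, noise_estimate)]

    noise_pings = [[1.0, 2.0, 1.0], [1.2, 1.8, 0.8], [0.8, 2.2, 1.2]]
    ping = [1.5, 7.0, 1.0]                     # target echo in the middle sample
    cleaned = subtract_noise(ping, average_noise(noise_pings))
    ```

    The paper's methods go further, also compensating highly localized noise sample by sample rather than only subtracting a time average.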

  2. Estimation of Dry Fracture Weakness, Porosity, and Fluid Modulus Using Observable Seismic Reflection Data in a Gas-Bearing Reservoir

    NASA Astrophysics Data System (ADS)

    Chen, Huaizhen; Zhang, Guangzhi

    2017-05-01

    Fracture detection and fluid identification are important tasks for a fractured reservoir characterization. Our goal is to demonstrate a direct approach to utilize azimuthal seismic data to estimate fluid bulk modulus, porosity, and dry fracture weaknesses, which decreases the uncertainty of fluid identification. Combining Gassmann's (Vier. der Natur. Gesellschaft Zürich 96:1-23, 1951) equations and linear-slip model, we first establish new simplified expressions of stiffness parameters for a gas-bearing saturated fractured rock with low porosity and small fracture density, and then we derive a novel PP-wave reflection coefficient in terms of dry background rock properties (P-wave and S-wave moduli, and density), fracture (dry fracture weaknesses), porosity, and fluid (fluid bulk modulus). A Bayesian Markov chain Monte Carlo nonlinear inversion method is proposed to estimate fluid bulk modulus, porosity, and fracture weaknesses directly from azimuthal seismic data. The inversion method yields reasonable estimates in the case of synthetic data containing a moderate noise and stable results on real data.

  3. Geometric characterization and simulation of planar layered elastomeric fibrous biomaterials

    PubMed Central

    Carleton, James B.; D'Amore, Antonio; Feaver, Kristen R.; Rodin, Gregory J.; Sacks, Michael S.

    2014-01-01

Many important biomaterials are composed of multiple layers of networked fibers. While there is a growing interest in modeling and simulation of the mechanical response of these biomaterials, a theoretical foundation for such simulations has yet to be firmly established. Moreover, correctly identifying and matching key geometric features is a critically important first step for performing reliable mechanical simulations. The present work addresses these issues in two ways. First, using methods of geometric probability we develop theoretical estimates for the mean linear and areal fiber intersection densities for two-dimensional fibrous networks. These densities are expressed in terms of the fiber density and the orientation distribution function, both of which are relatively easy-to-measure properties. Second, we develop a random walk algorithm for geometric simulation of two-dimensional fibrous networks which can accurately reproduce the prescribed fiber density and orientation distribution function. Furthermore, the linear and areal fiber intersection densities obtained with the algorithm are in agreement with the theoretical estimates. Both theoretical and computational results are compared with those obtained by post-processing of SEM images of actual scaffolds. These comparisons reveal difficulties inherent to resolving fine details of multilayered fibrous networks. The methods presented herein provide a rational means to define and generate key geometric features from experimentally measured or prescribed scaffold structural data. PMID:25311685
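As an illustration of the kind of geometric-probability estimate involved, the classical isotropic-segment result (a textbook formula, not one taken from the paper) says that N fibers of length L with uniformly random centers and orientations in a unit area intersect, on average, C(N,2)·(2/π)·L² times. A direct simulation under exactly those assumptions agrees closely:

```python
import numpy as np

rng = np.random.default_rng(6)
N, L = 400, 0.05                         # fiber count and length (assumed)
c = rng.uniform(0.0, 1.0, (N, 2))        # fiber centers in the unit square
th = rng.uniform(0.0, np.pi, N)          # isotropic orientations
d = 0.5 * L * np.c_[np.cos(th), np.sin(th)]
p, q = c - d, c + d                      # segment endpoints

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def seg_intersect(p1, q1, p2, q2):
    """Proper intersection test for two segments via orientation signs."""
    return (cross(p1, q1, p2) * cross(p1, q1, q2) < 0 and
            cross(p2, q2, p1) * cross(p2, q2, q1) < 0)

hits = sum(seg_intersect(p[i], q[i], p[j], q[j])
           for i in range(N) for j in range(i + 1, N)
           if np.linalg.norm(c[i] - c[j]) < L)   # distant pairs cannot cross
expected = N * (N - 1) / 2 * (2.0 / np.pi) * L ** 2
ratio = hits / expected                  # near 1, minus small boundary effects
```

The simulated count falls slightly below the theoretical value because fibers near the boundary of the unit square have fewer neighbors.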

  4. Estimating the mass density in the thermosphere with the CYGNSS mission.

    NASA Astrophysics Data System (ADS)

    Bussy-Virat, C.; Ridley, A. J.

    2017-12-01

The Cyclone Global Navigation Satellite System (CYGNSS) mission, launched in December 2016, is a constellation of eight satellites orbiting the Earth at 510 km. Its goal is to improve our understanding of rapid hurricane wind intensification. Each CYGNSS satellite uses GPS signals that are reflected off of the ocean's surface to measure the wind. The GPS can also be used to specify the orbit of the satellites quite precisely. The motion of satellites in low Earth orbit is greatly influenced by the neutral density of the surrounding atmosphere through drag. Modeling the neutral density in the upper atmosphere is a major challenge as it involves a comprehensive understanding of the complex coupling between the thermosphere and the ionosphere, the magnetosphere, and the Sun. This is why thermospheric models (such as NRLMSIS, Jacchia-Bowman, HASDM, GITM, or TIEGCM) can only approximate it with a limited accuracy, which decreases during strong geomagnetic events. Because atmospheric drag directly depends on the thermospheric density, it can be estimated by applying filtering methods to the trajectories of the CYGNSS observatories. The CYGNSS mission can provide unique results since the constellation of eight satellites enables multiple measurements of the same region at close intervals (approximately 10 minutes), which can be used to detect short time scale features. Moreover, the CYGNSS spacecraft can be pitched from a low to high drag attitude configuration, which can be used in the filtering methods to improve the accuracy of the atmospheric density estimation. The methodology and the results of this approach applied to the CYGNSS mission will be presented.
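Since drag acceleration scales directly with neutral density, even a single along-track deceleration measurement constrains density. A minimal sketch of that relationship (not the CYGNSS filtering pipeline; the mass, drag coefficient, area, and deceleration below are assumed example values):

```python
def density_from_drag(decel, mass, cd, area, speed):
    """Invert the drag law a = rho * Cd * A * v^2 / (2 m) for rho."""
    return 2.0 * mass * decel / (cd * area * speed ** 2)

# Assumed example values for a small satellite near 510 km altitude.
rho = density_from_drag(decel=1.0e-6,  # m/s^2, observed along-track deceleration
                        mass=30.0,     # kg (assumed)
                        cd=2.2,        # drag coefficient (assumed)
                        area=0.5,      # m^2 cross-sectional area (assumed)
                        speed=7600.0)  # m/s orbital speed
```

For these inputs `rho` comes out near 1e-12 kg/m^3, a plausible order of magnitude for the thermosphere at this altitude; the filtering approach in the abstract effectively refines this inversion over many orbit arcs.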

  5. Investigating the ability of solar coronal shocks to accelerate solar energetic particles

    NASA Astrophysics Data System (ADS)

    Kwon, R. Y.; Vourlidas, A.

    2017-12-01

    We estimate the density compression ratio of shocks associated with coronal mass ejections (CMEs) and investigate whether they can accelerate solar energetic particles (SEPs). Using remote-sensing, multi-viewpoint coronagraphic observations, we have developed a method to extract the sheath electron density profiles along the shock normal and estimate the density compression ratio. Our method uses the ellipsoid model to derive the 3D geometry of the sheaths, including the line-of-sight (LOS) depth. The sheath density profiles along the shock normal are modeled with double-Gaussian functions, and the modeled densities are integrated along the LOSs to be compared with the observed brightness in STEREO COR2-Ahead. The upstream densities are derived from either the pB-inversion of the brightness in a pre-event image or an empirical model. We analyze two fast halo CMEs observed on 2011 March 7 and 2014 February 25 that are associated with SEP events detected by multiple spacecraft located over a broad range of heliolongitudes. We find that the density compression peaks around the CME nose and decreases at larger position angles. Interestingly, we find that the supercritical region extends over a large area of the shock and lasts longer (several tens of minutes) than past reports. This finding implies that CME shocks may be capable of accelerating energetic particles in the corona over extended spatial and temporal scales and may, therefore, be responsible for the wide longitudinal distribution of these particles in the inner heliosphere.

  6. Accuracy, precision, and economic efficiency for three methods of thrips (Thysanoptera: Thripidae) population density assessment.

    PubMed

    Sutherland, Andrew M; Parrella, Michael P

    2011-08-01

    Western flower thrips, Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae), is a major horticultural pest and an important vector of plant viruses in many parts of the world. Methods for assessing thrips population density for pest management decision support are often inaccurate or imprecise due to thrips' positive thigmotaxis, small size, and naturally aggregated populations. Two established methods, flower tapping and an alcohol wash, were compared with a novel method, plant desiccation coupled with passive trapping, using accuracy, precision and economic efficiency as comparative variables. Observed accuracy was statistically similar and low (37.8-53.6%) for all three methods. Flower tapping was the least expensive method, in terms of person-hours, whereas the alcohol wash method was the most expensive. Precision, expressed by relative variation, depended on location within the greenhouse, location on greenhouse benches, and the sampling week, but it was generally highest for the flower tapping and desiccation methods. Economic efficiency, expressed by relative net precision, was highest for the flower tapping method and lowest for the alcohol wash method. Advantages and disadvantages are discussed for all three methods used. If relative density assessment methods such as these can all be assumed to accurately estimate a constant proportion of absolute density, then high precision becomes the methodological goal in terms of measuring insect population density, decision making for pest management, and pesticide efficacy assessments.

  7. Preantral follicle density in ovarian biopsy fragments and effects of mare age.

    PubMed

    Alves, K A; Alves, B G; Gastal, G D A; Haag, K T; Gastal, M O; Figueiredo, J R; Gambarini, M L; Gastal, E L

    2017-04-01

The aims of the present study were to: (1) evaluate preantral follicle density in ovarian biopsy fragments within and among mares; (2) assess the effects of mare age on the density and quality of preantral follicles; and (3) determine the minimum number of ovarian fragments and histological sections needed to estimate equine follicle density using a mathematical model. The ovarian biopsy pick-up method was used in three groups of mares separated according to age (5-6, 7-10 and 11-16 years). Overall, 336 preantral follicles were recorded with a mean follicle density of 3.7 follicles per cm2. Follicle density differed (P<0.05) among animals, ovarian fragments from the same animal, histological sections and age groups. More (P<0.05) normal follicles were observed in the 5-6 years (97%) than the 11-16 years (84%) age group. Monte Carlo simulations showed a higher probability (90%; P<0.05) of detecting follicle density using two experimental designs with 65 histological sections and three to four ovarian fragments. In summary, equine follicle density differed among animals and within ovarian fragments from the same animal, and follicle density and morphology were negatively affected by aging. Moreover, three to four ovarian fragments with 65 histological sections were required to accurately estimate follicle density in equine ovarian biopsy fragments.
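The sampling-design question behind the Monte Carlo step — how many fragments and sections make the density estimate reliable — can be sketched generically. Everything below (lognormal between-fragment heterogeneity, Poisson section counts, a ±25% accuracy criterion) is an assumed toy model, not the paper's simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_DENSITY = 3.7                       # follicles per unit section area (assumed units)

def detection_prob(n_frag, n_sect, trials=2000, tol=0.25):
    """Fraction of simulated designs whose mean count lands within tol of truth."""
    hits = 0
    for _ in range(trials):
        # Assumed lognormal heterogeneity between fragments (mean-one scaling).
        frag_means = TRUE_DENSITY * rng.lognormal(-0.125, 0.5, n_frag)
        counts = rng.poisson(frag_means[:, None], (n_frag, n_sect))
        if abs(counts.mean() - TRUE_DENSITY) / TRUE_DENSITY <= tol:
            hits += 1
    return hits / trials

p_small = detection_prob(1, 5)           # one fragment, five sections
p_big = detection_prob(4, 65)            # the design favoured by the study
```

Under this toy model the larger design succeeds noticeably more often, mirroring the qualitative conclusion that three to four fragments with 65 sections are needed.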

  8. Estimating snow leopard population abundance using photography and capture-recapture techniques

    USGS Publications Warehouse

    Jackson, R.M.; Roe, J.D.; Wangchuk, R.; Hunter, D.O.

    2006-01-01

Conservation and management of snow leopards (Uncia uncia) has largely relied on anecdotal evidence and presence-absence data due to their cryptic nature and the difficult terrain they inhabit. These methods generally lack the scientific rigor necessary to accurately estimate population size and monitor trends. We evaluated the use of photography in capture-mark-recapture (CMR) techniques for estimating snow leopard population abundance and density within Hemis National Park, Ladakh, India. We placed infrared camera traps along actively used travel paths, scent-sprayed rocks, and scrape sites within 16- to 30-km2 sampling grids in successive winters during January and March 2003-2004. We used head-on, oblique, and side-view camera configurations to obtain snow leopard photographs at varying body orientations. We calculated snow leopard abundance estimates using the program CAPTURE. We obtained a total of 66 and 49 snow leopard captures resulting in 8.91 and 5.63 individuals per 100 trap-nights during 2003 and 2004, respectively. We identified snow leopards based on the distinct pelage patterns located primarily on the forelimbs, flanks, and dorsal surface of the tail. Capture probabilities ranged from 0.33 to 0.67. Density estimates ranged from 8.49 (SE = 0.22) individuals per 100 km2 in 2003 to 4.45 (SE = 0.16) in 2004. We believe the density disparity between years is attributable to different trap density and placement rather than to an actual decline in population size. Our results suggest that photographic capture-mark-recapture sampling may be a useful tool for monitoring demographic patterns. However, we believe a larger sample size would be necessary for generating a statistically robust estimate of population density and abundance based on CMR models.
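The simplest closed-population CMR estimator, the Chapman bias-corrected Lincoln-Petersen formula, illustrates the arithmetic behind such abundance estimates (the study itself used the multi-occasion models in program CAPTURE; the counts below are made-up example values):

```python
def lincoln_petersen(n1, n2, m2):
    """Chapman's bias-corrected estimator: N = (n1+1)(n2+1)/(m2+1) - 1."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Assumed example: 5 cats photographed in session 1, 4 in session 2, 3 seen in both.
n_hat = lincoln_petersen(n1=5, n2=4, m2=3)
density = n_hat / 100.0        # individuals per km^2 for an assumed 100-km^2 grid
```

With these inputs `n_hat` is 6.5 animals. The multi-occasion models generalize this by letting capture probability vary by time, individual, or behavior, which matters for species with low or heterogeneous detectability.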

  9. Optimizing Probability of Detection Point Estimate Demonstration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. These NDE methods are intended to detect real flaws such as cracks and crack-like flaws. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. This paper discusses optimizing POD demonstration experiments using the point estimate method, which NASA uses for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable probability of passing the demonstration (PPD) and an acceptable probability of false calls (POF) while keeping the flaw sizes in the set as small as possible.
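The binomial logic behind a zero-miss point-estimate demonstration can be written down directly (this is generic statistics, not NASA's qualification procedure): if the true POD at the demonstration flaw size is p, the probability of detecting all n flaws is p^n, which is why 29 detections out of 29 is the classic criterion supporting 90% POD at 95% confidence.

```python
def prob_pass(pod, n=29):
    """Probability that a zero-miss demonstration of n flaws passes."""
    return pod ** n

ppd_at_090 = prob_pass(0.90)   # ~0.047: a process with POD = 0.90 rarely passes 29/29
ppd_at_099 = prob_pass(0.99)   # a POD = 0.99 process passes roughly 3 times in 4
```

The optimization described in the abstract trades off this pass probability against false-call rates and the flaw sizes placed in the demonstration set.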

  10. Remote estimation of crown size and tree density in snowy areas

    NASA Astrophysics Data System (ADS)

    Kishi, R.; Ito, A.; Kamada, K.; Fukita, T.; Lahrita, L.; Kawase, Y.; Murahashi, K.; Kawamata, H.; Naruse, N.; Takahashi, Y.

    2017-12-01

Precise estimation of tree density in forests helps us quantify the amount of carbon dioxide fixed by plants. Aerial photographs have been used to count trees, but aircraft campaigns are expensive (approximately $50,000 per campaign flight) and drones cover only a limited research area. In addition, previous studies estimating tree density from aerial photographs were performed in summer, so overlapping leaves introduced a gap of 15% in the estimates. Here, we propose a method to accurately estimate the number of forest trees from satellite images of snow-covered deciduous forest, using the ratio of branches to snow. The advantages of our method are as follows: 1) snow-covered area can be excluded easily owing to its high reflectance, and 2) tree branches overlap far less than leaves. Although our method applies only to snowfall regions, snow covers more than 12,800,000 km2 worldwide, so our approach should play an important role in discussing global warming. As a test area, we chose the forest near Mt. Amano in Iwate prefecture, Japan. First, we constructed a new index, (Band1-Band5)/(Band1+Band5), suitable for distinguishing between snow and tree trunks using the corresponding spectral reflectance data. Next, we listed the index values while changing the ratio in 1% increments. From the satellite image analysis at four points, the ratio of snow to tree trunk was I: 61%, II: 65%, III: 66% and IV: 65%. To confirm the estimation, we used aerial photographs from Google Earth; the rates were I: 42.05%, II: 48.89%, III: 50.64% and IV: 49.05%, respectively. The two sets of values are correlated but differ; we will discuss this point in detail, focusing on the effect of shadows.
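The proposed index is a standard normalized-difference ratio. Applied to assumed reflectance values (the numbers are illustrative, not the authors' calibration), snow and trunk pixels separate cleanly:

```python
import numpy as np

def snow_trunk_index(band1, band5):
    """Normalized-difference index (Band1 - Band5) / (Band1 + Band5)."""
    b1, b5 = np.asarray(band1, float), np.asarray(band5, float)
    return (b1 - b5) / (b1 + b5)

# Snow reflects strongly in band 1 and absorbs in band 5, pushing the index
# toward +1, while bark stays near 0 (reflectance values below are assumed).
snow_pix = float(snow_trunk_index(0.90, 0.10))    # -> 0.8
trunk_pix = float(snow_trunk_index(0.25, 0.20))   # -> ~0.11
```

Thresholding such an index per pixel, then tabulating the snow-to-trunk area ratio, reproduces the counting procedure the abstract describes.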

  11. Can pelagic forage fish and spawning cisco (Coregonus artedi) biomass in the western arm of Lake Superior be assessed with a single summer survey?

    USGS Publications Warehouse

    Yule, D.L.; Stockwell, J.D.; Schreiner, D.R.; Evrard, L.M.; Balge, M.; Hrabik, T.R.

    2009-01-01

Management efforts to rehabilitate lake trout Salvelinus namaycush in Lake Superior have been successful and the recent increase in their numbers has led to interest in measuring biomass of pelagic prey fish species important to these predators. Lake Superior cisco Coregonus artedi currently support roe fisheries and determining the sustainability of these fisheries is an important management issue. We conducted acoustic and midwater trawl surveys of the western arm of Lake Superior during three periods: summer (July-August), October, and November 2006 to determine if a single survey can be timed to estimate biomass of both prey fish and spawning cisco. We evaluated our methods by comparing observed trawl catches of small (<250 mm total length) and large fish to expected trawl catches based on acoustic densities in the trawl path. We found the relationship between observed and expected catches approached unity over a wide range of densities, suggesting that our acoustic method provided reasonable estimates of fish density, and that midwater trawling methods were free of species- and size-selectivity issues. Rainbow smelt Osmerus mordax was by number the most common species captured in the nearshore (<80 m bathymetric depth) stratum during all three surveys, while kiyi Coregonus kiyi was predominant offshore except during November. Total biomass estimates of rainbow smelt in the western arm were similar during all three surveys, while total biomass of kiyi was similar between summer and October, but was lower in November. Total biomass of large cisco increased substantially in November, while small bloater Coregonus hoyi biomass was lower. We compared our summer 2006 estimates of total fish biomass to the results of a summer survey in 1997 and obtained similar results. We conclude that the temporal window for obtaining biomass estimates of pelagic prey species in the western arm of Lake Superior is wide (July through October), but estimating spawning cisco abundance is best done with a November survey.

  12. Estimation of lattice strain in nanocrystalline RuO2 by Williamson-Hall and size-strain plot methods

    NASA Astrophysics Data System (ADS)

    Sivakami, R.; Dhanuskodi, S.; Karvembu, R.

    2016-01-01

    RuO2 nanoparticles (RuO2 NPs) have been successfully synthesized by the hydrothermal method. Structure and the particle size have been determined by X-ray diffraction (XRD), scanning electron microscopy (SEM), atomic force microscopy (AFM) and transmission electron microscopy (TEM). UV-Vis spectra reveal that the optical band gap of RuO2 nanoparticles is red shifted from 3.95 to 3.55 eV. BET measurements show a high specific surface area (SSA) of 118-133 m2/g and pore diameter (10-25 nm) has been estimated by Barret-Joyner-Halenda (BJH) method. The crystallite size and lattice strain in the samples have been investigated by Williamson-Hall (W-H) analysis assuming uniform deformation, deformation stress and deformation energy density, and the size-strain plot method. All other relevant physical parameters including stress, strain and energy density have been calculated. The average crystallite size and the lattice strain evaluated from XRD measurements are in good agreement with the results of TEM.
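Under the uniform deformation model, the Williamson-Hall relation is β·cosθ = Kλ/D + 4ε·sinθ, so a straight-line fit of β·cosθ against 4·sinθ gives the lattice strain from the slope and the crystallite size from the intercept. A sketch on synthetic peak data (the size, strain, and Bragg angles are assumed; λ is the Cu Kα wavelength):

```python
import numpy as np

K, lam = 0.9, 1.5406e-10                 # Scherrer constant; Cu K-alpha (m)
D_true, eps_true = 20e-9, 2.0e-3         # assumed crystallite size and strain

theta = np.deg2rad(np.array([14.0, 17.5, 20.1, 27.3, 32.6]))   # Bragg angles (assumed)
# Synthetic integrated breadths generated to satisfy the W-H relation exactly:
beta = (K * lam / D_true + 4.0 * eps_true * np.sin(theta)) / np.cos(theta)

slope, intercept = np.polyfit(4.0 * np.sin(theta), beta * np.cos(theta), 1)
eps_est = slope                          # lattice strain from the slope
D_est = K * lam / intercept              # crystallite size from the intercept
```

On real diffraction data the points scatter about the line, and the paper's deformation-stress, deformation-energy-density, and size-strain-plot variants change what is plotted on each axis, not the fitting idea.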

  13. A Feature-based Approach to Big Data Analysis of Medical Images

    PubMed Central

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M.

    2015-01-01

    This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct. PMID:26221685

  14. A Feature-Based Approach to Big Data Analysis of Medical Images.

    PubMed

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M

    2015-01-01

This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct.
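The estimator described above generalizes two classical baselines, kernel density estimation and KNN density estimation. Minimal 1-D versions of those baselines (on synthetic data; this is not the paper's generative model) look like:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 2000)        # synthetic 1-D sample

def kde(x, data, h=0.3):
    """Gaussian kernel density estimate at x with bandwidth h."""
    u = (x - data) / h
    return np.exp(-0.5 * u * u).sum() / (len(data) * h * np.sqrt(2.0 * np.pi))

def knn_density(x, data, k=50):
    """KNN density: k / (2 N r_k), with r_k the distance to the k-th neighbour."""
    r_k = np.sort(np.abs(data - x))[k - 1]
    return k / (2.0 * len(data) * r_k)

p_kde = kde(0.0, data)                   # both should sit near the N(0,1) peak
p_knn = knn_density(0.0, data)
```

KDE fixes the kernel width and lets the neighbor count vary; KNN fixes the neighbor count and lets the width vary. The paper's estimator subsumes both choices within one generative model.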

  15. Combining binary decision tree and geostatistical methods to estimate snow distribution in a mountain watershed

    USGS Publications Warehouse

    Balk, Benjamin; Elder, Kelly

    2000-01-01

    We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9‐km2 Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large‐scale variations in snow depth, while the small‐scale variations were modeled through kriging interpolation methods. Binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54–65% of the observed variance in the depth measurements. The tree‐based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree‐based modeled depths to produce a combined depth model. The combined depth estimates explained 60–85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow‐covered area was determined from high‐resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
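The two-stage structure (tree-modeled trend plus spatially interpolated residuals) can be sketched with stand-ins: a single elevation split takes the place of the decision tree, and inverse-distance weighting takes the place of kriging. Data, split point, and weights are all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
xy = rng.uniform(0.0, 1.0, (200, 2))         # survey point coordinates (assumed)
elev = xy[:, 0]                              # treat x as normalized elevation
depth = (1.0 + 0.8 * (elev > 0.5)            # large-scale elevation effect
         + 0.1 * np.sin(8.0 * xy[:, 1])      # small-scale spatial variation
         + rng.normal(0.0, 0.02, 200))       # measurement noise

hi_mean = depth[elev > 0.5].mean()
lo_mean = depth[elev <= 0.5].mean()

def tree_trend(pt):
    """One-split 'decision tree' capturing the large-scale trend."""
    return hi_mean if pt[0] > 0.5 else lo_mean

resid = depth - np.where(elev > 0.5, hi_mean, lo_mean)

def idw(pt, coords, values, power=2.0):
    """Inverse-distance weighting as a simple stand-in for kriging."""
    d = np.linalg.norm(coords - pt, axis=1) + 1e-9
    w = d ** -power
    return float((w * values).sum() / w.sum())

new_pt = np.array([0.7, 0.3])
depth_est = tree_trend(new_pt) + idw(new_pt, xy, resid)   # combined depth model
```

The combined estimate recovers both the elevation step and part of the local variation, which is exactly why the paper's combined model explains more variance than the tree alone.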

  16. An isometric muscle force estimation framework based on a high-density surface EMG array and an NMF algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Chengjun; Chen, Xiang; Cao, Shuai; Qiu, Bensheng; Zhang, Xu

    2017-08-01

    Objective. To realize accurate muscle force estimation, a novel framework is proposed in this paper which can extract the input of the prediction model from the appropriate activation area of the skeletal muscle. Approach. Surface electromyographic (sEMG) signals from the biceps brachii muscle during isometric elbow flexion were collected with a high-density (HD) electrode grid (128 channels) and the external force at three contraction levels was measured at the wrist synchronously. The sEMG envelope matrix was factorized into a matrix of basis vectors with each column representing an activation pattern and a matrix of time-varying coefficients by a nonnegative matrix factorization (NMF) algorithm. The activation pattern with the highest activation intensity, which was defined as the sum of the absolute values of the time-varying coefficient curve, was considered as the major activation pattern, and its channels with high weighting factors were selected to extract the input activation signal of a force estimation model based on the polynomial fitting technique. Main results. Compared with conventional methods using the whole channels of the grid, the proposed method could significantly improve the quality of force estimation and reduce the electrode number. Significance. The proposed method provides a way to find proper electrode placement for force estimation, which can be further employed in muscle heterogeneity analysis, myoelectric prostheses and the control of exoskeleton devices.
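The factorization step can be sketched with plain multiplicative-update NMF on a synthetic envelope matrix (channels × time), with the "activation intensity" computed as in the abstract, i.e. the sum of the absolute coefficient values. This is a generic sketch, not the authors' processing chain:

```python
import numpy as np

rng = np.random.default_rng(3)
n_ch, n_t, k = 16, 300, 2
t = np.linspace(0.0, 3.0, n_t)
W_true = rng.uniform(0.0, 1.0, (n_ch, k))                   # channel weightings
H_true = np.vstack([np.abs(np.sin(t)), np.abs(np.cos(t))])  # coefficient curves
V = W_true @ H_true + 1e-3                                  # synthetic sEMG envelopes

W = rng.uniform(0.1, 1.0, (n_ch, k))
H = rng.uniform(0.1, 1.0, (k, n_t))
for _ in range(500):                      # Lee-Seung multiplicative updates
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

intensity = np.abs(H).sum(axis=1)         # activation intensity per pattern
major = int(np.argmax(intensity))         # index of the major activation pattern
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the paper, the channels with high weighting factors in the major pattern's column of W are then selected to drive the polynomial force-fitting model.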

  17. Development of a phantom to test fully automated breast density software - A work in progress.

    PubMed

    Waade, G G; Hofvind, S; Thompson, J D; Highnam, R; Hogg, P

    2017-02-01

Mammographic density (MD) is an independent risk factor for breast cancer and may have a future role in stratified screening. Automated software can estimate MD but the relationship between breast thickness reduction and MD is not fully understood. Our aim is to develop a deformable breast phantom to assess automated density software and the impact of breast thickness reduction on MD. Several different configurations of polyvinyl alcohol (PVAL) phantoms were created. Three methods were used to estimate their density. Raw image data of mammographic images were processed using Volpara to estimate volumetric breast density (VBD%); Hounsfield units (HU) were measured on CT images; and physical density (g/cm3) was calculated using a formula involving mass and volume. Phantom volume versus contact area and phantom volume versus phantom thickness was compared to values of real breasts. Volpara recognized all deformable phantoms as female breasts. However, reducing the phantom thickness caused a change in phantom density and the phantoms were not able to tolerate the same level of compression and thickness reduction experienced by female breasts during mammography. Our results are promising as all phantoms resulted in valid data for automated breast density measurement. Further work should be conducted on PVAL and other materials to produce deformable phantoms that mimic female breast structure and density with the ability of being compressed to the same level as female breasts. We are the first group to have produced deformable phantoms that are recognized as breasts by Volpara software. Copyright © 2016 The College of Radiographers. All rights reserved.

  18. Bivariate sub-Gaussian model for stock index returns

    NASA Astrophysics Data System (ADS)

    Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka

    2017-11-01

    Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, also not having an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed form densities, but use the characteristic functions for comparison. The approach applied to a pair of stock index returns demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.
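The characteristic-function idea — compare empirical and model CFs on a grid instead of relying on a closed-form density — can be sketched with a toy Gaussian example (the paper's sub-Gaussian estimators and tests are more involved):

```python
import numpy as np

rng = np.random.default_rng(4)
sample = rng.normal(0.0, 2.0, 5000)      # stand-in for observed returns
t = np.linspace(-1.0, 1.0, 41)           # grid of CF arguments

ecf = np.exp(1j * np.outer(t, sample)).mean(axis=1)   # empirical characteristic function

def gauss_cf(t, mu, sigma):
    """Characteristic function of N(mu, sigma^2)."""
    return np.exp(1j * mu * t - 0.5 * (sigma * t) ** 2)

# Toy estimator: pick the sigma whose model CF is closest to the empirical CF.
sigmas = np.linspace(0.5, 4.0, 36)
dists = [np.mean(np.abs(ecf - gauss_cf(t, 0.0, s)) ** 2) for s in sigmas]
sigma_hat = float(sigmas[int(np.argmin(dists))])
```

The same distance-between-CFs construction works for distributions whose density has no explicit form, such as the bivariate sub-Gaussian laws studied in the paper, because the CF always exists.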

  19. An accurate cost effective DFT approach to study the sensing behaviour of polypyrrole towards nitrate ions in gas and aqueous phases.

    PubMed

    Wasim, Fatima; Mahmood, Tariq; Ayub, Khurshid

    2016-07-28

    Density functional theory (DFT) calculations have been performed to study the response of polypyrrole towards nitrate ions in gas and aqueous phases. First, an accurate estimate of interaction energies is obtained by methods calibrated against the gold standard CCSD(T) method. Then, a number of low cost DFT methods are also evaluated for their ability to accurately estimate the binding energies of polymer-nitrate complexes. The low cost methods evaluated here include dispersion corrected potential (DCP), Grimme's D3 correction, counterpoise correction of the B3LYP method, and Minnesota functionals (M05-2X). The interaction energies calculated using the counterpoise (CP) correction and DCP methods at the B3LYP level are in better agreement with the interaction energies calculated using the calibrated methods. The interaction energies of an infinite polymer (polypyrrole) with nitrate ions are calculated by a variety of low cost methods in order to find the associated errors. The electronic and spectroscopic properties of polypyrrole oligomers nPy (where n = 1-9) and nPy-NO3(-) complexes are calculated, and then extrapolated for an infinite polymer through a second degree polynomial fit. Charge analysis, frontier molecular orbital (FMO) analysis and density of state studies also reveal the sensing ability of polypyrrole towards nitrate ions. Interaction energies, charge analysis and density of states analyses illustrate that the response of polypyrrole towards nitrate ions is considerably reduced in the aqueous medium (compared to the gas phase).

  20. Estimation of body density based on hydrostatic weighing without head submersion in young Japanese adults.

    PubMed

    Demura, S; Sato, S; Kitabayashi, T

    2006-06-01

    This study examined a method of predicting body density based on hydrostatic weighing without head submersion (HWwithoutHS). Donnelly and Sintek (1984) developed a method to predict body density based on hydrostatic weight without head submersion. This method predicts the difference (D) between HWwithoutHS and hydrostatic weight with head submersion (HWwithHS) from anthropometric variables (head length and head width), and then calculates body density using D as a correction factor. We developed several prediction equations to estimate D based on head anthropometry and differences between the sexes, and compared their prediction accuracy with Donnelly and Sintek's equation. Thirty-two males and 32 females aged 17-26 years participated in the study. Multiple linear regression analysis was performed to obtain the prediction equations, and the systematic errors of their predictions were assessed by Bland-Altman plots. The best prediction equations obtained were: Males: D(g) = -164.12X1 - 125.81X2 - 111.03X3 + 100.66X4 + 6488.63, where X1 = head length (cm), X2 = head circumference (cm), X3 = head breadth (cm), X4 = head thickness (cm) (R = 0.858, R2 = 0.737, adjusted R2 = 0.687, standard error of the estimate = 224.1); Females: D(g) = -156.03X1 - 14.03X2 - 38.45X3 - 8.87X4 + 7852.45, where X1 = head circumference (cm), X2 = body mass (g), X3 = head length (cm), X4 = height (cm) (R = 0.913, R2 = 0.833, adjusted R2 = 0.808, standard error of the estimate = 137.7). The effective predictors in these prediction equations differed from those of Donnelly and Sintek's equation, and head circumference and head length were included in both equations. The prediction accuracy was improved by statistically selecting effective predictors. Since we did not assess cross-validity, the equations cannot be used to generalize to other populations, and further investigation is required.

  1. A robust method for estimating motorbike count based on visual information learning

    NASA Astrophysics Data System (ADS)

    Huynh, Kien C.; Thai, Dung N.; Le, Sach T.; Thoai, Nam; Hamamoto, Kazuhiko

    2015-03-01

    Estimating the number of vehicles in traffic videos is an important and challenging task in traffic surveillance, especially with a high level of occlusion between vehicles, e.g., in crowded urban areas with people and/or motorbikes. Under such conditions, separating individual vehicles from foreground silhouettes often requires complicated computation [1][2][3]. Thus, the counting problem has gradually shifted toward drawing statistical inferences about the density of target objects from their shape [4], local features [5], etc. These studies indicate a correlation between local features and the number of target objects, but they are inadequate for constructing an accurate vehicle density estimation model. In this paper, we present a reliable method that is robust to illumination changes and partial affine transformations and can achieve high accuracy in the presence of occlusion. First, local features are extracted from images of the scene using the Speeded-Up Robust Features (SURF) method. For each image, a global feature vector is computed using a Bag-of-Words model constructed from those local features. Finally, a mapping between the extracted global feature vectors and their labels (the number of motorbikes) is learned. This mapping provides a strong prediction model for estimating the number of motorbikes in new images. The experimental results show that our proposed method achieves better accuracy than comparable approaches.
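    The final step, learning a mapping from global feature vectors to counts, can be sketched with ridge regression. The histograms below are synthetic stand-ins for real Bag-of-Words vectors (not output of the SURF pipeline), and ridge regression is one plausible learner, not necessarily the one the authors used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Bag-of-Words histograms: each row is a global feature vector
# whose "activity" grows with the number of motorbikes in the image.
counts = rng.integers(0, 30, size=200)                       # ground-truth counts
X = rng.poisson(lam=counts[:, None] + 1, size=(200, 50)).astype(float)

# Ridge regression (closed form): w = (X^T X + a I)^-1 X^T y
a = 1.0
w = np.linalg.solve(X.T @ X + a * np.eye(X.shape[1]), X.T @ counts)

pred = X @ w                                                 # estimated counts
mae = np.abs(pred - counts).mean()                           # mean absolute error
```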

  2. METAPHOR: Probability density estimation for machine learning based photometric redshifts

    NASA Astrophysics Data System (ADS)

    Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-06-01

    We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine for deriving photometric galaxy redshifts, but offering the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDFs. We present the results of a validation test of the workflow on galaxies from SDSS-DR9, also demonstrating the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).

  3. Carbon storage in China's forest ecosystems: estimation by different integrative methods.

    PubMed

    Peng, Shunlei; Wen, Ding; He, Nianpeng; Yu, Guirui; Ma, Anna; Wang, Qiufeng

    2016-05-01

    Carbon (C) storage in all components of forest ecosystems, especially dead mass and soil organic carbon, has rarely been reported for China and remains uncertain. This study used field-measured data published between 2004 and 2014 to estimate C storage using three forest-type classification methods and three spatial interpolation methods, and assessed the uncertainty in C storage resulting from these different integrative methods in China's forest ecosystems. The results showed that C storage in China's forest ecosystems ranged from 30.99 to 34.96 Pg C across the six integrative methods. We detected 5.0% variation (coefficient of variation, CV, %) among the six methods, which was influenced mainly by the soil C estimates. Soil C density and storage in the 0-100 cm soil layer were estimated to be 136.11-153.16 Mg C·ha(-1) and 20.63-23.21 Pg C, respectively. Dead mass C density and storage were estimated to be 3.66-5.41 Mg C·ha(-1) and 0.68-0.82 Pg C, respectively. Mean C storage in China's forest ecosystems estimated by the six integrative methods was 8.557 Pg C (25.8%) for aboveground biomass, 1.950 Pg C (5.9%) for belowground biomass, 0.697 Pg C (2.1%) for dead mass, and 21.958 Pg C (66.2%) for soil organic C in the 0-100 cm soil layer. The R:S ratio was 0.23, and C storage in the soil was 2.1 times greater than that in the vegetation. Carbon storage estimates based on forest type classification (38 forest subtypes) were closer to the average value than those calculated using the spatial interpolation methods. Variance among different methods and data sources may partially explain the high uncertainty in C storage reported by different studies. This study demonstrates the importance of using multimethodological approaches to estimate C storage accurately in large-scale forest ecosystems.

  4. Measurement of Average Aggregate Density by Sedimentation and Brownian Motion Analysis.

    PubMed

    Cavicchi, Richard E; King, Jason; Ripple, Dean C

    2018-05-01

    The spatially averaged density of protein aggregates is an important parameter that can be used to relate size distributions measured by orthogonal methods, to characterize protein particles, and perhaps to estimate the amount of protein in aggregate form in a sample. We obtained a series of images of protein aggregates exhibiting Brownian diffusion while settling under the influence of gravity in a sealed capillary. The aggregates were formed by stir-stressing a monoclonal antibody (NISTmAb). Image processing yielded particle tracks, which were then examined to determine settling velocity and hydrodynamic diameter down to 1 μm based on mean square displacement analysis. Measurements on polystyrene calibration microspheres ranging in size from 1 to 5 μm showed that the mean square displacement diameter had improved accuracy over the diameter derived from imaged particle area, suggesting a future method for correcting size distributions based on imaging. Stokes' law was used to estimate the density of each particle. It was found that the aggregates were highly porous with density decreasing from 1.080 to 1.028 g/cm3 as the size increased from 1.37 to 4.9 μm. Published by Elsevier Inc.
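    Stokes' law inverts a measured settling velocity to a particle density. A minimal sketch, assuming water-like fluid properties (roughly 25 °C) purely for illustration:

```python
def particle_density(settling_velocity, diameter,
                     fluid_density=997.0, viscosity=8.9e-4, g=9.81):
    """Particle density (kg/m^3) from Stokes' law for a settling sphere.

    Stokes' law: v = g * d^2 * (rho_p - rho_f) / (18 * mu); solved for rho_p.
    Valid only at low Reynolds number (small, slowly settling particles).
    SI units throughout (m, s, kg/m^3, Pa*s).
    """
    return fluid_density + 18.0 * viscosity * settling_velocity / (g * diameter ** 2)

# A 3-um aggregate settling at about 0.28 um/s in water:
rho = particle_density(2.76e-7, 3.0e-6)   # ~1047 kg/m^3, i.e. ~1.047 g/cm^3
```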

  5. A new approach on seismic mortality estimations based on average population density

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong

    2016-12-01

    This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies have established the association between mortality estimation and seismic intensity without considering population density. In China, however, such data are not always available, especially in very urgent relief situations during a disaster, and population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to analyze the death tolls of earthquakes. The present paper employs the average population density to predict final death tolls using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case, and a typical earthquake that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and opens the door to conducting final death forecasts with a combined qualitative and quantitative approach. Limitations and future research are also analyzed and discussed in the conclusion.

  6. Comparison of subjective and fully automated methods for measuring mammographic density.

    PubMed

    Moshina, Nataliia; Roman, Marta; Sebuødegård, Sofie; Waade, Gunvor G; Ursin, Giske; Hofvind, Solveig

    2018-02-01

    Background Breast radiologists of the Norwegian Breast Cancer Screening Program subjectively classified mammographic density using a three-point scale between 1996 and 2012, and switched to the fourth edition of the BI-RADS classification in 2013. In 2015, an automated volumetric breast density assessment software was installed at two screening units. Purpose To compare volumetric breast density measurements from the automated method with two subjective methods: the three-point scale and the BI-RADS density classification. Material and Methods Information on subjective and automated density assessment was obtained from screening examinations of 3635 women recalled for further assessment due to positive screening mammography between 2007 and 2015. The score on the three-point scale (I = fatty; II = medium dense; III = dense) was available for 2310 women. The BI-RADS density score was provided for 1325 women. Mean volumetric breast density was estimated for each category of the subjective classifications. The automated software assigned volumetric breast density to four categories. The agreement between BI-RADS and volumetric breast density categories was assessed using weighted kappa (kw). Results Mean volumetric breast density was 4.5%, 7.5%, and 13.4% for categories I, II, and III of the three-point scale, respectively, and 4.4%, 7.5%, 9.9%, and 13.9% for the four BI-RADS density categories, respectively (P for trend < 0.001 for both subjective classifications). The agreement between BI-RADS and volumetric breast density categories was kw = 0.5 (95% CI = 0.47-0.53; P < 0.001). Conclusion Mean values of volumetric breast density increased with increasing density category of the subjective classifications. The agreement between BI-RADS and volumetric breast density categories was moderate.
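    Weighted kappa for ordered categories, as used above to compare BI-RADS and volumetric categories, can be computed directly from a confusion matrix. A minimal pure-Python sketch (the matrices below are toy examples, not the study's data):

```python
def weighted_kappa(confusion, weights="linear"):
    """Weighted kappa for an ordered-category confusion matrix.

    confusion[i][j] = number of cases rated category i by method A
    and category j by method B. Disagreement weights grow with the
    distance between categories (linearly or quadratically).
    """
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    row_tot = [sum(confusion[i]) for i in range(k)]
    col_tot = [sum(confusion[i][j] for i in range(k)) for j in range(k)]

    def w(i, j):
        d = abs(i - j) / (k - 1)
        return d if weights == "linear" else d ** 2

    observed = sum(w(i, j) * confusion[i][j] for i in range(k) for j in range(k))
    expected = sum(w(i, j) * row_tot[i] * col_tot[j] / n
                   for i in range(k) for j in range(k))
    return 1.0 - observed / expected

# Perfect agreement on 4 ordered categories -> kappa = 1
perfect = [[10, 0, 0, 0], [0, 10, 0, 0], [0, 0, 10, 0], [0, 0, 0, 10]]
```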

  7. Three-dimensional holoscopic image coding scheme using high-efficiency video coding with kernel-based minimum mean-square-error estimation

    NASA Astrophysics Data System (ADS)

    Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai

    2016-07-01

    Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
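    Kernel density estimation with a data-driven bandwidth is the core ingredient named above. A minimal Gaussian-kernel sketch using Silverman's rule of thumb (a standard baseline, not the kernel-trick bandwidth estimator proposed in the paper), with illustrative sample values:

```python
import math

def silverman_bandwidth(xs):
    """Silverman's rule-of-thumb bandwidth: h = 1.06 * sd * n^(-1/5)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return 1.06 * sd * n ** (-1 / 5)

def kde(x, xs, h):
    """Gaussian kernel density estimate at point x."""
    norm = 1.0 / (len(xs) * h * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs)

samples = [1.1, 1.9, 2.0, 2.3, 3.1, 0.8, 2.2, 1.5]
h = silverman_bandwidth(samples)
density_at_2 = kde(2.0, samples, h)
```

The estimated density integrates to one over the real line, which is what makes it usable as a probability model for MMSE estimation.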

  8. Estimation of total discharged mass from the phreatic eruption of Ontake Volcano, central Japan, on September 27, 2014

    NASA Astrophysics Data System (ADS)

    Takarada, Shinji; Oikawa, Teruki; Furukawa, Ryuta; Hoshizumi, Hideo; Itoh, Jun'ichi; Geshi, Nobuo; Miyagi, Isoji

    2016-08-01

    The total mass discharged by the phreatic eruption of Ontake Volcano, central Japan, on September 27, 2014, was estimated using several methods. The estimated discharged mass was 1.2 × 10^6 t (segment integration method), 8.9 × 10^5 t (Pyle's exponential method), and from 8.6 × 10^3 to 2.5 × 10^6 t (Hayakawa's single isopach method). The segment integration and Pyle's exponential methods gave similar values. The single isopach method, however, gave a wide range of results depending on which contour was used. Therefore, the total discharged mass of the 2014 eruption is estimated at between 8.9 × 10^5 and 1.2 × 10^6 t. More than 90% of the total mass accumulated within the proximal area, which shows how important it is to include a proximal-area field survey in total mass estimations of phreatic eruptions. A detailed isopleth mass distribution map was prepared covering as far as 85 km from the source. The main ash-fall dispersal was ENE in the proximal and medial areas and E in the distal area. Secondary distribution lobes also extended to the S and NW proximally, reflecting the effects of elutriated ash and surge deposits from pyroclastic density currents during the phreatic eruption. The total discharged mass of the 1979 phreatic eruption was also calculated for comparison. The resulting mass of 1.9 × 10^6 t (using the segment integration method) indicates that it was about 1.6-2.1 times larger than the 2014 eruption. The estimated average discharged mass flux rate was 1.7 × 10^8 kg/h for the 2014 eruption and 1.0 × 10^8 kg/h for the 1979 eruption. One possible reason for the higher flux rate of the 2014 eruption is the occurrence of pyroclastic density currents at the summit area.
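    The segment integration method amounts to summing mass over isomass bands of the deposit. A hypothetical sketch of that arithmetic (the band areas and mass loadings below are invented for illustration, not the Ontake data):

```python
# Each segment: (area of the isomass band in km^2, mean mass loading in kg/m^2).
# Proximal bands are small but heavily loaded; distal bands are vast but thin.
segments = [
    (5.0, 50.0),     # proximal band
    (40.0, 5.0),
    (300.0, 0.5),
    (2000.0, 0.05),  # distal band
]

# Total mass = sum over bands of (area * mean loading); 1 km^2 = 1e6 m^2.
total_mass_kg = sum(area_km2 * 1.0e6 * load for area_km2, load in segments)
total_mass_t = total_mass_kg / 1000.0
```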

  9. Use of geographic information systems in rabies vaccination campaigns.

    PubMed

    Grisi-Filho, José Henrique de Hildebrand e; Amaku, Marcos; Dias, Ricardo Augusto; Montenegro Netto, Hildebrando; Paranhos, Noemia Tucunduva; Mendes, Maria Cristina Novo Campos; Ferreira Neto, José Soares; Ferreira, Fernando

    2008-12-01

    The objective was to develop a method to assist in the design and assessment of animal rabies control campaigns. A methodology was developed based on geographic information systems to estimate the animal (canine and feline) population and density per census tract and per subregion (known as "Subprefeituras") in the city of São Paulo (Southeastern Brazil) in 2002. The number of vaccination units required in a given region to achieve a certain vaccination coverage was then estimated. The census database was used for the human population, together with estimated dog:inhabitant and cat:inhabitant ratios. The estimated figures were 1,490,500 dogs and 226,954 cats in the city, i.e. an animal population density of 1138.14 owned animals per km(2). In the 2002 campaign, 926,462 animals were vaccinated, resulting in a vaccination coverage of 54%. To reach 70% vaccination coverage, with each unit vaccinating 700 animals on average, an estimated 1,729 vaccination units would be needed. These estimates are presented as maps of animal density according to census tracts and "Subprefeituras". The methodology used in the study may be applied systematically to the design and evaluation of rabies vaccination campaigns, enabling the identification of areas with critically low vaccination coverage.
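    The unit count follows from simple coverage arithmetic. The sketch below uses the city-wide totals from the abstract; the study's figure of 1,729 units was derived per region, so this aggregate calculation differs slightly.

```python
import math

dogs, cats = 1_490_500, 226_954          # estimated owned animals in São Paulo, 2002
target_coverage = 0.70
animals_per_unit = 700                   # average throughput of one vaccination unit

animals_to_vaccinate = target_coverage * (dogs + cats)
units_needed = math.ceil(animals_to_vaccinate / animals_per_unit)

# Coverage actually achieved in the 2002 campaign:
coverage_2002 = 926_462 / (dogs + cats)  # ~0.54, matching the reported 54%
```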

  10. Brain Tissue Compartment Density Estimated Using Diffusion-Weighted MRI Yields Tissue Parameters Consistent With Histology

    PubMed Central

    Sepehrband, Farshid; Clark, Kristi A.; Ullmann, Jeremy F.P.; Kurniawan, Nyoman D.; Leanage, Gayeshika; Reutens, David C.; Yang, Zhengyi

    2015-01-01

    We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intra-cellular and intra-neurite density values obtained from diffusion MRI. The NODDI (neurite orientation dispersion and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different sub-regions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (diffusion MRI: 42±6%, 36±4% and 43±5%; electron microscopy: 41±10%, 36±8% and 44±12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers. PMID:26096639

  11. Near surface bulk density estimates of NEAs from radar observations and permittivity measurements of powdered geologic material

    NASA Astrophysics Data System (ADS)

    Hickson, Dylan; Boivin, Alexandre; Daly, Michael G.; Ghent, Rebecca; Nolan, Michael C.; Tait, Kimberly; Cunje, Alister; Tsai, Chun An

    2018-05-01

    The variations in near-surface properties and regolith structure of asteroids are currently not well constrained by remote sensing techniques. Radar is a useful tool for such determinations of Near-Earth Asteroids (NEAs), as the power of the signal reflected from the surface depends on the bulk density, ρbd, and dielectric permittivity. In this study, high precision complex permittivity measurements of powdered aluminum oxide and dunite samples are used to characterize the change in the real part of the permittivity with the bulk density of the sample. In this work, we use silica aerogel for the first time to increase the void space in the samples (and decrease the bulk density) without significantly altering the electrical properties. We fit various mixing equations to the experimental results. The Looyenga-Landau-Lifshitz mixing formula has the best fit, while the Lichtenecker mixing formula, which is typically used to approximate planetary regolith, does not model the results well. We find that the Looyenga-Landau-Lifshitz formula adequately matches lunar regolith permittivity measurements, and we incorporate it into an existing model for obtaining asteroid regolith bulk density from radar returns, which is then used to estimate the near-surface bulk density of the NEAs (101955) Bennu and (25143) Itokawa. Constraints on the material properties appropriate for either asteroid give average estimates of ρbd = 1.27 ± 0.33 g/cm3 for Bennu and ρbd = 1.68 ± 0.53 g/cm3 for Itokawa. We conclude that our data suggest that the Looyenga-Landau-Lifshitz mixing model, in tandem with an appropriate radar scattering model, is the best method for estimating bulk densities of regoliths from radar observations of airless bodies.
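    The Looyenga-Landau-Lifshitz formula for a solid/vacuum mixture can be inverted to recover bulk density from a measured permittivity. A sketch under assumed grain properties (the solid density and permittivity below are illustrative values, not the paper's fitted parameters):

```python
def lll_permittivity(bulk_density, solid_density, solid_permittivity):
    """Real relative permittivity of a solid/vacuum mixture (Looyenga-Landau-Lifshitz).

    eps_mix^(1/3) = f * eps_solid^(1/3) + (1 - f), where f is the solid
    volume fraction, i.e. bulk density over solid-grain density.
    """
    f = bulk_density / solid_density
    return (f * solid_permittivity ** (1 / 3) + (1 - f)) ** 3

def bulk_density_from_permittivity(eps_mix, solid_density, solid_permittivity):
    """Invert the LLL formula to recover bulk density from measured permittivity."""
    return (solid_density * (eps_mix ** (1 / 3) - 1)
            / (solid_permittivity ** (1 / 3) - 1))

# Illustrative dunite-like grain properties:
rho_s, eps_s = 3300.0, 7.0          # kg/m^3, relative permittivity
eps = lll_permittivity(1500.0, rho_s, eps_s)
rho_recovered = bulk_density_from_permittivity(eps, rho_s, eps_s)
```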

  12. Spatially explicit inference for open populations: estimating demographic parameters from camera-trap studies

    USGS Publications Warehouse

    Gardner, Beth; Reppucci, Juan; Lucherini, Mauro; Royle, J. Andrew

    2010-01-01

    We develop a hierarchical capture–recapture model for demographically open populations when auxiliary spatial information about the location of capture is obtained. Such spatial capture–recapture data arise from studies based on camera trapping, DNA sampling, and other situations in which a spatial array of devices records encounters of unique individuals. We integrate an individual-based formulation of a Jolly-Seber type model with recently developed spatially explicit capture–recapture models to estimate density and demographic parameters for survival and recruitment. We adopt a Bayesian framework for inference under this model using the method of data augmentation, which is implemented in the software program WinBUGS. The model was motivated by a camera trapping study of Pampas cats Leopardus colocolo from Argentina, which we present as an illustration of the model in this paper. We provide estimates of density and the first quantitative assessment of vital rates for the Pampas cat in the High Andes. The precision of these estimates is poor, likely due to the sparse data set. Unlike conventional inference methods, which usually rely on asymptotic arguments, Bayesian inferences are valid for arbitrary sample sizes, and thus the method is ideal for the study of rare or endangered species for which small data sets are typical.

  13. Estimating metallicities with isochrone fits to photometric data of open clusters

    NASA Astrophysics Data System (ADS)

    Monteiro, H.; Oliveira, A. F.; Dias, W. S.; Caetano, T. C.

    2014-10-01

    The metallicity is a critical parameter that affects the correct determination of a stellar cluster's fundamental characteristics and has important implications for Galactic and stellar evolution research. Fewer than 10% of the 2174 currently catalogued open clusters have their metallicity determined in the literature. In this work we present a method for estimating the metallicity of open clusters via non-subjective isochrone fitting using the cross-entropy global optimization algorithm applied to UBV photometric data. The free parameters distance, reddening, age, and metallicity are simultaneously determined by the fitting method. The fitting procedure uses weights for the observational data based on the estimation of membership likelihood for each star, which considers the observational magnitude limit, the density profile of stars as a function of radius from the center of the cluster, and the density of stars in multi-dimensional magnitude space. We present results of [Fe/H] for well-studied open clusters based on distinct UBV data sets. The [Fe/H] values obtained agree with the spectroscopic determinations available in the literature for ten of the clusters, indicating that our method provides a good alternative for estimating [Fe/H] through objective isochrone fitting. Our results show that the typical precision is about 0.1 dex.
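    The cross-entropy method used for the non-subjective fit is a general stochastic optimizer: sample candidate parameters, keep the elite fraction, refit the sampling distribution, repeat. A minimal one-parameter sketch on a toy objective (standing in for the isochrone fit statistic, which it does not attempt to reproduce):

```python
import random

def cross_entropy_minimize(f, mu=0.0, sigma=5.0, n_samples=100,
                           elite_frac=0.1, iterations=40, seed=42):
    """Cross-entropy method: sample, keep the elite, refit the sampling distribution."""
    rng = random.Random(seed)
    n_elite = max(1, int(n_samples * elite_frac))
    for _ in range(iterations):
        xs = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        elite = sorted(xs, key=f)[:n_elite]        # best (lowest f) candidates
        mu = sum(elite) / n_elite                  # refit mean...
        sigma = (sum((x - mu) ** 2 for x in elite) / n_elite) ** 0.5 + 1e-12
    return mu

# Toy objective with its minimum at x = 3:
best = cross_entropy_minimize(lambda x: (x - 3.0) ** 2)
```

In the actual fitting problem the sampled vector would hold distance, reddening, age, and metallicity, and f would be the weighted photometric fit statistic.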

  14. Spatially explicit inference for open populations: estimating demographic parameters from camera-trap studies.

    PubMed

    Gardner, Beth; Reppucci, Juan; Lucherini, Mauro; Royle, J Andrew

    2010-11-01

    We develop a hierarchical capture-recapture model for demographically open populations when auxiliary spatial information about the location of capture is obtained. Such spatial capture-recapture data arise from studies based on camera trapping, DNA sampling, and other situations in which a spatial array of devices records encounters of unique individuals. We integrate an individual-based formulation of a Jolly-Seber type model with recently developed spatially explicit capture-recapture models to estimate density and demographic parameters for survival and recruitment. We adopt a Bayesian framework for inference under this model using the method of data augmentation, which is implemented in the software program WinBUGS. The model was motivated by a camera trapping study of Pampas cats Leopardus colocolo from Argentina, which we present as an illustration of the model in this paper. We provide estimates of density and the first quantitative assessment of vital rates for the Pampas cat in the High Andes. The precision of these estimates is poor, likely due to the sparse data set. Unlike conventional inference methods, which usually rely on asymptotic arguments, Bayesian inferences are valid for arbitrary sample sizes, and thus the method is ideal for the study of rare or endangered species for which small data sets are typical.

  15. Error estimates for (semi-)empirical dispersion terms and large biomacromolecules.

    PubMed

    Korth, Martin

    2013-10-14

    The first-principles modeling of biomaterials has made tremendous advances over the last few years with the ongoing growth of computing power and impressive developments in the application of density functional theory (DFT) codes to large systems. One important step forward was the development of dispersion corrections for DFT methods, which account for the otherwise neglected dispersive van der Waals (vdW) interactions. Approaches at different levels of theory exist, with the most often used (semi-)empirical ones based on pair-wise interatomic C6R^-6 terms. Similar terms are now also used in connection with semiempirical QM (SQM) methods and density functional tight-binding methods (SCC-DFTB). Their basic structure equals the attractive term in Lennard-Jones potentials, common to most force field approaches, but they usually apply some type of cutoff function to allow mixing of the (long-range) dispersion term with the (short-range) dispersion and exchange-repulsion effects already present in the electronic structure methods. All these dispersion approximations were found to perform accurately for smaller systems, but error estimates for larger systems are very rare and completely missing for really large biomolecules. We derive such estimates for the dispersion terms of DFT, SQM and MM methods using error statistics for smaller systems and dispersion contribution estimates for the PDBbind database of protein-ligand interactions. We find that dispersion terms will usually not be a limiting factor for reaching chemical accuracy, though some force fields and large ligand sizes are problematic.
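    A pairwise C6R^-6 correction with a short-range damping (cutoff) function can be sketched as follows. The C6 coefficients, vdW radii, and Fermi-type damping form below are generic placeholders, not any specific DFT-D parameterization:

```python
import math

def dispersion_energy(coords, c6, r0, s6=1.0, d=20.0):
    """Pairwise -s6 * C6_ij / R^6 dispersion with a Fermi-type damping function.

    coords: list of (x, y, z) positions; c6: per-atom C6 coefficients;
    r0: per-atom vdW radii. The damping function switches the correction
    off at short range, where the electronic structure method already
    describes the interaction.
    """
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(coords[i], coords[j])
            c6_ij = math.sqrt(c6[i] * c6[j])          # geometric-mean combination rule
            rr = r0[i] + r0[j]
            f_damp = 1.0 / (1.0 + math.exp(-d * (r / rr - 1.0)))
            e -= s6 * c6_ij / r ** 6 * f_damp
    return e

# Two identical atoms separated well beyond their vdW radii (arbitrary units):
e = dispersion_energy([(0, 0, 0), (5, 0, 0)], c6=[10.0, 10.0], r0=[1.5, 1.5])
```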

  16. Concrete density estimation by rebound hammer method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ismail, Mohamad Pauzi bin, E-mail: pauzi@nm.gov.my; Masenwat, Noor Azreen bin; Sani, Suhairy bin

    Concrete is the most common and cheapest material for radiation shielding. Compressive strength is the main parameter checked to determine concrete quality; however, for shielding purposes, density is the parameter that needs to be considered. X- and gamma-radiation are effectively absorbed by materials with high atomic number and high density, such as concrete. High strength normally implies higher density in concrete, but this is not always true. This paper explains and discusses the correlation between rebound hammer testing and density for concrete containing hematite aggregates. A comparison is also made with normal concrete, i.e., concrete containing crushed granite.

  17. Identification of the population density of a species model with nonlocal diffusion and nonlinear reaction

    NASA Astrophysics Data System (ADS)

    Tuan, Nguyen Huy; Van Au, Vo; Khoa, Vo Anh; Lesnic, Daniel

    2017-05-01

    The identification of the population density of a logistic equation backwards in time associated with nonlocal diffusion and nonlinear reaction, motivated by the fields of biology and ecology, is investigated. The diffusion depends on an integral average of the population density, whilst the reaction term is a global or local Lipschitz function of the population density. After discussing the ill-posedness of the problem, we apply the quasi-reversibility method to construct stable approximation problems. It is shown that the regularized solutions stemming from this method not only depend continuously on the final data, but also converge strongly to the exact solution in the L2-norm. New error estimates together with stability results are obtained. Furthermore, numerical examples are provided to illustrate the theoretical results.

  18. Density Estimation for New Solid and Liquid Explosives

    DTIC Science & Technology

    1977-02-17

    The group additivity approach was shown to be applicable to density estimation. The densities of approximately 180 explosives and related compounds of very diverse compositions were estimated, and almost all the estimates were quite reasonable. Of the 168 compounds for which direct comparisons could be made (see Table 6), 36.9% of the estimated densities were within 1% of the measured densities, 33.3% were within 1-2%, and 11.9% were within 2-3%.
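    Group additivity estimates molar volume as a sum of per-group contributions, then density as molar mass over molar volume. A sketch with hypothetical group volumes (placeholder numbers, not the report's fitted values):

```python
# Hypothetical group contributions to molar volume (cm^3/mol):
GROUP_VOLUME = {"CH3": 33.5, "CH2": 16.1, "NO2": 24.0}

def estimate_density(groups, molar_mass):
    """Density (g/cm^3) = molar mass / sum of group molar volume contributions."""
    v_molar = sum(GROUP_VOLUME[g] * n for g, n in groups.items())
    return molar_mass / v_molar

# An illustrative molecule with 2 CH3, 1 CH2, and 1 NO2 group:
rho = estimate_density({"CH3": 2, "CH2": 1, "NO2": 1}, molar_mass=120.0)
```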

  19. Statistical density modification using local pattern matching

    DOEpatents

    Terwilliger, Thomas C.

    2007-01-23

    A computer implemented method modifies an experimental electron density map. A set of selected known experimental and model electron density maps is provided, and standard templates of electron density are created from the selected maps by clustering and averaging values of electron density in a spherical region about each point in the grid that defines each selected map. Histograms are also created from the selected maps that relate the value of electron density at the center of each spherical region to a correlation coefficient of the density surrounding each corresponding grid point in each of the standard templates. The standard templates and the histograms are applied to grid points on the experimental electron density map to form new estimates of electron density at each grid point in the experimental electron density map.

  20. Infrared thermography for wood density estimation

    NASA Astrophysics Data System (ADS)

    López, Gamaliel; Basterra, Luis-Alfonso; Acuña, Luis

    2018-03-01

    Infrared thermography (IRT) is becoming a commonly used technique to non-destructively inspect and evaluate wood structures. Based on the radiation emitted by all objects, this technique enables the remote visualization of the surface temperature without making contact using a thermographic device. The process of transforming radiant energy into temperature depends on many parameters, and interpreting the results is usually complicated. However, some works have analyzed the operation of IRT and expanded its applications, as found in the latest literature. This work analyzes the effect of density on the thermodynamic behavior of timber to be determined by IRT. The cooling of various wood samples has been registered, and a statistical procedure that enables one to quantitatively estimate the density of timber has been designed. This procedure represents a new method to physically characterize this material.
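    One way to turn IRT cooling records into a density predictor is to fit each sample's cooling constant and regress density on it. The sketch below fits a Newtonian cooling constant by log-linear least squares; the cooling data are synthetic, and the link from cooling rate to density is the paper's premise rather than something this code establishes.

```python
import math

def cooling_constant(times, temps, t_ambient):
    """Fit T(t) = T_a + (T0 - T_a) * exp(-k t) by log-linear least squares; return k."""
    ys = [math.log(T - t_ambient) for T in temps]   # linear in t with slope -k
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -slope

# Synthetic cooling record: ambient 20, start 60, true k = 0.05 per time unit.
t_a, T0, k_true = 20.0, 60.0, 0.05
times = [0, 10, 20, 30, 40, 50]
temps = [t_a + (T0 - t_a) * math.exp(-k_true * t) for t in times]
k_fit = cooling_constant(times, temps, t_a)
```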

  1. Comparison of methods to monitor the distribution and impacts of unauthorized travel routes in a border park

    USGS Publications Warehouse

    Esque, Todd C.; Inman, Rich; Nussear, Kenneth E.; Webb, Robert; Girard, M.M.; DeGayner, J.

    2016-01-01

    The distribution and abundance of human-caused disturbances vary greatly through space and time and are cause for concern among land stewards in natural areas of the southwestern borderlands between the USA and Mexico. Human migration and border protection along the international boundary create Unauthorized Trail and Road (UTR) networks across National Park Service lands and other natural areas. UTRs may cause soil erosion and compaction, damage vegetation and cultural resources, and stress wildlife or impede their movements. We quantify the density and severity of UTR disturbances in relation to soils, and compare the use of previously established targeted trail assessments (hereafter, targeted assessments) against randomly placed transects to detect trail densities at Coronado National Memorial in Arizona in 2011. While trail distributions were similar between methods, the targeted assessments placed a large portion of the park in the lowest density category (0-5 trail encounters per km2), whereas the random transects in 2011 placed more of the park in the higher density categories (e.g., 15-20 encounters per km2). Soil vulnerability categories assigned a priori, based on published soil texture and composition, did not accurately predict the impact of UTRs on soil, indicating that empirical methods may be better suited for identifying severity of compaction. Although the random transects yielded greater UTR encounter frequencies than the targeted assessments over a relatively short period of time, it is difficult to determine whether this difference reflects greater cross-border activity, differences in technique, or confounding environmental factors. Future surveys using standardized sampling techniques would increase accuracy.

  2. Estimates of Forest Biomass Carbon Storage in Liaoning Province of Northeast China: A Review and Assessment

    PubMed Central

    Yu, Dapao; Wang, Xiaoyu; Yin, You; Zhan, Jinyu; Lewis, Bernard J.; Tian, Jie; Bao, Ye; Zhou, Wangming; Zhou, Li; Dai, Limin

    2014-01-01

    Accurate estimates of forest carbon storage and changes in storage capacity are critical for the scientific assessment of the effects of forest management on the role of forests as carbon sinks. To date, several studies have reported forest biomass carbon (FBC) in Liaoning Province based on data from China's Continuous Forest Inventory; however, their accuracy was still unknown. This study compared estimates of FBC in Liaoning Province derived from different methods. We found substantial variation in estimates of FBC storage for young and middle-age forests. For provincial forests with high proportions in these age classes, the continuous biomass expansion factor method (CBM) by forest type with age class is more accurate and therefore more appropriate for estimating forest biomass. Based on the above approach designed for this study, forests in Liaoning Province were found to be a carbon sink, with carbon stocks increasing from 63.0 TgC in 1980 to 120.9 TgC in 2010, reflecting an annual increase of 1.9 TgC. The average carbon density of forest biomass in the province increased from 26.2 Mg ha−1 in 1980 to 31.0 Mg ha−1 in 2010. While the largest FBC occurred in middle-age forests, the average carbon density decreased in this age class during these three decades. The increase in forest carbon density resulted primarily from the increased area and carbon storage of mature forests. The relatively long age interval in each age class for slow-growing forest types increased the uncertainty of FBC estimates by CBM-forest type with age class, and further studies should devote more attention to the time span of age classes in establishing biomass expansion factors for use in CBM calculations. PMID:24586881

  3. Estimates of forest biomass carbon storage in Liaoning Province of Northeast China: a review and assessment.

    PubMed

    Yu, Dapao; Wang, Xiaoyu; Yin, You; Zhan, Jinyu; Lewis, Bernard J; Tian, Jie; Bao, Ye; Zhou, Wangming; Zhou, Li; Dai, Limin

    2014-01-01

    Accurate estimates of forest carbon storage and changes in storage capacity are critical for scientific assessment of the effects of forest management on the role of forests as carbon sinks. Up to now, several studies have reported forest biomass carbon (FBC) in Liaoning Province based on data from China's Continuous Forest Inventory; however, their accuracy was still unknown. This study compared estimates of FBC in Liaoning Province derived from different methods. We found substantial variation in estimates of FBC storage for young and middle-age forests. For provincial forests with high proportions in these age classes, the continuous biomass expansion factor method (CBM) by forest type with age class is more accurate and therefore more appropriate for estimating forest biomass. Based on the approach designed for this study, forests in Liaoning Province were found to be a carbon sink, with carbon stocks increasing from 63.0 TgC in 1980 to 120.9 TgC in 2010, reflecting an annual increase of 1.9 TgC. The average carbon density of forest biomass in the province increased from 26.2 Mg ha−1 in 1980 to 31.0 Mg ha−1 in 2010. While the largest FBC occurred in middle-age forests, the average carbon density decreased in this age class during these three decades. The increase in forest carbon density resulted primarily from the increased area and carbon storage of mature forests. The relatively long age interval in each age class for slow-growing forest types increased the uncertainty of FBC estimates by CBM-forest type with age class, and further studies should devote more attention to the time span of age classes in establishing biomass expansion factors for use in CBM calculations.

  4. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. 
The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation compared with several other state-of-the-art anomaly detection methods, and is easy to implement in practice.
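
The paired-dataset k-nn scoring idea can be illustrated with a minimal sketch. This is not the authors' RBRSE formulation: the toy data, the choice of k, and the plain Euclidean mean-distance score are all illustrative assumptions. Pixels far from their nearest background spectra receive high anomaly scores.

```python
import numpy as np

def knn_anomaly_score(pixels, background, k=5):
    """Score each pixel by the mean Euclidean distance to its k nearest
    neighbours in the background set; large scores flag anomalies."""
    scores = []
    for x in pixels:
        d = np.linalg.norm(background - x, axis=1)
        scores.append(np.sort(d)[:k].mean())
    return np.array(scores)

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, size=(200, 4))   # toy background spectra
normal_px = rng.normal(0.0, 1.0, size=(5, 4))      # pixels like the background
anomaly_px = np.full((1, 4), 8.0)                  # spectrally distinct target
scores = knn_anomaly_score(np.vstack([normal_px, anomaly_px]), background)
```

The anomalous pixel's score dominates those of the background-like pixels, which is the separation the detection output is built on.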

  5. Imaging Breast Density: Established and Emerging Modalities

    PubMed Central

    Chen, Jeon-Hor; Gulsen, Gultekin; Su, Min-Ying

    2015-01-01

    Mammographic density has been established as an independent risk factor for breast cancer. Women with dense breast tissue visible on a mammogram have a much higher cancer risk than women with little density. A great research effort has been devoted to incorporating breast density into risk prediction models to better estimate each individual’s cancer risk. In recent years, the passage of breast density notification legislation in many US states requires that every mammography report provide information regarding the patient’s breast density. Accurate definition and measurement of breast density are thus important, which may allow all the potential clinical applications of breast density to be implemented. Because the two-dimensional mammography-based measurement is subject to tissue overlapping and thus not able to provide volumetric information, there is an urgent need to develop reliable quantitative measurements of breast density. Various new imaging technologies are being developed. Among these new modalities, volumetric mammographic density methods and three-dimensional magnetic resonance imaging are the most well studied. In addition, emerging modalities, including other x-ray-based, optical, and ultrasound-based methods, have also been investigated. All these modalities may either overcome some fundamental problems related to mammographic density or provide additional density and/or compositional information. This review article summarizes the currently established and emerging imaging techniques for measuring breast density and the evidence from the literature on their clinical use. PMID:26692524

  6. Model-based estimators of density and connectivity to inform conservation of spatially structured populations

    USGS Publications Warehouse

    Morin, Dana J.; Fuller, Angela K.; Royle, J. Andrew; Sutherland, Chris

    2017-01-01

    Conservation and management of spatially structured populations is challenging because solutions must consider where individuals are located, but also differential individual space use as a result of landscape heterogeneity. A recent extension of spatial capture–recapture (SCR) models, the ecological distance model, uses spatial encounter histories of individuals (e.g., a record of where individuals are detected across space, often sequenced over multiple sampling occasions), to estimate the relationship between space use and characteristics of a landscape, allowing simultaneous estimation of both local densities of individuals across space and connectivity at the scale of individual movement. We developed two model-based estimators derived from the SCR ecological distance model to quantify connectivity over a continuous surface: (1) potential connectivity—a metric of the connectivity of areas based on resistance to individual movement; and (2) density-weighted connectivity (DWC)—potential connectivity weighted by estimated density. Estimates of potential connectivity and DWC can provide spatial representations of areas that are most important for the conservation of threatened species, or management of abundant populations (i.e., areas with high density and landscape connectivity), and thus generate predictions that have great potential to inform conservation and management actions. We used a simulation study with a stationary trap design across a range of landscape resistance scenarios to evaluate how well our model estimates resistance, potential connectivity, and DWC. Correlation between true and estimated potential connectivity was high, and there was positive correlation and high spatial accuracy between estimated DWC and true DWC. We applied our approach to data collected from a population of black bears in New York, and found that forested areas represented low levels of resistance for black bears. 
We demonstrate that formal inference about measures of landscape connectivity can be achieved from standard methods of studying animal populations which yield individual encounter history data such as camera trapping. Resulting biological parameters including resistance, potential connectivity, and DWC estimate the spatial distribution and connectivity of the population within a statistical framework, and we outline applications to many possible conservation and management problems.
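
A minimal sketch of the two connectivity surfaces, under assumed ingredients: a toy resistance grid, an exponential movement kernel with rate alpha, and a uniform density surface. The paper's SCR ecological-distance model estimates resistance and density from encounter histories rather than taking them as given; here they are fixed inputs for illustration.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def cost_distances(resistance):
    """All-pairs least-cost distances on a 4-connected grid; stepping between
    two cells costs the mean of their resistances (a common convention)."""
    nr, nc = resistance.shape
    n = nr * nc
    W = lil_matrix((n, n))
    for r in range(nr):
        for c in range(nc):
            i = r * nc + c
            for dr, dc in ((0, 1), (1, 0)):
                r2, c2 = r + dr, c + dc
                if r2 < nr and c2 < nc:
                    j = r2 * nc + c2
                    w = 0.5 * (resistance[r, c] + resistance[r2, c2])
                    W[i, j] = w
                    W[j, i] = w
    return dijkstra(W.tocsr(), directed=False)

# Hypothetical resistance and density surfaces (toy values): a high-cost
# barrier runs down the middle of an otherwise uniform landscape.
resistance = np.ones((10, 10))
resistance[:, 4:6] = 10.0
density = np.full(100, 0.5)          # estimated density per cell (assumed)

D = cost_distances(resistance)
alpha = 0.5                          # movement-kernel decay rate (assumed)
potential = np.exp(-alpha * D).sum(axis=1)   # potential connectivity
dwc = density * potential                    # density-weighted connectivity
```

Cells inside the barrier end up with low potential connectivity, and DWC simply rescales that surface by local density, so areas that are both dense and well connected stand out.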

  7. The Mass of Saturn's B ring from hidden density waves

    NASA Astrophysics Data System (ADS)

    Hedman, M. M.; Nicholson, P. D.

    2015-12-01

    The B ring is Saturn's brightest and most opaque ring, but many of its fundamental parameters, including its total mass, are not well constrained. Elsewhere in the rings, the best mass density estimates come from spiral waves driven by mean-motion resonances with Saturn's various moons, but such waves have been hard to find in the B ring. We have developed a new wavelet-based technique for combining data from multiple stellar occultations that allows us to isolate the density wave signals from other ring structures. This method has been applied to 5 density waves using 17 occultations of the star gamma Crucis observed by the Visual and Infrared Mapping Spectrometer (VIMS) onboard the Cassini spacecraft. Two of these waves (generated by the Janus 2:1 and Mimas 5:2 Inner Lindblad Resonances) are visible in individual occultation profiles, but the other three wave signatures (associated with the Janus 3:2, Enceladus 3:1 and Pandora 3:2 Inner Lindblad Resonances) are not visible in individual profiles and can only be detected in the combined dataset. Estimates of the ring's surface mass density derived from these five waves fall between 40 and 140 g/cm^2. Surprisingly, these mass density estimates show no obvious correlation with the ring's optical depth. Furthermore, these data indicate that the total mass of the B ring is probably between one-third and two-thirds the mass of Saturn's moon Mimas.
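
The step from a detected wave to a surface density rests on the linear density-wave dispersion relation, in which the radial wavenumber grows linearly with distance from the resonance. A sketch with illustrative numbers (the resonance radius, wavelength, and radial offset below are assumed round values, not the paper's measurements):

```python
import math

M_SATURN = 5.683e26   # kg (assumed planet mass)

def wave_surface_density(m, r_res, r, k):
    """Surface mass density (kg/m^2) from the linear density-wave dispersion
    relation for an inner Lindblad resonance with azimuthal wavenumber m:
    k(r) = 3 (m - 1) M_p (r - r_res) / (2 pi sigma r_res**4)."""
    return 3.0 * (m - 1) * M_SATURN * (r - r_res) / (2.0 * math.pi * r_res**4 * k)

# Illustrative numbers loosely patterned on a Janus 2:1-type wave:
# a ~15 km wavelength measured ~100 km outside the resonance radius.
r_res = 9.625e7                     # resonance radius, m
k = 2.0 * math.pi / 15e3            # radial wavenumber, rad/m
sigma = wave_surface_density(2, r_res, r_res + 100e3, k)
sigma_cgs = sigma * 0.1             # kg/m^2 -> g/cm^2
```

With these toy inputs the result lands inside the 40-140 g/cm^2 range quoted in the abstract, which is a useful sanity check on the scaling of the relation.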

  8. Comparing four methods to estimate usual intake distributions.

    PubMed

    Souverein, O W; Dekkers, A L; Geelen, A; Haubrock, J; de Vries, J H; Ocké, M C; Harttig, U; Boeing, H; van 't Veer, P

    2011-07-01

    The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As 'true' usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data from the European Food Consumption Validation (EFCOVAL) Study in which two 24-h dietary recalls (24-HDRs) and food frequency data were collected. The methods being compared were the Iowa State University Method (ISU), National Cancer Institute Method (NCI), Multiple Source Method (MSM) and Statistical Program for Age-adjusted Dietary Assessment (SPADE). Simulation data were constructed with varying numbers of subjects (n), different values for the Box-Cox transformation parameter (λ(BC)) and different values for the ratio of the within- and between-person variance (r(var)). All data were analyzed with the four different methods and the estimated usual mean intake and selected percentiles were obtained. Moreover, the 2-day within-person mean was estimated as an additional 'method'. These five methods were compared in terms of the mean bias, which was calculated as the mean of the differences between the estimated value and the known true value. The application of data from the EFCOVAL Project included calculations of nutrients (that is, protein, potassium, protein density) and foods (that is, vegetables, fruit and fish). Overall, the mean bias of the ISU, NCI, MSM and SPADE Methods was small. However, for all methods, the mean bias and the variation of the bias increased with smaller sample size, higher variance ratios and with more pronounced departures from normality. Serious mean bias (especially in the 95th percentile) was seen using the NCI Method when r(var) = 9, λ(BC) = 0 and n = 1000. The ISU Method and MSM showed a somewhat higher s.d. of the bias compared with NCI and SPADE Methods, indicating a larger method uncertainty. 
Furthermore, whereas the ISU, NCI and SPADE Methods produced unimodal density functions by definition, MSM produced distributions with 'peaks' when sample size was small, because the population's usual intake distribution was based on estimated individual usual intakes. The application to the EFCOVAL data showed that all estimates of the percentiles and mean were within 5% of each other for the three nutrients analyzed. For vegetables, fruit and fish, the differences were larger than those for nutrients, but overall the sample mean was estimated reasonably well. The four methods that were compared seem to provide good estimates of the usual intake distribution of nutrients. Nevertheless, care needs to be taken when a nutrient has a high within-person variation or has a highly skewed distribution, and when the sample size is small. As the methods offer different features, practical reasons may exist to prefer one method over the other.
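
The shared logic of these methods (transform intakes toward normality, separate within- from between-person variance, and shrink individual means toward the population mean) can be sketched as follows. This is a deliberately simplified illustration: it omits the back-transformation bias corrections, covariates, and survey features that the actual ISU/NCI/MSM/SPADE implementations handle, and the simulated intakes are invented.

```python
import numpy as np

def usual_intake_shrinkage(x, lam=0.0):
    """Very simplified usual-intake sketch: Box-Cox transform repeated 24-h
    recalls (rows = persons, cols = days), split variance into between- and
    within-person parts, shrink person means, and back-transform."""
    t = np.log(x) if lam == 0 else (x**lam - 1.0) / lam   # Box-Cox
    n, d = t.shape
    person_mean = t.mean(axis=1)
    grand_mean = person_mean.mean()
    within = ((t - person_mean[:, None]) ** 2).sum() / (n * (d - 1))
    between = max(person_mean.var(ddof=1) - within / d, 0.0)
    w = between / (between + within / d)                   # shrinkage weight
    shrunk = grand_mean + w * (person_mean - grand_mean)
    return np.exp(shrunk) if lam == 0 else (lam * shrunk + 1.0) ** (1.0 / lam)

rng = np.random.default_rng(1)
truth = rng.lognormal(4.0, 0.3, size=500)                       # usual intakes
recalls = truth[:, None] * rng.lognormal(0.0, 0.5, size=(500, 2))  # 2 x 24-HDR
est = usual_intake_shrinkage(recalls)
```

The shrunken estimates have a narrower spread than the raw 2-day means, which is exactly why the 2-day within-person mean overstates the tails of the usual intake distribution.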

  9. Local dark matter and dark energy as estimated on a scale of ~1 Mpc in a self-consistent way

    NASA Astrophysics Data System (ADS)

    Chernin, A. D.; Teerikorpi, P.; Valtonen, M. J.; Dolgachev, V. P.; Domozhilova, L. M.; Byrd, G. G.

    2009-12-01

    Context: Dark energy was first detected from large distances on gigaparsec scales. If it is vacuum energy (or Einstein's Λ), it should also exist in very local space. Here we discuss its measurement on megaparsec scales of the Local Group. Aims: We combine the modified Kahn-Woltjer method for the Milky Way-M 31 binary and the HST observations of the expansion flow around the Local Group in order to study, in a self-consistent way and simultaneously, the local density of dark energy and the dark matter mass contained within the Local Group. Methods: A theoretical model is used that accounts for the dynamical effects of dark energy on a scale of ~1 Mpc. Results: The local dark energy density is constrained to the range 0.8-3.7ρv (ρv is the globally measured density), and the Local Group mass lies within 3.1-5.8×10^12 M⊙. The lower limit of the local dark energy density, about 4/5 of the global value, is determined by the natural binding condition for the group binary and the maximal zero-gravity radius. The near coincidence of two values measured with independent methods on scales differing by ~1000 times is remarkable. The mass ~4×10^12 M⊙ and the local dark energy density ~ρv are also consistent with the expansion flow close to the Local Group, within the standard cosmological model. Conclusions: One should take into account the dark energy in dynamical mass estimation methods for galaxy groups, including the virial theorem. Our analysis gives new strong evidence in favor of Einstein's idea of the universal antigravity described by the cosmological constant.
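
The zero-gravity radius invoked here is the distance at which a mass's Newtonian attraction is balanced by dark-energy antigravity. A short sketch (the Local Group mass and the global dark-energy density below are assumed round numbers, used only to show the scale that comes out):

```python
import math

M_SUN = 1.989e30    # kg
MPC = 3.086e22      # m
RHO_V = 6.6e-27     # kg/m^3, global dark-energy density (assumed round value)

def zero_gravity_radius(mass_kg, rho_v=RHO_V):
    """Radius where the gravity of a point mass balances dark-energy
    antigravity: G M / R^2 = (8 pi G / 3) rho_v R, hence
    R = (3 M / (8 pi rho_v))**(1/3)."""
    return (3.0 * mass_kg / (8.0 * math.pi * rho_v)) ** (1.0 / 3.0)

r_zg = zero_gravity_radius(4e12 * M_SUN)   # Local Group mass ~4e12 M_sun
r_zg_mpc = r_zg / MPC                      # on the order of 1-2 Mpc
```

That the answer falls at megaparsec scale is the point of the abstract: a bound group binary must fit inside its zero-gravity radius, which ties the local dark energy density to the group mass.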

  10. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to adequate assessment of its mechanical effects on differential settlement of large continuous structure foundations. These analyses should be derived using an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a stratum of silty clay in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precisions and sources of uncertainty. Single CPT soundings were modeled as rational probability density curves by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within a Bayesian inverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and Bayesian methods. The differences between single CPT samplings under a normal distribution and simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculations illustrate the significance of stochastic Es characterization in a stratum and identify limitations associated with inadequate geostatistical interpolation techniques. These characterization results provide a multi-precision information assimilation method for other geotechnical parameters.

  11. Density and Biomass Estimates by Removal for an Amazonian Crocodilian, Paleosuchus palpebrosus.

    PubMed

    Campos, Zilca; Magnusson, William E

    2016-01-01

    Direct counts of crocodilians are rarely feasible and it is difficult to meet the assumptions of mark-recapture methods for most species in most habitats. Catch-out experiments are also usually not logistically or morally justifiable because it would be necessary to destroy the habitat in order to be confident that most individuals had been captured. We took advantage of the draining and filling of a large area of flooded forest during the building of the Santo Antônio dam on the Madeira River to obtain accurate estimates of the density and biomass of Paleosuchus palpebrosus. The density, 28.4 non-hatchling individuals per km2, is one of the highest reported for any crocodilian, except for species that are temporarily concentrated in small areas during dry-season drought. The biomass estimate of 63.15 kg per km2 is higher than that for most or even all mammalian carnivores in tropical forest. P. palpebrosus may be one of the world's most abundant crocodilians.

  12. Age changes in the bone density and structure of the lumbar vertebral column.

    PubMed Central

    Twomey, L; Taylor, J; Furniss, B

    1983-01-01

    Old age is associated with a decline in bone density in lumbar vertebral bodies in both sexes, although the rate and amount of the decline is greatest in females. The bone translucency index method, described in this study, is a sensitive method of estimating bone density. The primary reason for this decline is the significant decrease in the number of transverse trabeculae of lumbar vertebrae in old age. It is postulated that the increase in vertebral end plate concavity and the increased horizontal dimensions of lumbar vertebral bodies in old age follows as a direct consequence of the selective loss of the transverse trabeculae. PMID:6833115

  13. Heating of an Erupting Prominence Associated with a Solar Coronal Mass Ejection on 2012 January 27

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Jin-Yi; Moon, Yong-Jae; Kim, Kap-Sung

    2017-07-20

    We investigate the heating of an erupting prominence and loops associated with a coronal mass ejection and X-class flare. The prominence is seen as absorption in EUV at the beginning of its eruption. Later, the prominence changes to emission, which indicates heating of the erupting plasma. We find the densities of the erupting prominence using the absorption properties of hydrogen and helium in different passbands. We estimate the temperatures and densities of the erupting prominence and loops seen as emission features using the differential emission measure method, which uses both EUV and X-ray observations from the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory and the X-ray Telescope on board Hinode. We consider synthetic spectra using both photospheric and coronal abundances in these calculations. We verify the methods for the estimation of temperatures and densities for the erupting plasmas. Then, we estimate the thermal, kinetic, radiative loss, thermal conduction, and heating energies of the erupting prominence and loops. We find that the heating of the erupting prominence and loop occurs strongly at early times in the eruption. This event shows a writhing motion of the erupting prominence, which may indicate a hot flux rope heated by thermal energy release during magnetic reconnection.

  14. An extension of the Saltykov method to quantify 3D grain size distributions in mylonites

    NASA Astrophysics Data System (ADS)

    Lopez-Sanchez, Marco A.; Llana-Fúnez, Sergio

    2016-12-01

    The estimation of 3D grain size distributions (GSDs) in mylonites is key to understanding the rheological properties of crystalline aggregates and to constraining dynamic recrystallization models. This paper investigates whether a common stereological method, the Saltykov method, is appropriate for the study of GSDs in mylonites. In addition, we present a new stereological method, named the two-step method, which estimates a lognormal probability density function describing the 3D GSD. Both methods are tested for reproducibility and accuracy using natural and synthetic data sets. The main conclusion is that both methods are accurate and simple enough to be systematically used in recrystallized aggregates with near-equant grains. The Saltykov method is particularly suitable for estimating the volume percentage of particular grain-size fractions with an absolute uncertainty of ±5 in the estimates. The two-step method is suitable for quantifying the shape of the actual 3D GSD in recrystallized rocks using a single value, the multiplicative standard deviation (MSD) parameter, and providing a precision in the estimate typically better than 5%. The novel method provides an MSD value in recrystallized quartz that differs from previous estimates based on apparent 2D GSDs, highlighting the inconvenience of using apparent GSDs for such tasks.
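
The classic Saltykov unfolding that the two-step method builds on can be sketched for spherical grains. The class edges and the synthetic self-check below are illustrative assumptions; the published two-step method additionally fits a lognormal to the unfolded distribution and reports its shape as the MSD parameter.

```python
import numpy as np

def section_matrix(edges):
    """A[i, j]: expected sections per unit area in apparent-diameter class i
    produced by unit number density of spheres with diameter edges[j+1]
    (classic Scheil-Schwartz-Saltykov kernel, spherical grains assumed)."""
    m = len(edges) - 1
    A = np.zeros((m, m))
    for j in range(m):
        D = edges[j + 1]
        for i in range(j + 1):
            lo, hi = edges[i], min(edges[i + 1], D)
            p = (np.sqrt(D**2 - lo**2) - np.sqrt(max(D**2 - hi**2, 0.0))) / D
            A[i, j] = D * p
    return A

def saltykov(n_a, edges):
    """Unfold per-class apparent (2-D section) counts n_a into 3-D number
    densities by back-substitution, starting from the largest class."""
    A = section_matrix(edges)
    m = len(n_a)
    n_v = np.zeros(m)
    for j in range(m - 1, -1, -1):
        n_v[j] = (n_a[j] - A[j, j + 1:] @ n_v[j + 1:]) / A[j, j]
    return np.clip(n_v, 0.0, None)

# Self-check with a synthetic forward model: spheres only in the largest class
# produce a spread of smaller apparent sections, which unfolding undoes.
edges = np.linspace(0.0, 10.0, 11)
true_nv = np.zeros(10)
true_nv[9] = 1.0
n_a = section_matrix(edges) @ true_nv
recovered = saltykov(n_a, edges)
```

Because the section matrix is triangular, the unfolding is an exact back-substitution on noise-free data; with real measurements the clip guards against the small negative classes that counting noise produces.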

  15. Innovative Methods for Estimating Densities and Detection Probabilities of Secretive Reptiles Including Invasive Constrictors and Rare Upland Snakes

    DTIC Science & Technology

    2018-01-30

    Department of Defense Legacy Resource Management Program, Agreement # W9132T-14-2-0010 (Project # 14-754). Authors: John D. Willson, Ph.D.; Shannon Pittman, Ph.D. Distribution: publicly available. Abstract (truncated): This project demonstrates the broad applicability of a novel simulation

  16. SnagPRO: snag and tree sampling and analysis methods for wildlife

    Treesearch

    Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...

  17. Passive acoustic measurement of bedload grain size distribution using self-generated noise

    NASA Astrophysics Data System (ADS)

    Petrut, Teodor; Geay, Thomas; Gervaise, Cédric; Belleudy, Philippe; Zanker, Sebastien

    2018-01-01

    Monitoring sediment transport processes in rivers is of particular interest to engineers and scientists to assess the stability of rivers and hydraulic structures. Various methods for describing sediment transport processes have been proposed, using conventional or surrogate measurement techniques. This paper addresses the topic of the passive acoustic monitoring of bedload transport in rivers and especially the estimation of the bedload grain size distribution from self-generated noise. It discusses the feasibility of linking the acoustic signal spectrum shape to bedload grain sizes involved in elastic impacts with the river bed treated as a massive slab. Bedload grain size distribution is estimated by a regularized algebraic inversion scheme fed with the power spectrum density of river noise estimated from one hydrophone. The inversion methodology relies upon a physical model that predicts the acoustic field generated by the collision between rigid bodies. Here we proposed an analytic model of the acoustic energy spectrum generated by the impacts between a sphere and a slab. The proposed model computes the power spectral density of bedload noise using a linear system of analytic energy spectra weighted by the grain size distribution. The algebraic system of equations is then solved by least square optimization and solution regularization methods. The result of inversion leads directly to the estimation of the bedload grain size distribution. The inversion method was applied to real acoustic data from passive acoustic experiments carried out on the Isère River, France. The inversion of in situ measured spectra yields good estimates of the grain size distribution, fairly close to what was estimated by physical sampling instruments. These results illustrate the potential of the hydrophone technique to be used as a standalone method that could ensure high spatial and temporal resolution measurements for sediment transport in rivers.
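
The regularized algebraic inversion can be sketched as a nonnegative least-squares problem with Tikhonov regularization. The Gaussian template spectra below are stand-ins: the paper derives its per-class templates from an analytic sphere-slab impact model, and the frequencies, class centers, and weights here are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

def invert_gsd(P, S, lam=1e-2):
    """Estimate grain-size-class weights w >= 0 from an observed power
    spectral density P, given per-class template spectra S (rows = freqs,
    cols = classes): min ||S w - P||^2 + lam ||w||^2, solved by NNLS on
    the Tikhonov-augmented system."""
    n = S.shape[1]
    A = np.vstack([S, np.sqrt(lam) * np.eye(n)])
    b = np.concatenate([P, np.zeros(n)])
    w, _ = nnls(A, b)
    return w

# Toy template spectra: each grain-size class radiates around a different
# frequency (coarser grains -> lower frequencies, schematically).
f = np.linspace(1e3, 1e5, 200)
centers = [5e3, 2e4, 6e4]
S = np.column_stack([np.exp(-((f - c) / (0.3 * c)) ** 2) for c in centers])
w_true = np.array([0.2, 0.5, 0.3])
P = S @ w_true                     # synthetic "measured" river-noise PSD
w_est = invert_gsd(P, S)
```

The nonnegativity constraint plays the role of a physical prior (class weights cannot be negative), and the regularization weight lam trades fidelity against stability when the templates overlap strongly.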

  18. A stepwedge-based method for measuring breast density: observer variability and comparison with human reading

    NASA Astrophysics Data System (ADS)

    Diffey, Jenny; Berks, Michael; Hufton, Alan; Chung, Camilla; Verow, Rosanne; Morrison, Joanna; Wilson, Mary; Boggis, Caroline; Morris, Julie; Maxwell, Anthony; Astley, Susan

    2010-04-01

    Breast density is positively linked to the risk of developing breast cancer. We have developed a semi-automated, stepwedge-based method that has been applied to the mammograms of 1,289 women in the UK breast screening programme to measure breast density by volume and area. 116 images were analysed by three independent operators to assess inter-observer variability; 24 of these were analysed on 10 separate occasions by the same operator to determine intra-observer variability. 168 separate images were analysed using the stepwedge method and by two radiologists who independently estimated percentage breast density by area. There was little intra-observer variability in the stepwedge method (average coefficients of variation 3.49% - 5.73%). There were significant differences in the volumes of glandular tissue obtained by the three operators. This was attributed to variations in the operators' definition of the breast edge. For fatty and dense breasts, there was good correlation between breast density assessed by the stepwedge method and the radiologists. This was also observed between radiologists, despite significant inter-observer variation. Based on analysis of thresholds used in the stepwedge method, radiologists' definition of a dense pixel is one in which the percentage of glandular tissue is between 10 and 20% of the total thickness of tissue.

  19. Predicting defoliation by the gypsy moth using egg mass counts and a helper variable

    Treesearch

    Michael E. Montgomery

    1991-01-01

    Traditionally, counts of egg masses have been used to predict defoliation by the gypsy moth. Regardless of the method and precision used to obtain the counts, estimates of egg mass density alone often do not provide satisfactory predictions of defoliation. Although defoliation levels greater than 50% are seldom observed if egg mass densities are less than 600 per...

  20. Uncertainty in Estimates of Net Seasonal Snow Accumulation on Glaciers from In Situ Measurements

    NASA Astrophysics Data System (ADS)

    Pulwicki, A.; Flowers, G. E.; Radic, V.

    2017-12-01

    Accurately estimating the net seasonal snow accumulation (or "winter balance") on glaciers is central to assessing glacier health and predicting glacier runoff. However, measuring and modeling snow distribution is inherently difficult in mountainous terrain, resulting in high uncertainties in estimates of winter balance. Our work focuses on uncertainty attribution within the process of converting direct measurements of snow depth and density to estimates of winter balance. We collected more than 9000 direct measurements of snow depth across three glaciers in the St. Elias Mountains, Yukon, Canada in May 2016. Linear regression (LR) and simple kriging (SK), combined with cross correlation and Bayesian model averaging, are used to interpolate estimates of snow water equivalent (SWE) from snow depth and density measurements. Snow distribution patterns are found to differ considerably between glaciers, highlighting strong inter- and intra-basin variability. Elevation is found to be the dominant control of the spatial distribution of SWE, but the relationship varies considerably between glaciers. The SWE estimated for one study glacier has a short range parameter (90 m) and both LR and SK estimate a winter balance of 0.6 m w.e. but are poor predictors of SWE at measurement locations. The other two glaciers have longer SWE range parameters (450 m) and due to differences in extrapolation, SK estimates are more than 0.1 m w.e. (up to 40%) lower than LR estimates. A simple parameterization of wind redistribution is also a small but statistically significant predictor of SWE. By using a Monte Carlo method to quantify the effects of various sources of uncertainty, we find that the interpolation of estimated values of SWE is a larger source of uncertainty than the assignment of snow density or than the representation of the SWE value within a terrain model grid cell. For our study glaciers, the total winter balance uncertainty ranges from 0.03 (8%) to 0.15 (54%) m w.e. 
depending primarily on the interpolation method. Despite the challenges associated with accurately and precisely estimating winter balance, our results are consistent with the previously reported regional accumulation gradient.
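
A Monte Carlo treatment of one uncertainty source, the snow-density assignment, can be sketched as follows. All numbers and the simple elevation regression are illustrative; the study propagates interpolation uncertainty as well, which it found to dominate.

```python
import numpy as np

def winter_balance_mc(depth, elev, grid_elev, rho_mean=300.0, rho_sd=30.0,
                      n_draws=1000, seed=0):
    """Monte Carlo sketch of winter-balance uncertainty: convert snow depth
    to SWE with an uncertain bulk snow density (kg/m^3), regress SWE on
    elevation, and average the prediction over the glacier's elevation grid.
    Returns the sampled glacier-wide winter balances (m w.e.)."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(elev), elev])
    Xg = np.column_stack([np.ones_like(grid_elev), grid_elev])
    out = np.empty(n_draws)
    for i in range(n_draws):
        rho = rng.normal(rho_mean, rho_sd)          # density draw
        swe = depth * rho / 1000.0                  # m w.e. (water: 1000 kg/m^3)
        beta, *_ = np.linalg.lstsq(X, swe, rcond=None)
        out[i] = (Xg @ beta).mean()                 # glacier-wide mean balance
    return out

rng = np.random.default_rng(42)
elev = rng.uniform(2000, 2800, 300)                         # probe elevations, m
depth = 0.5 + 0.001 * (elev - 2000) + rng.normal(0, 0.1, 300)  # snow depth, m
grid_elev = rng.uniform(2000, 2800, 1000)                   # glacier DEM cells
wb = winter_balance_mc(depth, elev, grid_elev)
```

The spread of `wb` quantifies how much the density assumption alone moves the glacier-wide balance; adding interpolation and gridding error terms to the loop would extend the same scheme to the full attribution the paper performs.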

  1. Field trials of line transect methods applied to estimation of desert tortoise abundance

    USGS Publications Warehouse

    Anderson, David R.; Burnham, Kenneth P.; Lubow, Bruce C.; Thomas, L. E. N.; Corn, Paul Stephen; Medica, Philip A.; Marlow, R.W.

    2001-01-01

    We examine the degree to which field observers can meet the assumptions underlying line transect sampling to monitor populations of desert tortoises (Gopherus agassizii). We present the results of 2 field trials using artificial tortoise models in 3 size classes. The trials were conducted on 2 occasions on an area south of Las Vegas, Nevada, where the density of the test population was known. In the first trials, conducted largely by experienced biologists who had been involved in tortoise surveys for many years, the density of adult tortoise models was well estimated (-3.9% bias), while the bias was higher (-20%) for subadult tortoise models. The bias for combined data was -12.0%. The bias was largely attributed to the failure to detect all tortoise models on or near the transect centerline. The second trials were conducted with a group of largely inexperienced student volunteers and used somewhat different searching methods, and the results were similar to the first trials. Estimated combined density of subadult and adult tortoise models had a negative bias (-7.3%), again attributable to failure to detect some models on or near the centerline. Experience in desert tortoise biology, either comparing the first and second trials or in the second trial with 2 experienced biologists versus 16 novices, did not have an apparent effect on the quality of the data or the accuracy of the estimates. Observer training, specific to line transect sampling, and field testing are important components of a reliable survey. Line transect sampling represents a viable method for large-scale monitoring of populations of desert tortoise; however, field protocol must be improved to assure the key assumptions are met.
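
The estimator being field-tested here is conventional distance sampling; with a half-normal detection function it reduces to a few lines. The detection distances and transect length below are hypothetical, chosen only to show the mechanics.

```python
import math

def line_transect_density(distances, total_length):
    """Distance-sampling density estimate with a half-normal detection
    function g(x) = exp(-x^2 / (2 sigma^2)). The MLE is sigma^2 = mean(x^2);
    effective strip width ESW = sigma * sqrt(pi/2); density = n / (2 L ESW)."""
    n = len(distances)
    sigma2 = sum(x * x for x in distances) / n
    esw = math.sqrt(sigma2) * math.sqrt(math.pi / 2.0)
    return n / (2.0 * total_length * esw)

# Hypothetical trial: 40 tortoise models detected along 10 km of transect,
# perpendicular distances in metres.
dists = [0.5, 1, 1, 2, 2, 3, 3, 3, 4, 4] * 4
d_hat = line_transect_density(dists, 10_000.0)   # animals per m^2
d_per_km2 = d_hat * 1e6
```

The trials' negative bias maps directly onto this formula: missed models on or near the centerline violate the g(0) = 1 assumption, inflating the fitted sigma and ESW and so deflating the density estimate.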

  2. Breeding value prediction for production traits in layer chickens using pedigree or genomic relationships in a reduced animal model.

    PubMed

    Wolc, Anna; Stricker, Chris; Arango, Jesus; Settar, Petek; Fulton, Janet E; O'Sullivan, Neil P; Preisinger, Rudolf; Habier, David; Fernando, Rohan; Garrick, Dorian J; Lamont, Susan J; Dekkers, Jack C M

    2011-01-21

    Genomic selection involves breeding value estimation of selection candidates based on high-density SNP genotypes. To quantify the potential benefit of genomic selection, accuracies of estimated breeding values (EBV) obtained with different methods using pedigree or high-density SNP genotypes were evaluated and compared in a commercial layer chicken breeding line. The following traits were analyzed: egg production, egg weight, egg color, shell strength, age at sexual maturity, body weight, albumen height, and yolk weight. Predictions appropriate for early or late selection were compared. A total of 2,708 birds were genotyped for 23,356 segregating SNP, including 1,563 females with records. Phenotypes on relatives without genotypes were incorporated in the analysis (in total 13,049 production records). The data were analyzed with a Reduced Animal Model using a relationship matrix based on pedigree data or on marker genotypes and with a Bayesian method using model averaging. Using a validation set that consisted of individuals from the generation following training, these methods were compared by correlating EBV with phenotypes corrected for fixed effects, selecting the top 30 individuals based on EBV and evaluating their mean phenotype, and by regressing phenotypes on EBV. Using high-density SNP genotypes increased accuracies of EBV up to two-fold for selection at an early age and by up to 88% for selection at a later age. Accuracy increases at an early age can be mostly attributed to improved estimates of parental EBV for shell quality and egg production, while for other egg quality traits it is mostly due to improved estimates of Mendelian sampling effects. A relatively small number of markers was sufficient to explain most of the genetic variation for egg weight and body weight.
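    The marker-based relationship matrix mentioned above is commonly built from centred genotype codes (for example, VanRaden's method 1). A hedged sketch with a tiny made-up genotype matrix; the function name and data are illustrative, not taken from the study:

```python
import numpy as np

def vanraden_G(genotypes):
    """Genomic relationship matrix (VanRaden 2008, method 1).
    genotypes: (n_individuals, n_snps) array coded 0/1/2 copies of one allele."""
    p = genotypes.mean(axis=0) / 2.0               # allele frequencies
    Z = genotypes - 2.0 * p                        # centre each SNP by 2p
    denom = 2.0 * np.sum(p * (1.0 - p))            # scales G to the pedigree-A scale
    return Z @ Z.T / denom

# Tiny hypothetical example: 4 birds, 6 SNPs.
M = np.array([[0, 1, 2, 1, 0, 2],
              [1, 1, 2, 0, 0, 2],
              [2, 0, 0, 1, 1, 0],
              [2, 1, 0, 2, 1, 0]], dtype=float)
G = vanraden_G(M)
print(np.round(G, 2))
```

    The resulting G replaces the pedigree relationship matrix A in the animal model equations.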

  3. The association of very-low-density lipoprotein with ankle-brachial index in peritoneal dialysis patients with controlled serum low-density lipoprotein cholesterol level

    PubMed Central

    2013-01-01

    Background Peripheral artery disease (PAD) represents atherosclerotic disease and is a risk factor for death in peritoneal dialysis (PD) patients, who tend to show an atherogenic lipid profile. In this study, we investigated the relationship between lipid profile and ankle-brachial index (ABI) as an index of atherosclerosis in PD patients with controlled serum low-density lipoprotein (LDL) cholesterol level. Methods Thirty-five PD patients, whose serum LDL cholesterol level was controlled at less than 120 mg/dl, were enrolled in this cross-sectional study in Japan. The proportions of cholesterol level to total cholesterol level (cholesterol proportion) in 20 lipoprotein fractions and the mean size of lipoprotein particles were measured using an improved method, namely, high-performance gel permeation chromatography. Multivariate linear regression analysis was adjusted for diabetes mellitus and cardiovascular and/or cerebrovascular diseases. Results The mean (standard deviation) age was 61.6 (10.5) years; PD vintage, 38.5 (28.1) months; ABI, 1.07 (0.22). A low ABI (0.9 or lower) was observed in 7 patients (low-ABI group). The low-ABI group showed significantly higher cholesterol proportions in the chylomicron fraction and large very-low-density lipoproteins (VLDLs) (Fractions 3–5) than the high-ABI group (ABI>0.9). Adjusted multivariate linear regression analysis showed that ABI was negatively associated with serum VLDL cholesterol level (parameter estimate=-0.00566, p=0.0074); the cholesterol proportions in large VLDLs (Fraction 4, parameter estimate=-3.82, p=0.038; Fraction 5, parameter estimate=-3.62, p=0.0039) and medium VLDL (Fraction 6, parameter estimate=-3.25, p=0.014); and the size of VLDL particles (parameter estimate=-0.0352, p=0.032). Conclusions This study showed that the characteristics of VLDL particles were associated with ABI among PD patients. Lowering serum VLDL level may be an effective therapy against atherosclerosis in PD patients after the control of serum LDL cholesterol level. PMID:24093487

  4. Statistical field estimators for multiscale simulations.

    PubMed

    Eapen, Jacob; Li, Ju; Yip, Sidney

    2005-11-01

    We present a systematic approach for generating smooth and accurate fields from particle simulation data using the notions of statistical inference. As an extension to a parametric representation based on the maximum likelihood technique previously developed for velocity and temperature fields, a nonparametric estimator based on the principle of maximum entropy is proposed for particle density and stress fields. Both estimators are applied to represent molecular dynamics data on shear-driven flow in an enclosure which exhibits a high degree of nonlinear characteristics. We show that the present density estimator is a significant improvement over ad hoc bin averaging and is also free of systematic boundary artifacts that appear in the method of smoothing kernel estimates. Similarly, the velocity fields generated by the maximum likelihood estimator do not show any edge effects that can be erroneously interpreted as slip at the wall. For low Reynolds numbers, the velocity fields and streamlines generated by the present estimator are benchmarked against Newtonian continuum calculations. For shear velocities that are a significant fraction of the thermal speed, we observe a form of shear localization that is induced by the confining boundary.
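    The contrast drawn above between ad hoc bin averaging and smooth field reconstruction can be illustrated with a plain fixed-bandwidth kernel density estimate. This is not the paper's maximum-entropy estimator; it is a generic smoother shown only to make the comparison concrete, with synthetic stand-in data:

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth):
    """Simple fixed-bandwidth Gaussian kernel density estimate (a generic
    smoother, not the paper's maximum-entropy estimator)."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / np.sqrt(2.0 * np.pi)
    return kernels.sum(axis=1) / (len(samples) * bandwidth)

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=2000)       # stand-in for particle positions
grid = np.linspace(-4, 4, 81)

kde = gaussian_kde(samples, grid, bandwidth=0.3)
hist, edges = np.histogram(samples, bins=20, range=(-4, 4), density=True)  # bin averaging

# The kernel estimate is smooth and integrates to ~1 like a proper density.
integral = kde.sum() * (grid[1] - grid[0])
print(round(integral, 2))
```

    Near a wall, a naive kernel estimate leaks mass past the boundary, which is the systematic edge artifact the paper's estimator is designed to avoid.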

  5. Ionosphere Profile Estimation Using Ionosonde & GPS Data in an Inverse Refraction Calculation

    NASA Astrophysics Data System (ADS)

    Psiaki, M. L.

    2014-12-01

    A method has been developed to assimilate ionosonde virtual heights and GPS slant TEC data to estimate the parameters of a local ionosphere model, including estimates of the topside and of latitude and longitude variations. This effort seeks to better assimilate a variety of remote sensing data in order to characterize local (and eventually regional and global) ionosphere electron density profiles. The core calculations involve a forward refractive ray-tracing solution and a nonlinear optimal estimation algorithm that inverts the forward model. The ray-tracing calculations solve a nonlinear two-point boundary value problem for the curved ionosonde or GPS ray path through a parameterized electron density profile. It implements a full 3D solution that can handle the case of a tilted ionosphere. These calculations use Hamiltonian equivalents of the Appleton-Hartree magneto-plasma refraction index model. The current ionosphere parameterization is a modified Booker profile. It has been augmented to include latitude and longitude dependencies. The forward ray-tracing solution yields a given signal's group delay and beat carrier phase observables. An auxiliary set of boundary value problem solutions determines the sensitivities of the ray paths and observables with respect to the parameters of the augmented Booker profile. The nonlinear estimation algorithm compares the measured ionosonde virtual-altitude observables and GPS slant-TEC observables to the corresponding values from the forward refraction model. It uses the parameter sensitivities of the model to iteratively improve its parameter estimates in a way that reduces the residual errors between the measurements and their modeled values. This method has been applied to data from HAARP in Gakona, AK and has produced good TEC and virtual height fits.
It has been extended to characterize electron density perturbations caused by HAARP heating experiments through the use of GPS slant TEC data for an LOS through the heated zone. The next planned extension of the method is to estimate the parameters of a regional ionosphere profile. The input observables will be slant TEC from an array of GPS receivers and group delay and carrier phase observables from an array of high-frequency beacons. The beacon array will function as a sort of multi-static ionosonde.
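    The estimation loop described above (compare measurements with the forward model, then use the parameter sensitivities to improve the estimates) is, in essence, a Gauss-Newton iteration. A toy sketch with a made-up two-parameter forward model standing in for the ray-tracing solution; all names and numbers here are illustrative assumptions:

```python
import numpy as np

def gauss_newton(forward, jacobian, y_meas, p0, n_iter=10):
    """Generic Gauss-Newton inversion: iteratively update parameters p so the
    residuals y_meas - forward(p) shrink, using the model's parameter
    sensitivities (the Jacobian)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y_meas - forward(p)                    # residuals vs. measurements
        J = jacobian(p)                            # sensitivity matrix
        p = p + np.linalg.solve(J.T @ J, J.T @ r)  # normal-equations step
    return p

# Hypothetical forward model: a two-parameter peak f(x) = a * exp(-(x - b)^2),
# a crude stand-in for a parameterized electron density profile.
x = np.linspace(-2, 2, 21)
forward = lambda p: p[0] * np.exp(-(x - p[1])**2)
jacobian = lambda p: np.column_stack([np.exp(-(x - p[1])**2),
                                      p[0] * 2 * (x - p[1]) * np.exp(-(x - p[1])**2)])
p_true = np.array([3.0, 0.5])
p_hat = gauss_newton(forward, jacobian, forward(p_true), p0=[1.0, 0.0])
print(np.round(p_hat, 3))
```

    With noiseless synthetic observables the iteration recovers the generating parameters, which mirrors the fit-quality checks reported for the HAARP data.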

  6. Parenchymal Texture Analysis in Digital Breast Tomosynthesis for Breast Cancer Risk Estimation: A Preliminary Study

    PubMed Central

    Kontos, Despina; Bakic, Predrag R.; Carton, Ann-Katherine; Troxel, Andrea B.; Conant, Emily F.; Maidment, Andrew D.A.

    2009-01-01

    Rationale and Objectives Studies have demonstrated a relationship between mammographic parenchymal texture and breast cancer risk. Although promising, texture analysis in mammograms is limited by tissue superimposition. Digital breast tomosynthesis (DBT) is a novel tomographic x-ray breast imaging modality that alleviates the effect of tissue superimposition, offering superior parenchymal texture visualization compared to mammography. Our study investigates the potential advantages of DBT parenchymal texture analysis for breast cancer risk estimation. Materials and Methods DBT and digital mammography (DM) images of 39 women were analyzed. Texture features, shown in studies with mammograms to correlate with cancer risk, were computed from the retroareolar breast region. We compared the relative performance of DBT and DM texture features in correlating with two measures of breast cancer risk: (i) the Gail and Claus risk estimates, and (ii) mammographic breast density. Linear regression was performed to model the association between texture features and increasing levels of risk. Results No significant correlation was detected between parenchymal texture and the Gail and Claus risk estimates. Significant correlations were observed between texture features and breast density. Overall, the DBT texture features demonstrated stronger correlations with breast percent density (PD) than DM (p ≤0.05). When dividing our study population in groups of increasing breast PD, the DBT texture features appeared to be more discriminative, having regression lines with overall lower p-values, steeper slopes, and higher R2 estimates. Conclusion Although preliminary, our results suggest that DBT parenchymal texture analysis could provide more accurate characterization of breast density patterns, which could ultimately improve breast cancer risk estimation. PMID:19201357

  7. Assimilation of thermospheric measurements for ionosphere-thermosphere state estimation

    NASA Astrophysics Data System (ADS)

    Miladinovich, Daniel S.; Datta-Barua, Seebany; Bust, Gary S.; Makela, Jonathan J.

    2016-12-01

    We develop a method that uses data assimilation to estimate ionospheric-thermospheric (IT) states during midlatitude nighttime storm conditions. The algorithm Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE) uses time-varying electron densities in the F region, derived primarily from total electron content data, to estimate two drivers of the IT: neutral winds and electric potential. A Kalman filter is used to update background models based on ingested plasma densities and neutral wind measurements. This is the first time a Kalman filtering technique is used with the EMPIRE algorithm and the first time neutral wind measurements from 630.0 nm Fabry-Perot interferometers (FPIs) are ingested to improve estimates of storm time ion drifts and neutral winds. The effects of assimilating remotely sensed neutral winds from FPI observations are studied by comparing results of ingesting: electron densities (N) only, N plus half the measurements from a single FPI, and then N plus all of the FPI data. While estimates of ion drifts and neutral winds based on N give estimates similar to the background models, this study's results show that ingestion of the FPI data can significantly change neutral wind and ion drift estimation away from background models. In particular, once neutral winds are ingested, estimated neutral winds agree more with validation wind data, and estimated ion drifts in the magnetic field-parallel direction are more sensitive to ingestion than the field-perpendicular zonal and meridional directions. Also, data assimilation with FPI measurements helps provide insight into the effects of contamination on 630.0 nm emissions experienced during geomagnetic storms.
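    The Kalman measurement update at the heart of such assimilation can be sketched in a few lines. The numbers below are hypothetical (a two-component wind state with an FPI-like instrument observing only one component), not EMPIRE's actual state vector:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Single Kalman measurement update: blend a background state estimate x
    (covariance P) with an observation z = H x + noise (covariance R)."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ (z - H @ x)         # pull the state toward the measurement
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Hypothetical 2-state example: [zonal wind, meridional wind] in m/s, with an
# FPI-like instrument observing only the zonal component.
x = np.array([50.0, -20.0])             # background model winds
P = np.diag([400.0, 400.0])             # background uncertainty
H = np.array([[1.0, 0.0]])              # observe zonal wind only
R = np.array([[100.0]])                 # measurement noise
z = np.array([80.0])                    # observed zonal wind

x_new, P_new = kalman_update(x, P, z, H, R)
print(np.round(x_new, 1))
```

    Note how ingesting the observation moves the observed component well away from the background while shrinking its variance, echoing the paper's finding that FPI ingestion significantly changes the wind estimates.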

  8. Estimating the potential biodiversity impact of redeveloping small urban spaces: the Natural History Museum’s grounds

    PubMed Central

    Knapp, Sandra; Purvis, Andy

    2017-01-01

    Background With the increase in human population, and the growing realisation of the importance of urban biodiversity for human wellbeing, the ability to predict biodiversity loss or gain as a result of land use change within urban settings is important. Most models that link biodiversity and land use are at too coarse a scale for informing decisions, especially those related to planning applications. Using the grounds of the Natural History Museum, London, we show how methods used in global models can be applied to smaller spatial scales to inform urban planning. Methods Data were extracted from relevant primary literature where species richness had been recorded in more than one habitat type within an urban setting. As within-sample species richness will increase with habitat area, species richness estimates were also converted to species density using theory based on the species–area relationship. Mixed-effects models were used to model the impact on species richness and species density of different habitat types, and to estimate these metrics in the current grounds and under proposed plans for redevelopment. We compared effects of three assumptions on how within-sample diversity scales with habitat area as a sensitivity analysis. A pre-existing database recording plants within the grounds was also used to estimate changes in species composition across different habitats. Results Analysis estimated that the proposed plans would result in an increase of average biodiversity of between 11.2% (when species density was modelled) and 14.1% (when within-sample species richness was modelled). Plant community composition was relatively similar between the habitats currently within the grounds. Discussion The proposed plans for change in the NHM grounds are estimated to result in a net gain in average biodiversity, through increased number and extent of high-diversity habitats. 
In future, our method could be improved by incorporating purposefully collected ecological survey data (if resources permit) and by expanding the data sufficiently to allow modelling of the temporal dynamics of biodiversity change after habitat disturbance and creation. Even in its current form, the method produces transparent quantitative estimates, grounded in ecological data and theory, which can be used to inform relatively small scale planning decisions. PMID:29104821
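    The richness-to-density conversion described in the Methods can be sketched with the power-law species-area relationship S = cA^z. The exponent z = 0.25 and all numbers below are illustrative assumptions, not values from the study:

```python
def species_density(observed_richness, sampled_area, reference_area, z=0.25):
    """Convert within-sample species richness to a species-density estimate at a
    common reference area, using the power-law species-area relationship
    S = c * A**z, which gives S_ref = S_obs * (A_ref / A_obs)**z."""
    return observed_richness * (reference_area / sampled_area) ** z

# Hypothetical example: 40 species recorded in a 500 m^2 sample, rescaled
# to a 100 m^2 reference area for comparison across habitats.
s_ref = species_density(40, sampled_area=500.0, reference_area=100.0)
print(round(s_ref, 1))
```

    Rescaling every sample to a common reference area is what lets richness values from plots of different sizes be compared fairly, which is the role species density plays in the sensitivity analysis above.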

  9. Hydrological parameter estimations from a conservative tracer test with variable-density effects at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.

    2011-12-01

    Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.

  10. Optimization of the lithium/thionyl chloride battery

    NASA Technical Reports Server (NTRS)

    White, Ralph E.

    1989-01-01

    A 1-D math model for the lithium/thionyl chloride primary cell is used in conjunction with a parameter estimation technique in order to estimate the electro-kinetic parameters of this electrochemical system. The electro-kinetic parameters include the anodic transfer coefficient and exchange current density of the lithium oxidation, alpha sub a,1 and i sub o,i,ref, the cathodic transfer coefficient and the effective exchange current density of the thionyl chloride reduction, alpha sub c,2 and a sup o i sub o,2,ref, and a morphology parameter, Xi. The parameter estimation is performed on simulated data first in order to gain confidence in the method. Data, reported in the literature, for a high rate discharge of an experimental lithium/thionyl chloride cell is used for an analysis.

  11. Erosion and Deposition Monitoring Using High-Density Aerial Lidar and Geomorphic Change Detection Software Analysis at Los Alamos National Laboratory, Los Alamos New Mexico, LA-UR-17-26743

    NASA Astrophysics Data System (ADS)

    Walker, T.; Kostrubala, T. L.; Muggleton, S. R.; Veenis, S.; Reid, K. D.; White, A. B.

    2017-12-01

    The Los Alamos National Laboratory storm water program installed sediment transport mitigation structures to reduce the migration of contaminants within the Los Alamos and Pueblo (LA/P) watershed in Los Alamos, NM. The goals of these structures are to minimize storm water runoff and erosion, enhance deposition, and reduce mobility of contaminated sediments. Previous geomorphological monitoring used GPS surveyed cross-sections on a reach scale to interpolate annual geomorphic change in sediment volumes. While monitoring has confirmed the LA/P watershed structures are performing as designed, the cross-section method made it difficult to estimate uncertainty, and its coverage area was limited. A new method, using the Geomorphic Change Detection (GCD) plugin for ESRI ArcGIS developed by Wheaton et al. (2010), with high-density aerial lidar data, has been used to provide high confidence uncertainty estimates and greater areal coverage. Following the 2014 monsoon season, airborne lidar data has been collected annually and the resulting DEMs processed using the GCD method. Additionally, a more accurate characterization of low-amplitude geomorphic changes, typical of low-flow/low-rainfall monsoon years, has been documented by applying a spatially variable error to volume change calculations using the GCD based fuzzy inference system (FIS). The FIS method allows for the calculation of uncertainty based on data set quality and density, e.g., point cloud density, ground slope, and degree of surface roughness. At the 95% confidence level, propagated uncertainty estimates of the 2015 and 2016 lidar DEM comparisons yielded detectable changes greater than 0.3–0.46 m. Geomorphic processes identified and verified in the field are typified by low-amplitude, within-channel aggradation and incision and out-of-channel bank collapse that over the course of a monsoon season result in localized and detectable change. While the resulting reach-scale volume change from 2015–2016 was often nonsignificant, it is estimated with a higher degree of confidence than with the previous cross-section/interpolation method. Comparisons of DEMs from recent monsoon seasons with low-intensity rainfall and low peak storm discharges have established that the expected geomorphic change is minor and localized, yet demonstrable.
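    The DEM-of-difference thresholding idea above reduces to a few lines: propagate per-cell errors in quadrature and zero out changes below the minimum level of detection. The arrays and error values in this sketch are hypothetical, and a real FIS would assign the per-cell errors from point density, slope, and roughness:

```python
import numpy as np

def dod_with_threshold(dem_new, dem_old, err_new, err_old, t=1.96):
    """DEM of Difference with a per-cell minimum level of detection:
    propagated error = sqrt(e_new^2 + e_old^2); changes smaller than
    t * propagated error (95% confidence for t = 1.96) are treated as noise."""
    dod = dem_new - dem_old
    lod = t * np.sqrt(err_new**2 + err_old**2)      # minimum level of detection
    detected = np.where(np.abs(dod) >= lod, dod, 0.0)
    return dod, lod, detected

# Hypothetical 1D cross-section, elevations in metres; per-cell errors stand in
# for an FIS-style quality assessment (lower-quality cells get larger errors).
old = np.array([100.0, 100.2, 100.1, 100.4])
new = np.array([100.5, 100.25, 99.5, 100.42])
e_old = np.array([0.10, 0.10, 0.15, 0.20])
e_new = np.array([0.10, 0.10, 0.15, 0.20])

dod, lod, detected = dod_with_threshold(new, old, e_new, e_old)
print(np.round(detected, 2))
```

    Cells whose apparent change falls inside the detection limit contribute nothing to the volume budget, which is how the method keeps low-amplitude noise out of the erosion and deposition totals.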

  12. The charge transfer electronic coupling in diabatic perspective: A multi-state density functional theory study

    NASA Astrophysics Data System (ADS)

    Guo, Xinwei; Qu, Zexing; Gao, Jiali

    2018-01-01

    The multi-state density functional theory (MSDFT) provides a convenient way to estimate electronic coupling of charge transfer processes based on a diabatic representation. Its performance has been benchmarked against the HAB11 database with a mean unsigned error (MUE) of 17 meV between MSDFT and ab initio methods. The small difference may be attributed to different representations, diabatic from MSDFT and adiabatic from ab initio calculations. In this discussion, we conclude that MSDFT provides a general and efficient way to estimate the electronic coupling for charge-transfer rate calculations based on the Marcus-Hush model.

  13. The use of copulas for practical estimation of multivariate stochastic differential equation mixed effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupšys, P.

    A system of stochastic differential equations (SDE) with mixed-effects parameters and a multivariate normal copula density function were used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
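    The bivariate normal (Gaussian) copula used here couples two margins through correlated normal scores. A small sketch of sampling from such a copula; the correlation value is an illustrative assumption, not the fitted diameter-height dependence:

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def sample_gaussian_copula(n, rho, rng):
    """Draw n pairs (u, v) from a bivariate Gaussian copula with correlation rho:
    sample correlated standard normals, then push each margin through the normal
    CDF so both marginals are Uniform(0, 1) while the dependence is preserved."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return np.vectorize(norm_cdf)(xy)

rng = np.random.default_rng(42)
uv = sample_gaussian_copula(20_000, rho=0.7, rng=rng)
# Each margin is uniform (mean ~0.5) while the columns remain dependent.
print(np.round(uv.mean(axis=0), 2))
```

    In a model like the paper's, the uniform margins would then be mapped through the fitted marginal distributions of diameter and height, so the copula carries only the dependence structure.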

  14. Representation of Probability Density Functions from Orbit Determination using the Particle Filter

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell

    2012-01-01

    Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using the Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining higher order statistical information obtained using the PF. Methods such as the Principal Component Analysis (PCA) are based on utilizing up to second order statistics, hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
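    A bootstrap particle filter measurement update, reduced to a scalar toy state, illustrates how the particle cloud encodes the full PDF rather than just a mean and covariance. All numbers here are hypothetical, not an actual orbit-determination state:

```python
import numpy as np

def particle_filter_update(particles, weights, z, obs_std, rng):
    """One bootstrap particle filter measurement update: reweight particles by
    the Gaussian likelihood of observation z, then resample so the cloud
    represents the posterior PDF (including any non-Gaussian shape)."""
    lik = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
    weights = weights * lik
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(1)
# Hypothetical scalar state (think: one orbital element) with a broad prior cloud.
particles = rng.normal(0.0, 5.0, size=5000)
weights = np.full(5000, 1.0 / 5000)
post, weights = particle_filter_update(particles, weights, z=3.0, obs_std=1.0, rng=rng)
print(round(post.mean(), 1))
```

    Compressing such a cloud with PCA keeps only its second-order structure; the paper's point is that an ICA-based reduction can retain the higher-order information the particles carry.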

  15. Identification of modal parameters including unmeasured forces and transient effects

    NASA Astrophysics Data System (ADS)

    Cauberghe, B.; Guillaume, P.; Verboven, P.; Parloo, E.

    2003-08-01

    In this paper, a frequency-domain method to estimate modal parameters from short data records with known input (measured) forces and unknown input forces is presented. The method can be used for an experimental modal analysis, an operational modal analysis (output-only data) and the combination of both. A traditional experimental and operational modal analysis in the frequency domain starts respectively, from frequency response functions and spectral density functions. To estimate these functions accurately sufficient data have to be available. The technique developed in this paper estimates the modal parameters directly from the Fourier spectra of the outputs and the known input. Instead of using Hanning windows on these short data records the transient effects are estimated simultaneously with the modal parameters. The method is illustrated, tested and validated by Monte Carlo simulations and experiments. The presented method to process short data sequences leads to unbiased estimates with a small variance in comparison to the more traditional approaches.

  16. Inference about density and temporary emigration in unmarked populations

    USGS Publications Warehouse

    Chandler, Richard B.; Royle, J. Andrew; King, David I.

    2011-01-01

    Few species are distributed uniformly in space, and populations of mobile organisms are rarely closed with respect to movement, yet many models of density rely upon these assumptions. We present a hierarchical model allowing inference about the density of unmarked populations subject to temporary emigration and imperfect detection. The model can be fit to data collected using a variety of standard survey methods such as repeated point counts in which removal sampling, double-observer sampling, or distance sampling is used during each count. Simulation studies demonstrated that parameter estimators are unbiased when temporary emigration is either "completely random" or is determined by the size and location of home ranges relative to survey points. We also applied the model to repeated removal sampling data collected on Chestnut-sided Warblers (Dendroica pensylvanica) in the White Mountain National Forest, USA. The density estimate from our model, 1.09 birds/ha, was similar to an estimate of 1.11 birds/ha produced by an intensive spot-mapping effort. Our model is also applicable when processes other than temporary emigration affect the probability of being available for detection, such as in studies using cue counts. Functions to implement the model have been added to the R package unmarked.
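    The core correction in such hierarchical models is that the expected count equals density times area times availability (one minus the temporary emigration probability) times detection probability. A moment-style sketch with hypothetical values; the paper's model estimates all of these components jointly rather than plugging them in:

```python
def density_estimate(mean_count, p_detect, availability, area_ha):
    """Detection- and availability-corrected density. Under the hierarchical
    model, E[count] = D * area * availability * p_detect, so a moment-style
    estimator simply inverts that relationship."""
    return mean_count / (p_detect * availability * area_ha)

# Hypothetical survey: mean count of 0.75 birds per 1-ha plot, detection
# probability 0.8, availability (1 - temporary emigration) 0.86.
d_hat = density_estimate(0.75, p_detect=0.8, availability=0.86, area_ha=1.0)
print(round(d_hat, 2))
```

    Ignoring either correction factor would bias the density low, which is exactly why models that separate availability from detection matter for unmarked populations.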

  17. Regional model-based computerized ionospheric tomography using GPS measurements: IONOLAB-CIT

    NASA Astrophysics Data System (ADS)

    Tuna, Hakan; Arikan, Orhan; Arikan, Feza

    2015-10-01

    Three-dimensional imaging of the electron density distribution in the ionosphere is a crucial task for investigating the ionospheric effects. Dual-frequency Global Positioning System (GPS) satellite signals can be used to estimate the slant total electron content (STEC) along the propagation path between a GPS satellite and a ground-based receiver station. However, the estimated GPS-STEC is too sparse and too nonuniformly distributed to obtain reliable 3-D electron density distributions from the measurements alone. Standard tomographic reconstruction techniques are not accurate or reliable enough to represent the full complexity of the variable ionosphere. On the other hand, model-based electron density distributions are produced according to the general trends of the ionosphere, and these distributions do not agree with measurements, especially for geomagnetically active hours. In this study, a regional 3-D electron density distribution reconstruction method, namely, IONOLAB-CIT, is proposed to assimilate GPS-STEC into physical ionospheric models. The proposed method is based on an iterative optimization framework that tracks the deviations from the ionospheric model in terms of F2 layer critical frequency and maximum ionization height resulting from the comparison of International Reference Ionosphere extended to Plasmasphere (IRI-Plas) model-generated STEC and GPS-STEC. The suggested tomography algorithm is applied successfully for the reconstruction of electron density profiles over Turkey, during quiet and disturbed hours of the ionosphere, using the Turkish National Permanent GPS Network.

  18. Analysis of Ion Composition Estimation Accuracy for Incoherent Scatter Radars

    NASA Astrophysics Data System (ADS)

    Martínez Ledesma, M.; Diaz, M. A.

    2017-12-01

    The Incoherent Scatter Radar (ISR) is one of the most powerful sounding methods developed for estimating the state of the ionosphere. This radar system determines the plasma parameters by sending powerful electromagnetic pulses to the ionosphere and analyzing the received backscatter. This analysis provides information about parameters such as electron and ion temperatures, electron densities, ion composition, and ion drift velocities. Nevertheless, in some cases the ISR analysis has ambiguities in the determination of the plasma characteristics. Of particular relevance is the ion composition and temperature ambiguity between the F1 and the lower F2 layers. In this case very similar signals are obtained with different mixtures of molecular ions (NO2+ and O2+) and atomic oxygen ions (O+), and consequently it is not possible to completely discriminate between them. The most common solution to this problem is the use of empirical or theoretical models of the ionosphere in the fitting of ambiguous data. More recent works make use of parameters estimated from the Plasma Line band of the radar to reduce the number of parameters to determine. In this work we quantify the error of the ion composition estimation when using Plasma Line electron density measurements. The sensitivity of the ion composition estimation to the accuracy of the ionospheric model has also been calculated, showing that correct estimation is highly dependent on the capacity of the model to approximate the real values. Monte Carlo simulations of data fitting at different signal-to-noise ratios (SNR) have been performed to obtain valid and invalid estimation probability curves. This analysis provides a method to determine the probability of erroneous estimation under different signal fluctuations, and it can also be used as an empirical way to compare the efficiency of different algorithms and methods for resolving the ion composition ambiguity.

  19. Ant-inspired density estimation via random walks.

    PubMed

    Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A

    2017-10-03

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
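    The encounter-rate idea can be simulated directly: independent random walkers on a torus count co-locations per step, and that rate concentrates around the true density. A small sketch with arbitrary parameters (grid size, agent count, and step count are illustrative, and this ignores the repeated-collision dependencies the paper analyzes):

```python
import numpy as np

def encounter_rate_density(n_agents, grid_size, n_steps, rng):
    """Agents random-walk on a grid_size x grid_size torus; each agent estimates
    density as (encounters observed) / (steps taken), since per step the chance
    of sharing a cell with any other agent is roughly the global density."""
    pos = rng.integers(0, grid_size, size=(n_agents, 2))
    encounters = np.zeros(n_agents)
    moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])
    for _ in range(n_steps):
        pos = (pos + moves[rng.integers(0, 4, size=n_agents)]) % grid_size
        # Count, for each agent, how many other agents share its cell.
        cell_id = pos[:, 0] * grid_size + pos[:, 1]
        _, inverse, counts = np.unique(cell_id, return_inverse=True, return_counts=True)
        encounters += counts[inverse] - 1
    return encounters / n_steps

rng = np.random.default_rng(7)
est = encounter_rate_density(n_agents=200, grid_size=40, n_steps=400, rng=rng)
true_density = 200 / 40**2                      # 0.125 agents per cell
print(round(est.mean(), 3), true_density)
```

    The paper's contribution is showing that, despite correlated repeat collisions between nearby walkers, such estimates converge nearly as fast as independent sampling of grid cells would.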

  20. TH-CD-202-06: A Method for Characterizing and Validating Dynamic Lung Density Change During Quiet Respiration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dou, T; Ruan, D; Heinrich, M

    2016-06-15

    Purpose: To obtain a functional relationship that calibrates lung tissue density change under free-breathing conditions by correlating Jacobian values with Hounsfield units. Methods: Free-breathing lung computed tomography images were acquired using a fast helical CT protocol, with 25 scans acquired per patient. Using a state-of-the-art deformable registration algorithm, a set of deformation vector fields (DVFs) was generated to provide spatial mapping from the reference image geometry to the other free-breathing scans. These DVFs were used to generate Jacobian maps, which estimate voxelwise volume change. Subsequently, the 25 corresponding pairs of Jacobian value and voxel intensity in Hounsfield units (HU) were collected, and linear regression based on the mass-conservation relationship was performed to correlate volume change with density change. Based on the resulting fitting coefficients, the tissues were classified into parenchymal (Type I), vascular (Type II), and soft tissue (Type III) types. These coefficients modeled the voxelwise density variation during quiet breathing. The accuracy of the proposed method was assessed using the mean absolute difference in HU between the CT scan intensities and the model-predicted values. In addition, validation experiments employing a leave-five-out method were performed to evaluate model accuracy. Results: The computed mean model errors were 23.30±9.54 HU, 29.31±10.67 HU, and 35.56±20.56 HU for tissue types I, II, and III, respectively. The cross-validation experiments, averaged over 100 trials, had a mean error of 30.02±1.67 HU over the entire lung. These mean values were comparable with the estimated CT image background noise. Conclusion: The reported validation statistics confirm the lung density model during free breathing. The proposed technique is general and could be applied to a wide range of problem scenarios where accurate dynamic lung density information is needed. This work was supported in part by NIH R01 CA0096679.
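    The per-voxel regression step can be sketched as follows. Mass conservation implies that, if the tissue mass in a voxel is preserved, density (and hence approximately HU) scales with the inverse of the Jacobian, so a linear fit of HU against 1/J recovers the calibration coefficients. The synthetic "true" coefficients, the noise level, and the Jacobian range below are invented for illustration and are not values from the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_density_model(jacobians, hu_values):
    """Linear regression of CT intensity against inverse Jacobian for one
    voxel across scans, motivated by mass conservation (density ~ 1/J).
    Returns (a, b) in the model HU ~= a * (1/J) + b."""
    x = 1.0 / np.asarray(jacobians, dtype=float)
    a, b = np.polyfit(x, np.asarray(hu_values, dtype=float), 1)
    return a, b

# Synthetic voxel observed in 25 scans: HU = 900/J - 1000, plus noise.
J = rng.uniform(0.8, 1.3, size=25)
hu = 900.0 / J - 1000.0 + rng.normal(0.0, 5.0, size=25)
a, b = fit_density_model(J, hu)
```

    In the same spirit as the abstract's validation, model accuracy for a voxel can then be scored as the mean absolute difference between observed HU and the model prediction `a / J + b`.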

Top