Breast density estimation from high spectral and spatial resolution MRI
Li, Hui; Weiss, William A.; Medved, Milica; Abe, Hiroyuki; Newstead, Gillian M.; Karczmar, Gregory S.; Giger, Maryellen L.
2016-01-01
Abstract. A three-dimensional breast density estimation method is presented for high spectral and spatial resolution (HiSS) MR imaging. Twenty-two patients were recruited (under an Institutional Review Board-approved, Health Insurance Portability and Accountability Act-compliant protocol) for high-risk breast cancer screening. Each patient received standard-of-care clinical digital x-ray mammograms and MR scans, as well as HiSS scans. The algorithm for breast density estimation includes breast mask generation, breast skin removal, and percentage breast density calculation. The inter- and intra-user variabilities of the HiSS-based density estimation were determined using correlation analysis and limits of agreement. Correlation analysis was also performed between the HiSS-based density estimation and radiologists' breast imaging-reporting and data system (BI-RADS) density ratings. A correlation coefficient of 0.91 (p<0.0001) was obtained between left and right breast density estimations. An interclass correlation coefficient of 0.99 (p<0.0001) indicated high reliability for the inter-user variability of the HiSS-based breast density estimations. A moderate correlation coefficient of 0.55 (p=0.0076) was observed between HiSS-based breast density estimations and radiologists' BI-RADS density ratings. In summary, an objective density estimation method using HiSS spectral data from breast MRI was developed. The high reproducibility with low inter- and intra-user variabilities shown in this preliminary study suggests that such a HiSS-based density metric may be beneficial in programs requiring breast density, such as breast cancer risk assessment and monitoring effects of therapy. PMID:28042590
Estimation of density of mongooses with capture-recapture and distance sampling
Corn, J.L.; Conroy, M.J.
1998-01-01
We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (mean = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (mean = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.
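A rough numerical illustration of the conversion from a mark-recapture abundance estimate to density via an effective trap area, and of the coefficient-of-variation reporting used above (a minimal sketch; the web radius, boundary strip, abundance, and standard error below are hypothetical, not values from the study):

```python
# Sketch: converting a mark-recapture abundance estimate to density via an
# effective trap area, and reporting a coefficient of variation.
# All numbers are hypothetical.
import math

def effective_area_ha(web_radius_m: float, boundary_strip_m: float) -> float:
    """Area of a circular trapping web buffered by a boundary strip, in hectares."""
    r = web_radius_m + boundary_strip_m
    return math.pi * r**2 / 10_000.0  # m^2 -> ha

def density_per_ha(n_hat: float, area_ha: float) -> float:
    """Density estimate D = N / A."""
    return n_hat / area_ha

def cv_percent(se: float, estimate: float) -> float:
    """Coefficient of variation, in percent."""
    return 100.0 * se / estimate

# Hypothetical example: N-hat = 18 mongooses on a 100 m web with a 30 m strip.
area = effective_area_ha(web_radius_m=100.0, boundary_strip_m=30.0)
print(f"effective area: {area:.2f} ha")
print(f"density: {density_per_ha(18, area):.2f} mongooses/ha")
print(f"CV: {cv_percent(se=1.4, estimate=18):.1f}%")
```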
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur at low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators
Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.
2003-01-01
Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches.
Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
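A minimal sketch of the grid-based estimator D̂ = N̂/Â discussed above, with Â obtained by buffering the trapping grid by an MMDM-based strip; all input values are hypothetical, and the rounded-corner buffer is one common convention, not necessarily the one used in the study:

```python
# Sketch of the grid-based estimator D = N / A, where the effective sampling
# area A is the trapping grid buffered by the mean maximum distance moved
# (MMDM), as described above. All inputs are hypothetical.
import math

def effective_area_ha(grid_side_m: float, buffer_m: float) -> float:
    """Square grid of side grid_side_m buffered on all sides by buffer_m
    (with rounded corners), returned in hectares."""
    area_m2 = (grid_side_m + 2 * buffer_m) ** 2 \
        - (4 - math.pi) * buffer_m ** 2  # subtract the square-corner excess
    return area_m2 / 10_000.0

n_hat = 42.0          # abundance estimate from a closed-population CMR model
grid_side_m = 140.0   # e.g., an 8 x 8 grid with 20 m spacing
mmdm_m = 35.0         # species-specific full MMDM buffer width

a_hat = effective_area_ha(grid_side_m, mmdm_m)
d_hat = n_hat / a_hat
print(f"A-hat = {a_hat:.2f} ha, D-hat = {d_hat:.2f} animals/ha")
```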
Camera traps and activity signs to estimate wild boar density and derive abundance indices.
Massei, Giovanna; Coats, Julia; Lambert, Mark Simon; Pietravalle, Stephane; Gill, Robin; Cowan, Dave
2018-04-01
Populations of wild boar and feral pigs are increasing worldwide, in parallel with their significant environmental and economic impact. Reliable methods of monitoring trends and estimating abundance are needed to measure the effects of interventions on population size. The main aims of this study, carried out in five English woodlands, were: (i) to compare wild boar abundance indices obtained from camera trap surveys and from activity signs; and (ii) to assess the precision of density estimates in relation to different densities of camera traps. For each woodland, we calculated a passive activity index (PAI) based on camera trap surveys, rooting activity and wild boar trails on transects, and estimated absolute densities based on camera trap surveys. PAIs obtained using different methods showed similar patterns. We found significant between-year differences in abundance of wild boar using PAIs based on camera trap surveys and on trails on transects, but not on signs of rooting on transects. The density of wild boar from camera trap surveys varied between 0.7 and 7 animals/km². Increasing the density of camera traps above nine per km² did not increase the precision of the estimate of wild boar density. PAIs based on number of wild boar trails and on camera trap data appear to be more sensitive to changes in population size than PAIs based on signs of rooting. For wild boar densities similar to those recorded in this study, nine camera traps per km² are sufficient to estimate the mean density of wild boar. © 2017 Crown copyright. Pest Management Science © 2017 Society of Chemical Industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Shangjie; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California; Hara, Wendy
Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.
2013-01-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.
2016-11-15
A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.
Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William
2014-01-01
Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657
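As a hedged illustration of the tree-based approach described above, the sketch below fits a Random Forest regressor (scikit-learn) to synthetic covariates and predicts density in non-sampled areas; the covariates, data, and model settings are placeholders, not the authors' pipeline.

```python
# Sketch of a tree-based small-area density model in the spirit described
# above: fit a Random Forest on areas with known density, then predict in
# non-sampled areas. The covariates and data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(size=(n, 3))          # e.g., night lights, land cover, slope
density = 1000 * X[:, 0] ** 2 + 200 * X[:, 1] + rng.normal(0, 50, n)

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, density, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(3))

model.fit(X, density)
new_areas = rng.uniform(size=(5, 3))  # covariates for non-sampled areas
print("predicted densities:", model.predict(new_areas).round(1))
```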
Sepehrband, Farshid; Clark, Kristi A.; Ullmann, Jeremy F.P.; Kurniawan, Nyoman D.; Leanage, Gayeshika; Reutens, David C.; Yang, Zhengyi
2015-01-01
We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intra-cellular and intra-neurite density values obtained from diffusion MRI. The NODDI (neurite orientation dispersion and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different sub-regions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (Diffusion MRI: 42±6%, 36±4% and 43±5%; electron microscopy: 41±10%, 36±8% and 44±12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers. PMID:26096639
Nonparametric estimation of plant density by the distance method
Patil, S.A.; Burnham, K.P.; Kovner, J.L.
1979-01-01
A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
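For context, a minimal sketch of the classical parametric distance-method estimator for a completely random (Poisson) pattern, where the density estimate is n / (π Σ r_i²); this is not the paper's nonparametric order-statistics estimator, and the distances below are hypothetical:

```python
# The classical parametric distance-method estimator of plant density for a
# Poisson (completely random) pattern: r_i is the distance from the i-th
# random sample point to its nearest plant, and
#   lambda-hat = n / (pi * sum(r_i^2)).
# Shown for context only; the paper develops a nonparametric alternative.
import math

def poisson_distance_density(distances_m):
    """Plants per m^2 from point-to-nearest-plant distances (Poisson model)."""
    n = len(distances_m)
    return n / (math.pi * sum(r * r for r in distances_m))

r = [0.8, 1.1, 0.5, 1.9, 0.7, 1.3, 0.9, 2.2, 0.6, 1.0]  # metres (hypothetical)
print(f"estimated density: {poisson_distance_density(r):.3f} plants/m^2")
```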
Seth Ex; Frederick Smith; Tara Keyser; Stephanie Rebain
2017-01-01
The Forest Vegetation Simulator Fire and Fuels Extension (FFE-FVS) is often used to estimate canopy bulk density (CBD) and canopy base height (CBH), which are key indicators of crown fire hazard for conifer stands in the Western United States. Estimated CBD from FFE-FVS is calculated as the maximum 4 m running mean bulk density of predefined 0.3 m thick canopy layers (...
Estimating canopy bulk density and canopy base height for interior western US conifer stands
Seth A. Ex; Frederick W. Smith; Tara L. Keyser; Stephanie A. Rebain
2016-01-01
Crown fire hazard is often quantified using effective canopy bulk density (CBD) and canopy base height (CBH). When CBD and CBH are estimated using nonlocal crown fuel biomass allometries and uniform crown fuel distribution assumptions, as is common practice, values may differ from estimates made using local allometries and nonuniform...
ERIC Educational Resources Information Center
Woods, Carol M.; Thissen, David
2006-01-01
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…
Demonstration of line transect methodologies to estimate urban gray squirrel density
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hein, E.W.
1997-11-01
Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated.
Evaluation of line transect sampling based on remotely sensed data from underwater video
Bergstedt, R.A.; Anderson, D.R.
1990-01-01
We used underwater video in conjunction with the line transect method and a Fourier series estimator to make 13 independent estimates of the density of known populations of bricks lying on the bottom in shallows of Lake Huron. The pooled estimate of density (95.5 bricks per hectare) was close to the true density (89.8 per hectare), and there was no evidence of bias. Confidence intervals for the individual estimates included the true density 85% of the time instead of the nominal 95%. Our results suggest that reliable estimates of the density of objects on a lake bed can be obtained by the use of remote sensing and line transect sampling theory.
Crajé, Céline; Santello, Marco; Gordon, Andrew M
2013-01-01
Anticipatory force planning during grasping is based on visual cues about the object's physical properties and sensorimotor memories of previous actions with grasped objects. Vision can be used to estimate object mass based on the object size to identify and recall sensorimotor memories of previously manipulated objects. It is not known whether subjects can use density cues to identify the object's center of mass (CM) and create compensatory moments in an anticipatory fashion during initial object lifts to prevent tilt. We asked subjects (n = 8) to estimate CM location of visually symmetric objects of uniform densities (plastic or brass, symmetric CM) and non-uniform densities (mixture of plastic and brass, asymmetric CM). We then asked whether subjects can use density cues to scale fingertip forces when lifting the visually symmetric objects of uniform and non-uniform densities. Subjects were able to accurately estimate an object's center of mass based on visual density cues. When the mass distribution was uniform, subjects could scale their fingertip forces in an anticipatory fashion based on the estimation. However, despite their ability to explicitly estimate CM location when object density was non-uniform, subjects were unable to scale their fingertip forces to create a compensatory moment and prevent tilt on initial lifts. Hefting object parts in the hand before the experiment did not affect this ability. This suggests a dichotomy between the ability to accurately identify the object's CM location for objects with non-uniform density cues and the ability to utilize this information to correctly scale their fingertip forces. These results are discussed in the context of possible neural mechanisms underlying sensorimotor integration linking visual cues and anticipatory control of grasping.
Davis, Amy J; Leland, Bruce; Bodenchuk, Michael; VerCauteren, Kurt C; Pepin, Kim M
2017-06-01
Population density is a key driver of disease dynamics in wildlife populations. Accurate disease risk assessment and determination of management impacts on wildlife populations requires an ability to estimate population density alongside management actions. A common management technique for controlling wildlife populations to monitor and mitigate disease transmission risk is trapping (e.g., box traps, corral traps, drop nets). Although abundance can be estimated from trapping actions using a variety of analytical approaches, inference is limited by the spatial extent to which a trap attracts animals on the landscape. If the "area of influence" were known, abundance estimates could be converted to densities. In addition to being an important predictor of contact rate and thus disease spread, density is more informative because it is comparable across sites of different sizes. The goal of our study is to demonstrate the importance of determining the area sampled by traps (area of influence) so that density can be estimated from management-based trapping designs which do not employ a trapping grid. To provide one example of how area of influence could be calculated alongside management, we conducted a small pilot study on wild pigs (Sus scrofa) using two removal methods, 1) trapping followed by 2) aerial gunning, at three sites in northeast Texas in 2015. We estimated abundance from trapping data with a removal model. We calculated empirical densities as aerial counts divided by the area searched by air (based on aerial flight tracks). We inferred the area of influence of traps by assuming consistent densities across the larger spatial scale and then solving for area impacted by the traps. Based on our pilot study we estimated the area of influence for corral traps in late summer in Texas to be ~8.6 km². Future work showing the effects of behavioral and environmental factors on area of influence will help managers obtain estimates of density from management data, and determine conditions where trap-attraction is strongest. The ability to estimate density alongside population control activities will improve risk assessment and response operations against disease outbreaks. Published by Elsevier B.V.
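A minimal sketch of the back-calculation described above: removal-model abundance divided by the aerially derived density gives the implied area of influence (all numbers are hypothetical, not the study's data):

```python
# If a removal model gives an abundance N around the traps, and an independent
# aerial survey gives a density D for the surrounding area, the implied area
# of influence is N / D. Numbers are hypothetical.

def area_of_influence_km2(n_hat_trapped: float, density_per_km2: float) -> float:
    return n_hat_trapped / density_per_km2

aerial_count = 120          # pigs counted from the air
area_searched_km2 = 30.0    # area covered by flight tracks
d_hat = aerial_count / area_searched_km2

n_hat = 34.0                # removal-model abundance around the corral traps
print(f"density: {d_hat:.2f} pigs/km^2")
print(f"area of influence: {area_of_influence_km2(n_hat, d_hat):.1f} km^2")
```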
Adjusting forest density estimates for surveyor bias in historical tree surveys
Brice B. Hanberry; Jian Yang; John M. Kabrick; Hong S. He
2012-01-01
The U.S. General Land Office surveys, conducted between the late 1700s to early 1900s, provide records of trees prior to widespread European and American colonial settlement. However, potential and documented surveyor bias raises questions about the reliability of historical tree density estimates and other metrics based on density estimated from these records. In this...
Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.
2013-01-01
Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.
Kidney, Darren; Rawson, Benjamin M; Borchers, David L; Stevenson, Ben C; Marques, Tiago A; Thomas, Len
2016-01-01
Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will make this method an attractive option in many situations where populations can be surveyed acoustically by humans.
Estimation of tiger densities in India using photographic captures and recaptures
Karanth, U.; Nichols, J.D.
1998-01-01
Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns, with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology for estimating abundances, survival rates and other population parameters in tigers and other low density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75-1.00. The estimated mean tiger densities ranged from 4.1 (estimated SE = 1.31) to 11.7 (estimated SE = 1.93) tigers/100 km². The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
Ant-inspired density estimation via random walks.
Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A
2017-10-03
Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
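An illustrative toy simulation of the encounter-rate idea described above: agents random-walking on a torus grid count co-located agents at each step and use encounters/steps as a local density estimate (a simplified sketch, not the paper's model or analysis):

```python
# Toy simulation of encounter-rate density estimation on a torus grid: each
# agent counts other agents sharing its cell at every step of a random walk,
# and uses encounters / steps as its density estimate.
import random
from collections import Counter

GRID, AGENTS, STEPS = 50, 250, 2000
true_density = AGENTS / GRID**2

positions = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(AGENTS)]
encounters = [0] * AGENTS

for _ in range(STEPS):
    # one random-walk step per agent (stay put or move to a 4-neighbor)
    moved = []
    for (x, y) in positions:
        dx, dy = random.choice([(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)])
        moved.append(((x + dx) % GRID, (y + dy) % GRID))
    positions = moved
    occupancy = Counter(positions)
    for i, p in enumerate(positions):
        encounters[i] += occupancy[p] - 1  # other agents sharing the cell

estimates = [e / STEPS for e in encounters]
print(f"true density:  {true_density:.4f}")
print(f"mean estimate: {sum(estimates) / AGENTS:.4f}")
```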
Comparison of methods for estimating density of forest songbirds from point counts
Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey
2011-01-01
New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...
Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party
NASA Astrophysics Data System (ADS)
Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi
The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from the massive data collection and the information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of distributed kernel density estimation in [4], which assumed an intermediate party to help in the computation.
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
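For comparison, a standard Gaussian kernel density estimate on synthetic data (scikit-learn); this is ordinary KDE, shown only as a familiar baseline, not the Mercer-kernel feature-space mixture construction of the paper:

```python
# Ordinary Gaussian kernel density estimation on synthetic 1-D data, as a
# simple point of reference for kernel-based density estimation.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(1)
# synthetic data from a two-component mixture
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])[:, None]

kde = KernelDensity(kernel="gaussian", bandwidth=0.3).fit(data)
grid = np.linspace(-4, 4, 9)[:, None]
density = np.exp(kde.score_samples(grid))   # score_samples returns log-density
for x, d in zip(grid.ravel(), density):
    print(f"p({x:+.1f}) ~= {d:.3f}")
```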
Automated skin lesion segmentation with kernel density estimation
NASA Astrophysics Data System (ADS)
Pardo, A.; Real, E.; Fernandez-Barreras, G.; Madruga, F. J.; López-Higuera, J. M.; Conde, O. M.
2017-07-01
Skin lesion segmentation is a complex step for dermoscopy pathological diagnosis. Kernel density estimation is proposed as a segmentation technique based on the statistic distribution of color intensities in the lesion and non-lesion regions.
Gradient-based stochastic estimation of the density matrix
NASA Astrophysics Data System (ADS)
Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton
2018-03-01
Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)_ij decay rapidly with the distance r_ij between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales as S^(-(d+2)/2d), where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.
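A minimal sketch of the generic stochastic probing idea underlying such estimators: the diagonal of D = f(H) is estimated from matrix-vector products with random probe vectors (a Hutchinson-style diagonal estimate); f(H) is formed exactly here only for the demonstration, and this is not the paper's gradient-based scheme:

```python
# Stochastic probing of the diagonal of D = f(H), with f the Fermi function.
# E[r * (D r)] = diag(D) elementwise when r has independent +/-1 entries.
# D is built by exact diagonalization purely for this small demonstration.
import numpy as np

rng = np.random.default_rng(0)
n = 200
H = rng.normal(size=(n, n))
H = (H + H.T) / 2                        # a random symmetric "Hamiltonian"

beta, mu = 5.0, 0.0                      # inverse temperature, chemical potential
w, V = np.linalg.eigh(H)
D = (V * (1.0 / (1.0 + np.exp(beta * (w - mu))))) @ V.T   # D = f(H)

S = 2000                                 # number of random probe vectors
R = rng.choice([-1.0, 1.0], size=(n, S)) # Rademacher probes
diag_est = np.mean(R * (D @ R), axis=1)  # stochastic estimate of diag(D)

err = np.max(np.abs(diag_est - np.diag(D)))
print(f"max error of stochastic diagonal estimate: {err:.3e}")
```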
NASA Astrophysics Data System (ADS)
Vielberg, Kristin; Forootan, Ehsan; Lück, Christina; Löcher, Anno; Kusche, Jürgen; Börger, Klaus
2018-05-01
Ultra-sensitive space-borne accelerometers on board low Earth orbit (LEO) satellites are used to measure non-gravitational forces acting on the surface of these satellites. These forces consist of the Earth radiation pressure, the solar radiation pressure and the atmospheric drag, where the first two are caused by the radiation emitted from the Earth and the Sun, respectively, and the latter is related to the thermospheric density. On-board accelerometer measurements contain systematic errors, which need to be mitigated by applying a calibration before their use in gravity recovery or thermospheric neutral density estimations. Therefore, we improve, apply and compare three calibration procedures: (1) a multi-step numerical estimation approach, which is based on the numerical differentiation of the kinematic orbits of LEO satellites; (2) a calibration of accelerometer observations within the dynamic precise orbit determination procedure and (3) a comparison of observed to modeled forces acting on the surface of LEO satellites. Here, accelerometer measurements obtained by the Gravity Recovery And Climate Experiment (GRACE) are used. Time series of bias and scale factor derived from the three calibration procedures are found to be different on timescales of a few days to months. Results are more similar (statistically significant) when considering longer timescales, from which the results of approach (1) and (2) show better agreement to those of approach (3) during medium and high solar activity. Calibrated accelerometer observations are then applied to estimate thermospheric neutral densities. Differences between accelerometer-based density estimations and those from empirical neutral density models, e.g., NRLMSISE-00, are observed to be significant during quiet periods, on average 22% of the simulated densities (during low solar activity), and up to 28% during high solar activity. Therefore, daily corrections are estimated for neutral densities derived from NRLMSISE-00. Our results indicate that these corrections improve model-based density simulations in order to provide density estimates at locations outside the vicinity of the GRACE satellites, in particular during the period of high solar/magnetic activity, e.g., during the St. Patrick's Day storm on 17 March 2015.
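A minimal sketch of a per-axis bias/scale calibration in the spirit of approach (3) above, fitting modeled accelerations to raw accelerometer readings by least squares; the signals below are synthetic placeholders, not GRACE data:

```python
# Fit a_model ~= scale * a_raw + bias for one axis by least squares.
# The modeled and raw signals here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 86400, 5760)                    # one day at 15 s sampling
a_model = 1e-7 * np.sin(2 * np.pi * t / 5400)      # modeled acceleration (m/s^2)
a_raw = (a_model - 1.2e-6) / 0.96 + rng.normal(0, 5e-9, t.size)  # uncalibrated

A = np.column_stack([a_raw, np.ones_like(a_raw)])
(scale, bias), *_ = np.linalg.lstsq(A, a_model, rcond=None)
print(f"estimated scale = {scale:.4f}, bias = {bias:.3e} m/s^2")
```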
Stochastic sediment property inversion in Shallow Water 06.
Michalopoulou, Zoi-Heleni
2017-11-01
Received time-series at a short distance from the source allow the identification of distinct paths; four of these are direct, surface and bottom reflections, and sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed along with linearization for the estimation of source range and depth, water column depth, and sound speed in the water. Propagating densities of arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because those densities express the uncertainty in the inversion for sediment properties.
Irigoyen, Alejo J; Rojo, Irene; Calò, Antonio; Trobbiani, Gastón; Sánchez-Carnero, Noela; García-Charton, José A
2018-01-01
Underwater visual census (UVC) is the most common approach for estimating diversity, abundance and size of reef fishes in shallow and clear waters. Abundance estimation through UVC is particularly problematic in species occurring at low densities and/or highly aggregated because of their high variability at both spatial and temporal scales. The statistical power of experiments involving UVC techniques may be increased by augmenting the number of replicates or the area surveyed. In this work we present and test the efficiency of an UVC method based on diver towed GPS, the Tracked Roaming Transect (TRT), designed to maximize transect length (and thus the surveyed area) with respect to diving time invested in monitoring, as compared to Conventional Strip Transects (CST). Additionally, we analyze the effect of increasing transect width and length on the precision of density estimates by comparing TRT vs. CST methods using different fixed widths of 6 and 20 m (FW3 and FW10, respectively) and the Distance Sampling (DS) method, in which perpendicular distance of each fish or group of fishes to the transect line is estimated by divers up to 20 m from the transect line. The TRT was 74% more time and cost efficient than the CST (all transect widths considered together) and, for a given time, the use of TRT and/or increasing the transect width increased the precision of density estimates. In addition, since with the DS method distances of fishes to the transect line have to be estimated, and not measured directly as in terrestrial environments, errors in estimations of perpendicular distances can seriously affect DS density estimations. To assess the occurrence of distance estimation errors and their dependence on the observer's experience, a field experiment using wooden fish models was performed. We tested the precision and accuracy of density estimators based on fixed widths and the DS method. The accuracy of the estimates was measured comparing the actual total abundance with those estimated by divers using FW3, FW10, and DS estimators. Density estimates differed by 13% (range 0.1-31%) from the actual values (average = 13.09%; median = 14.16%). Based on our results we encourage the use of the Tracked Roaming Transect with Distance Sampling (TRT+DS) method for improving density estimates of species occurring at low densities and/or highly aggregated, as well as for exploratory rapid-assessment surveys in which divers could gather spatial ecological and ecosystem information on large areas during UVC.
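A minimal sketch of the two density estimators compared above, with hypothetical sighting data: a fixed-width strip estimate D = n/(2wL) and a half-normal distance-sampling estimate using the effective strip half-width ESW = σ·sqrt(π/2):

```python
# (1) fixed-width strip transect: D = n / (2 w L)
# (2) half-normal distance sampling: D = n / (2 L * ESW), with
#     ESW = sigma * sqrt(pi / 2) and sigma^2 fit to perpendicular distances.
# Sighting distances and transect length are hypothetical.
import math

def strip_density(n, half_width_m, length_m):
    """Fish per m^2 from a fixed-width strip count."""
    return n / (2.0 * half_width_m * length_m)

def half_normal_density(distances_m, length_m):
    """Fish per m^2 from perpendicular distances, half-normal detection."""
    n = len(distances_m)
    sigma2 = sum(d * d for d in distances_m) / n   # MLE for half-normal
    esw = math.sqrt(math.pi * sigma2 / 2.0)        # effective strip half-width
    return n / (2.0 * length_m * esw)

sightings = [0.5, 1.2, 2.0, 3.4, 0.8, 4.1, 1.5, 2.7, 0.3, 5.0]  # metres
L = 400.0  # transect length in metres
print(f"fixed-width (w = 3 m): {strip_density(len(sightings), 3.0, L):.4f} fish/m^2")
print(f"half-normal DS:        {half_normal_density(sightings, L):.4f} fish/m^2")
```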
Block, Robert C; Abdolahi, Amir; Niemiec, Christopher P; Rigby, C Scott; Williams, Geoffrey C
2016-12-01
There is a lack of research on the use of electronic tools that guide patients toward reducing their cardiovascular disease risk. We conducted a 9-month clinical trial in which participants who were at low (n = 100) and moderate (n = 23) cardiovascular disease risk, based on the National Cholesterol Education Program III's 10-year risk estimator, were randomized to usual care or to usual care plus use of an Interactive Cholesterol Advisory Tool during the first 8 weeks of the study. In the moderate-risk category, an interaction between treatment condition and Framingham risk estimate on low-density lipoprotein and non-high-density lipoprotein cholesterol was observed, such that participants in the virtual clinician treatment condition had a larger reduction in low-density lipoprotein and non-high-density lipoprotein cholesterol as their Framingham risk estimate increased. Perceptions of the Interactive Cholesterol Advisory Tool were positive. Evidence-based information about cardiovascular disease risk and its management was accessible to participants without major technical challenges. © The Author(s) 2015.
A Simple Ground-Based Trap For Estimating Densities of Arboreal Leaf Insects
Robert A. Haack; Richard W. Blank
1991-01-01
Describes a trap design to use in collecting larval frass or head capsules for estimating densities of aboveground arthropods. The trap is light, compact, durable, and easily constructed from common inexpensive items.
NASA Astrophysics Data System (ADS)
Kontos, Despina; Xing, Ye; Bakic, Predrag R.; Conant, Emily F.; Maidment, Andrew D. A.
2010-03-01
We performed a study to compare methods for volumetric breast density estimation in digital mammography (DM) and magnetic resonance imaging (MRI) for a high-risk population of women. DM and MRI images of the unaffected breast from 32 women with recently detected abnormalities and/or previously diagnosed breast cancer (age range 31-78 yrs, mean 50.3 yrs) were retrospectively analyzed. DM images were analyzed using Quantra™ (Hologic Inc). The MRI images were analyzed using a fuzzy-C-means segmentation algorithm on the T1 map. Both methods were compared to Cumulus (Univ. Toronto). Volumetric breast density estimates from DM and MRI are highly correlated (r=0.90, p<=0.001). The correlation between the volumetric and the area-based density measures is lower and depends on the training background of the Cumulus software user (r=0.73-0.84, p<=0.001). In terms of absolute values, MRI provides the lowest volumetric estimates (mean=14.63%), followed by the DM volumetric (mean=22.72%) and area-based measures (mean=29.35%). The MRI estimates of the fibroglandular volume are statistically significantly lower than the DM estimates for women with very low-density breasts (p<=0.001). We attribute these differences to potential partial volume effects in MRI and differences in the computational aspects of the image analysis methods in MRI and DM. The good correlation between the volumetric and the area-based measures, shown to correlate with breast cancer risk, suggests that both DM and MRI volumetric breast density measures can aid in breast cancer risk assessment. Further work is underway to fully investigate the association between volumetric breast density measures and breast cancer risk.
Estimation and classification by sigmoids based on mutual information
NASA Technical Reports Server (NTRS)
Baram, Yoram
1994-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.
NASA Astrophysics Data System (ADS)
Østerås, Bjørn Helge; Skaane, Per; Gullien, Randi; Martinsen, Anne Catrine Trægde
2018-02-01
The main purpose was to compare average glandular dose (AGD) for same-compression digital mammography (DM) and digital breast tomosynthesis (DBT) acquisitions in a population-based screening program, with and without breast density stratification, as determined by automatically calculated breast density (Quantra™). Secondarily, to compare AGD estimates based on measured breast density, air kerma and half value layer (HVL) to DICOM metadata based estimates. AGD was estimated for 3819 women participating in the screening trial. All received craniocaudal and mediolateral oblique views of each breast with paired DM and DBT acquisitions. Exposure parameters were extracted from DICOM metadata. Air kerma and HVL were measured for all beam qualities used to acquire the mammograms. Volumetric breast density was estimated using Quantra™. AGD was estimated using the Dance model. AGD reported directly from the DICOM metadata was also assessed. Mean AGD was 1.74 and 2.10 mGy for DM and DBT, respectively. Mean DBT/DM AGD ratio was 1.24. For fatty breasts: mean AGD was 1.74 and 2.27 mGy for DM and DBT, respectively. For dense breasts: mean AGD was 1.73 and 1.79 mGy, for DM and DBT, respectively. For breasts of similar thickness, dense breasts had higher AGD for DM and similar AGD for DBT. The DBT/DM dose ratio was substantially lower for dense compared to fatty breasts (1.08 versus 1.33). The average c-factor was 1.16. Using previously published polynomials to estimate glandularity from thickness underestimated the c-factor by 5.9% on average. Mean AGD error between estimates based on measurements (air kerma and HVL) versus DICOM header data was 3.8%, but for one mammography unit as high as 7.9%. Mean error of using the AGD value reported in the DICOM header was 10.7 and 13.3%, respectively. Thus, measurement of breast density, radiation dose and beam quality can substantially affect AGD estimates.
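A minimal sketch of the Dance-model calculation referenced above, AGD = K·g·c·s; the g and s factors below are placeholders for illustration (real estimates interpolate the Dance et al. tables by HVL, breast thickness and glandularity), while c = 1.16 echoes the mean c-factor reported in the abstract:

```python
# Dance model: AGD = K * g * c * s, where K is the incident air kerma at the
# breast surface and g, c, s are tabulated conversion factors.
# The g and s values below are placeholders, not tabulated values.
def average_glandular_dose(K_mGy: float, g: float, c: float, s: float) -> float:
    return K_mGy * g * c * s

K = 5.2      # incident air kerma (mGy), hypothetical exposure
g = 0.38     # 50%-glandularity conversion factor (placeholder)
c = 1.16     # correction for actual glandularity (mean c-factor from the study)
s = 1.042    # spectrum (target/filter) correction (placeholder)

print(f"AGD ~= {average_glandular_dose(K, g, c, s):.2f} mGy")
```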
Østerås, Bjørn Helge; Skaane, Per; Gullien, Randi; Martinsen, Anne Catrine Trægde
2018-01-25
The main purpose was to compare average glandular dose (AGD) for same-compression digital mammography (DM) and digital breast tomosynthesis (DBT) acquisitions in a population based screening program, with and without breast density stratification, as determined by automatically calculated breast density (Quantra™). A secondary purpose was to compare AGD estimates based on measured breast density, air kerma and half value layer (HVL) to DICOM metadata based estimates. AGD was estimated for 3819 women participating in the screening trial. All received craniocaudal and mediolateral oblique views of each breast with paired DM and DBT acquisitions. Exposure parameters were extracted from DICOM metadata. Air kerma and HVL were measured for all beam qualities used to acquire the mammograms. Volumetric breast density was estimated using Quantra™. AGD was estimated using the Dance model. AGD reported directly from the DICOM metadata was also assessed. Mean AGD was 1.74 and 2.10 mGy for DM and DBT, respectively. Mean DBT/DM AGD ratio was 1.24. For fatty breasts: mean AGD was 1.74 and 2.27 mGy for DM and DBT, respectively. For dense breasts: mean AGD was 1.73 and 1.79 mGy, for DM and DBT, respectively. For breasts of similar thickness, dense breasts had higher AGD for DM and similar AGD for DBT. The DBT/DM dose ratio was substantially lower for dense compared to fatty breasts (1.08 versus 1.33). The average c-factor was 1.16. Using previously published polynomials to estimate glandularity from thickness underestimated the c-factor by 5.9% on average. Mean AGD error between estimates based on measurements (air kerma and HVL) versus DICOM header data was 3.8%, but for one mammography unit as high as 7.9%. Mean error of using the AGD value reported in the DICOM header was 10.7 and 13.3%, respectively. Thus, measurement of breast density, radiation dose and beam quality can substantially affect AGD estimates.
Robinson, Hugh S.; Abarca, Maria; Zeller, Katherine A.; Velasquez, Grisel; Paemelaere, Evi A. D.; Goldberg, Joshua F.; Payan, Esteban; Hoogesteijn, Rafael; Boede, Ernesto O.; Schmidt, Krzysztof; Lampo, Margarita; Viloria, Ángel L.; Carreño, Rafael; Robinson, Nathaniel; Lukacs, Paul M.; Nowak, J. Joshua; Salom-Pérez, Roberto; Castañeda, Franklin; Boron, Valeria; Quigley, Howard
2018-01-01
Broad scale population estimates of declining species are desired for conservation efforts. However, for many secretive species including large carnivores, such estimates are often difficult. Based on published density estimates obtained through camera trapping, presence/absence data, and globally available predictive variables derived from satellite imagery, we modelled density and occurrence of a large carnivore, the jaguar, across the species’ entire range. We then combined these models in a hierarchical framework to estimate the total population. Our models indicate that potential jaguar density is best predicted by measures of primary productivity, with the highest densities in the most productive tropical habitats and a clear declining gradient with distance from the equator. Jaguar distribution, in contrast, is determined by the combined effects of human impacts and environmental factors: probability of jaguar occurrence increased with forest cover, mean temperature, and annual precipitation and declined with increases in human footprint index and human density. Probability of occurrence was also significantly higher for protected areas than outside of them. We estimated the world’s jaguar population at 173,000 (95% CI: 138,000–208,000) individuals, mostly concentrated in the Amazon Basin; elsewhere, populations tend to be small and fragmented. The high number of jaguars results from the large total area still occupied (almost 9 million km2) and low human densities (< 1 person/km2) coinciding with high primary productivity in the core area of jaguar range. Our results show the importance of protected areas for jaguar persistence. We conclude that combining modelling of density and distribution can reveal ecological patterns and processes at global scales, can provide robust estimates for use in species assessments, and can guide broad-scale conservation actions. PMID:29579129
Optimum nonparametric estimation of population density based on ordered distances
Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.
1982-01-01
The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and the specific form that minimizes mean square error is determined under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.
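The paper's ordered-distance estimator is not reproduced here, but a minimal sketch of the same family of methods may help: under complete spatial randomness, point-to-nearest-plant distances r_i give the maximum-likelihood density estimate n/(π Σ r_i²). The distances below are hypothetical.

```python
import math

def density_from_nearest_distances(distances_m):
    """ML plant density (per m^2) from point-to-nearest-plant distances, assuming a Poisson pattern."""
    n = len(distances_m)
    return n / (math.pi * sum(r * r for r in distances_m))

r = [0.8, 1.2, 0.5, 1.9, 1.1, 0.7, 1.4, 0.9]   # metres, hypothetical sample points
lam = density_from_nearest_distances(r)
print(f"{lam:.3f} plants/m^2  (~{lam * 1e4:.0f} plants/ha)")
```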
NASA Technical Reports Server (NTRS)
Moran, M. Susan; Jackson, Ray D.; Raymond, Lee H.; Gay, Lloyd W.; Slater, Philip N.
1989-01-01
Surface energy balance components were evaluated by combining satellite-based spectral data with on-site measurements of solar irradiance, air temperature, wind speed, and vapor pressure. Maps of latent heat flux density and net radiant flux density were produced using Landsat TM data for three dates. The TM-based estimates differed from Bowen-ratio and aircraft-based estimates by less than 12 percent over mature fields of cotton, wheat, and alfalfa.
A New Monte Carlo Method for Estimating Marginal Likelihoods.
Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O
2018-06-01
Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
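As a point of reference for the baseline mentioned above, the following sketch computes the classical harmonic mean estimator of the marginal likelihood from posterior log-likelihood draws, working in log space for numerical stability. It is not the proposed partition-weighted kernel estimator, and the draws are synthetic.

```python
import numpy as np

def log_marginal_harmonic_mean(log_lik_draws):
    """log m_hat = -log mean exp(-log L(theta_s)), with theta_s drawn from the posterior."""
    neg = -np.asarray(log_lik_draws)
    return -(np.logaddexp.reduce(neg) - np.log(neg.size))

log_lik = np.random.normal(loc=-120.0, scale=3.0, size=5000)  # toy posterior log-likelihoods
print("harmonic-mean log marginal likelihood ~", log_marginal_harmonic_mean(log_lik))
```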
Estimating animal population density using passive acoustics.
Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L
2013-05-01
Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. © 2012 The Authors. Biological Reviews © 2012 Cambridge Philosophical Society.
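A minimal, hedged sketch of the cue-counting approach reviewed here: detected cues (for example, echolocation clicks) are corrected for false positives and divided by the monitored area, the average detection probability, the monitoring time, and the cue production rate per animal. All parameter values below are invented for illustration.

```python
import math

def cue_count_density(n_cues, false_pos_prop, n_sensors, max_range_km,
                      p_detect, hours, cues_per_animal_per_hour):
    """Animals per km^2 from a cue-count survey with fixed sensors."""
    area = math.pi * max_range_km ** 2                      # km^2 monitored per sensor
    return (n_cues * (1.0 - false_pos_prop) /
            (n_sensors * area * p_detect * hours * cues_per_animal_per_hour))

d = cue_count_density(n_cues=15_000, false_pos_prop=0.04, n_sensors=1,
                      max_range_km=8.0, p_detect=0.03, hours=24 * 7,
                      cues_per_animal_per_hour=60 * 25)     # ~25 clicks per minute, assumed
print(f"density ~ {d * 1000:.2f} animals per 1000 km^2")
```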
Estimating animal population density using passive acoustics
Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L
2013-01-01
Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. PMID:23190144
Estimation of Reineke and Volume-Based Maximum Size-Density Lines For Shortleaf Pine
Thomas B. Lynch; Robert F. Wittwer; Douglas J. Stevenson
2004-01-01
Maximum size-density relationships for Reineke's stand density index as well as for a relationship based on average tree volume were fitted to data from more than a decade of annual remeasurements of plots in unthinned naturally occurring shortleaf pine in southeastern Oklahoma. Reineke's stand density index is based on a maximum line of the form log(N) = a...
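For context, the snippet below works through Reineke's stand density index in a common metric form, SDI = N(Dq/25)^1.605, and inverts the corresponding maximum line to obtain a stocking limit. The 25 cm reference diameter, the 1.605 slope, and the stand values are textbook or illustrative choices, not the coefficients fitted in this study.

```python
def reineke_sdi(trees_per_ha, dq_cm, slope=1.605, ref_dq_cm=25.0):
    """Reineke SDI: trees/ha scaled to a reference quadratic mean diameter Dq."""
    return trees_per_ha * (dq_cm / ref_dq_cm) ** slope

def max_trees_per_ha(dq_cm, max_sdi, slope=1.605, ref_dq_cm=25.0):
    """Invert the maximum size-density line to get the self-thinning stocking limit."""
    return max_sdi * (ref_dq_cm / dq_cm) ** slope

print(round(reineke_sdi(1200, 18.0)))         # SDI of a hypothetical stand (~708)
print(round(max_trees_per_ha(18.0, 1000.0)))  # stocking limit at Dq = 18 cm if the maximum SDI is 1000
```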
Broekhuis, Femke; Gopalaswamy, Arjun M.
2016-01-01
Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed ‘hotspots’ of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species. PMID:27135614
Broekhuis, Femke; Gopalaswamy, Arjun M
2016-01-01
Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed 'hotspots' of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species.
Passive acoustic monitoring of beaked whale densities in the Gulf of Mexico.
Hildebrand, John A; Baumann-Pickering, Simone; Frasier, Kaitlin E; Trickey, Jennifer S; Merkens, Karlina P; Wiggins, Sean M; McDonald, Mark A; Garrison, Lance P; Harris, Danielle; Marques, Tiago A; Thomas, Len
2015-11-12
Beaked whales are deep diving elusive animals, difficult to census with conventional visual surveys. Methods are presented for the density estimation of beaked whales, using passive acoustic monitoring data collected at sites in the Gulf of Mexico (GOM) from the period during and following the Deepwater Horizon oil spill (2010-2013). Beaked whale species detected include: Gervais' (Mesoplodon europaeus), Cuvier's (Ziphius cavirostris), Blainville's (Mesoplodon densirostris) and an unknown species of Mesoplodon sp. (designated as Beaked Whale Gulf - BWG). For Gervais' and Cuvier's beaked whales, we estimated weekly animal density using two methods, one based on the number of echolocation clicks, and another based on the detection of animal groups during 5 min time-bins. Density estimates derived from these two methods were in good general agreement. At two sites in the western GOM, Gervais' beaked whales were present throughout the monitoring period, but Cuvier's beaked whales were present only seasonally, with periods of low density during the summer and higher density in the winter. At an eastern GOM site, both Gervais' and Cuvier's beaked whales had a high density throughout the monitoring period.
Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won
2012-01-01
Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
Estimation of Wheat Plant Density at Early Stages Using High Resolution Imagery
Liu, Shouyang; Baret, Fred; Andrieu, Bruno; Burger, Philippe; Hemmerlé, Matthieu
2017-01-01
Crop density is a key agronomical trait used to manage wheat crops and estimate yield. Visual counting of plants in the field is currently the most common method used. However, it is tedious and time consuming. The main objective of this work is to develop a machine vision based method to automate the density survey of wheat at early stages. RGB images taken with a high resolution RGB camera are classified to identify the green pixels corresponding to the plants. Crop rows are extracted and the connected components (objects) are identified. A neural network is then trained to estimate the number of plants in the objects using the object features. The method was evaluated over three experiments showing contrasted conditions with sowing densities ranging from 100 to 600 seeds⋅m-2. Results demonstrate that the density is accurately estimated with an average relative error of 12%. The pipeline developed here provides an efficient and accurate estimate of wheat plant density at early stages. PMID:28559901
Domke, Grant M.; Woodall, Christopher W.; Walters, Brian F.; Smith, James E.
2013-01-01
The inventory and monitoring of coarse woody debris (CWD) carbon (C) stocks is an essential component of any comprehensive National Greenhouse Gas Inventory (NGHGI). Due to the expense and difficulty associated with conducting field inventories of CWD pools, CWD C stocks are often modeled as a function of more commonly measured stand attributes such as live tree C density. In order to assess potential benefits of adopting a field-based inventory of CWD C stocks in lieu of the current model-based approach, a national inventory of downed dead wood C across the U.S. was compared to estimates calculated from models associated with the U.S.’s NGHGI and used in the USDA Forest Service, Forest Inventory and Analysis program. The model-based population estimate of C stocks for CWD (i.e., pieces and slash piles) in the conterminous U.S. was 9 percent (145.1 Tg) greater than the field-based estimate. The relatively small absolute difference was driven by contrasting results for each CWD component. The model-based population estimate of C stocks from CWD pieces was 17 percent (230.3 Tg) greater than the field-based estimate, while the model-based estimate of C stocks from CWD slash piles was 27 percent (85.2 Tg) smaller than the field-based estimate. In general, models overestimated the C density per-unit-area from slash piles early in stand development and underestimated the C density from CWD pieces in young stands. This resulted in significant differences in CWD C stocks by region and ownership. The disparity in estimates across spatial scales illustrates the complexity in estimating CWD C in a NGHGI. Based on the results of this study, it is suggested that the U.S. adopt field-based estimates of CWD C stocks as a component of its NGHGI to both reduce the uncertainty within the inventory and improve the sensitivity to potential management and climate change events. PMID:23544112
Domke, Grant M; Woodall, Christopher W; Walters, Brian F; Smith, James E
2013-01-01
The inventory and monitoring of coarse woody debris (CWD) carbon (C) stocks is an essential component of any comprehensive National Greenhouse Gas Inventory (NGHGI). Due to the expense and difficulty associated with conducting field inventories of CWD pools, CWD C stocks are often modeled as a function of more commonly measured stand attributes such as live tree C density. In order to assess potential benefits of adopting a field-based inventory of CWD C stocks in lieu of the current model-based approach, a national inventory of downed dead wood C across the U.S. was compared to estimates calculated from models associated with the U.S.'s NGHGI and used in the USDA Forest Service, Forest Inventory and Analysis program. The model-based population estimate of C stocks for CWD (i.e., pieces and slash piles) in the conterminous U.S. was 9 percent (145.1 Tg) greater than the field-based estimate. The relatively small absolute difference was driven by contrasting results for each CWD component. The model-based population estimate of C stocks from CWD pieces was 17 percent (230.3 Tg) greater than the field-based estimate, while the model-based estimate of C stocks from CWD slash piles was 27 percent (85.2 Tg) smaller than the field-based estimate. In general, models overestimated the C density per-unit-area from slash piles early in stand development and underestimated the C density from CWD pieces in young stands. This resulted in significant differences in CWD C stocks by region and ownership. The disparity in estimates across spatial scales illustrates the complexity in estimating CWD C in a NGHGI. Based on the results of this study, it is suggested that the U.S. adopt field-based estimates of CWD C stocks as a component of its NGHGI to both reduce the uncertainty within the inventory and improve the sensitivity to potential management and climate change events.
Temporal variation in bird counts within a Hawaiian rainforest
Simon, John C.; Pratt, T.K.; Berlin, Kim E.; Kowalsky, James R.; Fancy, S.G.; Hatfield, J.S.
2002-01-01
We studied monthly and annual variation in density estimates of nine forest bird species along an elevational gradient in an east Maui rainforest. We conducted monthly variable circular-plot counts for 36 consecutive months along transects running downhill from timberline. Density estimates were compared by month, year, and station for all resident bird species with sizeable populations, including four native nectarivores, two native insectivores, a non-native insectivore, and two non-native generalists. We compared densities among three elevational strata and between breeding and nonbreeding seasons. All species showed significant differences in density estimates among months and years. Three native nectarivores had higher density estimates within their breeding season (December-May) and showed decreases during periods of low nectar production following the breeding season. All insectivore and generalist species except one had higher density estimates within their March-August breeding season. Density estimates also varied with elevation for all species, and for four species a seasonal shift in population was indicated. Our data show that the best time to conduct counts for native forest birds on Maui is January-February, when birds are breeding or preparing to breed, counts are typically high, variability in density estimates is low, and the likelihood for fair weather is best. Temporal variations in density estimates documented in our study site emphasize the need for consistent, well-researched survey regimens and for caution when drawing conclusions from, or basing management decisions on, survey data.
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rates may underestimate risk. This study introduces two models of the distribution, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are common in practice, for example CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, the Beta distribution has a serious limitation: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds shown by Moody's new data. To overcome this flaw, kernel density estimation is introduced, and simulation results from the histogram, the Beta distribution fit, and the kernel density estimate are compared, leading to the conclusion that a Gaussian kernel density estimate better captures the bimodal or multimodal distributions of recovery rates of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate confirms that it fits the recovery-rate distributions of loans and bonds. Using kernel density estimation to precisely delineate the bimodal recovery rates of bonds is therefore the better choice for credit risk management.
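A small sketch of the comparison described above, assuming a synthetic bimodal sample of recovery rates: a single Beta distribution is fitted alongside a Gaussian kernel density estimate, and only the latter can follow both modes. The data are toy values, not the loan and bond data used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Bimodal toy recovery rates on (0, 1): a low-recovery and a high-recovery mode.
recovery = np.clip(np.concatenate([rng.normal(0.15, 0.07, 600),
                                   rng.normal(0.80, 0.10, 400)]), 0.01, 0.99)

a, b, loc, scale = stats.beta.fit(recovery, floc=0, fscale=1)  # unimodal Beta fit
kde = stats.gaussian_kde(recovery)                             # Gaussian kernel density fit

grid = np.linspace(0.01, 0.99, 200)
beta_pdf = stats.beta.pdf(grid, a, b)
kde_pdf = kde(grid)
# The KDE tracks both modes; the Beta fit cannot, which is the paper's point.
print("Beta pdf near the two modes:", beta_pdf[[30, 160]])
print("KDE  pdf near the two modes:", kde_pdf[[30, 160]])
```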
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rates may underestimate risk. This study introduces two models of the distribution, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are common in practice, for example CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, the Beta distribution has a serious limitation: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds shown by Moody's new data. To overcome this flaw, kernel density estimation is introduced, and simulation results from the histogram, the Beta distribution fit, and the kernel density estimate are compared, leading to the conclusion that a Gaussian kernel density estimate better captures the bimodal or multimodal distributions of recovery rates of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate confirms that it fits the recovery-rate distributions of loans and bonds. Using kernel density estimation to precisely delineate the bimodal recovery rates of bonds is therefore the better choice for credit risk management. PMID:23874558
Bernard R. Parresol; Charles E. Thomas
1996-01-01
In the wood utilization industry, both stem profile and biomass are important quantities. The two have traditionally been estimated separately. The introduction of a density-integral method allows for coincident estimation of stem profile and biomass, based on the calculus of mass theory, and provides an alternative to weight-ratio methodology. In the initial...
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
Conditional Density Estimation with HMM Based Support Vector Machines
NASA Astrophysics Data System (ADS)
Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang
Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models carry an implicit assumption that the probability density is Gaussian, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the Input-Output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each node, this model can be applied not only to regression but also to classification. We applied this model to denoise ECG data. The proposed method has the potential to be applied to other time series, such as stock market return prediction.
Wen, Xiaotong; Rangarajan, Govindan; Ding, Mingzhou
2013-01-01
Granger causality is increasingly being applied to multi-electrode neurophysiological and functional imaging data to characterize directional interactions between neurons and brain regions. For a multivariate dataset, one might be interested in different subsets of the recorded neurons or brain regions. According to the current estimation framework, for each subset, one conducts a separate autoregressive model fitting process, introducing the potential for unwanted variability and uncertainty. In this paper, we propose a multivariate framework for estimating Granger causality. It is based on spectral density matrix factorization and offers the advantage that the estimation of such a matrix needs to be done only once for the entire multivariate dataset. For any subset of recorded data, Granger causality can be calculated through factorizing the appropriate submatrix of the overall spectral density matrix. PMID:23858479
A parametric generalization of the Hayne estimator for line transect sampling
Burnham, Kenneth P.
1979-01-01
The Hayne model for line transect sampling is generalized by using an elliptical (rather than circular) flushing model for animal detection. By assuming the ratio of major to minor axis lengths is constant for all animals, the resulting model allows estimation of population density based directly upon sighting distances and sighting angles. The derived estimator of animal density is a generalization of the Hayne estimator for line transect sampling.
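For reference, the classical (circular-flushing) Hayne estimator that this paper generalizes can be written as D = (1/2L) Σ 1/r_i over sighting distances r_i along a transect of length L; the sketch below evaluates it on made-up distances.

```python
def hayne_density(sighting_distances_m, transect_length_m):
    """Classical Hayne line-transect density estimate (animals per m^2)."""
    return sum(1.0 / r for r in sighting_distances_m) / (2.0 * transect_length_m)

r = [12.0, 35.0, 8.0, 22.0, 15.0, 40.0, 18.0, 27.0, 10.0, 30.0]  # metres, hypothetical sightings
d = hayne_density(r, transect_length_m=2000.0)
print(f"{d * 1e4:.1f} animals/ha")   # density per m^2 scaled to hectares
```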
USDA-ARS?s Scientific Manuscript database
The goal of our study was to estimate the prevalence of osteoporosis and low bone mass based on bone mineral density (BMD) at the femoral neck and the lumbar spine in adults 50 years and older in the United States (US). We applied prevalence estimates of osteoporosis or low bone mass at the femoral ...
Estimating peer density effects on oral health for community-based older adults.
Chakraborty, Bibhas; Widener, Michael J; Mirzaei Salehabadi, Sedigheh; Northridge, Mary E; Kum, Susan S; Jin, Zhu; Kunzel, Carol; Palmer, Harvey D; Metcalf, Sara S
2017-12-29
As part of a long-standing line of research regarding how peer density affects health, researchers have sought to understand the multifaceted ways that the density of contemporaries living and interacting in proximity to one another influence social networks and knowledge diffusion, and subsequently health and well-being. This study examined peer density effects on oral health for racial/ethnic minority older adults living in northern Manhattan and the Bronx, New York, NY. Peer age-group density was estimated by smoothing US Census data with 4 kernel bandwidths ranging from 0.25 to 1.50 mile. Logistic regression models were developed using these spatial measures and data from the ElderSmile oral and general health screening program that serves predominantly racial/ethnic minority older adults at community centers in northern Manhattan and the Bronx. The oral health outcomes modeled as dependent variables were ordinal dentition status and binary self-rated oral health. After construction of kernel density surfaces and multiple imputation of missing data, logistic regression analyses were performed to estimate the effects of peer density and other sociodemographic characteristics on the oral health outcomes of dentition status and self-rated oral health. Overall, higher peer density was associated with better oral health for older adults when estimated using smaller bandwidths (0.25 and 0.50 mile). That is, statistically significant relationships (p < 0.01) between peer density and improved dentition status were found when peer density was measured assuming a more local social network. As with dentition status, a positive significant association was found between peer density and fair or better self-rated oral health when peer density was measured assuming a more local social network. This study provides novel evidence that the oral health of community-based older adults is affected by peer density in an urban environment. To the extent that peer density signifies the potential for social interaction and support, the positive significant effects of peer density on improved oral health point to the importance of place in promoting social interaction as a component of healthy aging. Proximity to peers and their knowledge of local resources may facilitate utilization of community-based oral health care.
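A rough sketch of the peer-density measure described above, assuming hypothetical block centroids and counts: counts of older adults are smoothed with a Gaussian kernel at each of the study's bandwidths and read off at participant locations. This illustrates the general technique, not the study's GIS workflow.

```python
import numpy as np

def kernel_density_at(points_xy_miles, counts, query_xy_miles, bandwidth_miles):
    """Gaussian-kernel-weighted count density (per square mile) at query points."""
    d2 = ((query_xy_miles[:, None, :] - points_xy_miles[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / bandwidth_miles ** 2) / (2 * np.pi * bandwidth_miles ** 2)
    return w @ counts

blocks = np.array([[0.0, 0.0], [0.3, 0.1], [0.9, 0.8], [1.5, 0.2]])   # hypothetical block centroids (miles)
older_adults = np.array([40.0, 25.0, 60.0, 15.0])                     # hypothetical counts per block
homes = np.array([[0.2, 0.1], [1.2, 0.6]])                            # hypothetical participant locations
for bw in (0.25, 0.50, 1.00, 1.50):                                   # the study's bandwidth range
    print(bw, kernel_density_at(blocks, older_adults, homes, bw))
```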
NASA Astrophysics Data System (ADS)
Keller, Brad M.; Nathan, Diane L.; Conant, Emily F.; Kontos, Despina
2012-03-01
Breast percent density (PD%), as measured mammographically, is one of the strongest known risk factors for breast cancer. While the majority of studies to date have focused on PD% assessment from digitized film mammograms, digital mammography (DM) is becoming increasingly common, and allows for direct PD% assessment at the time of imaging. This work investigates the accuracy of a generalized linear model-based (GLM) estimation of PD% from raw and postprocessed digital mammograms, utilizing image acquisition physics, patient characteristics and gray-level intensity features of the specific image. The model is trained in a leave-one-woman-out fashion on a series of 81 cases for which bilateral, mediolateral-oblique DM images were available in both raw and post-processed format. Baseline continuous and categorical density estimates were provided by a trained breast-imaging radiologist. Regression analysis is performed and Pearson's correlation, r, and Cohen's kappa, κ, are computed. The GLM PD% estimation model performed well on both processed (r=0.89, p<0.001) and raw (r=0.75, p<0.001) images. Model agreement with radiologist assigned density categories was also high for processed (κ=0.79, p<0.001) and raw (κ=0.76, p<0.001) images. Model-based prediction of breast PD% could allow for a reproducible estimation of breast density, providing a rapid risk assessment tool for clinical practice.
Hall, S. A.; Burke, I.C.; Box, D. O.; Kaufmann, M. R.; Stoker, Jason M.
2005-01-01
The ponderosa pine forests of the Colorado Front Range, USA, have historically been subjected to wildfires. Recent large burns have increased public interest in fire behavior and effects, and scientific interest in the carbon consequences of wildfires. Remote sensing techniques can provide spatially explicit estimates of stand structural characteristics. Some of these characteristics can be used as inputs to fire behavior models, increasing our understanding of the effect of fuels on fire behavior. Others provide estimates of carbon stocks, allowing us to quantify the carbon consequences of fire. Our objective was to use discrete-return lidar to estimate such variables, including stand height, total aboveground biomass, foliage biomass, basal area, tree density, canopy base height and canopy bulk density. We developed 39 metrics from the lidar data, and used them in limited combinations in regression models, which we fit to field estimates of the stand structural variables. We used an information–theoretic approach to select the best model for each variable, and to select the subset of lidar metrics with most predictive potential. Observed versus predicted values of stand structure variables were highly correlated, with r2 ranging from 57% to 87%. The most parsimonious linear models for the biomass structure variables, based on a restricted dataset, explained between 35% and 58% of the observed variability. Our results provide us with useful estimates of stand height, total aboveground biomass, foliage biomass and basal area. There is promise for using this sensor to estimate tree density, canopy base height and canopy bulk density, though more research is needed to generate robust relationships. We selected 14 lidar metrics that showed the most potential as predictors of stand structure. We suggest that the focus of future lidar studies should broaden to include low density forests, particularly systems where the vertical structure of the canopy is important, such as fire prone forests.
Jacob Strunk; Hailemariam Temesgen; Hans-Erik Andersen; James P. Flewelling; Lisa Madsen
2012-01-01
Using lidar in an area-based model-assisted approach to forest inventory has the potential to increase estimation precision for some forest inventory variables. This study documents the bias and precision of a model-assisted (regression estimation) approach to forest inventory with lidar-derived auxiliary variables relative to lidar pulse density and the number of...
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
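To make the single-sensor idea concrete, here is a toy Monte Carlo in the same spirit: source level, range, and off-axis loss are drawn at random, received SNR is computed from a simplified passive sonar equation, and an assumed logistic detector curve converts SNR to detection probability. Spherical spreading, the absorption coefficient, the noise level, and every other numeric value are simplifying assumptions, not the paper's inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
max_range_m = 4000.0

ranges = max_range_m * np.sqrt(rng.random(n))               # uniform in area around the sensor
source_level = rng.normal(200.0, 6.0, n)                    # dB re 1 uPa @ 1 m (assumed)
off_axis_loss = rng.uniform(0.0, 35.0, n)                   # crude proxy for a directional beam pattern
noise_level = 65.0                                           # dB, assumed band noise
trans_loss = 20.0 * np.log10(ranges) + 9.0 * ranges / 1000   # spherical spreading + assumed absorption
snr = source_level - off_axis_loss - trans_loss - noise_level

p_det_given_snr = 1.0 / (1.0 + np.exp(-(snr - 12.0) / 2.0))  # assumed detector characterization
p_detect = p_det_given_snr.mean()
print(f"mean probability of detecting a click within {max_range_m / 1000:.0f} km: {p_detect:.3f}")
```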
Passive acoustic monitoring of beaked whale densities in the Gulf of Mexico
Hildebrand, John A.; Baumann-Pickering, Simone; Frasier, Kaitlin E.; Trickey, Jennifer S.; Merkens, Karlina P.; Wiggins, Sean M.; McDonald, Mark A.; Garrison, Lance P.; Harris, Danielle; Marques, Tiago A.; Thomas, Len
2015-01-01
Beaked whales are deep diving elusive animals, difficult to census with conventional visual surveys. Methods are presented for the density estimation of beaked whales, using passive acoustic monitoring data collected at sites in the Gulf of Mexico (GOM) from the period during and following the Deepwater Horizon oil spill (2010–2013). Beaked whale species detected include: Gervais’ (Mesoplodon europaeus), Cuvier’s (Ziphius cavirostris), Blainville’s (Mesoplodon densirostris) and an unknown species of Mesoplodon sp. (designated as Beaked Whale Gulf — BWG). For Gervais’ and Cuvier’s beaked whales, we estimated weekly animal density using two methods, one based on the number of echolocation clicks, and another based on the detection of animal groups during 5 min time-bins. Density estimates derived from these two methods were in good general agreement. At two sites in the western GOM, Gervais’ beaked whales were present throughout the monitoring period, but Cuvier’s beaked whales were present only seasonally, with periods of low density during the summer and higher density in the winter. At an eastern GOM site, both Gervais’ and Cuvier’s beaked whales had a high density throughout the monitoring period. PMID:26559743
Toward accurate and precise estimates of lion density.
Elliot, Nicholas B; Gopalaswamy, Arjun M
2017-08-01
Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km2, and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and policy decisions. © 2016 Society for Conservation Biology.
Nakamura, Yoshihiro; Hasegawa, Osamu
2017-01-01
With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. The distribution underlying such data is difficult to predict in advance, and the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that represents the given data as networks of prototypes; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
Density estimates of monarch butterflies overwintering in central Mexico
Diffendorfer, Jay E.; López-Hoffman, Laura; Oberhauser, Karen; Pleasants, John; Semmens, Brice X.; Semmens, Darius; Taylor, Orley R.; Wiederholt, Ruscena
2017-01-01
Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations. PMID:28462031
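A simplified sketch of the mean-versus-median point made above: if a handful of published density estimates are treated as log-normal draws, the distribution's mean sits well above its median. The six values below are only loosely based on the range quoted in the abstract, and the paper's actual mixture construction is more involved.

```python
import numpy as np

published = np.array([6.9, 10.3, 21.1, 28.0, 44.5, 60.9])   # million butterflies/ha (illustrative values)
mu, sigma = np.log(published).mean(), np.log(published).std(ddof=1)

mean_density = np.exp(mu + sigma ** 2 / 2)   # log-normal mean, pulled upward by the right tail
median_density = np.exp(mu)                  # log-normal median, a more robust central summary
print(f"mean ~ {mean_density:.1f} million/ha, median ~ {median_density:.1f} million/ha")
```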
Density estimates of monarch butterflies overwintering in central Mexico
Thogmartin, Wayne E.; Diffendorfer, James E.; Lopez-Hoffman, Laura; Oberhauser, Karen; Pleasants, John M.; Semmens, Brice X.; Semmens, Darius J.; Taylor, Orley R.; Wiederholt, Ruscena
2017-01-01
Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.
Unfolding sphere size distributions with a density estimator based on Tikhonov regularization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weese, J.; Korat, E.; Maier, D.
1997-12-01
This report proposes a method for unfolding sphere size distributions given a sample of radii that combines the advantages of a density estimator with those of Tikhonov regularization methods. The following topics are discussed: the relation between the profile and the sphere size distribution; the method for unfolding sphere size distributions; results based on simulations; and a comparison with experimental data.
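The report's specific kernel and density-estimator combination is not reproduced here, but a generic Tikhonov-regularized unfolding step looks like the sketch below: solve min_f ||Af - g||² + λ||D2 f||² for a discretized forward operator A, observation g, and second-difference smoothness penalty D2. The operator, data, and λ are synthetic placeholders.

```python
import numpy as np

def tikhonov_unfold(A, g, lam):
    """Solve (A^T A + lam D2^T D2) f = A^T g with a second-difference smoothness penalty."""
    n = A.shape[1]
    D2 = np.diff(np.eye(n), n=2, axis=0)                  # second-difference operator
    lhs = A.T @ A + lam * D2.T @ D2
    return np.linalg.solve(lhs, A.T @ g)

# Tiny synthetic example: a broadening forward operator and a noisy observation.
rng = np.random.default_rng(0)
n = 60
x = np.linspace(0, 1, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)      # placeholder broadening kernel
f_true = np.exp(-((x - 0.4) / 0.08) ** 2)                 # "true" size distribution (synthetic)
g = A @ f_true + rng.normal(0, 0.05, n)
f_hat = tikhonov_unfold(A, g, lam=1e-2)
print(float(np.abs(f_hat - f_true).mean()))               # mean absolute reconstruction error
```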
Demidenko, Eugene
2017-09-01
The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.
Deep convolutional neural network for mammographic density segmentation
NASA Astrophysics Data System (ADS)
Wei, Jun; Li, Songfeng; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir; Samala, Ravi K.
2018-02-01
Breast density is one of the most significant factors for cancer risk. In this study, we proposed a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammography (DM). The deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD). PD was calculated as the ratio of the dense area to the breast area based on the probability of each pixel belonging to dense region or fatty region at a decision threshold of 0.5. The DCNN estimate was compared to a feature-based statistical learning approach, in which gray level, texture and morphological features were extracted from each ROI and the least absolute shrinkage and selection operator (LASSO) was used to select and combine the useful features to generate the PMD. The reference PD of each image was provided by two experienced MQSA radiologists. With IRB approval, we retrospectively collected 347 DMs from patient files at our institution. The 10-fold cross-validation results showed a strong correlation r=0.96 between the DCNN estimation and interactive segmentation by radiologists while that of the feature-based statistical learning approach vs radiologists' segmentation had a correlation r=0.78. The difference between the segmentation by DCNN and by radiologists was significantly smaller than that between the feature-based learning approach and radiologists (p < 0.0001) by two-tailed paired t-test. This study demonstrated that the DCNN approach has the potential to replace radiologists' interactive thresholding in PD estimation on DMs.
Trunk density profile estimates from dual X-ray absorptiometry.
Wicke, Jason; Dumas, Geneviève A; Costigan, Patrick A
2008-01-01
Accurate body segment parameters are necessary to estimate joint loads when using biomechanical models. Geometric methods can provide individualized data for these models but the accuracy of the geometric methods depends on accurate segment density estimates. The trunk, which is important in many biomechanical models, has the largest variability in density along its length. Therefore, the objectives of this study were to: (1) develop a new method for modeling trunk density profiles based on dual X-ray absorptiometry (DXA) and (2) develop a trunk density function for college-aged females and males that can be used in geometric methods. To this end, the density profiles of 25 females and 24 males were determined by combining the measurements from a photogrammetric method and DXA readings. A discrete Fourier transformation was then used to develop the density functions for each sex. The individual density and average density profiles compare well with the literature. There were distinct differences between the profiles of two of participants (one female and one male), and the average for their sex. It is believed that the variations in these two participants' density profiles were a result of the amount and distribution of fat they possessed. Further studies are needed to support this possibility. The new density functions eliminate the uniform density assumption associated with some geometric models thus providing more accurate trunk segment parameter estimates. In turn, more accurate moments and forces can be estimated for the kinetic analyses of certain human movements.
Soil Bulk Density by Soil Type, Land Use and Data Source: Putting the Error in SOC Estimates
NASA Astrophysics Data System (ADS)
Wills, S. A.; Rossi, A.; Loecke, T.; Ramcharan, A. M.; Roecker, S.; Mishra, U.; Waltman, S.; Nave, L. E.; Williams, C. O.; Beaudette, D.; Libohova, Z.; Vasilas, L.
2017-12-01
An important part of SOC stock and pool assessment is the estimation and application of bulk density. The concept of bulk density is relatively simple (the mass of soil in a given volume), but in practice bulk density can be difficult to measure in soils due to logistical and methodological constraints. While many SOC pool estimates rely on legacy data, few concerted efforts have been made to assess the process used to convert laboratory carbon concentration measurements and bulk density collection into volumetrically based SOC estimates. The methodologies used are particularly sensitive in wetlands and organic soils with high amounts of carbon and very low bulk densities. We will present an analysis across four databases: NCSS - the National Cooperative Soil Survey Characterization dataset, RaCA - the Rapid Carbon Assessment sample dataset, NWCA - the National Wetland Condition Assessment, and ISCN - the International Soil Carbon Network. The relationship between bulk density and soil organic carbon will be evaluated by dataset and land use/land cover information. Prediction methods (both regression and machine learning) will be compared and contrasted across datasets and available input information. The assessment and application of bulk density, including modeling, aggregation and error propagation will be evaluated. Finally, recommendations will be made about both the use of new data in soil survey products (such as SSURGO) and the use of that information as legacy data in SOC pool estimates.
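A worked example of the conversion at stake, assuming the standard stock formula SOC (Mg C/ha) = OC(%) × BD(g/cm³) × depth(cm) × (1 - coarse fraction): any error in bulk density propagates one-for-one into the stock estimate. The sample values are hypothetical.

```python
def soc_stock_mg_ha(oc_percent, bulk_density_g_cm3, depth_cm, coarse_frac=0.0):
    """Soil organic carbon stock in Mg C/ha for one depth increment."""
    return oc_percent * bulk_density_g_cm3 * depth_cm * (1.0 - coarse_frac)

measured = soc_stock_mg_ha(oc_percent=2.1, bulk_density_g_cm3=1.25, depth_cm=30)  # hypothetical sample
biased = soc_stock_mg_ha(oc_percent=2.1, bulk_density_g_cm3=1.10, depth_cm=30)    # same sample, low BD estimate
print(measured, biased, f"{100 * (biased - measured) / measured:.0f}% error")
```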
Computerized image analysis: estimation of breast density on mammograms
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Petrick, Nicholas; Sahiner, Berkman; Helvie, Mark A.; Roubidoux, Marilyn A.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.
2000-06-01
An automated image analysis tool is being developed for estimation of mammographic breast density, which may be useful for risk estimation or for monitoring breast density change in a prevention or intervention program. A mammogram is digitized using a laser scanner and the resolution is reduced to a pixel size of 0.8 mm X 0.8 mm. Breast density analysis is performed in three stages. First, the breast region is segmented from the surrounding background by an automated breast boundary-tracking algorithm. Second, an adaptive dynamic range compression technique is applied to the breast image to reduce the range of the gray level distribution in the low frequency background and to enhance the differences in the characteristic features of the gray level histogram for breasts of different densities. Third, rule-based classification is used to classify the breast images into several classes according to the characteristic features of their gray level histogram. For each image, a gray level threshold is automatically determined to segment the dense tissue from the breast region. The area of segmented dense tissue as a percentage of the breast area is then estimated. In this preliminary study, we analyzed the interobserver variation of breast density estimation by two experienced radiologists using BI-RADS lexicon. The radiologists' visually estimated percent breast densities were compared with the computer's calculation. The results demonstrate the feasibility of estimating mammographic breast density using computer vision techniques and its potential to improve the accuracy and reproducibility in comparison with the subjective visual assessment by radiologists.
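The final step of the pipeline described above reduces to a simple ratio once the breast mask and gray-level threshold are available. A minimal sketch, with a synthetic image and an assumed threshold standing in for the automatically determined one:

```python
# Minimal sketch (illustrative, not the authors' pipeline): percent mammographic
# density as the thresholded dense-tissue area divided by the breast area.
import numpy as np

def percent_density(image, breast_mask, gray_threshold):
    """image: 2-D array of gray levels; breast_mask: boolean array marking the
    segmented breast region; gray_threshold: level separating dense tissue."""
    breast_area = breast_mask.sum()
    dense_area = np.logical_and(breast_mask, image >= gray_threshold).sum()
    return 100.0 * dense_area / breast_area

# Hypothetical example with a synthetic low-resolution (0.8 mm/pixel) image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256))
mask = np.zeros((256, 256), dtype=bool)
mask[32:224, 32:160] = True
print(f"percent density: {percent_density(img, mask, 180):.1f}%")
```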
NASA Astrophysics Data System (ADS)
Bormann, K.; Painter, T. H.; Marks, D. G.; Kirchner, P. B.; Winstral, A. H.; Ramirez, P.; Goodale, C. E.; Richardson, M.; Berisford, D. F.
2014-12-01
In the western US, snowmelt from the mountains contributes the vast majority of the fresh water supply in an otherwise dry region. With much of California currently experiencing extreme drought, it is critical for water managers to have accurate basin-wide estimates of snow water content during the spring melt season. At the forefront of basin-scale snow monitoring is the Jet Propulsion Laboratory's Airborne Snow Observatory (ASO). With combined LiDAR/spectrometer instruments and weekly flights over key basins throughout California, the ASO suite is capable of retrieving high-resolution basin-wide snow depth and albedo observations. To make best use of these high-resolution snow depths, spatially distributed snow density data are required to derive snow water equivalent (SWE) from the measured depths. Snow density is a spatially and temporally variable property and is difficult to estimate at basin scales. Currently, ASO uses a physically based snow model (iSnobal) to resolve distributed snow density dynamics across the basin. However, there are issues with the density algorithms in iSnobal, particularly for snow depths below 0.50 m. This shortcoming limited the use of snow density fields from iSnobal during the poor snowfall year of 2014 in the Sierra Nevada, where snow depths were generally low. A deeper understanding of iSnobal model performance and uncertainty for snow density estimation is required. In this study, the model is compared to an existing climate-based statistical method for basin-wide snow density estimation in the Tuolumne basin in the Sierra Nevada and to sparse field density measurements. The objective of this study is to improve the water resource information provided to water managers during future ASO operations by reducing the uncertainty introduced in the snow depth to SWE conversion.
Estimated areal extent of colonies of black-tailed prairie dogs in the northern Great Plains
Sidle, John G.; Johnson, Douglas H.; Euliss, Betty R.
2001-01-01
During 1997–1998, we undertook an aerial survey, with an aerial line-intercept technique, to estimate the extent of colonies of black-tailed prairie dogs (Cynomys ludovicianus) in the northern Great Plains states of Nebraska, North Dakota, South Dakota, and Wyoming. We stratified the survey based on knowledge of colony locations, computed 2 types of estimates for each stratum, and combined ratio estimates for high-density strata with average density estimates for low-density strata. Estimates of colony areas for black-tailed prairie dogs were derived from the average percentages of lines intercepting prairie dog colonies and ratio estimators. We selected the best estimator based on the correlation between length of transect line and length of intercepted colonies. Active colonies of black-tailed prairie dogs occupied 2,377.8 km2 ± 186.4 SE, whereas inactive colonies occupied 560.4 ± 89.2 km2. These data represent the 1st quantitative assessment of black-tailed prairie dog colonies in the northern Great Plains. The survey dispels popular notions that millions of hectares of colonies of black-tailed prairie dogs exist in the northern Great Plains and can form the basis for future survey efforts.
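A minimal sketch of the line-intercept ratio idea, assuming the simplest form of the estimator (total intercepted colony length over total transect length, scaled by stratum area); the transect values are hypothetical and this is not the survey's exact estimator or variance calculation:

```python
# Minimal sketch: line-intercept ratio estimate of colony area within a stratum.
import numpy as np

def ratio_area_estimate(intercept_lengths_km, transect_lengths_km, stratum_area_km2):
    # Ratio of intercepted colony length to total transect length,
    # scaled up to the area of the stratum.
    r_hat = np.sum(intercept_lengths_km) / np.sum(transect_lengths_km)
    return r_hat * stratum_area_km2

# Hypothetical high-density stratum: 40 transects with intercepted lengths in km.
rng = np.random.default_rng(1)
transects = rng.uniform(20, 60, size=40)              # transect lengths (km)
intercepts = transects * rng.uniform(0.0, 0.05, 40)   # colony intercepts (km)
print(ratio_area_estimate(intercepts, transects, 25000.0), "km^2 of colonies")
```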
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Robert N; White, Devin A; Urban, Marie L
2013-01-01
The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort, which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
Can You Tell the Density of the Watermelon from This Photograph?
ERIC Educational Resources Information Center
Foong, See Kit; Lim, Chim Chai
2010-01-01
Based on a photograph, the density of a watermelon floating in a pail of water is estimated with different levels of simplification--with and without consideration of refraction and three-dimensional effects. The watermelon was approximated as a sphere. The results of the theoretical estimations were verified experimentally. (Contains 6 figures.)
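For a floating body, Archimedes' principle gives the mean density as the submerged volume fraction times the water density. Under the sphere approximation used in the article (and ignoring refraction), a minimal sketch with made-up measurements read off a photograph is:

```python
# Minimal sketch: density of a floating sphere from its submerged depth.
import numpy as np

def sphere_submerged_fraction(depth, radius):
    """Volume fraction of a sphere submerged to a given depth (spherical cap)."""
    h = np.clip(depth, 0.0, 2.0 * radius)
    cap = np.pi * h**2 * (3.0 * radius - h) / 3.0
    return cap / (4.0 / 3.0 * np.pi * radius**3)

rho_water = 1000.0                                           # kg/m^3
frac = sphere_submerged_fraction(depth=0.17, radius=0.10)    # hypothetical photo reading (m)
print(f"estimated watermelon density: {frac * rho_water:.0f} kg/m^3")
```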
Rayan, D Mark; Mohamad, Shariff Wan; Dorward, Leejiah; Aziz, Sheema Abdul; Clements, Gopalasamy Reuben; Christopher, Wong Chai Thiam; Traeholt, Carl; Magintan, David
2012-12-01
The endangered Asian tapir (Tapirus indicus) is threatened by large-scale habitat loss, forest fragmentation and increased hunting pressure. Conservation planning for this species, however, is hampered by a severe paucity of information on its ecology and population status. We present the first Asian tapir population density estimate from a camera trapping study targeting tigers in a selectively logged forest within Peninsular Malaysia using a spatially explicit capture-recapture maximum likelihood based framework. With a trap effort of 2496 nights, 17 individuals were identified corresponding to a density (standard error) estimate of 9.49 (2.55) adult tapirs/100 km². Although our results include several caveats, we believe that our density estimate still serves as an important baseline to facilitate the monitoring of tapir population trends in Peninsular Malaysia. Our study also highlights the potential of extracting vital ecological and population information for other cryptic individually identifiable animals from tiger-centric studies, especially with the use of a spatially explicit capture-recapture maximum likelihood based framework. © 2012 Wiley Publishing Asia Pty Ltd, ISZS and IOZ/CAS.
NASA Astrophysics Data System (ADS)
Calabia, Andres; Jin, Shuanggen
2017-02-01
The thermospheric mass density variations and the thermosphere-ionosphere coupling during geomagnetic storms are not well understood due to a lack of observations and large uncertainties in the models. Although accelerometers on board Low-Earth-Orbit (LEO) satellites can measure non-gravitational accelerations and derive thermospheric mass density variations with unprecedented detail, their measurements are not always available (e.g., for the March 2013 geomagnetic storm). In order to cover accelerometer data gaps of the Gravity Recovery and Climate Experiment (GRACE), we estimate thermospheric mass densities from numerical differentiation of GRACE-determined precise orbit ephemeris (POE) for the period 2011-2016. Our results show good correlation with accelerometer-based mass densities and a better estimation than the NRLMSISE00 empirical model. Furthermore, we statistically analyze the differences from accelerometer-based densities, and study the March 2013 geomagnetic storm response. The thermospheric density enhancements at the polar regions on 17 March 2013 are clearly represented by the POE-based measurements. Although the density variations overall correlate better with the Dst and K-derived geomagnetic indices, the auroral electrojet activity index AE as well as the merging electric field Em show better agreement at high latitudes for the March 2013 geomagnetic storm. Low-latitude variations, in contrast, are better represented by the Dst index. With the increasing resolution and accuracy of Precise Orbit Determination (POD) products and LEO satellites, the straightforward technique of determining non-gravitational accelerations and thermospheric mass densities through numerical differentiation of POE holds promise for the upper atmosphere research community.
NASA Astrophysics Data System (ADS)
Chen, Biao; Ruth, Chris; Jing, Zhenxue; Ren, Baorui; Smith, Andrew; Kshirsagar, Ashwini
2014-03-01
Breast density has been identified as a risk factor for developing breast cancer and an indicator of diagnostic obstruction of lesions due to the masking effect. Volumetric density measurement evaluates fibro-glandular volume, breast volume, and breast volume density measures that have potential advantages over area density measurement in risk assessment. One class of volumetric density computation methods is based on finding the relative fibro-glandular tissue attenuation with respect to the reference fat tissue, and the estimation of the effective x-ray attenuation difference between fibro-glandular and fat tissue is key to volumetric breast density computation. We have modeled the effective attenuation difference as a function of the actual x-ray skin entrance spectrum, breast thickness, fibro-glandular tissue thickness distribution, and detector efficiency. Compared to other approaches, our method has three advantages: (1) it avoids system calibration-based creation of effective attenuation differences, which may require tedious calibrations for each imaging system and may not reflect spectrum changes or scatter-induced overestimation or underestimation of breast density; (2) it obtains system-specific, separate and differential attenuation values of fibro-glandular and fat tissue for each mammographic image; and (3) it further reduces the impact of breast thickness accuracy on volumetric breast density. A quantitative breast volume phantom with a set of equivalent fibro-glandular thicknesses has been used to evaluate the volumetric breast density measurement with the proposed method. The experimental results have shown that the method has significantly improved the accuracy of estimating breast density.
A Comparative Evaluation of Anomaly Detection Algorithms for Maritime Video Surveillance
2011-01-01
of k-means clustering and the k-NN Localized p-value Estimator (KNN-LPE). K-means is a popular distance-based clustering algorithm while KNN-LPE...implemented the sparse cluster identification rule we described in Section 3.1. 2. k-NN Localized p-value Estimator (KNN-LPE): We implemented this using...Average Density (KNN-NAD): This was implemented as described in Section 3.4. Algorithm Parameter Settings The global and local density-based anomaly
Precision Orbit Derived Atmospheric Density: Development and Performance
NASA Astrophysics Data System (ADS)
McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.
2012-09-01
Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities and considering ballistic coefficient estimation results. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-2000 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities; in particular, catalog maintenance. Generally, density is the largest error source in satellite drag calculations and current empirical density models such as Jacchia 71 and NRLMSISE-2000 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available using Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.
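The stitching step mentioned above can be illustrated with a simple linear weighted blend across the overlap of two consecutive density solutions. This is an assumed form of the blending, not the authors' code, and the sample time series are synthetic:

```python
# Minimal sketch: linear weighted blending of two overlapping density solutions.
import numpy as np

def blend_overlap(t1, d1, t2, d2):
    """t1/t2: increasing time stamps (hours); d1/d2: density solutions (kg/m^3).
    The blend weight ramps linearly from solution 1 to solution 2 in the overlap."""
    t_start, t_end = t2[0], t1[-1]                  # overlap limits
    t = np.union1d(t1, t2)
    out = np.empty_like(t)
    out[t < t_start] = np.interp(t[t < t_start], t1, d1)
    out[t > t_end] = np.interp(t[t > t_end], t2, d2)
    mask = (t >= t_start) & (t <= t_end)
    w = (t[mask] - t_start) / (t_end - t_start)     # 0 -> solution 1, 1 -> solution 2
    out[mask] = (1 - w) * np.interp(t[mask], t1, d1) + w * np.interp(t[mask], t2, d2)
    return t, out

# Hypothetical 14-hour solutions overlapping by two hours.
t1 = np.arange(0.0, 14.0, 0.25); d1 = 4e-12 + 1e-13 * np.sin(t1)
t2 = np.arange(12.0, 26.0, 0.25); d2 = 4e-12 + 1e-13 * np.sin(t2) + 2e-13
t, d = blend_overlap(t1, d1, t2, d2)
print(t.shape, d.min(), d.max())
```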
Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A
2013-07-01
Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0·f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the parameter estimation region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with those using the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
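A minimal sketch of two of the ingredients named above, using Welch's method for the power spectral density and a log-log least-squares fit for the power-law parameters; the RF segment and attenuation values are synthetic stand-ins, and this is not the study's reference-phantom implementation:

```python
# Minimal sketch: Welch PSD of an RF echo segment and a power-law fit
# alpha(f) = alpha0 * f**beta to attenuation estimates.
import numpy as np
from scipy.signal import welch

fs = 40e6                                           # hypothetical 40 MHz sampling rate
rf = np.random.randn(4096)                          # stand-in RF echo segment
f, psd = welch(rf, fs=fs, nperseg=512)              # Welch periodogram of the segment
print("total power:", psd.sum() * (f[1] - f[0]))

# Hypothetical attenuation estimates (dB/cm) over the usable bandwidth.
freq_mhz = np.linspace(2.0, 8.0, 13)
alpha = 0.5 * freq_mhz**1.1 * (1 + 0.05 * np.random.randn(13))

# Least-squares power-law fit in log-log space: log(alpha) = beta*log(f) + log(alpha0).
beta, log_alpha0 = np.polyfit(np.log(freq_mhz), np.log(alpha), 1)
print(f"alpha0 = {np.exp(log_alpha0):.2f} dB/cm/MHz^beta, beta = {beta:.2f}")
```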
Morin, Dana J.; Fuller, Angela K.; Royle, J. Andrew; Sutherland, Chris
2017-01-01
Conservation and management of spatially structured populations is challenging because solutions must consider where individuals are located, but also differential individual space use as a result of landscape heterogeneity. A recent extension of spatial capture–recapture (SCR) models, the ecological distance model, uses spatial encounter histories of individuals (e.g., a record of where individuals are detected across space, often sequenced over multiple sampling occasions), to estimate the relationship between space use and characteristics of a landscape, allowing simultaneous estimation of both local densities of individuals across space and connectivity at the scale of individual movement. We developed two model-based estimators derived from the SCR ecological distance model to quantify connectivity over a continuous surface: (1) potential connectivity—a metric of the connectivity of areas based on resistance to individual movement; and (2) density-weighted connectivity (DWC)—potential connectivity weighted by estimated density. Estimates of potential connectivity and DWC can provide spatial representations of areas that are most important for the conservation of threatened species, or management of abundant populations (i.e., areas with high density and landscape connectivity), and thus generate predictions that have great potential to inform conservation and management actions. We used a simulation study with a stationary trap design across a range of landscape resistance scenarios to evaluate how well our model estimates resistance, potential connectivity, and DWC. Correlation between true and estimated potential connectivity was high, and there was positive correlation and high spatial accuracy between estimated DWC and true DWC. We applied our approach to data collected from a population of black bears in New York, and found that forested areas represented low levels of resistance for black bears. We demonstrate that formal inference about measures of landscape connectivity can be achieved from standard methods of studying animal populations which yield individual encounter history data such as camera trapping. Resulting biological parameters including resistance, potential connectivity, and DWC estimate the spatial distribution and connectivity of the population within a statistical framework, and we outline applications to many possible conservation and management problems.
NASA Astrophysics Data System (ADS)
Freeman, P. E.; Izbicki, R.; Lee, A. B.
2017-07-01
Photometric redshift estimation is an indispensable tool of precision cosmology. One problem that plagues the use of this tool in the era of large-scale sky surveys is that the bright galaxies that are selected for spectroscopic observation do not have properties that match those of (far more numerous) dimmer galaxies; thus, ill-designed empirical methods that produce accurate and precise redshift estimates for the former generally will not produce good estimates for the latter. In this paper, we provide a principled framework for generating conditional density estimates (i.e. photometric redshift PDFs) that takes into account selection bias and the covariate shift that this bias induces. We base our approach on the assumption that the probability that astronomers label a galaxy (i.e. determine its spectroscopic redshift) depends only on its measured (photometric and perhaps other) properties x and not on its true redshift. With this assumption, we can explicitly write down risk functions that allow us to both tune and compare methods for estimating importance weights (i.e. the ratio of densities of unlabelled and labelled galaxies for different values of x) and conditional densities. We also provide a method for combining multiple conditional density estimates for the same galaxy into a single estimate with better properties. We apply our risk functions to an analysis of ≈10⁶ galaxies, mostly observed by the Sloan Digital Sky Survey, and demonstrate through multiple diagnostic tests that our method achieves good conditional density estimates for the unlabelled galaxies.
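One common way to estimate such importance weights under the stated assumption (labelling depends only on the measured properties x) is to train a probabilistic classifier separating labelled from unlabelled objects and convert its output into a density ratio. This is a hedged sketch with synthetic features, not the authors' estimator:

```python
# Minimal sketch: importance weights w(x) = p_unlabelled(x) / p_labelled(x)
# from a classifier that predicts membership in the unlabelled set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x_lab = rng.normal(0.0, 1.0, size=(2000, 3))      # photometric features, labelled set
x_unl = rng.normal(0.5, 1.2, size=(8000, 3))      # unlabelled set (covariate shift)

X = np.vstack([x_lab, x_unl])
y = np.concatenate([np.zeros(len(x_lab)), np.ones(len(x_unl))])  # 1 = unlabelled

clf = LogisticRegression(max_iter=1000).fit(X, y)
p_unl = clf.predict_proba(x_lab)[:, 1]
prior_ratio = len(x_lab) / len(x_unl)              # corrects for unequal set sizes
weights = prior_ratio * p_unl / (1.0 - p_unl)      # importance weights on labelled data
print(weights.mean(), weights.max())
```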
Ku, Bon Ki; Evans, Douglas E.
2015-01-01
For nanoparticles with nonspherical morphologies, e.g., open agglomerates or fibrous particles, it is expected that the actual density of agglomerates may be significantly different from the bulk material density. It is further expected that using the material density may upset the relationship between surface area and mass when a method for estimating aerosol surface area from number and mass concentrations (referred to as “Maynard’s estimation method”) is used. Therefore, it is necessary to quantitatively investigate how much the Maynard’s estimation method depends on particle morphology and density. In this study, aerosol surface area estimated from number and mass concentration measurements was evaluated and compared with values from two reference methods: a method proposed by Lall and Friedlander for agglomerates and a mobility based method for compact nonspherical particles using well-defined polydisperse aerosols with known particle densities. Polydisperse silver aerosol particles were generated by an aerosol generation facility. Generated aerosols had a range of morphologies, count median diameters (CMD) between 25 and 50 nm, and geometric standard deviations (GSD) between 1.5 and 1.8. The surface area estimates from number and mass concentration measurements correlated well with the two reference values when gravimetric mass was used. The aerosol surface area estimates from the Maynard’s estimation method were comparable to the reference method for all particle morphologies within the surface area ratios of 3.31 and 0.19 for assumed GSDs 1.5 and 1.8, respectively, when the bulk material density of silver was used. The difference between the Maynard’s estimation method and surface area measured by the reference method for fractal-like agglomerates decreased from 79% to 23% when the measured effective particle density was used, while the difference for nearly spherical particles decreased from 30% to 24%. The results indicate that the use of particle density of agglomerates improves the accuracy of the Maynard’s estimation method and that an effective density should be taken into account, when known, when estimating aerosol surface area of nonspherical aerosol such as open agglomerates and fibrous particles. PMID:26526560
Evaluating lidar point densities for effective estimation of aboveground biomass
Wu, Zhuoting; Dye, Dennis G.; Stoker, Jason M.; Vogel, John M.; Velasco, Miguel G.; Middleton, Barry R.
2016-01-01
The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) was recently established to provide airborne lidar data coverage on a national scale. As part of a broader research effort of the USGS to develop an effective remote sensing-based methodology for the creation of an operational biomass Essential Climate Variable (Biomass ECV) data product, we evaluated the performance of airborne lidar data at various pulse densities against Landsat 8 satellite imagery in estimating above ground biomass for forests and woodlands in a study area in east-central Arizona, U.S. High point density airborne lidar data were randomly sampled to produce five lidar datasets with reduced densities ranging from 0.5 to 8 points/m², corresponding to the point density range of 3DEP to provide national lidar coverage over time. Lidar-derived aboveground biomass estimate errors showed an overall decreasing trend as lidar point density increased from 0.5 to 8 points/m². Landsat 8-based aboveground biomass estimates produced errors larger than the lowest lidar point density of 0.5 points/m², and therefore Landsat 8 observations alone were ineffective relative to airborne lidar for generating a Biomass ECV product, at least for the forest and woodland vegetation types of the Southwestern U.S. While a national Biomass ECV product with optimal accuracy could potentially be achieved with 3DEP data at 8 points/m², our results indicate that even lower density lidar data could be sufficient to provide a national Biomass ECV product with accuracies significantly higher than that from Landsat observations alone.
Estimation of dislocations density and distribution of dislocations during ECAP-Conform process
NASA Astrophysics Data System (ADS)
Derakhshan, Jaber Fakhimi; Parsa, Mohammad Habibi; Ayati, Vahid; Jafarian, Hamidreza
2018-01-01
The dislocation density of a coarse-grained aluminum AA1100 alloy (140 µm grain size), severely deformed by Equal Channel Angular Pressing-Conform (ECAP-Conform), is studied at various stages of the process by the electron backscatter diffraction (EBSD) method. The geometrically necessary dislocation (GND) and statistically stored dislocation (SSD) densities were estimated. The total dislocation densities were then calculated and the dislocation distributions presented as contour maps. The estimated average dislocation density increases from about 2×10¹² m⁻² in the annealed state to 4×10¹³ m⁻² at the middle of the groove (135° from the entrance), and reaches 6.4×10¹³ m⁻² at the end of the groove just before the ECAP region. The calculated average dislocation density for the sample severely deformed by one pass reached 6.2×10¹⁴ m⁻². At the micrometer scale, the behavior of metals, especially their mechanical properties, largely depends on the dislocation density and distribution. Yield stresses at the different conditions were therefore estimated based on the calculated dislocation densities, and the estimated yield stresses compared well with experimental results. Although the grain size did not change appreciably, the yield stress increased markedly due to the development of a cell structure. The considerable increase in dislocation density during the process supports the formation of subgrains and cell structures, which can explain the increase in yield stress.
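The dislocation-density-to-yield-stress step can be illustrated with the standard Taylor hardening relation σ_y = σ₀ + MαGb√ρ. The parameter values below are typical literature values for aluminum assumed for illustration, not the authors' calibration:

```python
# Minimal sketch: Taylor-type yield stress estimate from dislocation density.
import numpy as np

def taylor_yield_stress(rho, sigma0=20e6, M=3.06, alpha=0.3, G=26e9, b=2.86e-10):
    """rho: dislocation density (m^-2); returns an estimated yield stress in Pa.
    sigma0: friction stress, M: Taylor factor, G: shear modulus, b: Burgers vector."""
    return sigma0 + M * alpha * G * b * np.sqrt(rho)

for rho in (2e12, 6.4e13, 6.2e14):
    print(f"rho = {rho:.1e} m^-2 -> sigma_y ~ {taylor_yield_stress(rho) / 1e6:.0f} MPa")
```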
Yule, Daniel L.; Adams, Jean V.; Warner, David M.; Hrabik, Thomas R.; Kocovsky, Patrick M.; Weidel, Brian C.; Rudstam, Lars G.; Sullivan, Patrick J.
2013-01-01
Pelagic fish assessments often combine large amounts of acoustic-based fish density data and limited midwater trawl information to estimate species-specific biomass density. We compared the accuracy of five apportionment methods for estimating pelagic fish biomass density using simulated communities with known fish numbers that mimic Lakes Superior, Michigan, and Ontario, representing a range of fish community complexities. Across all apportionment methods, the error in the estimated biomass generally declined with increasing effort, but methods that accounted for community composition changes with water column depth performed best. Correlations between trawl catch and the true species composition were highest when more fish were caught, highlighting the benefits of targeted trawling in locations of high fish density. Pelagic fish surveys should incorporate geographic and water column depth stratification in the survey design, use apportionment methods that account for species-specific depth differences, target midwater trawling effort in areas of high fish density, and include at least 15 midwater trawls. With relatively basic biological information, simulations of fish communities and sampling programs can optimize effort allocation and reduce error in biomass estimates.
An analytical framework for estimating aquatic species density from environmental DNA
Chambert, Thierry; Pilliod, David S.; Goldberg, Caren S.; Doi, Hideyuki; Takahara, Teruhiko
2018-01-01
Environmental DNA (eDNA) analysis of water samples is on the brink of becoming a standard monitoring method for aquatic species. This method has improved detection rates over conventional survey methods and thus has demonstrated effectiveness for estimation of site occupancy and species distribution. The frontier of eDNA applications, however, is to infer species density. Building upon previous studies, we present and assess a modeling approach that aims at inferring animal density from eDNA. The modeling combines eDNA and animal count data from a subset of sites to estimate species density (and associated uncertainties) at other sites where only eDNA data are available. As a proof of concept, we first perform a cross-validation study using experimental data on carp in mesocosms. In these data, fish densities are known without error, which allows us to test the performance of the method with known data. We then evaluate the model using field data from a study on a stream salamander species to assess the potential of this method to work in natural settings, where density can never be known with absolute certainty. Two alternative distributions (Normal and Negative Binomial) to model variability in eDNA concentration data are assessed. Assessment based on the proof of concept data (carp) revealed that the Negative Binomial model provided much more accurate estimates than the model based on a Normal distribution, likely because eDNA data tend to be overdispersed. Greater imprecision was found when we applied the method to the field data, but the Negative Binomial model still provided useful density estimates. We call for further model development in this direction, as well as further research targeted at sampling design optimization. It will be important to assess these approaches on a broad range of study systems.
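A minimal sketch of the Negative Binomial calibration idea, relating counts at paired sites to eDNA concentration and predicting density where only eDNA is available; the data are simulated, and the simple GLM below stands in for the authors' fuller model and its uncertainty propagation:

```python
# Minimal sketch: Negative Binomial regression of animal counts on eDNA concentration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
edna = rng.uniform(0.1, 10.0, size=30)              # eDNA concentration (copies/uL), calibration sites
true_rate = 2.0 * edna                              # hypothetical density-eDNA link
counts = rng.poisson(rng.gamma(shape=2.0, scale=true_rate / 2.0))  # overdispersed counts

X = sm.add_constant(np.log(edna))
fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()

new_edna = np.array([0.5, 2.0, 8.0])                # sites with eDNA data only
pred = fit.predict(sm.add_constant(np.log(new_edna)))
print(pred)                                         # predicted densities at eDNA-only sites
```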
NASA Astrophysics Data System (ADS)
Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai
2016-07-01
Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function used in the statistical modeling. As bandwidth estimation (BE) is a key issue in KDE, we also propose a BE method based on the kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, S; Tianjin University, Tianjin; Hara, W
Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem still remains, which is the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1 and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1 and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
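Assuming Gaussian forms for the two conditionals, the Bayesian fusion reduces to a precision-weighted combination; a minimal sketch with illustrative numbers, not the paper's multi-atlas models:

```python
# Minimal sketch: posterior mean/variance from the product of two Gaussian conditionals,
# one from an intensity model and one from a spatial (atlas) model.
import numpy as np

def fuse_gaussians(mu_int, var_int, mu_loc, var_loc):
    precision = 1.0 / var_int + 1.0 / var_loc
    mean = (mu_int / var_int + mu_loc / var_loc) / precision
    return mean, 1.0 / precision

# Hypothetical voxel: intensity model suggests a relative electron density of ~1.05,
# the spatial atlas suggests ~1.20 (near bone), with different confidences.
mean, var = fuse_gaussians(1.05, 0.02**2, 1.20, 0.08**2)
print(f"estimated relative electron density: {mean:.3f} (sd {np.sqrt(var):.3f})")
```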
Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions.
Najibi, Seyed Morteza; Maadooliat, Mehdi; Zhou, Lan; Huang, Jianhua Z; Gao, Xin
2017-01-01
Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric splines, which are more efficient than existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective on two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.
A method to estimate statistical errors of properties derived from charge-density modelling
Lecomte, Claude
2018-01-01
Estimating uncertainties of property values derived from a charge-density model is not straightforward. A methodology, based on calculation of sample standard deviations (SSD) of properties using randomly deviating charge-density models, is proposed with the MoPro software. The parameter shifts applied in the deviating models are generated in order to respect the variance-covariance matrix resulting from the least-squares refinement. This 'SSD methodology' procedure can be applied to estimate uncertainties of any property related to a charge-density model obtained by least-squares fitting. This includes topological properties such as critical point coordinates, electron density, Laplacian and ellipticity at critical points and charges integrated over atomic basins. Errors on electrostatic potentials and interaction energies are also available now through this procedure. The method is exemplified with the charge density of compound (E)-5-phenylpent-1-enylboronic acid, refined at 0.45 Å resolution. The procedure is implemented in the freely available MoPro program dedicated to charge-density refinement and modelling. PMID:29724964
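The "SSD methodology" can be illustrated generically: draw parameter sets from a multivariate normal consistent with the least-squares variance-covariance matrix, recompute the derived property for each draw, and take the sample standard deviation. The sketch below is a generic Monte Carlo propagation with a made-up two-parameter property, not MoPro itself:

```python
# Minimal sketch: Monte Carlo propagation of a variance-covariance matrix
# into the sample standard deviation of a derived property.
import numpy as np

def ssd_of_property(p_hat, cov, property_fn, n_draws=1000, seed=0):
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(p_hat, cov, size=n_draws)   # deviating parameter sets
    values = np.array([property_fn(p) for p in draws])
    return values.mean(), values.std(ddof=1)                    # mean and SSD of the property

# Hypothetical refined parameters, their covariance, and a nonlinear derived property.
p_hat = np.array([0.35, 1.20])
cov = np.array([[1.0e-4, 2.0e-5],
                [2.0e-5, 4.0e-4]])
prop = lambda p: p[0] * np.exp(-p[1])
print(ssd_of_property(p_hat, cov, prop))
```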
Seasonal Variability in Global Eddy Diffusion and the Effect on Thermospheric Neutral Density
NASA Astrophysics Data System (ADS)
Pilinski, M.; Crowley, G.
2014-12-01
We describe a method for making single-satellite estimates of the seasonal variability in global-average eddy diffusion coefficients. Eddy diffusion values as a function of time between January 2004 and January 2008 were estimated from residuals of neutral density measurements made by the CHallenging Minisatellite Payload (CHAMP) and simulations made using the Thermosphere Ionosphere Mesosphere Electrodynamics - Global Circulation Model (TIME-GCM). The eddy diffusion coefficient results are quantitatively consistent with previous estimates based on satellite drag observations and are qualitatively consistent with other measurement methods such as sodium lidar observations and eddy-diffusivity models. The eddy diffusion coefficient values estimated between January 2004 and January 2008 were then used to generate new TIME-GCM results. Based on these results, the RMS difference between the TIME-GCM model and density data from a variety of satellites is reduced by an average of 5%. This result indicates that global thermospheric density modeling can be improved by using data from a single satellite like CHAMP. This approach also demonstrates how eddy diffusion could be estimated in near real-time from satellite observations and used to drive a global circulation model like TIME-GCM. Although the use of global values improves modeled neutral densities, there are some limitations of this method, which are discussed, including that the latitude-dependence of the seasonal neutral-density signal is not completely captured by a global variation of eddy diffusion coefficients. This demonstrates the need for a latitude-dependent specification of eddy diffusion consistent with diffusion observations made by other techniques.
Seasonal variability in global eddy diffusion and the effect on neutral density
NASA Astrophysics Data System (ADS)
Pilinski, M. D.; Crowley, G.
2015-04-01
We describe a method for making single-satellite estimates of the seasonal variability in global-average eddy diffusion coefficients. Eddy diffusion values as a function of time were estimated from residuals of neutral density measurements made by the Challenging Minisatellite Payload (CHAMP) and simulations made using the thermosphere-ionosphere-mesosphere electrodynamics global circulation model (TIME-GCM). The eddy diffusion coefficient results are quantitatively consistent with previous estimates based on satellite drag observations and are qualitatively consistent with other measurement methods such as sodium lidar observations and eddy diffusivity models. Eddy diffusion coefficient values estimated between January 2004 and January 2008 were then used to generate new TIME-GCM results. Based on these results, the root-mean-square sum for the TIME-GCM model is reduced by an average of 5% when compared to density data from a variety of satellites, indicating that the fidelity of global density modeling can be improved by using data from a single satellite like CHAMP. This approach also demonstrates that eddy diffusion could be estimated in near real-time from satellite observations and used to drive a global circulation model like TIME-GCM. Although the use of global values improves modeled neutral densities, there are limitations to this method, which are discussed, including that the latitude dependence of the seasonal neutral-density signal is not completely captured by a global variation of eddy diffusion coefficients. This demonstrates the need for a latitude-dependent specification of eddy diffusion which is also consistent with diffusion observations made by other techniques.
Active learning for noisy oracle via density power divergence.
Sogawa, Yasuhiro; Ueno, Tsuyoshi; Kawahara, Yoshinobu; Washio, Takashi
2013-10-01
The accuracy of active learning is critically influenced by the existence of noisy labels given by a noisy oracle. In this paper, we propose a novel pool-based active learning framework through robust measures based on density power divergence. By minimizing density power divergence, such as β-divergence and γ-divergence, one can estimate the model accurately even under the existence of noisy labels within data. Accordingly, we develop query selecting measures for pool-based active learning using these divergences. In addition, we propose an evaluation scheme for these measures based on asymptotic statistical analyses, which enables us to perform active learning by evaluating an estimation error directly. Experiments with benchmark datasets and real-world image datasets show that our active learning scheme performs better than several baseline methods. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gastounioti, Aimilia; Hsieh, Meng-Kang; Pantalone, Lauren; Conant, Emily F.; Kontos, Despina
2018-03-01
Mammographic density is an established risk factor for breast cancer. However, area-based density (ABD) measured in 2D mammograms considers the projection rather than the actual volume of dense tissue, which may be an important limitation. With the increasing utilization of digital breast tomosynthesis (DBT) in screening, there is an opportunity to routinely estimate volumetric breast density (VBD). In this study, we investigate associations between DBT-VBD and ABD extracted from standard-dose mammography (DM) and synthetic 2D digital mammography (sDM), which is increasingly replacing DM. We retrospectively analyzed bilateral imaging data from a random sample of 1000 women, acquired over a transitional period at our institution when all women had DBT, sDM and DM acquired as part of their routine breast screening. For each exam, ABD was measured in DM and sDM images with the publicly available "LIBRA" software, while DBT-VBD was measured using a previously validated, fully-automated computer algorithm. Spearman correlation (r) was used to compare VBD to ABD measurements. For each density measure, we also estimated the within-woman intraclass correlation (ICC), and finally, to compare to clinical assessments, we performed analysis of variance (ANOVA) to evaluate the variation across the assigned clinical BI-RADS breast density categories. DBT-VBD was moderately correlated to ABD from DM (r=0.70) and sDM (r=0.66). All density measures had strong bilateral symmetry (ICC = [0.85, 0.95]), but were significantly different across BI-RADS density categories (ANOVA, p<0.001). Our results contribute to further elaborating the clinical implications of breast density measures estimated with DBT, which may better capture the volumetric amount of dense tissue within the breast than area-based measures and visual assessment.
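A minimal sketch of the reported statistics using standard formulas (Spearman correlation and a one-way intraclass correlation for left/right symmetry); the data are simulated, not the study's:

```python
# Minimal sketch: Spearman correlation between two density measures and a
# one-way ICC for bilateral (left/right) agreement.
import numpy as np
from scipy.stats import spearmanr

def icc_oneway(measurements):
    """measurements: (n_subjects, k_repeats) array; returns ICC(1,1)."""
    n, k = measurements.shape
    grand = measurements.mean()
    ms_between = k * ((measurements.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((measurements - measurements.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(3)
vbd = rng.uniform(3, 30, 200)                        # hypothetical volumetric density (%)
abd = 2.0 * vbd + rng.normal(0, 8, 200)              # hypothetical area-based density (%)
rho, p = spearmanr(vbd, abd)
print("Spearman r:", rho)

left = rng.uniform(3, 30, 200)
right = left + rng.normal(0, 2, 200)
print("ICC (left/right):", icc_oneway(np.column_stack([left, right])))
```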
NASA Astrophysics Data System (ADS)
Victor, Rodolfo A.; Prodanović, Maša.; Torres-Verdín, Carlos
2017-12-01
We develop a new Monte Carlo-based inversion method for estimating electron density and effective atomic number from 3-D dual-energy computed tomography (CT) core scans. The method accounts for uncertainties in X-ray attenuation coefficients resulting from the polychromatic nature of X-ray beam sources of medical and industrial scanners, in addition to delivering uncertainty estimates of inversion products. Estimation of electron density and effective atomic number from CT core scans enables direct deterministic or statistical correlations with salient rock properties for improved petrophysical evaluation; this condition is specifically important in media such as vuggy carbonates where CT resolution better captures core heterogeneity that dominates fluid flow properties. Verification tests of the inversion method performed on a set of highly heterogeneous carbonate cores yield very good agreement with in situ borehole measurements of density and photoelectric factor.
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
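A minimal sketch of a Gaussian kernel density estimator with a data-based scaling factor; Silverman's rule of thumb is used here as one common automatic choice and is not the report's interactive algorithm:

```python
# Minimal sketch: Gaussian kernel density estimator with an automatic scaling factor.
import numpy as np

def kde_gaussian(x_eval, sample, h=None):
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    if h is None:
        # Silverman's rule of thumb based on the standard deviation and IQR.
        iqr = np.subtract(*np.percentile(sample, [75, 25]))
        h = 0.9 * min(sample.std(ddof=1), iqr / 1.349) * n ** (-1 / 5)
    u = (np.asarray(x_eval)[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(7)
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])
grid = np.linspace(-4, 4, 9)
print(kde_gaussian(grid, data))
```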
A spatially explicit capture-recapture estimator for single-catch traps.
Distiller, Greg; Borchers, David L
2015-11-01
Single-catch traps are frequently used in live-trapping studies of small mammals. Thus far, a likelihood for single-catch traps has proven elusive and usually the likelihood for multicatch traps is used for spatially explicit capture-recapture (SECR) analyses of such data. Previous work found the multicatch likelihood to provide a robust estimator of average density. We build on a recently developed continuous-time model for SECR to derive a likelihood for single-catch traps. We use this to develop an estimator based on observed capture times and compare its performance by simulation to that of the multicatch estimator for various scenarios with nonconstant density surfaces. While the multicatch estimator is found to be a surprisingly robust estimator of average density, its performance deteriorates with high trap saturation and increasing density gradients. Moreover, it is found to be a poor estimator of the height of the detection function. By contrast, the single-catch estimators of density, distribution, and detection function parameters are found to be unbiased or nearly unbiased in all scenarios considered. This gain comes at the cost of higher variance. If there is no interest in interpreting the detection function parameters themselves, and if density is expected to be fairly constant over the survey region, then the multicatch estimator performs well with single-catch traps. However if accurate estimation of the detection function is of interest, or if density is expected to vary substantially in space, then there is merit in using the single-catch estimator when trap saturation is above about 60%. The estimator's performance is improved if care is taken to place traps so as to span the range of variables that affect animal distribution. As a single-catch likelihood with unknown capture times remains intractable for now, researchers using single-catch traps should aim to incorporate timing devices with their traps.
Population density estimated from locations of individuals on a passive detector array
Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.
2009-01-01
The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.
DCMDN: Deep Convolutional Mixture Density Network
NASA Astrophysics Data System (ADS)
D'Isanto, Antonio; Polsterer, Kai Lars
2017-09-01
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
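A minimal sketch of the output representation and one of the scores named above: evaluating a Gaussian mixture redshift PDF and computing a sample-based CRPS estimate. The mixture parameters are hypothetical, and this is not the DCMDN code:

```python
# Minimal sketch: Gaussian mixture redshift PDF and a sample-based CRPS estimate.
import numpy as np
from scipy.stats import norm

def gmm_pdf(z, weights, means, sigmas):
    z = np.atleast_1d(z)[:, None]
    return (weights * norm.pdf(z, loc=means, scale=sigmas)).sum(axis=1)

def crps_from_samples(samples, z_true):
    """CRPS ~ E|Z - z_true| - 0.5 * E|Z - Z'| for samples Z from the predictive PDF."""
    term1 = np.abs(samples - z_true).mean()
    term2 = np.abs(samples[:, None] - samples[None, :]).mean()
    return term1 - 0.5 * term2

# Hypothetical two-component mixture output for one object.
w, mu, sig = np.array([0.7, 0.3]), np.array([0.42, 0.55]), np.array([0.02, 0.05])
rng = np.random.default_rng(0)
comp = rng.choice(2, size=2000, p=w)
draws = rng.normal(mu[comp], sig[comp])
print(gmm_pdf([0.40, 0.45, 0.55], w, mu, sig))
print("CRPS at z_true=0.44:", crps_from_samples(draws, 0.44))
```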
The multicategory case of the sequential Bayesian pixel selection and estimation procedure
NASA Technical Reports Server (NTRS)
Pore, M. D.; Dennis, T. B. (Principal Investigator)
1980-01-01
A Bayesian technique for stratified proportion estimation and a sampling scheme based on minimizing the mean squared error of this estimator were developed and tested on LANDSAT multispectral scanner data, using the beta density function to model the prior distribution in the two-class case. An extension of this procedure to the k-class case is considered. A generalization of the beta function is shown to be a density function for the general case, which allows the procedure to be extended.
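In the two-class case with a Beta prior, the proportion update is conjugate; a minimal sketch with hypothetical prior parameters and pixel counts (the sequential pixel-selection rule itself is not reproduced):

```python
# Minimal sketch: conjugate Beta-binomial update of a class proportion.
from scipy.stats import beta

a0, b0 = 4.0, 6.0              # hypothetical Beta prior (prior mean 0.4)
n_sampled, n_class1 = 50, 27   # labelled pixels drawn from the stratum

a_post, b_post = a0 + n_class1, b0 + (n_sampled - n_class1)
post = beta(a_post, b_post)
print("posterior mean proportion:", post.mean())
print("95% credible interval:", post.interval(0.95))
```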
Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S
2015-01-16
Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
Crowd density estimation based on convolutional neural networks with mixed pooling
NASA Astrophysics Data System (ADS)
Zhang, Li; Zheng, Hong; Zhang, Ying; Zhang, Dongming
2017-09-01
Crowd density estimation is an important topic in the fields of machine learning and video surveillance. Existing methods do not provide satisfactory classification accuracy; moreover, they have difficulty in adapting to complex scenes. Therefore, we propose a method based on convolutional neural networks (CNNs). The proposed method improves the performance of crowd density estimation in two key ways. First, we propose a feature pooling method, named mixed pooling, to regularize the CNNs. It replaces deterministic pooling operations with a learned parameter that combines conventional max pooling and average pooling. Second, we present a classification strategy in which an image is divided into two cells that are categorized separately. The proposed approach was evaluated on three datasets: two ground truth image sequences and the University of California, San Diego, anomaly detection dataset. The results demonstrate that the proposed approach performs more effectively and is easier to apply than other methods.
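One plausible reading of the mixed pooling idea is a learnable gate that blends max and average pooling; the sketch below is an interpretation for illustration, not the authors' implementation:

```python
# Minimal sketch: a pooling layer that blends max and average pooling with a learned weight.
import torch
import torch.nn as nn

class MixedPool2d(nn.Module):
    def __init__(self, kernel_size=2, stride=2):
        super().__init__()
        self.max_pool = nn.MaxPool2d(kernel_size, stride)
        self.avg_pool = nn.AvgPool2d(kernel_size, stride)
        self.mix_logit = nn.Parameter(torch.zeros(1))   # learned blending parameter

    def forward(self, x):
        a = torch.sigmoid(self.mix_logit)                # keep the mixing weight in (0, 1)
        return a * self.max_pool(x) + (1 - a) * self.avg_pool(x)

# Usage on a hypothetical feature map (batch of 8, 16 channels, 64x64).
pool = MixedPool2d()
y = pool(torch.randn(8, 16, 64, 64))
print(y.shape)   # torch.Size([8, 16, 32, 32])
```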
[Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].
Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong
2015-11-01
With the fast development of remote sensing technology, combining forest inventory sample plot data and remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraints, and nonlinear spectral mixture analysis, were compared to derive the fractions of different land use and land cover (LULC) types. Then a sequential Gaussian co-simulation algorithm, with and without the fraction images from spectral mixture analyses, was employed to estimate the forest carbon density of Hunan Province. Results showed that 1) constrained linear spectral mixture analysis, with a mean RMSE of 0.002, estimated the fractions of LULC types more accurately than unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model with the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density from 74.1% to 81.5% and decreased the RMSE from 7.26 to 5.18; and 3) the mean forest carbon density for the province was 30.06 t·hm-2, ranging from 0.00 to 67.35 t·hm-2. This implies that spectral mixture analysis has great potential to increase the estimation accuracy of forest carbon density at regional and global levels.
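As an illustration of the constrained linear spectral unmixing step described above, the sketch below solves for land-cover fractions that are non-negative and sum to one by constrained least squares. The endmember spectra and pixel values are invented for the example and are not taken from the study.

```python
import numpy as np
from scipy.optimize import minimize

def unmix_fcls(pixel, endmembers):
    """Fully constrained linear spectral unmixing: find fractions f >= 0 with
    sum(f) = 1 minimising ||pixel - endmembers @ f||^2.
    endmembers: (n_bands, n_classes) matrix of endmember spectra."""
    n = endmembers.shape[1]
    x0 = np.full(n, 1.0 / n)
    res = minimize(
        lambda f: np.sum((pixel - endmembers @ f) ** 2),
        x0,
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda f: np.sum(f) - 1.0}],
    )
    return res.x

# Hypothetical 4-band pixel mixed from forest, cropland and bare-soil endmembers.
E = np.array([[0.05, 0.12, 0.30],
              [0.08, 0.20, 0.35],
              [0.40, 0.30, 0.38],
              [0.35, 0.25, 0.40]])
true_f = np.array([0.6, 0.3, 0.1])
pixel = E @ true_f
print(unmix_fcls(pixel, E))   # should be close to [0.6, 0.3, 0.1]
```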
Fast clustering using adaptive density peak detection.
Wang, Xiao-Feng; Xu, Yifan
2017-12-01
Common limitations of clustering methods include slow algorithm convergence, instability with respect to the pre-specification of a number of intrinsic parameters, and lack of robustness to outliers. A recent clustering approach proposed a fast search algorithm for cluster centers based on their local densities. However, the selection of the key intrinsic parameters in the algorithm was not systematically investigated. It is relatively difficult to estimate the "optimal" parameters since the original definition of the local density in the algorithm is based on a truncated counting measure. In this paper, we propose a clustering procedure with adaptive density peak detection, where the local density is estimated through nonparametric multivariate kernel estimation. The model parameters can then be calculated from equations with statistical theoretical justification. We also develop an automatic cluster centroid selection method by maximizing an average silhouette index. The advantage and flexibility of the proposed method are demonstrated through simulation studies and the analysis of a few benchmark gene expression data sets. The method runs in a single step without iteration, and thus is fast and has great potential for big data analysis. A user-friendly R package, ADPclust, is developed for public use.
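A rough Python sketch of density-peak clustering with a nonparametric kernel density estimate of the local density, in the spirit of the approach described above. For simplicity, cluster centers are taken as the points with the largest density times delta rather than by the silhouette-based selection used in the paper, and the bandwidth is scipy's default.

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_peak_clusters(X, n_clusters):
    """Density-peak clustering with a KDE local density.
    delta_i = distance to the nearest point of higher density; centers are the
    points with the largest density * delta; remaining points inherit the label
    of their nearest higher-density neighbour."""
    n = X.shape[0]
    density = gaussian_kde(X.T)(X.T)                 # KDE local density at each point
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    order = np.argsort(-density)                     # indices from high to low density
    delta = np.zeros(n)
    nearest_higher = np.full(n, -1)
    delta[order[0]] = dist[order[0]].max()
    for rank in range(1, n):
        i = order[rank]
        higher = order[:rank]
        j = higher[np.argmin(dist[i, higher])]
        delta[i] = dist[i, j]
        nearest_higher[i] = j
    centers = np.argsort(-(density * delta))[:n_clusters]
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_clusters)
    for i in order:                                  # assign in decreasing density order
        if labels[i] == -1:
            j = nearest_higher[i]
            labels[i] = labels[j] if j >= 0 else 0
    return labels, centers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels, centers = density_peak_clusters(X, n_clusters=2)
print(centers, np.bincount(labels))
```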
Smart-Phone Based Magnetic Levitation for Measuring Densities
Knowlton, Stephanie; Yu, Chu Hsiang; Jain, Nupur; Ghiran, Ionita Calin; Tasoglu, Savas
2015-01-01
Magnetic levitation, which uses a magnetic field to suspend objects in a fluid, is a powerful and versatile technology. We develop a compact magnetic levitation platform compatible with a smart-phone to separate micro-objects and estimate the density of the sample based on its levitation height. A 3D printed attachment is mechanically installed over the existing camera unit of a smart-phone. Micro-objects, which may be either spherical or irregular in shape, are suspended in a paramagnetic medium and loaded in a microcapillary tube which is then inserted between two permanent magnets. The micro-objects are levitated and confined in the microcapillary at an equilibrium height dependent on their volumetric mass densities (causing a buoyancy force toward the edge of the microcapillary) and magnetic susceptibilities (causing a magnetic force toward the center of the microcapillary) relative to the suspending medium. The smart-phone camera captures magnified images of the levitating micro-objects through an additional lens positioned between the sample and the camera lens cover. A custom-developed Android application then analyzes these images to determine the levitation height and estimate the density. Using this platform, we were able to separate microspheres with varying densities and calibrate their levitation heights to known densities to develop a technique for precise and accurate density estimation. We have also characterized the magnetic field, the optical imaging capabilities, and the thermal state over time of this platform. PMID:26308615
Zhang, Hong; Zou, Sheng; Chen, Xiyuan; Ding, Ming; Shan, Guangcun; Hu, Zhaohui; Quan, Wei
2016-07-25
We present a method for monitoring the atomic number density in situ based on atomic spin-exchange relaxation. When the spin polarization P ≪ 1, the atomic number density can be estimated by measuring the magnetic resonance linewidth in an applied DC magnetic field using an all-optical atomic magnetometer. The density measurements showed that the experimental results and the theoretical predictions were consistent over the investigated temperature range from 413 K to 463 K, although the experimental values were approximately 1.5 to 2 times lower than the theoretical predictions estimated from the saturated vapor pressure curve. These deviations were mainly induced by radiative heat transfer, which inevitably led to a lower temperature in the cell than the set temperature.
Lee, K V; Moon, R D; Burkness, E C; Hutchison, W D; Spivak, M
2010-08-01
The parasitic mite Varroa destructor Anderson & Trueman (Acari: Varroidae) is arguably the most detrimental pest of the European-derived honey bee, Apis mellifera L. Unfortunately, beekeepers lack a standardized sampling plan to make informed treatment decisions. Based on data from 31 commercial apiaries, we developed sampling plans for use by beekeepers and researchers to estimate the density of mites in individual colonies or whole apiaries. Beekeepers can estimate a colony's mite density with a chosen level of precision by dislodging mites from approximately 300 adult bees taken from one brood box frame in the colony, and they can extrapolate to mite density on a colony's adults and pupae combined by doubling the number of mites on adults. For sampling whole apiaries, beekeepers can repeat the process in each of n = 8 colonies, regardless of apiary size. Researchers desiring greater precision can estimate mite density in an individual colony by examining three, 300-bee sample units. Extrapolation to density on adults and pupae may require independent estimates of numbers of adults, of pupae, and of their respective mite densities. Researchers can estimate apiary-level mite density by taking one 300-bee sample unit per colony, but should do so from a variable number of colonies, depending on apiary size. These practical sampling plans will allow beekeepers and researchers to quantify mite infestation levels and enhance understanding and management of V. destructor.
Poyneer, Lisa A; Bauman, Brian J
2015-03-31
Reference-free compensated imaging estimates the Fourier phase of a series of images of a target. The Fourier magnitude of the series of images is obtained by dividing the power spectral density of the series of images by an estimate of the power spectral density of atmospheric turbulence from a series of scene-based wave front sensor (SBWFS) measurements of the target. A high-resolution image of the target is recovered from the Fourier phase and the Fourier magnitude.
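A hedged numpy sketch of the data flow described above, assuming that the power spectral density is the squared Fourier magnitude, so the target magnitude is taken as the square root of the ratio of the image-series PSD to the turbulence PSD. The arrays are random placeholders and the function names are illustrative only.

```python
import numpy as np

def recover_image(frames, psd_turbulence, fourier_phase):
    """Reconstruct a target image from (i) the averaged power spectral density of a
    series of short-exposure frames, (ii) an estimate of the atmospheric-turbulence
    PSD (e.g. from scene-based WFS data), and (iii) an estimate of the Fourier phase
    of the target. Assumes PSD = |F|^2, so |F_target| ~ sqrt(PSD_img / PSD_atm)."""
    psd_img = np.mean(np.abs(np.fft.fft2(frames, axes=(-2, -1))) ** 2, axis=0)
    magnitude = np.sqrt(psd_img / np.maximum(psd_turbulence, 1e-12))
    spectrum = magnitude * np.exp(1j * fourier_phase)
    return np.real(np.fft.ifft2(spectrum))

# Purely illustrative inputs (random), just to show the data flow.
rng = np.random.default_rng(1)
frames = rng.normal(size=(16, 64, 64))
psd_atm = np.ones((64, 64))
phase = rng.uniform(-np.pi, np.pi, size=(64, 64))
image = recover_image(frames, psd_atm, phase)
print(image.shape)
```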
A log-linear model approach to estimation of population size using the line-transect sampling method
Anderson, D.R.; Burnham, K.P.; Crain, B.R.
1978-01-01
The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nesting sites in marshes, and is currently being used in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.
Mathematical models for nonparametric inferences from line transect data
Burnham, K.P.; Anderson, D.R.
1976-01-01
A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right-angle or sighting distances. The probability of observing a point given its right-angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown that there are nonparametric approaches to density estimation using the observed right-angle distances. The model is then generalized to include sighting distances (r). Let f(y|r) be the conditional distribution of right-angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires knowledge of the transformation of r given by f(0|r).
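For the special case g(0) = 1, the classical nonparametric line-transect estimator is D = n·f(0)/(2L), with f(0) the density of right-angle distances evaluated at zero. The sketch below estimates f(0) with a reflected kernel density estimate; the distances are simulated, and this illustrates the general idea rather than the specific estimators developed in the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

def line_transect_density(perp_distances, transect_length):
    """Nonparametric line-transect density estimate D = n * f(0) / (2 * L),
    where f(0) is the density of perpendicular (right-angle) distances at zero,
    estimated with a reflected kernel density estimate (valid when g(0) = 1)."""
    y = np.asarray(perp_distances, dtype=float)
    reflected = np.concatenate([y, -y])          # reflect about 0 to reduce boundary bias
    f0 = 2.0 * gaussian_kde(reflected)(0.0)[0]
    return len(y) * f0 / (2.0 * transect_length)

# Simulated survey: half-normal detection, 40 animals seen along a 10 km transect.
rng = np.random.default_rng(2)
perp = np.abs(rng.normal(0.0, 0.05, size=40))    # perpendicular distances in km
print(line_transect_density(perp, transect_length=10.0), "animals per km^2")
```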
A citizen science based survey method for estimating the density of urban carnivores
Scott, Dawn M.; Baker, Rowenna; Charman, Naomi; Karlsson, Heidi; Yarnell, Richard W.; Mill, Aileen C.; Smith, Graham C.; Tolhurst, Bryony A.
2018-01-01
Globally there are many examples of synanthropic carnivores exploiting growth in urbanisation. As carnivores can come into conflict with humans and are potential vectors of zoonotic disease, assessing densities in suburban areas and identifying factors that influence them are necessary to aid management and mitigation. However, fragmented, privately owned land restricts the use of conventional carnivore surveying techniques in these areas, requiring development of novel methods. We present a method that combines questionnaire distribution to residents with field surveys and GIS, to determine relative density of two urban carnivores in England, Great Britain. We determined the density of: red fox (Vulpes vulpes) social groups in 14, approximately 1km2 suburban areas in 8 different towns and cities; and Eurasian badger (Meles meles) social groups in three suburban areas of one city. Average relative fox group density (FGD) was 3.72 km-2, which was double the estimates for cities with resident foxes in the 1980’s. Density was comparable to an alternative estimate derived from trapping and GPS-tracking, indicating the validity of the method. However, FGD did not correlate with a national dataset based on fox sightings, indicating unreliability of the national data to determine actual densities or to extrapolate a national population estimate. Using species-specific clustering units that reflect social organisation, the method was additionally applied to suburban badgers to derive relative badger group density (BGD) for one city (Brighton, 2.41 km-2). We demonstrate that citizen science approaches can effectively obtain data to assess suburban carnivore density, however publicly derived national data sets need to be locally validated before extrapolations can be undertaken. The method we present for assessing densities of foxes and badgers in British towns and cities is also adaptable to other urban carnivores elsewhere. However this transferability is contingent on species traits meeting particular criteria, and on resident responsiveness. PMID:29787598
Hierarchical models for estimating density from DNA mark-recapture studies
Gardner, B.; Royle, J. Andrew; Wegan, M.T.
2009-01-01
Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps ( e. g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS.
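A minimal sketch of the detection model typically used in such spatial capture-recapture analyses: the per-trap encounter probability declines with the distance between an individual's activity center and the trap, here as a half-normal function. The baseline probability, movement scale, and coordinates below are hypothetical, not the values fitted to the black bear data.

```python
import numpy as np

def halfnormal_detection(activity_centers, traps, p0=0.2, sigma=1.5):
    """Per-occasion detection probability of each individual at each trap:
    p_ij = p0 * exp(-d_ij^2 / (2 * sigma^2)), the half-normal model commonly used
    in spatial capture-recapture (p0 = baseline probability, sigma = movement scale, km)."""
    d = np.linalg.norm(activity_centers[:, None, :] - traps[None, :, :], axis=2)
    return p0 * np.exp(-d ** 2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(3)
centers = rng.uniform(0, 10, size=(5, 2))        # 5 hypothetical individuals on a 10 x 10 km area
traps = rng.uniform(0, 10, size=(8, 2))          # 8 hypothetical hair-snare locations
p = halfnormal_detection(centers, traps)
capture_histories = rng.random(p.shape) < p      # one simulated sampling occasion
print(p.round(3), capture_histories.astype(int), sep="\n")
```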
A hierarchical model for estimating density in camera-trap studies
Royle, J. Andrew; Nichols, James D.; Karanth, K.Ullas; Gopalaswamy, Arjun M.
2009-01-01
Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. Synthesis and applications: our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of individuals, presence of potential 'holes' in the array and ad hoc estimation of sample area. The formulation thus greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based 'captures' of individual animals.
NASA Astrophysics Data System (ADS)
Fujita, Kazuhiko; Otomaru, Maki; Lopati, Paeniu; Hosono, Takashi; Kayanne, Hajime
2016-03-01
Carbonate production by large benthic foraminifers is sometimes comparable to that of corals and coralline algae, and contributes to sedimentation on reef islands and beaches in the tropical Pacific. Population dynamic data, such as population density and size structure (size-frequency distribution), are vital for an accurate estimation of shell production of foraminifers. However, previous production estimates in tropical environments were based on a limited sampling period with no consideration of seasonality. In addition, no comparisons were made of various estimation methods to determine more accurate estimates. Here we present the annual gross shell production rate of Baculogypsina sphaerulata, estimated based on population dynamics studied over a 2-yr period on an ocean reef flat of Funafuti Atoll (Tuvalu, tropical South Pacific). The population density of B. sphaerulata increased from January to March, when northwest winds predominated and the study site was on the leeward side of reef islands, compared to other seasons when southeast trade winds predominated and the study site was on the windward side. This result suggested that wind-driven flows controlled the population density at the study site. The B. sphaerulata population had a relatively stationary size-frequency distribution throughout the study period, indicating no definite intensive reproductive period in the tropical population. Four methods were applied to estimate the annual gross shell production rates of B. sphaerulata. The production rates estimated by three of the four methods (using monthly biomass, life tables and growth increment rates) were in the order of hundreds of g CaCO3 m-2 yr-1 or cm-3 m-2 yr-1, and the simple method using turnover rates overestimated the values. This study suggests that seasonal surveys should be undertaken of population density and size structure as these can produce more accurate estimates of shell productivity of large benthic foraminifers.
Hoffmann, Stefan A; Wohltat, Christian; Müller, Kristian M; Arndt, Katja M
2017-01-01
For various experimental applications, microbial cultures at defined, constant densities are highly advantageous over simple batch cultures. Due to high costs, however, devices for continuous culture at freely defined densities still experience limited use. We have developed a small-scale turbidostat for research purposes, which is manufactured from inexpensive components and 3D printed parts. A high degree of spatial system integration and a graphical user interface provide user-friendly operability. The optical density feedback control allows for constant continuous culture over a wide range of densities and lets culture volume and dilution rates be varied without additional parametrization. Further, a recursive algorithm for on-line growth rate estimation has been implemented. The employed Kalman filtering approach, based on a very general state model, retains the flexibility of the control scheme and can be easily adapted to other bioreactor designs. Within several minutes it can converge to robust, accurate growth rate estimates. This is particularly useful for directed evolution experiments or studies on metabolic challenges, as it allows direct monitoring of the population fitness.
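A hedged sketch of recursive growth-rate estimation with a linear Kalman filter on log optical density, with state [ln OD, mu]. The device described above also handles dilution events and uses its own state model; this simplified version ignores dilution and uses made-up noise settings.

```python
import numpy as np

def kalman_growth_rate(od_readings, dt, q_mu=1e-5, r_meas=1e-3):
    """Recursive growth-rate estimation from optical density readings.
    State x = [ln(OD), mu]; model: ln(OD) grows linearly at rate mu between
    readings spaced dt apart. Turbidostat dilution steps are ignored here."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition
    H = np.array([[1.0, 0.0]])                   # we observe ln(OD)
    Q = np.diag([1e-6, q_mu])                    # process noise
    R = np.array([[r_meas]])                     # measurement noise
    x = np.array([np.log(od_readings[0]), 0.0])
    P = np.eye(2)
    estimates = []
    for od in od_readings[1:]:
        x, P = F @ x, F @ P @ F.T + Q            # predict
        z = np.array([np.log(od)])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ (z - H @ x)                  # update
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[1])                   # current growth-rate estimate
    return np.array(estimates)

# Simulated culture growing at mu = 0.40 1/h, sampled every 0.1 h with noise.
t = np.arange(0, 5, 0.1)
od = 0.05 * np.exp(0.40 * t) * (1 + 0.01 * np.random.default_rng(4).normal(size=t.size))
print(kalman_growth_rate(od, dt=0.1)[-5:])
```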
The spatial distribution of fixed mutations within genes coding for proteins
NASA Technical Reports Server (NTRS)
Holmquist, R.; Goodman, M.; Conroy, T.; Czelusniak, J.
1983-01-01
An examination has been conducted of the extensive amino acid sequence data now available for five protein families - the alpha crystallin A chain, myoglobin, alpha and beta hemoglobin, and the cytochromes c - with the goal of estimating the true spatial distribution of base substitutions within genes that code for proteins. In every case the commonly used Poisson density failed to even approximate the experimental pattern of base substitution. For the 87 species of beta hemoglobin examined, for example, the probability that the observed results were from a Poisson process was a minuscule 10^-44. Analogous results were obtained for the other functional families. All the data were reasonably, but not perfectly, described by the negative binomial density. In particular, most of the data were described by one of the very simple limiting forms of this density, the geometric density. The implications of this for evolutionary inference are discussed. It is evident that most estimates of total base substitutions between genes are badly in need of revision.
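To make the comparison concrete, the sketch below computes maximum-likelihood log-likelihoods of per-site substitution counts under a Poisson density and under a geometric density (a limiting form of the negative binomial). The counts are simulated overdispersed data, not the published sequence data.

```python
import numpy as np
from scipy import stats

def compare_poisson_geometric(counts):
    """Log-likelihoods of per-site substitution counts under a Poisson density and
    a geometric density, both fitted by maximum likelihood (matching the sample mean)."""
    counts = np.asarray(counts)
    m = counts.mean()
    ll_pois = stats.poisson.logpmf(counts, mu=m).sum()
    p_geom = 1.0 / (1.0 + m)                               # geometric on {0, 1, 2, ...} with mean m
    ll_geom = stats.geom.logpmf(counts + 1, p_geom).sum()  # scipy's geom starts at 1
    return ll_pois, ll_geom

# Hypothetical overdispersed counts (variance >> mean), as reported for real genes.
rng = np.random.default_rng(5)
counts = rng.negative_binomial(1, 0.25, size=200)
print(compare_poisson_geometric(counts))
```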
Estimating snow leopard population abundance using photography and capture-recapture techniques
Jackson, R.M.; Roe, J.D.; Wangchuk, R.; Hunter, D.O.
2006-01-01
Conservation and management of snow leopards (Uncia uncia) has largely relied on anecdotal evidence and presence-absence data due to their cryptic nature and the difficult terrain they inhabit. These methods generally lack the scientific rigor necessary to accurately estimate population size and monitor trends. We evaluated the use of photography in capture-mark-recapture (CMR) techniques for estimating snow leopard population abundance and density within Hemis National Park, Ladakh, India. We placed infrared camera traps along actively used travel paths, scent-sprayed rocks, and scrape sites within 16- to 30-km2 sampling grids in successive winters during January and March 2003-2004. We used head-on, oblique, and side-view camera configurations to obtain snow leopard photographs at varying body orientations. We calculated snow leopard abundance estimates using the program CAPTURE. We obtained a total of 66 and 49 snow leopard captures resulting in 8.91 and 5.63 individuals per 100 trap-nights during 2003 and 2004, respectively. We identified snow leopards based on the distinct pelage patterns located primarily on the forelimbs, flanks, and dorsal surface of the tail. Capture probabilities ranged from 0.33 to 0.67. Density estimates ranged from 8.49 (SE = 0.22) individuals per 100 km2 in 2003 to 4.45 (SE = 0.16) in 2004. We believe the density disparity between years is attributable to different trap density and placement rather than to an actual decline in population size. Our results suggest that photographic capture-mark-recapture sampling may be a useful tool for monitoring demographic patterns. However, we believe a larger sample size would be necessary for generating a statistically robust estimate of population density and abundance based on CMR models.
Spread of Epidemic on Complex Networks Under Voluntary Vaccination Mechanism
NASA Astrophysics Data System (ADS)
Xue, Shengjun; Ruan, Feng; Yin, Chuanyang; Zhang, Haifeng; Wang, Binghong
Under the assumption that the decision to vaccinate is a voluntary behavior, in this paper we use two forms of risk functions to characterize how susceptible individuals estimate the perceived risk of infection. One is the uniform case, where each susceptible individual estimates the perceived risk of infection based only on the density of infection at each time step, so the risk function is only a function of the density of infection; the other is the preferential case, where each susceptible individual estimates the perceived risk of infection based not only on the density of infection but also on its own activity/immediate neighbors (in network terminology, the activity or the number of immediate neighbors is the degree of the node), so the risk function is a function of the density of infection and the degree of individuals. By investigating these two ways of estimating the risk of infection for susceptible individuals on complex networks, we find that, for the preferential case, the spread of epidemic can be effectively controlled, yet, for the uniform case, the voluntary vaccination mechanism is almost ineffective in controlling the spread of epidemic on networks. Furthermore, given the temporality of some vaccines, the epidemic waves for the two cases also differ. Therefore, our work indicates that the way of estimating the perceived risk of infection determines the decision on vaccination options, and in turn determines the success or failure of the control strategy.
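An illustrative sketch of the two ways of forming perceived risk. The exact functional forms used in the paper are not reproduced here; these are simple stand-ins in which the uniform case depends only on the infection density and the preferential case also depends on the individual's degree.

```python
def perceived_risk_uniform(rho, beta=0.5):
    """Uniform case: perceived risk depends only on the current infection density rho.
    (Simple stand-in form, not the exact function used in the paper.)"""
    return beta * rho

def perceived_risk_preferential(rho, degree, beta=0.5):
    """Preferential case: risk also depends on the individual's degree k, e.g. the
    probability that at least one of its k neighbours transmits infection (assumed form)."""
    return 1.0 - (1.0 - beta * rho) ** degree

def vaccinates(risk, relative_cost=0.3):
    """A susceptible individual vaccinates voluntarily if the perceived risk exceeds
    the perceived relative cost of vaccination."""
    return risk > relative_cost

rho = 0.05                                        # 5% of nodes currently infected
for k in (2, 10, 50):
    r = perceived_risk_preferential(rho, k)
    print(k, round(r, 3), vaccinates(r))
```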
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Saxen, Abhinav; Goebel, Kai
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.
Boris Zeide
2004-01-01
Estimation of stand density is based on a relationship between number of trees and their average diameter in fully stocked stands. Popular measures of density (Reineke's stand density index and basal area) assume that number of trees decreases as a power function of diameter. Actually, number of trees drops faster than predicted by the power function because the number...
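For reference, a small sketch of the two density measures mentioned above: Reineke's stand density index, a power function of quadratic mean diameter with the conventional exponent of about 1.605 (the 25 cm reference diameter is an assumed, commonly used metric convention), and stand basal area. The stand values are hypothetical.

```python
import numpy as np

def stand_density_index(trees_per_ha, dq_cm, reference_d_cm=25.0, exponent=1.605):
    """Reineke's stand density index: SDI = N * (Dq / Dref)^1.605, where N is
    trees per hectare and Dq the quadratic mean diameter (cm). The 25 cm reference
    diameter and 1.605 exponent are the conventional (assumed) values."""
    return trees_per_ha * (dq_cm / reference_d_cm) ** exponent

def basal_area_m2_per_ha(trees_per_ha, dq_cm):
    """Stand basal area (m^2/ha) from trees per hectare and quadratic mean diameter (cm)."""
    return trees_per_ha * np.pi * (dq_cm / 200.0) ** 2

print(stand_density_index(trees_per_ha=800, dq_cm=20.0))
print(basal_area_m2_per_ha(trees_per_ha=800, dq_cm=20.0))
```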
A pdf-Free Change Detection Test Based on Density Difference Estimation.
Bu, Li; Alippi, Cesare; Zhao, Dongbin
2018-02-01
The ability to detect online changes in stationarity or time variance in a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability density function-free change detection test, which is based on the least squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured by adopting a reservoir sampling mechanism. Thresholds required to detect a change are automatically derived once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness of the proposed method in terms of both detection promptness and accuracy.
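A rough sketch of least-squares density-difference estimation, the core of the test described above: the difference between the two densities is modeled as a sum of Gaussian kernels whose coefficients have a closed form, and the squared L2 distance serves as a change score. Bandwidth and regularization are fixed here rather than cross-validated, and the reservoir sampling and automatic thresholding of the paper are omitted.

```python
import numpy as np

def lsdd(x_ref, x_test, sigma=0.5, lam=0.1):
    """Least-squares density-difference estimation between two samples (rows = points).
    Models f(x) = p_ref(x) - p_test(x) as sum_l theta_l * exp(-||x - c_l||^2 / (2 sigma^2))
    with centres c_l at the pooled sample points; theta has a closed form.
    Returns an estimate of the squared L2 distance (larger = more change).
    In practice sigma and lam are chosen by cross-validation; fixed here."""
    x_ref = np.asarray(x_ref, dtype=float)
    x_test = np.asarray(x_test, dtype=float)
    centers = np.vstack([x_ref, x_test])
    d = centers.shape[1]

    def kernel(a, b):
        sq = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=2)
        return np.exp(-sq / (2.0 * sigma ** 2))

    # H_ll' = integral of the product of two Gaussian kernels (closed form)
    sq_cc = np.sum((centers[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    H = (np.pi * sigma ** 2) ** (d / 2.0) * np.exp(-sq_cc / (4.0 * sigma ** 2))
    h = kernel(x_ref, centers).mean(axis=0) - kernel(x_test, centers).mean(axis=0)
    theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    return 2.0 * h @ theta - theta @ H @ theta

rng = np.random.default_rng(6)
same = lsdd(rng.normal(0, 1, (100, 1)), rng.normal(0, 1, (100, 1)))
shift = lsdd(rng.normal(0, 1, (100, 1)), rng.normal(1.5, 1, (100, 1)))
print(round(same, 4), round(shift, 4))           # the shifted pair scores higher
```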
Possibilities for Estimating Horizontal Electrical Currents in Active Regions on the Sun
NASA Astrophysics Data System (ADS)
Fursyak, Yu. A.; Abramenko, V. I.
2017-12-01
Part of the "free" magnetic energy associated with electrical current systems in the active region (AR) is released during solar flares. This proposition is widely accepted and it has stimulated interest in detecting electrical currents in active regions. The vertical component of an electric current in the photosphere can be found by observing the transverse magnetic field. At present, however, there are no direct methods for calculating transverse electric currents based on these observations. These calculations require information on the field vector measured simultaneously at several levels in the photosphere, which has not yet been done with solar instrumentation. In this paper we examine an approach to calculating the structure of the square of the density of a transverse electrical current based on a magnetogram of the vertical component of the magnetic field in the AR. Data obtained with the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) for NOAA AR 11283 are used. It is shown that (1) the observed variations in the magnetic field of a sunspot and the proposed estimate of the density of an annular horizontal current around the spot are consistent with Faraday's law and (2) the resulting estimates of the magnitude of the square of the horizontal current density, j⊥² = (0.002-0.004) A²/m⁴, are consistent with previously obtained values of the density of a vertical current in the photosphere. Thus, the proposed estimate is physically significant and this method can be used to estimate the density and structure of transverse electrical currents in the photosphere.
Hearn, Andrew J.; Ross, Joanna; Bernard, Henry; Bakar, Soffian Abu; Hunter, Luke T. B.; Macdonald, David W.
2016-01-01
The marbled cat Pardofelis marmorata is a poorly known wild cat that has a broad distribution across much of the Indomalayan ecorealm. This felid is thought to exist at low population densities throughout its range, yet no estimates of its abundance exist, hampering assessment of its conservation status. To investigate the distribution and abundance of marbled cats we conducted intensive, felid-focused camera trap surveys of eight forest areas and two oil palm plantations in Sabah, Malaysian Borneo. Study sites were broadly representative of the range of habitat types and the gradient of anthropogenic disturbance and fragmentation present in contemporary Sabah. We recorded marbled cats from all forest study areas apart from a small, relatively isolated forest patch, although photographic detection frequency varied greatly between areas. No marbled cats were recorded within the plantations, but a single individual was recorded walking along the forest/plantation boundary. We collected sufficient numbers of marbled cat photographic captures at three study areas to permit density estimation based on spatially explicit capture-recapture analyses. Estimates of population density from the primary, lowland Danum Valley Conservation Area and primary upland, Tawau Hills Park, were 19.57 (SD: 8.36) and 7.10 (SD: 1.90) individuals per 100 km2, respectively, and the selectively logged, lowland Tabin Wildlife Reserve yielded an estimated density of 10.45 (SD: 3.38) individuals per 100 km2. The low detection frequencies recorded in our other survey sites and from published studies elsewhere in its range, and the absence of previous density estimates for this felid suggest that our density estimates may be from the higher end of their abundance spectrum. We provide recommendations for future marbled cat survey approaches. PMID:27007219
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
NASA Astrophysics Data System (ADS)
De Ridder, Maaike; De Haulleville, Thalès; Kearsley, Elizabeth; Van den Bulcke, Jan; Van Acker, Joris; Beeckman, Hans
2014-05-01
It is commonly acknowledged that allometric equations for aboveground biomass and carbon stock estimates are improved significantly if density is included as a variable. However, not much attention is given to this variable in terms of exact, measured values and density profiles from pith to bark. Most published case studies obtain density values from literature sources or databases, thereby using large ranges of density values and possibly causing significant errors in carbon stock estimates. The use of one single fixed value for density is also not recommended if carbon stock increments are estimated. Therefore, our objective is to measure and analyze a large number of tree species occurring in two Biosphere Reserves (Luki and Yangambi). Nevertheless, the diversity of tree species in these tropical forests is too high to perform this kind of detailed analysis on all tree species (> 200/ha). Therefore, we focus on the most frequently encountered tree species with high abundance (trees/ha) and dominance (basal area/ha) for this study. Increment cores were scanned with a helical X-ray protocol to obtain density profiles from pith to bark. In this way, we aim to divide the tree species into separate groups with distinct types of density profile. If, e.g., slopes in density values from pith to bark remain stable over larger samples of one tree species, this slope could also be used to correct for errors in carbon (increment) estimates caused by density values from simplified density measurements or from the literature. In summary, this is most likely the first study in the Congo Basin that focuses on density patterns in order to check their influence on carbon stocks and differences in carbon stocking based on species composition (density profiles ~ temperament of tree species).
Moran, M.S.; Jackson, R. D.; Raymond, L.H.; Gay, L.W.; Slater, P.N.
1989-01-01
Surface energy balance components were evaluated by combining satellite-based spectral data with on-site measurements of solar irradiance, air temperature, wind speed, and vapor pressure. Maps of latent heat flux density (λE) and net radiant flux density (Rn) were produced using Landsat Thematic Mapper (TM) data for three dates: 23 July 1985, 5 April 1986, and 24 June 1986. On each date, a Bowen-ratio apparatus, located in a vegetated field, was used to measure λE and Rn at a point within the field. Estimates of λE and Rn were also obtained using radiometers aboard an aircraft flown at 150 m above ground level. The TM-based estimates differed from the Bowen-ratio and aircraft-based estimates by less than 12% over mature fields of cotton, wheat, and alfalfa, where λE and Rn ranged from 400 to 700 W m-2. © 1989.
Investigating uplift in the South-Western Barents Sea using sonic and density well log measurements
NASA Astrophysics Data System (ADS)
Yang, Y.; Ellis, M.
2014-12-01
Sediments in the Barents Sea have undergone large amounts of uplift due to Plio-Pleistocene deglaciation as well as Palaeocene-Eocene Atlantic rifting. Uplift affects reservoir quality, seal capacity and fluid migration. Therefore, it is important to obtain reliable uplift estimates in order to evaluate petroleum prospectivity properly. To this end, a number of quantification methods have been proposed, such as Apatite Fission Track Analysis (AFTA) and the integration of seismic surveys with well log data. AFTA usually provides accurate uplift estimates, but the data are limited due to their high cost. While seismic surveys can provide good uplift estimates when well data are available for calibration, the uncertainty can be large in areas with little to no well data. We estimated South-Western Barents Sea uplift based on well data from the Norwegian Petroleum Directorate. Primary assumptions include time-irreversible shale compaction trends and a universal normal compaction trend for a specified formation. Sonic and density logs from two Cenozoic shale formation intervals, Kolmule and Kolje, were used for the study. For each formation, we studied logs of all released wells and established exponential normal compaction trends based on a single well. That well was then deemed the reference well, and relative uplift can be calculated at other well locations based on the offset from the normal compaction trend. We found that the amount of uplift increases along the SW to NE direction, with a maximum difference of 1,447 m from the Kolje FM estimate and 699 m from the Kolmule FM estimate. The average standard deviation of the estimated uplift is 130 m for the Kolje FM and 160 m for the Kolmule FM using the density log. While results from density logs and sonic logs agree well in general, the density log provides slightly better results in terms of higher consistency and lower standard deviation. Our results agree qualitatively with published papers, with some differences in the actual amount of uplift. We consider the results to be more accurate due to the higher resolution of the log-scale data that was used.
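A hedged sketch of the compaction-trend approach described above: fit an exponential normal compaction trend of shale density versus depth at a reference well, then estimate uplift at a target well as the mean extra burial depth implied by its anomalously high densities. The functional form, parameter values, and synthetic logs are assumptions for illustration, not the study's calibration.

```python
import numpy as np
from scipy.optimize import curve_fit

def normal_trend(z, rho0, rho_max, c):
    """Assumed exponential compaction trend: density increases from rho0 toward
    rho_max with burial depth z (m)."""
    return rho_max - (rho_max - rho0) * np.exp(-z / c)

def estimate_uplift(z_ref, rho_ref, z_target, rho_target):
    """Fit the normal compaction trend at a reference (non-uplifted) well, then
    estimate uplift at a target well as the mean extra burial depth implied by
    its anomalously high densities."""
    p, _ = curve_fit(normal_trend, z_ref, rho_ref, p0=(2.0, 2.7, 2000.0))
    rho0, rho_max, c = p
    z_equiv = -c * np.log((rho_max - rho_target) / (rho_max - rho0))
    return np.mean(z_equiv - z_target)

# Synthetic example: target well uplifted by ~800 m relative to the reference well.
rng = np.random.default_rng(7)
z_ref = np.linspace(200, 3000, 40)
rho_ref = normal_trend(z_ref, 2.0, 2.7, 2000.0) + rng.normal(0, 0.01, z_ref.size)
z_tgt = np.linspace(200, 2000, 25)
rho_tgt = normal_trend(z_tgt + 800.0, 2.0, 2.7, 2000.0) + rng.normal(0, 0.01, z_tgt.size)
print(round(estimate_uplift(z_ref, rho_ref, z_tgt, rho_tgt), 1), "m of uplift")
```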
A new approach on seismic mortality estimations based on average population density
NASA Astrophysics Data System (ADS)
Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong
2016-12-01
This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies established the association between mortality estimation and seismic intensity without considering the population density. In China, however, the data are not always available, especially in the very urgent relief situation of a disaster, and the population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to provide the path to analyze the death tolls for earthquakes. The present paper employs the average population density to predict the final death tolls in earthquakes using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case. A typical earthquake case that occurred in the northwest of Sichuan Province is then employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and opens the door for conducting final death forecasts with a qualitative and quantitative approach. Limitations and future research are also analyzed and discussed in the conclusion.
A hierarchical model for spatial capture-recapture data
Royle, J. Andrew; Young, K.V.
2008-01-01
Estimating density is a fundamental objective of many animal population studies. Application of methods for estimating population size from ostensibly closed populations is widespread, but ineffective for estimating absolute density because most populations are subject to short-term movements or so-called temporary emigration. This phenomenon invalidates the resulting estimates because the effective sample area is unknown. A number of methods involving the adjustment of estimates based on heuristic considerations are in widespread use. In this paper, a hierarchical model of spatially indexed capture recapture data is proposed for sampling based on area searches of spatial sample units subject to uniform sampling intensity. The hierarchical model contains explicit models for the distribution of individuals and their movements, in addition to an observation model that is conditional on the location of individuals during sampling. Bayesian analysis of the hierarchical model is achieved by the use of data augmentation, which allows for a straightforward implementation in the freely available software WinBUGS. We present results of a simulation study that was carried out to evaluate the operating characteristics of the Bayesian estimator under variable densities and movement patterns of individuals. An application of the model is presented for survey data on the flat-tailed horned lizard (Phrynosoma mcallii) in Arizona, USA.
Computation of mass-density images from x-ray refraction-angle images.
Wernick, Miles N; Yang, Yongyi; Mondal, Indrasis; Chapman, Dean; Hasnah, Moumen; Parham, Christopher; Pisano, Etta; Zhong, Zhong
2006-04-07
In this paper, we investigate the possibility of computing quantitatively accurate images of mass density variations in soft tissue. This is a challenging task, because density variations in soft tissue, such as the breast, can be very subtle. Beginning from an image of refraction angle created by either diffraction-enhanced imaging (DEI) or multiple-image radiography (MIR), we estimate the mass-density image using a constrained least squares (CLS) method. The CLS algorithm yields accurate density estimates while effectively suppressing noise. Our method improves on an analytical method proposed by Hasnah et al (2005 Med. Phys. 32 549-52), which can produce significant artefacts when even a modest level of noise is present. We present a quantitative evaluation study to determine the accuracy with which mass density can be determined in the presence of noise. Based on computer simulations, we find that the mass-density estimation error can be as low as a few per cent for typical density variations found in the breast. Example images computed from less-noisy real data are also shown to illustrate the feasibility of the technique. We anticipate that density imaging may have application in assessment of water content of cartilage resulting from osteoarthritis, in evaluation of bone density, and in mammographic interpretation.
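A generic, hedged sketch of a regularized (constrained) least-squares recovery in the spirit of the CLS idea: the refraction angle is treated as proportional to the derivative of the projected density along the scan direction, and the profile is recovered by ridge-penalized least squares rather than direct integration, which suppresses noise. The forward model, constants, and test profile are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cls_projected_density(theta, dx=1.0, k=1.0, lam=1e-2):
    """Regularized least-squares recovery of a projected-density profile rho from a
    measured refraction-angle profile theta, assuming theta ~ k * d(rho)/dx.
    Direct integration amplifies noise; the ridge penalty lam*||rho||^2 suppresses it.
    Forward model and constants are illustrative assumptions."""
    n = len(theta)
    D = (np.eye(n, k=1) - np.eye(n)) / dx        # forward-difference derivative operator
    A = k * D
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ theta)

# Synthetic test: a smooth bump in projected density, noisy angle measurements.
x = np.linspace(-1, 1, 200)
rho_true = np.exp(-x ** 2 / 0.05)
theta = np.gradient(rho_true, x) + np.random.default_rng(8).normal(0, 0.5, x.size)
rho_hat = cls_projected_density(theta, dx=x[1] - x[0])
print(float(np.corrcoef(rho_true, rho_hat)[0, 1]))   # should be close to 1
```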
Mark-recapture using tetracycline and genetics reveal record-high bear density
Peacock, E.; Titus, K.; Garshelis, D.L.; Peacock, M.M.; Kuc, M.
2011-01-01
We used tetracycline biomarking, augmented with genetic methods, to estimate the size of an American black bear (Ursus americanus) population on an island in Southeast Alaska. We marked 132 and 189 bears that consumed remote, tetracycline-laced baits in 2 different years, respectively, and observed 39 marks in 692 bone samples subsequently collected from hunters. We genetically analyzed hair samples from bait sites to determine the sex of marked bears, facilitating derivation of sex-specific population estimates. We obtained harvest samples from beyond the study area to correct for emigration. We estimated a density of 155 independent bears/100 km2, which is equivalent to the highest recorded for this species. This high density appears to be maintained by abundant, accessible natural food. Our population estimate (approx. 1,000 bears) could be used as a baseline and to set hunting quotas. The refined biomarking method for abundance estimation is a useful alternative where physical captures or DNA-based estimates are precluded by cost or logistics. Copyright © 2011 The Wildlife Society.
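Purely for illustration, the sketch below applies Chapman's bias-corrected Lincoln-Petersen estimator to hypothetical biomarking counts and converts the abundance to a density. The published estimate relied on sex-specific estimation, an emigration correction, and a two-year marking design that are not reproduced here.

```python
def chapman_estimate(n_marked, n_sampled, n_marked_recovered):
    """Chapman's bias-corrected Lincoln-Petersen estimator of population size."""
    return (n_marked + 1) * (n_sampled + 1) / (n_marked_recovered + 1) - 1

# Hypothetical counts only (not the study's data): 150 animals marked, 400 harvest
# samples inspected, 60 marks recovered, on an assumed 650 km2 study area.
n_hat = chapman_estimate(150, 400, 60)
study_area_km2 = 650.0
print(round(n_hat), "animals;", round(100 * n_hat / study_area_km2, 1), "per 100 km2")
```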
Multi-species genetic connectivity in a terrestrial habitat network.
Marrotte, Robby R; Bowman, Jeff; Brown, Michael G C; Cordes, Chad; Morris, Kimberley Y; Prentice, Melanie B; Wilson, Paul J
2017-01-01
Habitat fragmentation reduces genetic connectivity for multiple species, yet conservation efforts tend to rely heavily on single-species connectivity estimates to inform land-use planning. Such conservation activities may benefit from multi-species connectivity estimates, which provide a simple and practical means to mitigate the effects of habitat fragmentation for a larger number of species. To test the validity of a multi-species connectivity model, we used neutral microsatellite genetic datasets of Canada lynx ( Lynx canadensis ), American marten ( Martes americana ), fisher ( Pekania pennanti ), and southern flying squirrel ( Glaucomys volans ) to evaluate multi-species genetic connectivity across Ontario, Canada. We used linear models to compare node-based estimates of genetic connectivity for each species to point-based estimates of landscape connectivity (current density) derived from circuit theory. To our knowledge, we are the first to evaluate current density as a measure of genetic connectivity. Our results depended on landscape context: habitat amount was more important than current density in explaining multi-species genetic connectivity in the northern part of our study area, where habitat was abundant and fragmentation was low. In the south however, where fragmentation was prevalent, genetic connectivity was correlated with current density. Contrary to our expectations however, locations with a high probability of movement as reflected by high current density were negatively associated with gene flow. Subsequent analyses of circuit theory outputs showed that high current density was also associated with high effective resistance, underscoring that the presence of pinch points is not necessarily indicative of gene flow. Overall, our study appears to provide support for the hypothesis that landscape pattern is important when habitat amount is low. We also conclude that while current density is proportional to the probability of movement per unit area, this does not imply increased gene flow, since high current density tends to be a result of neighbouring pixels with high cost of movement (e.g., low habitat amount). In other words, pinch points with high current density appear to constrict gene flow.
A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.
Carreau, Julie; Bengio, Yoshua
2009-07-01
In many cases, we observe some variables X that contain predictive information over a scalar variable of interest Y , with (X,Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
High throughput nonparametric probability density estimation
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803
Estimations of population density for selected periods between the Neolithic and AD 1800.
Zimmermann, Andreas; Hilpert, Johanna; Wendt, Karl Peter
2009-04-01
We describe a combination of methods applied to obtain reliable estimations of population density using archaeological data. The combination is based on a hierarchical model of scale levels. The necessary data and methods used to obtain the results are chosen so as to define transfer functions from one scale level to another. We apply our method to data sets from western Germany that cover early Neolithic, Iron Age, Roman, and Merovingian times as well as historical data from AD 1800. Error margins and natural and historical variability are discussed. Our results for nonstate societies are always lower than conventional estimations compiled from the literature, and we discuss the reasons for this finding. At the end, we compare the calculated local and global population densities with other estimations from different parts of the world.
Gately, Conor K; Hutyra, Lucy R; Wing, Ian Sue; Brondfield, Max N
2013-03-05
On-road transportation is responsible for 28% of all U.S. fossil-fuel CO2 emissions. Mapping vehicle emissions at regional scales is challenging due to data limitations. Existing emission inventories use spatial proxies such as population and road density to downscale national or state-level data. Such procedures introduce errors where the proxy variables and actual emissions are weakly correlated, and limit analysis of the relationship between emissions and demographic trends at local scales. We develop an on-road emission inventory product for Massachusetts based on roadway-level traffic data obtained from the Highway Performance Monitoring System (HPMS). We provide annual estimates of on-road CO2 emissions at a 1 × 1 km grid scale for the years 1980 through 2008. We compared our results with on-road emissions estimates from the Emissions Database for Global Atmospheric Research (EDGAR), with the Vulcan Product, and with estimates derived from state fuel consumption statistics reported by the Federal Highway Administration (FHWA). Our model differs from FHWA estimates by less than 8.5% on average, and is within 5% of Vulcan estimates. We found that EDGAR estimates systematically exceed FHWA by an average of 22.8%. Panel regression analysis of per-mile CO2 emissions on population density at the town scale shows a statistically significant correlation that varies systematically in sign and magnitude as population density increases. Population density has a positive correlation with per-mile CO2 emissions for densities below 2000 persons km-2, above which increasing density correlates negatively with per-mile emissions.
A comparison of minimum distance and maximum likelihood techniques for proportion estimation
NASA Technical Reports Server (NTRS)
Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.
1982-01-01
The estimation of mixing proportions p_1, p_2, ..., p_m in the mixture density f(x) = Σ_{i=1}^{m} p_i f_i(x) is often encountered in agricultural remote sensing problems, in which case the p_i's usually represent crop proportions. In these remote sensing applications, component densities f_i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When component distributions are not symmetric, however, it is seen that neither of these normal-based techniques provides satisfactory results.
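A minimal sketch of the mixing-proportion problem above (not the paper's procedure): with the normal component densities assumed known, the proportions can be estimated by maximum likelihood via EM updates on the weights only. The component means, SDs, and sample below are invented.

```python
# ML estimation of mixing proportions p_i in f(x) = sum_i p_i f_i(x)
# when the normal component densities f_i are treated as known.
import numpy as np
from scipy import stats

def estimate_proportions(x, means, sds, n_iter=200):
    x = np.asarray(x)
    comp = np.array([stats.norm(m, s).pdf(x) for m, s in zip(means, sds)])  # shape (m, n)
    p = np.full(len(means), 1.0 / len(means))
    for _ in range(n_iter):
        resp = p[:, None] * comp
        resp /= resp.sum(axis=0, keepdims=True)   # posterior component probabilities
        p = resp.mean(axis=1)                     # updated mixing proportions
    return p

# Two hypothetical "crop" components with known parameters.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 700), rng.normal(3, 1, 300)])
print(estimate_proportions(x, means=[0, 3], sds=[1, 1]))   # roughly [0.7, 0.3]
```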
Estimating population density and connectivity of American mink using spatial capture-recapture
Fuller, Angela K.; Sutherland, Christopher S.; Royle, Andy; Hare, Matthew P.
2016-01-01
Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated together in a model that simultaneously models abundance while accounting for connectivity of a landscape. We demonstrate an application of using capture–recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture–recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km2 area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture–recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.
Ko, Hoon; Jeong, Kwanmoon; Lee, Chang-Hoon; Jun, Hong Young; Jeong, Changwon; Lee, Myeung Su; Nam, Yunyoung; Yoon, Kwon-Ha; Lee, Jinseok
2016-01-01
Image artifacts affect the quality of medical images and may obscure anatomic structure and pathology. Numerous methods for suppression and correction of scattered image artifacts have been suggested in the past three decades. In this paper, we assessed the feasibility of using information on scattered artifacts for estimation of bone mineral density (BMD) without dual-energy X-ray absorptiometry (DXA) or quantitative computed tomographic imaging (QCT). To investigate the relationship between scattered image artifacts and BMD, we first used a forearm phantom and cone-beam computed tomography. In the phantom, we considered two regions of interest (bone-equivalent solid material containing 50 mg HA per cm³, and water) to represent low- and high-density trabecular bone, respectively. We compared the scattered image artifacts in the high-density material with those in the low-density material. The technique was then applied to osteoporosis patients and healthy subjects to assess its feasibility for BMD estimation. The high-density material produced a greater number of scattered image artifacts than the low-density material. Moreover, the radius and ulna of healthy subjects produced a greater number of scattered image artifacts than those from osteoporosis patients. Although other parameters, such as bone thickness and X-ray incidence, should be considered, our technique facilitated BMD estimation directly without DXA or QCT. We believe that BMD estimation based on assessment of scattered image artifacts may benefit the prevention, early treatment and management of osteoporosis.
Estimating population density and connectivity of American mink using spatial capture-recapture.
Fuller, Angela K; Sutherland, Chris S; Royle, J Andrew; Hare, Matthew P
2016-06-01
Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated together in a model that simultaneously models abundance while accounting for connectivity of a landscape. We demonstrate an application of using capture-recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture-recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km² area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture-recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.
Mathematical models for non-parametric inferences from line transect data
Burnham, K.P.; Anderson, D.R.
1976-01-01
A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y | r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires we know the transformation of r given by f(0 | r).
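For orientation, the sketch below shows the standard line-transect relation consistent with the framework above, D = n·f(0)/(2L) when g(0) = 1, where f is the density of observed right-angle distances; the half-normal detection shape and the simulated distances are assumptions made only for this example, not part of the paper.

```python
# Line-transect density from right-angle distances with an assumed half-normal
# detection function: f(0) = sqrt(2 / (pi * sigma^2)), sigma^2 estimated by
# the MLE mean(y^2), and D = n * f(0) / (2 * L).
import numpy as np

def line_transect_density(perp_distances, transect_length):
    y = np.asarray(perp_distances, dtype=float)
    n = y.size
    sigma2 = np.mean(y ** 2)                 # half-normal scale^2 (MLE)
    f0 = np.sqrt(2.0 / (np.pi * sigma2))     # pdf of distances at zero
    return n * f0 / (2.0 * transect_length)

rng = np.random.default_rng(2)
y = np.abs(rng.normal(0, 20.0, size=120))    # simulated distances in metres
print(line_transect_density(y, transect_length=5000.0))  # animals per m^2
```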
NASA Astrophysics Data System (ADS)
Ahn, Chul Kyun; Heo, Changyong; Jin, Heongmin; Kim, Jong Hyo
2017-03-01
Mammographic breast density is a well-established marker for breast cancer risk. However, accurate measurement of dense tissue is a difficult task due to faint contrast and significant variations in background fatty tissue. This study presents a novel method for automated mammographic density estimation based on a Convolutional Neural Network (CNN). A total of 397 full-field digital mammograms were selected from Seoul National University Hospital. Among them, 297 mammograms were randomly selected as a training set and the remaining 100 mammograms were used as a test set. We designed a CNN architecture suitable for learning the imaging characteristics from a multitude of sub-images and classifying them into dense and fatty tissues. To train the CNN, not only local statistics but also global statistics extracted from an image set were used. The image set was composed of the original mammogram and an eigen-image able to capture the X-ray characteristics, even though CNNs are well known to extract features effectively from the original image alone. The 100 test images, which were not used in training the CNN, were used to validate the performance. The correlation coefficient between the breast density estimates by the CNN and those by the expert's manual measurement was 0.96. Our study demonstrated the feasibility of incorporating deep learning technology into radiology practice, especially for breast density estimation. The proposed method has the potential to be used as an automated and quantitative assessment tool for mammographic breast density in routine practice.
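To make the patch-classification idea concrete, here is a small PyTorch sketch of a sub-image classifier for dense versus fatty tissue; it is not the authors' architecture, and the patch size, layer sizes, and random inputs are all assumptions for illustration only.

```python
# Toy CNN that classifies 32x32 mammogram sub-images as dense (class 0) or
# fatty (class 1); the fraction of patches labelled dense would approximate
# a density score when applied over a whole breast.
import torch
import torch.nn as nn

class PatchDensityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)   # two classes: dense, fatty

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = PatchDensityCNN()
patches = torch.randn(4, 1, 32, 32)                  # stand-in sub-images
logits = model(patches)
fraction_dense = (logits.argmax(1) == 0).float().mean()
print(logits.shape, fraction_dense)
```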
The structure of the ISM in the Zone of Avoidance by high-resolution multi-wavelength observations
NASA Astrophysics Data System (ADS)
Tóth, L. V.; Doi, Y.; Pinter, S.; Kovács, T.; Zahorecz, S.; Bagoly, Z.; Balázs, L. G.; Horvath, I.; Racz, I. I.; Onishi, T.
2018-05-01
We estimate the column density of the Galactic foreground interstellar medium (GFISM) in the direction of extragalactic sources. All-sky AKARI FIS infrared sky survey data may be used to trace the GFISM with a resolution of 2 arcminutes. The AKARI-based GFISM hydrogen column density estimates are compared with similar quantities based on HI 21 cm measurements of various resolutions and on Planck results. High spatial resolution observations of the GFISM may be important for recalculating the physical parameters of gamma-ray burst (GRB) host galaxies using the updated foreground parameters.
Efficient 3D movement-based kernel density estimator and application to wildlife ecology
Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.
2014-01-01
We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
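The snippet below is only a basic 3D kernel density sketch with SciPy over (x, y, z) GPS fixes; the paper's estimator is movement-based and heavily optimized, so this illustrates the underlying idea of a volumetric space-use density, not the published method. The simulated relocations are invented.

```python
# 3D kernel density estimate of space use from GPS fixes (easting, northing,
# altitude); gaussian_kde expects the data as an array of shape (dims, n).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
xyz = np.vstack([
    rng.normal(0, 200, 500),     # easting (m)
    rng.normal(0, 200, 500),     # northing (m)
    rng.normal(50, 15, 500),     # altitude (m)
])
kde = gaussian_kde(xyz)
grid_point = np.array([[0.0], [0.0], [50.0]])
print(kde(grid_point))           # utilization density at one 3D location
```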
Tree-centric mapping of forest carbon density from airborne laser scanning and hyperspectral data.
Dalponte, Michele; Coomes, David A
2016-10-01
Forests are a major component of the global carbon cycle, and accurate estimation of forest carbon stocks and fluxes is important in the context of anthropogenic global change. Airborne laser scanning (ALS) data sets are increasingly recognized as outstanding data sources for high-fidelity mapping of carbon stocks at regional scales. We develop a tree-centric approach to carbon mapping, based on identifying individual tree crowns (ITCs) and species from airborne remote sensing data, from which individual tree carbon stocks are calculated. We identify ITCs from the laser scanning point cloud using a region-growing algorithm and identify species from airborne hyperspectral data by machine learning. For each detected tree, we predict stem diameter from its height and crown-width estimate. From that point on, we use well-established approaches developed for field-based inventories: above-ground biomasses of trees are estimated using published allometries and summed within plots to estimate carbon density. We show this approach is highly reliable: tests in the Italian Alps demonstrated a close relationship between field- and ALS-based estimates of carbon stocks (r² = 0.98). Small trees are invisible from the air, and a correction factor is required to accommodate this effect. An advantage of the tree-centric approach over existing area-based methods is that it can produce maps at any scale and is fundamentally based on field-based inventory methods, making it intuitive and transparent. Airborne laser scanning, hyperspectral sensing and computational power are all advancing rapidly, making it increasingly feasible to use ITC approaches for effective mapping of forest carbon density, also inside wider carbon mapping programs like REDD++.
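A minimal sketch of the tree-centric chain described above: predict stem diameter from height and crown width, convert to above-ground biomass with a power-law allometry, and sum within a plot. All coefficients and tree measurements below are hypothetical placeholders, not the values or allometries used in the paper.

```python
# Tree-centric carbon density from detected crowns: DBH model -> allometric
# biomass -> plot-level carbon density. Coefficients are illustrative only.
import numpy as np

def stem_diameter_cm(height_m, crown_width_m, a=0.8, b=1.2, c=2.5):
    return a + b * height_m + c * crown_width_m          # assumed linear DBH model

def agb_kg(dbh_cm, alpha=0.06, beta=2.6):
    return alpha * dbh_cm ** beta                        # generic power-law allometry

heights = np.array([18.0, 22.5, 15.0])                   # ITC heights from ALS
crowns = np.array([4.0, 5.5, 3.2])                       # crown widths from ALS
dbh = stem_diameter_cm(heights, crowns)
biomass = agb_kg(dbh)
plot_area_ha = 0.25
carbon_density = 0.5 * biomass.sum() / plot_area_ha      # assume ~50% of biomass is carbon
print(dbh.round(1), round(carbon_density, 1), "kg C per ha")
```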
Karanth, K.Ullas; Chundawat, Raghunandan S.; Nichols, James D.; Kumar, N. Samba
2004-01-01
Tropical dry-deciduous forests comprise more than 45% of the tiger (Panthera tigris) habitat in India. However, in the absence of rigorously derived estimates of ecological densities of tigers in dry forests, critical baseline data for managing tiger populations are lacking. In this study tiger densities were estimated using photographic capture–recapture sampling in the dry forests of Panna Tiger Reserve in Central India. Over a 45-day survey period, 60 camera trap sites were sampled in a well-protected part of the 542-km² reserve during 2002. A total sampling effort of 914 camera-trap-days yielded photo-captures of 11 individual tigers over 15 sampling occasions that effectively covered a 418-km² area. The closed capture–recapture model Mh, which incorporates individual heterogeneity in capture probabilities, fitted these photographic capture history data well. The estimated capture probability per sample, p̂ = 0.04, resulted in an estimated tiger population size and standard error, N̂ (SE[N̂]), of 29 (9.65), and a density, D̂ (SE[D̂]), of 6.94 (3.23) tigers/100 km². The estimated tiger density matched predictions based on prey abundance. Our results suggest that, if managed appropriately, the available dry forest habitat in India has the potential to support a population size of about 9000 wild tigers.
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Smyth, E.; Small, E. E.
2017-12-01
The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely-sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. In this presentation, we report initial field data analyses and modeling results over the Colorado SnowEx domain in the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically-based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.
Temporal monitoring of vessels activity using day/night band in Suomi NPP on South China Sea
NASA Astrophysics Data System (ADS)
Yamaguchi, Takashi; Asanuma, Ichio; Park, Jong Geol; Mackin, Kenneth J.; Mittleman, John
2017-05-01
In this research, we focus on vessel detection using day/night band (DNB) satellite imagery from Suomi NPP in order to monitor changes in vessel activity in the South China Sea region. In this paper, we consider the relation between temporal changes in vessel activity and events in the maritime environment, based on vessel traffic density estimation using DNB. DNB is a moderate-resolution (350-700 m) satellite imagery product, but it can detect the fishing lights of fishery boats at night, every day. The advantage of DNB is continuous monitoring over a wide area compared with other vessel detection and locating systems. However, DNB is strongly influenced by cloud and lunar reflection. Therefore, we additionally used the brightness temperature at 3.7 μm (BT3.7) for cloud information. In our previous research, we constructed an empirical vessel detection model based on DNB contrast and an estimate of cloud conditions using BT3.7, and we proposed a vessel traffic density estimation method based on this empirical model. In this paper, we construct temporal density estimation maps of the South China Sea and East China Sea in order to extract knowledge from changes in vessel activity.
Evaluation of Statistical Methodologies Used in U. S. Army Ordnance and Explosive Work
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ostrouchov, G
2000-02-14
Oak Ridge National Laboratory was tasked by the U.S. Army Engineering and Support Center (Huntsville, AL) to evaluate the mathematical basis of existing software tools used to assist the Army with the characterization of sites potentially contaminated with unexploded ordnance (UXO). These software tools are collectively known as SiteStats/GridStats. The first purpose of the software is to guide sampling of underground anomalies to estimate a site's UXO density. The second purpose is to delineate areas of homogeneous UXO density that can be used in the formulation of response actions. It was found that SiteStats/GridStats does adequately guide the sampling so that the UXO density estimator for a sector is unbiased. However, the software's techniques for delineation of homogeneous areas perform less well than visual inspection, which is frequently used to override the software in the overall sectorization methodology. The main problems with the software lie in the criteria used to detect nonhomogeneity and those used to recommend the number of homogeneous subareas. SiteStats/GridStats is not a decision-making tool in the classical sense. Although it does provide information to decision makers, it does not require a decision based on that information. SiteStats/GridStats provides information that is supplemented by visual inspections, land-use plans, and risk estimates prior to making any decisions. Although the sector UXO density estimator is unbiased regardless of UXO density variation within a sector, its variability increases with increased sector density variation. For this reason, the current practice of visual inspection of individual sampled grid densities (as provided by SiteStats/GridStats) is necessary to ensure approximate homogeneity, particularly at sites with medium to high UXO density. Together with the SiteStats/GridStats override capabilities, this provides a sufficient mechanism for homogeneous sectorization and thus yields representative UXO density estimates. Objections raised by various parties to the use of a numerical "discriminator" in SiteStats/GridStats were likely due to the fact that the statistical technique concerned is customarily applied for a different purpose, and to poor documentation. The "discriminator" in SiteStats/GridStats is a "tuning parameter" for the sampling process, and it affects the precision of the grid density estimates through changes in required sample size. It is recommended that sector characterization in terms of a map showing contour lines of constant UXO density with an expressed uncertainty or confidence level is a better basis for remediation decisions than a sector UXO density point estimate. A number of spatial density estimation techniques could be adapted to the UXO density estimation problem.
NASA Astrophysics Data System (ADS)
Waters, Daniel F.; Cadou, Christopher P.
2014-02-01
A unique requirement of underwater vehicles' power/energy systems is that they remain neutrally buoyant over the course of a mission. Previous work published in the Journal of Power Sources reported gross, as opposed to neutrally buoyant, energy densities of an integrated solid oxide fuel cell/Rankine-cycle power system based on the exothermic reaction of aluminum with seawater. This paper corrects this shortcoming by presenting a model for estimating system mass and using it to update the key findings of the original paper in the context of the neutral buoyancy requirement. It also presents an expanded sensitivity analysis to illustrate the influence of various design and modeling assumptions. While energy density is very sensitive to turbine efficiency (sensitivity coefficient in excess of 0.60), it is relatively insensitive to all other major design parameters (sensitivity coefficients < 0.15), such as compressor efficiency, inlet water temperature, and scaling methodology. The neutral buoyancy requirement introduces a significant (∼15%) energy density penalty, but overall the system still appears to offer five- to eightfold improvements in energy density (i.e., vehicle range/endurance) over present battery-based technologies.
Assimilation of thermospheric measurements for ionosphere-thermosphere state estimation
NASA Astrophysics Data System (ADS)
Miladinovich, Daniel S.; Datta-Barua, Seebany; Bust, Gary S.; Makela, Jonathan J.
2016-12-01
We develop a method that uses data assimilation to estimate ionospheric-thermospheric (IT) states during midlatitude nighttime storm conditions. The algorithm Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE) uses time-varying electron densities in the F region, derived primarily from total electron content data, to estimate two drivers of the IT: neutral winds and electric potential. A Kalman filter is used to update background models based on ingested plasma densities and neutral wind measurements. This is the first time a Kalman filtering technique is used with the EMPIRE algorithm and the first time neutral wind measurements from 630.0 nm Fabry-Perot interferometers (FPIs) are ingested to improve estimates of storm time ion drifts and neutral winds. The effects of assimilating remotely sensed neutral winds from FPI observations are studied by comparing results of ingesting: electron densities (N) only, N plus half the measurements from a single FPI, and then N plus all of the FPI data. While estimates of ion drifts and neutral winds based on N give estimates similar to the background models, this study's results show that ingestion of the FPI data can significantly change neutral wind and ion drift estimation away from background models. In particular, once neutral winds are ingested, estimated neutral winds agree more with validation wind data, and estimated ion drifts in the magnetic field-parallel direction are more sensitive to ingestion than the field-perpendicular zonal and meridional directions. Also, data assimilation with FPI measurements helps provide insight into the effects of contamination on 630.0 nm emissions experienced during geomagnetic storms.
Model-based local density sharpening of cryo-EM maps
Jakobi, Arjen J; Wilmanns, Matthias
2017-01-01
Atomic models based on high-resolution density maps are the ultimate result of the cryo-EM structure determination process. Here, we introduce a general procedure for local sharpening of cryo-EM density maps based on prior knowledge of an atomic reference structure. The procedure optimizes contrast of cryo-EM densities by amplitude scaling against the radially averaged local falloff estimated from a windowed reference model. By testing the procedure using six cryo-EM structures of TRPV1, β-galactosidase, γ-secretase, ribosome-EF-Tu complex, 20S proteasome and RNA polymerase III, we illustrate how local sharpening can increase interpretability of density maps in particular in cases of resolution variation and facilitates model building and atomic model refinement. PMID:29058676
Porphyry copper deposit density
Singer, Donald A.; Berger, Vladimir; Menzie, W. David; Berger, Byron R.
2005-01-01
Estimating numbers of undiscovered mineral deposits has been a source of unease among economic geologists yet is a fundamental task in considering future supplies of resources. Estimates can be based on frequencies of deposits per unit of permissive area in control areas around the world in the same way that grade and tonnage frequencies are models of sizes and qualities of undiscovered deposits. To prevent biased estimates it is critical that, for a particular deposit type, these deposit density models be internally consistent with descriptive and grade and tonnage models of the same type. In this analysis only deposits and prospects that are likely to be included in future grade and tonnage models are employed, and deposits that have mineralization or alteration separated by less than an arbitrary but consistent distance—2 km for porphyry copper deposits—are combined into one deposit. Only 286 deposits and prospects that have more than half of the deposit not covered by postmineral rocks, sediments, or ice were counted. Nineteen control areas were selected and outlined along borders of hosting magmatic arc terranes based on three main features: (1) extensive exploration for porphyry copper deposits, (2) definable geologic settings of the porphyry copper deposits in island and continental volcanic-arc subduction-boundary zones, and (3) diversity of epochs of porphyry copper deposit formation. Porphyry copper deposit densities vary from 2 to 128 deposits per 100,000 km² of exposed permissive rock, and the density histogram is skewed to high values. Ninety percent of the control areas have densities of four or more deposits, 50 percent have densities of 15 or more deposits, and 10 percent have densities of 35 or more deposits per 100,000 km². Deposit density is not related to age or depth of emplacement. Porphyry copper deposit density is inversely related to the exposed area of permissive rock. The linear regression line and confidence limits constructed with the 19 control areas can be used to estimate the number of undiscovered deposits, given the size of a permissive area. In an example of the use of the equations, we estimate a 90 percent chance of at least four, a 50 percent chance of at least 11, and a 10 percent chance of at least 34 undiscovered porphyry copper deposits in the exposed parts of the Andean belt of Antarctica, which has no known deposits in a permissive area of about 76,000 km². Measures of densities of deposits presented here allow rather simple yet robust estimation of the number of undiscovered porphyry copper deposits in exposed or covered permissive terranes.
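To illustrate the density-versus-area regression idea (not the published fit), the sketch below models deposit density as a power law of permissive area and converts it to an expected deposit count; the intercept and slope are made-up placeholders, and the published regression and its confidence limits are not reproduced.

```python
# Expected number of undiscovered deposits from a hypothetical log-log
# regression of deposit density on exposed permissive area.
import numpy as np

def expected_deposits(area_km2, intercept=4.2, slope=-0.6):
    # density in deposits per 100,000 km^2, modelled as a power law of area
    log_density = intercept + slope * np.log10(area_km2)
    density = 10.0 ** log_density
    return density * area_km2 / 1e5

for area in (10_000, 76_000, 300_000):
    print(area, "km^2 ->", round(expected_deposits(area), 1), "deposits (illustrative)")
```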
D.J. Miller; K.M. Burnett
2007-01-01
We use regionally available digital elevation models and land-cover data, calibrated with ground- and photo-based landslide inventories, to produce spatially distributed estimates of shallow, translational landslide density (number/unit area) for the Oregon Coast Range. We resolve relationships between landslide density and forest cover. We account for topographic...
A mass-density model can account for the size-weight illusion.
Wolf, Christian; Bergmann Tiest, Wouter M; Drewing, Knut
2018-01-01
When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: One estimate derived from the object's mass, and the other from the object's density, with estimates' weights based on their relative reliabilities. While information about mass can directly be perceived, information about density will in some cases first have to be derived from mass and volume. However, according to our model at the crucial perceptual level, heaviness judgments will be biased by the objects' density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object's density: Objects of the same density were perceived as more similar and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception.
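A minimal sketch of the weighted-average rule described above: two heaviness estimates, one from mass and one from density, are combined with weights proportional to their reliabilities (inverse variances). The correlated-noise component of the full model is omitted, and all numbers are illustrative.

```python
# Reliability-weighted combination of a mass-based and a density-based
# heaviness estimate; better volume information lowers the density estimate's
# variance and pulls the combined judgment towards the density cue.
def combine(est_mass, var_mass, est_density, var_density):
    w_mass = (1.0 / var_mass) / (1.0 / var_mass + 1.0 / var_density)
    w_density = 1.0 - w_mass
    return w_mass * est_mass + w_density * est_density

print(combine(est_mass=1.0, var_mass=0.2, est_density=1.6, var_density=1.0))  # weak volume info
print(combine(est_mass=1.0, var_mass=0.2, est_density=1.6, var_density=0.1))  # strong volume info
```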
Statistical field estimators for multiscale simulations.
Eapen, Jacob; Li, Ju; Yip, Sidney
2005-11-01
We present a systematic approach for generating smooth and accurate fields from particle simulation data using the notions of statistical inference. As an extension to a parametric representation based on the maximum likelihood technique previously developed for velocity and temperature fields, a nonparametric estimator based on the principle of maximum entropy is proposed for particle density and stress fields. Both estimators are applied to represent molecular dynamics data on shear-driven flow in an enclosure which exhibits a high degree of nonlinear characteristics. We show that the present density estimator is a significant improvement over ad hoc bin averaging and is also free of systematic boundary artifacts that appear in the method of smoothing kernel estimates. Similarly, the velocity fields generated by the maximum likelihood estimator do not show any edge effects that can be erroneously interpreted as slip at the wall. For low Reynolds numbers, the velocity fields and streamlines generated by the present estimator are benchmarked against Newtonian continuum calculations. For shear velocities that are a significant fraction of the thermal speed, we observe a form of shear localization that is induced by the confining boundary.
Estimating effective data density in a satellite retrieval or an objective analysis
NASA Technical Reports Server (NTRS)
Purser, R. J.; Huang, H.-L.
1993-01-01
An attempt is made to formulate consistent objective definitions of the concept of 'effective data density' applicable both in the context of satellite soundings and more generally in objective data analysis. The definitions based upon various forms of Backus-Gilbert 'spread' functions are found to be seriously misleading in satellite soundings where the model resolution function (expressing the sensitivity of retrieval or analysis to changes in the background error) features sidelobes. Instead, estimates derived by smoothing the trace components of the model resolution function are proposed. The new estimates are found to be more reliable and informative in simulated satellite retrieval problems and, for the special case of uniformly spaced perfect observations, agree exactly with their actual density. The new estimates integrate to the 'degrees of freedom for signal', a diagnostic that is invariant to changes of units or coordinates used.
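As a sketch of the quantities involved, the code below builds an averaging-kernel (model resolution) matrix for a toy linear retrieval, takes its trace as the degrees of freedom for signal, and smooths its diagonal as a stand-in for the proposed effective data density; the forward model and error variances are invented and are not from the paper.

```python
# Degrees of freedom for signal and a smoothed-diagonal "effective data
# density" from a toy optimal-estimation averaging kernel A = G K,
# with G = (K' Se^-1 K + Sa^-1)^-1 K' Se^-1.
import numpy as np

def resolution_matrix(K, noise_var, prior_var):
    Se_inv = np.eye(K.shape[0]) / noise_var
    Sa_inv = np.eye(K.shape[1]) / prior_var
    G = np.linalg.solve(K.T @ Se_inv @ K + Sa_inv, K.T @ Se_inv)
    return G @ K

rng = np.random.default_rng(4)
K = rng.normal(size=(8, 20))                 # 8 channels, 20 retrieval levels
A = resolution_matrix(K, noise_var=0.5, prior_var=1.0)
dfs = np.trace(A)                            # degrees of freedom for signal
effective_density = np.convolve(np.diag(A), np.ones(3) / 3, mode="same")  # smoothed trace components
print(round(dfs, 2), effective_density.round(2))
```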
Sutherland, Chris; Royle, Andy
2016-01-01
This chapter provides a non-technical overview of ‘closed population capture–recapture’ models, a class of well-established models that are widely applied in ecology, such as removal sampling, covariate models, and distance sampling. These methods are regularly adopted for studies of reptiles, in order to estimate abundance from counts of marked individuals while accounting for imperfect detection. Thus, the chapter describes some classic closed population models for estimating abundance, with considerations for some recent extensions that provide a spatial context for the estimation of abundance, and therefore density. Finally, the chapter suggests some software for use in data analysis, such as the Windows-based program MARK, and provides an example of estimating abundance and density of reptiles using an artificial cover object survey of Slow Worms (Anguis fragilis).
Estimating abundance: Chapter 27
Royle, J. Andrew
2016-01-01
This chapter provides a non-technical overview of ‘closed population capture–recapture’ models, a class of well-established models that are widely applied in ecology, such as removal sampling, covariate models, and distance sampling. These methods are regularly adopted for studies of reptiles, in order to estimate abundance from counts of marked individuals while accounting for imperfect detection. Thus, the chapter describes some classic closed population models for estimating abundance, with considerations for some recent extensions that provide a spatial context for the estimation of abundance, and therefore density. Finally, the chapter suggests some software for use in data analysis, such as the Windows-based program MARK, and provides an example of estimating abundance and density of reptiles using an artificial cover object survey of Slow Worms (Anguis fragilis).
NASA Astrophysics Data System (ADS)
Li, Zhenhai; Li, Na; Li, Zhenhong; Wang, Jianwen; Liu, Chang
2017-10-01
Rapid real-time monitoring of wheat nitrogen (N) status is crucial for precision N management during wheat growth. In this study, a Multi Lookup Table (Multi-LUT) approach based on N-PROSAIL model parameter settings at different growth stages was constructed for estimating canopy N density (CND) in winter wheat. The results showed that the estimated CND was in line with measured CND, with determination coefficient (R²) and corresponding root mean square error (RMSE) values of 0.80 and 1.16 g m⁻², respectively. Estimation time for one sample was only 6 ms on a test machine with a quad-core Intel(R) Core(TM) i5-2430 CPU at 2.40 GHz. These results confirmed the potential of using the Multi-LUT approach for CND retrieval in winter wheat at different growth stages and under variable climatic conditions.
Singh, Tulika; Sharma, Madhurima; Singla, Veenu; Khandelwal, Niranjan
2016-01-01
The objective of our study was to calculate mammographic breast density with a fully automated volumetric breast density measurement method and to compare it to breast imaging reporting and data system (BI-RADS) breast density categories assigned by two radiologists. A total of 476 full-field digital mammography examinations with standard mediolateral oblique and craniocaudal views were evaluated by two blinded radiologists and BI-RADS density categories were assigned. Using a fully automated software, mean fibroglandular tissue volume, mean breast volume, and mean volumetric breast density were calculated. Based on percentage volumetric breast density, a volumetric density grade was assigned from 1 to 4. The weighted overall kappa was 0.895 (almost perfect agreement) for the two radiologists' BI-RADS density estimates. A statistically significant difference was seen in mean volumetric breast density among the BI-RADS density categories. With increased BI-RADS density category, increase in mean volumetric breast density was also seen (P < 0.001). A significant positive correlation was found between BI-RADS categories and volumetric density grading by fully automated software (ρ = 0.728, P < 0.001 for first radiologist and ρ = 0.725, P < 0.001 for second radiologist). Pairwise estimates of the weighted kappa between Volpara density grade and BI-RADS density category by two observers showed fair agreement (κ = 0.398 and 0.388, respectively). In our study, a good correlation was seen between density grading using fully automated volumetric method and density grading using BI-RADS density categories assigned by the two radiologists. Thus, the fully automated volumetric method may be used to quantify breast density on routine mammography. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Automated mammographic breast density estimation using a fully convolutional network.
Lee, Juhun; Nishikawa, Robert M
2018-03-01
The purpose of this study was to develop a fully automated algorithm for mammographic breast density estimation using deep learning. Our algorithm used a fully convolutional network, which is a deep learning framework for image segmentation, to segment both the breast and the dense fibroglandular areas on mammographic images. Using the segmented breast and dense areas, our algorithm computed the breast percent density (PD), which is the fraction of dense area in a breast. Our dataset included full-field digital screening mammograms of 604 women, which included 1208 mediolateral oblique (MLO) and 1208 craniocaudal (CC) views. We allocated 455, 58, and 91 of 604 women and their exams into training, testing, and validation datasets, respectively. We established ground truth for the breast and the dense fibroglandular areas via manual segmentation and segmentation using a simple thresholding based on BI-RADS density assessments by radiologists, respectively. Using the mammograms and ground truth, we fine-tuned a pretrained deep learning network to train the network to segment both the breast and the fibroglandular areas. Using the validation dataset, we evaluated the performance of the proposed algorithm against radiologists' BI-RADS density assessments. Specifically, we conducted a correlation analysis between a BI-RADS density assessment of a given breast and its corresponding PD estimate by the proposed algorithm. In addition, we evaluated our algorithm in terms of its ability to classify the BI-RADS density using PD estimates, and its ability to provide consistent PD estimates for the left and the right breast and the MLO and CC views of the same women. To show the effectiveness of our algorithm, we compared the performance of our algorithm against a state-of-the-art algorithm, laboratory for individualized breast radiodensity assessment (LIBRA). The PD estimated by our algorithm correlated well with BI-RADS density ratings by radiologists. Pearson's rho values of our algorithm for CC view, MLO view, and CC-MLO-averaged were 0.81, 0.79, and 0.85, respectively, while those of LIBRA were 0.58, 0.71, and 0.69, respectively. For CC view and CC-MLO averaged cases, the difference in rho values between the proposed algorithm and LIBRA showed statistical significance (P < 0.006). In addition, our algorithm provided reliable PD estimates for the left and the right breast (Pearson's ρ > 0.87) and for the MLO and CC views (Pearson's ρ = 0.76). However, LIBRA showed a lower Pearson's rho value (0.66) for both the left and right breasts for the CC view. In addition, our algorithm showed an excellent ability to separate each sub BI-RADS breast density class (statistically significant, p-values = 0.0001 or less); only one comparison pair, density 1 and density 2 in the CC view, was not statistically significant (P = 0.54). However, LIBRA failed to separate breasts in density 1 and 2 for both the CC and MLO views (P > 0.64). We have developed a new deep learning based algorithm for breast density segmentation and estimation. We showed that the proposed algorithm correlated well with BI-RADS density assessments by radiologists and outperformed an existing state-of-the-art algorithm. © 2018 American Association of Physicists in Medicine.
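The two evaluation steps above reduce to simple computations once the segmentation masks exist: percent density is the fraction of dense pixels inside the breast mask, and agreement with BI-RADS is summarized by a Pearson correlation. The sketch below uses synthetic masks and invented ratings, not study data.

```python
# Percent density from segmentation masks, and a Pearson correlation of PD
# estimates against ordinal BI-RADS density ratings.
import numpy as np
from scipy.stats import pearsonr

def percent_density(breast_mask, dense_mask):
    breast = breast_mask.astype(bool)
    dense = dense_mask.astype(bool) & breast
    return 100.0 * dense.sum() / breast.sum()

rng = np.random.default_rng(5)
breast = np.ones((256, 256), dtype=bool)            # toy breast mask
dense = rng.random((256, 256)) < 0.3                # toy dense-tissue mask
print(round(percent_density(breast, dense), 1))     # about 30

pd_estimates = np.array([8.0, 15.0, 31.0, 55.0, 62.0, 20.0])   # invented PD values
birads = np.array([1, 2, 3, 4, 4, 2])                           # invented ratings
rho, p = pearsonr(pd_estimates, birads)
print(round(rho, 2), round(p, 3))
```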
The fish community of a small impoundment in upstate New York
McCoy, C. Mead; Madenjian, Charles P.; Adams, Jean V.; Harman, Willard N.
2001-01-01
Moe Pond is a dimictic impoundment with a surface area of 15.6 ha, a mean depth of 1.8 m, and an unexploited fish community of only two species: brown bullhead (Ameiurus nebulosus) and golden shiner (Notemigonus crysoleucas). The age-1 and older brown bullhead population was estimated to be 4,057 individuals, based on the Schnabel capture-recapture method of population estimation. Density and biomass were respectively estimated at 260 individuals/ha and 13 kg/ha. Annual survival rate of age-2 through age-5 brown bullheads was estimated at 48%. The golden shiner length-frequency distribution was unimodal with a modal length of 80 mm and a maximum total length of 115 mm. The golden shiner population estimate was 7,154 individuals, based on seven beach seine haul replicate samples; the density and biomass were 686 shiners/ha and 5 kg/ha, respectively. This study provides an information baseline that may be useful in understanding food web interactions and whole-pond nutrient flux.
On soil textural classifications and soil-texture-based estimations
NASA Astrophysics Data System (ADS)
Ángel Martín, Miguel; Pachepsky, Yakov A.; García-Gutiérrez, Carlos; Reyes, Miguel
2018-02-01
The soil texture representation with the standard textural fraction triplet "sand-silt-clay" is commonly used to estimate soil properties. The objective of this work was to test the hypothesis that other fraction sizes in the triplets may provide a better representation of soil texture for estimating some soil parameters. We estimated the cumulative particle size distribution and bulk density from an entropy-based representation of the textural triplet with experimental data for 6240 soil samples. The results supported the hypothesis. For example, simulated distributions were not significantly different from the original ones in 25 and 85% of cases when the "sand-silt-clay" and "very coarse + coarse + medium sand, fine + very fine sand, silt + clay" triplets were used, respectively. When the same standard and modified triplets were used to estimate the average bulk density, the coefficients of determination were 0.001 and 0.967, respectively. Overall, the textural triplet selection appears to be application and data specific.
Estimating cosmic velocity fields from density fields and tidal tensors
NASA Astrophysics Data System (ADS)
Kitaura, Francisco-Shu; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan
2012-10-01
In this work we investigate the non-linear and non-local relation between cosmological density and peculiar velocity fields. Our goal is to provide an algorithm for the reconstruction of the non-linear velocity field from the fully non-linear density. We find that including the gravitational tidal field tensor using second-order Lagrangian perturbation theory based upon an estimate of the linear component of the non-linear density field significantly improves the estimate of the cosmic flow in comparison to linear theory not only in the low density, but also and more dramatically in the high-density regions. In particular we test two estimates of the linear component: the lognormal model and the iterative Lagrangian linearization. The present approach relies on a rigorous higher order Lagrangian perturbation theory analysis which incorporates a non-local relation. It does not require additional fitting from simulations, being in this sense parameter free; it is independent of statistical-geometrical optimization; and it is straightforward and efficient to compute. The method is demonstrated to yield an unbiased estimator of the velocity field on scales ≳5 h-1 Mpc with closely Gaussian distributed errors. Moreover, the statistics of the divergence of the peculiar velocity field is extremely well recovered showing a good agreement with the true one from N-body simulations. The typical errors of about 10 km s-1 (1σ confidence intervals) are reduced by more than 80 per cent with respect to linear theory in the scale range between 5 and 10 h-1 Mpc in high-density regions (δ > 2). We also find that iterative Lagrangian linearization is significantly superior in the low-density regime with respect to the lognormal model.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a probability density function of the form c·x^k·(1−x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
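The linearity result above can be made concrete with the conjugate beta-binomial case: with a Beta(a, b) prior on the rate and k jumps observed in n trials, the posterior mean (the MMSE estimate) is (a + k)/(a + b + n), which is linear in the observed count. The values in the sketch below are illustrative only.

```python
# MMSE (posterior-mean) estimate of a jump rate with a beta prior and a
# binomially distributed count of observed jumps; note the linearity in k.
a, b = 2.0, 3.0          # beta prior parameters (illustrative)
n = 10                   # number of observation intervals

for k in range(0, n + 1, 2):
    mmse = (a + k) / (a + b + n)
    print(k, round(mmse, 3))
```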
Emission measures derived from far ultraviolet spectra of T Tauri stars
NASA Astrophysics Data System (ADS)
Cram, L. E.; Giampapa, M. S.; Imhoff, C. L.
1980-06-01
Spectroscopic diagnostics based on UV emission line observations have been developed to study the solar chromosphere, transition region, and corona. The atmospheric properties that can be inferred from observations of total line intensities include the temperature, by identifying the ionic species present; the temperature distribution of the emission measure, from the absolute intensities; and the electron density of the source, from line intensity ratios sensitive to the electron density. In the present paper, the temperature distribution of the emission measure is estimated from observations of far UV emission line fluxes of the T Tauri stars, RW Aurigae and RU Lupi, made on the IUE. A crude estimate of the electron density of one star is obtained, using density-sensitive line ratios.
Capture-recapture of white-tailed deer using DNA from fecal pellet-groups
Goode, Matthew J; Beaver, Jared T; Muller, Lisa I; Clark, Joseph D.; van Manen, Frank T.; Harper, Craig T; Basinger, P Seth
2014-01-01
Traditional methods for estimating white-tailed deer population size and density are affected by behavioral biases, poor detection in densely forested areas, and invalid techniques for estimating effective trapping area. We evaluated a noninvasive method of capture–recapture for white-tailed deer (Odocoileus virginianus) density estimation using DNA extracted from fecal pellets as an individual marker and for gender determination, coupled with a spatial detection function to estimate density (spatially explicit capture–recapture, SECR). We collected pellet groups from 11 to 22 January 2010 at randomly selected sites within a 1-km² area located on Arnold Air Force Base in Coffee and Franklin counties, Tennessee. We searched 703 10-m radius plots and collected 352 pellet-group samples from 197 plots over five two-day sampling intervals. Using only the freshest pellets we recorded 140 captures of 33 different animals (15M:18F). Male and female densities were 1.9 (SE = 0.8) and 3.8 (SE = 1.3) deer km⁻², or a total density of 5.8 deer km⁻² (14.9 deer mile⁻²). Population size was 20.8 (SE = 7.6) over a 360-ha area, and sex ratio was 1.0 M: 2.0 F (SE = 0.71). We found DNA sampling from pellet groups improved deer abundance, density and sex ratio estimates in contiguous landscapes which could be used to track responses to harvest or other management actions.
A simple method for estimating the size of nuclei on fractal surfaces
NASA Astrophysics Data System (ADS)
Zeng, Qiang
2017-10-01
Determining the size of nuclei on complex surfaces remains a major challenge in biological, material and chemical engineering. Here the author reports a simple method to estimate the size of nuclei in contact with complex (fractal) surfaces. The established approach is based on the assumptions of contact area proportionality for determining nucleation density and of scaling congruence between nuclei and surfaces for identifying contact regimes. Three different regimes are found to govern the equations for estimating the nucleation site density. Nuclei large enough in size can eliminate the effect of the fractal structure. Nuclei small enough in size can lead to independence of the nucleation site density from the fractal parameters. Only when nuclei match the fractal scales is the nucleation site density associated with the fractal parameters and the size of the nuclei in a coupling pattern. The method was validated by the experimental data reported in the literature. The method may provide an effective way to estimate the size of nuclei on fractal surfaces, through which a number of promising applications in related fields can be envisioned.
Estimation of distributed Fermat-point location for wireless sensor networking.
Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien
2011-01-01
This work presents a localization scheme for use in wireless sensor networks (WSNs) that is based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE applies a triangular location-estimation area formed by the intersections of three neighboring beacon nodes. The Fermat point is determined as the point with the shortest total path to the three vertices of the triangle. The estimated location area is then refined using the Fermat point to achieve minimum error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes that are based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals that, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Second, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated. However, the mean error changes slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms, and improves upon conventional bounding box strategies.
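The geometric core of the scheme above is the Fermat point, the point minimizing the total distance to the three triangle vertices. The sketch below finds it by direct numerical minimization; the beacon triangle is hypothetical, and the full DFPLE protocol (beacon intersections, area refinement) is not reproduced.

```python
# Fermat point of a triangle via numerical minimization of the summed
# distances to the three vertices, starting from the centroid.
import numpy as np
from scipy.optimize import minimize

def fermat_point(vertices):
    vertices = np.asarray(vertices, dtype=float)
    total_dist = lambda p: np.linalg.norm(vertices - p, axis=1).sum()
    res = minimize(total_dist, vertices.mean(axis=0))
    return res.x

triangle = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]   # hypothetical beacon triangle
print(fermat_point(triangle))
```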
Sara A. Goeking; Greg C. Liknes; Erik Lindblom; John Chase; Dennis M. Jacobs; Robert. Benton
2012-01-01
Recent changes to the Forest Inventory and Analysis (FIA) Program's definition of forest land precipitated the development of a geographic information system (GIS)-based tool for efficiently estimating tree canopy cover for all FIA plots. The FIA definition of forest land has shifted from a density-related criterion based on stocking to a 10 percent tree canopy...
Forsum, Elisabet; Henriksson, Pontus; Löf, Marie
2014-01-01
The possibility of assessing body composition during pregnancy is often important. Estimating body density (DB) and using the two-component model (2CM) to calculate total body fat (TBF) represents one option. However, this approach has been insufficiently evaluated during pregnancy. We evaluated the 2CM, and estimated fat-free mass (FFM) density and variability, in 17 healthy women before pregnancy, in gestational weeks 14 and 32, and 2 weeks postpartum, based on DB (underwater weighing), total body water (deuterium dilution) and body weight, assessed on these four occasions. TBF, calculated using the 2CM and published FFM density (TBF2CM), was compared to reference estimates obtained using the three-component model (TBF3CM). TBF2CM minus TBF3CM (mean ± 2SD) was −1.63 ± 5.67 (p = 0.031), −1.39 ± 7.75 (p = 0.16), −0.38 ± 4.44 (p = 0.49) and −1.39 ± 5.22 (p = 0.043) % before pregnancy, in gestational weeks 14 and 32, and 2 weeks postpartum, respectively. The effect of pregnancy on the variability of FFM density was larger in gestational week 14 than in gestational week 32. The 2CM, based on DB and published FFM density, assessed body composition as accurately in gestational week 32 as in non-pregnant adults. Corresponding values in gestational week 14 were slightly less accurate than those obtained before pregnancy. PMID:25526240
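For reference, the two-component model reduces to a simple relation once fat and fat-free mass densities are fixed: 1/DB = f/d_fat + (1 − f)/d_ffm, solved for the fat fraction f. The densities used in the sketch below are common literature values, not necessarily those used in the study.

```python
# Two-component model: fat fraction from measured body density DB, assuming
# fixed densities for fat and fat-free mass (illustrative values).
def fat_fraction(db, d_fat=0.9007, d_ffm=1.100):
    return ((1.0 / db) - (1.0 / d_ffm)) / ((1.0 / d_fat) - (1.0 / d_ffm))

for db in (1.010, 1.030, 1.050):
    print(db, round(100 * fat_fraction(db), 1), "% body fat")
```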
DS — Software for analyzing data collected using double sampling
Bart, Jonathan; Hartley, Dana
2011-01-01
DS analyzes count data to estimate density or relative density and population size when appropriate. The software is available at http://iwcbm.dev4.fsr.com/IWCBM/default.asp?PageID=126. The software was designed to analyze data collected using double sampling, but it also can be used to analyze index data. DS is not currently configured to apply distance methods or methods based on capture-recapture theory. Double sampling for the purpose of this report means surveying a sample of locations with a rapid method of unknown accuracy and surveying a subset of these locations using a more intensive method assumed to yield unbiased estimates. "Detection ratios" are calculated as the ratio of results from rapid surveys on intensive plots to the number actually present as determined from the intensive surveys. The detection ratios are used to adjust results from the rapid surveys. The formula for density is (results from rapid survey)/(estimated detection ratio from intensive surveys). Population sizes are estimated as (density)(area). Double sampling is well-established in the survey sampling literature—see Cochran (1977) for the basic theory, Smith (1995) for applications of double sampling in waterfowl surveys, Bart and Earnst (2002, 2005) for discussions of its use in wildlife studies, and Bart and others (in press) for a detailed account of how the method was used to survey shorebirds across the arctic region of North America. Indices are surveys that do not involve complete counts of well-defined plots or recording information to estimate detection rates (Thompson and others, 1998). In most cases, such data should not be used to estimate density or population size but, under some circumstances, may be used to compare two densities or estimate how density changes through time or across space (Williams and others, 2005). The Breeding Bird Survey (Sauer and others, 2008) provides a good example of an index survey. Surveyors record all birds detected but do not record any information, such as distance or whether each bird is recorded in subperiods, that could be used to estimate detection rates. Nonetheless, the data are widely used to estimate temporal trends and spatial patterns in abundance (Sauer and others, 2008). DS produces estimates of density (or relative density for indices) by species and stratum. Strata are usually defined using region and habitat but other variables may be used, and the entire study area may be classified as a single stratum. Population size in each stratum and for the entire study area also is estimated for each species. For indices, the estimated totals generally are only useful if (a) plots are surveyed so that densities can be calculated and extrapolated to the entire study area and (b) if the detection rates are close to 1.0. All estimates are accompanied by standard errors (SE) and coefficients of variation (CV, that is, SE/estimate).
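The double-sampling adjustment described above is arithmetic once the counts are in hand: the detection ratio from the intensively surveyed plots corrects the rapid-survey density, and population size is the adjusted density times area. The numbers in the sketch below are invented for illustration.

```python
# Double-sampling density and population estimate:
# density = (rapid-survey density) / (detection ratio from intensive plots).
rapid_count_on_intensive_plots = 60      # birds found by the rapid method on intensive plots
true_count_on_intensive_plots = 80       # birds actually present (intensive survey)
detection_ratio = rapid_count_on_intensive_plots / true_count_on_intensive_plots

rapid_density = 12.0                     # birds per km^2 from all rapid plots
adjusted_density = rapid_density / detection_ratio
study_area_km2 = 350.0
population_size = adjusted_density * study_area_km2
print(detection_ratio, adjusted_density, population_size)
```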
Li, Songfeng; Wei, Jun; Chan, Heang-Ping; Helvie, Mark A; Roubidoux, Marilyn A; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir M; Samala, Ravi K
2018-01-09
Breast density is one of the most significant factors associated with cancer risk. In this study, our purpose was to develop a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammograms (DMs). The input 'for processing' DMs was first log-transformed, enhanced by a multi-resolution preprocessing scheme, and subsampled to a pixel size of 800 µm × 800 µm from 100 µm × 100 µm. A deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD) by using a domain adaptation resampling method. The PD was estimated as the ratio of the dense area to the breast area based on the PMD. The DCNN approach was compared to a feature-based statistical learning approach. Gray level, texture and morphological features were extracted and a least absolute shrinkage and selection operator was used to combine the features into a feature-based PMD. With approval of the Institutional Review Board, we retrospectively collected a training set of 478 DMs and an independent test set of 183 DMs from patient files in our institution. Two experienced Mammography Quality Standards Act (MQSA) radiologists interactively segmented PD as the reference standard. Ten-fold cross-validation was used for model selection and evaluation with the training set. With cross-validation, DCNN obtained a Dice's coefficient (DC) of 0.79 ± 0.13 and Pearson's correlation (r) of 0.97, whereas feature-based learning obtained DC = 0.72 ± 0.18 and r = 0.85. For the independent test set, DCNN achieved DC = 0.76 ± 0.09 and r = 0.94, while feature-based learning achieved DC = 0.62 ± 0.21 and r = 0.75. Our DCNN approach was significantly better and more robust than the feature-based learning approach for automated PD estimation on DMs, demonstrating its potential use for automated density reporting as well as for model-based risk prediction.
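The final PD step, computing the ratio of dense area to breast area from a probability map, can be sketched as below; the 0.5 threshold, array names, and random inputs are assumptions for illustration, not the authors' trained DCNN or its calibrated decision rule.

```python
import numpy as np

# Illustrative sketch only: percent density as the ratio of dense area to breast area,
# derived from a probability map of density (PMD) and a breast mask.
def percent_density(pmd: np.ndarray, breast_mask: np.ndarray, threshold: float = 0.5) -> float:
    dense_area = np.count_nonzero((pmd >= threshold) & breast_mask)
    breast_area = np.count_nonzero(breast_mask)
    return 100.0 * dense_area / breast_area

pmd = np.random.rand(256, 256)           # stand-in probability map of breast density
mask = np.ones((256, 256), dtype=bool)   # stand-in breast mask
print(percent_density(pmd, mask))        # ~50% for uniform random probabilities
```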
Challenges of DNA-based mark-recapture studies of American black bears
Settlage, K.E.; Van Manen, F.T.; Clark, J.D.; King, T.L.
2008-01-01
We explored whether genetic sampling would be feasible to provide a region-wide population estimate for American black bears (Ursus americanus) in the southern Appalachians, USA. Specifically, we determined whether adequate capture probabilities (p >0.20) and population estimates with a low coefficient of variation (CV <20%) could be achieved given typical agency budget and personnel constraints. We extracted DNA from hair collected from baited barbed-wire enclosures sampled over a 10-week period on 2 study areas: a high-density black bear population in a portion of Great Smoky Mountains National Park and a lower density population on National Forest lands in North Carolina, South Carolina, and Georgia. We identified individual bears by their unique genotypes obtained from 9 microsatellite loci. We sampled 129 and 60 different bears in the National Park and National Forest study areas, respectively, and applied closed mark–recapture models to estimate population abundance. Capture probabilities and precision of the population estimates were acceptable only for sampling scenarios for which we pooled weekly sampling periods. We detected capture heterogeneity biases, probably because of inadequate spatial coverage by the hair-trapping grid. The logistical challenges of establishing and checking a sufficiently high density of hair traps make DNA-based estimates of black bears impractical for the southern Appalachian region. Alternatives are to estimate population size for smaller areas, estimate population growth rates or survival using mark–recapture methods, or use independent marking and recapturing techniques to reduce capture heterogeneity.
Jiang, Shenghang; Park, Seongjin; Challapalli, Sai Divya; Fei, Jingyi; Wang, Yong
2017-01-01
We report a robust nonparametric descriptor, J′(r), for quantifying the density of clustering molecules in single-molecule localization microscopy. J′(r), based on nearest neighbor distribution functions, does not require any parameter as an input for analyzing point patterns. We show that J′(r) displays a valley shape in the presence of clusters of molecules, and the characteristics of the valley reliably report the clustering features in the data. Most importantly, the position of the J′(r) valley (rJm′) depends exclusively on the density of clustering molecules (ρc). Therefore, it is ideal for direct estimation of the clustering density of molecules in single-molecule localization microscopy. As an example, this descriptor was applied to estimate the clustering density of ptsG mRNA in E. coli bacteria. PMID:28636661
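The abstract does not give the exact definition of J′(r), only that it is built from nearest-neighbor distribution functions. As background, the sketch below computes the classical spatial-statistics J-function, J(r) = (1 − G(r)) / (1 − F(r)), from nearest-neighbour and empty-space distances; treat it as an assumed building block rather than the authors' descriptor.

```python
import numpy as np
from scipy.spatial import cKDTree

# Sketch of the classical J-function (values < 1 indicate clustering, > 1 regularity);
# the paper's J'(r) is a related nearest-neighbour-based quantity, not reproduced here.
def j_function(points, r, window, n_ref=2000, rng=np.random.default_rng(0)):
    tree = cKDTree(points)
    # G(r): nearest-neighbour distance distribution (point to nearest other point)
    d_nn, _ = tree.query(points, k=2)
    G = np.array([(d_nn[:, 1] <= ri).mean() for ri in r])
    # F(r): empty-space function (arbitrary location to nearest point of the pattern)
    ref = rng.uniform(0, window, size=(n_ref, points.shape[1]))
    d_es, _ = tree.query(ref, k=1)
    F = np.array([(d_es <= ri).mean() for ri in r])
    return (1.0 - G) / (1.0 - F)

pts = np.random.default_rng(1).uniform(0, 10, size=(500, 2))   # toy point pattern
print(j_function(pts, np.linspace(0.05, 0.3, 6), window=10.0))
```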
NASA Astrophysics Data System (ADS)
Guo, Xinwei; Qu, Zexing; Gao, Jiali
2018-01-01
The multi-state density functional theory (MSDFT) provides a convenient way to estimate electronic coupling of charge transfer processes based on a diabatic representation. Its performance has been benchmarked against the HAB11 database with a mean unsigned error (MUE) of 17 meV between MSDFT and ab initio methods. The small difference may be attributed to different representations, diabatic from MSDFT and adiabatic from ab initio calculations. In this discussion, we conclude that MSDFT provides a general and efficient way to estimate the electronic coupling for charge-transfer rate calculations based on the Marcus-Hush model.
Ellison, Aaron M.; Jackson, Scott
2015-01-01
Herpetologists and conservation biologists frequently use convenient and cost-effective, but less accurate, abundance indices (e.g., number of individuals collected under artificial cover boards or during natural objects surveys) in lieu of more accurate, but costly and destructive, population size estimators to detect and monitor size, state, and trends of amphibian populations. Although there are advantages and disadvantages to each approach, reliable use of abundance indices requires that they be calibrated with accurate population estimators. Such calibrations, however, are rare. The red back salamander, Plethodon cinereus, is an ecologically useful indicator species of forest dynamics, and accurate calibration of indices of salamander abundance could increase the reliability of abundance indices used in monitoring programs. We calibrated abundance indices derived from surveys of P. cinereus under artificial cover boards or natural objects with a more accurate estimator of their population size in a New England forest. Average densities/m2 and capture probabilities of P. cinereus under natural objects or cover boards in independent, replicate sites at the Harvard Forest (Petersham, Massachusetts, USA) were similar in stands dominated by Tsuga canadensis (eastern hemlock) and deciduous hardwood species (predominantly Quercus rubra [red oak] and Acer rubrum [red maple]). The abundance index based on salamanders surveyed under natural objects was significantly associated with density estimates of P. cinereus derived from depletion (removal) surveys, but underestimated true density by 50%. In contrast, the abundance index based on cover-board surveys overestimated true density by a factor of 8 and the association between the cover-board index and the density estimates was not statistically significant. We conclude that when calibrated and used appropriately, some abundance indices may provide cost-effective and reliable measures of P. cinereus abundance that could be used in conservation assessments and long-term monitoring at Harvard Forest and other northeastern USA forests. PMID:26020008
Williams, C R; Johnson, P H; Ball, T S; Ritchie, S A
2013-09-01
New mosquito control strategies centred on the modifying of populations require knowledge of existing population densities at release sites and an understanding of breeding site ecology. Using a quantitative pupal survey method, we investigated production of the dengue vector Aedes aegypti (L.) (Stegomyia aegypti) (Diptera: Culicidae) in Cairns, Queensland, Australia, and found that garden accoutrements represented the most common container type. Deliberately placed 'sentinel' containers were set at seven houses and sampled for pupae over 10 weeks during the wet season. Pupal production was approximately constant; tyres and buckets represented the most productive container types. Sentinel tyres produced the largest female mosquitoes, but were relatively rare in the field survey. We then used field-collected data to make estimates of per premises population density using three different approaches. Estimates of female Ae. aegypti abundance per premises made using the container-inhabiting mosquito simulation (CIMSiM) model [95% confidence interval (CI) 18.5-29.1 females] concorded reasonably well with estimates obtained using a standing crop calculation based on pupal collections (95% CI 8.8-22.5) and using BG-Sentinel traps and a sampling rate correction factor (95% CI 6.2-35.2). By first describing local Ae. aegypti productivity, we were able to compare three separate population density estimates which provided similar results. We anticipate that this will provide researchers and health officials with several tools with which to make estimates of population densities. © 2012 The Royal Entomological Society.
Atmospheric turbulence profiling with unknown power spectral density
NASA Astrophysics Data System (ADS)
Helin, Tapio; Kindermann, Stefan; Lehtonen, Jonatan; Ramlau, Ronny
2018-04-01
Adaptive optics (AO) is a technology used in modern ground-based optical telescopes to compensate for the wavefront distortions caused by atmospheric turbulence. One method for retrieving information about the atmosphere from telescope data is the so-called SLODAR technique, in which the atmospheric turbulence profile is estimated from correlation data of Shack-Hartmann wavefront measurements. This approach relies on a layered Kolmogorov turbulence model. In this article, we propose a novel extension of the SLODAR concept by including a general non-Kolmogorov turbulence layer close to the ground with an unknown power spectral density. We prove that the joint estimation problem of the turbulence profile above ground simultaneously with the unknown power spectral density at the ground is ill-posed, and we propose three numerical reconstruction methods. We demonstrate by numerical simulations that our methods lead to substantial improvements in the turbulence profile reconstruction compared to the standard SLODAR-type approach. Also, our methods can accurately locate local perturbations in non-Kolmogorov power spectral densities.
NASA Astrophysics Data System (ADS)
Tachibana, Hideyuki; Suzuki, Takafumi; Mabuchi, Kunihiko
We address a method for estimating the isometric muscle tension of fingers, as fundamental research for a neural signal-based prosthesis of fingers. We utilize needle electromyogram (EMG) signals, which carry approximately equivalent information to peripheral neural signals. The estimation algorithm comprises two convolution operations. The first convolution is between a normal distribution and a spike array detected from the needle EMG signals; it estimates the probability density of spike-invoking times in the muscle. In this convolution, we hypothesize that each motor unit in a muscle fires spikes independently according to the same probability density function. The second convolution is between the result of the first convolution and the isometric twitch, i.e., the impulse response of the motor unit. The result of the calculation is the sum of the estimated tensions of all muscle fibers, i.e., the muscle tension. We confirmed good correlation between the estimated and actual muscle tension, with correlation coefficients >0.9 in 59% of trials and >0.8 in 89% of all trials.
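A minimal numerical sketch of the two-convolution pipeline described above follows; the sampling rate, Gaussian width, and twitch shape are assumptions chosen for illustration, not the authors' identified parameters.

```python
import numpy as np

# Illustrative sketch of the two-convolution tension estimate (parameter values assumed).
fs = 10_000                                   # Hz, assumed sampling rate
t = np.arange(0, 0.05, 1 / fs)                # 50 ms kernels

spikes = np.zeros(fs)                         # 1-s spike train detected from needle EMG
spikes[np.random.default_rng(0).integers(0, fs, 200)] = 1.0

# 1st convolution: spike train * normal distribution -> probability density of spike times
sigma = 0.005                                 # s, assumed spread of the Gaussian kernel
gauss = np.exp(-0.5 * ((t - t.mean()) / sigma) ** 2)
gauss /= gauss.sum()
spike_density = np.convolve(spikes, gauss, mode="same")

# 2nd convolution: spike-time density * isometric twitch (motor-unit impulse response)
tau = 0.01                                    # s, assumed twitch time constant
twitch = (t / tau) * np.exp(1 - t / tau)      # simple normalized twitch model
tension = np.convolve(spike_density, twitch, mode="same")  # summed tension of the muscle
print(tension.max())
```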
Estimating black bear density using DNA data from hair snares
Gardner, B.; Royle, J. Andrew; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.
2010-01-01
DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial-point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km2. A positive estimate of the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.
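To make the detection model concrete, here is a sketch of a standard spatial capture-recapture detection function with a behavioral (trap-response) covariate; the half-normal form and all parameter values are illustrative assumptions, not the estimates reported for this bear population.

```python
import numpy as np

# Detection probability declines with distance from an individual's activity centre to a
# trap; a positive behavioural-response coefficient raises detection after first capture.
def detection_prob(dist_km, prev_capture, p0=0.1, sigma=2.0, beta=1.0):
    """Half-normal detection with a trap-response (behavioural) covariate."""
    logit_p0 = np.log(p0 / (1 - p0)) + beta * prev_capture   # positive beta = trap-happy
    base = 1 / (1 + np.exp(-logit_p0))
    return base * np.exp(-dist_km ** 2 / (2 * sigma ** 2))

print(detection_prob(np.array([0.5, 2.0, 5.0]), prev_capture=0))
print(detection_prob(np.array([0.5, 2.0, 5.0]), prev_capture=1))  # elevated after first visit
```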
Opit, George P; Perret, Jamis; Holt, Kiffnie; Nechols, James R; Margolies, David C; Williams, Kimberly A
2009-02-01
Efficacy, costs, and impact on crop salability of various biological and chemical control strategies for Tetranychus urticae Koch (Acari: Tetranychidae) were evaluated on mixed plantings of impatiens, Impatiens wallerana Hook.f. (Ericales: Balsaminaceae), and ivy geranium, Pelargonium peltatum (L.) L'Hér. ex Aiton (Geraniales: Geraniaceae), cultivars in commercial greenhouses. Chemical control consisting of the miticide bifenazate (Floramite) was compared with two biological control strategies using the predatory mite Phytoseiulus persimilis Athias-Henriot (Acari: Phytoseiidae). Treatments were 1) a single, early application of bifenazate; 2) a single, early release of predatory mites at a 1:4 predator:pest ratio based on leaf samples to estimate pest density; 3) a weekly release of predatory mites at numbers based on the area covered by the crop; and 4) an untreated control. T. urticae populations were monitored for 3 wk after the earliest treatment. When plants were ready for market, their salability was estimated. Bifenazate and density-based P. persimilis treatments effectively reduced T. urticae numbers starting 1 wk after plants had been treated, whereas the scheduled, area-based P. persimilis treatment had little or no effect. The percentage of flats that could be sold at the highest market wholesale price ranged from 15 to 33%, 44 to 86%, 84 to 95%, and 92 to 100%, in the control, weekly area-based P. persimilis, bifenazate, and single density-based P. persimilis treatments, respectively. We have shown that in commercial greenhouse production of herbaceous ornamental bedding plants, estimating pest density to determine the appropriate number of predators to release is as effective and offers nearly the same economic benefit as prophylactic use of pesticides.
Longitudinal Differences of Ionospheric Vertical Density Distribution and Equatorial Electrodynamics
NASA Technical Reports Server (NTRS)
Yizengaw, E.; Zesta, E.; Moldwin, M. B.; Damtie, B.; Mebrahtu, A.; Valledares, C.E.; Pfaff, R. F.
2012-01-01
Accurate estimation of global vertical distribution of ionospheric and plasmaspheric density as a function of local time, season, and magnetic activity is required to improve the operation of space-based navigation and communication systems. The vertical density distribution, especially at low and equatorial latitudes, is governed by the equatorial electrodynamics that produces a vertical driving force. The vertical structure of the equatorial density distribution can be observed by using tomographic reconstruction techniques on ground-based global positioning system (GPS) total electron content (TEC). Similarly, the vertical drift, which is one of the driving mechanisms that govern equatorial electrodynamics and strongly affect the structure and dynamics of the ionosphere in the low/midlatitude region, can be estimated using ground magnetometer observations. We present tomographically reconstructed density distribution and the corresponding vertical drifts at two different longitudes: the East African and west South American sectors. Chains of GPS stations in the east African and west South American longitudinal sectors, covering the equatorial anomaly region of meridian approx. 37 deg and 290 deg E, respectively, are used to reconstruct the vertical density distribution. Similarly, magnetometer sites of African Meridian B-field Education and Research (AMBER) and INTERMAGNET for the east African sector and South American Meridional B-field Array (SAMBA) and Low Latitude Ionospheric Sensor Network (LISN) are used to estimate the vertical drift velocity at two distinct longitudes. The comparison between the reconstructed and Jicamarca Incoherent Scatter Radar (ISR) measured density profiles shows excellent agreement, demonstrating the usefulness of tomographic reconstruction technique in providing the vertical density distribution at different longitudes. Similarly, the comparison between magnetometer estimated vertical drift and other independent drift observation, such as from VEFI onboard Communication/Navigation Outage Forecasting System (C/NOFS) satellite and JULIA radar, is equally promising. The observations at different longitudes suggest that the vertical drift velocities and the vertical density distribution have significant longitudinal differences; especially the equatorial anomaly peaks expand to higher latitudes more in American sector than the African sector, indicating that the vertical drift in the American sector is stronger than the African sector.
Modeling global mangrove soil carbon stocks: filling the gaps in coastal environments
NASA Astrophysics Data System (ADS)
Rovai, A.; Twilley, R.
2017-12-01
We provide an overview of contemporaneous global mangrove soil organic carbon (SOC) estimates, focusing on a framework to explain disproportionate differences among observed data as a way to improve global estimates. This framework is based on a former conceptual model, the coastal environmental setting, in contrast to the more popular latitude-based hypotheses largely believed to explain hemispheric variation in mangrove ecosystem properties. To demonstrate how local and regional estimates of SOC linked to coastal environmental settings can render more realistic global mangrove SOC extrapolations we combined published and unpublished data, yielding a total of 106 studies, reporting on 552 sites from 43 countries. These sites were classified into distinct coastal environmental setting types according to two concurrent worldwide typologies of nearshore coastal systems. Mangrove SOC density varied substantially across coastal environmental settings, ranging from 14.9 ± 0.8 mg cm-3 in river-dominated (deltaic) soils to 53.9 ± 1.6 mg cm-3 (mean ± SE) in karstic coastlines. Our findings reveal striking differences between published values and contemporary global mangrove SOC extrapolation based on country-level mean reference values, particularly for karstic-dominated coastlines where mangrove SOC stocks have been underestimated by up to 50%. Correspondingly, climate-based global estimates predicted lower mangrove SOC density values (32-41 mg C cm-3) for mangroves in karstic environments, differing from published (21-126 mg C cm-3) and unpublished (47-58 mg C cm-3) values. Moreover, climate-based projections yielded higher SOC density values (27-70 mg C cm-3) for river-dominated mangroves compared to lower ranges reported in the literature (11-24 mg C cm-3). We argue that this inconsistent reporting of SOC stock estimates between river-dominated and karstic coastal environmental settings is likely due to the omission of geomorphological and geophysical environmental drivers, which control C storage in coastal wetlands. We encourage the science community to make closer use of coastal environmental settings and new inventories of geomorphological typologies to build more robust local and regional estimates of SOC that can be extrapolated to global C estimates.
BLAKE - A Thermodynamics Code Based on TIGER: Users’ Guide and Manual
1982-07-01
... hydrazine, UDMH, water (H2O) ... The formulas and enthalpies of formation of these ingredients are listed in Appendix B. Wherever possible they were taken ... estimates for B(T), it does not improve the estimates of C(I). Numerically, the contribution of the third coefficient is necessary for loading densities ... here, C will be assumed independent of temperature. The volume, V, is related to the reference mass, M0, and the density, ρ, by ρ = M0/V (Eq. 9) ...
Gupta, Manan; Joshi, Amitabh; Vidya, T N C
2017-01-01
Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates. Therefore, the effect of social organization on bias in population estimation could be removed by using POPAN with specific parameter combinations, to obtain population size estimates in a social species.
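As a toy illustration of the independence problem motivating this study (and not the authors' POPAN or Robust Design analysis), the sketch below compares a simple two-occasion Lincoln-Petersen estimate when whole social groups are captured together versus when individuals are captured independently; group sizes and capture probabilities are assumptions.

```python
import numpy as np

# Group-wise capture (fission-fusion style) violates independence of captures and
# degrades a naive two-occasion abundance estimate relative to independent captures.
rng = np.random.default_rng(2)

def lincoln_petersen(N=1000, group_size=10, p=0.3, social=True, reps=2000):
    """Mean two-occasion abundance estimate when whole groups are caught together."""
    n_groups = N // group_size
    est = []
    for _ in range(reps):
        if social:                       # capture decided per group, shared by members
            c1 = np.repeat(rng.random(n_groups) < p, group_size)
            c2 = np.repeat(rng.random(n_groups) < p, group_size)
        else:                            # capture decided independently per individual
            c1 = rng.random(N) < p
            c2 = rng.random(N) < p
        m = np.sum(c1 & c2)
        if m > 0:
            est.append(c1.sum() * c2.sum() / m)
    return np.mean(est)

print(lincoln_petersen(social=False))   # close to the true N = 1000
print(lincoln_petersen(social=True))    # biased upwards and far more variable
```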
Regional model-based computerized ionospheric tomography using GPS measurements: IONOLAB-CIT
NASA Astrophysics Data System (ADS)
Tuna, Hakan; Arikan, Orhan; Arikan, Feza
2015-10-01
Three-dimensional imaging of the electron density distribution in the ionosphere is a crucial task for investigating ionospheric effects. Dual-frequency Global Positioning System (GPS) satellite signals can be used to estimate the slant total electron content (STEC) along the propagation path between a GPS satellite and a ground-based receiver station. However, the estimated GPS-STEC is too sparse and too nonuniformly distributed to obtain reliable 3-D electron density distributions from the measurements alone. Standard tomographic reconstruction techniques are not accurate or reliable enough to represent the full complexity of the variable ionosphere. On the other hand, model-based electron density distributions are produced according to the general trends of the ionosphere, and these distributions do not agree with measurements, especially for geomagnetically active hours. In this study, a regional 3-D electron density distribution reconstruction method, namely, IONOLAB-CIT, is proposed to assimilate GPS-STEC into physical ionospheric models. The proposed method is based on an iterative optimization framework that tracks the deviations from the ionospheric model in terms of F2 layer critical frequency and maximum ionization height resulting from the comparison of International Reference Ionosphere extended to Plasmasphere (IRI-Plas) model-generated STEC and GPS-STEC. The suggested tomography algorithm is applied successfully for the reconstruction of electron density profiles over Turkey, during quiet and disturbed hours of the ionosphere, using the Turkish National Permanent GPS Network.
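The forward relation underlying this assimilation, STEC as the line integral of electron density along the satellite-receiver ray path, can be sketched as follows; the Chapman-like vertical profile and the toy geometry are assumptions standing in for IRI-Plas and a real GPS ray.

```python
import numpy as np

# Slant TEC as the integral of electron density along a straight receiver-satellite ray.
def slant_tec(ne_func, receiver, satellite, n_steps=2000):
    """Integrate electron density [el/m^3] along the ray; returns TEC in TECU."""
    path = np.linspace(receiver, satellite, n_steps)            # (n_steps, 3) points in metres
    ds = np.linalg.norm(satellite - receiver) / (n_steps - 1)   # step length in metres
    ne = np.array([ne_func(p) for p in path])
    return ne.sum() * ds / 1e16                                 # 1 TECU = 1e16 el/m^2

def chapman_layer(p, nmf2=1e12, hmf2=300e3, h_scale=60e3):
    """Placeholder vertical profile depending only on height (3rd coordinate)."""
    z = (p[2] - hmf2) / h_scale
    return nmf2 * np.exp(1 - z - np.exp(-z))

rx = np.array([0.0, 0.0, 0.0])
sat = np.array([300e3, 0.0, 20200e3])          # roughly GPS altitude, slightly slanted path
print(slant_tec(chapman_layer, rx, sat))       # STEC in TECU for this toy geometry
```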
NASA Astrophysics Data System (ADS)
Minkwitz, David; van den Boogaart, Karl Gerald; Gerzen, Tatjana; Hoque, Mainul; Hernández-Pajares, Manuel
2016-11-01
The estimation of the ionospheric electron density by kriging is based on the optimization of a parametric measurement covariance model. First, the extension of kriging with slant total electron content (STEC) measurements based on a spatial covariance to kriging with a spatial-temporal covariance model, assimilating STEC data of a sliding window, is presented. Secondly, a novel tomography approach by gradient-enhanced kriging (GEK) is developed. Beyond the ingestion of STEC measurements, GEK assimilates ionosonde characteristics, providing peak electron density measurements as well as gradient information. Both approaches deploy the 3-D electron density model NeQuick as a priori information and estimate the covariance parameter vector within a maximum likelihood estimation for the dedicated tomography time stamp. The methods are validated in the European region for two periods covering quiet and active ionospheric conditions. The kriging with spatial and spatial-temporal covariance model is analysed regarding its capability to reproduce STEC, differential STEC and foF2. Therefore, the estimates are compared to the NeQuick model results, the 2-D TEC maps of the International GNSS Service and the DLR's Ionospheric Monitoring and Prediction Center, and in the case of foF2 to two independent ionosonde stations. Moreover, simulated STEC and ionosonde measurements are used to investigate the electron density profiles estimated by the GEK in comparison to a kriging with STEC only. The results indicate a crucial improvement in the initial guess by the developed methods and point out the potential compensation for a bias in the peak height hmF2 by means of GEK.
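As a minimal illustration of the interpolation step only, the following sketch performs ordinary kriging with an exponential covariance; the covariance model, its parameters, and the toy observations are assumptions, whereas the method above assimilates STEC and ionosonde data around a NeQuick background and fits covariance parameters by maximum likelihood.

```python
import numpy as np

# Bare-bones ordinary kriging: solve for weights that honour the covariance structure
# and sum to one (Lagrange multiplier row/column), then predict at a new location.
def exp_cov(h, sill=1.0, corr_len=500.0):
    return sill * np.exp(-h / corr_len)

def ordinary_kriging(xy_obs, z_obs, xy_new, sill=1.0, corr_len=500.0):
    n = len(xy_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = exp_cov(d, sill, corr_len)
    K[:n, n] = K[n, :n] = 1.0                      # unbiasedness constraint
    d0 = np.linalg.norm(xy_obs - xy_new, axis=-1)
    k = np.append(exp_cov(d0, sill, corr_len), 1.0)
    w = np.linalg.solve(K, k)
    return float(w[:n] @ z_obs)

obs_xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 200.0], [150.0, 150.0]])  # km
obs_z = np.array([10.0, 12.0, 9.0, 11.0])                                     # e.g. TEC residuals
print(ordinary_kriging(obs_xy, obs_z, np.array([50.0, 50.0])))
```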
Radiomic modeling of BI-RADS density categories
NASA Astrophysics Data System (ADS)
Wei, Jun; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Zhou, Chuan; Hadjiiski, Lubomir
2017-03-01
Screening mammography is the most effective and low-cost method to date for early cancer detection. Mammographic breast density has been shown to be highly correlated with breast cancer risk. We are developing a radiomic model for BI-RADS density categorization on digital mammography (FFDM) with a supervised machine learning approach. With IRB approval, we retrospectively collected 478 FFDMs from 478 women. As a gold standard, breast density was assessed by an MQSA radiologist based on BI-RADS categories. The raw FFDMs were used for computerized density assessment. The raw FFDM first underwent log-transform to approximate the x-ray sensitometric response, followed by multiscale processing to enhance the fibroglandular densities and parenchymal patterns. Three ROIs were automatically identified based on the keypoint distribution, where the keypoints were obtained as the extrema in the image Gaussian scale-space. A total of 73 features, including intensity and texture features that describe the density and the parenchymal pattern, were extracted from each breast. Our BI-RADS density estimator was constructed by using a random forest classifier. We used a 10-fold cross validation resampling approach to estimate the errors. With the random forest classifier, computerized density categories for 412 of the 478 cases agree with radiologist's assessment (weighted kappa = 0.93). The machine learning method with radiomic features as predictors demonstrated a high accuracy in classifying FFDMs into BI-RADS density categories. Further work is underway to improve our system performance as well as to perform an independent testing using a large unseen FFDM set.
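The classification and evaluation step can be sketched as below with scikit-learn; the random stand-in data, forest size, and quadratic kappa weighting are assumptions for illustration (the abstract specifies a random forest, 10-fold cross-validation, and a weighted kappa, but not these exact settings).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import cross_val_predict

# Schematic only: radiomic features -> random forest -> BI-RADS category, scored with
# weighted kappa against the radiologist's assessment. Random data stand in for the
# 73 intensity/texture features and the reference labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(478, 73))                 # one 73-feature vector per case
y = rng.integers(1, 5, size=478)               # BI-RADS density categories 1-4 (reference)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=10)   # 10-fold cross-validation, as in the study
print(cohen_kappa_score(y, y_pred, weights="quadratic"))   # near 0 on random data
```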
Determination of CME 3D parameters based on a new full ice-cream cone model
NASA Astrophysics Data System (ADS)
Na, Hyeonock; Moon, Yong-Jae
2017-08-01
In space weather forecasting, it is important to determine the three-dimensional properties of CMEs. Using 29 limb CMEs, we examine which cone type is closest to a CME's three-dimensional structure. We find that most CMEs have a near-full ice-cream cone structure, i.e., a symmetrical circular cone combined with a hemisphere. We develop a full ice-cream cone model based on a new methodology in which the full ice-cream cone consists of many flat cones with different heights and angular widths. By applying this model to 12 SOHO/LASCO halo CMEs, we find that 3D parameters from our method are similar to those from other stereoscopic methods (i.e., a triangulation method and a Graduated Cylindrical Shell model). In addition, we derive the CME mean density (ρmean = Mtotal/Vcone) based on the full ice-cream cone structure. For several limb events, we determine CME mass by applying the Solarsoft procedure (e.g., cme_mass.pro) to SOHO/LASCO C3 images. CME volumes are estimated from the full ice-cream cone structure. From the power-law relationship between CME mean density and height, we estimate CME mean densities at 20 solar radii (Rs). We will compare the CME densities at 20 Rs with their corresponding ICME densities.
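The geometric bookkeeping behind ρmean = Mtotal/Vcone can be sketched as follows; the mass, angular width, height, and power-law index below are placeholders for illustration, not the fitted values, and the cone-plus-hemisphere volume is a simplification of the full ice-cream cone construction.

```python
import numpy as np

# Volume of a "full ice-cream cone" (circular cone capped by a hemisphere of the same
# base radius) and the corresponding mean density; extrapolation to 20 Rs assumes an
# illustrative power-law fall-off of the mean density with height.
RSUN_CM = 6.957e10

def full_ice_cream_cone_volume(height_rs, half_angle_deg):
    """Volume (cm^3) of a cone of given apex height plus a hemispherical cap."""
    r_cone = height_rs * np.tan(np.radians(half_angle_deg))   # base radius, in Rs
    v_rs3 = np.pi * r_cone**2 * height_rs / 3 + 2 * np.pi * r_cone**3 / 3
    return v_rs3 * RSUN_CM**3

mass_g = 5e15                         # placeholder CME mass from a cme_mass.pro-type analysis
height, half_angle = 10.0, 30.0       # Rs and degrees, placeholders
rho_mean = mass_g / full_ice_cream_cone_volume(height, half_angle)
print(rho_mean)                       # mean density in g/cm^3 at 10 Rs

alpha = 2.5                           # placeholder power-law index
print(rho_mean * (20.0 / height) ** (-alpha))   # extrapolated mean density at 20 Rs
```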
Is there a single best estimator? Selection of home range estimators using area-under-the-curve
Walter, W. David; Onorato, Dave P.; Fischer, Justin W.
2015-01-01
Comparisons of fit of home range contours with locations collected would suggest that use of VHF technology is not as accurate as GPS technology to estimate size of home range for large mammals. Estimators of home range collected with GPS technology performed better than those estimated with VHF technology regardless of estimator used. Furthermore, estimators that incorporate a temporal component (third-generation estimators) appeared to be the most reliable regardless of whether kernel-based or Brownian bridge-based algorithms were used and in comparison to first- and second-generation estimators. We defined third-generation estimators of home range as any estimator that incorporates time, space, animal-specific parameters, and habitat. Such estimators would include movement-based kernel density, Brownian bridge movement models, and dynamic Brownian bridge movement models among others that have yet to be evaluated.
Khorozyan, Igor G; Malkhasyan, Alexander G; Abramov, Alexei V
2008-12-01
It is important to predict how many individuals of a predator species can survive in a given area on the basis of prey sufficiency and to compare predictive estimates with actual numbers to understand whether or not key threats are related to prey availability. Rugged terrain and low detection probabilities do not allow for the use of traditional prey count techniques in mountain areas. We used presence-absence occupancy modeling and camera-trapping to estimate the abundance and densities of prey species and regression analysis to predict leopard (Panthera pardus) densities from estimated prey biomass in the mountains of the Nuvadi area, Meghri Ridge, southern Armenia. The prey densities were 12.94 ± 2.18 individuals/km2 for the bezoar goat (Capra aegagrus), 6.88 ± 1.56 for the wild boar (Sus scrofa) and 0.44 ± 0.20 for the roe deer (Capreolus capreolus). The detection probability of the prey was a strong function of the activity patterns, and was highest in diurnal bezoar goats (0.59 ± 0.09). Based on robust regression, the estimated total ungulate prey biomass (720.37 ± 142.72 kg/km2) can support a leopard density of 7.18 ± 3.06 individuals/100 km2. The actual leopard density is only 0.34 individuals/100 km2 (i.e. one subadult male recorded over the 296.9 km2), estimated from tracking and camera-trapping. The most plausible explanation for this discrepancy between predicted and actual leopard density is that poaching and disturbance caused by livestock breeding, plant gathering, deforestation and human-induced wild fires are affecting the leopard population in Armenia. © 2008 ISZS, Blackwell Publishing and IOZ/CAS.
A mass-density model can account for the size-weight illusion
Bergmann Tiest, Wouter M.; Drewing, Knut
2018-01-01
When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: One estimate derived from the object’s mass, and the other from the object’s density, with estimates’ weights based on their relative reliabilities. While information about mass can directly be perceived, information about density will in some cases first have to be derived from mass and volume. However, according to our model at the crucial perceptual level, heaviness judgments will be biased by the objects’ density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object’s density: Objects of the same density were perceived as more similar and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception. PMID:29447183
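The central idea, a reliability-weighted average of a mass-based and a density-based heaviness estimate, can be sketched as below; the sketch omits the correlated-noise term of the full model, and all parameter values are illustrative rather than the fitted ones.

```python
import numpy as np

# Maximum-likelihood style cue combination: each estimate is weighted in proportion to
# its reliability (inverse variance); more reliable volume information gives the
# density-based estimate more weight.
def combined_heaviness(est_mass, est_density, sigma_mass, sigma_density):
    w_mass = (1 / sigma_mass**2) / (1 / sigma_mass**2 + 1 / sigma_density**2)
    w_density = 1.0 - w_mass
    return w_mass * est_mass + w_density * est_density

# Same mass, smaller object: higher density. When volume information is reliable
# (small sigma_density), the judged heaviness is pulled further towards the density cue.
print(combined_heaviness(est_mass=1.0, est_density=1.4, sigma_mass=0.2, sigma_density=0.2))
print(combined_heaviness(est_mass=1.0, est_density=1.4, sigma_mass=0.2, sigma_density=0.8))
```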
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slater, Paul B.
Paralleling our recent computationally intensive (quasi-Monte Carlo) work for the case N=4 (e-print quant-ph/0308037), we undertake the task for N=6 of computing, to high numerical accuracy, the formulas of Sommers and Zyczkowski (e-print quant-ph/0304041) for the (N^2-1)-dimensional volume and (N^2-2)-dimensional hyperarea of the (separable and nonseparable) NxN density matrices, based on the Bures (minimal monotone) metric--and also their analogous formulas (e-print quant-ph/0302197) for the (nonmonotone) flat Hilbert-Schmidt metric. With the same seven 10^9 well-distributed ('low-discrepancy') sample points, we estimate the unknown volumes and hyperareas based on five additional (monotone) metrics of interest, including the Kubo-Mori and Wigner-Yanase. Further, we estimate all of these seven volume and seven hyperarea (unknown) quantities when restricted to the separable density matrices. The ratios of separable volumes (hyperareas) to separable plus nonseparable volumes (hyperareas) yield estimates of the separability probabilities of generically rank-6 (rank-5) density matrices. The (rank-6) separability probabilities obtained based on the 35-dimensional volumes appear to be--independently of the metric (each of the seven inducing Haar measure) employed--twice as large as those (rank-5 ones) based on the 34-dimensional hyperareas. (An additional estimate--33.9982--of the ratio of the rank-6 Hilbert-Schmidt separability probability to the rank-4 one is quite clearly close to integral too.) The doubling relationship also appears to hold for the N=4 case for the Hilbert-Schmidt metric, but not the others. We fit simple exact formulas to our estimates of the Hilbert-Schmidt separable volumes and hyperareas in both the N=4 and N=6 cases.
Gutierrez-Corea, Federico-Vladimir; Manso-Callejo, Miguel-Angel; Moreno-Regidor, María-Pilar; Velasco-Gómez, Jesús
2014-01-01
This study was motivated by the need to improve densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity and examining the methods of spatial GHI estimation (by interpolation) with that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. On the contrary, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations). PMID:24732102
EMG Amplitude Estimators Based on Probability Distribution for Muscle-Computer Interface
NASA Astrophysics Data System (ADS)
Phinyomark, Angkoon; Quaine, Franck; Laurillau, Yann; Thongpanja, Sirinee; Limsakul, Chusak; Phukpattaranont, Pornchai
To develop an advanced muscle-computer interface (MCI) based on surface electromyography (EMG) signals, amplitude estimates of muscle activity, i.e., root mean square (RMS) and mean absolute value (MAV), are widely used as a convenient and accurate input for a recognition system. Their classification performance is comparable to that of more advanced, computationally demanding time-scale methods, e.g., the wavelet transform. However, the signal-to-noise ratio (SNR) performance of RMS and MAV depends on the probability density function (PDF) of the EMG signals, i.e., Gaussian or Laplacian. The PDF of EMG signals associated with upper-limb motions is still not clear, especially for dynamic muscle contraction. In this paper, the EMG PDF is investigated based on surface EMG recorded during finger, hand, wrist and forearm motions. The results show that on average the experimental EMG PDF is closer to a Laplacian density, particularly for male subjects and flexor muscles. For amplitude estimation, MAV has a higher SNR, defined as the mean feature value divided by its fluctuation, than RMS. Because RMS and MAV discriminate equally well in feature space, MAV is recommended as a suitable EMG amplitude estimator for EMG-based MCIs.
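The two amplitude estimators and the SNR criterion above are straightforward to compute per analysis window; the sketch below uses synthetic Laplacian noise as a stand-in for recorded EMG, and the window sizes are arbitrary assumptions.

```python
import numpy as np

# RMS and MAV amplitude estimates for windows of (synthetic) EMG, plus the feature
# "SNR" in the sense used above: mean of the feature across windows divided by its
# standard deviation (fluctuation).
def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def mav(x):
    return np.mean(np.abs(x))

rng = np.random.default_rng(0)
emg = rng.laplace(scale=1.0, size=(200, 256))        # 200 windows of 256 samples each
for name, f in (("RMS", rms), ("MAV", mav)):
    feats = np.apply_along_axis(f, 1, emg)
    print(name, feats.mean() / feats.std())          # higher value = more stable estimator
```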
Scent Lure Effect on Camera-Trap Based Leopard Density Estimates
Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke
2016-01-01
Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a ‘control’ and ‘treatment’ survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p >0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys (although estimates derived using non-spatial methods (7.28–9.28 leopards/100km2) were considerably higher than estimates from spatially-explicit methods (3.40–3.65 leopards/100km2)). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816
Mapping Tree Density at the Global Scale
NASA Astrophysics Data System (ADS)
Covey, K. R.; Crowther, T. W.; Glick, H.; Bettigole, C.; Bradford, M.
2015-12-01
The global extent and distribution of forest trees is central to our understanding of the terrestrial biosphere. We provide the first spatially continuous map of forest tree density at a global scale. This map reveals that the global number of trees is approximately 3.04 trillion, an order of magnitude higher than the previous estimate. Of these trees, approximately 1.39 trillion exist in tropical and subtropical regions, with 0.74 and 0.61 trillion in boreal and temperate regions, respectively. Biome-level trends in tree density demonstrate the importance of climate and topography in controlling local tree densities at finer scales, as well as the overwhelming impact of humans across most of the world. Based on our projected tree densities, we estimate that deforestation is currently responsible for removing over 15 billion trees each year, and the global number of trees has fallen by approximately 46% since the start of human civilization.
Mapping tree density at a global scale
NASA Astrophysics Data System (ADS)
Crowther, T. W.; Glick, H. B.; Covey, K. R.; Bettigole, C.; Maynard, D. S.; Thomas, S. M.; Smith, J. R.; Hintler, G.; Duguid, M. C.; Amatulli, G.; Tuanmu, M.-N.; Jetz, W.; Salas, C.; Stam, C.; Piotto, D.; Tavani, R.; Green, S.; Bruce, G.; Williams, S. J.; Wiser, S. K.; Huber, M. O.; Hengeveld, G. M.; Nabuurs, G.-J.; Tikhonova, E.; Borchardt, P.; Li, C.-F.; Powrie, L. W.; Fischer, M.; Hemp, A.; Homeier, J.; Cho, P.; Vibrans, A. C.; Umunay, P. M.; Piao, S. L.; Rowe, C. W.; Ashton, M. S.; Crane, P. R.; Bradford, M. A.
2015-09-01
The global extent and distribution of forest trees is central to our understanding of the terrestrial biosphere. We provide the first spatially continuous map of forest tree density at a global scale. This map reveals that the global number of trees is approximately 3.04 trillion, an order of magnitude higher than the previous estimate. Of these trees, approximately 1.39 trillion exist in tropical and subtropical forests, with 0.74 trillion in boreal regions and 0.61 trillion in temperate regions. Biome-level trends in tree density demonstrate the importance of climate and topography in controlling local tree densities at finer scales, as well as the overwhelming effect of humans across most of the world. Based on our projected tree densities, we estimate that over 15 billion trees are cut down each year, and the global number of trees has fallen by approximately 46% since the start of human civilization.
NASA Technical Reports Server (NTRS)
Leitold, Veronika; Keller, Michael; Morton, Douglas C.; Cook, Bruce D.; Shimabukuro, Yosio E.
2015-01-01
Background: Carbon stocks and fluxes in tropical forests remain large sources of uncertainty in the global carbon budget. Airborne lidar remote sensing is a powerful tool for estimating aboveground biomass, provided that lidar measurements penetrate dense forest vegetation to generate accurate estimates of surface topography and canopy heights. Tropical forest areas with complex topography present a challenge for lidar remote sensing. Results: We compared digital terrain models (DTM) derived from airborne lidar data from a mountainous region of the Atlantic Forest in Brazil to 35 ground control points measured with survey grade GNSS receivers. The terrain model generated from full-density (approx. 20 returns/sq m) data was highly accurate (mean signed error of 0.19 +/-0.97 m), while those derived from reduced-density datasets (8/sq m, 4/sq m, 2/sq m and 1/sq m) were increasingly less accurate. Canopy heights calculated from reduced-density lidar data declined as data density decreased due to the inability to accurately model the terrain surface. For lidar return densities below 4/sq m, the bias in height estimates translated into errors of 80-125 Mg/ha in predicted aboveground biomass. Conclusions: Given the growing emphasis on the use of airborne lidar for forest management, carbon monitoring, and conservation efforts, the results of this study highlight the importance of careful survey planning and consistent sampling for accurate quantification of aboveground biomass stocks and dynamics. Approaches that rely primarily on canopy height to estimate aboveground biomass are sensitive to DTM errors from variability in lidar sampling density.
Leitold, Veronika; Keller, Michael; Morton, Douglas C; Cook, Bruce D; Shimabukuro, Yosio E
2015-12-01
Carbon stocks and fluxes in tropical forests remain large sources of uncertainty in the global carbon budget. Airborne lidar remote sensing is a powerful tool for estimating aboveground biomass, provided that lidar measurements penetrate dense forest vegetation to generate accurate estimates of surface topography and canopy heights. Tropical forest areas with complex topography present a challenge for lidar remote sensing. We compared digital terrain models (DTM) derived from airborne lidar data from a mountainous region of the Atlantic Forest in Brazil to 35 ground control points measured with survey grade GNSS receivers. The terrain model generated from full-density (~20 returns m-2) data was highly accurate (mean signed error of 0.19 ± 0.97 m), while those derived from reduced-density datasets (8 m-2, 4 m-2, 2 m-2 and 1 m-2) were increasingly less accurate. Canopy heights calculated from reduced-density lidar data declined as data density decreased due to the inability to accurately model the terrain surface. For lidar return densities below 4 m-2, the bias in height estimates translated into errors of 80-125 Mg ha-1 in predicted aboveground biomass. Given the growing emphasis on the use of airborne lidar for forest management, carbon monitoring, and conservation efforts, the results of this study highlight the importance of careful survey planning and consistent sampling for accurate quantification of aboveground biomass stocks and dynamics. Approaches that rely primarily on canopy height to estimate aboveground biomass are sensitive to DTM errors from variability in lidar sampling density.
A robust background regression based score estimation algorithm for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei
2016-12-01
Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation than some of the other state-of-the-art anomaly detection methods, and is easy to implement in practice.
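As a loose illustration of the final scoring idea only, the sketch below computes a k-nearest-neighbour anomaly score of pixel spectra against an assumed background set; it simplifies RBRSE, which first separates a robust background set via kernel regression with manifold regularization before the paired-dataset k-nn scoring, and all data here are synthetic.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Pixels whose spectra lie far from their nearest neighbours in the background set
# receive high anomaly scores.
def knn_anomaly_score(pixels, background, k=5):
    nn = NearestNeighbors(n_neighbors=k).fit(background)
    dist, _ = nn.kneighbors(pixels)
    return dist.mean(axis=1)                       # mean distance to k nearest background spectra

rng = np.random.default_rng(0)
background = rng.normal(0, 1, size=(1000, 50))     # 1000 background pixels, 50 spectral bands
scene = np.vstack([rng.normal(0, 1, size=(95, 50)),
                   rng.normal(4, 1, size=(5, 50))])   # 5 injected anomalies at the end
scores = knn_anomaly_score(scene, background)
print(np.argsort(scores)[-5:])                     # highest-scoring pixels (indices 95-99)
```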
NASA Astrophysics Data System (ADS)
Ren, Zhengyong; Qiu, Lewen; Tang, Jingtian; Wu, Xiaoping; Xiao, Xiao; Zhou, Zilong
2018-01-01
Although accurate numerical solvers for 3-D direct current (DC) isotropic resistivity models are currently available, even for complicated models with topography, reliable numerical solvers for the anisotropic case remain an open question. This study aims to develop a novel and optimal numerical solver for accurately calculating the DC potentials for complicated models with arbitrary anisotropic conductivity structures in the Earth. First, a secondary potential boundary value problem is derived by considering the topography and the anisotropic conductivity. Then, two a posteriori error estimators are developed for the anisotropic case: one using the gradient-recovery technique and one measuring the discontinuity of the normal component of the current density. Combining goal-oriented and non-goal-oriented mesh refinement with these two error estimators, four different solving strategies are developed for complicated DC anisotropic forward modelling problems. A synthetic anisotropic two-layer model with analytic solutions is used to verify the accuracy of our algorithms. A half-space model with a buried anisotropic cube and a mountain-valley model are adopted to test the convergence rates of these four solving strategies. We found that the error estimator based on the discontinuity of current density performs better than the gradient-recovery-based a posteriori error estimator for anisotropic models with conductivity contrasts. Both error estimators working together with goal-oriented concepts can offer optimal mesh density distributions and highly accurate solutions.
Density estimation in a wolverine population using spatial capture-recapture models
Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.; McKelvey, Kevin
2011-01-01
Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly developed capture-recapture models that accommodate the spatial attributes inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km2 area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000 km2 (95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.
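As a minimal illustration of the spatial ingredient these models add, the sketch below (with hypothetical camera coordinates and hypothetical parameter values p0 and sigma) computes half-normal encounter probabilities between a candidate activity centre and a set of traps; the full Bayesian trap-response model described above additionally modifies these probabilities after a first capture.

```python
import numpy as np

def scr_encounter_prob(center, traps, p0=0.2, sigma=2.0):
    """Half-normal encounter model used in many spatial capture-recapture
    (SCR) analyses: detection probability decays with distance between an
    animal's activity centre and each camera/trap."""
    d2 = np.sum((traps - center) ** 2, axis=1)      # squared distances (km^2)
    return p0 * np.exp(-d2 / (2.0 * sigma ** 2))    # one probability per trap

# Hypothetical example: three cameras and one activity centre (coordinates in km).
traps = np.array([[0.0, 0.0], [3.0, 1.0], [6.0, 4.0]])
print(scr_encounter_prob(np.array([1.0, 1.0]), traps))
```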
Analytical minimization of synchronicity errors in stochastic identification
NASA Astrophysics Data System (ADS)
Bernal, D.
2018-01-01
An approach to minimize error due to synchronicity faults in stochastic system identification is presented. The scheme is based on shifting the time domain signals so that the phases of the fundamental eigenvector estimated from the spectral density are zero. A threshold on the mean of the amplitude-weighted absolute value of these phases, above which signal shifting is deemed justified, is derived and found to be proportional to the first mode damping ratio. It is shown that synchronicity faults do not map precisely to phasor multiplications in subspace identification and that the accuracy of spectral-density-estimated eigenvectors, for inputs with arbitrary spectral density, decreases with increasing mode number. These observations motivate a corrective strategy based on signal alignment rather than eigenvector adjustment using phasors. Simulations that include noise and non-classical damping suggest that the scheme can provide sufficient accuracy to be of practical value.
Mullin, Keith D.; Wells, Randall S.; Balmer, Brian C.; Speakman, Todd; Sinclair, Carrie; Zolman, Eric S.; Hornsby, Fawn; McBride, Shauna M.; Wilkinson, Krystan A.; Schwacke, Lori H.
2017-01-01
After the Deepwater Horizon (DWH) oil spill began in April 2010, studies were initiated on northern Gulf of Mexico common bottlenose dolphins (Tursiops truncatus) in Mississippi Sound (MSS) to determine density, abundance, and survival, during and after the oil spill, and to compare these results to previous research in this region. Seasonal boat-based photo-identification surveys (2010–2012) were conducted in a section of MSS to estimate dolphin density and survival, and satellite-linked telemetry (2013) was used to determine ranging patterns. Telemetry suggested two different ranging patterns in MSS: (1) inshore waters with seasonal movements into mid-MSS, and (2) around the barrier islands exclusively. Based upon these data, dolphin density was estimated in two strata (Inshore and Island) using a spatially-explicit robust-design capture-recapture model. Inshore and Island density varied between 0.77–1.61 dolphins km−2 (x¯ = 1.42, 95% CI: 1.28–1.53) and 3.32–5.74 dolphins km−2 (x¯ = 4.43, 95% CI: 2.70–5.63), respectively. The estimated annual survival rate for dolphins with distinctive fins was very low in the year following the spill, 0.73 (95% CI: 0.67–0.78), and consistent with the occurrence of a large scale cetacean unusual mortality event that was in part attributed to the DWH oil spill. Fluctuations in density were not as large or seasonally consistent as previously reported. Total abundance for MSS extrapolated from density results ranged from 4,610 in July 2011 to 3,046 in January 2012 (x¯ = 3,469, 95% CI: 3,113–3,725). PMID:29053728
NASA Astrophysics Data System (ADS)
Webb, S. I.; Tudge, J.; Tobin, H. J.
2013-12-01
Integrated Ocean Drilling Program (IODP) Expedition 338, the most recently completed drilling stage of the NanTroSEIZE project, targeted the Miocene inner accretionary prism off the coast of southwest Japan. NanTroSEIZE is a multi-stage project in which the main objective is to characterize, sample, and instrument the potentially seismogenic region of the Nankai Trough, an active subduction zone. Understanding the physical properties of the inner accretionary prism will aid in the characterization of the deformation that has taken place and the evolution of stress, fluid pressure, and strain over the deformational history of these sediments and rocks. This study focuses on the estimation of porosity and density from available logs to inform solid and fluid volume estimates at Site C0002 from the sea floor through the Kumano Basin into the accretionary prism. Gamma ray, resistivity, and sonic logs were acquired at Hole C0002F, to a total depth of 2005 mbsf into the inner accretionary prism. Because a density and neutron porosity tool could not be deployed, porosity and density must be estimated using a variety of largely empirical methods. In this study, we calculate estimated porosity and density from both the electrical resistivity and sonic (P-wave velocity) logs collected in Hole C0002F. However, the relationship of these physical properties to the available logs is not straightforward and can be affected by changes in fluid type, salinity, temperature, presence of fractures, and clay mineralogy. To evaluate and calibrate the relationships among these properties, we take advantage of the more extensive suite of LWD data recorded in Hole C0002A at the same drill site, including density and neutron porosity measurements. Data collected in both boreholes overlaps in the interval from 875 - 1400 mbsf in the lower Kumano Basin and across the basin-accretionary wedge boundary. Core-based physical properties are also available across this interval. Through comparison of density and porosity values in intervals where core and LWD data overlap, we calculate porosity and density values and evaluate their uncertainties, developing a best estimate given the specific lithology and pore fluid at this tectonic setting. We then propagate this calibrated estimate to the deeper portions of C0002F where core and LWD density and porosity measurements are unavailable, using the sonic and resistivity data alone.
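The resistivity- and velocity-based porosity estimates described above rely on empirical transforms calibrated against the LWD and core data in the overlapping interval. As a generic, hedged illustration (not the expedition's calibrated relations), the sketch below applies two standard transforms of this kind, Archie's relation and the Wyllie time-average equation; the matrix and fluid slownesses and the example log readings are assumed values.

```python
def porosity_from_resistivity(rt, rw, a=1.0, m=2.0):
    """Archie's relation for a clean, fully water-saturated formation:
    F = a / phi**m = Rt / Rw  =>  phi = (a * Rw / Rt)**(1/m)."""
    return (a * rw / rt) ** (1.0 / m)

def porosity_from_sonic(dt, dt_matrix=182.0, dt_fluid=620.0):
    """Wyllie time-average relation, slownesses in microseconds per metre:
    phi = (dt - dt_matrix) / (dt_fluid - dt_matrix)."""
    return (dt - dt_matrix) / (dt_fluid - dt_matrix)

# Hypothetical log readings: Rt = 1.8 ohm-m, Rw = 0.22 ohm-m, dt = 390 us/m.
print(porosity_from_resistivity(1.8, 0.22), porosity_from_sonic(390.0))
```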
Henshall, John M; Dierens, Leanne; Sellars, Melony J
2014-09-02
While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are sufficiently accurate to provide useful information for a breeding program. Treating genotypes as quantitative values is an alternative to perturbing genotypes using an assumed error distribution, but can produce very different results. An understanding of the distribution of the error is required for SNP genotyping platforms.
NASA Astrophysics Data System (ADS)
D'Isanto, A.; Polsterer, K. L.
2018-01-01
Context. The need to analyze the available large synoptic multi-band surveys drives the development of new data-analysis methods. Photometric redshift estimation is one field of application where such new methods have substantially improved results. Up to now, the vast majority of applied redshift estimation methods have utilized photometric features. Aims: We aim to develop a method to derive probabilistic photometric redshifts directly from multi-band imaging data, rendering pre-classification of objects and feature extraction obsolete. Methods: A modified version of a deep convolutional network was combined with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) were applied as performance criteria. We adopted a feature-based random forest and a plain mixture density network to compare performances in experiments with data from SDSS (DR9). Results: We show that the proposed method is able to predict redshift PDFs independently of the type of source, for example galaxies, quasars, or stars. The prediction performance is better than that of both reference methods and is comparable to results from the literature. Conclusions: The presented method is extremely general and allows us to solve any kind of probabilistic regression problem based on imaging data, for example estimating the metallicity or star formation rate of galaxies. This kind of methodology is tremendously important for the next generation of surveys.
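As a small illustration of one of the probabilistic scores mentioned above, the sketch below evaluates the probability integral transform (PIT) for a Gaussian-mixture redshift PDF; the mixture weights, means, widths, and the true redshift are hypothetical values, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def gmm_pit(z_true, weights, means, sigmas):
    """Probability integral transform for a Gaussian-mixture redshift PDF:
    the predictive CDF evaluated at the true redshift. For well-calibrated
    PDFs the PIT values are uniformly distributed over [0, 1]."""
    weights, means, sigmas = map(np.asarray, (weights, means, sigmas))
    return float(np.sum(weights * norm.cdf(z_true, loc=means, scale=sigmas)))

# Hypothetical 3-component mixture for one galaxy, true redshift z = 0.42.
print(gmm_pit(0.42, [0.6, 0.3, 0.1], [0.40, 0.55, 0.80], [0.03, 0.05, 0.10]))
```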
Wallace, Dorothy; Prosper, Olivia; Savos, Jacob; Dunham, Ann M; Chipman, Jonathan W; Shi, Xun; Ndenga, Bryson; Githeko, Andrew
2017-03-01
A dynamical model of Anopheles gambiae larval and adult populations is constructed that matches temperature-dependent maturation times and mortality measured experimentally, as well as larval instar and adult mosquito emergence data from field studies in the Kenya Highlands. Spectral classification of high-resolution satellite imagery is used to estimate household density. Indoor resting densities collected over a period of one year, combined with predictions of the dynamical model, give estimates of both aquatic habitat and total adult mosquito densities. Temperature and precipitation patterns are derived from monthly records. Precipitation patterns are compared with average and extreme habitat estimates to estimate available aquatic habitat in an annual cycle. These estimates are coupled with the original model to produce estimates of adult and larval populations dependent on changing aquatic carrying capacity for larvae and changing maturation and mortality dependent on temperature. This paper offers a general method for estimating the total area of aquatic habitat in a given region, based on larval counts, emergence rates, indoor resting density data, and number of households. Altering the average daily temperature and the average daily rainfall simulates the effect of climate change on annual cycles of prevalence of An. gambiae adults. We show that small increases in average annual temperature have a large impact on adult mosquito density, whether measured at model equilibrium values for a single square meter of habitat or tracked over the course of a year of varying habitat availability and temperature.
NASA Astrophysics Data System (ADS)
Upadhyay, Bhanu B.; Jha, Jaya; Takhar, Kuldeep; Ganguly, Swaroop; Saha, Dipankar
2018-05-01
We have observed that the estimation of two-dimensional electron gas density is dependent on the device geometry. The geometric contribution leads to anomalous estimation of the GaN-based heterostructure properties. The observed discrepancy is found to originate from the anomalous area-dependent capacitance of GaN-based Schottky diodes, which are an integral part of high electron mobility transistors. The areal capacitance density is found to increase for smaller-radius Schottky diodes, contrary to the constant value expected intuitively. The capacitance is found to follow a second-order polynomial in the radius for all the bias voltages and frequencies considered here. In addition to the quadratic dependency corresponding to the areal component, the linear dependency indicates a peripheral component. It is further observed that the peripheral-to-areal contribution is inversely proportional to the radius, confirming the periphery as the location of the additional capacitance. The peripheral component is found to be frequency dependent and tends to saturate to a lower value for measurements at high frequency. In addition, the peripheral component is found to vanish when the surface is passivated by a combination of N2 and O2 plasma treatments. The cumulative surface state density per unit length of the perimeter of the Schottky diodes, obtained by integrating the response over the distance between the ohmic and Schottky contacts, is found to be 2.75 × 10^10 cm-1.
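A minimal sketch of the geometric decomposition described above: fitting C(r) = a·r² + b·r by least squares separates an areal term from a peripheral term for circular diodes. The capacitance values and radii below are hypothetical, not measurements from the paper.

```python
import numpy as np

# Hypothetical capacitances (pF) of circular Schottky diodes of different
# radii r (um), at one bias voltage and one measurement frequency.
r = np.array([25.0, 50.0, 100.0, 200.0])
c = np.array([0.95, 3.10, 11.0, 41.5])

# C(r) = a*r^2 + b*r : the quadratic term is areal, the linear term peripheral.
A = np.column_stack([r ** 2, r])
(a, b), *_ = np.linalg.lstsq(A, c, rcond=None)

areal_density = a / np.pi              # capacitance per unit area
peripheral_density = b / (2 * np.pi)   # capacitance per unit perimeter length
print(areal_density, peripheral_density)
```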
Double sampling to estimate density and population trends in birds
Bart, Jonathan; Earnst, Susan L.
2002-01-01
We present a method for estimating density of nesting birds based on double sampling. The approach involves surveying a large sample of plots using a rapid method such as uncorrected point counts, variable circular plot counts, or the recently suggested double-observer method. A subsample of those plots is also surveyed using intensive methods to determine actual density. The ratio of the mean count on those plots (using the rapid method) to the mean actual density (as determined by the intensive searches) is used to adjust results from the rapid method. The approach works well when results from the rapid method are highly correlated with actual density. We illustrate the method with three years of shorebird surveys from the tundra in northern Alaska. In the rapid method, surveyors covered ~10 ha h-1 and surveyed each plot a single time. The intensive surveys involved three thorough searches, required ~3 h ha-1, and took 20% of the study effort. Surveyors using the rapid method detected an average of 79% of birds present. That detection ratio was used to convert the index obtained in the rapid method into an essentially unbiased estimate of density. Trends estimated from several years of data would also be essentially unbiased. Other advantages of double sampling are that (1) the rapid method can be changed as new methods become available, (2) domains can be compared even if detection rates differ, (3) total population size can be estimated, and (4) valuable ancillary information (e.g. nest success) can be obtained on intensive plots with little additional effort. We suggest that double sampling be used to test the assumption that rapid methods, such as variable circular plot and double-observer methods, yield density estimates that are essentially unbiased. The feasibility of implementing double sampling in a range of habitats needs to be evaluated.
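A minimal sketch of the double-sampling adjustment described above, using hypothetical plot counts: the detection ratio estimated on the intensively searched subsample rescales the mean rapid-method count to an essentially unbiased density estimate.

```python
import numpy as np

def double_sampling_density(rapid_counts_all, rapid_counts_intensive, true_density_intensive):
    """Double-sampling estimator: the detection ratio from the intensively
    searched subsample scales the mean rapid-method count on all plots."""
    detection_ratio = np.mean(rapid_counts_intensive) / np.mean(true_density_intensive)
    return np.mean(rapid_counts_all) / detection_ratio

# Hypothetical counts (birds per plot): rapid counts on 20 plots, plus the
# subset of 5 plots that also received intensive searches.
rapid_all = np.array([3, 5, 2, 4, 6, 3, 2, 5, 4, 3, 6, 2, 4, 5, 3, 4, 2, 6, 5, 3])
rapid_sub = np.array([3, 5, 2, 6, 4])
true_sub = np.array([4, 6, 3, 7, 5])    # densities found by intensive searches
print(double_sampling_density(rapid_all, rapid_sub, true_sub))
```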
Lipfert, Frederick W; Wyzga, Ronald E; Baty, Jack D; Miller, J Philip
2009-04-01
For this paper, we considered relationships between mortality, vehicular traffic density, and ambient levels of 12 hazardous air pollutants, elemental carbon (EC), oxides of nitrogen (NOx), sulfur dioxide (SO2), and sulfate (SO4(2-)). These pollutant species were selected as markers for specific types of emission sources, including vehicular traffic, coal combustion, smelters, and metal-working industries. Pollutant exposures were estimated using emissions inventories and atmospheric dispersion models. We analyzed associations between county ambient levels of these pollutants and survival patterns among approximately 70,000 U.S. male veterans by mortality period (1976-2001 and subsets), type of exposure model, and traffic density level. We found significant associations between all-cause mortality and traffic-related air quality indicators and with traffic density per se, with stronger associations for benzene, formaldehyde, diesel particulate, NOx, and EC. The maximum effect on mortality for all cohort subjects during the 26-yr follow-up period is approximately 10%, but most of the pollution-related deaths in this cohort occurred in the higher-traffic counties, where excess risks approach 20%. However, mortality associations with diesel particulates are similar in high- and low-traffic counties. Sensitivity analyses show risks decreasing slightly over time and minor differences between linear and logarithmic exposure models. Two-pollutant models show stronger risks associated with specific traffic-related pollutants than with traffic density per se, although traffic density retains statistical significance in most cases. We conclude that tailpipe emissions of both gases and particles are among the most significant and robust predictors of mortality in this cohort and that most of those associations have weakened over time. However, we have not evaluated possible contributions from road dust or traffic noise. Stratification by traffic density level suggests the presence of response thresholds, especially for gaseous pollutants. Because of their wider distributions of estimated exposures, risk estimates based on emissions and atmospheric dispersion models tend to be more precise than those based on local ambient measurements.
Estimating the number of people in crowded scenes
NASA Astrophysics Data System (ADS)
Kim, Minjin; Kim, Wonjun; Kim, Changick
2011-01-01
This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps as follows: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on space-time interest points, and (3) estimating the crowd density based on the multiple regression. In experimental results, the efficiency and robustness of our proposed method are demonstrated by using PETS 2009 dataset.
Evaluation of methods to estimate lake herring spawner abundance in Lake Superior
Yule, D.L.; Stockwell, J.D.; Cholwek, G.A.; Evrard, L.M.; Schram, S.; Seider, M.; Symbal, M.
2006-01-01
Historically, commercial fishers harvested Lake Superior lake herring Coregonus artedi for their flesh, but recently operators have targeted lake herring for roe. Because no surveys have estimated spawning female abundance, direct estimates of fishing mortality are lacking. The primary objective of this study was to determine the feasibility of using acoustic techniques in combination with midwater trawling to estimate spawning female lake herring densities in a Lake Superior statistical grid (i.e., a 10′ latitude × 10′ longitude area over which annual commercial harvest statistics are compiled). Midwater trawling showed that mature female lake herring were largely pelagic during the night in late November, accounting for 94.5% of all fish caught exceeding 250 mm total length. When calculating acoustic estimates of mature female lake herring, we excluded backscattering from smaller pelagic fishes like immature lake herring and rainbow smelt Osmerus mordax by applying an empirically derived threshold of −35.6 dB. We estimated the average density of mature females in statistical grid 1409 at 13.3 fish/ha and the total number of spawning females at 227,600 (95% confidence interval = 172,500–282,700). Using information on mature female densities, size structure, and fecundity, we estimate that females deposited 3.027 billion (10^9) eggs in grid 1409 (95% confidence interval = 2.356–3.778 billion). The relative estimation error of the mature female density estimate derived using a geostatistical model-based approach was low (12.3%), suggesting that the employed method was robust. Fishing mortality rates of all mature females and their eggs were estimated at 2.3% and 3.8%, respectively. The techniques described for enumerating spawning female lake herring could be used to develop a more accurate stock–recruitment model for Lake Superior lake herring.
Non-Gaussian probabilistic MEG source localisation based on kernel density estimation
Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny
2014-01-01
There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
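The building block of the approach is a multivariate kernel density estimate of the source distribution. A minimal sketch using scipy's Gaussian KDE on synthetic non-Gaussian samples is shown below; it illustrates only the density-estimation step, not the full Bayesian beamforming formulation of the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical, deliberately non-Gaussian samples of a two-dimensional source
# quantity (rows = dimensions, columns = samples).
rng = np.random.default_rng(0)
samples = rng.laplace(size=(2, 5000))

# Multivariate kernel density estimate of the source pdf, replacing the
# Gaussian assumption made by second-order methods such as LCMV.
pdf = gaussian_kde(samples)
print(pdf(np.array([[0.0], [0.0]])))   # estimated density at the origin
```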
NASA Astrophysics Data System (ADS)
Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza
2012-12-01
In this paper, speech-music separation using blind source separation is discussed. The separating algorithm is based on mutual information minimization, with the natural gradient algorithm used for the minimization. This requires estimating the score function from samples of the observed signals (mixtures of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. Experimental results on speech-music separation, compared with a separating algorithm based on the minimum mean square error estimator, indicate better performance and shorter processing time.
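A minimal sketch of score function estimation from samples via a Gaussian kernel density estimate, the quantity needed by the natural-gradient separation update; the bandwidth and the synthetic mixture samples are assumptions, and the Gaussian-mixture-based estimator in the paper is more elaborate.

```python
import numpy as np

def kde_score(x, samples, h=0.3):
    """Score function psi(x) = -d/dx log p(x), with p estimated by a Gaussian
    kernel density estimate of bandwidth h. For a Gaussian KDE this reduces to
    psi(x) = sum(u * phi(u)) / (h * sum(phi(u))), with u = (x - x_i) / h."""
    u = (x - samples) / h
    phi = np.exp(-0.5 * u ** 2)       # unnormalised Gaussian kernel (constant cancels)
    return np.sum(u * phi) / (h * np.sum(phi))

# Hypothetical bimodal samples standing in for a speech+music observation.
rng = np.random.default_rng(1)
obs = np.concatenate([rng.normal(-1, 0.5, 2000), rng.normal(1, 0.5, 2000)])
print(kde_score(0.5, obs))
```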
Density Estimation for New Solid and Liquid Explosives
1977-02-17
The group additivity approach was shown to be applicable to density estimation. The densities of approximately 180 explosives and related compounds... of very diverse compositions were estimated, and almost all the estimates were quite reasonable. Of the 168 compounds for which direct comparisons...could be made (see Table 6), 36.9% of the estimated densities were within 1% of the measured densities, 33.3% were within 1-2%, 11.9% were within 2-3
Accounting for unsearched areas in estimating wind turbine-caused fatality
Huso, Manuela M.P.; Dalthorp, Dan
2014-01-01
With wind energy production expanding rapidly, concerns about turbine-induced bird and bat fatality have grown and the demand for accurate estimation of fatality is increasing. Estimation typically involves counting carcasses observed below turbines and adjusting counts by estimated detection probabilities. Three primary sources of imperfect detection are 1) carcasses fall into unsearched areas, 2) carcasses are removed or destroyed before sampling, and 3) carcasses present in the searched area are missed by observers. Search plots large enough to comprise 100% of turbine-induced fatality are expensive to search and may nonetheless contain areas unsearchable because of dangerous terrain or impenetrable brush. We evaluated models relating carcass density to distance from the turbine to estimate the proportion of carcasses expected to fall in searched areas and evaluated the statistical cost of restricting searches to areas near turbines where carcass density is highest and search conditions optimal. We compared 5 estimators differing in assumptions about the relationship of carcass density to distance from the turbine. We tested them on 6 different carcass dispersion scenarios at each of 3 sites under 2 different search regimes. We found that even simple distance-based carcass-density models were more effective at reducing bias than was a 5-fold expansion of the search area. Estimators incorporating fitted rather than assumed models were least biased, even under restricted searches. Accurate estimates of fatality at wind-power facilities will allow critical comparisons of rates among turbines, sites, and regions and contribute to our understanding of the potential environmental impact of this technology.
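As a simplified stand-in for the fitted carcass density-distance models compared in the paper, the sketch below assumes isotropic bivariate-normal dispersion around the turbine (so radial distances are Rayleigh distributed), fits the scale by maximum likelihood, and converts a raw count into one adjusted for the unsearched area. It ignores the truncation of observed distances by the search radius, which the paper's estimators must handle, and the distances are hypothetical.

```python
import numpy as np

def proportion_searched(carcass_distances, search_radius):
    """Fit a Rayleigh distribution to radial carcass distances and return the
    expected fraction of carcasses falling within the searched radius."""
    r = np.asarray(carcass_distances, dtype=float)
    sigma2 = np.sum(r ** 2) / (2.0 * r.size)          # Rayleigh MLE of sigma^2
    return 1.0 - np.exp(-search_radius ** 2 / (2.0 * sigma2))

# Hypothetical distances (m) of found carcasses from the turbine base.
distances = [12, 25, 30, 41, 48, 55, 63, 70]
p = proportion_searched(distances, search_radius=60.0)
adjusted_count = len(distances) / p    # raw count adjusted for unsearched area
print(p, adjusted_count)
```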
Honest Importance Sampling with Multiple Markov Chains
Tan, Aixin; Doss, Hani; Hobert, James P.
2017-01-01
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855
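A minimal sketch of the self-normalised importance sampling estimator underlying the approach, here with a single iid proposal rather than multiple Markov chains, and using only unnormalised log densities.

```python
import numpy as np

def importance_sampling_mean(f, x, log_pi, log_pi1):
    """Self-normalised importance sampling estimate of E_pi[f(X)] from draws x
    generated under pi1 (iid, or a Harris ergodic Markov chain with invariant
    density pi1). Only unnormalised log densities are required."""
    logw = log_pi(x) - log_pi1(x)
    w = np.exp(logw - np.max(logw))          # stabilise before normalising
    w = w / np.sum(w)
    return np.sum(w * f(x))

# Hypothetical example: estimate E[X^2] under pi = N(0, 1) using draws from the
# heavier-tailed proposal pi1 = N(0, 2^2); the answer should be close to 1.
rng = np.random.default_rng(2)
x = rng.normal(0.0, 2.0, size=100_000)
print(importance_sampling_mean(lambda t: t ** 2, x,
                               log_pi=lambda t: -0.5 * t ** 2,
                               log_pi1=lambda t: -0.5 * (t / 2.0) ** 2))
```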
Studies on the ionospheric-thermospheric coupling mechanisms using SLR
NASA Astrophysics Data System (ADS)
Panzetta, Francesca; Erdogan, Eren; Bloßfeld, Mathis; Schmidt, Michael
2016-04-01
Several Low Earth Orbiters (LEOs) have been used by different research groups to model the thermospheric neutral density distribution at various altitudes performing Precise Orbit Determination (POD) in combination with satellite accelerometry. This approach is, in principle, based on satellite drag analysis, driven by the fact that the drag force is one of the major perturbing forces acting on LEOs. The satellite drag itself is physically related to the thermospheric density. The present contribution investigates the possibility to compute the thermospheric density from Satellite Laser Ranging (SLR) observations. SLR is commonly used to compute very accurate satellite orbits. As a prerequisite, a very high precise modelling of gravitational and non-gravitational accelerations is necessary. For this investigation, a sensitivity study of SLR observations to thermospheric density variations is performed using the DGFI Orbit and Geodetic parameter estimation Software (DOGS). SLR data from satellites at altitudes lower than 500 km are processed adopting different thermospheric models. The drag coefficients which describe the interaction of the satellite surfaces with the atmosphere are analytically computed in order to obtain scaling factors purely related to the thermospheric density. The results are reported and discussed in terms of estimates of scaling coefficients of the thermospheric density. Besides, further extensions and improvements in thermospheric density modelling obtained by combining a physics-based approach with ionospheric observations are investigated. For this purpose, the coupling mechanisms between the thermosphere and ionosphere are studied.
Cove, Michael V.; Gardner, Beth; Simons, Theodore R.; Kays, Roland; O'Connell, Allan F.
2017-01-01
Feral and free-ranging domestic cats (Felis catus) can have strong negative effects on small mammals and birds, particularly in island ecosystems. We deployed camera traps to study free-ranging cats in national wildlife refuges and state parks on Big Pine Key and Key Largo in the Florida Keys, USA, and used spatial capture–recapture models to estimate cat abundance, movement, and activities. We also used stable isotope analyses to examine the diet of cats captured on public lands. Top population models separated cats based on differences in movement and detection with three and two latent groups on Big Pine Key and Key Largo, respectively. We hypothesize that these latent groups represent feral, semi-feral, and indoor/outdoor house cats based on the estimated movement parameters of each group. Estimated cat densities and activity varied between the two islands, with relatively high densities (~4 cats/km2) exhibiting crepuscular diel patterns on Big Pine Key and lower densities (~1 cat/km2) exhibiting nocturnal diel patterns on Key Largo. These differences are most likely related to the higher proportion of house cats on Big Pine relative to Key Largo. Carbon and nitrogen isotope ratios from hair samples of free-ranging cats (n = 43) provided estimates of the proportion of wild and anthropogenic foods in cat diets. At the population level, cats on both islands consumed mostly anthropogenic foods (>80% of the diet), but eight individuals were effective predators of wildlife (>50% of the diet). We provide evidence that cat groups within a population move different distances, exhibit different activity patterns, and that individuals consume wildlife at different rates, which all have implications for managing this invasive predator.
Breast density quantification with cone-beam CT: A post-mortem study
Johnson, Travis; Ding, Huanjun; Le, Huy Q.; Ducote, Justin L.; Molloi, Sabee
2014-01-01
Forty post-mortem breasts were imaged with a flat-panel based cone-beam x-ray CT system at 50 kVp. The feasibility of breast density quantification has been investigated using standard histogram thresholding and an automatic segmentation method based on the fuzzy c-means algorithm (FCM). The breasts were chemically decomposed into water, lipid, and protein immediately after image acquisition was completed. The percent fibroglandular volume (%FGV) from chemical analysis was used as the gold standard for breast density comparison. Both image-based segmentation techniques showed good precision in breast density quantification with high linear coefficients between the right and left breast of each pair. When comparing with the gold standard using %FGV from chemical analysis, Pearson’s r-values were estimated to be 0.983 and 0.968 for the FCM clustering and the histogram thresholding techniques, respectively. The standard error of the estimate (SEE) was also reduced from 3.92% to 2.45% by applying the automatic clustering technique. The results of the postmortem study suggested that breast tissue can be characterized in terms of water, lipid and protein contents with high accuracy by using chemical analysis, which offers a gold standard for breast density studies comparing different techniques. In the investigated image segmentation techniques, the FCM algorithm had high precision and accuracy in breast density quantification. In comparison to conventional histogram thresholding, it was more efficient and reduced inter-observer variation. PMID:24254317
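A minimal one-dimensional fuzzy c-means sketch in the spirit of the segmentation used above: voxel intensities are split into two fuzzy classes and the percent fibroglandular volume is read off the memberships of the higher-intensity class. The intensity values are synthetic, and the published method operates on calibrated CT volumes with additional preprocessing.

```python
import numpy as np

def fuzzy_cmeans_1d(values, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Minimal 1-D fuzzy c-means: returns cluster centres and the membership
    matrix (n_clusters x n_values)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float)
    centers = rng.choice(x, n_clusters, replace=False)
    for _ in range(n_iter):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12           # distances
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
        centers = (u ** m @ x) / np.sum(u ** m, axis=1)              # update centres
    return centers, u

# Hypothetical voxel intensities; %FGV from memberships of the denser cluster.
rng = np.random.default_rng(1)
voxels = np.concatenate([rng.normal(40, 5, 800), rng.normal(90, 8, 200)])
centers, u = fuzzy_cmeans_1d(voxels)
print(100 * u[np.argmax(centers)].mean())   # roughly 20% for this synthetic example
```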
Racial Differences in Quantitative Measures of Area and Volumetric Breast Density
McCarthy, Anne Marie; Keller, Brad M.; Pantalone, Lauren M.; Hsieh, Meng-Kang; Synnestvedt, Marie; Conant, Emily F.; Armstrong, Katrina; Kontos, Despina
2016-01-01
Abstract Background: Increased breast density is a strong risk factor for breast cancer and also decreases the sensitivity of mammographic screening. The purpose of our study was to compare breast density for black and white women using quantitative measures. Methods: Breast density was assessed among 5282 black and 4216 white women screened using digital mammography. Breast Imaging-Reporting and Data System (BI-RADS) density was obtained from radiologists’ reports. Quantitative measures for dense area, area percent density (PD), dense volume, and volume percent density were estimated using validated, automated software. Breast density was categorized as dense or nondense based on BI-RADS categories or based on values above and below the median for quantitative measures. Logistic regression was used to estimate the odds of having dense breasts by race, adjusted for age, body mass index (BMI), age at menarche, menopause status, family history of breast or ovarian cancer, parity and age at first birth, and current hormone replacement therapy (HRT) use. All statistical tests were two-sided. Results: There was a statistically significant interaction of race and BMI on breast density. After accounting for age, BMI, and breast cancer risk factors, black women had statistically significantly greater odds of high breast density across all quantitative measures (eg, PD nonobese odds ratio [OR] = 1.18, 95% confidence interval [CI] = 1.02 to 1.37, P = .03, PD obese OR = 1.26, 95% CI = 1.04 to 1.53, P = .02). There was no statistically significant difference in BI-RADS density by race. Conclusions: After accounting for age, BMI, and other risk factors, black women had higher breast density than white women across all quantitative measures previously associated with breast cancer risk. These results may have implications for risk assessment and screening. PMID:27130893
Inventory-based estimates of forest biomass carbon stocks in China: A comparison of three methods
Zhaodi Guo; Jingyun Fang; Yude Pan; Richard Birdsey
2010-01-01
Several studies have reported different estimates for forest biomass carbon (C) stocks in China. The discrepancy among these estimates may be largely attributed to the methods used. In this study, we used three methods [mean biomass density method (MBM), mean ratio method (MRM), and continuous biomass expansion factor (BEF) method (abbreviated as CBM)] applied to...
Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.
Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il
2017-09-13
This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
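A minimal sketch of the generalised cross-correlation pipeline described above: estimate the cross-spectrum, apply a frequency-domain prefilter, and convert the peak lag into a leak position. A PHAT-style magnitude weighting with a small regularisation term stands in for the paper's modified ML prefilter, and the sampling rate, wave speed, and sensor spacing are assumed values.

```python
import numpy as np

def gcc_delay(x1, x2, fs, eps=1e-3):
    """Generalised cross-correlation time-delay estimate between two sensors.
    The PHAT-style weighting with regularisation eps is a simplified stand-in
    for the modified ML prefilter of the paper."""
    n = len(x1) + len(x2)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    G = X1 * np.conj(X2)                                 # cross-spectrum estimate
    cc = np.fft.irfft(G / (np.abs(G) + eps), n)
    lag = np.argmax(np.fft.fftshift(cc)) - n // 2        # samples, equals t1 - t2
    return lag / fs

def leak_position(tau, sensor_spacing, wave_speed):
    """Leak distance from sensor 1 on a straight pipe between the sensors:
    d1 = (L + v * tau) / 2, with tau = t1 - t2 the arrival-time difference."""
    return 0.5 * (sensor_spacing + wave_speed * tau)

# Hypothetical setup: 8 kHz sampling, 1200 m/s wave speed, sensors 100 m apart,
# leak 70 m from sensor 1 (so its noise reaches sensor 2 first).
fs, v, L, d1_true = 8000.0, 1200.0, 100.0, 70.0
rng = np.random.default_rng(3)
base = rng.normal(size=5000)
delay = int(round((2 * d1_true - L) / v * fs))           # t1 - t2 in samples
x2 = base[300:4300]
x1 = base[300 - delay:4300 - delay]                      # same noise, delayed copy
print(leak_position(gcc_delay(x1, x2, fs), L, v))        # close to 70 m
```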
Estimation of vegetation cover at subpixel resolution using LANDSAT data
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Eagleson, Peter S.
1986-01-01
The present report summarizes the various approaches relevant to estimating canopy cover at subpixel resolution. The approaches are based on physical models of radiative transfer in non-homogeneous canopies and on empirical methods. The effects of vegetation shadows and topography are examined. Simple versions of the model are tested, using the Taos, New Mexico Study Area database. Emphasis has been placed on using relatively simple models requiring only one or two bands. Although most methods require some degree of ground truth, a two-band method is investigated whereby the percent cover can be estimated without ground truth by examining the limits of the data space. Future work is proposed which will incorporate additional surface parameters into the canopy cover algorithm, such as topography, leaf area, or shadows. The method involves deriving a probability density function for the percent canopy cover based on the joint probability density function of the observed radiances.
A new data assimilation engine for physics-based thermospheric density models
NASA Astrophysics Data System (ADS)
Sutton, E. K.; Henney, C. J.; Hock-Mysliwiec, R.
2017-12-01
The successful assimilation of data into physics-based coupled Ionosphere-Thermosphere models requires rethinking the filtering techniques currently employed in fields such as tropospheric weather modeling. In the realm of Ionospheric-Thermospheric modeling, the estimation of system drivers is a critical component of any reliable data assimilation technique. How to best estimate and apply these drivers, however, remains an open question and active area of research. The recently developed method of Iterative Re-Initialization, Driver Estimation and Assimilation (IRIDEA) accounts for the driver/response time-delay characteristics of the Ionosphere-Thermosphere system relative to satellite accelerometer observations. Results from two near year-long simulations are shown: (1) from a period of elevated solar and geomagnetic activity during 2003, and (2) from a solar minimum period during 2007. This talk will highlight the challenges and successes of implementing a technique suited for both solar min and max, as well as expectations for improving neutral density forecasts.
Kim, Youngwoo; Hong, Byung Woo; Kim, Seung Ja; Kim, Jong Hyo
2014-07-01
A major challenge when distinguishing glandular tissues on mammograms, especially for area-based estimations, lies in determining a boundary on a hazy transition zone from adipose to glandular tissues. This stems from the nature of mammography, which is a projection of superimposed tissues consisting of different structures. In this paper, the authors present a novel segmentation scheme which incorporates the learned prior knowledge of experts into a level set framework for fully automated mammographic density estimations. The authors modeled the learned knowledge as a population-based tissue probability map (PTPM) that was designed to capture the classification of experts' visual systems. The PTPM was constructed using an image database of a selected population consisting of 297 cases. Three mammogram experts extracted regions for dense and fatty tissues on digital mammograms, which was an independent subset used to create a tissue probability map for each ROI based on its local statistics. This tissue class probability was taken as a prior in the Bayesian formulation and was incorporated into a level set framework as an additional term to control the evolution and followed the energy surface designed to reflect experts' knowledge as well as the regional statistics inside and outside of the evolving contour. A subset of 100 digital mammograms, which was not used in constructing the PTPM, was used to validate the performance. The energy was minimized when the initial contour reached the boundary of the dense and fatty tissues, as defined by experts. The correlation coefficient between mammographic density measurements made by experts and measurements by the proposed method was 0.93, while that with the conventional level set was 0.47. The proposed method showed a marked improvement over the conventional level set method in terms of accuracy and reliability. This result suggests that the proposed method successfully incorporated the learned knowledge of the experts' visual systems and has potential to be used as an automated and quantitative tool for estimations of mammographic breast density levels.
Demura, S; Sato, S; Kitabayashi, T
2006-06-01
This study examined a method of predicting body density based on hydrostatic weighing without head submersion (HWwithoutHS). Donnelly and Sintek (1984) developed a method to predict body density based on hydrostatic weight without head submersion. This method predicts the difference (D) between HWwithoutHS and hydrostatic weight with head submersion (HWwithHS) from anthropometric variables (head length and head width), and then calculates body density using D as a correction factor. We developed several prediction equations to estimate D based on head anthropometry and differences between the sexes, and compared their prediction accuracy with Donnelly and Sintek's equation. Thirty-two males and 32 females aged 17-26 years participated in the study. Multiple linear regression analysis was performed to obtain the prediction equations, and the systematic errors of their predictions were assessed by Bland-Altman plots. The best prediction equations obtained were: Males: D(g) = -164.12X1 - 125.81X2 - 111.03X3 + 100.66X4 + 6488.63, where X1 = head length (cm), X2 = head circumference (cm), X3 = head breadth (cm), X4 = head thickness (cm) (R = 0.858, R2 = 0.737, adjusted R2 = 0.687, standard error of the estimate = 224.1); Females: D(g) = -156.03X1 - 14.03X2 - 38.45X3 - 8.87X4 + 7852.45, where X1 = head circumference (cm), X2 = body mass (g), X3 = head length (cm), X4 = height (cm) (R = 0.913, R2 = 0.833, adjusted R2 = 0.808, standard error of the estimate = 137.7). The effective predictors in these prediction equations differed from those of Donnelly and Sintek's equation, and head circumference and head length were included in both equations. The prediction accuracy was improved by statistically selecting effective predictors. Since we did not assess cross-validity, the equations cannot be used to generalize to other populations, and further investigation is required.
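For reference, the sketch below simply transcribes the two published prediction equations for the correction term D; inputs follow the units stated in the abstract (centimetres and grams), and body density is subsequently obtained by applying D within the hydrostatic-weighing calculation, which is not reproduced here.

```python
def head_correction_male(head_length, head_circumference, head_breadth, head_thickness):
    """Male prediction of D (grams), the difference between hydrostatic weight
    without and with head submersion; all inputs in cm, as in the abstract."""
    return (-164.12 * head_length - 125.81 * head_circumference
            - 111.03 * head_breadth + 100.66 * head_thickness + 6488.63)

def head_correction_female(head_circumference, body_mass, head_length, height):
    """Female prediction of D (grams); head circumference, head length, and
    height in cm, body mass in g, as stated in the abstract."""
    return (-156.03 * head_circumference - 14.03 * body_mass
            - 38.45 * head_length - 8.87 * height + 7852.45)
```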
NASA Astrophysics Data System (ADS)
Blossfeld, M.; Schmidt, M.; Erdogan, E.
2016-12-01
The thermospheric neutral density plays a crucial role in the equation of motion of Earth-orbiting objects, since drag, lift, and side forces are among the largest non-gravitational perturbations acting on a satellite. Precise Orbit Determination (POD) methods can be used to estimate thermospheric density variations from measured orbits. One method which provides highly accurate measurements of the satellite position is Satellite Laser Ranging (SLR). Within the POD process, scaling factors are estimated frequently. These scaling factors can be used to scale either the so-called satellite-specific drag (ballistic) coefficients or the integrated thermospheric neutral density. We present a method for analytically modelling the drag coefficient based on a few physical assumptions and key parameters. In this paper, we investigate the possibility of using SLR observations of the very low Earth orbiting satellite ANDE-Pollux (at approximately 350 km altitude) to determine scaling factors for different a priori thermospheric density models. We perform a POD for ANDE-Pollux covering 49 days between August and September 2009, the time span with the largest number of observations during the satellite's short lifetime. Finally, we compare the resulting scaled thermospheric densities with each other.
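The sensitivity of SLR-derived orbits to density enters through the drag acceleration a = ½ ρ C_D (A/m) v². The sketch below evaluates this expression and a resulting density scale factor for hypothetical satellite and atmosphere values; in the actual processing the scale factor is estimated inside the orbit adjustment rather than from a simple ratio.

```python
def drag_acceleration(rho, v_rel, cd, area, mass):
    """Magnitude of the atmospheric drag acceleration on a satellite:
    a = 0.5 * rho * Cd * (A / m) * v_rel**2."""
    return 0.5 * rho * cd * (area / mass) * v_rel ** 2

# Hypothetical values near 350 km altitude: rho in kg/m^3, v_rel in m/s.
rho_model = 1.0e-11                     # a priori model density (assumed value)
a_model = drag_acceleration(rho_model, 7700.0, 2.2, 0.05, 25.0)

# If the POD implies a drag acceleration 20% larger than the model predicts,
# the estimated density scale factor is simply their ratio.
a_observed = 1.2 * a_model
print(a_model, a_observed / a_model)
```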
Base Level Management of Radio Frequency Radiation Protection Program
1989-04-01
[Table-of-contents and figure-list fragments only: Estimated Hazard Distance for Vertical Monopole Antennae; Permissible Exposure Limits; Monopole Antennas; Radiation Pattern of Monopole Antennas.] ...correction factors for determining power density values in the near-field of an emitter: Power Density = (4 × P_av) / (Antenna Area). For dipole, monopole...
Density of transneptunian object 229762 2007 UK126
NASA Astrophysics Data System (ADS)
Grundy, Will
2017-08-01
Densities provide unique information about bulk composition and interior structure and are key to going beyond the skin-deep view offered by remote-sensing techniques based on photometry, spectroscopy, and polarimetry. They are known for a handful of the relict planetesimals that populate our Solar System's Kuiper belt, revealing intriguing differences between small and large bodies. More and better quality data are needed to address fundamental questions about how planetesimals form from nebular solids, and how distinct materials are distributed through the nebula. Masses from binary orbits are generally quite precise, but a problem afflicting many of the known densities is that they depend on size estimates from thermal emission observations, with large model-dependent uncertainties that dominate the error bars on density estimates. Stellar occultations can provide much more accurate sizes and thus densities, but they depend on fortuitous geometry and thus can only be done for a few particularly valuable binaries. We propose observations of a system where an accurate density can be determined: 229762 2007 UK126. An accurate size is already available from multiple stellar occultation chords. This proposal will determine the mass, and thus the density.
Ecosystem Carbon Storage in Alpine Grassland on the Qinghai Plateau
Liu, Shuli; Zhang, Fawei; Du, Yangong; Guo, Xiaowei; Lin, Li; Li, Yikang; Li, Qian; Cao, Guangmin
2016-01-01
The alpine grassland ecosystem can sequester a large quantity of carbon, yet its significance remains controversial owing to large uncertainties in the relative contributions of climate factors and grazing intensity. In this study we surveyed 115 sites to measure ecosystem carbon storage (both biomass and soil) in alpine grassland over the Qinghai Plateau during the peak growing season in 2011 and 2012. Our results revealed three key findings. (1) Total biomass carbon density ranged from 0.04 for alpine steppe to 2.80 kg C m-2 for alpine meadow. Median soil organic carbon (SOC) density was estimated to be 16.43 kg C m-2 in alpine grassland. Total ecosystem carbon density varied across sites and grassland types, from 1.95 to 28.56 kg C m-2. (2) Based on the median estimate, the total carbon storage of alpine grassland on the Qinghai Plateau was 5.14 Pg, of which 94% (4.85 Pg) was soil organic carbon. (3) Overall, we found that ecosystem carbon density was affected by both climate and grazing, but to different extents. Temperature and precipitation interaction significantly affected AGB carbon density in winter pasture, BGB carbon density in alpine meadow, and SOC density in alpine steppe. On the other hand, grazing intensity affected AGB carbon density in summer pasture, SOC density in alpine meadow and ecosystem carbon density in alpine grassland. Our results indicate that grazing intensity was the primary contributing factor controlling carbon storage at the sites tested and should be the primary consideration when accurately estimating the carbon storage in alpine grassland. PMID:27494253
Bulk density of small meteoroids
NASA Astrophysics Data System (ADS)
Kikwaya, J.-B.; Campbell-Brown, M.; Brown, P. G.
2011-06-01
Aims: Here we report on precise metric and photometric observations of 107 optical meteors, which were simultaneously recorded at multiple stations using three different intensified video camera systems. The purpose is to estimate bulk meteoroid density, link small meteoroids to their parent bodies based on dynamical and physical density values expected for different small body populations, to better understand and explain the dynamical evolution of meteoroids after release from their parent bodies. Methods: The video systems used had image sizes ranging from 640 × 480 to 1360 × 1036 pixels, with pixel scales from 0.01° per pixel to 0.05° per pixel, and limiting meteor magnitudes ranging from Mv = +2.5 to +6.0. We find that 78% of our sample show noticeable deceleration, allowing more robust constraints to be placed on density estimates. The density of each meteoroid is estimated by simultaneously fitting the observed deceleration and lightcurve using a model based on thermal fragmentation, conservation of energy and momentum. The entire phase space of the model free parameters is explored for each event to find ranges of parameters which fit the observations within the measurement uncertainty. Results: (a) We have analysed our data by first associating each of our events with one of the five meteoroid classes. The average density of meteoroids whose orbits are asteroidal and chondritic (AC) is 4200 kg m-3 suggesting an asteroidal parentage, possibly related to the high-iron content population. Meteoroids with orbits belonging to Jupiter family comets (JFCs) have an average density of 3100 ± 300 kg m-3. This high density is found for all meteoroids with JFC-like orbits and supports the notion that the refractory material reported from the Stardust measurements of 81P/Wild 2 dust is common among the broader JFC population. This high density is also the average bulk density for the 4 meteoroids with orbits belonging to the Ecliptic shower-type class (ES) also related to JFCs. Both categories we suggest are chondritic based on their high bulk density. Meteoroids of HT (Halley type) orbits have a minimum bulk density value of 360 (+400/-100) kg m-3 and a maximum value of 1510 (+400/-900) kg m-3. This is consistent with many previous works which suggest bulk cometary meteoroid density is low. SA (Sun-approaching)-type meteoroids show a density spread from 1000 kg m-3 to 4000 kg m-3, reflecting multiple origins. (b) We found two different meteor showers in our sample: Perseids (10 meteoroids, ~11% of our sample) with an average bulk density of 620 kg m-3 and Northern Iota Aquariids (4 meteoroids) with an average bulk density of 3200 kg m-3, consistent with the notion that the NIA derive from 2P/Encke.
Cavity turnover and equilibrium cavity densities in a cottonwood bottomland
Sedgwick, James A.; Knopf, Fritz L.
1992-01-01
A fundamental factor regulating the numbers of secondary cavity nesting (SCN) birds is the number of extant cavities available for nesting. The number of available cavities may be thought of as being in an approximate equilibrium maintained by a very rough balance between recruitment and loss of cavities. Based on estimates of cavity recruitment and loss, we ascertained equilibrium cavity densities in a mature plains cottonwood (Populus sargentii) bottomland along the South Platte River in northeastern Colorado. Annual cavity recruitment, derived from density estimates of primary cavity nesting (PCN) birds and cavity excavation rates, was estimated to be 71-86 new cavities excavated/100 ha. Of 180 active cavities of 11 species of cavity-nesting birds found in 1985 and 1986, 83 were no longer usable by 1990, giving an average instantaneous rate of cavity loss of r = -0.230. From these values of cavity recruitment and cavity loss, equilibrium cavity density along the South Platte is 238-289 cavities/100 ha. This range of equilibrium cavity density is only slightly above the minimum of 205 cavities/100 ha required by SCN's and suggests that cavity availability may be limiting SCN densities along the South Platte River. We submit that snag management alone does not adequately address SCN habitat needs, and that cavity management, expressed in terms of cavity turnover and cavity densities, may be more useful.
Moran, M.S.; Kustas, William P.; Vidal, A.; Stannard, D.I.; Blanford, J.H.; Nichols, W.D.
1994-01-01
An interdisciplinary field experiment was conducted to study the water and energy balance of a semiarid rangeland watershed in southeast Arizona during the summer of 1990. Two subwatersheds, one grass dominated and the other shrub dominated, were selected for intensive study with ground-based remote sensing systems and hydrometeorological instrumentation. Surface energy balance was evaluated at both sites using direct and indirect measurements of the turbulent fluxes (eddy correlation, variance, and Bowen ratio methods) and using an aerodynamic approach based on remote measurements of surface reflectance and temperature and conventional meteorological information. Estimates of net radiant flux density (Rn), derived from measurements of air temperature, incoming solar radiation, and surface temperature and radiance compared well with values measured using a net radiometer (mean absolute difference (MAD) ≃ 50 W/m2 over a range from 115 to 670 W/m2). Soil heat flux density (G) was estimated using a relation between G/Rn and a spectral vegetation index computed from the red and near-infrared surface reflectance. These G estimates compared well with conventional measurements of G using buried soil heat flux plates (MAD ≃ 20 W/m2 over a range from −13 to 213 W/m2). In order to account for the effects of sparse vegetation, semiempirical adjustments to the single-layer bulk aerodynamic resistance approach were required for evaluation of sensible heat flux density (H). This yielded differences between measurements and remote estimates of H of approximately 33 W/m2 over a range from 13 to 303 W/m2. The resulting estimates of latent heat flux density, LE, were of the same magnitude and trend as measured values; however, a significant scatter was still observed: MAD ≃ 40 W/m2 over a range from 0 to 340 W/m2. Because LE was solved as a residual, there was a cumulative effect of errors associated with remote estimates of Rn, G, and H.
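The residual closure of the surface energy balance described above is easy to sketch. A minimal sketch, assuming a generic linear G/Rn versus vegetation-index relation and hypothetical flux values rather than the coefficients fitted in the study:

```python
import numpy as np

# Surface energy balance closed as a residual: LE = Rn - G - H.
# The G/Rn relation (a - b * NDVI) is a generic placeholder for the
# spectral-vegetation-index relation described in the abstract, and all
# input values below are hypothetical.

def soil_heat_flux(rn: np.ndarray, ndvi: np.ndarray, a: float = 0.4, b: float = 0.3) -> np.ndarray:
    """Estimate soil heat flux density G (W/m^2) from net radiation and a vegetation index."""
    return rn * (a - b * ndvi)

def latent_heat_flux(rn: np.ndarray, g: np.ndarray, h: np.ndarray) -> np.ndarray:
    """Latent heat flux density LE (W/m^2) solved as the energy-balance residual."""
    return rn - g - h

rn = np.array([115.0, 400.0, 670.0])   # net radiant flux density, W/m^2 (hypothetical)
ndvi = np.array([0.15, 0.30, 0.45])    # spectral vegetation index (hypothetical)
h = np.array([40.0, 180.0, 300.0])     # sensible heat flux density, W/m^2 (hypothetical)

g = soil_heat_flux(rn, ndvi)
le = latent_heat_flux(rn, g, h)
print(np.round(g, 1), np.round(le, 1))
```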
Body density and diving gas volume of the northern bottlenose whale (Hyperoodon ampullatus)
Miller, Patrick; Narazaki, Tomoko; Isojunno, Saana; Aoki, Kagari; Smout, Sophie; Sato, Katsufumi
2016-01-01
Diving lung volume and tissue density, reflecting lipid store volume, are important physiological parameters that have only been estimated for a few breath-hold diving species. We fitted 12 northern bottlenose whales with data loggers that recorded depth, 3-axis acceleration and speed either with a fly-wheel or from change of depth corrected by pitch angle. We fitted measured values of the change in speed during 5 s descent and ascent glides to a hydrodynamic model of drag and buoyancy forces using a Bayesian estimation framework. The resulting estimate of diving gas volume was 27.4±4.2 (95% credible interval, CI) ml kg−1, closely matching the measured lung capacity of the species. Dive-by-dive variation in gas volume did not correlate with dive depth or duration. Estimated body densities of individuals ranged from 1028.4 to 1033.9 kg m−3 at the sea surface, indicating overall negative tissue buoyancy of this species in seawater. Body density estimates were highly precise with ±95% CI ranging from 0.1 to 0.4 kg m−3, which would equate to a precision of <0.5% of lipid content based upon extrapolation from the elephant seal. Six whales tagged near Jan Mayen (Norway, 71°N) had lower body density and were closer to neutral buoyancy than six whales tagged in the Gully (Nova Scotia, Canada, 44°N), a difference that was consistent with the amount of gliding observed during ascent versus descent phases in these animals. Implementation of this approach using longer-duration tags could be used to track longitudinal changes in body density and lipid store body condition of free-ranging cetaceans. PMID:27296044
A Novel Passive Tracking Scheme Exploiting Geometric and Intercept Theorems
Zhou, Biao; Sun, Chao; Ahn, Deockhyeon; Kim, Youngok
2018-01-01
Passive tracking aims to track targets without assistant devices, that is, device-free targets. Passive tracking based on Radio Frequency (RF) Tomography in wireless sensor networks has recently been addressed as an emerging field. The passive tracking scheme using geometric theorems (GTs) is one of the most popular RF Tomography schemes, because the GT-based method can effectively mitigate the demand for a high density of wireless nodes. In the GT-based tracking scheme, the tracking scenario is considered as a two-dimensional geometric topology and then geometric theorems are applied to estimate crossing points (CPs) of the device-free target on line-of-sight links (LOSLs), which reveal the target’s trajectory information in a discrete form. In this paper, we review existing GT-based tracking schemes, and then propose a novel passive tracking scheme by exploiting the Intercept Theorem (IT). To create an IT-based CP estimation scheme available in the noisy non-parallel LOSL situation, we develop the equal-ratio traverse (ERT) method. Finally, we analyze properties of three GT-based tracking algorithms and the performance of these schemes is evaluated experimentally under various trajectories, node densities, and noisy topologies. Analysis of experimental results shows that tracking schemes exploiting geometric theorems can achieve remarkable positioning accuracy even under rather a low density of wireless nodes. Moreover, the proposed IT scheme can provide generally finer tracking accuracy under even lower node density and noisier topologies, in comparison to other schemes. PMID:29562621
Nonlinear PP and PS joint inversion based on the exact Zoeppritz equations: a two-stage procedure
NASA Astrophysics Data System (ADS)
Zhi, Lixia; Chen, Shuangquan; Song, Baoshan; Li, Xiang-yang
2018-04-01
S-velocity and density are very important parameters in distinguishing lithology and estimating other petrophysical properties. A reliable estimate of S-velocity and density is very difficult to obtain, even from long-offset gather data. Joint inversion of PP and PS data provides a promising strategy for stabilizing and improving the results of inversion in estimating elastic parameters and density. For 2D or 3D inversion, the trace-by-trace strategy is still the most widely used method because of its high efficiency, which is due to parallel computing, although it often suffers from a lack of clarity. This paper describes a two-stage inversion method for nonlinear PP and PS joint inversion based on the exact Zoeppritz equations. Our proposed method has several advantages: (1) Thanks to the exact Zoeppritz equation, our joint inversion method is applicable for wide angle amplitude-versus-angle inversion; (2) The use of both P- and S-wave information can further enhance the stability and accuracy of parameter estimation, especially for the S-velocity and density; (3) The two-stage inversion procedure proposed in this paper can achieve a good compromise between efficiency and precision. On the one hand, the trace-by-trace strategy used in the first stage can be processed in parallel so that it has high computational efficiency. On the other hand, to deal with the indistinctness of and undesired disturbances to the inversion results obtained from the first stage, we apply the second stage: total variation (TV) regularization. By enforcing spatial and temporal constraints, the TV regularization stage deblurs the inversion results and leads to parameter estimation with greater precision. Notably, the computational cost of the TV regularization stage is negligible compared to the first stage because it is solved using fast split Bregman iterations. Numerical examples using a well log and the Marmousi II model show that the proposed joint inversion is a reliable method capable of accurately estimating the density parameter as well as P-wave velocity and S-wave velocity, even when the seismic data is noisy, with a signal-to-noise ratio of 5.
Earle-Richardson, Giulia B.; Brower, Melissa A.; Jones, Amanda M.; May, John J.; Jenkins, Paul L.
2008-01-01
Purpose To compare occupational morbidity estimates for migrant and seasonal farmworkers obtained from survey methods versus chart review methods, and to estimate the proportion of morbidity treated at federally recognized migrant health centers (MHCs) in a highly agricultural region of New York. Methods Researchers simultaneously conducted: a) an occupational injury and illness survey among agricultural workers; b) MHC chart review; and c) hospital emergency room (ER) chart reviews. Results Of the 24 injuries reported by 550 survey subjects, 54.2% received treatment at MHCs, 16.7% at ERs, 16.7% at some other facility, and 12.5% were untreated. For injuries treated at MHCs or ERs, the incidence density based on survey methods was 29.3 injuries per 10,000 worker-weeks versus 27.4 by chart review. The standardized morbidity ratio (SMR) for this comparison was 1.07 (95% CI = 0.65 – 1.77). Conclusion Survey data indicate that 71% of agricultural injury and illness can be captured with MHC and ER chart review. MHC and ER incidence density estimates show strong correspondence between the two methods. A chart review-based surveillance system, in conjunction with a correction factor based on periodic worker surveys, would provide a cost-effective estimate of the occupational illness and injury rate in this population. PMID:18063238
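The incidence-density and SMR arithmetic behind these comparisons is simple. A minimal sketch, with hypothetical event counts and person-time (the abstract does not report the worker-weeks denominator):

```python
# Incidence density = events / person-time, scaled to 10,000 worker-weeks;
# standardized morbidity ratio (SMR) = rate by one method / rate by the other.
# All inputs below are hypothetical and chosen only for illustration.

def incidence_density(n_events: float, person_time: float, per: float = 10_000.0) -> float:
    """Events per `per` units of person-time (here worker-weeks)."""
    return n_events / person_time * per

survey_events, chart_events = 17, 16      # injuries captured by each method (hypothetical)
worker_weeks = 5_800.0                    # person-time at risk (hypothetical)

survey_id = incidence_density(survey_events, worker_weeks)
chart_id = incidence_density(chart_events, worker_weeks)
smr = survey_id / chart_id
print(f"{survey_id:.1f} vs {chart_id:.1f} per 10,000 worker-weeks; SMR = {smr:.2f}")
```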
The quantitative polymerase chain reaction (qPCR) method provides rapid estimates of fecal indicator bacteria densities that have been indicated to be useful in the assessment of water quality. Primarily because this method provides faster results than standard culture-based meth...
Estimating forest canopy fuel parameters using LIDAR data.
Hans-Erik Andersen; Robert J. McGaughey; Stephen E. Reutebuch
2005-01-01
Fire researchers and resource managers are dependent upon accurate, spatially-explicit forest structure information to support the application of forest fire behavior models. In particular, reliable estimates of several critical forest canopy structure metrics, including canopy bulk density, canopy height, canopy fuel weight, and canopy base height, are required to...
Cummings, Steven R; Karpf, David B; Harris, Fran; Genant, Harry K; Ensrud, Kristine; LaCroix, Andrea Z; Black, Dennis M
2002-03-01
To estimate how much the improvement in bone mass accounts for the reduction in risk of vertebral fracture that has been observed in randomized trials of antiresorptive treatments for osteoporosis. After a systematic search, we conducted a meta-analysis of 12 trials to describe the relation between improvement in spine bone mineral density and reduction in risk of vertebral fracture in postmenopausal women. We also used logistic models to estimate the proportion of the reduction in risk of vertebral fracture observed with alendronate in the Fracture Intervention Trial that was due to improvement in bone mineral density. Across the 12 trials, a 1% improvement in spine bone mineral density was associated with a 0.03 decrease (95% confidence interval [CI]: 0.02 to 0.05) in the relative risk (RR) of vertebral fracture. The reductions in risk were greater than predicted from improvement in bone mineral density; for example, the model estimated that treatments predicted to reduce fracture risk by 20% (RR = 0.80), based on improvement in bone mineral density, actually reduce the risk of fracture by about 45% (RR = 0.55). In the Fracture Intervention Trial, improvement in spine bone mineral density explained 16% (95% CI: 11% to 27%) of the reduction in the risk of vertebral fracture with alendronate. Improvement in spine bone mineral density during treatment with antiresorptive drugs accounts for a predictable but small part of the observed reduction in the risk of vertebral fracture.
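The pooled slope reported above (a 0.03 decrease in relative risk per 1% gain in spine BMD) can be turned into a quick predicted-versus-observed check. A minimal sketch using hypothetical BMD gains, not the trial-level data:

```python
# Predicted relative risk of vertebral fracture from the pooled meta-analytic slope:
# RR_pred ~= 1 - 0.03 * (percent improvement in spine BMD).
# The BMD gains below are hypothetical; the abstract reports that treatments whose BMD
# gain alone would predict RR ~0.80 were observed to give RR ~0.55.

SLOPE = 0.03  # decrease in RR per 1% BMD improvement (95% CI 0.02-0.05)

def predicted_rr(bmd_gain_pct: float) -> float:
    return max(0.0, 1.0 - SLOPE * bmd_gain_pct)

for gain in (2.0, 4.0, 6.7):
    print(f"BMD +{gain:.1f}% -> predicted RR {predicted_rr(gain):.2f}")
```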
Negrete, Lindsey M.; Middleton, Michael S.; Clark, Lisa; Wolfson, Tanya; Gamst, Anthony C.; Lam, Jessica; Changchien, Chris; Deyoung-Dominguez, Ivan M.; Hamilton, Gavin; Loomba, Rohit; Schwimmer, Jeffrey; Sirlin, Claude B.
2013-01-01
Purpose To prospectively describe magnitude-based multi-echo gradient-echo hepatic proton density fat fraction (PDFF) inter-examination precision at 3T. Materials and Methods In this prospective, IRB-approved, HIPAA-compliant study, written informed consent was obtained from 29 subjects (body mass indexes > 30 kg/m2). Three 3T magnetic resonance imaging (MRI) examinations were obtained over 75-90 minutes. Segmental, lobar, and whole liver PDFF were estimated (using three, four, five, or six echoes) by magnitude-based multi-echo MRI in co-localized regions of interest (ROIs). For each estimate (using three, four, five, or six echoes), at each anatomic level (segmental, lobar, whole liver), three inter-examination precision metrics were computed: intra-class correlation coefficient (ICC), standard deviation (SD), and range. Results Magnitude-based PDFF estimates using each reconstruction method showed excellent inter-examination precision for each segment (ICC ≥ 0.992; SD ≤ 0.66%; range ≤ 1.24%), lobe (ICC ≥ 0.998; SD ≤ 0.34%; range ≤ 0.64%), and the whole liver (ICC = 0.999; SD ≤ 0.24%; range ≤ 0.45%). Inter-examination precision was unaffected by whether PDFF was estimated using three, four, five, or six echoes. Conclusion Magnitude-based PDFF estimation shows high inter-examination precision at segmental, lobar, and whole liver anatomic levels, supporting its use in clinical care or clinical trials. The results of this study suggest that longitudinal hepatic PDFF change greater than 1.6% is likely to represent signal rather than noise. PMID:24136736
NASA Astrophysics Data System (ADS)
Ghale, Purnima; Johnson, Harley T.
2018-06-01
We present an efficient sparse matrix-vector (SpMV) based method to compute the density matrix P from a given Hamiltonian in electronic structure computations. Our method is a hybrid approach based on Chebyshev-Jackson approximation theory and matrix purification methods like the second order spectral projection purification (SP2). Recent methods to compute the density matrix scale as O(N) in the number of floating point operations but are accompanied by large memory and communication overhead, and they are based on iterative use of the sparse matrix-matrix multiplication kernel (SpGEMM), which is known to be computationally irregular. In addition to irregularity in the sparse Hamiltonian H, the nonzero structure of intermediate estimates of P depends on products of H and evolves over the course of computation. On the other hand, an expansion of the density matrix P in terms of Chebyshev polynomials is straightforward and SpMV based; however, the resulting density matrix may not satisfy the required constraints exactly. In this paper, we analyze the strengths and weaknesses of the Chebyshev-Jackson polynomials and the second order spectral projection purification (SP2) method, and propose to combine them so that the accurate density matrix can be computed using the SpMV computational kernel only, and without having to store the density matrix P. Our method accomplishes these objectives by using the Chebyshev polynomial estimate as the initial guess for SP2, which is followed by using sparse matrix-vector multiplications (SpMVs) to replicate the behavior of the SP2 algorithm for purification. We demonstrate the method on a tight-binding model system of an oxide material containing more than 3 million atoms. In addition, we also present the predicted behavior of our method when applied to near-metallic Hamiltonians with a wide energy spectrum.
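The purification step described above can be illustrated on a small dense Hamiltonian. A minimal sketch of trace-correcting second-order spectral projection (SP2) using explicit matrices; it deliberately ignores the SpMV-only, matrix-free formulation that is the paper's actual contribution, and the spectral bounds are taken from a dense eigensolve purely for convenience:

```python
import numpy as np

def sp2_density_matrix(h: np.ndarray, n_occ: int, n_iter: int = 60) -> np.ndarray:
    """Trace-correcting SP2 purification of a symmetric Hamiltonian.

    Returns an (approximately) idempotent density matrix P with trace(P) ~= n_occ.
    Dense illustration only; the paper's method avoids forming P explicitly.
    """
    # Map H linearly into [0, 1] with the ordering reversed (occupied states -> 1).
    e_min, e_max = np.linalg.eigvalsh(h)[[0, -1]]
    x = (e_max * np.eye(h.shape[0]) - h) / (e_max - e_min)
    for _ in range(n_iter):
        x2 = x @ x
        # Pick the branch (X^2 or 2X - X^2) whose trace is closer to n_occ.
        if abs(np.trace(x2) - n_occ) < abs(np.trace(2 * x - x2) - n_occ):
            x = x2
        else:
            x = 2 * x - x2
    return x

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 8))
h = (a + a.T) / 2                        # symmetric toy Hamiltonian
p = sp2_density_matrix(h, n_occ=4)
print(np.trace(p), np.linalg.norm(p @ p - p))   # ~4.0 and ~0 (idempotent)
```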
Empirical performance of interpolation techniques in risk-neutral density (RND) estimation
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.
2017-03-01
The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated by using statistical analysis based on the implied mean and the implied variance of RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The results of the LOOCV pricing error show that interpolation using a fourth-order polynomial provides the best fit to option prices, in that it has the lowest error value.
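The leave-one-out cross-validation error proposed above for choosing among interpolants can be sketched on a toy volatility smile. A minimal sketch comparing second- and fourth-order polynomial fits, with hypothetical strike/implied-volatility quotes and the error measured on implied volatilities rather than on reconverted option prices; the smoothing-spline alternative is omitted:

```python
import numpy as np

# Leave-one-out cross-validation (LOOCV) error for polynomial interpolation of an
# implied-volatility smile. Strikes and implied vols below are hypothetical quotes.

strikes = np.array([80., 90., 95., 100., 105., 110., 120.])
ivols   = np.array([0.32, 0.27, 0.25, 0.24, 0.245, 0.26, 0.30])
x = strikes / 100.0            # rescale to moneyness for a well-conditioned fit

def loocv_mse(x: np.ndarray, y: np.ndarray, degree: int) -> float:
    """Mean squared leave-one-out prediction error of a polynomial fit."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        coeffs = np.polyfit(x[mask], y[mask], degree)
        errs.append((np.polyval(coeffs, x[i]) - y[i]) ** 2)
    return float(np.mean(errs))

for degree in (2, 4):
    print(f"degree {degree}: LOOCV MSE = {loocv_mse(x, ivols, degree):.2e}")
```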
APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES
Han, Qiyang; Wellner, Jon A.
2017-01-01
In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998–3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave. PMID:28966410
The impact of roads on the demography of grizzly bears in Alberta.
Boulanger, John; Stenhouse, Gordon B
2014-01-01
One of the principal factors that have reduced grizzly bear populations has been the creation of human access into grizzly bear habitat by roads built for resource extraction. Past studies have documented mortality and distributional changes of bears relative to roads but none have attempted to estimate the direct demographic impact of roads in terms of both survival rates, reproductive rates, and the interaction of reproductive state of female bears with survival rate. We applied a combination of survival and reproductive models to estimate demographic parameters for threatened grizzly bear populations in Alberta. Instead of attempting to estimate mean trend we explored factors which caused biological and spatial variation in population trend. We found that sex and age class survival was related to road density with subadult bears being most vulnerable to road-based mortality. A multi-state reproduction model found that females accompanied by cubs of the year and/or yearling cubs had lower survival rates compared to females with two year olds or no cubs. A demographic model found strong spatial gradients in population trend based upon road density. Threshold road densities needed to ensure population stability were estimated to further refine targets for population recovery of grizzly bears in Alberta. Models that considered lowered survival of females with dependant offspring resulted in lower road density thresholds to ensure stable bear populations. Our results demonstrate likely spatial variation in population trend and provide an example how demographic analysis can be used to refine and direct conservation measures for threatened species.
Model Parameterization and P-wave AVA Direct Inversion for Young's Impedance
NASA Astrophysics Data System (ADS)
Zong, Zhaoyun; Yin, Xingyao
2017-05-01
AVA inversion is an important tool for elastic parameters estimation to guide the lithology prediction and "sweet spot" identification of hydrocarbon reservoirs. The product of the Young's modulus and density (named as Young's impedance in this study) is known as an effective lithology and brittleness indicator of unconventional hydrocarbon reservoirs. Density is difficult to predict from seismic data, which renders the estimation of the Young's impedance inaccurate in conventional approaches. In this study, a pragmatic seismic AVA inversion approach with only P-wave pre-stack seismic data is proposed to estimate the Young's impedance to avoid the uncertainty brought by density. First, based on the linearized P-wave approximate reflectivity equation in terms of P-wave and S-wave moduli, the P-wave approximate reflectivity equation in terms of the Young's impedance is derived according to the relationship between P-wave modulus, S-wave modulus, Young's modulus and Poisson ratio. This equation is further compared to the exact Zoeppritz equation and the linearized P-wave approximate reflectivity equation in terms of P- and S-wave velocities and density, which illustrates that this equation is accurate enough to be used for AVA inversion when the incident angle is within the critical angle. Parameter sensitivity analysis illustrates that the high correlation between the Young's impedance and density render the estimation of the Young's impedance difficult. Therefore, a de-correlation scheme is used in the pragmatic AVA inversion with Bayesian inference to estimate Young's impedance only with pre-stack P-wave seismic data. Synthetic examples demonstrate that the proposed approach is able to predict the Young's impedance stably even with moderate noise and the field data examples verify the effectiveness of the proposed approach in Young's impedance estimation and "sweet spots" evaluation.
NASA Technical Reports Server (NTRS)
Freilich, M. H.; Pawka, S. S.
1987-01-01
The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.
NASA Astrophysics Data System (ADS)
Tesauro, Magdala; Kaban, Mikhail K.; Mooney, Walter D.; Cloetingh, Sierd A. P. L.
2014-12-01
Temperature and compositional variations of the North American (NA) lithospheric mantle are estimated using a new inversion technique introduced in Part 1, which allows us to jointly interpret seismic tomography and gravity data, taking into account depletion of the lithospheric mantle beneath the cratonic regions. The technique is tested using two tomography models (NA07 and SL2013sv) and different lithospheric density models. The first density model (Model I) reproduces the typical compositionally stratified lithospheric mantle, which is consistent with xenolith samples from the central Slave craton, while the second one (Model II) is based on the direct inversion of the residual gravity and residual topography. The results obtained, both in terms of temperature and composition, are more strongly influenced by the input models derived from seismic tomography, rather than by the choice of lithospheric density Model I versus Model II. The final temperatures estimated in the Archean lithospheric root are up to 150°C higher than in the initial thermal models obtained using a laterally and vertically uniform "fertile" compositional model and are in agreement with temperatures derived from xenolith data. Therefore, the effect of the compositional variations cannot be neglected when temperatures of the cratonic lithospheric mantle are estimated. Strong negative compositional density anomalies (<-0.03 g/cm3), corresponding to Mg # (100 × Mg/(Mg + Fe)) >92, characterize the lithospheric mantle of the northwestern part of the Superior craton and the central part of the Slave and Churchill craton, according to both tomographic models. The largest discrepancies between the results based on different tomography models are observed in the Proterozoic regions, such as the Trans Hudson Orogen (THO), Rocky Mountains, and Colorado Plateau, which appear weakly depleted (>-0.025 g/cm3 corresponding to Mg # ˜91) when model NA07 is used, or locally characterized by high-density bodies when model SL2013sv is used. The former results are in agreement with those based on the interpretation of xenolith data. The high-density bodies might be interpreted as fragments of subducted slabs or of the advection of the lithospheric mantle induced from the eastward-directed flat slab subduction. The selection of a seismic tomography model plays a significant role when estimating lithospheric density, temperature, and compositional heterogeneity. The consideration of the results of more than one model gives a more complete picture of the possible compositional variations within the NA lithospheric mantle.
NASA Astrophysics Data System (ADS)
Manuri, Solichin; Andersen, Hans-Erik; McGaughey, Robert J.; Brack, Cris
2017-04-01
The airborne lidar system (ALS) provides a means to efficiently monitor the status of remote tropical forests and continues to be the subject of intense evaluation. However, the cost of ALS acquisition can vary significantly depending on the acquisition parameters, particularly the return density (i.e., spatial resolution) of the lidar point cloud. This study assessed the effect of lidar return density on the accuracy of lidar metrics and regression models for estimating aboveground biomass (AGB) and basal area (BA) in tropical peat swamp forests (PSF) in Kalimantan, Indonesia. A large dataset of ALS covering an area of 123,000 ha was used in this study. This study found that cumulative return proportion (CRP) variables better represent the accumulation of AGB over tree heights than height-related variables do. The CRP variables in power models explained 80.9% and 90.9% of the BA and AGB variations, respectively. Further, it was found that low-density (and low-cost) lidar should be considered a feasible option for assessing AGB and BA in vast areas of flat, lowland PSF. The performance of the models generated using reduced return densities as low as 1/9 returns per m2 also yielded strong agreement with the original high-density data. The use of model-based statistical inference enabled relatively precise estimates of the mean AGB at the landscape scale to be obtained with a fairly low density of 1/4 returns per m2, with less than 10% standard error (SE). Further, even when very low-density lidar data was used (i.e., 1/49 returns per m2), the bias of the mean AGB estimates was still less than 10%, with a SE of approximately 15%. This study also investigated the influence of different digital terrain model (DTM) resolutions for normalizing the elevation during the generation of forest-related lidar metrics using various return density point clouds. We found that the high-resolution DTM had little effect on the accuracy of lidar metrics calculation in PSF. The accuracy of low-density lidar metrics in PSF was more influenced by the density of aboveground returns, rather than the last return. This is due to the flat topography of the study area. The results of this study will be valuable for future economical and feasible assessments of forest metrics over large areas of tropical peat swamp ecosystems.
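The power-model form relating CRP metrics to AGB can be fit by least squares in log-log space. A minimal sketch with synthetic plot-level data standing in for the field plots; the coefficients and noise level are assumptions, not values from the study:

```python
import numpy as np

# Fit AGB = a * CRP^b by linear regression on log-transformed variables.
# The plot data below are synthetic; coefficients are purely illustrative.

rng = np.random.default_rng(1)
crp = rng.uniform(0.2, 0.9, size=40)                            # cumulative return proportion metric
agb = 600.0 * crp ** 1.8 * rng.lognormal(0.0, 0.15, size=40)    # Mg/ha, synthetic truth plus noise

b, log_a = np.polyfit(np.log(crp), np.log(agb), 1)
a = np.exp(log_a)
pred = a * crp ** b
r2 = 1.0 - np.sum((agb - pred) ** 2) / np.sum((agb - agb.mean()) ** 2)
print(f"AGB ~= {a:.1f} * CRP^{b:.2f}, R^2 = {r2:.2f}")
```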
Keller, Brad M.; Nathan, Diane L.; Wang, Yan; Zheng, Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina
2012-01-01
Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., “FOR PROCESSING”) and vendor postprocessed (i.e., “FOR PRESENTATION”), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then aggregated into a final dense tissue segmentation that is used to compute breast PD%. Our method is validated on a group of 81 women for whom bilateral, mediolateral oblique, raw and processed screening digital mammograms were available, and agreement is assessed with both continuous and categorical density estimates made by a trained breast-imaging radiologist. Results: Strong association between algorithm-estimated and radiologist-provided breast PD% was detected for both raw (r = 0.82, p < 0.001) and processed (r = 0.85, p < 0.001) digital mammograms on a per-breast basis. Stronger agreement was found when overall breast density was assessed on a per-woman basis for both raw (r = 0.85, p < 0.001) and processed (0.89, p < 0.001) mammograms. Strong agreement between categorical density estimates was also seen (weighted Cohen's κ ≥ 0.79). Repeated measures analysis of variance demonstrated no statistically significant differences between the PD% estimates (p > 0.1) due to either presentation of the image (raw vs processed) or method of PD% assessment (radiologist vs algorithm). Conclusions: The proposed fully automated algorithm was successful in estimating breast percent density from both raw and processed digital mammographic images. Accurate assessment of a woman's breast density is critical in order for the estimate to be incorporated into risk assessment models. 
These results show promise for the clinical application of the algorithm in quantifying breast density in a repeatable manner, both at time of imaging as well as in retrospective studies. PMID:22894417
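The clustering stage of the algorithm described above can be illustrated on synthetic intensities. A minimal sketch of fuzzy c-means on pixel gray levels followed by a percent-density calculation; breast masking, the Hough-transform pectoral-muscle removal, and the SVM labeling of fibroglandular clusters are omitted, and the dense clusters are simply taken to be the brightest ones:

```python
import numpy as np

def fuzzy_cmeans_1d(x: np.ndarray, n_clusters: int, m: float = 2.0, n_iter: int = 100):
    """Plain fuzzy c-means on a 1-D feature (pixel intensity). Returns centers and memberships."""
    rng = np.random.default_rng(0)
    u = rng.dirichlet(np.ones(n_clusters), size=x.size)        # membership matrix (N x C)
    for _ in range(n_iter):
        w = u ** m
        centers = (w * x[:, None]).sum(0) / w.sum(0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12      # distances to cluster centers
        u = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, u

# Synthetic "breast region" intensities: a darker fatty component and a brighter dense component.
rng = np.random.default_rng(2)
pixels = np.concatenate([rng.normal(80, 10, 7000), rng.normal(160, 15, 3000)]).clip(0, 255)

centers, u = fuzzy_cmeans_1d(pixels, n_clusters=4)
labels = u.argmax(axis=1)
dense_clusters = np.argsort(centers)[-2:]     # brightest clusters stand in for fibroglandular tissue
pd_percent = 100.0 * np.isin(labels, dense_clusters).mean()
print(f"estimated percent density: {pd_percent:.1f}%")
```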
A Design Study of Onboard Navigation and Guidance During Aerocapture at Mars. M.S. Thesis
NASA Technical Reports Server (NTRS)
Fuhry, Douglas Paul
1988-01-01
The navigation and guidance of a high lift-to-drag ratio sample return vehicle during aerocapture at Mars are investigated. Emphasis is placed on integrated systems design, with guidance algorithm synthesis and analysis based on vehicle state and atmospheric density uncertainty estimates provided by the navigation system. The latter utilizes a Kalman filter for state vector estimation, with useful update information obtained through radar altimeter measurements and density altitude measurements based on IMU-measured drag acceleration. A three-phase guidance algorithm, featuring constant bank numeric predictor/corrector atmospheric capture and exit phases and an extended constant altitude cruise phase, is developed to provide controlled capture and depletion of orbital energy, orbital plane control, and exit apoapsis control. Integrated navigation and guidance systems performance are analyzed using a four degree-of-freedom computer simulation. The simulation environment includes an atmospheric density model with spatially correlated perturbations to provide realistic variations over the vehicle trajectory. Navigation filter initial conditions for the analysis are based on planetary approach optical navigation results. Results from a selection of test cases are presented to give insight into systems performance.
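The state-estimation side of the scheme described above can be reduced to its simplest element. A minimal sketch of a scalar Kalman filter tracking a single altitude state with radar-altimeter updates; the noise variances, initial conditions, and the constant-altitude process model are all assumptions for illustration, not the thesis design:

```python
import numpy as np

# One-dimensional Kalman filter: propagate an altitude estimate and update it with
# a noisy radar-altimeter measurement. Process/measurement noise values are hypothetical.

def kalman_update(x_prior: float, p_prior: float, z: float, r: float) -> tuple[float, float]:
    """Scalar measurement update: returns posterior state and variance."""
    k = p_prior / (p_prior + r)          # Kalman gain
    x_post = x_prior + k * (z - x_prior)
    p_post = (1.0 - k) * p_prior
    return x_post, p_post

rng = np.random.default_rng(3)
true_alt = 60_000.0                      # m, hypothetical true altitude
x, p = 61_000.0, 500.0 ** 2              # prior estimate and variance (hypothetical)
q, r = 50.0 ** 2, 200.0 ** 2             # process and measurement noise variances (hypothetical)

for _ in range(10):
    p += q                               # time update (state assumed roughly constant here)
    z = true_alt + rng.normal(0.0, np.sqrt(r))
    x, p = kalman_update(x, p, z, r)
print(f"altitude estimate {x:.0f} m, 1-sigma {np.sqrt(p):.0f} m")
```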
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, H; Xing, L; Kanehira, T
2016-06-15
Purpose: The aim of this study is to evaluate the feasibility of using a dual-energy CBCT (DECBCT) in proton therapy treatment planning to allow for accurate electron density estimation. Methods: For direct comparison, two scenarios were selected: a dual-energy fan-beam CT (high: 140 kVp, low: 80 kVp) and a DECBCT (high: 125 kVp, low: 80 kVp). A Gammex 467 tissue characterization phantom was used, including the rods of air, water, bone (B2–30% mineral), cortical bone (SB3), lung (LN-300), brain, liver and adipose. For the CBCT, Hounsfield Unit (HU) numbers were first obtained from the reconstructed images after a calibration was made based on water (=0) and air materials (=−1000). For each tissue surrogate, region-of-interest (ROI) analyses were made to derive high-energy and low-energy HU values (HUhigh and HUlow), which were subsequently used to estimate electron density based on the algorithm as previously described by Hunemohr N., et al. Parameters k1 and k2 are energy dependent and can be derived from calibration materials. Results: For the dual-energy FBCT, the electron density is found to be within +/−3% error relative to the values provided by the phantom vendor: −1.8% (water), 0.03% (lung), 1.1% (brain), −2.82% (adipose), −0.49% (liver) and −1.89% (cortical bones). For the DECBCT, the estimation of electron density exhibits a relatively larger variation: −1.76% (water), −36.7% (lung), −1.92% (brain), −3.43% (adipose), 8.1% (liver) and 9.5% (cortical bones). Conclusion: For DECBCT, the accuracy of electron density estimation is inferior to that of a FBCT, especially for materials of either low density (lung) or high density (cortical bone) compared to water. Such limitation arises from inaccurate HU number derivation in a CBCT. Advanced scatter-correction and HU calibration routines, as well as the deployment of photon counting CT detectors, need to be investigated to minimize the difference between FBCT and CBCT.
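The dual-energy mapping from paired HU values to relative electron density can be sketched with a linear calibration. A minimal sketch that fits two energy-dependent weights to phantom inserts by least squares; the functional form is a simplification, and all HU and electron-density values below are hypothetical, not the Gammex measurements or the exact parameterization of Hunemohr et al.:

```python
import numpy as np

# Calibrate rho_e ~= k1 * u_high + k2 * u_low, where u = HU/1000 + 1 (reduced HU),
# using inserts of assumed known relative electron density, then predict a test ROI.
# All HU and rho_e values below are hypothetical calibration data.

hu_high = np.array([-1000., 0., -740., 60., 980.])    # high-kVp HU (air, water, lung, brain, bone)
hu_low  = np.array([-1000., 0., -760., 75., 1350.])   # low-kVp HU
rho_e   = np.array([0.001, 1.000, 0.28, 1.05, 1.69])  # assumed relative electron densities

u_high, u_low = hu_high / 1000.0 + 1.0, hu_low / 1000.0 + 1.0
A = np.column_stack([u_high, u_low])
(k1, k2), *_ = np.linalg.lstsq(A, rho_e, rcond=None)

hu_h_test, hu_l_test = 450.0, 610.0                   # hypothetical test-material ROI means
rho_pred = k1 * (hu_h_test / 1000.0 + 1.0) + k2 * (hu_l_test / 1000.0 + 1.0)
print(f"k1={k1:.3f}, k2={k2:.3f}, predicted rho_e={rho_pred:.3f}")
```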
Carroll, Raymond J; Delaigle, Aurore; Hall, Peter
2011-03-01
In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.
Estimation of energy density of Li-S batteries with liquid and solid electrolytes
NASA Astrophysics Data System (ADS)
Li, Chunmei; Zhang, Heng; Otaegui, Laida; Singh, Gurpreet; Armand, Michel; Rodriguez-Martinez, Lide M.
2016-09-01
With the exponential growth of technology in mobile devices and the rapid expansion of electric vehicles into the market, it appears that the energy density of the state-of-the-art Li-ion batteries (LIBs) cannot satisfy the practical requirements. Sulfur has been one of the best cathode material choices due to its high charge storage (1675 mAh g-1), natural abundance and easy accessibility. In this paper, calculations are performed for different cell design parameters such as the active material loading, the amount/thickness of electrolyte, the sulfur utilization, etc. to predict the energy density of Li-S cells based on liquid, polymeric and ceramic electrolytes. It demonstrates that Li-S battery is most likely to be competitive in gravimetric energy density, but not volumetric energy density, with current technology, when comparing with LIBs. Furthermore, the cells with polymer and thin ceramic electrolytes show promising potential in terms of high gravimetric energy density, especially the cells with the polymer electrolyte. This estimation study of Li-S energy density can be used as a good guidance for controlling the key design parameters in order to get desirable energy density at cell-level.
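The cell-level gravimetric energy density calculation described above reduces to dividing deliverable energy by the summed component masses. A minimal sketch in which the sulfur loading, utilization, electrolyte-to-sulfur ratio, average voltage, and inactive masses are all hypothetical design parameters, not the values used in the paper:

```python
# Gravimetric energy density of a Li-S cell: E = Q_S * utilization * m_S * V_avg,
# divided by the total mass of all cell components. Every input is hypothetical.

Q_S = 1675.0        # theoretical sulfur capacity, mAh/g
utilization = 0.70  # fraction of theoretical capacity actually delivered (hypothetical)
v_avg = 2.1         # average discharge voltage, V (hypothetical)

m_sulfur = 4.0      # g sulfur per cell (hypothetical)
e_to_s = 3.0        # electrolyte-to-sulfur mass ratio, g/g (hypothetical)
m_electrolyte = e_to_s * m_sulfur
m_lithium = 1.2     # g lithium, with excess (hypothetical)
m_inactive = 6.0    # g: carbon host, current collectors, separator, packaging (hypothetical)

energy_wh = Q_S * utilization * m_sulfur * v_avg / 1000.0        # Wh per cell
total_mass_kg = (m_sulfur + m_electrolyte + m_lithium + m_inactive) / 1000.0
print(f"~{energy_wh / total_mass_kg:.0f} Wh/kg at cell level")
```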
A conceptual guide to detection probability for point counts and other count-based survey methods
D. Archibald McCallum
2005-01-01
Accurate and precise estimates of numbers of animals are vitally needed both to assess population status and to evaluate management decisions. Various methods exist for counting birds, but most of those used with territorial landbirds yield only indices, not true estimates of population size. The need for valid density estimates has spawned a number of models for...
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...
2017-08-25
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirmovich, E.G.; Shapiro, B.S.
1975-01-01
Simultaneous satellite measurements of electron density N_s and temperature (T_e)_s at a height h_s above an observatory and ground-based observations are used to compute the total vertical electron density profiles N(h) and estimate the temperature of the ionospheric plasma. Four close time intervals after sunset were selected for analysis.
2015-09-30
Large Scale Density Estimation of Blue and Fin Whales: Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density
Len Thomas & Danielle Harris
The objective is to develop and implement a new method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope ...
Simcharoen, S.; Pattanavibool, A.; Karanth, K.U.; Nichols, J.D.; Kumar, N.S.
2007-01-01
We used capture-recapture analyses to estimate the density of a tiger Panthera tigris population in the tropical forests of Huai Kha Khaeng Wildlife Sanctuary, Thailand, from photographic capture histories of 15 distinct individuals. The closure test results (z = 0.39, P = 0.65) provided some evidence in support of the demographic closure assumption. Fit of eight plausible closed models to the data indicated more support for model Mh, which incorporates individual heterogeneity in capture probabilities. This model generated an average capture probability $\hat{p} = 0.42$ and an abundance estimate of $\hat{N}\,(\widehat{SE}[\hat{N}]) = 19\,(9.65)$ tigers. The sampled area of $\hat{A}(W)\,(\widehat{SE}[\hat{A}(W)]) = 477.2\,(58.24)$ km² yielded a density estimate of $\hat{D}\,(\widehat{SE}[\hat{D}]) = 3.98\,(0.51)$ tigers per 100 km². Huai Kha Khaeng Wildlife Sanctuary could therefore hold 113 tigers and the entire Western Forest Complex c. 720 tigers. Although based on field protocols that constrained us to use sub-optimal analyses, this estimated tiger density is comparable to tiger densities in Indian reserves that support moderate prey abundances. However, tiger densities in well-protected Indian reserves with high prey abundances are three times higher. If given adequate protection we believe that the Western Forest Complex of Thailand could potentially harbour >2,000 wild tigers, highlighting its importance for global tiger conservation. The monitoring approaches we recommend here would be useful for managing this tiger population.
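The reported density is simply the abundance estimate divided by the effectively sampled area; the short check below reproduces the point estimate (the quoted standard error would additionally require the delta method with the covariance of the two estimates, which is not attempted here).

```python
# Density point estimate from abundance and effectively sampled area
N_hat = 19.0            # estimated abundance (tigers)
A_hat = 477.2           # effectively sampled area (km^2)
D_hat = N_hat / A_hat * 100.0   # tigers per 100 km^2
print(round(D_hat, 2))  # ~3.98
```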
NASA Astrophysics Data System (ADS)
Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia
2017-09-01
The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either empirically, using animals as the subject of experiment, or derived from mathematical equations. However, determining the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to exhibit real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. Optical properties corresponding to systolic and diastolic behaviors were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for an extended calibration curve study using other wavelength pairs.
Arthur, Aston L; Hoffmann, Ary A; Umina, Paul A
2015-10-01
A key component for spray decision-making in IPM programmes is the establishment of economic injury levels (EILs) and economic thresholds (ETs). We aimed to establish an EIL for the redlegged earth mite (Halotydeus destructor Tucker) on canola. Complex interactions between mite numbers, feeding damage and plant recovery were found, highlighting the challenges in linking H. destructor numbers to yield. A guide of 10 mites plant⁻¹ was established at the first-true-leaf stage; however, simple relationships were not evident at other crop development stages, making it difficult to establish reliable EILs based on mite number. Yield was, however, strongly associated with plant damage and plant densities, reflecting the impact of mite feeding damage and indicating a plant-based alternative for establishing thresholds for H. destructor. Drawing on data from multiple field trials, we show that plant densities below 30-40 plants m⁻² could be used as a proxy for mite damage when reliable estimates of mite densities are not possible. This plant-based threshold provides a practical tool that avoids the difficulties of accurately estimating mite densities. The approach may be applicable to other situations where production conditions are unpredictable and interactions between pests and plant hosts are complex. © 2015 Society of Chemical Industry.
Computing the Power-Density Spectrum for an Engineering Model
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1982-01-01
A computer program calculates the power-density spectrum (PDS) from a data base generated by the Advanced Continuous Simulation Language (ACSL), using an algorithm that employs the fast Fourier transform (FFT) to compute the PDS of a variable. This is accomplished by first estimating the autocovariance function of the variable and then taking the FFT of the smoothed autocovariance function to obtain the PDS. The fast-Fourier-transform technique conserves computer resources.
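A minimal sketch of this autocovariance-then-FFT (Blackman-Tukey-style) estimate is given below; the lag window, maximum lag and normalization are assumptions, and the ACSL program's exact choices may differ.

```python
import numpy as np

def power_density_spectrum(x, dt, max_lag=None):
    """Estimate the PDS of a uniformly sampled signal by (1) estimating its
    autocovariance, (2) tapering it with a lag window, and (3) taking the FFT
    of the windowed autocovariance."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    max_lag = max_lag or n // 4

    # Biased autocovariance estimate for lags 0..max_lag
    acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

    # Lag window to smooth the autocovariance before transforming
    w = np.hanning(2 * max_lag + 1)[max_lag:]
    acov_w = acov * w

    # Even (circular) extension and FFT; keep the non-negative-frequency part
    full = np.concatenate([acov_w, acov_w[-2:0:-1]])
    pds = np.real(np.fft.rfft(full)) * dt
    freqs = np.fft.rfftfreq(len(full), d=dt)
    return freqs, pds

# Example: white noise plus a 5 Hz tone sampled at 100 Hz
dt = 0.01
t = np.arange(0, 20, dt)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)
f, p = power_density_spectrum(x, dt)
print(f[np.argmax(p)])  # peak near 5 Hz
```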
Digital photography for urban street tree crown conditions
Neil A. Clark; Sang-Mook Lee; William A. Bechtold; Gregory A. Reams
2006-01-01
Crown variables such as height, diameter, live crown ratio, dieback, transparency, and density are all collected as part of the overall crown assessment (USDA 2004). Transparency and density are related to the amount of foliage and thus the photosynthetic potential of the tree. These measurements are both currently based on visual estimates and have been shown to be...
EnviroAtlas - New York, NY - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
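A sketch of the kernel-weighted intersection density at a single pixel is shown below; the 750 m radius and distance weighting follow the description above, while the specific quartic kernel and the per-km² units are assumptions rather than the EnviroAtlas implementation.

```python
import numpy as np

def intersection_density(pixel_xy, intersections_xy, radius_m=750.0):
    """Kernel-weighted intersection density around one pixel centre:
    intersections closer to the pixel get more weight.  A quartic (biweight)
    kernel is assumed here; the production workflow may use a different kernel."""
    d = np.linalg.norm(np.asarray(intersections_xy) - np.asarray(pixel_xy), axis=1)
    inside = d < radius_m
    u = d[inside] / radius_m
    weights = (1.0 - u**2) ** 2                  # 1 at the centre, 0 at the edge
    area_km2 = np.pi * (radius_m / 1000.0) ** 2  # search-window area
    return weights.sum() / area_km2

# Example: pixel at the origin, a few intersections (coordinates in metres)
pts = [(100, 50), (400, -300), (900, 0), (-200, 600)]
print(round(intersection_density((0, 0), pts), 2))
```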
EnviroAtlas - Paterson, NJ - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Fresno, CA - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Green Bay, WI - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Des Moines, IA - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Minneapolis/St. Paul, MN - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Woodbine, IA - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Phoenix, AZ - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Pittsburgh, PA - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - New Bedford, MA - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Milwaukee, WI - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Austin, TX - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Cleveland, OH - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Portland, ME - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Portland, OR - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Durham, NC - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Tampa, FL - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Memphis, TN - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
Predicting Grizzly Bear Density in Western North America
Mowat, Garth; Heard, Douglas C.; Schwarz, Carl J.
2013-01-01
Conservation of grizzly bears (Ursus arctos) is often controversial and the disagreement often is focused on the estimates of density used to calculate allowable kill. Many recent estimates of grizzly bear density are now available but field-based estimates will never be available for more than a small portion of hunted populations. Current methods of predicting density in areas of management interest are subjective and untested. Objective methods have been proposed, but these statistical models are so dependent on results from individual study areas that the models do not generalize well. We built regression models to relate grizzly bear density to ultimate measures of ecosystem productivity and mortality for interior and coastal ecosystems in North America. We used 90 measures of grizzly bear density in interior ecosystems, of which 14 were currently known to be unoccupied by grizzly bears. In coastal areas, we used 17 measures of density including 2 unoccupied areas. Our best model for coastal areas included a negative relationship with tree cover and positive relationships with the proportion of salmon in the diet and topographic ruggedness, which was correlated with precipitation. Our best interior model included 3 variables that indexed terrestrial productivity, 1 describing vegetation cover, 2 indices of human use of the landscape and, an index of topographic ruggedness. We used our models to predict current population sizes across Canada and present these as alternatives to current population estimates. Our models predict fewer grizzly bears in British Columbia but more bears in Canada than in the latest status review. These predictions can be used to assess population status, set limits for total human-caused mortality, and for conservation planning, but because our predictions are static, they cannot be used to assess population trend. PMID:24367552
The effect of different methods to compute N on estimates of mixing in stratified flows
NASA Astrophysics Data System (ADS)
Fringer, Oliver; Arthur, Robert; Venayagamoorthy, Subhas; Koseff, Jeffrey
2017-11-01
The background stratification is typically well defined in idealized numerical models of stratified flows, although it is more difficult to define in observations. This may have important ramifications for estimates of mixing which rely on knowledge of the background stratification against which turbulence must work to mix the density field. Using direct numerical simulation data of breaking internal waves on slopes, we demonstrate a discrepancy in ocean mixing estimates depending on the method in which the background stratification is computed. Two common methods are employed to calculate the buoyancy frequency N, namely a three-dimensionally resorted density field (often used in numerical models) and a locally-resorted vertical density profile (often used in the field). We show that how N is calculated has a significant effect on the flux Richardson number Rf, which is often used to parameterize turbulent mixing, and the turbulence activity number Gi, which leads to errors when estimating the mixing efficiency using Gi-based parameterizations. Supported by ONR Grant N00014-08-1-0904 and LLNL Contract DE-AC52-07NA27344.
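The two ways of defining the background stratification contrasted above can be sketched as follows: resorting the full three-dimensional density field versus resorting a single local vertical profile, with N² computed from the resorted profile in either case. The grid, reference density and synthetic field below are illustrative only.

```python
import numpy as np

G = 9.81
RHO0 = 1000.0  # reference density (kg m^-3), assumed

def n_squared_from_profile(rho_sorted, z):
    """N^2 = -(g/rho0) d(rho*)/dz for a monotonically resorted background profile."""
    return -(G / RHO0) * np.gradient(rho_sorted, z)

def background_3d_resort(rho):
    """Background profile from resorting the entire 3-D density field into a
    stable monotonic profile (one value per z level)."""
    flat = np.sort(rho.ravel())[::-1]              # heaviest fluid at the bottom
    levels = np.array_split(flat, rho.shape[-1])   # redistribute onto the z grid
    return np.array([lvl.mean() for lvl in levels])

def background_local_resort(rho_column):
    """Background profile from resorting a single vertical density profile,
    as is often done with field data."""
    return np.sort(rho_column)[::-1]

# Example: small synthetic field, z increasing upward
z = np.linspace(0.0, 10.0, 50)
rho = 1025.0 - 0.3 * z[None, None, :] + 0.05 * np.random.randn(8, 8, 50)

N2_global = n_squared_from_profile(background_3d_resort(rho), z)
N2_local = n_squared_from_profile(background_local_resort(rho[0, 0, :]), z)
print(N2_global.mean(), N2_local.mean())
```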
Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod
Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.
2008-01-01
To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or limited budgets determine that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate that units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and practically implemented than the others. © 2008 The Society of Population Ecology and Springer.
NASA Technical Reports Server (NTRS)
Tomei, B. A.; Smith, L. G.
1986-01-01
Sounding rockets equipped to monitor electron density and its fine structure were launched into the auroral and equatorial ionosphere in 1980 and 1983, respectively. The measurement electronics are based on the Langmuir probe and are described in detail. An approach to the spectral analysis of the density irregularities is addressed and a software algorithm implementing the approach is given. Preliminary results of the analysis are presented.
Use of geographic information systems in rabies vaccination campaigns.
Grisi-Filho, José Henrique de Hildebrand e; Amaku, Marcos; Dias, Ricardo Augusto; Montenegro Netto, Hildebrando; Paranhos, Noemia Tucunduva; Mendes, Maria Cristina Novo Campos; Ferreira Neto, José Soares; Ferreira, Fernando
2008-12-01
To develop a method to assist in the design and assessment of animal rabies control campaigns, a methodology was developed based on geographic information systems to estimate the animal (canine and feline) population and density per census tract and per subregion (known as "Subprefeituras") in the city of São Paulo (Southeastern Brazil) in 2002. The number of vaccination units in a given region was estimated so as to achieve a certain proportion of vaccination coverage. The census database was used for the human population, together with estimated dog:inhabitant and cat:inhabitant ratios. The estimated figures were 1,490,500 dogs and 226,954 cats in the city, i.e., an animal population density of 1,138.14 owned animals per km². In the 2002 campaign, 926,462 animals were vaccinated, resulting in a vaccination coverage of 54%. The estimated number of vaccination units needed to reach 70% vaccination coverage, vaccinating 700 animals per unit on average, was 1,729. These estimates are presented as maps of animal density according to census tracts and "Subprefeituras". The methodology used in the study may be applied in a systematic way to the design and evaluation of rabies vaccination campaigns, enabling the identification of areas of critical vaccination coverage.
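The planning arithmetic described above reduces to a few lines. The sketch below uses round, illustrative ratios and the 700-animals-per-unit throughput; because it works city-wide rather than per region, its total differs slightly from the figure of 1,729 reported in the study.

```python
import math

def estimate_vaccination_units(human_population, dog_ratio, cat_ratio,
                               target_coverage, animals_per_unit=700):
    """Estimate how many vaccination posts are needed to reach a target coverage."""
    animal_population = human_population * (dog_ratio + cat_ratio)
    animals_to_vaccinate = target_coverage * animal_population
    return animal_population, math.ceil(animals_to_vaccinate / animals_per_unit)

# Example with illustrative numbers: one dog per 7 inhabitants, one cat per 46
animals, units = estimate_vaccination_units(10_434_000, 1 / 7, 1 / 46, 0.70)
print(round(animals), units)
```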
Application of Accelerometer Data to Mars Odyssey Aerobraking and Atmospheric Modeling
NASA Technical Reports Server (NTRS)
Tolson, R. H.; Keating, G. M.; George, B. E.; Escalera, P. E.; Werner, M. R.; Dwyer, A. M.; Hanna, J. L.
2002-01-01
Aerobraking was an enabling technology for the Mars Odyssey mission even though it involved risk, due primarily to the variability of the Mars upper atmosphere. Consequently, numerous analyses based on various data types were performed during operations to reduce these risks, and among these data were measurements from spacecraft accelerometers. This paper reports on the use of accelerometer data for determining atmospheric density during Odyssey aerobraking operations. Acceleration was measured along three orthogonal axes, although only data from the component along the axis nominally into the flow were used during operations. For a one-second count time, the RMS noise level varied from 0.07 to 0.5 mm/s², permitting density recovery to between 0.15 and 1.1 kg/km³, or about 2% of the mean density at periapsis during aerobraking. Accelerometer data were analyzed in near real time to provide estimates of density at periapsis, maximum density, density scale height, latitudinal gradient, longitudinal wave variations, and location of the polar vortex. Summaries are given of the aerobraking phase of the mission, the accelerometer data analysis methods and operational procedures, some applications to determining thermospheric properties, and some remaining issues on interpretation of the data. Pre-flight estimates of natural variability based on Mars Global Surveyor accelerometer measurements proved reliable in the mid-latitudes, but overestimated the variability inside the polar vortex.
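Recovering density from along-track drag deceleration is commonly done with the standard drag relation ρ = 2ma/(C_D A v²); the sketch below uses placeholder spacecraft values, not Odyssey's actual configuration, and the operational processing described above was considerably more involved.

```python
def density_from_drag(accel_ms2, speed_ms, mass_kg, ref_area_m2, drag_coeff):
    """Atmospheric density from the along-track drag deceleration:
    rho = 2 m a / (C_D A v^2)."""
    return 2.0 * mass_kg * accel_ms2 / (drag_coeff * ref_area_m2 * speed_ms**2)

# Placeholder spacecraft values (NOT Odyssey's actual configuration)
rho = density_from_drag(accel_ms2=0.01, speed_ms=4500.0,
                        mass_kg=380.0, ref_area_m2=11.0, drag_coeff=2.2)
print(f"{rho * 1e9:.2f} kg/km^3")   # express in kg per cubic km, as in the abstract
```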
Simplified large African carnivore density estimators from track indices.
Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J
2016-01-01
The range, population size and trend of large carnivores are important parameters to assess their status globally and to plan conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional approach of a linear model with intercept may not intercept at zero, but may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. We performed simple linear regression with intercept and simple linear regression through the origin, and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, Mean Squares Residual and Akaike Information Criteria to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β, and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. Akaike Information Criteria showed that linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores using track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km² or higher. To improve the current models, we need independent data to validate the models and data to test for a non-linear relationship between track indices and true density at low densities.
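The two competing fits and the final track-to-density conversion can be sketched as follows; the survey arrays below are made up, and only the 3.26 slope is taken from the abstract.

```python
import numpy as np

def fit_with_intercept(x, y):
    """Ordinary least squares y = a*x + b."""
    X = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b

def fit_through_origin(x, y):
    """Least squares y = a*x constrained through the origin."""
    return float(np.dot(x, y) / np.dot(x, x))

def carnivore_density_from_tracks(track_density, slope=3.26):
    """Invert the reported general relation: track density = 3.26 x carnivore density."""
    return track_density / slope

# Made-up survey data: carnivore density (per 100 km^2) vs observed track density
dens = np.array([0.3, 0.8, 1.5, 2.4, 3.1])
tracks = np.array([1.1, 2.5, 5.2, 7.6, 10.3])

print(fit_with_intercept(dens, tracks))
print(fit_through_origin(dens, tracks))
print(carnivore_density_from_tracks(6.5))   # estimated carnivores per 100 km^2
```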
Ehlers Smith, David A; Ehlers Smith, Yvette C
2013-08-01
Because of the large-scale destruction of Borneo's rainforests on mineral soils, tropical peat-swamp forests (TPSFs) are increasingly essential for conserving remnant biodiversity, particularly in the lowlands where the majority of habitat conversion has occurred. Consequently, effective strategies for biodiversity conservation are required, which rely on accurate population density and distribution estimates as a baseline. We sought to establish the first population density estimates of the endemic red langur (Presbytis rubicunda) in Sabangau TPSF, the largest remaining contiguous lowland forest-block on Borneo. Using Distance sampling principles, we conducted line transect surveys in two of Sabangau's three principle habitat sub-classes and calculated group density at 2.52 groups km⁻² (95% CI 1.56-4.08) in the mixed-swamp forest sub-class. Based on an average recorded group size of 6.95 individuals, population density was 17.51 ind km⁻², the second highest density recorded in this species. The accessible area of the tall-interior forest, however, was too disturbed to yield density estimates representative of the entire sub-class, and P. rubicunda was absent from the low-pole forest, likely as a result of the low availability of the species' preferred foods. This absence in 30% of Sabangau's total area indicates the importance of in situ population surveys at the habitat-specific level for accurately informing conservation strategies. We highlight the conservation value of TPSFs for P. rubicunda given the high population density and large areas remaining, and recommend 1) quantifying the response of P. rubicunda to the logging and burning of its habitats; 2) surveying degraded TPSFs for viable populations, and 3) effectively delineating TPSF sub-class boundaries from remote imagery to facilitate population estimates across the wider peat landscape, given the stark contrast in densities found across the habitat sub-classes of Sabangau. © 2013 Wiley Periodicals, Inc.
Unbiased estimators for spatial distribution functions of classical fluids
NASA Astrophysics Data System (ADS)
Adib, Artur B.; Jarzynski, Christopher
2005-01-01
We use a statistical-mechanical identity closely related to the familiar virial theorem, to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.
Effects of LiDAR point density and landscape context on estimates of urban forest biomass
NASA Astrophysics Data System (ADS)
Singh, Kunwar K.; Chen, Gang; McCarter, James B.; Meentemeyer, Ross K.
2015-03-01
Light Detection and Ranging (LiDAR) data is being increasingly used as an effective alternative to conventional optical remote sensing to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and improved data accuracies accompanied by challenges for procuring and processing voluminous LiDAR data for large-area assessments. Reducing point density lowers data acquisition costs and overcomes computational challenges for large-area forest assessments. However, how does lower point density impact the accuracy of biomass estimation in forests containing a great level of anthropogenic disturbance? We evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing region of Charlotte, North Carolina, USA. We used multiple linear regression to establish a statistical relationship between field-measured biomass and predictor variables derived from LiDAR data with varying densities. We compared the estimation accuracies between a general Urban Forest type and three Forest Type models (evergreen, deciduous, and mixed) and quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest model, using adjusted R2, was consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of Forest Type biomass models outperformed the Urban Forest models at the representative point densities (100% and 40%). The Urban Forest biomass model with development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, highlighting a distance impact of development on biomass estimation. Our evaluation suggests that reducing LiDAR point density is a viable solution to regional-scale forest assessment without compromising the accuracy of biomass estimates, and these estimates can be further improved using development density.
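The general workflow implied above (thinning the point cloud, deriving per-plot height metrics, fitting a multiple linear regression, and comparing adjusted R² across point densities) might look like the sketch below; the metric choices and the synthetic data are placeholders, not the study's predictor variables.

```python
import numpy as np

def adjusted_r2(y, y_hat, n_predictors):
    n = len(y)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)

def thin_points(heights, keep_fraction, rng):
    """Randomly thin a plot's LiDAR returns to simulate lower pulse density."""
    k = max(1, int(len(heights) * keep_fraction))
    return heights[rng.choice(len(heights), size=k, replace=False)]

def plot_metrics(z):
    """Simple per-plot height metrics: mean, 95th percentile, canopy-cover proxy."""
    return np.array([z.mean(), np.percentile(z, 95), np.mean(z > 2.0)])

def fit_biomass_model(X, biomass):
    """Multiple linear regression biomass ~ metrics (with intercept)."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, biomass, rcond=None)
    return coef, A @ coef

# Illustrative example: 30 plots with synthetic return heights and biomass
rng = np.random.default_rng(0)
plots = [rng.gamma(4.0, 3.0, size=5000) for _ in range(30)]
biomass = np.array([150 + 8 * p.mean() + rng.normal(0, 10) for p in plots])

for frac in (1.0, 0.4, 0.01):          # 100%, 40%, 1% point densities
    X = np.array([plot_metrics(thin_points(p, frac, rng)) for p in plots])
    coef, y_hat = fit_biomass_model(X, biomass)
    print(frac, round(adjusted_r2(biomass, y_hat, X.shape[1]), 3))
```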
The ATLASGAL survey: distribution of cold dust in the Galactic plane. Combination with Planck data
NASA Astrophysics Data System (ADS)
Csengeri, T.; Weiss, A.; Wyrowski, F.; Menten, K. M.; Urquhart, J. S.; Leurini, S.; Schuller, F.; Beuther, H.; Bontemps, S.; Bronfman, L.; Henning, Th.; Schneider, N.
2016-01-01
Context. Sensitive ground-based submillimeter surveys, such as ATLASGAL, provide a global view on the distribution of cold dense gas in the Galactic plane at up to two times better angular resolution than recent space-based surveys with Herschel. However, a drawback of ground-based continuum observations is that they intrinsically filter emission, at angular scales larger than a fraction of the field-of-view of the array, when subtracting the sky noise in the data processing. The lost information on the distribution of diffuse emission can, however, be recovered from space-based, all-sky surveys with Planck. Aims: Here we aim to demonstrate how this information can be used to complement ground-based bolometer data and present reprocessed maps of the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL) survey. Methods: We use the maps at 353 GHz from the Planck/HFI instrument, which performed a high sensitivity all-sky survey at a frequency close to that of the APEX/LABOCA array, which is centred on 345 GHz. Complementing the ground-based observations with information on larger angular scales, the resulting maps reveal the distribution of cold dust in the inner Galaxy with a larger spatial dynamic range. We visually describe the observed features and assess the global properties of dust distribution. Results: Adding information from large angular scales helps to better identify the global properties of the cold Galactic interstellar medium. To illustrate this, we provide mass estimates from the dust towards the W43 star-forming region and estimate a column density contrast of at least a factor of five between a low intensity halo and the star-forming ridge. We also show examples of elongated structures extending over angular scales of 0.5°, which we refer to as thin giant filaments. Corresponding to >30 pc structures in projection at a distance of 3 kpc, these dust lanes are very extended and show large aspect ratios. We assess the fraction of dense gas by determining the contribution of the APEX/LABOCA maps to the combined maps, and estimate 2-5% for the dense gas fraction (corresponding to A_V > 7 mag) on average in the Galactic plane. We also show probability distribution functions of the column density (N-PDF), which reveal the typically observed log-normal distribution for low column density and exhibit an excess at high column densities. As a reference for extragalactic studies, we show the line-of-sight integrated N-PDF of the inner Galaxy, and derive a contribution of this excess to the total column density of ~2.2%, corresponding to N_H2 = 2.92 × 10²² cm⁻². Taking the total flux density observed in the maps, we provide an independent estimate of the mass of molecular gas in the inner Galaxy of ~1 × 10⁹ M⊙, which is consistent with previous estimates using CO emission. From the mass and dense gas fraction (fDG), we estimate a Galactic SFR of Ṁ = 1.3 M⊙ yr⁻¹. Conclusions: Retrieving the extended emission helps to better identify massive giant filaments, which are elongated and confined structures. We show that the log-normal distribution of low column density gas is ubiquitous in the inner Galaxy. While the distribution of diffuse gas is relatively homogeneous in the inner Galaxy, the central molecular zone (CMZ) stands out with a higher dense gas fraction despite its low star formation efficiency. Altogether, our findings explain the observed low star formation efficiency of the Milky Way by the low fDG in the Galactic ISM.
In contrast, the high fDG observed towards the CMZ, despite its low star formation activity, suggests that, in that particular region of our Galaxy, high density gas is not the bottleneck for star formation.
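A column-density PDF and an A_V-thresholded dense-gas fraction of the kind discussed above can be computed as in the sketch below; the A_V-to-N_H2 conversion factor and the synthetic map are assumptions, not the ATLASGAL/Planck processing.

```python
import numpy as np

AV_TO_NH2 = 9.4e20   # cm^-2 per magnitude of A_V (approximate conversion, assumed)

def column_density_pdf(n_h2, bins=50):
    """Histogram of ln(N/<N>), the usual form in which N-PDFs are shown."""
    eta = np.log(n_h2 / n_h2.mean())
    hist, edges = np.histogram(eta, bins=bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, hist

def dense_gas_fraction(n_h2, av_threshold=7.0):
    """Fraction of the total column density above an A_V ~ 7 mag threshold."""
    thresh = av_threshold * AV_TO_NH2
    return n_h2[n_h2 > thresh].sum() / n_h2.sum()

# Synthetic log-normal column-density map with an added high-column excess
rng = np.random.default_rng(1)
n_map = np.exp(rng.normal(np.log(2e21), 0.5, size=100_000))
n_map[:2000] *= rng.pareto(2.0, size=2000) + 1.0

centres, pdf = column_density_pdf(n_map)
print(round(dense_gas_fraction(n_map), 3), round(centres[np.argmax(pdf)], 2))
```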
NASA Technical Reports Server (NTRS)
Depaola, B. D.; Marcum, S. D.; Wrench, H. K.; Whitten, B. L.; Wells, W. E.
1979-01-01
Because measurements of electron temperature and electron density in nuclear-pumped plasmas are very difficult, a method for estimating these quantities is very useful. This paper describes such a method, based on rate-equation analysis of the ionized species in the plasma and the electron energy balance. In addition to the ionized species, certain neutral species must also be calculated. Examples are given for pure helium and a mixture of helium and argon. In the He-Ar case, He⁺, He₂⁺, He(2³S), Ar⁺, Ar₂⁺, and excited Ar are evaluated.
Estimation of Muscle Force Based on Neural Drive in a Hemispheric Stroke Survivor.
Dai, Chenyun; Zheng, Yang; Hu, Xiaogang
2018-01-01
Robotic assistant-based therapy holds great promise to improve the functional recovery of stroke survivors. Numerous neural-machine interface techniques have been used to decode the intended movement to control robotic systems for rehabilitation therapies. In this case report, we tested the feasibility of estimating finger extensor muscle forces of a stroke survivor, based on the decoded descending neural drive through population motoneuron discharge timings. Motoneuron discharge events were obtained by decomposing high-density surface electromyogram (sEMG) signals of the finger extensor muscle. The neural drive was extracted from the normalized frequency of the composite discharge of the motoneuron pool. The neural-drive-based estimation was also compared with the classic myoelectric-based estimation. Our results showed that the neural-drive-based approach can better predict the force output, quantified by lower estimation errors and higher correlations with the muscle force, compared with the myoelectric-based estimation. Our findings suggest that the neural-drive-based approach can potentially be used as a more robust interface signal for robotic therapies during the stroke rehabilitation.
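After decomposition, the neural-drive pipeline described above amounts to pooling the motoneuron discharge times, smoothing the composite spike train, and mapping the normalized drive to force. The window length, normalization, and single-gain linear mapping in the sketch below are deliberate simplifications, not the study's estimator.

```python
import numpy as np

def composite_discharge_rate(spike_trains, fs, win_s=0.25):
    """Composite discharge rate of a motoneuron pool: sum the binary spike
    trains of all decomposed units and smooth with a moving-average window."""
    pooled = np.sum(spike_trains, axis=0).astype(float)
    win = int(win_s * fs)
    kernel = np.ones(win) / win
    return np.convolve(pooled, kernel, mode="same") * fs   # discharges per second

def neural_drive(rate):
    """Normalized neural-drive estimate (0..1) from the composite rate."""
    return rate / (rate.max() + 1e-12)

def fit_force_gain(drive, force):
    """Single-gain linear mapping force ~ g * drive (a simple stand-in for
    whatever regression the study actually used)."""
    return float(np.dot(drive, force) / np.dot(drive, drive))

# Toy example: 5 units firing more densely in the middle of a 10-s trial (fs = 2048 Hz)
fs, n = 2048, 2048 * 10
rng = np.random.default_rng(2)
prob = 0.004 * np.hanning(n) + 0.001
spikes = rng.random((5, n)) < prob
force = 30.0 * np.hanning(n) + rng.normal(0, 1.0, n)   # synthetic force trace (N)

rate = composite_discharge_rate(spikes, fs)
drive = neural_drive(rate)
g = fit_force_gain(drive, force)
print(round(g, 1), "N at maximal drive (toy data)")
```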
Real-time reflectometry measurement validation in H-mode regimes for plasma position control.
Santos, J; Guimarais, L; Manso, M
2010-10-01
It has been shown that in H-mode regimes, reflectometry electron density profiles and an estimate for the density at the separatrix can be jointly used to track the separatrix within the precision required for plasma position control on ITER. We present a method to automatically remove, from the position estimation procedure, measurements performed during collapse and recovery phases of edge localized modes (ELMs). Based on the rejection mechanism, the method also produces an estimate confidence value to be fed to the position feedback controller. Preliminary results show that the method improves the real-time experimental separatrix tracking capabilities and has the potential to eliminate the need for an external online source of ELM event signaling during control feedback operation.
Forest Stand Canopy Structure Attribute Estimation from High Resolution Digital Airborne Imagery
Demetrios Gatziolis
2006-01-01
A study of forest stand canopy variable assessment using digital, airborne, multispectral imagery is presented. Variable estimation involves stem density, canopy closure, and mean crown diameter, and it is based on quantification of spatial autocorrelation among pixel digital numbers (DN) using variogram analysis and an alternative, non-parametric approach known as...
4D computerized ionospheric tomography by using GPS measurements and IRI-Plas model
NASA Astrophysics Data System (ADS)
Tuna, Hakan; Arikan, Feza; Arikan, Orhan
2016-07-01
Ionospheric imaging is an important subject in ionospheric studies. GPS-based TEC measurements provide very accurate information about the electron density values in the ionosphere. However, since the measurements are generally very sparse and non-uniformly distributed, computation of a 3D electron density estimate from measurements alone is an ill-defined problem. Model-based 3D electron density estimates provide physically feasible distributions; however, they are not generally compliant with the TEC measurements obtained from GPS receivers. In this study, GPS-based TEC measurements and an ionosphere model known as the International Reference Ionosphere Extended to Plasmasphere (IRI-Plas) are employed together in order to obtain a physically accurate 3D electron density distribution which is compliant with the real measurements obtained from a GPS satellite-receiver network. Ionospheric parameters input to the IRI-Plas model are perturbed in the region of interest by using parametric perturbation models such that the synthetic TEC measurements calculated from the resultant 3D electron density distribution fit the real TEC measurements. The problem is considered as an optimization problem where the optimization parameters are the parameters of the parametric perturbation models. The proposed technique is applied over Turkey, on both calm and storm days of the ionosphere. Results show that the proposed technique produces 3D electron density distributions which are compliant with the IRI-Plas model, GPS TEC measurements, and ionosonde measurements. The effect of the number of GPS receiver stations on the performance of the proposed technique is investigated. Results showed that 7 GPS receiver stations in a region as large as Turkey are sufficient for both calm and storm days of the ionosphere. Since ionization levels in the ionosphere are highly correlated in time, the proposed technique is extended to the time domain by applying Kalman-based tracking and smoothing approaches to the obtained results. Combining Kalman methods with the proposed 3D CIT technique creates a robust 4D ionospheric electron density estimation model and has the advantage of decreasing the computational cost of the proposed method. Results for both calm and storm days of the ionosphere show that the new technique produces more robust solutions, especially when the number of GPS receiver stations in the region is small. This study is supported by TUBITAK 114E541, 115E915 and joint TUBITAK 114E092 and AS CR 14/001 projects.
A shock-capturing SPH scheme based on adaptive kernel estimation
NASA Astrophysics Data System (ADS)
Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime
2006-02-01
Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
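A minimal sketch of the general idea of adaptive kernel density estimation in an SPH-style summation, with per-particle smoothing lengths set from a pilot estimate; this illustrates the concept only and is not the authors' scheme (the kernel choice, adaptation exponent, and data are assumptions).

```python
# Sketch of an adaptive kernel density estimate in 1D: a pilot SPH-style
# summation with a fixed smoothing length is refined by per-particle
# smoothing lengths that shrink where the pilot density is high
# (h_i ~ pilot^(-1/d), here d = 1). Illustrative only.
import numpy as np

def gaussian_kernel(q):
    return np.exp(-q**2) / np.sqrt(np.pi)

def sph_density(x_eval, x_part, m, h):
    """Kernel summation rho(x) = sum_j m_j W(x - x_j, h_j)."""
    dx = x_eval[:, None] - x_part[None, :]
    return np.sum(m * gaussian_kernel(dx / h) / h, axis=1)

rng = np.random.default_rng(1)
x = np.sort(np.concatenate([rng.normal(-2, 0.2, 400), rng.normal(2, 1.0, 400)]))
m = np.full(x.size, 1.0 / x.size)          # equal particle masses

# Pass 1: pilot estimate with a global smoothing length.
h0 = 0.3
pilot = sph_density(x, x, m, h0)

# Pass 2: adaptive lengths, normalized to the geometric mean of the pilot.
h_i = h0 * (pilot / np.exp(np.mean(np.log(pilot)))) ** (-1.0)
rho = sph_density(x, x, m, h_i)
print(rho[:5])
```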
Kasaragod, Deepa; Makita, Shuichi; Hong, Young-Joo; Yasuno, Yoshiaki
2017-01-01
This paper presents a noise-stochastic corrected maximum a posteriori estimator for birefringence imaging using Jones matrix optical coherence tomography. The estimator described in this paper is based on the relationship between probability distribution functions of the measured birefringence and the effective signal to noise ratio (ESNR) as well as the true birefringence and the true ESNR. The Monte Carlo method is used to numerically describe this relationship and adaptive 2D kernel density estimation provides the likelihood for a posteriori estimation of the true birefringence. Improved estimation is shown for the new estimator with stochastic model of ESNR in comparison to the old estimator, both based on the Jones matrix noise model. A comparison with the mean estimator is also done. Numerical simulation validates the superiority of the new estimator. The superior performance of the new estimator was also shown by in vivo measurement of optic nerve head. PMID:28270974
The Mass of Saturn's B ring from hidden density waves
NASA Astrophysics Data System (ADS)
Hedman, M. M.; Nicholson, P. D.
2015-12-01
The B ring is Saturn's brightest and most opaque ring, but many of its fundamental parameters, including its total mass, are not well constrained. Elsewhere in the rings, the best mass density estimates come from spiral waves driven by mean-motion resonances with Saturn's various moons, but such waves have been hard to find in the B ring. We have developed a new wavelet-based technique for combining data from multiple stellar occultations that allows us to isolate the density wave signals from other ring structures. This method has been applied to 5 density waves using 17 occultations of the star gamma Crucis observed by the Visual and Infrared Mapping Spectrometer (VIMS) onboard the Cassini spacecraft. Two of these waves (generated by the Janus 2:1 and Mimas 5:2 Inner Lindblad Resonances) are visible in individual occultation profiles, but the other three wave signatures (associated with the Janus 3:2, Enceladus 3:1 and Pandora 3:2 Inner Lindblad Resonances) are not visible in individual profiles and can only be detected in the combined dataset. Estimates of the ring's surface mass density derived from these five waves fall between 40 and 140 g/cm^2. Surprisingly, these mass density estimates show no obvious correlation with the ring's optical depth. Furthermore, these data indicate that the total mass of the B ring is probably between one-third and two-thirds the mass of Saturn's moon Mimas.
NASA Astrophysics Data System (ADS)
Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato
2017-12-01
Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan. Six infrasound stations deployed from 12-20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. If accurate volcanic flow parameters are known, application of this technique could be broadly applied to enable near real-time calculation of eruption mass flow rates and total masses. These critical input parameters for volcanic eruption modeling and monitoring are not currently available.
NASA Astrophysics Data System (ADS)
Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.
2018-03-01
Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
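A brief sketch of a truncated Karhunen-Loève expansion of a spatially correlated random field, as used above to parameterize distributed properties such as mass density; the squared-exponential covariance and the correlation length are assumptions, not the paper's choices.

```python
# Sketch of a truncated Karhunen-Loeve expansion of a 1D random field,
# e.g. a spatially varying mass density along a beam. The squared-
# exponential covariance and correlation length are assumptions.
import numpy as np

n = 200                                    # grid points along the beam
L = 1.0                                    # beam length (m), assumed
x = np.linspace(0.0, L, n)
sigma, ell = 0.1, 0.2                      # field std dev and correlation length

# Covariance matrix of the field on the grid.
C = sigma**2 * np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * ell**2))

# KL modes: eigen-decomposition of the covariance (descending order).
eigval, eigvec = np.linalg.eigh(C)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]

# Truncate to the modes carrying 99% of the variance.
k = np.searchsorted(np.cumsum(eigval) / eigval.sum(), 0.99) + 1

# One realization: field(x) = mean + sum_i sqrt(lambda_i) * xi_i * phi_i(x).
rng = np.random.default_rng(2)
xi = rng.standard_normal(k)
field = 1.0 + eigvec[:, :k] @ (np.sqrt(eigval[:k]) * xi)
print(f"{k} KL modes retained; field range: {field.min():.3f}..{field.max():.3f}")
```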
How we compute N matters to estimates of mixing in stratified flows
Arthur, Robert S.; Venayagamoorthy, Subhas K.; Koseff, Jeffrey R.; ...
2017-10-13
We know that most commonly used models for turbulent mixing in the ocean rely on a background stratification against which turbulence must work to stir the fluid. While this background stratification is typically well defined in idealized numerical models, it is more difficult to capture in observations. Here, a potential discrepancy in ocean mixing estimates due to the chosen calculation of the background stratification is explored using direct numerical simulation data of breaking internal waves on slopes. Two different methods for computing the buoyancy frequency N, one based on a three-dimensionally sorted density field (often used in numerical models) and the other based on locally sorted vertical density profiles (often used in the field), are used to quantify the effect of N on turbulence quantities. It is shown that how N is calculated changes not only the flux Richardson number R_f, which is often used to parameterize turbulent mixing, but also the turbulence activity number or Gibson number Gi, leading to potential errors in estimates of the mixing efficiency using Gi-based parameterizations.
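A toy numerical illustration of the two definitions of the background stratification discussed above, computing N^2 from a volume-sorted density field and from column-by-column sorted profiles; the grid and density field are synthetic placeholders.

```python
# Toy comparison of two background stratifications: N^2 from a
# three-dimensionally (volume) sorted density field versus N^2 from
# column-by-column sorted vertical profiles.
import numpy as np

g, rho0 = 9.81, 1000.0
nz, nx = 64, 32
z = np.linspace(0.0, 10.0, nz)             # height (m)
dz = z[1] - z[0]

rng = np.random.default_rng(3)
rho = 1027.0 - 0.2 * z[:, None] + 0.05 * rng.standard_normal((nz, nx))

def n_squared(rho_background):
    # N^2 = -(g / rho0) * d(rho_background)/dz, density decreasing upward
    return -(g / rho0) * np.gradient(rho_background, dz)

# (a) 3D sort: sort all parcels by density, fill the column from the bottom up.
rho_3d_sorted = np.sort(rho.ravel())[::-1].reshape(nz, nx).mean(axis=1)
N2_3d = n_squared(rho_3d_sorted)

# (b) Local sort: sort each vertical profile separately, then average.
rho_local_sorted = np.sort(rho, axis=0)[::-1]
N2_local = n_squared(rho_local_sorted.mean(axis=1))

print("mean N^2 (3D sort)   :", N2_3d.mean())
print("mean N^2 (local sort):", N2_local.mean())
```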
Jiao, Jichao; Li, Fei; Deng, Zhongliang; Ma, Wenjing
2017-03-28
Considering installation cost and coverage, received signal strength indicator (RSSI)-based indoor positioning systems are widely used across the world. However, indoor positioning performance cannot meet the demands of indoor location-based services because of interference with wireless signals caused by complex indoor environments, including crowded populations. In this paper, we focus on increasing the accuracy of signal strength estimation by taking population density into account, which distinguishes this work from other RSSI-based indoor positioning methods. We therefore propose a new wireless signal compensation model that accounts for population density, distance, and frequency. First, the number of individuals in a crowded indoor scenario is calculated by our convolutional neural network (CNN)-based human detection approach. Then, the relationship between population density and signal attenuation is described in our model. Finally, we use the trilateral positioning principle to locate the pedestrian. According to simulations and tests in crowded scenarios, the proposed model increases the accuracy of signal strength estimation by a factor of 1.53 compared to a model that does not consider the human body. The resulting localization error is less than 1.37 m, which indicates that our algorithm improves indoor positioning performance and is superior to other RSSI models.
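A hedged sketch of RSSI-based trilateration with a crowd-compensation term; the log-distance path-loss form and the linear people-attenuation term are illustrative assumptions, not the compensation model proposed in the paper.

```python
# Sketch of RSSI-based trilateration with a crowd-compensation term.
# The log-distance path-loss form and the linear "people attenuation"
# term are illustrative assumptions, not the model from the paper.
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
tx_power = -40.0        # RSSI at 1 m (dBm), assumed
path_exp = 2.2          # path-loss exponent, assumed
alpha = 0.5             # extra attenuation per person/m^2 (dB), assumed
crowd_density = 1.2     # people per m^2, e.g. from a CNN people counter

def predicted_rssi(pos, anchor):
    d = max(np.linalg.norm(pos - anchor), 0.1)
    return tx_power - 10 * path_exp * np.log10(d) - alpha * crowd_density * d

# Hypothetical measured RSSI values at the four anchors.
true_pos = np.array([3.0, 6.0])
rssi_meas = np.array([predicted_rssi(true_pos, a) for a in anchors]) \
            + np.random.default_rng(4).normal(0, 1.0, len(anchors))

# Solve for position by minimizing RSSI residuals.
res = least_squares(
    lambda p: [predicted_rssi(p, a) - r for a, r in zip(anchors, rssi_meas)],
    x0=np.array([5.0, 5.0]),
)
print("estimated position:", res.x)
```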
Grant M. Domke; Christopher W. Woodall; James E. Smith
2012-01-01
Until recently, standing dead tree biomass and carbon (C) has been estimated as a function of live tree growing stock volume in the U.S. Forest Service, Forest Inventory and Analysis (FIA) Program. Traditional estimates of standing dead tree biomass/C attributes were based on merchantability standards that did not reflect density reductions or structural loss due to...
Plasma-based wakefield accelerators as sources of axion-like particles
NASA Astrophysics Data System (ADS)
Burton, David A.; Noble, Adam
2018-03-01
We estimate the average flux density of minimally-coupled axion-like particles (ALPs) generated by a laser-driven plasma wakefield propagating along a constant strong magnetic field. Our calculations suggest that a terrestrial source based on this approach could generate a pulse of ALPs whose flux density is comparable to that of solar ALPs at Earth. This mechanism is optimal for ALPs with mass in the range of interest of contemporary experiments designed to detect dark matter using microwave cavities.
Uncertainty estimation of Intensity-Duration-Frequency relationships: A regional analysis
NASA Astrophysics Data System (ADS)
Mélèse, Victor; Blanchet, Juliette; Molinié, Gilles
2018-03-01
We propose in this article a regional study of uncertainties in IDF curves derived from point-rainfall maxima. We develop two generalized extreme value models based on the simple scaling assumption, first in the frequentist framework and second in the Bayesian framework. Within the frequentist framework, uncertainties are obtained i) from the Gaussian density stemming from the asymptotic normality theorem of the maximum likelihood and ii) with a bootstrap procedure. Within the Bayesian framework, uncertainties are obtained from the posterior densities. We confront these two frameworks on the same database covering a large region of 100,000 km2 in southern France with contrasted rainfall regimes, in order to be able to draw conclusions that are not specific to the data. The two frameworks are applied to 405 hourly stations with data back to the 1980s, accumulated in the range 3 h-120 h. We show that i) the Bayesian framework is more robust than the frequentist one to the starting point of the estimation procedure, ii) the posterior and bootstrap densities are able to better adjust uncertainty estimation to the data than the Gaussian density, and iii) the bootstrap density gives unreasonable confidence intervals, in particular for return levels associated with large return periods. Therefore our recommendation goes towards the use of the Bayesian framework to compute uncertainty.
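A minimal sketch of the frequentist ingredients mentioned above for a single duration: a maximum-likelihood GEV fit to annual maxima and a bootstrap confidence interval for a return level. The data are synthetic, and the simple-scaling IDF structure and the Bayesian counterpart are not shown.

```python
# Sketch of the frequentist side only: maximum-likelihood GEV fit to
# annual rainfall maxima and a bootstrap confidence interval for the
# 100-year return level. Data are synthetic placeholders.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)
annual_maxima = genextreme.rvs(c=-0.1, loc=30.0, scale=10.0, size=40)  # mm/h

T = 100.0                                            # return period (years)

def return_level(sample):
    c, loc, scale = genextreme.fit(sample)
    return genextreme.isf(1.0 / T, c, loc=loc, scale=scale)

point = return_level(annual_maxima)

# Nonparametric bootstrap of the return level.
boot = np.array([
    return_level(rng.choice(annual_maxima, size=annual_maxima.size, replace=True))
    for _ in range(500)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"100-year return level: {point:.1f} mm/h (95% bootstrap CI {lo:.1f}-{hi:.1f})")
```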
Martins, Mónia A R; Neves, Catarina M S S; Kurnia, Kiki A; Carvalho, Pedro J; Rocha, Marisa A A; Santos, Luís M N B F; Pinho, Simão P; Freire, Mara G
2016-01-15
In order to evaluate the impact of the alkyl side chain length and symmetry of the cation on the thermophysical properties of water-saturated ionic liquids (ILs), densities and viscosities as a function of temperature were measured at atmospheric pressure and in the (298.15 to 363.15) K temperature range, for systems containing two series of bis(trifluoromethylsulfonyl)imide-based compounds: the symmetric [CnCnim][NTf2] (with n = 1-8 and 10) and asymmetric [CnC1im][NTf2] (with n = 2-5, 7, 9 and 11) ILs. For water-saturated ILs, the density decreases with the increase of the alkyl side chain length while the viscosity increases with the size of the aliphatic tails. The saturation water solubility in each IL was further estimated with reasonable agreement based on the densities of water-saturated ILs, further confirming that for the ILs investigated the volumetric mixing properties of ILs and water follow near-ideal behaviour. The water-saturated symmetric ILs generally present lower densities and viscosities than their asymmetric counterparts. From the experimental data, the isobaric thermal expansion coefficient and energy barrier were also estimated. A close correlation between the difference in the energy barrier values of the water-saturated and pure ILs and the water content in each IL was found, supporting that the decrease in the viscosity of ILs in the presence of water is directly related to the decrease of the energy barrier.
Modelling coronal electron density and temperature profiles of the Active Region NOAA 11855
NASA Astrophysics Data System (ADS)
Rodríguez Gómez, J. M.; Antunes Vieira, L. E.; Dal Lago, A.; Palacios, J.; Balmaceda, L. A.; Stekel, T.
2017-10-01
Magnetic flux emergence can help in understanding the physical mechanisms responsible for solar atmospheric phenomena. Emerging magnetic flux is frequently related to eruptive events because, as it emerges, it can reconnect with the ambient field and release magnetic energy. We will use a physics-based model to reconstruct the evolution of the solar emission based on the configuration of the photospheric magnetic field. The structure of the coronal magnetic field is estimated by employing a nonlinear force-free field (NLFFF) extrapolation based on vector magnetic field products (SHARPS) observed by the HMI instrument aboard the SDO spacecraft from Sept. 29 to Oct. 07, 2013. The coronal plasma temperature and density are described, and the emission is estimated using the CHIANTI atomic database 8.0. The performance of our model is compared to the integrated emission from the AIA instrument aboard the SDO spacecraft at the specific wavelengths 171 Å and 304 Å.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonneville, Alain H.; Kouzes, Richard T.
Imaging subsurface geological formations, oil and gas reservoirs, mineral deposits, cavities, or magma chambers under active volcanoes has been for many years a major quest of geophysicists and geologists. Since these objects cannot be observed directly, different indirect geophysical methods have been developed. They are all based on variations of certain physical properties of the subsurface that can be detected from the ground surface or from boreholes. Electrical resistivity, seismic wave velocities, and density are certainly the most used properties. Looking at density, indirect estimates of density distributions are currently performed by seismic reflection methods - since the velocity of seismic waves also depends on density - but they are expensive and discontinuous in time. Direct estimates of density are performed using gravimetric data, looking at variations of the gravity field induced by density variations at depth, but this is not sufficiently accurate. A new imaging technique using cosmic-ray muon detectors has emerged during the last decade, and muon tomography - or muography - promises to provide, for the first time, a complete and precise image of the density distribution in the subsurface. Further, this novel approach has the potential to become a direct, real-time, and low-cost method for monitoring fluid displacement in subsurface reservoirs.
The scaling of contact rates with population density for the infectious disease models.
Hu, Hao; Nigmatulina, Karima; Eckhoff, Philip
2013-08-01
Contact rates and patterns among individuals in a geographic area drive transmission of directly transmitted pathogens, making it essential to understand and estimate contacts for simulation of disease dynamics. Under the uniform mixing assumption, one of two mechanisms is typically used to describe the relation between contact rate and population density: density-dependent or frequency-dependent. Based on existing evidence of population thresholds and human mobility patterns, we formulated a spatial contact model to describe the appropriate form of transmission, with initial growth at low density and saturation at higher density. We show that the two mechanisms are extreme cases that do not capture real population movement across all scales. Empirical data on human and wildlife diseases indicate that a nonlinear function may work better when looking at the full spectrum of densities. This estimation can be applied to large areas with population mixing in general activities. For crowds with unusually large densities (e.g., transportation terminals, stadiums, or mass gatherings), the lack of organized social contact structure shifts the physical contacts towards a special case of the spatial contact model - the dynamics of kinetic gas molecule collision. In this case, an ideal gas model with van der Waals correction fits existing movement observation data well, and the contact rate between individuals is estimated using kinetic theory. A complete picture of contact rate scaling with population density may help clarify the definition of transmission rates in heterogeneous, large-scale spatial systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbert, Ludovic, E-mail: ludohumberto@gmail.com; Hazrati Marangalou, Javad; Rietbergen, Bert van
Purpose: Cortical thickness and density are critical components in determining the strength of bony structures. Computed tomography (CT) is one possible modality for analyzing the cortex in 3D. In this paper, a model-based approach for measuring the cortical bone thickness and density from clinical CT images is proposed. Methods: Density variations across the cortex were modeled as a function of the cortical thickness and density, location of the cortex, density of surrounding tissues, and imaging blur. High resolution micro-CT data of cadaver proximal femurs were analyzed to determine a relationship between cortical thickness and density. This thickness-density relationship was used as prior information to be incorporated in the model to obtain accurate measurements of cortical thickness and density from clinical CT volumes. The method was validated using micro-CT scans of 23 cadaver proximal femurs. Simulated clinical CT images with different voxel sizes were generated from the micro-CT data. Cortical thickness and density were estimated from the simulated images using the proposed method and compared with measurements obtained using the micro-CT images to evaluate the effect of voxel size on the accuracy of the method. Then, 19 of the 23 specimens were imaged using a clinical CT scanner. Cortical thickness and density were estimated from the clinical CT images using the proposed method and compared with the micro-CT measurements. Finally, a case-control study including 20 patients with osteoporosis and 20 age-matched controls with normal bone density was performed to evaluate the proposed method in a clinical context. Results: Cortical thickness (density) estimation errors were 0.07 ± 0.19 mm (−18 ± 92 mg/cm^3) using the simulated clinical CT volumes with the smallest voxel size (0.33 × 0.33 × 0.5 mm^3), and 0.10 ± 0.24 mm (−10 ± 115 mg/cm^3) using the volumes with the largest voxel size (1.0 × 1.0 × 3.0 mm^3). A trend for the cortical thickness and density estimation errors to increase with voxel size was observed and was more pronounced for thin cortices. Using clinical CT data for 19 of the 23 samples, mean errors of 0.18 ± 0.24 mm for the cortical thickness and 15 ± 106 mg/cm^3 for the density were found. The case-control study showed that osteoporotic patients had a thinner cortex and a lower cortical density, with average differences of −0.8 mm and −58.6 mg/cm^3 at the proximal femur in comparison with age-matched controls (p-value < 0.001). Conclusions: This method might be a promising approach for the quantification of cortical bone thickness and density using clinical routine imaging techniques. Future work will concentrate on investigating how this approach can improve the estimation of mechanical strength of bony structures, the prevention of fracture, and the management of osteoporosis.
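A simplified 1D illustration of the model-based idea described above: the density profile across the cortex is treated as two blurred edges and fitted for thickness and density by least squares. The functional form, parameter values, and data are assumptions, not the authors' implementation.

```python
# Sketch of a model-based 1D fit of cortical thickness and density: the
# density profile across the cortex is modeled as two blurred edges
# (outer tissue -> cortex -> trabecular bone) and fitted by least squares.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def blurred_cortex(x, x0, thickness, rho_cortex, rho_out, rho_in, sigma):
    """Rect profile of width `thickness` convolved with Gaussian blur sigma."""
    step = lambda s: 0.5 * (1 + erf(s / (np.sqrt(2) * sigma)))
    return (rho_out
            + (rho_cortex - rho_out) * step(x - x0)
            + (rho_in - rho_cortex) * step(x - (x0 + thickness)))

# Synthetic profile along a line crossing the cortex (mm, mg/cm^3).
x = np.linspace(-5, 10, 150)
truth = dict(x0=0.0, thickness=2.0, rho_cortex=1100.0,
             rho_out=100.0, rho_in=300.0, sigma=0.8)
y = blurred_cortex(x, **truth) + np.random.default_rng(6).normal(0, 20, x.size)

p0 = [0.5, 1.5, 900.0, 50.0, 250.0, 1.0]   # starting guess, assumed
popt, _ = curve_fit(blurred_cortex, x, y, p0=p0)
print(f"estimated thickness = {popt[1]:.2f} mm, cortical density = {popt[2]:.0f} mg/cm^3")
```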
Infrared thermography for wood density estimation
NASA Astrophysics Data System (ADS)
López, Gamaliel; Basterra, Luis-Alfonso; Acuña, Luis
2018-03-01
Infrared thermography (IRT) is becoming a commonly used technique to non-destructively inspect and evaluate wood structures. Based on the radiation emitted by all objects, this technique enables the remote visualization of surface temperature, without contact, using a thermographic device. The process of transforming radiant energy into temperature depends on many parameters, and interpreting the results is usually complicated. However, some works have analyzed the operation of IRT and expanded its applications, as found in the latest literature. This work analyzes the effect of density on the thermodynamic behavior of timber as determined by IRT. The cooling of various wood samples has been recorded, and a statistical procedure has been designed that enables one to quantitatively estimate the density of timber. This procedure represents a new method to physically characterize this material.
Optimal estimation for the satellite attitude using star tracker measurements
NASA Technical Reports Server (NTRS)
Lo, J. T.-H.
1986-01-01
An optimal estimation scheme is presented, which determines the satellite attitude using the gyro readings and the star tracker measurements of a commonly used satellite attitude measuring unit. The scheme is mainly based on the exponential Fourier densities that have the desirable closure property under conditioning. By updating a finite and fixed number of parameters, the conditional probability density, which is an exponential Fourier density, is recursively determined. Simulation results indicate that the scheme is more accurate and robust than extended Kalman filtering. It is believed that this approach is applicable to many other attitude measuring units. As no linearization and approximation are necessary in the approach, it is ideal for systems involving high levels of randomness and/or low levels of observability and systems for which accuracy is of overriding importance.
Robust estimation of mammographic breast density: a patient-based approach
NASA Astrophysics Data System (ADS)
Heese, Harald S.; Erhard, Klaus; Gooßen, Andre; Bulow, Thomas
2012-02-01
Breast density has become an established risk indicator for developing breast cancer. Current clinical practice reflects this by grading mammograms patient-wise as entirely fat, scattered fibroglandular, heterogeneously dense, or extremely dense based on visual perception. Existing (semi-) automated methods work on a per-image basis and mimic clinical practice by calculating an area fraction of fibroglandular tissue (mammographic percent density). We suggest a method that follows clinical practice more strictly by segmenting the fibroglandular tissue portion directly from the joint data of all four available mammographic views (cranio-caudal and medio-lateral oblique, left and right), and by subsequently calculating a consistently patient-based mammographic percent density estimate. In particular, each mammographic view is first processed separately to determine a region of interest (ROI) for segmentation into fibroglandular and adipose tissue. ROI determination includes breast outline detection via edge-based methods, peripheral tissue suppression via geometric breast height modeling, and - for medio-lateral oblique views only - pectoral muscle outline detection based on optimizing a three-parameter analytic curve with respect to local appearance. Intensity harmonization based on separately acquired calibration data is performed with respect to compression height and tube voltage to facilitate joint segmentation of available mammographic views. A Gaussian mixture model (GMM) on the joint histogram data with a posteriori calibration guided plausibility correction is finally employed for tissue separation. The proposed method was tested on patient data from 82 subjects. Results show excellent correlation (r = 0.86) to radiologist's grading, with deviations ranging between -28% (q = 0.025) and +16% (q = 0.975).
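A minimal sketch of the final tissue-separation step only, assuming pixel intensities have already been pooled from the four views and harmonized; ROI detection, pectoral-muscle removal, calibration, and the plausibility correction are omitted, and all data are placeholders.

```python
# Minimal sketch of the final tissue-separation step: a two-component
# Gaussian mixture on (already harmonized) pixel intensities inside the
# breast ROI, with percent density taken as the fraction of pixels
# assigned to the brighter (fibroglandular) component.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# Placeholder intensities pooled from all four views of one patient.
adipose = rng.normal(60, 8, 40000)
fibroglandular = rng.normal(110, 12, 15000)
intensities = np.concatenate([adipose, fibroglandular]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
labels = gmm.predict(intensities)
dense_label = int(np.argmax(gmm.means_.ravel()))   # brighter component
percent_density = 100.0 * np.mean(labels == dense_label)
print(f"mammographic percent density ~ {percent_density:.1f}%")
```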
NASA Astrophysics Data System (ADS)
Bai, Cheng-lin; Cheng, Zhi-hui
2016-09-01
In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system performance in the cases of quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) modes. The simulation results indicate that this algorithm can enlarge the frequency and phase offset estimation ranges and greatly enhance the accuracy of the system, and the bit error rate (BER) performance of the system is improved effectively compared with that of a system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.
Genomic selection in plant breeding
USDA-ARS?s Scientific Manuscript database
Genomic selection (GS) is a method to predict the genetic value of selection candidates based on the genomic estimated breeding value (GEBV) predicted from high-density markers positioned throughout the genome. Unlike marker-assisted selection, the GEBV is based on all markers including both minor ...
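A hedged sketch of genomic prediction in the spirit described above, using a ridge-regression (RR-BLUP-like) model to predict GEBVs of selection candidates from genome-wide markers; the statistical model, data, and regularization strength are assumptions, not details from the entry.

```python
# Sketch of genomic prediction: genomic estimated breeding values (GEBVs)
# for selection candidates are predicted from genome-wide markers with a
# ridge-regression (RR-BLUP-like) model. Data and alpha are placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(8)
n_train, n_cand, n_markers = 500, 100, 2000

X_train = rng.integers(0, 3, size=(n_train, n_markers)).astype(float)  # 0/1/2 genotypes
true_effects = rng.normal(0, 0.05, n_markers)
y_train = X_train @ true_effects + rng.normal(0, 1.0, n_train)         # phenotypes

X_candidates = rng.integers(0, 3, size=(n_cand, n_markers)).astype(float)

model = Ridge(alpha=100.0).fit(X_train, y_train)    # alpha is an assumption
gebv = model.predict(X_candidates)
ranking = np.argsort(gebv)[::-1]                    # best candidates first
print("top 5 candidates:", ranking[:5])
```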
Estimation of proportions in mixed pixels through their region characterization
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
A region of mixed pixels can be characterized through the probability density function of proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
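A short sketch of the two-class computational shortcut mentioned above: a transformation matrix that simultaneously diagonalizes the two class covariance matrices can be obtained from a generalized eigenvalue problem (the covariances below are synthetic placeholders).

```python
# Sketch of the two-class shortcut: a transformation matrix W obtained from
# a generalized eigenvalue problem simultaneously diagonalizes the class
# covariances (W^T S2 W = I, W^T S1 W diagonal).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(9)
d = 4
A1 = rng.standard_normal((d, d)); S1 = A1 @ A1.T + d * np.eye(d)   # class-1 covariance
A2 = rng.standard_normal((d, d)); S2 = A2 @ A2.T + d * np.eye(d)   # class-2 covariance

# Generalized eigenproblem S1 w = lambda S2 w.
eigvals, W = eigh(S1, S2)

D1 = W.T @ S1 @ W      # diagonal (entries = eigvals)
D2 = W.T @ S2 @ W      # identity
print(np.allclose(D1, np.diag(eigvals)), np.allclose(D2, np.eye(d)))
```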
Scott, Michael L.; Reynolds, Elizabeth W.
2007-01-01
Compared to 5-m by 20-m tree quadrats, belt transects were shown to provide similar estimates of stand structure (stem density and stand basal area) in less than 30 percent of the time. Further, for the streams sampled, there were no statistically significant differences in stem density and basal area estimates between 10-m and 20-m belt transects, and the smaller belts took approximately half the time to sample. There was, however, high variance associated with estimates of stand structure for infrequently occurring stems, such as large, relict or legacy riparian trees. Legacy riparian trees occurred in limited numbers at all sites sampled. A reach-scale population census of these trees indicated that the 10-m belt transects tended to underestimate both stem density and basal area for these riparian forest elements and that a complete reach-scale census of legacy trees averaged less than one hour per site.
Three methods for estimating a range of vehicular interactions
NASA Astrophysics Data System (ADS)
Krbálek, Milan; Apeltauer, Jiří; Apeltauer, Tomáš; Szabová, Zuzana
2018-02-01
We present three different approaches to estimating the number of preceding cars that influence the decision-making of a given driver moving in saturated traffic flows. The first method is based on correlation analysis, the second evaluates (quantitatively) deviations from the main assumption in the convolution theorem for probability, and the third operates with advanced instruments of the theory of counting processes (statistical rigidity). We demonstrate that the universally accepted premise of short-ranged traffic interactions may not be correct. All methods introduced reveal that the minimum number of actively followed vehicles is two. This supports the idea that vehicular interactions are, in fact, middle-ranged. Furthermore, the consistency between the estimations used is surprisingly credible. In all cases we have found that the interaction range (the number of actively followed vehicles) drops with traffic density. Whereas drivers moving in congested regimes with lower density (around 30 vehicles per kilometer) react to four or five neighbors, drivers moving in high-density flows respond to two predecessors only.
Detailed gravity anomalies from Geos 3 satellite altimetry data
NASA Technical Reports Server (NTRS)
Gopalapillai, G. S.; Mourad, A. G.
1979-01-01
Detailed gravity anomalies are computed from a combination of Geos 3 satellite altimeter and terrestrial gravity data using least-squares principles. The mathematical model used is based on Stokes' equation modified for a nonglobal solution. Using Geos 3 data in the calibration area, the effects of several anomaly parameter configurations and data densities/distributions on the anomalies and their accuracy estimates are studied. The accuracy estimates for 1 deg x 1 deg mean anomalies from low-density altimetry data are of the order of 4 mgal. Comparison of these anomalies with the terrestrial data and also with Rapp's data derived using collocation techniques shows rms differences of 7.2 and 4.9 mgal, respectively. Indications are that the anomaly accuracies can be improved to about 2 mgal with high-density data. Estimation of 30 arcmin x 30 arcmin mean anomalies indicates accuracies of the order of 5 mgal. Proper verification of these results will be possible only when accurate ground truth data become available.
Robust location and spread measures for nonparametric probability density function estimation.
López-Rubio, Ezequiel
2009-10-01
Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
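A brief sketch of the robust location estimate used above: the L1-median (geometric median), computed here with Weiszfeld's fixed-point iteration on synthetic data containing gross outliers.

```python
# Sketch of the L1-median (geometric median) via Weiszfeld's algorithm.
import numpy as np

def l1_median(X, n_iter=200, tol=1e-8):
    """Geometric median of the rows of X."""
    m = X.mean(axis=0)                      # start from the sample mean
    for _ in range(n_iter):
        d = np.linalg.norm(X - m, axis=1)
        d = np.where(d < 1e-12, 1e-12, d)   # avoid division by zero at data points
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            return m_new
        m = m_new
    return m

rng = np.random.default_rng(10)
data = rng.normal(0, 1, size=(200, 2))
data[:10] += 50.0                            # gross outliers
print("sample mean :", data.mean(axis=0))
print("L1-median   :", l1_median(data))      # barely affected by the outliers
```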
In this study, we used data from an OPC, an LOPC, and vertical net tows to estimate densities and describe the day/night vertical distribution of Mysis at a series of stations distributed throughout Lake Superior, and to evaluate the efficacy of using (L)OPC for examining DVM of...
Outside and inside noise exposure in urban and suburban areas
Dwight E. Bishop; Myles A. Simpson
1977-01-01
In urban and suburban areas of the United States (away from major airports), the outdoor noise environment usually depends strongly on local vehicular traffic. By relating traffic flow to population density, a model of outdoor noise exposure has been developed for estimating the cumulative 24-hour noise exposure based upon the population density of the area. This noise...
The detectability of brown dwarfs - Predictions and uncertainties
NASA Technical Reports Server (NTRS)
Nelson, L. A.; Rappaport, S.; Joss, P. C.
1993-01-01
In order to determine the likelihood for the detection of isolated brown dwarfs in ground-based observations as well as in future space-based astronomy missions, and in order to evaluate the significance of any detections that might be made, we must first know the expected surface density of brown dwarfs on the celestial sphere as a function of limiting magnitude, wavelength band, and Galactic latitude. It is the purpose of this paper to provide theoretical estimates of this surface density, as well as the range of uncertainty in these estimates resulting from various theoretical uncertainties. We first present theoretical cooling curves for low-mass stars that we have computed with the latest version of our stellar evolution code. We use our evolutionary results to compute theoretical brown-dwarf luminosity functions for a wide range of assumed initial mass functions and stellar birth rate functions. The luminosity functions, in turn, are utilized to compute theoretical surface density functions for brown dwarfs on the celestial sphere. We find, in particular, that for reasonable theoretical assumptions, the currently available upper bounds on the brown-dwarf surface density are consistent with the possibility that brown dwarfs contribute a substantial fraction of the mass of the Galactic disk.
Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R
2017-02-14
Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.
Measurement of Average Aggregate Density by Sedimentation and Brownian Motion Analysis.
Cavicchi, Richard E; King, Jason; Ripple, Dean C
2018-05-01
The spatially averaged density of protein aggregates is an important parameter that can be used to relate size distributions measured by orthogonal methods, to characterize protein particles, and perhaps to estimate the amount of protein in aggregate form in a sample. We obtained a series of images of protein aggregates exhibiting Brownian diffusion while settling under the influence of gravity in a sealed capillary. The aggregates were formed by stir-stressing a monoclonal antibody (NISTmAb). Image processing yielded particle tracks, which were then examined to determine settling velocity and hydrodynamic diameter down to 1 μm based on mean square displacement analysis. Measurements on polystyrene calibration microspheres ranging in size from 1 to 5 μm showed that the mean square displacement diameter had improved accuracy over the diameter derived from imaged particle area, suggesting a future method for correcting size distributions based on imaging. Stokes' law was used to estimate the density of each particle. It was found that the aggregates were highly porous, with density decreasing from 1.080 to 1.028 g/cm^3 as the size increased from 1.37 to 4.9 μm.
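A worked example of the Stokes'-law density estimate described above, solving the Stokes drag balance for particle density given a settling velocity and hydrodynamic diameter; the fluid properties and the particle values are assumptions, not numbers from the paper.

```python
# Stokes'-law density estimate: given a particle's settling velocity and
# hydrodynamic diameter, solve the Stokes drag balance for particle density.
# Fluid properties are assumed (water near 20 C); values are placeholders.
g = 9.81            # m/s^2
mu = 1.0e-3         # dynamic viscosity of water (Pa s), assumed
rho_fluid = 998.0   # kg/m^3, assumed

def particle_density(settling_velocity_m_s, diameter_m):
    """rho_p = rho_f + 18*mu*v / (g*d^2), from v = (rho_p - rho_f)*g*d^2/(18*mu)."""
    return rho_fluid + 18.0 * mu * settling_velocity_m_s / (g * diameter_m**2)

# Example: a 3-micrometer aggregate settling at 0.15 micrometers per second.
rho = particle_density(0.15e-6, 3.0e-6)
print(f"estimated density = {rho/1000:.3f} g/cm^3")
```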
NASA Astrophysics Data System (ADS)
Hayden, T. G.; Kominz, M. A.; Magens, D.; Niessen, F.
2009-12-01
We have estimated ice thicknesses at the AND-1B core during the Last Glacial Maximum by adapting an existing technique to calculate overburden. As ice thickness at the Last Glacial Maximum is unknown in existing ice sheet reconstructions, this analysis provides a constraint on model predictions. We analyze the porosity as a function of depth and lithology from measurements taken on the AND-1B core, and compare these results to a global dataset of marine, normally compacted sediments compiled from various legs of ODP and IODP. Using this dataset we are able to estimate the amount of overburden required to compact the sediments to the porosity observed in AND-1B. This analysis is a function of lithology, depth and porosity, and generates estimates ranging from zero to 1,000 meters. These overburden estimates are based on individual lithologies, and are translated into ice thickness estimates by accounting for both sediment and ice densities. To do this we use the simple relationship Xice = Xover * (ρsed/ρice), where Xover is the overburden thickness, ρsed is the sediment density (calculated from lithology and porosity), ρice is the density of glacial ice (taken as 0.85 g/cm^3), and Xice is the equivalent ice thickness. The final estimates vary considerably; however, the “Best Estimate” behavior of the two lithologies most likely to compact consistently is remarkably similar. These lithologies are the clay and silt units (Facies 2a/2b) and the diatomite units (Facies 1a) of AND-1B. Both lithologies produce best estimates of approximately 1,000 meters of ice during the Last Glacial Maximum. Additionally, while there is a large range of possible values, no combination of reasonable lithology, compaction, sediment density, or ice density values results in an estimate exceeding 1,900 meters of ice. This analysis only applies to ice thicknesses during the Last Glacial Maximum, due to the overprinting effect of the Last Glacial Maximum on previous ice advances. Analysis of the AND-2A core is underway, and results will be compared to those of AND-1B.
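A worked example of the simple overburden-to-ice relation quoted above; the bulk sediment density used here is a placeholder, while rho_ice = 0.85 g/cm^3 follows the abstract.

```python
# Worked example of X_ice = X_over * (rho_sed / rho_ice).
rho_ice = 0.85          # g/cm^3 (value from the abstract)
rho_sed = 1.7           # g/cm^3, placeholder bulk wet-sediment density

def ice_equivalent_thickness(overburden_m, sediment_density=rho_sed):
    return overburden_m * sediment_density / rho_ice

# Example: 500 m of estimated overburden with the assumed sediment density.
print(f"{ice_equivalent_thickness(500.0):.0f} m of ice")   # -> 1000 m
```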
NASA Astrophysics Data System (ADS)
Hu, Y.; Vaughan, M.; McClain, C.; Behrenfeld, M.; Maring, H.; Anderson, D.; Sun-Mack, S.; Flittner, D.; Huang, J.; Wielicki, B.; Minnis, P.; Weimer, C.; Trepte, C.; Kuehn, R.
2007-03-01
This study presents an empirical relation that links layer integrated depolarization ratios, the extinction coefficients, and effective radii of water clouds, based on Monte Carlo simulations of CALIPSO lidar observations. Combined with cloud effective radius retrieved from MODIS, cloud liquid water content and effective number density of water clouds are estimated from CALIPSO lidar depolarization measurements in this study. Global statistics of the cloud liquid water content and effective number density are presented.
Yu, Dapao; Wang, Xiaoyu; Yin, You; Zhan, Jinyu; Lewis, Bernard J.; Tian, Jie; Bao, Ye; Zhou, Wangming; Zhou, Li; Dai, Limin
2014-01-01
Accurate estimates of forest carbon storage and changes in storage capacity are critical for scientific assessment of the effects of forest management on the role of forests as carbon sinks. Up to now, several studies have reported forest biomass carbon (FBC) in Liaoning Province based on data from China's Continuous Forest Inventory; however, their accuracy was still not known. This study compared estimates of FBC in Liaoning Province derived from different methods. We found substantial variation in estimates of FBC storage for young and middle-age forests. For provincial forests with high proportions in these age classes, the continuous biomass expansion factor method (CBM) by forest type with age class is more accurate and therefore more appropriate for estimating forest biomass. Based on the above approach designed for this study, forests in Liaoning Province were found to be a carbon sink, with carbon stocks increasing from 63.0 TgC in 1980 to 120.9 TgC in 2010, reflecting an annual increase of 1.9 TgC. The average carbon density of forest biomass in the province increased from 26.2 Mg ha−1 in 1980 to 31.0 Mg ha−1 in 2010. While the largest FBC occurred in middle-age forests, the average carbon density decreased in this age class during these three decades. The increase in forest carbon density resulted primarily from the increased area and carbon storage of mature forests. The relatively long age interval in each age class for slow-growing forest types increased the uncertainty of FBC estimates by CBM-forest type with age class, and further studies should devote more attention to the time span of age classes in establishing biomass expansion factors for use in CBM calculations. PMID:24586881
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
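A short sketch of the sparse precision-matrix baseline referenced above (the graphical lasso), not the proposed score-matching estimator; the data are synthetic and the chain-graph ground truth is an assumption.

```python
# Sketch of the graphical-lasso baseline: estimate a sparse precision
# matrix (Gaussian graphical model) from samples with scikit-learn.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(11)
p = 10
# Build a sparse ground-truth precision matrix (a chain graph).
prec = np.eye(p)
for i in range(p - 1):
    prec[i, i + 1] = prec[i + 1, i] = 0.4
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=500)

model = GraphicalLassoCV().fit(X)
est_prec = model.precision_
edges = np.abs(est_prec) > 1e-3
print("estimated nonzero off-diagonal entries:", (edges.sum() - p) // 2)
```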
Liu, Lanbo; Chao, Benjamin F; Sun, Wenke; Kuang, Weijia
2016-11-01
In this paper we report the assessment of the effect of the three-dimensional (3D) density heterogeneity in the mantle on Earth Orientation Parameters (EOP) (i.e., the polar motion, or PM, and the length of day, or LOD) in the tidal frequencies. The 3D mantle density model is estimated based upon a global S-wave velocity tomography model (S16U6L8) and mineralogical knowledge derived from laboratory experiments. The lateral density variation is referenced against the Preliminary Reference Earth Model (PREM). Using this approach, the effects of the heterogeneous mantle density variation in all three tidal frequencies (zonal long periods, tesseral diurnal, and sectorial semidiurnal) are estimated in both PM and LOD. When compared with mass or density perturbations originating at the Earth's surface, such as oceanic and barometric changes, the heterogeneous mantle contributes less than 10% of the total variation in PM and LOD in tidal frequencies. Nevertheless, taking the 3D variation of mantle density into account explained a substantial portion of the discrepancy between the observed signals in PM and LOD extracted from the lump-sum values based on continuous space geodetic measurement campaigns (e.g., CONT94) and the computed contribution from ocean tides as predicted by tide models derived from satellite altimetry observations (e.g., TOPEX/Poseidon). In other words, the difference between the two at all tidal frequencies (long-period, diurnal, and semidiurnal) contains contributions from the lateral density heterogeneity of the mantle. Study of the effect of mantle density heterogeneity on torque-free Earth rotation may provide useful constraints for constructing the Reference Earth Model (REM), which is the next major objective in global geophysics research beyond PREM.
Mercury's lithospheric thickness and crustal density, as inferred from MESSENGER observations
NASA Astrophysics Data System (ADS)
James, P. B.; Mazarico, E.; Genova, A.; Smith, D. E.; Neumann, G. A.; Solomon, S. C.
2015-12-01
The gravity field and topography of Mercury measured by the MESSENGER spacecraft have provided insights into the thickness of the planet's elastic lithosphere, Te. We localized the HgM006 free-air gravity anomaly and gtmes_125v03 shape datasets to search for theoretical elastic thickness solutions that best fit a variety of localized coherence spectra between Bouguer gravity anomaly and topography. We adopted a crustal density of ρcrust = 2700 kg m^-3 for the Bouguer gravity correction, but density uncertainty did not markedly affect the elastic thickness estimates. A best-fit solution in the northern smooth plains (NSP) gives an elastic thickness of Te = 30-60 km at the time of formation of topography for a range of ratios of top to bottom loading from 1 to 5. For a mechanical lithosphere with a thickness of ~2Te and a temperature of 1600 °C at the base, this solution is consistent with a geothermal gradient of 9-18 K km^-1. A similar coherence analysis exterior to the NSP produces an elastic thickness estimate of Te = 20-50 km, albeit with a poorer fit. Coherence in the northern hemisphere as a whole does not approach zero at any wavelength, because of the presence of variations in crustal thickness that are unassociated with elastic loading. The ratios and correlations of gravity and topography at intermediate wavelengths (harmonic degree l between 30 and 50) also constrain regional crustal densities. We localized gravity and topography with a moving Slepian taper and calculated regionally averaged crustal densities with the approximation ρcrust = Zl/(2πG), where Zl is the localized admittance and G is the gravitational constant. The only regional density estimates greater than 2000 kg m^-3 for l = 30 correspond to the NSP. Density estimates outside of the NSP were unreasonably low, even for highly porous crust. We attribute these low densities to the confounding effects of crustal thickness variations and Kaula filtering of the gravity dataset at the highest harmonic degrees, both of which tend to introduce a downward bias to crustal density estimation. An alternative analysis, which corrected for crustal thickness variability and was restricted to regions with gravity/topography coherence greater than 0.6, yielded an aggregate crustal density of ρcrust = 2602 ± 470 kg m^-3 for Mercury's high northern latitudes.
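A worked example of the admittance approximation quoted above, ρcrust = Zl/(2πG); the admittance value used here is a placeholder chosen only to illustrate the arithmetic and unit handling, not a MESSENGER result.

```python
# Worked example of rho_crust = Z_l / (2*pi*G), relating a localized
# gravity/topography admittance to a regional crustal density.
import math

G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2

def crustal_density(admittance_mgal_per_km):
    # 1 mGal = 1e-5 m/s^2 and 1/km = 1e-3 1/m, so 1 mGal/km = 1e-8 s^-2.
    Z_si = admittance_mgal_per_km * 1e-8
    return Z_si / (2.0 * math.pi * G)

print(f"{crustal_density(110.0):.0f} kg m^-3")   # ~2600 kg m^-3 for Z ~ 110 mGal/km
```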
NASA Astrophysics Data System (ADS)
Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.
2011-12-01
Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.
Balk, Benjamin; Elder, Kelly
2000-01-01
We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9‐km2 Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large‐scale variations in snow depth, while the small‐scale variations were modeled through kriging interpolation methods. Binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54–65% of the observed variance in the depth measurements. The tree‐based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree‐based modeled depths to produce a combined depth model. The combined depth estimates explained 60–85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow‐covered area was determined from high‐resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
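A minimal sketch of the two-stage idea described above (a regression tree for the large-scale trend, ordinary kriging of the residuals), assuming scikit-learn and pykrige are available; the coordinates, predictors, and depths are synthetic stand-ins for the LVWS survey data.

```python
# Sketch of a two-stage snow-depth model: a regression tree captures the
# large-scale trend, ordinary kriging interpolates the residuals.
# All data arrays below are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from pykrige.ok import OrdinaryKriging   # assumes pykrige is installed

rng = np.random.default_rng(0)
n = 200
x, y = rng.uniform(0, 5000, n), rng.uniform(0, 5000, n)      # coordinates (m)
X = np.column_stack([rng.uniform(50, 300, n),                 # net solar radiation
                     rng.uniform(3000, 4000, n),              # elevation
                     rng.uniform(0, 40, n),                   # slope
                     rng.integers(0, 4, n)])                  # vegetation class
depth = 0.002 * X[:, 1] - 0.01 * X[:, 0] + rng.normal(0, 0.3, n)

tree = DecisionTreeRegressor(max_depth=5, min_samples_leaf=10).fit(X, depth)
residuals = depth - tree.predict(X)

ok = OrdinaryKriging(x, y, residuals, variogram_model="spherical")
res_hat, _ = ok.execute("points", x, y)       # kriged residuals at target points

combined_depth = tree.predict(X) + res_hat    # tree trend + kriged residual
```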
Duell, L. F. W.
1988-01-01
In Owens Valley, evapotranspiration (ET) is one of the largest components of outflow in the hydrologic budget and the least understood. ET estimates for December 1983 through October 1985 were made for seven representative locations selected on the basis of geohydrology and the characteristics of phreatophytic alkaline scrub and meadow communities. The Bowen-ratio, eddy-correlation, and Penman-combination methods were used to estimate ET. The results of the analyses appear satisfactory when compared to other estimates of ET. Results by the eddy-correlation method are for a direct and a residual latent-heat flux that is based on sensible-heat flux and energy budget measurements. Penman-combination potential ET estimates were determined to be unusable because they overestimated actual ET. Modification of the psychrometer constant of this method to account for differences between heat-diffusion resistance and vapor-diffusion resistance permitted actual ET to be estimated. The methods may be used for studies in similar semiarid and arid rangeland areas in the Western United States. Meteorological data for three field sites are included in the appendix. Simple linear regression analysis indicates that ET estimates are correlated to air temperature, vapor-density deficit, and net radiation. Estimates of annual ET range from 300 mm at a low-density scrub site to 1,100 mm at a high-density meadow site. The monthly percentage of annual ET was determined to be similar for all sites studied. (Author's abstract)
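For reference, a hedged sketch of the Bowen-ratio energy-balance calculation named above; the psychrometric constant, fluxes, and gradients are generic illustrative values, not the Owens Valley measurements.

```python
# Hedged sketch of the Bowen-ratio energy-balance ET estimate.
GAMMA = 0.066      # psychrometric constant, kPa/K (near sea level; site-dependent)
LAMBDA = 2.45e6    # latent heat of vaporization, J/kg

def bowen_ratio_et(rn, g, dT, de):
    """Latent heat flux (W/m^2) and ET rate (mm/hr) from the Bowen-ratio method.

    rn: net radiation (W/m^2), g: soil heat flux (W/m^2),
    dT: air temperature difference between two heights (K),
    de: vapor pressure difference between the same heights (kPa).
    """
    beta = GAMMA * dT / de            # Bowen ratio H/LE
    le = (rn - g) / (1.0 + beta)      # latent heat flux
    et_mm_hr = le / LAMBDA * 3600.0   # kg m^-2 s^-1 == mm/s, scaled to mm/hr
    return le, et_mm_hr

print(bowen_ratio_et(rn=450.0, g=50.0, dT=1.2, de=0.35))
```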
Large Scale Density Estimation of Blue and Fin Whales (LSD)
2015-09-30
...sensors, or both. The goal of this research is to develop and implement a new method for estimating blue and fin whale density that is effective over... develop and implement a density estimation methodology for quantifying blue and fin whale abundance from passive acoustic data recorded on sparse...
Estimating Small-Body Gravity Field from Shape Model and Navigation Data
NASA Technical Reports Server (NTRS)
Park, Ryan S.; Werner, Robert A.; Bhaskaran, Shyam
2008-01-01
This paper presents a method to model the external gravity field and to estimate the internal density variation of a small body. We first discuss the modeling problem, where we assume the polyhedral shape and internal density distribution are given, and model the body interior using finite element definitions, such as cubes and spheres. The gravitational attractions computed from these approaches are compared with the true uniform-density polyhedral attraction and the levels of accuracy are presented. We then discuss the inverse problem where we assume the body shape, radiometric measurements, and a priori density constraints are given, and estimate the internal density variation by estimating the density of each finite element. The result shows that the accuracy of the estimated density variation can be significantly improved depending on the orbit altitude, finite-element resolution, and measurement accuracy.
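A toy version of the forward problem described above, assuming the finite elements can be treated as point masses at their centers; the body geometry and density are invented for illustration.

```python
# Sketch: approximate a small body's gravity by summing the attractions of
# finite-element cubes treated as point masses at their centers.
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def gravity_from_elements(centers, densities, cell_volume, field_point):
    """Gravitational acceleration vector at field_point from point-mass cells."""
    r = centers - field_point                      # vectors toward each cell
    dist = np.linalg.norm(r, axis=1)
    masses = densities * cell_volume
    # a = G * sum(m_i * r_i / |r_i|^3)
    return G * np.sum((masses / dist**3)[:, None] * r, axis=0)

# Toy body: a 10x10x10 grid of 50 m cubes with uniform density 2000 kg/m^3
grid = np.arange(10) * 50.0
centers = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T
acc = gravity_from_elements(centers, np.full(len(centers), 2000.0), 50.0**3,
                            field_point=np.array([1500.0, 250.0, 250.0]))
print(acc)
```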
Nowak, J A; Forouzandeh, B; Nowak, J A
1997-09-01
Helicobacter pylori inhabits the gastric mucus layer of infected persons. A number of investigators have reported the feasibility of detecting H pylori in gastric mucus with polymerase chain reaction (PCR)-based methods. We have established the sensitivity of a simple PCR assay for detecting H pylori in gastric mucus samples and estimate that the density of H pylori organisms in the gastric mucus of untreated patients is approximately 10^7 to 10^8 organisms per milliliter. We have similarly estimated the analytic sensitivities of histologic examination and the CLOtest (TRI-MED Specialties, Overland Park, Kan) for detecting H pylori and calculate similar values for the numbers of organisms in the gastric mucus layer. Our data indicate that gastric mucus is a suitable specimen for the detection of H pylori in infected patients, and that PCR-based assays of gastric mucus are significantly more sensitive than histologic testing or the CLOtest for demonstration of H pylori infection.
Impact of density information on Rayleigh surface wave inversion results
NASA Astrophysics Data System (ADS)
Ivanov, Julian; Tsoflias, Georgios; Miller, Richard D.; Peterie, Shelby; Morton, Sarah; Xia, Jianghai
2016-12-01
We assessed the impact of density on the estimation of inverted shear-wave velocity (Vs) using the multi-channel analysis of surface waves (MASW) method. We considered the forward modeling theory, evaluated model sensitivity, and tested the effect of density information on the inversion of seismic data acquired in the Arctic. Theoretical review, numerical modeling and inversion of modeled and real data indicated that the density ratios between layers, not the actual density values, impact the determination of surface-wave phase velocities. Application to real data compared surface-wave inversion results using: a) constant density, the most common approach in practice; b) indirect density estimates derived from refraction compressional-wave velocity observations; and c) direct density measurements in a borehole. The use of indirect density estimates reduced the final shear-wave velocity (Vs) results typically by 6-7% and the use of densities from a borehole reduced the final Vs estimates by 10-11% compared to those from assumed constant density. In addition to the improved absolute Vs accuracy, the resulting overall Vs changes were unevenly distributed laterally when viewed on a 2-D section, leading to an overall Vs model structure that was more representative of the subsurface environment. It was observed that the use of constant density instead of increasing density with depth can not only lead to Vs overestimation but also create inaccurate model structures, such as a low-velocity layer. Thus, optimal Vs estimations can be best achieved using field estimates of subsurface density ratios.
Use of spatial capture–recapture to estimate density of Andean bears in northern Ecuador
Molina, Santiago; Fuller, Angela K.; Morin, Dana J.; Royle, J. Andrew
2017-01-01
The Andean bear (Tremarctos ornatus) is the only extant species of bear in South America and is considered threatened across its range and endangered in Ecuador. Habitat loss and fragmentation is considered a critical threat to the species, and there is a lack of knowledge regarding its distribution and abundance. The species is thought to occur at low densities, making field studies designed to estimate abundance or density challenging. We conducted a pilot camera-trap study to estimate Andean bear density in a recently identified population of Andean bears northwest of Quito, Ecuador, during 2012. We compared 12 candidate spatial capture–recapture models including covariates on encounter probability and density and estimated a density of 7.45 bears/100 km2 within the region. In addition, we estimated that approximately 40 bears used a recently named Andean bear corridor established by the Secretary of Environment, and we produced a density map for this area. Use of a rub-post with vanilla scent attractant allowed us to capture numerous photographs for each event, improving our ability to identify individual bears by unique facial markings. This study provides the first empirically derived density estimate for Andean bears in Ecuador and should provide direction for future landscape-scale studies interested in conservation initiatives requiring spatially explicit estimates of density.
NASA Astrophysics Data System (ADS)
Slobbe, D. C.; Ditmar, P.; Lindenbergh, R. C.
2009-01-01
The focus of this paper is on the quantification of ongoing mass and volume changes over the Greenland ice sheet. For that purpose, we used elevation changes derived from the Ice, Cloud, and land Elevation Satellite (ICESat) laser altimetry mission and monthly variations of the Earth's gravity field as observed by the Gravity Recovery and Climate Experiment (GRACE) mission. Based on a stand-alone processing scheme of ICESat data, the most probable estimate of the mass change rate from February 2003 to April 2007 equals -139 +/- 68 Gton yr-1. Here, we used a density of 600 +/- 300 kg m-3 to convert the estimated elevation change rate in the region above 2000 m into a mass change rate. For the region below 2000 m, we used a density of 900 +/- 300 kg m-3. Based on GRACE gravity models from mid-2002 to mid-2007 as processed by CNES, CSR, DEOS and GFZ, the estimated mass change rate for the whole of Greenland ranges between -128 and -218 Gton yr-1. Most GRACE solutions show much stronger mass losses than obtained with ICESat, which might be related to a local undersampling of the mass loss by ICESat and uncertainties in the used snow/ice densities. To solve the problem of uncertainties in the snow and ice densities, two independent joint inversion concepts are proposed to profit from both GRACE and ICESat observations simultaneously. The first concept, developed to reduce the uncertainty of the mass change rate, estimates this rate in combination with an effective snow/ice density. However, it turns out that the uncertainties are not reduced, which is probably caused by the unrealistic assumption that the effective density is constant in space and time. The second concept is designed to convert GRACE and ICESat data into two totally new products: variations of ice volume and variations of snow volume separately. Such an approach is expected to lead to new insights into ongoing mass change processes over the Greenland ice sheet. Our results show for different GRACE solutions a snow volume change of -11 to 155 km3 yr-1 and an ice loss with a rate of -136 to -292 km3 yr-1.
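A small sketch of the elevation-to-mass conversion and its density-driven uncertainty, in the spirit of the ICESat processing described above; the area and elevation-change rate below are placeholders, and only the density assumption mirrors the abstract.

```python
# Rough sketch: convert an elevation-change rate to a mass-change rate with an
# assumed snow/ice density, propagating the density uncertainty.
def mass_rate_gt_per_yr(area_km2, dhdt_m_per_yr, rho, rho_sigma):
    """Mass change rate (Gton/yr) and its density-driven uncertainty."""
    dvdt = area_km2 * 1e6 * dhdt_m_per_yr          # volume change, m^3/yr
    dmdt = dvdt * rho / 1e12                        # kg -> Gton
    sigma = abs(dvdt) * rho_sigma / 1e12
    return dmdt, sigma

# e.g. a region above 2000 m with density 600 +/- 300 kg/m^3 (area and dh/dt invented)
print(mass_rate_gt_per_yr(area_km2=1.2e6, dhdt_m_per_yr=-0.05, rho=600, rho_sigma=300))
```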
Sandman, Antonia Nyström; Näslund, Johan; Gren, Ing-Marie; Norling, Karl
2018-05-05
Macrofaunal activities in sediments modify nutrient fluxes in different ways, including the expression of species-specific functional traits and density-dependent population processes. The invasive polychaete genus Marenzelleria was first observed in the Baltic Sea in the 1980s. It has caused changes in benthic processes and affected the functioning of ecosystem services such as nutrient regulation. The large-scale effects of these changes are not known. We estimated the current Marenzelleria spp. wet weight biomass in the Baltic Sea to be 60-87 kton (95% confidence interval). We assessed the potential impact of Marenzelleria spp. on phosphorus cycling using a spatially explicit model, comparing estimates of expected sediment-to-water phosphorus fluxes from a biophysical model to ecologically relevant experimental measurements of benthic phosphorus flux. The estimated yearly net increases (95% CI) in phosphorus flux due to Marenzelleria spp. were 4.2-6.1 kton based on the biophysical model and 6.3-9.1 kton based on experimental data. The current biomass densities of Marenzelleria spp. in the Baltic Sea enhance the phosphorus fluxes from sediment to water on a sea basin scale. Although high densities of Marenzelleria spp. can increase phosphorus retention locally, such biomass densities are uncommon. Thus, the major effect of Marenzelleria seems to be a large-scale net decrease in the self-cleaning capacity of the Baltic Sea that counteracts human efforts to mitigate eutrophication in the region.
Mineral deposit densities for estimating mineral resources
Singer, Donald A.
2008-01-01
Estimates of numbers of mineral deposits are fundamental to assessing undiscovered mineral resources. Just as frequencies of grades and tonnages of well-explored deposits can be used to represent the grades and tonnages of undiscovered deposits, the density of deposits (deposits/area) in well-explored control areas can serve to represent the number of deposits. Empirical evidence presented here indicates that the processes affecting the number and quantity of resources in geological settings are very general across many types of mineral deposits. For podiform chromite, porphyry copper, and volcanogenic massive sulfide deposit types, the size of tract that geologically could contain the deposits is an excellent predictor of the total number of deposits. The number of mineral deposits is also proportional to the type's size. The total amount of mineralized rock is also proportional to the size of the permissive area and the median size of the deposit type. Regressions using these variables provide a means to estimate the density of deposits and the total amount of mineralization. These powerful estimators are based on analysis of ten different types of mineral deposits (Climax Mo, Cuban Mn, Cyprus massive sulfide, Franciscan Mn, kuroko massive sulfide, low-sulfide quartz-Au vein, placer Au, podiform Cr, porphyry Cu, and W vein) from 108 permissive control tracts around the world, thereby generalizing across deposit types. Despite the diverse and complex geological settings of deposit types studied here, the relationships observed indicate universal controls on the accumulation and preservation of mineral resources that operate across all scales. The strength of the relationships (R2 = 0.91 for density and 0.95 for mineralized rock) argues for their broad use. Deposit densities can now be used to provide a guideline for expert judgment or used directly for estimating the number of most kinds of mineral deposits.
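To make the regression idea concrete, the following sketch fits a log-log relationship between deposit density and permissive tract area on synthetic data; the data and coefficients are hypothetical, not Singer's published estimators.

```python
# Hypothetical illustration of a log-log regression of deposit density
# (deposits per unit area) on permissive tract area.
import numpy as np

rng = np.random.default_rng(1)
tract_area_km2 = 10 ** rng.uniform(2, 6, 108)                  # 108 control tracts
n_deposits = np.maximum(1, (0.02 * tract_area_km2 ** 0.75 *
                            rng.lognormal(0, 0.4, 108)).round())
density = n_deposits / tract_area_km2                          # deposits per km^2

slope, intercept = np.polyfit(np.log10(tract_area_km2), np.log10(density), 1)

def predicted_density(area_km2):
    """Deposit density (deposits/km^2) predicted for a new permissive tract."""
    return 10 ** (intercept + slope * np.log10(area_km2))

print(predicted_density(5e4))
```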
Ge, Zhenpeng; Wang, Yi
2017-04-20
Molecular dynamics simulations of nanoparticles (NPs) are increasingly used to study their interactions with various biological macromolecules. Such simulations generally require detailed knowledge of the surface composition of the NP under investigation. Even for some well-characterized nanoparticles, however, this knowledge is not always available. An example is nanodiamond, a nanoscale diamond particle with a surface dominated by oxygen-containing functional groups. In this work, we explore using the harmonic restraint method developed by Venable et al. to estimate the surface charge density (σ) of nanodiamonds. Based on the Gouy-Chapman theory, we convert the experimentally determined zeta potential of a nanodiamond to an effective charge density (σeff), and then use the latter to estimate σ via molecular dynamics simulations. Through scanning a series of nanodiamond models, we show that the above method provides a straightforward protocol to determine the surface charge density of relatively large (>∼100 nm) NPs. Overall, our results suggest that despite certain limitations, the above protocol can be readily employed to guide the model construction for MD simulations, which is particularly useful when only limited experimental information on the NP surface composition is available to a modeler.
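A hedged sketch of the Gouy-Chapman (Grahame) conversion from zeta potential to an effective surface charge density mentioned above; the zeta potential and ionic strength are illustrative values, and this is not the authors' simulation protocol.

```python
# Gouy-Chapman (Grahame) relation: effective surface charge density from a
# measured zeta potential, for a symmetric z:z electrolyte.
import math

EPS0, KB, E, NA = 8.854e-12, 1.381e-23, 1.602e-19, 6.022e23

def effective_charge_density(zeta_volt, ionic_strength_molar, eps_r=78.5,
                             temp=298.15, z=1):
    """sigma_eff (C/m^2) via sigma = sqrt(8*eps0*eps_r*n0*kB*T)*sinh(z*e*zeta/(2*kB*T))."""
    n0 = ionic_strength_molar * 1e3 * NA           # ions per m^3
    prefac = math.sqrt(8.0 * EPS0 * eps_r * n0 * KB * temp)
    return prefac * math.sinh(z * E * zeta_volt / (2.0 * KB * temp))

# e.g. zeta = -40 mV in a 10 mM 1:1 electrolyte -> roughly -0.01 C/m^2
print(effective_charge_density(-0.040, 0.010))
```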
Nonmechanistic forecasts of seasonal influenza with iterative one-week-ahead distributions.
Brooks, Logan C; Farrow, David C; Hyun, Sangwon; Tibshirani, Ryan J; Rosenfeld, Roni
2018-06-15
Accurate and reliable forecasts of seasonal epidemics of infectious disease can assist in the design of countermeasures and increase public awareness and preparedness. This article describes two main contributions we made recently toward this goal: a novel approach to probabilistic modeling of surveillance time series based on "delta densities", and an optimization scheme for combining output from multiple forecasting methods into an adaptively weighted ensemble. Delta densities describe the probability distribution of the change between one observation and the next, conditioned on available data; chaining together nonparametric estimates of these distributions yields a model for an entire trajectory. Corresponding distributional forecasts cover more observed events than alternatives that treat the whole season as a unit, and improve upon multiple evaluation metrics when extracting key targets of interest to public health officials. Adaptively weighted ensembles integrate the results of multiple forecasting methods, such as delta density, using weights that can change from situation to situation. We treat selection of optimal weightings across forecasting methods as a separate estimation task, and describe an estimation procedure based on optimizing cross-validation performance. We consider some details of the data generation process, including data revisions and holiday effects, both in the construction of these forecasting methods and when performing retrospective evaluation. The delta density method and an adaptively weighted ensemble of other forecasting methods each improve significantly on the next best ensemble component when applied separately, and achieve even better cross-validated performance when used in conjunction. We submitted real-time forecasts based on these contributions as part of CDC's 2015/2016 FluSight Collaborative Comparison. Among the fourteen submissions that season, this system was ranked by CDC as the most accurate.
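A minimal sketch of the delta-density idea: estimate the distribution of week-to-week changes nonparametrically and chain sampled deltas into forecast trajectories. This unconditional KDE (via scipy) is a simplification of the conditional densities described above, and the series is synthetic.

```python
# Sketch of "delta densities": model week-to-week changes with a KDE and chain
# sampled deltas to build distributional forecasts for the rest of the season.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
history = np.cumsum(rng.normal(0.1, 0.4, 30)) + 1.5   # toy surveillance series
deltas = np.diff(history)
delta_kde = gaussian_kde(deltas)

def forecast_trajectories(last_value, horizon=4, n_sims=1000):
    """Sample forward trajectories by chaining draws from the delta density."""
    steps = delta_kde.resample(horizon * n_sims)[0].reshape(horizon, n_sims)
    return last_value + np.cumsum(steps, axis=0)       # shape (horizon, n_sims)

trajs = forecast_trajectories(history[-1])
print(np.percentile(trajs[-1], [10, 50, 90]))          # 4-week-ahead quantiles
```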
NASA Astrophysics Data System (ADS)
Giorli, Giacomo; Drazen, Jeffrey C.; Neuheimer, Anna B.; Copeland, Adrienne; Au, Whitlow W. L.
2018-01-01
Pelagic animals that form deep sea scattering layers (DSLs) represent an important link in the food web between zooplankton and top predators. While estimating the composition, density and location of the DSL is important to understand mesopelagic ecosystem dynamics and to predict top predators' distribution, DSL composition and density are often estimated from trawls which may be biased in terms of extrusion, avoidance, and gear-associated biases. Instead, location and biomass of DSLs can be estimated from active acoustic techniques, though estimates are often in aggregate without regard to size or taxon specific information. For the first time in the open ocean, we used a DIDSON sonar to characterize the fauna in DSLs. Estimates of the numerical density and length of animals at different depths and locations along the Kona coast of the Island of Hawaii were determined. Data were collected below and inside the DSLs with the sonar mounted on a profiler. A total of 7068 animals were counted and sized. We estimated numerical densities ranging from 1 to 7 animals/m3 and individuals as long as 3 m were detected. These numerical densities were orders of magnitude higher than those estimated from trawls and average sizes of animals were much larger as well. A mixed model was used to characterize numerical density and length of animals as a function of deep sea layer sampled, location, time of day, and day of the year. Numerical density and length of animals varied by month, with numerical density also a function of depth. The DIDSON proved to be a good tool for open-ocean/deep-sea estimation of the numerical density and size of marine animals, especially larger ones. Further work is needed to understand how this methodology relates to estimates of volume backscatters obtained with standard echosounding techniques, density measures obtained with other sampling methodologies, and to precisely evaluate sampling biases.
NASA Astrophysics Data System (ADS)
Feng, Xinzeng; Hormuth, David A.; Yankeelov, Thomas E.
2018-06-01
We present an efficient numerical method to quantify the spatial variation of glioma growth based on subject-specific medical images using a mechanically-coupled tumor model. The method is illustrated in a murine model of glioma in which we consider the tumor as a growing elastic mass that continuously deforms the surrounding healthy-appearing brain tissue. As an inverse parameter identification problem, we quantify the volumetric growth of glioma and the growth component of deformation by fitting the model predicted cell density to the cell density estimated using the diffusion-weighted magnetic resonance imaging data. Numerically, we developed an adjoint-based approach to solve the optimization problem. Results on a set of experimentally measured, in vivo rat glioma data indicate good agreement between the fitted and measured tumor area and suggest a wide variation of in-plane glioma growth with the growth-induced Jacobian ranging from 1.0 to 6.0.
Further developments in orbit ephemeris derived neutral density
NASA Astrophysics Data System (ADS)
Locke, Travis
There are a number of non-conservative forces acting on a satellite in low Earth orbit. The one which is the most dominant and also contains the most uncertainty is atmospheric drag. Atmospheric drag is directly proportional to atmospheric density, and the existing atmospheric density models do not accurately model the variations in atmospheric density. In this research, precision orbit ephemerides (POE) are used as input measurements in an optimal orbit determination scheme in order to estimate corrections to existing atmospheric density models. These estimated corrections improve the estimates of the drag experienced by a satellite and therefore provide an improvement in orbit determination and prediction as well as a better overall understanding of the Earth's upper atmosphere. The optimal orbit determination scheme used in this work includes using POE data as measurements in a sequential filter/smoother process using the Orbit Determination Tool Kit (ODTK) software. The POE derived density estimates are validated by comparing them with the densities derived from accelerometers on board the Challenging Minisatellite Payload (CHAMP) and the Gravity Recovery and Climate Experiment (GRACE). These accelerometer derived density data sets for both CHAMP and GRACE are available from Sean Bruinsma of the Centre National d'Etudes Spatiales (CNES). The trend in the variation of atmospheric density is compared quantitatively by calculating the cross correlation (CC) between the POE derived density values and the accelerometer derived density values while the magnitudes of the two data sets are compared by calculating the root mean square (RMS) values between the two. There are certain high frequency density variations that are observed in the accelerometer derived density data but not in the POE derived density data or any of the baseline density models. These high frequency density variations are typically small in magnitude compared to the overall day-night variation. However, during certain time periods, such as when the satellite is near the terminator, the variations are on the same order of magnitude as the diurnal variations. These variations can also be especially prevalent during geomagnetic storms and near the polar cusps. One of the goals of this work is to see what effect these unmodeled high frequency variations have on orbit propagation. In order to see this effect, the orbits of CHAMP and GRACE are propagated during certain time periods using different sources of density data as input measurements (accelerometer, POE, HASDM, and Jacchia 1971). The resulting orbit propagations are all compared to the propagation using the accelerometer derived density data which is used as truth. The RMS and the maximum difference between the different propagations are analyzed in order to see what effect the unmodeled density variations have on orbit propagation. These results are also binned by solar and geomagnetic activity level. The primary input into the orbit determination scheme used to produce the POE derived density estimates is a precision orbit ephemeris file. This file contains position and velocity information for the satellite based on GPS and SLR measurements. The values contained in these files are estimated values and therefore contain some level of error, typically thought to be around the 5-10 cm level.
The other primary focus of this work is to evaluate the effect of adding different levels of noise (0.1 m, 0.5 m, 1 m, 10 m, and 100 m) to this raw ephemeris data file before it is input into the orbit determination scheme. The resulting POE derived density estimates for each level of noise are then compared with the accelerometer derived densities by computing the CC and RMS values between the data sets. These results are also binned by solar and geomagnetic activity level.
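The two comparison metrics used throughout this work, zero-lag cross correlation and RMS difference, can be computed as in the sketch below; the density series are synthetic placeholders for the POE- and accelerometer-derived data.

```python
# Compare a POE-derived density series against an accelerometer-derived series:
# cross correlation (zero-lag) for the trend, RMS difference for the magnitude.
import numpy as np

rng = np.random.default_rng(0)
rho_accel = 3e-12 * (1 + 0.3 * np.sin(np.linspace(0, 20, 500)))   # "truth"
rho_poe = rho_accel * (1 + rng.normal(0, 0.05, 500))              # estimate

cc = np.corrcoef(rho_accel, rho_poe)[0, 1]                # zero-lag cross correlation
rms = np.sqrt(np.mean((rho_poe - rho_accel) ** 2))        # magnitude agreement

print(f"CC = {cc:.3f}, RMS = {rms:.3e} kg/m^3")
```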
Imaging Breast Density: Established and Emerging Modalities
Chen, Jeon-Hor; Gulsen, Gultekin; Su, Min-Ying
2015-01-01
Mammographic density has been proven as an independent risk factor for breast cancer. Women with dense breast tissue visible on a mammogram have a much higher cancer risk than women with little density. A great research effort has been devoted to incorporate breast density into risk prediction models to better estimate each individual’s cancer risk. In recent years, the passage of breast density notification legislation in many states in USA requires that every mammography report should provide information regarding the patient’s breast density. Accurate definition and measurement of breast density are thus important, which may allow all the potential clinical applications of breast density to be implemented. Because the two-dimensional mammography-based measurement is subject to tissue overlapping and thus not able to provide volumetric information, there is an urgent need to develop reliable quantitative measurements of breast density. Various new imaging technologies are being developed. Among these new modalities, volumetric mammographic density methods and three-dimensional magnetic resonance imaging are the most well studied. Besides, emerging modalities, including different x-ray–based, optical imaging, and ultrasound-based methods, have also been investigated. All these modalities may either overcome some fundamental problems related to mammographic density or provide additional density and/or compositional information. The present review article aimed to summarize the current established and emerging imaging techniques for the measurement of breast density and the evidence of the clinical use of these density methods from the literature. PMID:26692524
Thompson, Craig M.; Royle, J. Andrew; Garner, James D.
2012-01-01
Wildlife management often hinges upon an accurate assessment of population density. Although undeniably useful, many of the traditional approaches to density estimation such as visual counts, livetrapping, or mark–recapture suffer from a suite of methodological and analytical weaknesses. Rare, secretive, or highly mobile species exacerbate these problems through the reality of small sample sizes and movement on and off study sites. In response to these difficulties, there is growing interest in the use of non-invasive survey techniques, which provide the opportunity to collect larger samples with minimal increases in effort, as well as the application of analytical frameworks that are not reliant on large sample size arguments. One promising survey technique, the use of scat detecting dogs, offers a greatly enhanced probability of detection while at the same time generating new difficulties with respect to non-standard survey routes, variable search intensity, and the lack of a fixed survey point for characterizing non-detection. In order to account for these issues, we modified an existing spatially explicit, capture–recapture model for camera trap data to account for variable search intensity and the lack of fixed, georeferenced trap locations. We applied this modified model to a fisher (Martes pennanti) dataset from the Sierra National Forest, California, and compared the results (12.3 fishers/100 km2) to more traditional density estimates. We then evaluated model performance using simulations at 3 levels of population density. Simulation results indicated that estimates based on the posterior mode were relatively unbiased. We believe that this approach provides a flexible analytical framework for reconciling the inconsistencies between detector dog survey data and density estimation procedures.
NASA Astrophysics Data System (ADS)
Qie, G.; Wang, G.; Wang, M.
2016-12-01
Mixed pixels and shadows due to buildings in urban areas impede accurate estimation and mapping of city vegetation carbon density. In most previous studies, these factors are ignored, which results in underestimation of city vegetation carbon density. In this study, we presented an integrated methodology to improve the accuracy of mapping city vegetation carbon density. Firstly, we applied a linear shadow removal analysis (LSRA) to remotely sensed Landsat 8 images to reduce the shadow effects on carbon estimation. Secondly, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), and utilized and compared the integrated models on shadow-removed images to map vegetation carbon density. This methodology was examined in Shenzhen City of Southeast China. A data set from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables statistically significantly contributing to improving the fit of the models to the data and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from LSUA was then added into the models as an important independent variable. The estimates obtained were evaluated using a cross-validation method. Our results showed that higher accuracies were obtained from the integrated models compared with the ones using traditional methods which ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: Urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images
Thapa, Kanchan; Kelly, Marcella J
2017-05-01
While there are numerous wildlife ecology studies in lowland areas of Nepal, there are no in-depth studies of the hilly Churia habitat even though it comprises 7642 km2 of potential wildlife habitat across the Terai Arc. We investigated tiger, leopard and prey densities across this understudied habitat. Our camera trapping survey covered 536 km2 of Churia and surrounding areas within Chitwan National Park (CNP). We used 161 trapping locations and accumulated 2097 trap-nights in a 60-day survey period during the winter season of 2010-2011. In addition, we walked 136 km over 81 different line transects using distance sampling to estimate prey density. We photographed 31 individual tigers, 28 individual leopards and 25 other mammalian species. Spatial capture-recapture methods resulted in lower density estimates for tigers, ranging from 2.3 to 2.9 tigers per 100 km2, than for leopards, which ranged from 3.3 to 5.1 leopards per 100 km2. In addition, leopard densities were higher in the core of the Churia compared to surrounding areas. We estimated 62.7 prey animals per 100 km2, with forest ungulate prey (sambar, chital, barking deer and wild pig) accounting for 47% of the total. Based on prey availability, Churia habitat within CNP could potentially support 5.86 tigers per 100 km2 but our density estimates were lower, perhaps indicating that the tiger population is below carrying capacity. Our results demonstrate that Churia habitat should not be ignored in conservation initiatives, but rather management efforts should focus on reducing human disturbance to support higher predator numbers. © 2016 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
NASA Astrophysics Data System (ADS)
Obana, Y.; Maruyama, N.; Masahito, N.; Matsuoka, A.; Teramoto, M.; Nomura, R.; Fujimoto, A.; Tanaka, Y.; Shinohara, M.; Kasahara, Y.; Matsuda, S.; Kumamoto, A.; Tsuchiya, F.; Yoshizumi, M.; Shinohara, I.
2017-12-01
Earth's inner magnetosphere is a complex dynamical region of geospace comprising plasma populations with wide energy ranges: the plasmasphere, ring current, and radiation belts. They form a closely coupled system; thus, although the plasmasphere is the lowest energy population in the inner magnetosphere, accurate prediction of its evolution is critical to understanding the dynamics of the inner magnetosphere, which includes even the highest energy population, the radiation belts. In this study, we investigate plasmaspheric refilling following geomagnetic storms using data from ERG-MGF, ERG-PWE, RBSP-EMFISIS and ground-based magnetometers. DC magnetic field data measured by ERG-MGF, RBSP-EMFISIS and ground-based magnetometers provide the frequencies of toroidal-mode field line resonances. From this information, the equatorial plasma mass density is estimated by solving the MHD wave equation for suitable models of the magnetic field and the field line density distribution. ERG-PWE and RBSP-EMFISIS provide measurements of wave electric and magnetic fields; thus, we can estimate the local electron density from the plasma wave spectrograms by identifying narrow-band emission at the upper-hybrid resonance frequency. Furthermore, using the Ionosphere Plasmasphere Electrodynamics Model (IPE), we calculate the plasmaspheric refilling rates and evaluate the relative contribution of various mechanisms (heating, neutral particle density, composition and winds, etc.) to the refilling rate.
Nearest neighbor density ratio estimation for large-scale applications in astronomy
NASA Astrophysics Data System (ADS)
Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.
2015-09-01
In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
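One basic variant of a nearest-neighbor density ratio estimator (not necessarily the exact estimator of the paper) forms the ratio from k-th nearest-neighbor distances in the two samples; a sketch assuming scikit-learn follows.

```python
# Simple k-NN density ratio sketch: estimate p_test(x)/p_train(x) from the
# ratio of k-th nearest-neighbor distances in the two samples.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_density_ratio(x_train, x_test, query, k=20):
    """Importance weights ~ p_test / p_train evaluated at the query points."""
    d = x_train.shape[1]
    # Note: if query is the training set itself, each point counts as its own
    # nearest neighbor (distance 0), a small bias acceptable for a sketch.
    r_tr = NearestNeighbors(n_neighbors=k).fit(x_train).kneighbors(query)[0][:, -1]
    r_te = NearestNeighbors(n_neighbors=k).fit(x_test).kneighbors(query)[0][:, -1]
    n_tr, n_te = len(x_train), len(x_test)
    # p_hat(x) = k / (n * c_d * r_k^d); the constants c_d and k cancel in the ratio
    return (n_tr / n_te) * (r_tr / r_te) ** d

rng = np.random.default_rng(0)
x_train = rng.normal(0.0, 1.0, (5000, 2))       # biased "labeled" sample
x_test = rng.normal(0.5, 1.2, (5000, 2))        # target (unlabeled) distribution
weights = knn_density_ratio(x_train, x_test, x_train)
print(weights.mean())                            # should be roughly 1 on average
```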
Estimating Allee dynamics before they can be observed: polar bears as a case study.
Molnár, Péter K; Lewis, Mark A; Derocher, Andrew E
2014-01-01
Allee effects are an important component in the population dynamics of numerous species. Accounting for these Allee effects in population viability analyses generally requires estimates of low-density population growth rates, but such data are unavailable for most species and particularly difficult to obtain for large mammals. Here, we present a mechanistic modeling framework that allows estimating the expected low-density growth rates under a mate-finding Allee effect before the Allee effect occurs or can be observed. The approach relies on representing the mechanisms causing the Allee effect in a process-based model, which can be parameterized and validated from data on the mechanisms rather than data on population growth. We illustrate the approach using polar bears (Ursus maritimus), and estimate their expected low-density growth by linking a mating dynamics model to a matrix projection model. The Allee threshold, defined as the population density below which growth becomes negative, is shown to depend on age-structure, sex ratio, and the life history parameters determining reproduction and survival. The Allee threshold is thus both density- and frequency-dependent. Sensitivity analyses of the Allee threshold show that different combinations of the parameters determining reproduction and survival can lead to differing Allee thresholds, even if these differing combinations imply the same stable-stage population growth rate. The approach further shows how mate-limitation can induce long transient dynamics, even in populations that eventually grow to carrying capacity. Applying the models to the overharvested low-density polar bear population of Viscount Melville Sound, Canada, shows that a mate-finding Allee effect is a plausible mechanism for slow recovery of this population. Our approach is generalizable to any mating system and life cycle, and could aid proactive management and conservation strategies, for example, by providing a priori estimates of minimum conservation targets for rare species or minimum eradication targets for pests and invasive species.
Living on the edge: roe deer (Capreolus capreolus) density in the margins of its geographical range.
Valente, Ana M; Fonseca, Carlos; Marques, Tiago A; Santos, João P; Rodrigues, Rogério; Torres, Rita Tinoco
2014-01-01
Over the last decades roe deer (Capreolus capreolus) populations have increased in number and distribution throughout Europe. Such increases have profound impacts on ecosystems, both positive and negative. Therefore monitoring roe deer populations is essential for the appropriate management of this species, in order to achieve a balance between conservation and mitigation of the negative impacts. Despite being required for an effective management plan, the study of roe deer ecology in Portugal is at an early stage, and hence there is still a complete lack of knowledge of roe deer density within its known range. Distance sampling of pellet groups coupled with production and decay rates for pellet groups provided density estimates for roe deer in northeastern Portugal (Lombada National Hunting Area--LNHA, Serra de Montesinho--SM and Serra da Nogueira--SN; LNHA and SM located in Montesinho Natural Park). The estimated roe deer density using a stratified detection function was 1.23/100 ha for LNHA, 4.87/100 ha for SM and 4.25/100 ha in SN, with 95% confidence intervals (CI) of 0.68 to 2.21, 3.08 to 7.71 and 2.25 to 8.03, respectively. For the entire area, the estimated density was about 3.51/100 ha (95% CI - 2.26-5.45). This method can provide estimates of roe deer density, which will ultimately support management decisions. However, effective monitoring should be based on long-term studies that are able to detect population fluctuations. This study represents the initial phase of roe deer monitoring at the edge of its European range and intends to fill the gap in this species ecology, as the gathering of similar data over a number of years will provide the basis for stronger inferences. Monitoring should be continued, although the study area should be increased to evaluate the accuracy of estimates and assess the impact of management actions.
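For context, the standard conversion from pellet-group density to animal density implied by the production and decay rates mentioned above; the numbers are placeholders rather than the estimates for these Portuguese sites.

```python
# Sketch: convert a pellet-group density (e.g., from distance sampling) to an
# animal density using a defecation rate and a mean pellet-group persistence time.
def deer_density_per_100ha(pellet_groups_per_ha, defecation_rate_per_day,
                           mean_persistence_days):
    """Animals per 100 ha from pellet-group density, production and decay rates."""
    animals_per_ha = pellet_groups_per_ha / (defecation_rate_per_day *
                                             mean_persistence_days)
    return 100.0 * animals_per_ha

# e.g. 120 groups/ha, 20 groups/animal/day, 150-day mean persistence -> 4.0 / 100 ha
print(deer_density_per_100ha(120.0, 20.0, 150.0))
```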
NASA Technical Reports Server (NTRS)
Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how it relates to uncertainty representation, management and the role of prognostics in decision-making. A distinction between two interpretations of the estimated remaining useful life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when considering prognostics in making critical decisions.
Motor unit number estimation based on high-density surface electromyography decomposition.
Peng, Yun; He, Jinbao; Yao, Bo; Li, Sheng; Zhou, Ping; Zhang, Yingchun
2016-09-01
To advance the motor unit number estimation (MUNE) technique using high density surface electromyography (EMG) decomposition. The K-means clustering convolution kernel compensation algorithm was employed to detect the single motor unit potentials (SMUPs) from high-density surface EMG recordings of the biceps brachii muscles in eight healthy subjects. Contraction forces were controlled at 10%, 20% and 30% of the maximal voluntary contraction (MVC). Achieved MUNE results and the representativeness of the SMUP pools were evaluated using a high-density weighted-average method. Mean numbers of motor units were estimated as 288±132, 155±87, 107±99 and 132±61 by using the developed new MUNE at 10%, 20%, 30% and 10-30% MVCs, respectively. Over 20 SMUPs were obtained at each contraction level, and the mean residual variances were lower than 10%. The new MUNE method allows a convenient and non-invasive collection of a large size of SMUP pool with great representativeness. It provides a useful tool for estimating the motor unit number of proximal muscles. The present new MUNE method successfully avoids the use of intramuscular electrodes or multiple electrical stimuli which is required in currently available MUNE techniques; as such the new MUNE method can minimize patient discomfort for MUNE tests. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Anopheles atroparvus density modeling using MODIS NDVI in a former malarious area in Portugal.
Lourenço, Pedro M; Sousa, Carla A; Seixas, Júlia; Lopes, Pedro; Novo, Maria T; Almeida, A Paulo G
2011-12-01
Malaria is dependent on environmental factors and considered as potentially re-emerging in temperate regions. Remote sensing data have been used successfully for monitoring environmental conditions that influence the patterns of such arthropod vector-borne diseases. Anopheles atroparvus density data were collected from 2002 to 2005, on a bimonthly basis, at three sites in a former malarial area in Southern Portugal. The development of the Remote Vector Model (RVM) was based upon two main variables: temperature and the Normalized Differential Vegetation Index (NDVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS) Terra satellite. Temperature influences the mosquito life cycle and affects its intra-annual prevalence, and MODIS NDVI was used as a proxy for suitable habitat conditions. Mosquito data were used for calibration and validation of the model. For areas with high mosquito density, the model validation demonstrated a Pearson correlation of 0.68 (p<0.05) and a modelling efficiency/Nash-Sutcliffe of 0.44 representing the model's ability to predict intra- and inter-annual vector density trends. RVM estimates the density of the former malarial vector An. atroparvus as a function of temperature and of MODIS NDVI. RVM is a satellite data-based assimilation algorithm that uses temperature fields to predict the intra- and inter-annual densities of this mosquito species using MODIS NDVI. RVM is a relevant tool for vector density estimation, contributing to the risk assessment of transmission of mosquito-borne diseases and can be part of the early warning system and contingency plans providing support to the decision making process of relevant authorities. © 2011 The Society for Vector Ecology.
NASA Astrophysics Data System (ADS)
Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki
To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses a probability density function of echo signals from liver fibrosis, has been proposed. In this paper, an effect of non-speckle echo signals on tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were determined and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated with the removal of non-speckle signals.
Mixed effects modelling for glass category estimation from glass refractive indices.
Lucy, David; Zadora, Grzegorz
2011-10-10
520 glass fragments were taken from 105 glass items. Each item was either a container, a window, or glass from an automobile. Each of these three classes of use is defined as a glass category. Refractive indexes were measured both before and after a programme of re-annealing. Because the refractive index of each fragment could not in itself be observed before and after re-annealing, a model-based approach was used to estimate the change in refractive index for each glass category. It was found that less complex estimation methods would be equivalent to the full model, and were subsequently used. The change in refractive index was then used to calculate a measure of the evidential value for each item belonging to each glass category. The distributions of refractive index change were considered for each glass category, and it was found that, possibly due to small samples, members of the normal family would not adequately model the refractive index changes within two of the use types considered here. Two alternative approaches to modelling the change in refractive index were used: one employed more established kernel density estimates, the other a newer approach called log-concave estimation. Either method when applied to the change in refractive index was found to give good estimates of glass category; however, on all performance metrics kernel density estimates were found to be slightly better than log-concave estimates, although the estimates from log-concave estimation possessed properties which had some qualitative appeal not encapsulated in the selected measures of performance. These results and implications of these two methods of estimating probability densities for glass refractive indexes are discussed. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
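A small sketch of the kernel-density approach compared above: fit a KDE of the refractive-index change per glass category and evaluate the likelihood of a new observation under each; the data are synthetic stand-ins for the measured changes.

```python
# Per-category kernel density estimates of the refractive-index change, used to
# weigh the evidence for each glass category given a new observation.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
delta_ri = {                               # refractive-index change by category
    "container": rng.normal(2e-4, 5e-5, 40),
    "window":    rng.normal(1e-4, 4e-5, 40),
    "car":       rng.normal(3e-4, 6e-5, 25),
}
kdes = {cat: gaussian_kde(vals) for cat, vals in delta_ri.items()}

def category_likelihoods(observed_delta):
    """Density of the observed refractive-index change under each category."""
    return {cat: float(kde(observed_delta)) for cat, kde in kdes.items()}

print(category_likelihoods(1.5e-4))
```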
Curtis L. VanderSchaaf; Harold E. Burkhart
2010-01-01
Maximum size-density relationships (MSDR) provide natural resource managers useful information about the relationship between tree density and average tree size. Obtaining a valid estimate of how maximum tree density changes as average tree size changes is necessary to accurately describe these relationships. This paper examines three methods to estimate the slope of...
Spatial pattern corrections and sample sizes for forest density estimates of historical tree surveys
Brice B. Hanberry; Shawn Fraver; Hong S. He; Jian Yang; Dan C. Dey; Brian J. Palik
2011-01-01
The U.S. General Land Office land surveys document trees present during European settlement. However, use of these surveys for calculating historical forest density and other derived metrics is limited by uncertainty about the performance of plotless density estimators under a range of conditions. Therefore, we tested two plotless density estimators, developed by...
Sato, Tatsuhiko; Furusawa, Yoshiya
2012-10-01
Estimation of the survival fractions of cells irradiated with various particles over a wide linear energy transfer (LET) range is of great importance in the treatment planning of charged-particle therapy. Two computational models were developed for estimating survival fractions based on the concept of the microdosimetric kinetic model. They were designated as the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models. The former model takes into account the stochastic natures of both domain and cell nucleus specific energies, whereas the latter model represents the stochastic nature of domain specific energy by its approximated mean value and variance to reduce the computational time. The probability densities of the domain and cell nucleus specific energies are the fundamental quantities for expressing survival fractions in these models. These densities are calculated using the microdosimetric and LET-estimator functions implemented in the Particle and Heavy Ion Transport code System (PHITS) in combination with the convolution or database method. Both the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models can reproduce the measured survival fractions for high-LET and high-dose irradiations, whereas a previously proposed microdosimetric kinetic model predicts lower values for these fractions, mainly due to intrinsic ignorance of the stochastic nature of cell nucleus specific energies in the calculation. The models we developed should contribute to a better understanding of the mechanism of cell inactivation, as well as improve the accuracy of treatment planning of charged-particle therapy.
Wong, Man Sing; Ho, Hung Chak; Yang, Lin; Shi, Wenzhong; Yang, Jinxin; Chan, Ta-Chien
2017-07-24
Dust events have long been recognized to be associated with a higher mortality risk. However, no study has investigated how prolonged dust events affect the spatial variability of mortality across districts in a downwind city. In this study, we applied a spatial regression approach to estimate the district-level mortality during two extreme dust events in Hong Kong. We compared spatial and non-spatial models to evaluate the ability of each regression to estimate mortality. We also compared prolonged dust events with non-dust events to determine the influences of community factors on mortality across the city. The density of the built environment (estimated by the sky view factor) had a positive association with excess mortality in each district, while socioeconomic deprivation, driven by lower income and lower education, induced a higher mortality impact in each territory planning unit during a prolonged dust event. Based on the model comparison, spatial error modelling with first-order queen contiguity consistently outperformed other models. The high-risk areas with a higher increase in mortality were located in high-density urban environments with higher socioeconomic deprivation. Our model design shows the ability to predict spatial variability of mortality risk during an extreme weather event that cannot be estimated with traditional time-series analysis or ecological studies. Our spatial protocol can be used for public health surveillance, sustainable planning and disaster preparation when relevant data are available.
Improving Frozen Precipitation Density Estimation in Land Surface Modeling
NASA Astrophysics Data System (ADS)
Sparrow, K.; Fall, G. M.
2017-12-01
The Office of Water Prediction (OWP) produces high-value water supply and flood risk planning information through the use of operational land surface modeling. Improvements in diagnosing frozen precipitation density will benefit the NWS's meteorological and hydrological services by refining estimates of a significant and vital input into land surface models. A current common practice for handling the density of snow accumulation in a land surface model is to use a standard 10:1 snow-to-liquid-equivalent ratio (SLR). Our research findings suggest the possibility of a more skillful approach for assessing the spatial variability of precipitation density. We developed a 30-year SLR climatology for the coterminous US from version 3.22 of the Global Historical Climatology Network-Daily (GHCN-D) dataset. Our methods followed the approach described by Baxter (2005) to estimate mean climatological SLR values at GHCN-D sites in the US, Canada, and Mexico for the years 1986-2015. In addition to the Baxter criteria, the following refinements were made: tests were performed to eliminate SLR outliers and frequent reports of SLR = 10, a linear SLR vs. elevation trend was fitted to station SLR mean values to remove the elevation trend from the data, and detrended SLR residuals were interpolated using ordinary kriging with a spherical semivariogram model. The elevation value of each station was based on the GMTED 2010 digital elevation model, and the elevation trend in the data was established via linear least squares approximation. The ordinary kriging procedure was used to interpolate the data into gridded climatological SLR estimates for each calendar month at a 0.125 degree resolution. To assess the skill of this climatology, we compared estimates from our SLR climatology with observations from the GHCN-D dataset to consider the potential use of this climatology as a first guess of frozen precipitation density in an operational land surface model. Differences between model-derived estimates and GHCN-D observations were assessed using time-series graphs of 2016-2017 winter season SLR observations and climatological estimates, as well as by calculating RMSE and variance between estimated and observed values.
Michalek, Lukas; Barner, Leonie; Barner-Kowollik, Christopher
2018-03-07
Well-defined polymer strands covalently tethered onto solid substrates determine the properties of the resulting functional interface. Herein, the current approaches to determining quantitative grafting densities are assessed. After a brief introduction to the key theories describing polymer brush regimes, a user's guide is provided for estimating maximum chain coverage and, importantly, for examining the most frequently employed approaches for determining grafting densities, i.e., dry thickness measurements, gravimetric assessment, and swelling experiments. The reliability of these determination methods is estimated by carefully evaluating their assumptions and assessing the stability of the underpinning equations. A practical guide for comparatively and quantitatively evaluating the reliability of a given approach is thus provided, enabling the field to critically judge experimentally determined grafting densities and to avoid reporting grafting densities that fall outside the physically realistic parameter space. The assessment concludes with a perspective on the development of advanced approaches for determining grafting density, in particular single-chain methodologies. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
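For the dry-thickness route mentioned above, grafting density is commonly computed from the relation sigma = h * rho * N_A / M_n. The sketch below assumes that standard relation; the numerical inputs are illustrative, not values from the paper.

```python
AVOGADRO = 6.022e23  # 1/mol

def grafting_density(dry_thickness_nm, bulk_density_g_cm3, mn_g_mol):
    """Grafting density (chains/nm^2) from the dry-thickness relation
    sigma = h * rho * N_A / M_n, with h in nm, rho in g/cm^3 and M_n in g/mol."""
    return dry_thickness_nm * bulk_density_g_cm3 * AVOGADRO / mn_g_mol * 1e-21

# Illustrative values (hypothetical): 10 nm dry brush, rho = 1.05 g/cm^3, Mn = 50 kg/mol.
print(round(grafting_density(10.0, 1.05, 50_000.0), 3), "chains/nm^2")
```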
NASA Astrophysics Data System (ADS)
Settar, Abdelhakim; Abboudi, Saïd; Madani, Brahim; Nebbali, Rachid
2018-02-01
Due to the endothermic nature of the steam methane reforming reaction, the process is often limited by the heat transfer behavior in the reactors. Poor thermal behavior sometimes leads to slow reaction kinetics, which is characterized by the presence of cold spots in the catalytic zones. Within this framework, the present work consists of a numerical investigation, in conjunction with an experimental one, of the one-dimensional heat transfer phenomenon during the heat supply of a catalytic-wall reactor designed for hydrogen production. The studied reactor is inserted in an electric furnace, where the heat requirement of the endothermic reaction is supplied by an electric heating system. During the heat supply, the unknown heat flux density received by the reactive flow is estimated using inverse methods. On the basis of the catalytic-wall reactor model, an experimental setup is engineered in situ to measure the temperature distribution. The measurements are then injected into the numerical heat flux estimation procedure, which is based on the Function Specification Method (FSM). The measured and estimated temperatures are compared, and the heat flux density crossing the reactor wall is determined.
Miller, M.R.; Eadie, J. McA
2006-01-01
Breeding densities and migration periods of Common Snipe in Colorado were investigated in 1974-75. Sites studied were near Fort Collins and in North Park, both in north central Colorado; in the Yampa Valley in northwestern Colorado; and in the San Luis Valley in south central Colorado. Estimated densities of breeding snipe based on censuses conducted during May 1974 and 1975 were, by region: 1.3-1.7 snipe/ha near Fort Collins; 0.6 snipe/ha in North Park; 0.5-0.7 snipe/ha in the Yampa Valley; and 0.5 snipe/ha in the San Luis Valley. Overall mean densities were 0.6 and 0.7 snipe/ha in 1974 and 1975, respectively. On individual study sites, densities of snipe ranged from 0.2 to 2.1 snipe/ha. Areas with shallow, stable, discontinuous water levels, sparse, short vegetation, and soft organic soils had the highest densities. Twenty-eight nests were located, with a mean clutch size of 3.9 eggs. Estimated onset of incubation ranged from 2 May through 4 July. Most nests were initiated in May. Spring migration extended from late March through early May. Highest densities of snipe were recorded in all regions during 18-23 April. Fall migration was underway by early September and was completed by mid-October, with highest densities occurring about the third week in September. High numbers of snipe noted in early August may have been early migrants or locally produced juveniles concentrating on favorable feeding areas.
NASA Astrophysics Data System (ADS)
Lecoutre, C.; Marre, S.; Garrabos, Y.; Beysens, D.; Hahn, I.
2018-05-01
Analyses of ground-based experiments on near-critical fluids to precisely determine their density can be hampered by several effects, especially density stratification of the sample, the liquid wetting behavior at the cell walls, and a possible singular curvature of the "rectilinear" diameter of the density coexistence curve. For the latter effect, theoretical efforts have been made to understand the amplitude and shape of the critical hook of the density diameter, which departs from predictions of the so-called ideal lattice-gas model of the uniaxial 3D-Ising universality class. In order to optimize the observation of these subtle effects on the position and shape of the liquid-vapor meniscus in the particular case of SF6, we have designed and filled a cell that is highly symmetrized with respect to any median plane of the total fluid volume. In such a quasi-perfect symmetrical fluid volume, the precise detection of the meniscus position and shape for different orientations of the cell with respect to the Earth's gravitational acceleration field becomes a sensitive probe for estimating the mean density filling of the cell and for testing the singular diameter effects. After integration of this cell in the ALI-R insert, we take advantage of the high optical and thermal performance of the DECLIC Engineering Model. Here we present the sensitive imaging method providing the precise ground-based SF6 benchmark data. From this data analysis, it is found that the temperature dependence of the meniscus position does not reflect the expected critical hook in the rectilinear density diameter. The off-density criticality of the cell is therefore accurately estimated before near-future experiments using the same ALI-R insert in the DECLIC facility already on board the International Space Station.
Wolc, Anna; Stricker, Chris; Arango, Jesus; Settar, Petek; Fulton, Janet E; O'Sullivan, Neil P; Preisinger, Rudolf; Habier, David; Fernando, Rohan; Garrick, Dorian J; Lamont, Susan J; Dekkers, Jack C M
2011-01-21
Genomic selection involves breeding value estimation of selection candidates based on high-density SNP genotypes. To quantify the potential benefit of genomic selection, accuracies of estimated breeding values (EBV) obtained with different methods using pedigree or high-density SNP genotypes were evaluated and compared in a commercial layer chicken breeding line. The following traits were analyzed: egg production, egg weight, egg color, shell strength, age at sexual maturity, body weight, albumen height, and yolk weight. Predictions appropriate for early or late selection were compared. A total of 2,708 birds were genotyped for 23,356 segregating SNPs, including 1,563 females with records. Phenotypes on relatives without genotypes were incorporated in the analysis (13,049 production records in total). The data were analyzed with a Reduced Animal Model using a relationship matrix based on pedigree data or on marker genotypes, and with a Bayesian method using model averaging. Using a validation set that consisted of individuals from the generation following training, these methods were compared by correlating EBV with phenotypes corrected for fixed effects, by selecting the top 30 individuals based on EBV and evaluating their mean phenotype, and by regressing phenotypes on EBV. Using high-density SNP genotypes increased accuracies of EBV up to two-fold for selection at an early age and by up to 88% for selection at a later age. Accuracy increases at an early age can be mostly attributed to improved estimates of parental EBV for shell quality and egg production, while for other egg quality traits they are mostly due to improved estimates of Mendelian sampling effects. A relatively small number of markers was sufficient to explain most of the genetic variation for egg weight and body weight.
Hybrid reconstruction of quantum density matrix: when low-rank meets sparsity
NASA Astrophysics Data System (ADS)
Li, Kezhi; Zheng, Kai; Yang, Jingbei; Cong, Shuang; Liu, Xiaomei; Li, Zhaokai
2017-12-01
Both mathematical theory and experiments have verified that quantum state tomography based on compressive sensing is an efficient framework for the reconstruction of quantum density states. In recent physical experiments, we found that many unknown density matrices of interest are low-rank as well as sparse. Bearing this information in mind, in this paper we propose a reconstruction algorithm that combines the low-rank and sparsity properties of density matrices, and we further prove theoretically that the solution of the optimization problem can only be the true density matrix satisfying the model, with overwhelming probability, as long as a necessary number of measurements is available. The solver leverages a fixed-point equation technique in which a step-by-step strategy is developed using an extended soft-threshold operator that copes with complex values. Numerical experiments on density matrix estimation for real nuclear magnetic resonance devices reveal that the proposed method achieves better accuracy than some existing methods. We believe that the proposed method could be leveraged as a generalized approach and widely implemented in quantum state estimation.
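A minimal sketch of a soft-threshold operator extended to complex-valued entries, in the spirit of the operator referenced above (the exact operator used in the paper may differ; this is an illustrative assumption):

```python
import numpy as np

def complex_soft_threshold(x, tau):
    """Elementwise soft-thresholding for complex entries:
    shrink the modulus by tau while keeping the phase unchanged."""
    mag = np.abs(x)
    scale = np.maximum(mag - tau, 0.0) / np.where(mag > 0, mag, 1.0)
    return x * scale

# Small illustration on a random complex matrix.
rng = np.random.default_rng(0)
m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
print(complex_soft_threshold(m, 0.5))
```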
Khan, Md Nabiul Islam; Hijbeek, Renske; Berger, Uta; Koedam, Nico; Grueters, Uwe; Islam, S M Zahirul; Hasan, Md Asadul; Dahdouh-Guebas, Farid
2016-01-01
In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher-order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancies. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated plant populations (with 'random', 'aggregated' and 'regular' spatial patterns) and empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with the previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in the accuracy of density estimation, i.e. the higher-order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns, except in plant assemblages with strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N - 1)/(π ΣR²) but not 12N/(π ΣR²), of PCQM2 it is 4(8N - 1)/(π ΣR²) but not 28N/(π ΣR²), and of PCQM3 it is 4(12N - 1)/(π ΣR²) but not 44N/(π ΣR²) as published. If the spatial pattern of a plant association is random, PCQM1 with the corrected estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimates for all types of plant assemblages, including the repulsion process. Since in practice the spatial pattern of a plant association remains unknown before starting a vegetation survey, the use of PCQM3 along with the corrected estimator is recommended for field applications. However, for sparse plant populations, where the use of PCQM3 may pose practical limitations, PCQM2 or PCQM1 can be applied. When applying PCQM in the field, care should be taken to summarize the distance data based on 'the inverse of the summation of squared distances' and not 'the summation of inverse squared distances' as erroneously published.
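The corrected estimators quoted above can be written directly in code. A minimal sketch using the formulas exactly as stated in the abstract (the example distances are made up):

```python
import math

def pcqm_density(distances, order):
    """Corrected PCQM density estimator (plants per unit area).

    distances : order-th nearest-plant distances, one per quadrant over all
                sample points (i.e. 4*N values for N sample points).
    order     : 1, 2 or 3 for PCQM1, PCQM2 or PCQM3.
    """
    n_points = len(distances) / 4.0               # N sample points, 4 quadrants each
    numerator = {1: 4 * (4 * n_points - 1),       # 4(4N - 1)
                 2: 4 * (8 * n_points - 1),       # 4(8N - 1)
                 3: 4 * (12 * n_points - 1)}[order]
    # Divide by the summation of squared distances (not the sum of inverse squares).
    return numerator / (math.pi * sum(r ** 2 for r in distances))

# Illustrative call: 50 sample points -> 200 first-nearest distances (values made up).
example = [1.8 + 0.01 * i for i in range(200)]
print(pcqm_density(example, order=1))
```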
NASA Astrophysics Data System (ADS)
Raju, Subramanian; Saibaba, Saroja
2016-09-01
The enthalpy of formation Δ°Hf is an important thermodynamic quantity, which sheds significant light on fundamental cohesive and structural characteristics of an alloy. However, being a difficult one to determine accurately through experiments, simple estimation procedures are often desirable. In the present study, a modified prescription for estimating Δ°Hf^L of liquid transition metal alloys is outlined, based on the Macroscopic Atom Model of cohesion. This prescription relies on self-consistent estimation of liquid-specific model parameters, namely electronegativity (ϕ^L) and bonding electron density (n_b^L). Such unique identification is made through the use of well-established relationships connecting surface tension, compressibility, and molar volume of a metallic liquid with bonding charge density. The electronegativity is obtained through a consistent linear scaling procedure. The preliminary set of values for ϕ^L and n_b^L, together with other auxiliary model parameters, is subsequently optimized to obtain good numerical agreement between calculated and experimental values of Δ°Hf^L for sixty liquid transition metal alloys. It is found that, with few exceptions, the use of liquid-specific model parameters in the Macroscopic Atom Model yields a physically consistent methodology for reliable estimation of mixing enthalpies of liquid alloys.
Spatial heterogeneity in the carrying capacity of sika deer in Japan.
Iijima, Hayato; Ueno, Mayumi
2016-06-09
Carrying capacity is one driver of wildlife population dynamics. Although in previous studies carrying capacity was considered a fixed entity, it may differ among locations due to environmental variation. The factors underlying variability in carrying capacity, however, have rarely been examined. Here, we investigated spatial heterogeneity in the carrying capacity of Japanese sika deer (Cervus nippon) from 2005 to 2014 in Yamanashi Prefecture, central Japan (a mesh with grid cells of 5.5 × 4.6 km) by state-space modeling. Both carrying capacity and density dependence differed greatly among cells. Estimated carrying capacities ranged from 1.34 to 98.4 deer/km². According to the estimated population dynamics, grid cells with larger proportions of artificial grassland and deciduous forest were subject to lower density dependence and higher carrying capacity. We conclude that the population dynamics of ungulates may vary spatially through spatial variation in carrying capacity, and that the density level for controlling ungulate abundance should be based on the current density relative to the carrying capacity of each area.
Breeding population density and habitat use of Swainson's warblers in a Georgia floodplain forest
Wright, E.A.
2002-01-01
I examined density and habitat use of a Swainson's Warbler (Limnothlypis swainsonii) breeding population in Georgia. This songbird species is inadequately monitored and may be declining due to anthropogenic alteration of floodplain forest breeding habitats. I used distance sampling methods to estimate density, finding 9.4 singing males/ha (CV = 0.298). Individuals were encountered too infrequently to produce a low-variance estimate, and distance sampling thus may be impracticable for monitoring this relatively rare species. I developed a set of multivariate habitat models using binary logistic regression techniques, based on measurement of 22 variables in 56 plots occupied by Swainson's Warblers and 110 unoccupied plots. Occupied areas were characterized by high stem density of cane (Arundinaria gigantea) and other shrub-layer vegetation, and by the presence of abundant and accessible leaf litter. I recommend two habitat models, which correctly classified 87-89% of plots in cross-validation runs, for potential use in habitat assessment at other locations.
Wicke, Jason; Dumas, Genevieve A
2010-02-01
The geometric method combines a volume and a density function to estimate body segment parameters and has the best opportunity for developing the most accurate models. In the trunk, there are many different tissues that greatly differ in density (e.g., bone versus lung). Thus, the density function for the trunk must be particularly sensitive to capture this diversity, such that accurate inertial estimates are possible. Three different models were used to test this hypothesis by estimating trunk inertial parameters of 25 female and 24 male college-aged participants. The outcome of this study indicates that the inertial estimates for the upper and lower trunk are most sensitive to the volume function and not very sensitive to the density function. Although it appears that the uniform density function has a greater influence on inertial estimates in the lower trunk region than in the upper trunk region, this is likely due to the (overestimated) density value used. When geometric models are used to estimate body segment parameters, care must be taken in choosing a model that can accurately estimate segment volumes. Researchers wanting to develop accurate geometric models should focus on the volume function, especially in unique populations (e.g., pregnant or obese individuals).
NASA Technical Reports Server (NTRS)
Garber, Donald P.
1993-01-01
A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
Gibbs measures based on 1d (an)harmonic oscillators as mean-field limits
NASA Astrophysics Data System (ADS)
Lewin, Mathieu; Nam, Phan Thành; Rougerie, Nicolas
2018-04-01
We prove that Gibbs measures based on 1D defocusing nonlinear Schrödinger functionals with sub-harmonic trapping can be obtained as the mean-field/large temperature limit of the corresponding grand-canonical ensemble for many bosons. The limit measure is supported on Sobolev spaces of negative regularity, and the corresponding density matrices are not trace-class. The general proof strategy is that of a previous paper of ours, but we have to complement it with Hilbert-Schmidt estimates on reduced density matrices.
NASA Astrophysics Data System (ADS)
Brus, Dick J.; van den Akker, Jan J. H.
2018-02-01
Although soil compaction is widely recognized as a threat to soil resources, reliable estimates of the acreage of overcompacted soil and of the level of soil compaction parameters are not available. In the Netherlands, data on subsoil compaction were collected at 128 locations selected by stratified random sampling. A map showing the risk of subsoil compaction in five classes was used for stratification. Measurements of bulk density, porosity, clay content and organic matter content were used to compute the relative bulk density and relative porosity, both expressed as a fraction of a threshold value. A subsoil was classified as overcompacted if either the relative bulk density exceeded 1 or the relative porosity was below 1. The sample data were used to estimate the means of the two subsoil compaction parameters and the overcompacted areal fraction. The estimated global means of relative bulk density and relative porosity were 0.946 and 1.090, respectively. The estimated areal fraction of the Netherlands with overcompacted subsoils was 43 %. The estimates per risk map unit showed two groups of map units: a low-risk group (units 1 and 2, covering only 4.6 % of the total area) and a high-risk group (units 3, 4 and 5). The estimated areal fraction of overcompacted subsoil was 0 % in the low-risk group and 47 % in the high-risk group. The map contains no information about where overcompacted subsoils occur, which is caused by the poor association of risk map units 3, 4 and 5 with the subsoil compaction parameters and subsoil overcompaction. This can be explained by the lack of time for recuperation.
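A minimal sketch of the classification rule and a stratified areal-fraction estimate described above; the sample values and stratum area weights are hypothetical, not the Dutch survey data.

```python
import numpy as np

# Hypothetical sample: relative bulk density, relative porosity and risk-map stratum per site
# (one sample per stratum here, purely for brevity).
rel_bd = np.array([0.93, 1.02, 0.97, 1.05, 0.88])
rel_por = np.array([1.10, 0.95, 1.04, 0.92, 1.15])
stratum = np.array([1, 2, 3, 4, 5])                              # risk-map unit of each site
area_weight = {1: 0.03, 2: 0.016, 3: 0.40, 4: 0.30, 5: 0.254}    # assumed area fractions

# Overcompacted if relative bulk density exceeds 1 or relative porosity is below 1.
overcompacted = (rel_bd > 1.0) | (rel_por < 1.0)

# Stratified estimate of the overcompacted areal fraction: area-weighted stratum means.
estimate = sum(w * overcompacted[stratum == s].mean() for s, w in area_weight.items())
print(estimate)
```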
CANDID: Companion Analysis and Non-Detection in Interferometric Data
NASA Astrophysics Data System (ADS)
Gallenne, A.; Mérand, A.; Kervella, P.; Monnier, J. D.; Schaefer, G. H.; Baron, F.; Breitfelder, J.; Le Bouquin, J. B.; Roettenbacher, R. M.; Gieren, W.; Pietrzynski, G.; McAlister, H.; ten Brummelaar, T.; Sturmann, J.; Sturmann, L.; Turner, N.; Ridgway, S.; Kraus, S.
2015-05-01
CANDID finds faint companions around stars in interferometric data in the OIFITS format. It allows systematic searches for faint companions in OIFITS data and, if none is found, estimates the detection limit. The tool is based on model fitting and χ² minimization, with a grid of starting points for the companion position. It ensures all positions are explored by estimating a posteriori whether the grid is dense enough, and it provides an estimate of the optimum grid density.
1996-09-01
Generalized Likelihood Ratio (GLR) and voting techniques. The third class consisted of multiple hypothesis filter detectors, specifically the MMAE. The ... vector version, versus a tensor if we use the matrix version of the power spectral density estimate. Using this notation, we will derive an ... as MATLAB, have an intrinsic sample covariance computation available, which makes this method quite easy to implement. In practice, the mean for the ...
Semiautomatic estimation of breast density with DM-Scan software.
Martínez Gómez, I; Casals El Busto, M; Antón Guirao, J; Ruiz Perales, F; Llobet Azpitarte, R
2014-01-01
To evaluate the reproducibility of the calculation of breast density with DM-Scan software, which is based on semiautomatic segmentation of fibroglandular tissue, and to compare it with the reproducibility of estimation by visual inspection. The study included 655 direct digital mammograms acquired using craniocaudal projections. Three experienced radiologists analyzed the density of the mammograms using DM-Scan, and the inter- and intra-observer agreement between pairs of radiologists for the Boyd and BI-RADS® scales was calculated using the intraclass correlation coefficient. The Kappa index was used to compare the inter- and intra-observer agreement with that obtained previously for visual inspection in the same set of images. For visual inspection, the mean interobserver agreement was 0.876 (95% CI: 0.873-0.879) on the Boyd scale and 0.823 (95% CI: 0.818-0.829) on the BI-RADS® scale. The mean intraobserver agreement was 0.813 (95% CI: 0.796-0.829) on the Boyd scale and 0.770 (95% CI: 0.742-0.797) on the BI-RADS® scale. For DM-Scan, the mean inter- and intra-observer agreement was 0.92, considerably higher than the agreement for visual inspection. The semiautomatic calculation of breast density using DM-Scan software is more reliable and reproducible than visual estimation and reduces the subjectivity and variability in determining breast density. Copyright © 2012 SERAM. Published by Elsevier España. All rights reserved.
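The intraclass correlation coefficient used above can be illustrated with a simple one-way, single-rater formulation; the study may well have used a different ICC form, so treat this as a generic sketch with made-up ratings.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way, single-rater intraclass correlation ICC(1,1).
    ratings: array of shape (n_targets, k_raters)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    target_means = ratings.mean(axis=1)
    msb = k * np.sum((target_means - grand_mean) ** 2) / (n - 1)          # between targets
    msw = np.sum((ratings - target_means[:, None]) ** 2) / (n * (k - 1))  # within targets
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical density ratings (e.g. Boyd categories) of 6 mammograms by 3 radiologists.
ratings = [[3, 3, 4],
           [1, 1, 1],
           [5, 4, 5],
           [2, 2, 3],
           [4, 4, 4],
           [2, 1, 2]]
print(round(icc_oneway(ratings), 3))
```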
Processing Satellite Data for Slant Total Electron Content Measurements
NASA Technical Reports Server (NTRS)
Stephens, Philip John (Inventor); Komjathy, Attila (Inventor); Wilson, Brian D. (Inventor); Mannucci, Anthony J. (Inventor)
2016-01-01
A method, system, and apparatus provide the ability to estimate ionospheric observables using space-borne observations. Space-borne global positioning system (GPS) data of ionospheric delay are obtained from a satellite. The space-borne GPS data are combined with ground-based GPS observations. The combination is utilized in a model to estimate a global three-dimensional (3D) electron density field.
Estimating topological properties of weighted networks from limited information
NASA Astrophysics Data System (ADS)
Gabrielli, Andrea; Cimini, Giulio; Garlaschelli, Diego; Squartini, Angelo
A typical problem met when studying complex systems is the limited information available on their topology, which hinders our understanding of their structural and dynamical properties. A paramount example is provided by financial networks, whose data are privacy protected. Yet, the estimation of systemic risk strongly depends on the detailed structure of the interbank network. The resulting challenge is that of using aggregate information to statistically reconstruct a network and correctly predict its higher-order properties. Standard approaches either generate unrealistically dense networks or fail to reproduce the observed topology by assigning homogeneous link weights. Here we develop a reconstruction method, based on statistical mechanics concepts, that exploits the empirical link density in a highly non-trivial way. Technically, our approach consists of a preliminary estimation of node degrees from empirical node strengths and link density, followed by a maximum-entropy inference based on a combination of empirical strengths and estimated degrees. Our method is successfully tested on the international trade network and the interbank money market, and represents a valuable tool for gaining insights into privacy-protected or partially accessible systems. Acknowledgement to ``Growthcom'' ICT - EC project (Grant No: 611272) and ``Crisislab'' Italian Project.
A storm-time plasmasphere evolution study using data assimilation
NASA Astrophysics Data System (ADS)
Nikoukar, R.; Bust, G. S.; Bishop, R. L.; Coster, A. J.; Lemon, C.; Turner, D. L.; Roeder, J. L.
2017-12-01
In this work, we study the evolution of the Earth's plasmasphere during geomagnetically active periods using the Plasmasphere Data Assimilation (PDA) model. Total electron content (TEC) measurements from an extensive network of global ground-based GPS receivers, as well as GPS receivers on board the Constellation Observing System for Meteorology, Ionosphere and Climate (COSMIC) satellites and the Communications/Navigation Outage Forecasting System (C/NOFS) satellite, are ingested into the model. The Global Core Plasma Model, an empirical plasmasphere model, is used as the background model. Based on 3D-VAR optimization, the PDA assimilative model benefits from the incorporation of regularization techniques to prevent non-physical altitudinal variation in density estimates due to the limited-angle observational geometry. This work focuses on the plasmapause location, plasmasphere erosion time scales and refilling rates during the main and recovery phases of geomagnetic storms, as estimated from the PDA three-dimensional global maps of electron density in the ionosphere/plasmasphere. A comparison of the PDA results with in-situ density measurements from THEMIS and the Van Allen Probes, and with the RCM-E first-principles model, will also be presented.
Statistical estimation of femur micro-architecture using optimal shape and density predictors.
Lekadir, Karim; Hazrati-Marangalou, Javad; Hoogendoorn, Corné; Taylor, Zeike; van Rietbergen, Bert; Frangi, Alejandro F
2015-02-26
The personalization of trabecular micro-architecture has been recently shown to be important in patient-specific biomechanical models of the femur. However, high-resolution in vivo imaging of bone micro-architecture using existing modalities is still infeasible in practice due to the associated acquisition times, costs, and X-ray radiation exposure. In this study, we describe a statistical approach for the prediction of the femur micro-architecture based on the more easily extracted subject-specific bone shape and mineral density information. To this end, a training sample of ex vivo micro-CT images is used to learn the existing statistical relationships within the low and high resolution image data. More specifically, optimal bone shape and mineral density features are selected based on their predictive power and used within a partial least square regression model to estimate the unknown trabecular micro-architecture within the anatomical models of new subjects. The experimental results demonstrate the accuracy of the proposed approach, with average errors of 0.07 for both the degree of anisotropy and tensor norms. Copyright © 2015 Elsevier Ltd. All rights reserved.
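A minimal sketch of the partial least squares regression step described above, assuming the scikit-learn API and randomly generated stand-in features rather than the study's shape and mineral density predictors.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
# Hypothetical training data: shape/density predictors -> micro-architecture descriptors.
X_train = rng.normal(size=(40, 200))   # e.g. selected shape + mineral density features
Y_train = rng.normal(size=(40, 6))     # e.g. degree of anisotropy, tensor norms, ...

# Fit a partial least squares regression mapping predictors to trabecular descriptors.
pls = PLSRegression(n_components=10)
pls.fit(X_train, Y_train)

# Predict the trabecular descriptors of a new subject from its shape/density features.
x_new = rng.normal(size=(1, 200))
print(pls.predict(x_new))
```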
Fouracre, David; Smith, Graham C.
2017-01-01
Policy development, implementation, and effective contingency response rely on a strong evidence base to ensure success and cost-effectiveness. Where this includes preventing the establishment or spread of zoonotic or veterinary diseases infecting companion cats and dogs, descriptions of the structure and density of these pet populations are useful. Similarly, such descriptions may help support diverse fields of study such as evidence-based veterinary practice, veterinary epidemiology, public health and ecology. As well as maps of where pets are, estimates of how many may rarely, or never, be seen by veterinarians and might not be appropriately managed in the event of a disease outbreak are also important. Unfortunately both sources of evidence are absent from the scientific and regulatory literatures. We make this first estimate of the structure and density of pet populations by using the most recent national population estimates of cats and dogs across Great Britain and subdividing these spatially and categorically across ownership classes. For the spatial model we used the location and size of veterinary practices across GB to predict the local density of pets, using client travel time to define catchments around practices, and combined this with residential address data to estimate the rate of ownership. For the estimates of pets that may provoke problems in managing a veterinary or zoonotic disease, we reviewed the literature and defined a comprehensive suite of ownership classes for cats and dogs, collated estimates of the sub-populations for each ownership class as well as their rates of interaction, and produced a coherent scaled description of the structure of the national population. The predicted density of pets varied substantially, with the lowest densities in rural areas and the highest in the centres of large cities, where each species could exceed 2,500 animals/km². Conversely, the number of pets per household showed the opposite relationship. Both qualitative and quantitative validation support key assumptions in the model structure and suggest the model is useful at predicting cat populations at geographical scales important for decision-making, although it also indicates where further research may improve model performance. In the event of an animal health crisis, it appears that almost all dogs could be brought under control rapidly. For cats, a substantial and unknown number might never be brought under control and would be less likely to receive veterinary support to facilitate surveillance and disease management; we estimate this to be at least 1.5 million cats. In addition, the lack of spare capacity to care for unowned cats in welfare organisations suggests that any increase in their rate of acquisition of cats, or any decrease in the rate of re-homing, might provoke problems during a period of crisis. PMID:28403172
Landers, Mark N.; Ankcorn, Paul D.
2008-01-01
The influence of onsite septic wastewater-treatment systems (OWTS) on base-flow quantity needs to be understood to evaluate consumptive use of surface-water resources by OWTS. If the influence of OWTS on stream base flow can be measured and if the inflow to OWTS is known from water-use data, then water-budget approaches can be used to evaluate consumptive use. This report presents a method to evaluate the influence of OWTS on ground-water recharge and base-flow quantity. Base flow was measured in Gwinnett County, Georgia, during an extreme drought in October 2007 in 12 watersheds that have low densities of OWTS (22 to 96 per square mile) and 12 watersheds that have high densities (229 to 965 per square mile) of OWTS. Mean base-flow yield in the high-density OWTS watersheds is 90 percent greater than in the low-density OWTS watersheds. The density of OWTS is statistically significant (p-value less than 0.01) in relation to base-flow yield as well as specific conductance. Specific conductance of base flow increases with OWTS density, which may indicate influence from treated wastewater. The study results indicate considerable unexplained variation in measured base-flow yield for reasons that may include: unmeasured processes, a limited dataset, and measurement errors. Ground-water recharge from a high density of OWTS is assumed to be steady state from year to year so that the annual amount of increase in base flow from OWTS is expected to be constant. In dry years, however, OWTS contributions represent a larger percentage of natural base flow than in wet years. The approach of this study could be combined with water-use data and analyses to estimate consumptive use of OWTS.
Evaluation of trapping-web designs
Lukacs, P.M.; Anderson, D.R.; Burnham, K.P.
2005-01-01
The trapping web is a method for estimating the density and abundance of animal populations. A Monte Carlo simulation study was performed to explore the performance of the trapping web for estimating animal density under a variety of web designs and animal behaviours. The trapping web performs well when animals have home ranges, even if the home ranges are large relative to trap spacing. Webs should contain at least 90 traps. Trapping should continue for 5-7 occasions. Movement rates have little impact on density estimates when animals are confined to home ranges. Estimation is poor when animals do not have home ranges and movement rates are rapid. The trapping web is useful for estimating the density of animals that are hard to detect and occur at potentially low densities. © CSIRO 2005.
Whittington, Jesse; Sawaya, Michael A
2015-01-01
Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggest that Banff National Park's population of grizzly bears requires continued conservation-oriented management actions.
Nonstationary Dynamics Data Analysis with Wavelet-SVD Filtering
NASA Technical Reports Server (NTRS)
Brenner, Marty; Groutage, Dale; Bessette, Denis (Technical Monitor)
2001-01-01
Nonstationary time-frequency analysis is used for identification and classification of aeroelastic and aeroservoelastic dynamics. Time-frequency multiscale wavelet processing generates discrete energy density distributions. The distributions are processed using the singular value decomposition (SVD). Discrete density functions derived from the SVD generate moments that detect the principal features in the data. The SVD standard basis vectors are applied and then compared with a transformed-SVD, or TSVD, which reduces the number of features into more compact energy density concentrations. Finally, from the feature extraction, wavelet-based modal parameter estimation is applied.
Do fossil plants signal palaeoatmospheric carbon dioxide concentration in the geological past?
McElwain, J. C.
1998-01-01
Fossil, subfossil, and herbarium leaves have been shown to provide a morphological signal of the atmospheric carbon dioxide environment in which they developed by means of their stomatal density and index. An inverse relationship between stomatal density/index and atmospheric carbon dioxide concentration has been documented in all studies to date concerning fossil and subfossil material. Furthermore, this relationship has been demonstrated experimentally by growing plants under elevated and reduced carbon dioxide concentrations. To date, the mechanism that controls the stomatal density response to atmospheric carbon dioxide concentration remains unknown. However, stomatal parameters of fossil plants have been successfully used as a proxy indicator of palaeo-carbon dioxide levels. This paper presents new estimates of palaeo-atmospheric carbon dioxide concentrations for the Middle Eocene (Lutetian), based on the stomatal ratios of fossil Lauraceae species from Bournemouth in England. Estimates of atmospheric carbon dioxide concentrations derived from stomatal data from plants of Early Devonian, Late Carboniferous, Early Permian and Middle Jurassic ages are reviewed in the light of new data. Semi-quantitative palaeo-carbon dioxide estimates based on the stomatal ratio (the ratio of the stomatal index of a fossil plant to that of a selected nearest living equivalent) have in the past relied on the use of a Carboniferous standard. The application of a new standard based on the present-day carbon dioxide level is reported here for comparison. The resultant ranges of palaeo-carbon dioxide estimates made from standardized fossil stomatal ratio data are in good agreement with both carbon isotopic data from terrestrial and marine sources and long-term carbon cycle modelling estimates for all the time periods studied. These data are important in demonstrating the long-term responses of plants to changing carbon dioxide concentrations and in contributing to the database needed for general circulation model climatic analogues.
Duell, Lowell F. W.
1990-01-01
In Owens Valley, evapotranspiration (ET) is one of the largest components of outflow in the hydrologic budget and the least understood. ET estimates for December 1983 through October 1985 were made for seven representative locations selected on the basis of geohydrology and the characteristics of phreatophytic alkaline scrub and meadow communities. The Bowen-ratio, eddy-correlation, and Penman-combination methods were used to estimate ET. The results of the analyses appear satisfactory when compared with other estimates of ET. Results by the eddy-correlation method are for a direct and a residual latent-heat flux that is based on sensible-heat flux and energy-budget measurements. Penman-combination potential-ET estimates were determined to be unusable because they overestimated actual ET. Modification of the psychrometer constant of this method to account for differences between heat-diffusion resistance and vapor-diffusion resistance permitted actual ET to be estimated. The methods described in this report may be used for studies in similar semiarid and arid rangeland areas in the Western United States. Meteorological data for three field sites are included in the appendix of this report. Simple linear regression analysis indicates that ET estimates are correlated to air temperature, vapor-density deficit, and net radiation. Estimates of annual ET range from 301 millimeters at a low-density scrub site to 1,137 millimeters at a high-density meadow site. The monthly percentage of annual ET was determined to be similar for all sites studied.
Are camera surveys useful for assessing recruitment in white-tailed deer?
Chitwood, M. Colter; Lashley, Marcus A.; Kilgo, John C.; ...
2016-12-27
Camera surveys commonly are used by managers and hunters to estimate white-tailed deer Odocoileus virginianus density and demographic rates. Though studies have documented biases and inaccuracies in the camera survey methodology, camera traps remain popular due to ease of use, cost-effectiveness, and ability to survey large areas. Because recruitment is a key parameter in ungulate population dynamics, there is a growing need to test the effectiveness of camera surveys for assessing fawn recruitment. At Savannah River Site, South Carolina, we used six years of camera-based recruitment estimates (i.e. fawn:doe ratio) to predict concurrently collected annual radiotag-based survival estimates. The coefficient of determination (R) was 0.445, indicating some support for the viability of cameras to reflect recruitment. Here, we added two years of data from Fort Bragg Military Installation, North Carolina, which improved R to 0.621 without accounting for site-specific variability. Also, we evaluated the correlation between year-to-year changes in recruitment and survival using the Savannah River Site data; R was 0.758, suggesting that camera-based recruitment could be useful as an indicator of the trend in survival. Because so few researchers concurrently estimate survival and camera-based recruitment, examining this relationship at larger spatial scales while controlling for numerous confounding variables remains difficult. We believe that future research should test the validity of our results in other areas with varying deer and camera densities, as site (e.g. presence of feral pigs Sus scrofa) and demographic (e.g. fawn age at time of camera survey) parameters may have a large influence on detectability. Until such biases are fully quantified, we urge researchers and managers to use caution when advocating the use of camera-based recruitment estimates.
González-Ferreiro, Eduardo; Arellano-Pérez, Stéfano; Castedo-Dorado, Fernando; Hevia, Andrea; Vega, José Antonio; Vega-Nieva, Daniel; Álvarez-González, Juan Gabriel; Ruiz-González, Ana Daría
2017-01-01
The fuel complex variables canopy bulk density and canopy base height are often used to predict crown fire initiation and spread. Direct measurement of these variables is impractical, and they are usually estimated indirectly by modelling. Recent advances in predicting crown fire behaviour require accurate estimates of the complete vertical distribution of canopy fuels. The objectives of the present study were to model the vertical profile of available canopy fuel in pine stands by using data from the Spanish national forest inventory plus low-density airborne laser scanning (ALS) metrics. In a first step, the vertical distribution of the canopy fuel load was modelled using the Weibull probability density function. In a second step, two different systems of models were fitted to estimate the canopy variables defining the vertical distributions; the first system related these variables to stand variables obtained in a field inventory, and the second system related the canopy variables to airborne laser scanning metrics. The models of each system were fitted simultaneously to compensate the effects of the inherent cross-model correlation between the canopy variables. Heteroscedasticity was also analyzed, but no correction in the fitting process was necessary. The estimated canopy fuel load profiles from field variables explained 84% and 86% of the variation in canopy fuel load for maritime pine and radiata pine respectively; whereas the estimated canopy fuel load profiles from ALS metrics explained 52% and 49% of the variation for the same species. The proposed models can be used to assess the effectiveness of different forest management alternatives for reducing crown fire hazard.
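The first modelling step above, representing the vertical canopy fuel load profile with a Weibull probability density function, can be sketched as a simple curve fit; the height and fuel values below are illustrative only, not inventory data.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_profile(h, load, shape, scale):
    """Canopy fuel load density at height h: total load times a two-parameter Weibull pdf."""
    return load * (shape / scale) * (h / scale) ** (shape - 1) * np.exp(-(h / scale) ** shape)

# Hypothetical profile: canopy fuel load per 1 m height bin (arbitrary units) vs. bin mid-height.
heights = np.arange(2.0, 16.0)
fuel = np.array([0.01, 0.03, 0.07, 0.12, 0.16, 0.18, 0.17, 0.14,
                 0.10, 0.06, 0.04, 0.02, 0.01, 0.005])

params, _ = curve_fit(weibull_profile, heights, fuel, p0=[1.0, 2.0, 8.0])
load_hat, shape_hat, scale_hat = params
print(load_hat, shape_hat, scale_hat)
```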
Estimating numbers of greater prairie-chickens using mark-resight techniques
Clifton, A.M.; Krementz, D.G.
2006-01-01
Current monitoring efforts for greater prairie-chicken (Tympanuchus cupido pinnatus) populations indicate that populations are declining across their range. Monitoring the population status of greater prairie-chickens is based on traditional lek surveys (TLS) that provide an index without considering detectability. Estimators, such as immigration-emigration joint maximum-likelihood estimator from a hypergeometric distribution (IEJHE), can account for detectability and provide reliable population estimates based on resightings. We evaluated the use of mark-resight methods using radiotelemetry to estimate population size and density of greater prairie-chickens on 2 sites at a tallgrass prairie in the Flint Hills of Kansas, USA. We used average distances traveled from lek of capture to estimate density. Population estimates and confidence intervals at the 2 sites were 54 (CI 50-59) on 52.9 km² and 87 (CI 82-94) on 73.6 km². The TLS performed at the same sites resulted in population ranges of 7-34 and 36-63 and always produced a lower population index than the mark-resight population estimate with a larger range. Mark-resight simulations with varying male:female ratios of marks indicated that this ratio was important in designing a population study on prairie-chickens. Confidence intervals for estimates when no marks were placed on females at the 2 sites (CI 46-50, 76-84) did not overlap confidence intervals when 40% of marks were placed on females (CI 54-64, 91-109). Population estimates derived using this mark-resight technique were apparently more accurate than traditional methods and would be more effective in detecting changes in prairie-chicken populations. Our technique could improve prairie-chicken management by providing wildlife biologists and land managers with a tool to estimate the population size and trends of lekking bird species, such as greater prairie-chickens.
MODIS Based Estimation of Forest Aboveground Biomass in China.
Yin, Guodong; Zhang, Yuan; Sun, Yan; Wang, Tao; Zeng, Zhenzhong; Piao, Shilong
2015-01-01
Accurate estimation of forest biomass C stock is essential to understand carbon cycles. However, current estimates of Chinese forest biomass are mostly based on inventory-based timber volumes and empirical conversion factors at the provincial scale, which could introduce large uncertainties in forest biomass estimation. Here we provide a data-driven estimate of Chinese forest aboveground biomass from 2001 to 2013 at a spatial resolution of 1 km by integrating a recently reviewed plot-level ground-measured forest aboveground biomass database with geospatial information from 1-km Moderate-Resolution Imaging Spectroradiometer (MODIS) dataset in a machine learning algorithm (the model tree ensemble, MTE). We show that Chinese forest aboveground biomass is 8.56 Pg C, which is mainly contributed by evergreen needle-leaf forests and deciduous broadleaf forests. The mean forest aboveground biomass density is 56.1 Mg C ha-1, with high values observed in temperate humid regions. The responses of forest aboveground biomass density to mean annual temperature are closely tied to water conditions; that is, negative responses dominate regions with mean annual precipitation less than 1300 mm y-1 and positive responses prevail in regions with mean annual precipitation higher than 2800 mm y-1. During the 2000s, the forests in China sequestered C by 61.9 Tg C y-1, and this C sink is mainly distributed in north China and may be attributed to warming climate, rising CO2 concentration, N deposition, and growth of young forests.
A Feature-based Approach to Big Data Analysis of Medical Images
Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M.
2015-01-01
This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct. PMID:26221685
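A plain k-nearest-neighbour density estimate, one of the two classical estimators the proposed generative model generalizes, can be sketched as follows (illustrative only; this is not the paper's estimator, and the feature data are random stand-ins).

```python
import numpy as np
from math import gamma, pi
from sklearn.neighbors import NearestNeighbors

def knn_density(query, data, k=10):
    """k-nearest-neighbour density estimate p(x) ~ k / (N * V_k(x)),
    where V_k(x) is the volume of the ball reaching the k-th neighbour of x."""
    data = np.asarray(data)
    n, d = data.shape
    nn = NearestNeighbors(n_neighbors=k).fit(data)
    dist, _ = nn.kneighbors(np.atleast_2d(query))
    r_k = dist[:, -1]                                    # distance to the k-th neighbour
    v_k = (pi ** (d / 2) / gamma(d / 2 + 1)) * r_k ** d  # volume of a d-ball of radius r_k
    return k / (n * v_k)

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 3))        # stand-in for 3D feature descriptors
print(knn_density(np.zeros(3), features, k=20))
```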
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2003-01-01
A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
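A minimal sketch of evaluating such a logistic model for the probability of obtaining regeneration at a specified density; the coefficients and predictor values are hypothetical, not the fitted values from the paper.

```python
import math

def regeneration_probability(coefs, predictors):
    """Logistic model: P(regeneration at the specified density) = 1 / (1 + exp(-x'b))."""
    eta = coefs[0] + sum(b * x for b, x in zip(coefs[1:], predictors))
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical coefficients and predictors (e.g. target density in trees/ha, basal area, site index).
print(regeneration_probability([-1.2, -0.002, 0.04, 0.01], [600.0, 18.0, 65.0]))
```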
NASA Astrophysics Data System (ADS)
Makovníková, Jarmila; Širáň, Miloš; Houšková, Beata; Pálka, Boris; Jones, Arwyn
2017-10-01
Soil bulk density is one of the main direct indicators of soil health and is an important input to models for determining agroecosystem services potential. By applying multi-regression methods, we have created a distributed prediction of soil bulk density, used subsequently for topsoil carbon stock estimation. The soil data used for this study were from the Slovakian partial monitoring system-soil database. In our work, two models of soil bulk density in an equilibrium state, with different combinations of input parameters (soil particle size distribution and soil organic carbon content in %), were created and subsequently validated using a data set from 15 principal sampling sites of the Slovakian partial monitoring system-soil that were different from those used to generate the bulk density equations. We compared measured bulk density data and data calculated by the pedotransfer equations against soil bulk density calculated according to equations recommended by the Joint Research Centre Sustainable Resources for Europe. The differences between measured soil bulk density and the model values vary from -0.144 to 0.135 g cm⁻³ in the verification data set. Furthermore, all models based on pedotransfer functions give moderately lower values. The soil bulk density model was then applied to generate a first approximation of a soil bulk density map for Slovakia using texture information from 17 523 sampling sites, and was subsequently utilised for topsoil organic carbon estimation.
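A minimal sketch of a pedotransfer-style bulk density prediction and the subsequent topsoil carbon stock calculation; the regression coefficients are hypothetical placeholders, not the fitted Slovakian equations.

```python
def bulk_density_pedotransfer(clay_pct, sand_pct, soc_pct, coefs=(1.60, -0.004, 0.001, -0.10)):
    """Hypothetical multi-regression pedotransfer form:
    BD = b0 + b1*clay + b2*sand + b3*SOC (g/cm^3); coefficients are illustrative only."""
    b0, b1, b2, b3 = coefs
    return b0 + b1 * clay_pct + b2 * sand_pct + b3 * soc_pct

def topsoil_carbon_stock(soc_pct, bulk_density_g_cm3, depth_cm):
    """Topsoil carbon stock in t C/ha: SOC(%) * BD (g/cm^3) * depth (cm)."""
    return soc_pct * bulk_density_g_cm3 * depth_cm

bd = bulk_density_pedotransfer(clay_pct=22.0, sand_pct=35.0, soc_pct=1.8)
print(bd, topsoil_carbon_stock(1.8, bd, depth_cm=30.0))
```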
Solichin Manuri; Hans-Erik Andersen; Robert J. McGaughey; Cris Brack
2017-01-01
The airborne lidar system (ALS) provides a means to efficiently monitor the status of remote tropical forests and continues to be the subject of intense evaluation. However, the cost of ALS acquisition can vary significantly depending on the acquisition parameters, particularly the return density (i.e., spatial resolution) of the lidar point cloud. This study assessed...
Modeling abundance effects in distance sampling
Royle, J. Andrew; Dawson, D.K.; Bates, S.
2004-01-01
Distance-sampling methods are commonly used in studies of animal populations to estimate population density. A common objective of such studies is to evaluate the relationship between abundance or density and covariates that describe animal habitat or other environmental influences. However, little attention has been focused on methods of modeling abundance covariate effects in conventional distance-sampling models. In this paper we propose a distance-sampling model that accommodates covariate effects on abundance. The model is based on specification of the distance-sampling likelihood at the level of the sample unit in terms of local abundance. This model is augmented with a Poisson regression model for local abundance that is parameterized in terms of available covariates. Maximum-likelihood estimation of detection and density parameters is based on the integrated likelihood, wherein local abundance is removed from the likelihood by integration. We provide an example using avian point-transect data of Ovenbirds (Seiurus aurocapillus) collected using a distance-sampling protocol and two measures of habitat structure (understory cover and basal area of overstory trees). The model yields a sensible description (positive effect of understory cover, negative effect of basal area) of the relationship between habitat and Ovenbird density that can be used to evaluate the effects of habitat management on Ovenbird populations.
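As a rough sketch of the integrated-likelihood idea described above (half-normal detection within a truncation radius, Poisson local abundance driven by a covariate, and abundance marginalized out so the counts follow a thinned Poisson), consider the following; the toy data, starting values, and optimizer choice are illustrative assumptions, not the authors' analysis.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import poisson

    w = 100.0                                   # truncation radius (m), assumed
    x = np.array([0.2, 0.5, 0.8, 0.3, 0.9])     # habitat covariate per point
    n = np.array([1, 2, 1, 1, 1])               # detections per point
    d = np.array([12.0, 35.0, 50.0, 20.0, 64.0, 8.0])   # detection distances

    def neg_log_lik(theta):
        b0, b1, log_sigma = theta
        sigma = np.exp(log_sigma)
        lam = np.exp(b0 + b1 * x)               # local expected abundance
        # average detection probability over the circle (half-normal)
        pbar = 2 * sigma**2 / w**2 * (1 - np.exp(-w**2 / (2 * sigma**2)))
        # local abundance integrated out: counts are Poisson(lambda * pbar)
        ll_counts = poisson.logpmf(n, lam * pbar).sum()
        # conditional density of observed distances given detection
        f_d = np.exp(-d**2 / (2 * sigma**2)) * 2 * d / w**2 / pbar
        return -(ll_counts + np.log(f_d).sum())

    fit = minimize(neg_log_lik, x0=[0.0, 0.0, np.log(30.0)], method="Nelder-Mead")
    print(fit.x)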
Investigation of estimators of probability density functions
NASA Technical Reports Server (NTRS)
Speed, F. M.
1972-01-01
Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.
NASA Astrophysics Data System (ADS)
Loehman, R.; Heinsch, F. A.; Mills, J. N.; Wagoner, K.; Running, S.
2003-12-01
Recent predictive models for hantavirus pulmonary syndrome (HPS) have used remotely sensed spectral reflectance data to characterize risk areas with limited success. We present an alternative method using gross primary production (GPP) from the MODIS sensor to estimate the effects of biomass accumulation on population density of Peromyscus maniculatus (deer mouse), the principal reservoir species for Sin Nombre virus (SNV). The majority of diagnosed HPS cases in North America are attributed to SNV, which is transmitted to humans through inhalation of excretions and secretions from infected rodents. A logistic model framework is used to evaluate MODIS GPP, temperature, and precipitation as predictors of P. maniculatus density at established trapping sites across the western United States. Rodent populations are estimated using monthly minimum number alive (MNA) data for 2000 through 2002. Both local meteorological data from nearby weather stations and 1.25 degree x 1 degree gridded data from the NASA DAO were used in the regression model to determine the spatial sensitivity of the response. MODIS eight-day GPP data (1-km resolution) were acquired and binned to monthly average and monthly sum GPP for 3km x 3km grids surrounding each rodent trapping site. The use of MODIS GPP to forecast HPS risk may result in a marked improvement over past reflectance-based risk area characterizations. The MODIS GPP product provides a vegetation dynamics estimate that is unique to disease models, and targets the fundamental ecological processes responsible for increased rodent density and amplified disease risk.
NASA Astrophysics Data System (ADS)
Ponte, Aurélien L.; Klein, Patrice; Dunphy, Michael; Le Gentil, Sylvie
2017-03-01
The performance of a tentative method that disentangles the contributions of a low-mode internal tide on sea level from that of the balanced mesoscale eddies is examined using an idealized high resolution numerical simulation. This disentanglement is essential for proper estimation from sea level of the ocean circulation related to balanced motions. The method relies on an independent observation of the sea surface water density whose variations (1) are dominated by the balanced dynamics and (2) correlate with variations of potential vorticity at depth for the chosen regime of surface-intensified turbulence. The surface density therefore leads via potential vorticity inversion to an estimate of the balanced contribution to sea level fluctuations. The difference between instantaneous sea level (presumably observed with altimetry) and the balanced estimate compares moderately well with the contribution from the low-mode tide. Application to realistic configurations remains to be tested. These results aim at motivating further developments of reconstruction methods of the ocean dynamics based on potential vorticity dynamics arguments. In that context, they are particularly relevant for the upcoming wide-swath high resolution altimetric missions (SWOT).
A data-based conservation planning tool for Florida panthers
Murrow, Jennifer L.; Thatcher, Cindy A.; Van Manen, Frank T.; Clark, Joseph D.
2013-01-01
Habitat loss and fragmentation are the greatest threats to the endangered Florida panther (Puma concolor coryi). We developed a data-based habitat model and user-friendly interface so that land managers can objectively evaluate Florida panther habitat. We used a geographic information system (GIS) and the Mahalanobis distance statistic (D2) to develop a model based on broad-scale landscape characteristics associated with panther home ranges. Variables in our model were Euclidean distance to natural land cover, road density, distance to major roads, human density, amount of natural land cover, amount of semi-natural land cover, amount of permanent or semi-permanent flooded area–open water, and a cost–distance variable. We then developed a Florida Panther Habitat Estimator tool, which automates and replicates the GIS processes used to apply the statistical habitat model. The estimator can be used by persons with moderate GIS skills to quantify effects of land-use changes on panther habitat at local and landscape scales. Example applications of the tool are presented.
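A minimal sketch of the D² idea used above: each landscape cell's covariate vector is scored by its squared Mahalanobis distance from the multivariate mean of conditions inside known panther home ranges, with smaller values indicating more panther-like habitat. The covariates and numbers below are hypothetical stand-ins, not the tool's actual inputs.

    import numpy as np

    def mahalanobis_d2(cells, used):
        # D2 of each cell relative to the mean/covariance of "used" habitat
        mu = used.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(used, rowvar=False))
        diff = cells - mu
        return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

    rng = np.random.default_rng(1)
    # three toy covariates, e.g. road density, human density, % natural cover
    used = rng.normal([1.0, 0.5, 60.0], [0.3, 0.2, 10.0], size=(200, 3))
    cells = rng.normal([1.5, 1.0, 40.0], [0.5, 0.5, 20.0], size=(5, 3))
    print(mahalanobis_d2(cells, used))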
NASA Technical Reports Server (NTRS)
Jergas, M.; Breitenseher, M.; Gluer, C. C.; Yu, W.; Genant, H. K.
1995-01-01
To determine whether estimates of volumetric bone density from projectional scans of the lumbar spine have weaker associations with height and weight and stronger associations with prevalent vertebral fractures than standard projectional bone mineral density (BMD) and bone mineral content (BMC), we obtained posteroanterior (PA) dual X-ray absorptiometry (DXA), lateral supine DXA (Hologic QDR 2000), and quantitative computed tomography (QCT, GE 9800 scanner) in 260 postmenopausal women enrolled in two trials of treatment for osteoporosis. In 223 women, all vertebral levels, i.e., L2-L4 in the DXA scan and L1-L3 in the QCT scan, could be evaluated. Fifty-five women were diagnosed as having at least one mild fracture (age 67.9 +/- 6.5 years) and 168 women did not have any fractures (age 62.3 +/- 6.9 years). We derived three estimates of "volumetric bone density" from PA DXA (BMAD, BMAD*, and BMD*) and three from paired PA and lateral DXA (WA BMD, WA BMDHol, and eVBMD). While PA BMC and PA BMD were significantly correlated with height (r = 0.49 and r = 0.28) or weight (r = 0.38 and r = 0.37), QCT and the volumetric bone density estimates from paired PA and lateral scans were not (r = -0.083 to r = 0.050). BMAD, BMAD*, and BMD* correlated with weight but not height. The associations with vertebral fracture were stronger for QCT (odds ratio [OR] = 3.17; 95% confidence interval [CI] = 1.90-5.27), eVBMD (OR = 2.87; CI 1.80-4.57), WA BMDHol (OR = 2.86; CI 1.80-4.55) and WA BMD (OR = 2.77; CI 1.75-4.39) than for BMAD*/BMD* (OR = 2.03; CI 1.32-3.12), BMAD (OR = 1.68; CI 1.14-2.48), lateral BMD (OR = 1.88; CI 1.28-2.77), standard PA BMD (OR = 1.47; CI 1.02-2.13) or PA BMC (OR = 1.22; CI 0.86-1.74). The areas under the receiver operating characteristic (ROC) curves for QCT and all estimates of volumetric BMD were significantly higher compared with standard PA BMD and PA BMC. We conclude that, like QCT, estimates of volumetric bone density from paired PA and lateral scans are unaffected by height and weight and are more strongly associated with vertebral fracture than standard PA BMD or BMC, or estimates of volumetric density that are solely based on PA DXA scans.
Characterization of a maximum-likelihood nonparametric density estimator of kernel type
NASA Technical Reports Server (NTRS)
Geman, S.; Mcclure, D. E.
1982-01-01
Kernel-type density estimators are calculated by the method of sieves. Proofs are presented for the characterization theorem: let x(1), x(2), ..., x(n) be a random sample from a population with density f0, let σ > 0, and consider estimators f of f0 defined by (1).
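The sieve construction referred to as (1) is not reproduced in the abstract; as a point of reference, a plain fixed-bandwidth Gaussian kernel estimator of the kind being characterized can be written as follows, with σ playing the role of the sieve/bandwidth parameter. The sample and evaluation grid are made up for illustration.

    import numpy as np

    def gaussian_kde(x_grid, sample, sigma):
        # f_hat(x) = (1 / (n * sigma)) * sum_i phi((x - x_i) / sigma)
        z = (x_grid[:, None] - sample[None, :]) / sigma
        return np.exp(-0.5 * z**2).sum(axis=1) / (len(sample) * sigma * np.sqrt(2 * np.pi))

    rng = np.random.default_rng(2)
    sample = rng.normal(size=500)
    grid = np.linspace(-4.0, 4.0, 9)
    print(gaussian_kde(grid, sample, sigma=0.3))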
Density estimation in aerial images of large crowds for automatic people counting
NASA Astrophysics Data System (ADS)
Herrmann, Christian; Metzler, Juergen
2013-05-01
Counting people is a common topic in the area of visual surveillance and crowd analysis. While many image-based solutions are designed to count only a few persons at the same time, like pedestrians entering a shop or watching an advertisement, there is hardly any solution for counting large crowds of several hundred persons or more. We addressed this problem previously by designing a semi-automatic system able to count crowds consisting of hundreds or thousands of people based on aerial images of demonstrations or similar events. This system requires major user interaction to segment the image. Our principal aim is to reduce this manual interaction. To achieve this, we propose a new and automatic system. Besides counting the people in large crowds, the system yields the positions of people, allowing a plausibility check by a human operator. To automate the people counting system, we use crowd density estimation. The determination of crowd density is based on several features such as edge intensity or spatial frequency. They indicate the density and discriminate between a crowd and other image regions like buildings, bushes or trees. We compare the performance of our automatic system to the previous semi-automatic system and to manual counting in images. By counting a test set of aerial images showing large crowds containing up to 12,000 people, the performance gain of our new system is measured. By improving our previous system, we increase the benefit of an image-based solution for counting people in large crowds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayer, J.
Based on a compilation of three estimation approaches, the total nationwide population of wild pigs in the United States numbers approximately 6.3 million animals, with that total estimate ranging from 4.4 up to 11.3 million animals. The majority of these numbers (99 percent), which were encompassed by ten states (i.e., Alabama, Arkansas, California, Florida, Georgia, Louisiana, Mississippi, Oklahoma, South Carolina and Texas), were based on defined estimation methodologies (e.g., density estimates correlated to the total potential suitable wild pig habitat statewide, statewide harvest percentages, statewide agency surveys regarding wild pig distribution and numbers). In contrast to the pre-1990 estimates, none of these more recent efforts, collectively encompassing 99 percent of the total, were based solely on anecdotal information or speculation. To that end, one can defensibly state that the wild pigs found in the United States number in the millions of animals, with the nationwide population estimated to arguably vary from about four million up to about eleven million individuals.
Large Scale Density Estimation of Blue and Fin Whales (LSD)
2014-09-30
This project develops a method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope with spatial variation in animal density.
Kimura, Satoko; Akamatsu, Tomonari; Li, Songhai; Dong, Shouyue; Dong, Lijun; Wang, Kexiong; Wang, Ding; Arai, Nobuaki
2010-09-01
A method is presented to estimate the density of finless porpoises using stationed passive acoustic monitoring. The number of click trains detected by stereo acoustic data loggers (A-tag) was converted to an estimate of the density of porpoises. First, an automated off-line filter was developed to detect a click train among noise, and the detection and false-alarm rates were calculated. Second, a density estimation model was proposed. The cue-production rate was measured by biologging experiments. The probability of detecting a cue and the area size were calculated from the source level, beam patterns, and a sound-propagation model. The effect of group size on the cue-detection rate was examined. Third, the proposed model was applied to estimate the density of finless porpoises at four locations from the Yangtze River to the inside of Poyang Lake. The estimated mean density of porpoises in a day decreased from the main stream to the lake. Long-term monitoring over 466 days from June 2007 to May 2009 showed variation in the density of 0-4.79 porpoises/km². However, the density was fewer than 1 porpoise/km² during 94% of the period. These results suggest a potential gap and seasonal migration of the population in the bottleneck of Poyang Lake.
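The density model combines the quantities listed above: detections corrected for false alarms, divided by the cue-production rate, the detection probability, and the monitored area. A back-of-envelope version is sketched below; every number is a hypothetical stand-in rather than a value from the study.

    n_trains    = 480      # click trains detected per day by the stereo logger
    false_alarm = 0.05     # fraction of detections judged to be noise
    cue_rate    = 2400.0   # click trains produced per porpoise per day (from biologging)
    p_detect    = 0.6      # probability of detecting a cue within the area
    area_km2    = 0.35     # effective monitored area (source level, beam, propagation)

    density = n_trains * (1.0 - false_alarm) / (cue_rate * p_detect * area_km2)
    print(f"estimated density: {density:.2f} porpoises/km^2")   # ~0.9 here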
An Evaluation of the Pea Pod System for Assessing Body Composition of Moderately Premature Infants.
Forsum, Elisabet; Olhager, Elisabeth; Törnqvist, Caroline
2016-04-22
(1) BACKGROUND: Assessing the quality of growth in premature infants is important in order to be able to provide them with optimal nutrition. The Pea Pod device, based on air displacement plethysmography, is able to assess body composition of infants. However, this method has not been sufficiently evaluated in premature infants; (2) METHODS: In 14 infants in an age range of 3-7 days, born after 32-35 completed weeks of gestation, body weight, body volume, fat-free mass density (predicted by the Pea Pod software), and total body water (isotope dilution) were assessed. Reference estimates of fat-free mass density and body composition were obtained using a three-component model; (3) RESULTS: Fat-free mass density values, predicted using Pea Pod, were biased but not significantly (p > 0.05) different from reference estimates. Body fat (%), assessed using Pea Pod, was not significantly different from reference estimates. The biological variability of fat-free mass density was 0.55% of the average value (1.0627 g/mL); (4) CONCLUSION: The results indicate that the Pea Pod system is accurate for groups of newborn, moderately premature infants. However, more studies where this system is used for premature infants are needed, and we provide suggestions regarding how to develop this area.
Jardínez, Christiaan; Vela, Alberto; Cruz-Borbolla, Julián; Alvarez-Mendez, Rodrigo J; Alvarado-Rodríguez, José G
2016-12-01
The relationship between the chemical structure and biological activity (log IC50) of 40 derivatives of 1,4-dihydropyridines (DHPs) was studied using density functional theory (DFT) and multiple linear regression analysis methods. With the aim of improving the quantitative structure-activity relationship (QSAR) model, the reduced density gradient s(r) of the optimized equilibrium geometries was used as a descriptor to include weak non-covalent interactions. The QSAR model highlights the correlation of log IC50 with the highest occupied molecular orbital energy (EHOMO), molecular volume (V), partition coefficient (log P), the non-covalent interactions NCI(H4-G), and the dual descriptor [Δf(r)]. The model yielded values of R² = 79.57 and Q² = 69.67 that were validated with the four internal analytical validations DK = 0.076, DQ = -0.006, RP = 0.056, and RN = 0.000, and the external validation Q²boot = 64.26. The QSAR model found can be used to estimate biological activity with high reliability in new compounds based on a DHP series. Graphical abstract: The good correlation between log IC50 and the NCI(H4-G) estimated by the reduced density gradient approach of the DHP derivatives.
Polar bear aerial survey in the eastern Chukchi Sea: A pilot study
Evans, Thomas J.; Fischbach, Anthony S.; Schliebe, Scott L.; Manly, Bryan; Kalxdorff, Susanne B.; York, Geoff S.
2003-01-01
Alaska has two polar bear populations: the Southern Beaufort Sea population, shared with Canada, and the Chukchi/Bering Seas population, shared with Russia. Currently a reliable population estimate for the Chukchi/Bering Seas population does not exist. Land-based aerial and mark-recapture population surveys may not be possible in the Chukchi Sea because variable ice conditions, the limited range of helicopters, extremely large polar bear home ranges, and severe weather conditions may limit access to remote areas. Thus line-transect aerial surveys from icebreakers may be the best available tool to monitor this polar bear stock. In August 2000, a line-transect survey was conducted in the eastern Chukchi Sea and western Beaufort Sea from helicopters based on a U.S. Coast Guard icebreaker under the "Ship of Opportunity" program. The objectives of this pilot study were to estimate polar bear density in the eastern Chukchi and western Beaufort Seas and to assess the logistical feasibility of using ship-based aerial surveys to develop polar bear population estimates. Twenty-nine polar bears in 25 groups were sighted on 94 transects (8257 km). The density of bears was estimated as 1 bear per 147 km² (CV = 38%). Additional aerial surveys in late fall, using dedicated icebreakers, would be required to achieve the number of sightings, survey effort, coverage, and precision needed for more effective monitoring of population trends in the Chukchi Sea.
Factors determining yield and quality of illicit indoor cannabis (Cannabis spp.) production.
Vanhove, Wouter; Van Damme, Patrick; Meert, Natalie
2011-10-10
The judiciary currently faces difficulties in adequately estimating the yield of illicit indoor cannabis plantations. These data are required for penalization, which is based on the profits gained. A full factorial experiment in which two overhead light intensities, two plant densities and four varieties were combined in the indoor cultivation of cannabis (Cannabis spp.) was used to reveal cannabis drug yield and quality under each of the factor combinations. The highest yield was found for the Super Skunk and Big Bud varieties, which also exhibited the highest concentrations of Δ(9)-tetrahydrocannabinol (THC). Results show that plant density and light intensity are additive factors, whereas the variety factor significantly interacts with both the plant density and light intensity factors. Adequate estimations of the yield of illicit indoor cannabis plantations can only be made if, upon seizure, all factors considered in this study are accounted for. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
AXIALLY ORIENTED SECTIONS OF NUMMULITIDS: A TOOL TO INTERPRET LARGER BENTHIC FORAMINIFERAL DEPOSITS
Hohenegger, Johann; Briguglio, Antonino
2015-01-01
The “critical shear velocity” and “settling velocity” of foraminiferal shells are important parameters for determining hydrodynamic conditions during deposition of Nummulites banks. These can be estimated by determining the size, shape, and density of nummulitid shells examined in axial sections cut perpendicular to the bedding plane. Shell size and shape can be determined directly from the shell diameter and thickness, but density must be calculated indirectly from the thin section. Calculations using the half-tori method approximate shell densities by equalizing the chamber volume of each half whorl, based on the half whorl’s lumen area and its center of gravity. Results from this method yield the same lumen volumes produced empirically by micro-computed tomography. The derived hydrodynamic parameters help estimate the minimum flow velocities needed to entrain nummulitid tests and provide a potential tool to account for the nature of their accumulations. PMID:26166914
NASA Astrophysics Data System (ADS)
Soja, Maciej J.; Blomberg, Erik; Ulander, Lars M. H.
2015-04-01
In this paper, a significant correlation between the HH/VV phase difference (polarisation phase difference, PPD) and the above-ground biomass (AGB) is observed for incidence angles above 30° in airborne P-band SAR data acquired over two boreal test sites in Sweden. A geometric model is used to explain the dependence of the AGB on tree height, stem radius, and tree number density, whereas a cylinder-over-ground model is used to explain the dependence of the PPD on the same three forest parameters. The models show that forest anisotropy needs to be accounted for at P-band in order to obtain a linear relationship between the PPD and the AGB. An approach to the estimation of tree number density is proposed, based on a comparison between the modelled and observed PPDs.
Dorazio, Robert; Kumar, N. Samba; Royle, Andy; Gopalaswamy, Arjun M.
2017-01-01
Tigers predominantly prey on large ungulate species, such as sambar (Cervus unicolor), red deer (Cervus elaphus), gaur (Bos gaurus), banteng (Bos javanicus), chital (Axis axis), muntjac (Muntiacus muntjak), wild pig (Sus scrofa), and bearded pig (Sus barbatus). The density of a tiger population is strongly correlated with the density of such prey species (Karanth et al. 2004). In the absence of direct hunting of tigers, abundance of prey in an area is the key determinant of the “carrying capacity” of that area for tigers (Chap. 2). Accurate estimates of prey abundance are often needed to assess the potential number of tigers a conservation area can support.
Pertuz, Said; McDonald, Elizabeth S; Weinstein, Susan P; Conant, Emily F; Kontos, Despina
2016-04-01
To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board-approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration-cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging-based estimates with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VDB estimates from DBT and MR imaging were not significant (P = .26). Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment.
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
Yong, Kamuela E; Mubayi, Anuj; Kribs, Christopher M
2015-11-01
The parasite Trypanosoma cruzi, spread by triatomine vectors, affects over 100 mammalian species throughout the Americas, including humans, in whom it causes Chagas' disease. In the U.S., only a few autochthonous cases have been documented in humans, but prevalence is high in sylvatic hosts (primarily raccoons in the southeast and woodrats in Texas). The sylvatic transmission of T. cruzi is spread by the vector species Triatoma sanguisuga and Triatoma gerstaeckeri biting their preferred hosts and thus creating multiple interacting vector-host cycles. The goal of this study is to quantify the rate of contacts between different host and vector species native to Texas using an agent-based model framework. The contact rates, which represent bites, are required to estimate transmission coefficients, which can be applied to models of infection dynamics. In addition to quantitative estimates, results confirm host irritability (in conjunction with host density) and vector starvation thresholds and dispersal as determining factors for vector density as well as host-vector contact rates. Copyright © 2015 Elsevier B.V. All rights reserved.
García del Barrio, J M; Ortega, M; Vázquez De la Cueva, A; Elena-Rosselló, R
2006-08-01
This paper mainly aims to study the influence of linear elements on the estimation of vascular plant species diversity in five Mediterranean landscapes modeled as land cover patch mosaics. These landscapes have several core habitats and a different set of linear elements--habitat edges or ecotones, roads or railways, rivers, streams and hedgerows on farm land--whose plant composition was examined. Secondly, it aims to check plant diversity estimation in Mediterranean landscapes using parametric and non-parametric procedures, with two indices: species richness and the Shannon index. Land cover types and landscape linear elements were identified from aerial photographs. Their spatial information was processed using GIS techniques. Field plots were selected using a stratified sampling design according to relief and tree density of each habitat type. A 50 × 20 m² multi-scale sampling plot was designed for the core habitats and across the main landscape linear elements. Richness and diversity of plant species were estimated by comparing the observed field data to the ICE (Incidence-based Coverage Estimator) and ACE (Abundance-based Coverage Estimator) non-parametric estimators. The species density, percentage of unique species, and alpha diversity per plot were significantly higher (p < 0.05) in linear elements than in core habitats. The ICE estimate of the number of species was 32% higher than the ACE estimate, which did not differ significantly from the observed values. Accumulated species richness in core habitats together with linear elements was significantly higher than that recorded only in the core habitats in all the landscapes. Conversely, the Shannon diversity index did not show significant differences.
NASA Technical Reports Server (NTRS)
Hajj, G. A.; Wilson, B. D.; Wang, C.; Pi, X.; Rosen, I. G.
2004-01-01
A three-dimensional (3-D) Global Assimilative Ionospheric Model (GAIM) is currently being developed by a joint University of Southern California and Jet Propulsion Laboratory (JPL) team. To estimate the electron density on a global grid, GAIM uses a first-principles ionospheric physics model and the Kalman filter as one of its possible estimation techniques.
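A single Kalman analysis step, of the kind such an assimilative model applies at each update, can be sketched as below; the state stands in for gridded electron density observed through line-of-sight integrals, and all shapes, values, and names are illustrative assumptions rather than GAIM's implementation.

    import numpy as np

    def kalman_update(x_f, P_f, y, H, R):
        # Blend the physics-model forecast x_f with observations y,
        # weighted by forecast covariance P_f and observation covariance R.
        S = H @ P_f @ H.T + R               # innovation covariance
        K = P_f @ H.T @ np.linalg.inv(S)    # Kalman gain
        x_a = x_f + K @ (y - H @ x_f)       # analysis (updated) state
        P_a = (np.eye(len(x_f)) - K @ H) @ P_f
        return x_a, P_a

    x_f = np.array([1.0e11, 8.0e10, 5.0e10])          # toy 3-cell density grid (el/m^3)
    P_f = np.diag([4e20, 4e20, 4e20])
    H = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])  # two line-of-sight integrals
    y = np.array([2.1e11, 1.2e11])
    R = np.diag([1e20, 1e20])
    print(kalman_update(x_f, P_f, y, H, R)[0])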
A new study on the emission of EM waves from large EAS
NASA Technical Reports Server (NTRS)
Pathak, K. M.; Mazumdar, G. K. D.
1985-01-01
A method used to locate the core of individual cosmic ray showers is described. Using a microprocessor-based detecting system, the density distribution and, hence, the energy of each detected shower were estimated.
Automation of GIS-based population data-collection for transportation risk analysis
DOT National Transportation Integrated Search
1999-11-01
Estimation of the potential radiological risks associated with highway transport of radioactive materials (RAM) requires input data describing population densities adjacent to all portions of the route to be traveled. Previously, aggregated risks...
Modeling of Density-Dependent Flow based on the Thermodynamically Constrained Averaging Theory
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Schultz, P. B.; Kelley, C. T.; Miller, C. T.; Gray, W. G.
2016-12-01
The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for density-dependent flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as diffusion arising from gradients associated with pressure and activity; and the ability to describe both high- and low-concentration displacement. The TCAT model is presented, closure relations for the model are postulated based on microscale averages, and a parameter estimation is performed on a subset of the experimental data. Due to the sharpness of the fronts, an adaptive moving-mesh technique was used to ensure grid-independent solutions within the run-time constraints. The optimized parameters are then used for forward simulations and compared to the set of experimental data not used for the parameter estimation.
The instantaneous frequency rate spectrogram
NASA Astrophysics Data System (ADS)
Czarnecki, Krzysztof
2016-01-01
An accelerogram of the instantaneous phase of signal components, referred to as an instantaneous frequency rate spectrogram (IFRS), is presented as a joint time-frequency distribution. The distribution is directly obtained by processing the short-time Fourier transform (STFT) locally. A novel approach to amplitude demodulation based upon the reassignment method is introduced as a useful by-product. Additionally, an estimator of energy density versus the instantaneous frequency rate (IFR) is proposed and referred to as the IFR profile. The energy density is estimated based upon both the classical energy spectrogram and the IFRS smoothed by the median filter. Moreover, the impact of the analyzing window width, additive white Gaussian noise, and observation time is tested. Finally, the introduced method is used for the analysis of the acoustic emission of an automotive engine. The recording of the engine of a Lamborghini Gallardo is analyzed as an example.
Monitoring crop coefficient of orange orchards using energy balance and the remote sensed NDVI
NASA Astrophysics Data System (ADS)
Consoli, Simona; Cirelli, Giuseppe Luigi; Toscano, Attilio
2006-09-01
The structure of vegetation is paramount in regulating the exchange of mass and energy across the biosphere-atmosphere interface. In particular, changes in vegetation density affect the partitioning of incoming solar energy into sensible and latent heat fluxes, which may result in persistent drought through reductions in agricultural productivity and in water resources availability. Limited research with citrus orchards has shown improvements to irrigation scheduling due to better water-use estimation and more appropriate timing of irrigation when crop coefficient (Kc) estimates, derived from remotely sensed multispectral vegetation indices (VIs), are incorporated into irrigation-scheduling algorithms. The purpose of this article is the application of an empirical reflectance-based model for the estimation of Kc and evapotranspiration fluxes (ET) using ground observations of climatic data and high-resolution VIs from ASTER TERRA satellite imagery. The remotely sensed Kc data were used in developing the relationship with the normalized difference vegetation index (NDVI) for orange orchards during summer periods. Validation of the remotely sensed ET, Kc, and vegetation data was carried out through ground observations and the resolution of the energy balance to derive latent heat flux density (λE), using measurements of net radiation (Rn) and soil heat flux density (G) and an estimate of sensible heat flux density (H) from high-frequency temperature measurements (surface renewal technique). The chosen case study is an irrigation area covered by orange orchards located in eastern Sicily (Italy) during the 2005 and 2006 irrigation seasons.
Escos, J.; Alados, C.L.; Emlen, John M.
1994-01-01
A stage-class population model with density-feedback term included was used to identify the most critical parameters determining the population dynamics of female Spanish ibex (Capra pyrenaica) in southern Spain. A population in the Cazorla and Segura mountains is rapidly declining, but the eastern Sierra Nevada population is growing. The stable population density obtained using estimated values of kid and adult survival (0.49 and 0.87, respectively) and with fecundity equal to 0.367 in the absence of density feedback is 12.7 or 16.82 individuals/km2, based on a non-time-lagged and a time-lagged model, respectively. Given the maximum estimate of fecundity and an adult survival rate of 0.87, a kid survival rate of at least 0.41 is required to avoid extinction. At the minimum fecundity estimate, kid survival would have to exceed 0.52. Elasticities were used to estimate the influence of variation in life-cycle parameters on the intrinsic rate of increase. Adult survival is the most critical parameter, while fecundity and juvenile survival are less important. An increase in adult survival from 0.87 to 0.91 in the Cazorla and Segura mountains population would almost stabilize the population in the absence of stochastic variation, while the same increase in the Sierra Nevada population would yield population growth of 4–5% per annum. A reduction in adult survival to 0.83 results in population decline in both cases.
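To illustrate the kind of stage-class projection with a density-feedback term described above, here is a toy two-stage (kid, adult) female-only projection. The survival and fecundity values echo the abstract, but the feedback form, the carrying density, and the area scaling are illustrative assumptions, not the authors' model.

    def project(years=50, kids=100.0, adults=300.0,
                s_kid=0.49, s_adult=0.87, fecundity=0.367,
                carrying_density=16.8, area_km2=100.0):
        # Logistic-type feedback on fecundity; vital rates per the abstract where given.
        densities = []
        for _ in range(years):
            density = (kids + adults) / area_km2
            feedback = max(0.0, 1.0 - density / carrying_density)
            new_kids = adults * fecundity * feedback
            adults = adults * s_adult + kids * s_kid
            kids = new_kids
            densities.append(density)
        return densities

    print(project()[-5:])   # density levels off near the feedback-set equilibrium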
Madenjian, Charles P.; David, Solomon R.; Pothoven, Steven A.
2012-01-01
We evaluated the performance of the Wisconsin bioenergetics model for lake trout Salvelinus namaycush that were fed ad libitum in laboratory tanks under regimes of low activity and high activity. In addition, we compared model performance under two different model algorithms: (1) balancing the lake trout energy budget on day t based on lake trout energy density on day t and (2) balancing the lake trout energy budget on day t based on lake trout energy density on day t + 1. Results indicated that the model significantly underestimated consumption for both inactive and active lake trout when algorithm 1 was used and that the degree of underestimation was similar for the two activity levels. In contrast, model performance substantially improved when using algorithm 2, as no detectable bias was found in model predictions of consumption for inactive fish and only a slight degree of overestimation was detected for active fish. The energy budget was accurately balanced by using algorithm 2 but not by using algorithm 1. Based on the results of this study, we recommend the use of algorithm 2 to estimate food consumption by fish in the field. Our study results highlight the importance of accurately accounting for changes in fish energy density when balancing the energy budget; furthermore, these results have implications for the science of evaluating fish bioenergetics model performance and for more accurate estimation of food consumption by fish in the field when fish energy density undergoes relatively rapid changes.
SOMKE: kernel density estimation over data streams by sequences of self-organizing maps.
Cao, Yuan; He, Haibo; Man, Hong
2012-08-01
In this paper, we propose a novel method, SOMKE, for kernel density estimation (KDE) over data streams based on sequences of self-organizing maps (SOMs). In many stream data mining applications, the traditional KDE methods are infeasible because of the high computational cost, processing time, and memory requirement. To reduce the time and space complexity, we propose a SOM structure in this paper to obtain well-defined data clusters to estimate the underlying probability distributions of incoming data streams. The main idea of this paper is to build a series of SOMs over the data streams via two operations, that is, creating and merging the SOM sequences. The creation phase produces the SOM sequence entries for windows of the data, which obtains clustering information of the incoming data streams. The size of the SOM sequences can be further reduced by combining the consecutive entries in the sequence based on the measure of Kullback-Leibler divergence. Finally, the probability density functions over arbitrary time periods along the data streams can be estimated using such SOM sequences. We compare SOMKE with two other KDE methods for data streams, the M-kernel approach and the cluster kernel approach, in terms of accuracy and processing time for various stationary data streams. Furthermore, we also investigate the use of SOMKE over nonstationary (evolving) data streams, including a synthetic nonstationary data stream, a real-world financial data stream and a group of network traffic data streams. The simulation results illustrate the effectiveness and efficiency of the proposed approach.
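A loose sketch of the two ingredients described above, not the published SOMKE algorithm: (i) a density estimate formed as a kernel mixture centred on SOM prototype vectors, weighted by how many stream points each prototype summarizes, and (ii) a Kullback-Leibler criterion for deciding whether two consecutive sequence entries are similar enough to merge. Prototype values, weights, bandwidth, and the merge threshold are all assumed for illustration.

    import numpy as np

    def prototype_kde(grid, prototypes, weights, sigma):
        # Weighted Gaussian mixture centred on the SOM prototypes.
        w = weights / weights.sum()
        z = (grid[:, None] - prototypes[None, :]) / sigma
        return (w[None, :] * np.exp(-0.5 * z**2)).sum(axis=1) / (sigma * np.sqrt(2 * np.pi))

    def kl_divergence(p, q, dx):
        # Discrete KL divergence between two gridded densities.
        p = np.clip(p, 1e-12, None)
        q = np.clip(q, 1e-12, None)
        return np.sum(p * np.log(p / q)) * dx

    grid = np.linspace(-5.0, 5.0, 201)
    dx = grid[1] - grid[0]
    old = prototype_kde(grid, np.array([-1.0, 0.0, 1.0]), np.array([30.0, 50.0, 20.0]), 0.5)
    new = prototype_kde(grid, np.array([-0.9, 0.1, 1.2]), np.array([25.0, 55.0, 20.0]), 0.5)
    if kl_divergence(old, new, dx) < 0.05:    # threshold is an assumption
        print("windows similar: merge the two SOM-sequence entries")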
The insight into the dark side - I. The pitfalls of the dark halo parameters estimation
NASA Astrophysics Data System (ADS)
Saburova, Anna S.; Kasparova, Anastasia V.; Katkov, Ivan Yu.
2016-12-01
We examined the reliability of estimates of pseudo-isothermal, Burkert and NFW dark halo parameters for methods based on mass-modelling of rotation curves. To do so, we constructed the χ² maps for the grid of dark matter halo parameters for a sample of 14 disc galaxies with high-quality rotation curves from THINGS. We considered two variants of models in which: (a) the mass-to-light ratios of disc and bulge were taken as free parameters, (b) the mass-to-light ratios were fixed in a narrow range according to the models of stellar populations. To reproduce possible observational features of real galaxies, we performed tests showing that the parameters of the three halo types change critically when kinematic data are lacking in the central or peripheral areas and for different spatial resolutions. We showed that, due to the degeneracy between the central densities and the radial scales of the dark haloes, there are considerable uncertainties in their concentration estimates. For this reason, it is also impossible to draw any firm conclusion about the universality of the dark halo column density based on mass-modelling of even a high-quality rotation curve. The problem is not solved by fixing the density of baryonic matter. In contrast, the estimates of dark halo mass within the optical radius are much more reliable. We demonstrated that one can successfully evaluate the halo mass using the pure best-fitting method without any restrictions on the mass-to-light ratios.
A long-term evaluation of biopsy darts and DNA to estimate cougar density
Beausoleil, Richard A.; Clark, Joseph D.; Maletzke, Benjamin T.
2016-01-01
Accurate estimation of cougar (Puma concolor) density is usually based on long-term research consisting of intensive capture and Global Positioning System collaring efforts and may cost hundreds of thousands of dollars annually. Because wildlife agency budgets rarely accommodate this approach, most agencies infer cougar density from published literature, rely on short-term studies, or use hunter harvest data as a surrogate in their jurisdictions; all of which may limit accuracy and increase the risk of management actions. In an effort to develop a more cost-effective long-term strategy, we evaluated a research approach using citizen scientists with trained hounds to tree cougars and collect tissue samples with biopsy darts. We then used the DNA to individually identify cougars and employed spatially explicit capture–recapture models to estimate cougar densities. Overall, 240 tissue samples were collected in northeastern Washington, USA, producing 166 genotypes (including recaptures and excluding dependent kittens) of 133 different cougars (8-25/yr) from 2003 to 2011. Mark–recapture analyses revealed a mean density of 2.2 cougars/100 km² (95% CI = 1.1-4.3) and stable to decreasing population trends (β = -0.048, 95% CI = -0.106 to 0.011) over the 9 years of study, with an average annual harvest rate of 14% (range = 7-21%). The average annual cost for field sampling and genotyping was US$11,265 ($422.24/sample or $610.73/successfully genotyped sample). Our results demonstrated that long-term biopsy sampling using citizen scientists can increase capture success and provide reliable cougar-density information at a reasonable cost.
NASA Astrophysics Data System (ADS)
Blaauw, Maarten; Christen, J. Andrés; Bennett, K. D.; Reimer, Paula J.
2018-05-01
Reliable chronologies are essential for most Quaternary studies, but little is known about how age-depth model choice, as well as dating density and quality, affect the precision and accuracy of chronologies. A meta-analysis suggests that most existing late-Quaternary studies contain fewer than one date per millennium, and provide millennial-scale precision at best. We use existing and simulated sediment cores to estimate what dating density and quality are required to obtain accurate chronologies at a desired precision. For many sites, a doubling in dating density would significantly improve chronologies and thus their value for reconstructing and interpreting past environmental changes. Commonly used classical age-depth models stop becoming more precise after a minimum dating density is reached, but the precision of Bayesian age-depth models which take advantage of chronological ordering continues to improve with more dates. Our simulations show that classical age-depth models severely underestimate uncertainty and are inaccurate at low dating densities, and also perform poorly at high dating densities. On the other hand, Bayesian age-depth models provide more realistic precision estimates, including at low to average dating densities, and are much more robust against dating scatter and outliers. Indeed, Bayesian age-depth models outperform classical ones at all tested dating densities, qualities and time-scales. We recommend that chronologies should be produced using Bayesian age-depth models taking into account chronological ordering and based on a minimum of 2 dates per millennium.
Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina
2016-09-01
Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76-0.92, P < .05). Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. © RSNA, 2016. Online supplemental material is available for this article. PMID:27002418
Annual measurements of gain and loss in aboveground carbon density
NASA Astrophysics Data System (ADS)
Baccini, A.; Walker, W. S.; Carvalho, L.; Farina, M.; Sulla-menashe, D. J.; Houghton, R. A.
2017-12-01
Tropical forests hold large stores of carbon, but their net carbon balance is uncertain. Land use and land-cover change (LULCC) are believed to release between 0.81 and 1.14 Pg C yr⁻¹, while intact native forests are thought to be a net carbon sink of approximately the same magnitude. Reducing the uncertainty of these estimates is not only fundamental to the advancement of carbon cycle science but is also of increasing relevance to national and international policies designed to reduce emissions from deforestation and forest degradation (e.g., REDD+). Contemporary approaches to estimating the net carbon balance of tropical forests rely on changes in forest area between two periods, typically derived from satellite data, together with information on average biomass density. These approaches tend to capture losses in biomass due to deforestation (i.e., wholesale stand removals) but are limited in their sensitivity to forest degradation (e.g., selective logging or single-tree removals), which can account for additional biomass losses on the order of 47-75% of deforestation. Furthermore, while satellite-based estimates of forest area loss have been used successfully to estimate associated carbon losses, few such analyses have endeavored to determine the rate of carbon sequestration in growing forests. Here we use 12 years (2003-2014) of pantropical satellite data to quantify net annual changes in the aboveground carbon density of woody vegetation (Mg C ha⁻¹ yr⁻¹), providing direct, measurement-based evidence that the world's tropical forests are a net carbon source of 425.2 ± 92.0 Tg C yr⁻¹. This net release of carbon consists of losses of 861.7 ± 80.2 Tg C yr⁻¹ and gains of -436.5 ± 31.0 Tg C yr⁻¹. Gains result from forest growth; losses result from reductions in forest area due to deforestation and from reductions in biomass density within standing forests (degradation), with the latter accounting for 68.9% of overall losses. Our findings advance previous research by providing novel, annual measurements of carbon losses and gains, from forest loss, degradation, and growth, with reduced uncertainty that stems from an unconventional shift in emphasis away from classifications of land area change toward direct estimation of carbon density dynamics.
Rotella, J.J.; Link, W.A.; Nichols, J.D.; Hadley, G.L.; Garrott, R.A.; Proffitt, K.M.
2009-01-01
Much of the existing literature that evaluates the roles of density-dependent and density-independent factors on population dynamics has been called into question in recent years because measurement errors were not properly dealt with in analyses. Using state-space models to account for measurement errors, we evaluated a set of competing models for a 22-year time series of mark-resight estimates of abundance for a breeding population of female Weddell seals (Leptonychotes weddellii) studied in Erebus Bay, Antarctica. We tested for evidence of direct density dependence in growth rates and evaluated whether equilibrium population size was related to seasonal sea-ice extent and the Southern Oscillation Index (SOI). We found strong evidence of negative density dependence in annual growth rates for a population whose estimated size ranged from 438 to 623 females during the study. Based on Bayes factors, a density-dependence-only model was favored over models that also included environmental covariates. According to the favored model, the population had a stationary distribution with a mean of 497 females (SD = 60.5), an expected growth rate of 1.10 (95% credible interval 1.08-1.15) when population size was 441 females, and a rate of 0.90 (95% credible interval 0.87-0.93) for a population of 553 females. A model including effects of SOI did receive some support and indicated a positive relationship between SOI and population size. However, effects of SOI were not large, and including the effect did not greatly reduce our estimate of process variation. We speculate that direct density dependence occurred because rates of adult survival, breeding, and temporary emigration were affected by limitations on per capita food resources and space for parturition and pup-rearing. To improve understanding of the relative roles of various demographic components and their associated vital rates to population growth rate, mark-recapture methods can be applied that incorporate both environmental covariates and the seal abundance estimates that were developed here. An improved understanding of why vital rates change with changing population abundance will only come as we develop a better understanding of the processes affecting marine food resources in the Southern Ocean.
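To illustrate why measurement error matters in such analyses, a toy Gompertz-type state-space simulation is sketched below: the true log-abundance follows density-dependent growth with process noise, and the mark-resight estimates add observation error on top. Parameter values are invented for illustration and are not those estimated for the Erebus Bay population.

    import numpy as np

    rng = np.random.default_rng(3)
    a, b = 1.2, -0.19              # growth intercept and density-dependence strength
    sd_proc, sd_obs = 0.05, 0.08   # process and observation (measurement) error SDs

    log_n = [np.log(450.0)]
    for _ in range(21):            # a 22-year series, matching the study length
        log_n.append(a + (1.0 + b) * log_n[-1] + rng.normal(0.0, sd_proc))
    log_n = np.array(log_n)
    observed = np.exp(log_n + rng.normal(0.0, sd_obs, size=log_n.size))
    print(observed.round(0))       # what a naive analysis would treat as the true series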
mBEEF-vdW: Robust fitting of error estimation density functionals
Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; ...
2016-06-15
Here, we propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator overmore » the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.« less
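For readers unfamiliar with the estimator being generalized, a minimal sketch of the classical 0.632 bootstrap estimate of prediction error follows; the hierarchical sampling and geometric-mean averaging described above are not reproduced here, and the least-squares example data are invented.

```python
# Minimal sketch of the classical 0.632 bootstrap estimate of prediction error,
# which the mBEEF-vdW fitting procedure generalizes. Illustrative least-squares fit.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=80)

def mse(X, y, coef):
    return np.mean((y - X @ coef) ** 2)

coef_full = np.linalg.lstsq(X, y, rcond=None)[0]
err_train = mse(X, y, coef_full)                 # apparent (optimistic) training error

B, oob_errors, n = 200, [], len(y)
for _ in range(B):
    idx = rng.integers(0, n, size=n)             # bootstrap resample with replacement
    oob = np.setdiff1d(np.arange(n), idx)        # out-of-bag points (~36.8% of the data)
    if oob.size == 0:
        continue
    coef_b = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    oob_errors.append(mse(X[oob], y[oob], coef_b))

err_632 = 0.368 * err_train + 0.632 * np.mean(oob_errors)
print(f"apparent error {err_train:.4f}, 0.632 estimate {err_632:.4f}")
```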
David W. Vahey; C. Tim Scott; J.Y. Zhu; Kenneth E. Skog
2012-01-01
Methods for estimating present and future carbon storage in trees and forests rely on measurements or estimates of tree volume or volume growth multiplied by specific gravity. Wood density can vary by tree ring and height in a tree. If data on density by tree ring could be obtained and linked to tree size and stand characteristics, it would be possible to more...
NASA Astrophysics Data System (ADS)
Wüst, Sabine; Schmidt, Carsten; Bittner, Michael; Silber, Israel; Price, Colin; Yee, Jeng-Hwa; Mlynczak, Martin G.; Russell, James M.
2017-03-01
In this study, we present an analysis of approximately four years of nightly temperature data acquired with the OH-spectrometer GRIPS 10 (GRound based Infrared P-branch Spectrometer), which was installed in Tel Aviv (32.11°N, 34.8°E), Israel, in November 2011 for routine measurements. Because the instrument itself provides no height information, we use TIMED-SABER data to determine which height region our measurement technique actually addresses. For the first time, we estimate the density of wave potential energy at this station for periods between a few minutes and a few hours; such periods are typical of gravity waves. Since GRIPS measurements do not currently provide vertically resolved data, the Brunt-Väisälä frequency, which is needed for the estimation of potential energy density, is calculated from TIMED-SABER measurements. The monthly mean density of wave potential energy is presented for periods shorter and longer than 60 min. For the winter months (November, December, and January), the database allows the calculation of a seasonal mean for the different years. This publication is the companion paper to Silber et al. (2016); here, we focus on oscillations with shorter periods.
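The potential energy density of gravity waves is commonly estimated from temperature fluctuations via E_p = (1/2)(g/N)^2 <(T'/T_mean)^2>, which is why the Brunt-Väisälä frequency N is required; whether the GRIPS processing uses exactly this form is an assumption here, and the numbers below are illustrative.

```python
# Sketch of the standard potential-energy-density relation for gravity waves.
# Whether the GRIPS pipeline uses exactly this form is assumed, not confirmed;
# all numerical values are illustrative.
import numpy as np

g = 9.81                      # gravitational acceleration, m/s^2
N = 0.02                      # Brunt-Väisälä frequency from TIMED-SABER profiles, 1/s (assumed)
T_mean = 190.0                # mean OH-layer temperature, K (illustrative)
T_prime = np.array([3.1, -2.4, 4.0, -3.5, 2.2])   # nightly temperature fluctuations, K

E_p = 0.5 * (g / N) ** 2 * np.mean((T_prime / T_mean) ** 2)
print(f"potential energy density ~ {E_p:.1f} J/kg")
```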
Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver
2013-01-01
Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production. PMID:23844144
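A minimal sketch of the core idea, combining an activity index with imperfectly detected carcass counts in a Poisson likelihood, is given below; the variable names, detection probability, and simulated data are assumptions, and the authors' actual mixture model is richer than this.

```python
# Minimal sketch: expected collisions are proportional to acoustic activity, and
# found carcasses are a thinned (imperfectly detected) Poisson count. Illustrative
# simplification of the carcass/activity mixture idea, not the published model.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

rng = np.random.default_rng(2)
activity = rng.gamma(shape=2.0, scale=30.0, size=30)  # acoustic bat passes per turbine
p_found = 0.4          # searcher efficiency x carcass persistence (assumed known here)
true_rate = 0.02       # collisions per recorded bat pass (quantity to be estimated)

carcasses = rng.poisson(true_rate * activity * p_found)

def neg_log_lik(rate):
    lam = rate * activity * p_found
    return -poisson.logpmf(carcasses, lam).sum()

fit = minimize_scalar(neg_log_lik, bounds=(1e-6, 1.0), method="bounded")
print("estimated collisions per bat pass:", round(fit.x, 4))
```

Once such a rate is estimated, the collision rate at a turbine without carcass searches can be predicted from its activity index alone, which is the prediction step described above.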
Problems with sampling desert tortoises: A simulation analysis based on field data
Freilich, J.E.; Camp, R.J.; Duda, J.J.; Karl, A.E.
2005-01-01
The desert tortoise (Gopherus agassizii) was listed as a U.S. threatened species in 1990 based largely on population declines inferred from mark-recapture surveys of 2.59-km2 (1-mi2) plots. Since then, several census methods have been proposed and tested, but all methods still pose logistical or statistical difficulties. We conducted computer simulations using actual tortoise location data from two 1-mi2 plot surveys in southern California, USA, to identify strengths and weaknesses of current sampling strategies. We considered tortoise population estimates based on these plots as "truth" and then tested various sampling methods based on sampling smaller plots or transect lines passing through the mile squares. Data were analyzed using Schnabel's mark-recapture estimate and program CAPTURE. Experimental subsampling with replacement of the 1-mi2 data using 1-km2 and 0.25-km2 plot boundaries produced data sets of smaller plot sizes, which we compared to estimates from the 1-mi2 plots. We also tested distance sampling by saturating a 1-mi2 site with computer-simulated transect lines, once again evaluating bias in density estimates. Subsampling estimates from 1-km2 plots did not differ significantly from the estimates derived at 1-mi2. The 0.25-km2 subsamples significantly overestimated population sizes, chiefly because too few recaptures were made. Distance sampling simulations were biased 80% of the time and had high coefficient-of-variation-to-density ratios. Furthermore, a prospective power analysis suggested limited ability to detect population declines as high as 50%. We concluded that poor performance and bias of both sampling procedures were driven by insufficient sample size, suggesting that all efforts must be directed to increasing the numbers found in order to produce reliable results. Our results suggest that present methods may not be capable of accurately estimating desert tortoise populations.
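The Schnabel multiple-census estimator named above can be written compactly; the following is a minimal sketch with made-up catch and recapture counts (the function name and numbers are illustrative, not data from the study).

```python
# Minimal sketch of the Schnabel multiple-census estimator used in the tortoise
# simulations. Catch and recapture numbers are invented for illustration.
def schnabel(catch, recaptures):
    """catch[t]: animals caught on occasion t; recaptures[t]: how many of them were already marked."""
    marked_before, numerator, total_recaps = 0, 0.0, 0
    for c, r in zip(catch, recaptures):
        numerator += c * marked_before
        total_recaps += r
        marked_before += c - r          # newly marked animals join the marked pool
    # Chapman's modification (+1) keeps the estimate finite when recaptures are few
    return numerator / (total_recaps + 1)

catch      = [15, 18, 12, 20, 16]
recaptures = [0,  3,  4,  7,  6]
print("estimated population size:", round(schnabel(catch, recaptures)))
```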
Nicolas, Xavier; Djebli, Nassim; Rauch, Clémence; Brunet, Aurélie; Hurbin, Fabrice; Martinez, Jean-Marie; Fabre, David
2018-05-03
Alirocumab, a human monoclonal antibody against proprotein convertase subtilisin/kexin type 9 (PCSK9), significantly lowers low-density lipoprotein cholesterol levels. This analysis aimed to develop and qualify a population pharmacokinetic/pharmacodynamic model for alirocumab based on pooled data obtained from 13 phase I/II/III clinical trials. From a dataset of 2799 individuals (14,346 low-density lipoprotein-cholesterol values), individual pharmacokinetic parameters from the population pharmacokinetic model presented in Part I of this series were used to estimate alirocumab concentrations. As a second step, we then developed the current population pharmacokinetic/pharmacodynamic model using an indirect response model with a Hill coefficient, parameterized with increasing low-density lipoprotein cholesterol elimination, to relate alirocumab concentrations to low-density lipoprotein cholesterol values. The population pharmacokinetic/pharmacodynamic model allowed the characterization of the pharmacokinetic/pharmacodynamic properties of alirocumab in the target population and estimation of individual low-density lipoprotein cholesterol levels and derived pharmacodynamic parameters (the maximum decrease in low-density lipoprotein cholesterol values from baseline and the difference between baseline low-density lipoprotein cholesterol and the pre-dose value before the next alirocumab dose). Significant parameter-covariate relationships were retained in the model, with a total of ten covariates (sex, age, weight, free baseline PCSK9, total time-varying PCSK9, concomitant statin administration, total baseline PCSK9, co-administration of high-dose statins, disease status) included in the final population pharmacokinetic/pharmacodynamic model to explain between-subject variability. Nevertheless, the high number of covariates included in the model did not have a clinically meaningful impact on model-derived pharmacodynamic parameters. This model successfully allowed the characterization of the population pharmacokinetic/pharmacodynamic properties of alirocumab in its target population and the estimation of individual low-density lipoprotein cholesterol levels.
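As a rough illustration of the indirect-response structure described above (drug concentration stimulating LDL-C elimination through a Hill function), the following sketch integrates such a model with purely illustrative parameters and an assumed concentration profile; it is not the published alirocumab model.

```python
# Minimal sketch of an indirect-response model with Hill-type stimulation of LDL-C
# elimination. Parameter values and the concentration profile are illustrative
# assumptions, not the published population estimates.
import numpy as np
from scipy.integrate import solve_ivp

k_out, ldl0 = 0.1, 130.0          # elimination rate (1/day) and baseline LDL-C (mg/dL)
k_in = k_out * ldl0               # production rate fixed by the baseline
emax, c50, gamma = 1.5, 5.0, 1.8  # maximal stimulation, potency (mg/L), Hill coefficient

def conc(t):
    # crude single-dose concentration profile (mg/L), purely illustrative
    return 20.0 * np.exp(-0.08 * t)

def dldl_dt(t, ldl):
    c = conc(t)
    stim = 1.0 + emax * c**gamma / (c50**gamma + c**gamma)
    return k_in - k_out * stim * ldl

sol = solve_ivp(dldl_dt, (0.0, 56.0), [ldl0], dense_output=True)
t = np.linspace(0.0, 56.0, 8)
print(np.round(sol.sol(t)[0], 1))   # LDL-C trajectory over 8 weeks
```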
Reliability and precision of pellet-group counts for estimating landscape-level deer density
David S. deCalesta
2013-01-01
This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...
Estimating stem volume and biomass of Pinus koraiensis using LiDAR data.
Kwak, Doo-Ahn; Lee, Woo-Kyun; Cho, Hyun-Kook; Lee, Seung-Ho; Son, Yowhan; Kafatos, Menas; Kim, So-Ra
2010-07-01
The objective of this study was to estimate the stem volume and biomass of individual trees using the crown geometric volume (CGV), which was extracted from small-footprint light detection and ranging (LiDAR) data. Attempts were made to analyze the stem volume and biomass of Korean pine stands (Pinus koraiensis Sieb. et Zucc.) for three classes of tree density: low (240 N/ha), medium (370 N/ha), and high (1,340 N/ha). To delineate individual trees, extended maxima transformation and watershed segmentation of image processing methods were applied, as in one of our previous studies. Next, the crown base height (CBH) of each tree was determined from the LiDAR point cloud data using k-means clustering. The stem volume was then estimated from the LiDAR-derived CGV on the basis of the proportional relationship between CGV and stem volume. As a result, low tree-density plots had the best performance for LiDAR-derived CBH, CGV, and stem volume (R2 = 0.67, 0.57, and 0.68, respectively), and accuracy was lowest for high tree-density plots (R2 = 0.48, 0.36, and 0.44, respectively); for the medium tree-density plots, R2 = 0.51, 0.52, and 0.62, respectively. The LiDAR-derived stem biomass can be predicted from the stem volume using the wood basic density of coniferous trees (0.48 g/cm3), and the LiDAR-derived above-ground biomass can then be estimated from the stem volume using the biomass conversion and expansion factor (BCEF, 1.29) proposed by the Korea Forest Research Institute (KFRI).
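As a small worked example of the two conversions described above (stem volume to stem biomass via basic density, and stem volume to above-ground biomass via the BCEF), with an assumed stem volume:

```python
# Worked example of the volume-to-biomass conversions described in the abstract.
# The example stem volume is an assumed value, not a figure from the study.
WOOD_DENSITY_T_PER_M3 = 0.48   # basic density of coniferous wood (0.48 g/cm3 = 0.48 t/m3)
BCEF = 1.29                    # biomass conversion and expansion factor (KFRI)

def stem_biomass_t(stem_volume_m3: float) -> float:
    return stem_volume_m3 * WOOD_DENSITY_T_PER_M3

def aboveground_biomass_t(stem_volume_m3: float) -> float:
    return stem_volume_m3 * BCEF

v = 0.85  # LiDAR-derived stem volume of a single tree, m^3 (illustrative)
print(f"stem biomass: {stem_biomass_t(v):.3f} t, above-ground biomass: {aboveground_biomass_t(v):.3f} t")
```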
NASA Astrophysics Data System (ADS)
Varade, D. M.; Dikshit, O.
2017-12-01
Modeling and forecasting of snowmelt runoff are significant for understanding hydrological processes in the cryosphere and require timely information on snow physical properties such as the liquid water content and density of the topmost layer of the snowpack. Both seasonal runoff and avalanche forecasting depend heavily on the inherent physical characteristics of the snowpack, which are conventionally measured by field surveys in difficult terrain at considerable cost and manpower. With advances in remote sensing technology and the increasing availability of satellite data, the frequency and extent of such surveys could decline in the future. In this study, we present a novel approach for estimating snow wetness and snow density using visible and infrared bands that are available on most multi-spectral sensors. We define a trapezoidal feature space based on the spectral reflectance in the near-infrared band and the Normalized Difference Snow Index (NDSI), referred to as the NIR-NDSI space, in which dry snow and wet snow are observed toward the upper-left and lower-right corners, respectively. The corresponding pixels are extracted by approximating the dry and wet edges, which are used to develop a linear physical model for estimating snow wetness. Snow density is then estimated from the modeled snow wetness. Although the proposed approach uses Sentinel-2 data, it can be extended to incorporate data from other multi-spectral sensors. The estimated values of snow wetness and snow density show high correlation with in-situ measurements. The proposed model opens a new avenue for remote sensing of snow physical properties from multi-spectral data, for which the literature has so far been limited.
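A highly simplified sketch of the NIR-NDSI idea follows: compute NDSI from green and SWIR reflectances and scale a pixel's position between assumed dry and wet edges into a wetness fraction; the edge coefficients and the linear scaling are illustrative assumptions, not the model derived in the paper.

```python
# Highly simplified sketch of the NIR-NDSI trapezoid idea. Edge coefficients and
# reflectance values are assumptions for illustration only.
import numpy as np

def ndsi(green, swir):
    return (green - swir) / (green + swir)

def wetness_index(nir, ndsi_val):
    # assumed linear dry and wet edges in the NIR-NDSI plane (illustrative coefficients)
    nir_dry = 0.95 - 0.10 * ndsi_val
    nir_wet = 0.35 + 0.05 * ndsi_val
    # position of the pixel between the dry (high NIR) and wet (low NIR) edges
    w = (nir_dry - nir) / (nir_dry - nir_wet)
    return float(np.clip(w, 0.0, 1.0))

green, swir, nir = 0.72, 0.10, 0.68   # illustrative surface reflectances
print("NDSI:", round(ndsi(green, swir), 3), "wetness index:", round(wetness_index(nir, ndsi(green, swir)), 3))
```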
Effects of LiDAR point density and landscape context on the retrieval of urban forest biomass
NASA Astrophysics Data System (ADS)
Singh, K. K.; Chen, G.; McCarter, J. B.; Meentemeyer, R. K.
2014-12-01
Light Detection and Ranging (LiDAR), as an alternative to conventional optical remote sensing, is increasingly used to estimate aboveground forest biomass accurately, from the individual tree to the stand level. Recent advancements in LiDAR technology have resulted in higher point densities and better data accuracies, which, however, pose challenges to the procurement and processing of LiDAR data for large-area assessments. Reducing point density lowers data acquisition costs and eases computational demands for broad-scale forest management. However, how does this affect the accuracy of biomass estimation in an urban environment subject to substantial anthropogenic disturbance? The main goal of this study was to evaluate the effects of LiDAR point density on biomass estimation for remnant forests in the rapidly urbanizing region of Charlotte, North Carolina, USA. We used multiple linear regression to establish the statistical relationship between field-measured biomass and predictor variables (PVs) derived from LiDAR point clouds of varying densities. We compared the estimation accuracies of general Urban Forest models (no discrimination of forest type) and Forest Type models (evergreen, deciduous, and mixed), and then quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance (adjusted R2) of the Urban Forest models was fairly consistent across the reduced point densities, with a maximum difference of 11.5% between the 100% and 1% point densities. The combined estimates of the Forest Type biomass models outperformed the Urban Forest models at two representative point densities (100% and 40%). The Urban Forest biomass model using development density within a 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, indicating that the proximity of development affects biomass estimation. Our evaluation suggests that reducing LiDAR point density is a viable solution for regional-scale forest biomass assessment without compromising estimation accuracy, which may be further improved using development density.
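The regression step described above amounts to ordinary least squares followed by an adjusted-R2 comparison; a minimal sketch with simulated placeholder data (not the Charlotte field plots) is shown below.

```python
# Minimal sketch of the regression step: fit biomass on LiDAR-derived predictor
# variables and report adjusted R^2. Data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(3)
n, p = 60, 4
X = rng.normal(size=(n, p))                    # e.g. height percentiles, canopy cover metrics
beta = np.array([12.0, 5.0, -3.0, 0.0])
biomass = 80.0 + X @ beta + rng.normal(scale=10.0, size=n)

Xd = np.column_stack([np.ones(n), X])          # add intercept column
coef, *_ = np.linalg.lstsq(Xd, biomass, rcond=None)
resid = biomass - Xd @ coef

ss_res = np.sum(resid ** 2)
ss_tot = np.sum((biomass - biomass.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
print(f"R^2 = {r2:.3f}, adjusted R^2 = {adj_r2:.3f}")
```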
Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2006-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
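For a single scalar state, the truncation step described above reduces to computing the mean (and variance) of a Gaussian restricted to the constraint interval; a minimal sketch, with illustrative numbers rather than values from the turbofan model, is given below.

```python
# Minimal sketch of the PDF-truncation idea for one scalar state: take the
# unconstrained Kalman estimate (mean, variance), truncate its Gaussian PDF at the
# known bounds, and use the moments of the truncated PDF as the constrained estimate.
from scipy.stats import norm

def truncate_gaussian(mean, var, lower, upper):
    sd = var ** 0.5
    a, b = (lower - mean) / sd, (upper - mean) / sd
    z = norm.cdf(b) - norm.cdf(a)                       # probability mass kept
    new_mean = mean + sd * (norm.pdf(a) - norm.pdf(b)) / z
    new_var = var * (1 + (a * norm.pdf(a) - b * norm.pdf(b)) / z
                     - ((norm.pdf(a) - norm.pdf(b)) / z) ** 2)
    return new_mean, new_var

# Health parameter estimated at 1.03 with variance 0.01, but physically bounded in [0, 1]
print(truncate_gaussian(1.03, 0.01, 0.0, 1.0))
```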
Topics in global convergence of density estimates
NASA Technical Reports Server (NTRS)
Devroye, L.
1982-01-01
The problem of estimating a density f on R^d from a sample X(1), ..., X(n) of independent identically distributed random vectors is critically examined, and some recent results in the field are reviewed. The following statements are qualified: (1) for any sequence of density estimates f(n), an arbitrarily slow rate of convergence to 0 is possible for E(∫|f(n) − f|); (2) in theoretical comparisons of density estimates, ∫|f(n) − f| should be used and not ∫|f(n) − f|^p, p > 1; and (3) for most reasonable nonparametric density estimates, either ∫|f(n) − f| converges (and then the convergence is in the strongest possible sense for all f), or there is no convergence (even in the weakest possible sense for a single f). There is no intermediate situation.
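As a small numerical illustration of the L1 criterion advocated above, the following sketch computes ∫|f(n) − f| for a Gaussian kernel estimate of a standard normal density; the sample size and bandwidth choice are arbitrary.

```python
# Numerical illustration of the L1 error, integral |f_n - f|, for a Gaussian kernel
# density estimate of a standard normal density. Sample size is an arbitrary choice.
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(4)
sample = rng.standard_normal(200)
f_n = gaussian_kde(sample)                      # kernel density estimate, default bandwidth

grid = np.linspace(-6.0, 6.0, 2001)
dx = grid[1] - grid[0]
l1_error = np.sum(np.abs(f_n(grid) - norm.pdf(grid))) * dx   # numerical integral of |f_n - f|
print(f"integral |f_n - f| is approximately {l1_error:.3f}")
```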