Sample records for density estimates obtained

  1. Brain Tissue Compartment Density Estimated Using Diffusion-Weighted MRI Yields Tissue Parameters Consistent With Histology

    PubMed Central

    Sepehrband, Farshid; Clark, Kristi A.; Ullmann, Jeremy F.P.; Kurniawan, Nyoman D.; Leanage, Gayeshika; Reutens, David C.; Yang, Zhengyi

    2015-01-01

    We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intra-cellular and intra-neurite density values obtained from diffusion MRI. The NODDI (neurite orientation dispersion and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different sub-regions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (Diffusion MRI: 42±6%, 36±4% and 43±5%; electron microscopy: 41±10%, 36±8% and 44±12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers. PMID:26096639

  2. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
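
    A minimal sketch of the kind of logistic model described above: the binary response indicates whether shortleaf pine regeneration reached a specified density, and the parameters are fit by maximum likelihood. The covariates (basal area, years since harvest) and all numbers are hypothetical placeholders, not the variables used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n),                 # intercept
                     rng.uniform(5, 30, n),      # hypothetical basal area (m^2/ha)
                     rng.uniform(1, 15, n)])     # hypothetical years since harvest
beta_true = np.array([-2.0, -0.05, 0.3])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))  # 1 = density threshold met

def neg_log_lik(beta):
    eta = X @ beta
    # negative Bernoulli log-likelihood under a logistic link
    return -np.sum(y * eta - np.logaddexp(0.0, eta))

fit = minimize(neg_log_lik, x0=np.zeros(3), method="BFGS")
p_new = 1.0 / (1.0 + np.exp(-fit.x @ np.array([1.0, 20.0, 8.0])))
print("estimated coefficients:", fit.x)
print("P(threshold met | basal area 20, 8 years):", round(float(p_new), 3))
```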

  3. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
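
    A minimal Monte Carlo sketch of the detection-probability step described above: the passive sonar equation predicts the received SNR of a click, and a detector characterization maps SNR to detection probability. All distributions and constants here (source level, beam-pattern loss, noise level, absorption, detector curve) are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def transmission_loss(r_m, alpha_db_per_km=30.0):
    # assumed spherical spreading plus linear absorption
    return 20.0 * np.log10(np.maximum(r_m, 1.0)) + alpha_db_per_km * r_m / 1000.0

def detector_prob(snr_db, snr50=10.0, slope=1.5):
    # assumed logistic detector characterization: P(detect) as a function of SNR
    return 1.0 / (1.0 + np.exp(-(snr_db - snr50) / slope))

def p_detect_at_range(r_m, n_draws=10_000):
    source_level = rng.normal(200.0, 5.0, n_draws)    # dB re 1 uPa at 1 m (assumed)
    off_axis_loss = rng.uniform(0.0, 20.0, n_draws)   # crude stand-in for the beam pattern
    noise_level = rng.normal(60.0, 3.0, n_draws)      # assumed ambient noise level (dB)
    snr = source_level - off_axis_loss - transmission_loss(r_m) - noise_level
    return detector_prob(snr).mean()

for r in (500.0, 2000.0, 5000.0):
    print(f"range {r:6.0f} m   mean P(detect) = {p_detect_at_range(r):.3f}")
```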

  4. Comparing methods to estimate Reineke’s maximum size-density relationship species boundary line slope

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2010-01-01

    Maximum size-density relationships (MSDR) provide natural resource managers useful information about the relationship between tree density and average tree size. Obtaining a valid estimate of how maximum tree density changes as average tree size changes is necessary to accurately describe these relationships. This paper examines three methods to estimate the slope of...

  5. Features of the normal choriocapillaris with OCT-angiography: Density estimation and textural properties.

    PubMed

    Montesano, Giovanni; Allegrini, Davide; Colombo, Leonardo; Rossetti, Luca M; Pece, Alfredo

    2017-01-01

    The main objective of our work is to perform an in-depth analysis of the structural features of normal choriocapillaris imaged with OCT Angiography. Specifically, we provide an optimal radius for a circular Region of Interest (ROI) to obtain a stable estimate of the subfoveal choriocapillaris density and characterize its textural properties using Markov Random Fields. On each binarized image of the choriocapillaris OCT Angiography we performed simulated measurements of the subfoveal choriocapillaris densities with circular Regions of Interest (ROIs) of different radii and with small random displacements from the center of the Foveal Avascular Zone (FAZ). We then calculated the variability of the density measure with different ROI radii. We then characterized the textural features of choriocapillaris binary images by estimating the parameters of an Ising model. For each image we calculated the Optimal Radius (OR) as the minimum ROI radius required to obtain a standard deviation in the simulation below 0.01. The density measured with the individual OR was 0.52 ± 0.07 (mean ± STD). Similar density values (0.51 ± 0.07) were obtained using a fixed ROI radius of 450 μm. The Ising model yielded two parameter estimates (β = 0.34 ± 0.03; γ = 0.003 ± 0.012; mean ± STD), characterizing pixel clustering and white pixel density, respectively. Using the estimated parameters to synthesize new random textures via simulation, we obtained a good reproduction of the original choriocapillaris structural features and density. In conclusion, we developed an extensive characterization of the normal subfoveal choriocapillaris that might be used for flow analysis and applied to the investigation of pathological alterations.
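
    A minimal sketch of the ROI-stability simulation described above: measure the density inside circular ROIs of increasing radius, with small random displacements of the ROI centre, and report the smallest radius whose density standard deviation falls below 0.01. The synthetic binary image, pixel size and jitter range are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
img = (rng.random((400, 400)) < 0.5).astype(np.uint8)   # stand-in binarized OCT-A image
px_um = 5.0                                             # assumed pixel size (microns)
yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
centre = np.array(img.shape) / 2.0                      # stand-in for the FAZ centre

def roi_density(cy, cx, radius_px):
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius_px ** 2
    return img[mask].mean()

def density_std(radius_um, n_jitter=50, jitter_um=50.0):
    r_px, j_px = radius_um / px_um, jitter_um / px_um
    vals = [roi_density(centre[0] + rng.uniform(-j_px, j_px),
                        centre[1] + rng.uniform(-j_px, j_px), r_px)
            for _ in range(n_jitter)]
    return float(np.std(vals))

for radius_um in range(100, 801, 50):        # candidate ROI radii in microns
    if density_std(radius_um) < 0.01:
        print("optimal radius (um):", radius_um)
        break
```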

  6. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, Timothy P.; Kiedrowski, Brian C.; Martin, William R.

    Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions from Monte Carlo simulations. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work has been done to obtain reaction rates using KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies into the solution. An ad hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
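
    A minimal sketch of the kernel-density tally idea described above: each simulated event contributes a smooth kernel to every nearby tally point instead of incrementing a single histogram bin. The event positions, weights and bandwidth are illustrative, and a plain spatial kernel is used rather than the paper's mean-free-path kernel.

```python
import numpy as np

rng = np.random.default_rng(3)
events = rng.exponential(scale=2.0, size=5000)        # stand-in collision sites (cm)
weights = np.ones_like(events)                        # particle weights
tally_points = np.linspace(0.0, 10.0, 101)
h = 0.3                                               # assumed kernel bandwidth (cm)

def epanechnikov(u):
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

# KDE tally: average kernel contribution of all events at each tally point
u = (tally_points[:, None] - events[None, :]) / h
kde_tally = (weights * epanechnikov(u)).sum(axis=1) / (len(events) * h)

# histogram tally on the same interval, for comparison
hist, edges = np.histogram(events, bins=50, range=(0.0, 10.0), density=True)
print("KDE tally at x = 1 cm      :", round(float(kde_tally[10]), 4))
print("histogram tally near x = 1 :", round(float(hist[np.searchsorted(edges, 1.0, side='right') - 1]), 4))
```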

  7. Ring profiler: a new method for estimating tree-ring density for improved estimates of carbon storage

    Treesearch

    David W. Vahey; C. Tim Scott; J.Y. Zhu; Kenneth E. Skog

    2012-01-01

    Methods for estimating present and future carbon storage in trees and forests rely on measurements or estimates of tree volume or volume growth multiplied by specific gravity. Wood density can vary by tree ring and height in a tree. If data on density by tree ring could be obtained and linked to tree size and stand characteristics, it would be possible to more...

  8. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
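
    For context, a minimal sketch of the classical distance-method estimator that the paper generalizes: under a homogeneous Poisson (completely random) pattern, the squared distance from a random sample point to the nearest plant is exponentially distributed, giving the maximum-likelihood density estimate n / (pi * sum of squared distances). This is the simple parametric baseline, not the nonparametric order-statistics estimator developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
true_density = 0.05                              # plants per unit area (assumed)
side = 200.0
plants = rng.uniform(0.0, side, size=(rng.poisson(true_density * side * side), 2))

sample_points = rng.uniform(20.0, side - 20.0, size=(100, 2))   # keep away from edges
# distance from each random sample point to its nearest plant
d = np.sqrt(((sample_points[:, None, :] - plants[None, :, :]) ** 2).sum(axis=-1)).min(axis=1)

lam_hat = len(d) / (np.pi * np.sum(d ** 2))      # Poisson (CSR) maximum-likelihood estimate
print(f"true density {true_density:.3f}, estimated {lam_hat:.3f} plants per unit area")
```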

  9. Deep sea animal density and size estimated using a Dual-frequency IDentification SONar (DIDSON) offshore the island of Hawaii

    NASA Astrophysics Data System (ADS)

    Giorli, Giacomo; Drazen, Jeffrey C.; Neuheimer, Anna B.; Copeland, Adrienne; Au, Whitlow W. L.

    2018-01-01

    Pelagic animals that form deep sea scattering layers (DSLs) represent an important link in the food web between zooplankton and top predators. While estimating the composition, density and location of the DSL is important to understand mesopelagic ecosystem dynamics and to predict top predators' distribution, DSL composition and density are often estimated from trawls which may be biased in terms of extrusion, avoidance, and gear-associated biases. Instead, location and biomass of DSLs can be estimated from active acoustic techniques, though estimates are often in aggregate without regard to size or taxon specific information. For the first time in the open ocean, we used a DIDSON sonar to characterize the fauna in DSLs. Estimates of the numerical density and length of animals at different depths and locations along the Kona coast of the Island of Hawaii were determined. Data were collected below and inside the DSLs with the sonar mounted on a profiler. A total of 7068 animals were counted and sized. We estimated numerical densities ranging from 1 to 7 animals/m3 and individuals as long as 3 m were detected. These numerical densities were orders of magnitude higher than those estimated from trawls and average sizes of animals were much larger as well. A mixed model was used to characterize numerical density and length of animals as a function of deep sea layer sampled, location, time of day, and day of the year. Numerical density and length of animals varied by month, with numerical density also a function of depth. The DIDSON proved to be a good tool for open-ocean/deep-sea estimation of the numerical density and size of marine animals, especially larger ones. Further work is needed to understand how this methodology relates to estimates of volume backscatters obtained with standard echosounding techniques, density measures obtained with other sampling methodologies, and to precisely evaluate sampling biases.

  10. Evaluation of line transect sampling based on remotely sensed data from underwater video

    USGS Publications Warehouse

    Bergstedt, R.A.; Anderson, D.R.

    1990-01-01

    We used underwater video in conjunction with the line transect method and a Fourier series estimator to make 13 independent estimates of the density of known populations of bricks lying on the bottom in shallows of Lake Huron. The pooled estimate of density (95.5 bricks per hectare) was close to the true density (89.8 per hectare), and there was no evidence of bias. Confidence intervals for the individual estimates included the true density 85% of the time instead of the nominal 95%. Our results suggest that reliable estimates of the density of objects on a lake bed can be obtained by the use of remote sensing and line transect sampling theory.
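
    A minimal sketch of the estimator used above: the line-transect density formula D = n * f(0) / (2L), with f(0) estimated by a Fourier (cosine) series over the perpendicular detection distances. The series form below follows the standard textbook presentation of the Fourier series estimator; the simulated distances, truncation width and number of terms are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
w = 10.0                                    # truncation half-width (m), assumed
L = 5000.0                                  # total transect length (m), assumed
x = np.abs(rng.normal(0.0, 4.0, 2000))      # simulated perpendicular distances
x = x[x <= w]
n = len(x)

m = 3                                       # number of cosine terms, assumed
a = np.array([(2.0 / (n * w)) * np.sum(np.cos(k * np.pi * x / w)) for k in range(1, m + 1)])
f0 = 1.0 / w + a.sum()                      # estimated detection pdf at zero distance

density_hat = n * f0 / (2.0 * L)            # objects per square metre
print(f"n = {n}, f(0) = {f0:.4f} per m, density = {density_hat * 1e4:.2f} objects per hectare")
```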

  11. Improving the Navy’s Passive Underwater Acoustic Monitoring of Marine Mammal Populations

    DTIC Science & Technology

    2014-09-30

    species using passive acoustic monitoring, with application to obtaining density estimates of transiting humpback whale populations in the Southern...of the density estimates, 3) to apply the numerical modeling methods for humpback whale vocalizations to understand distortions caused by...obtained. The specific approach being followed to accomplish objectives 1-4 above is listed below. 1) Detailed numerical modeling of humpback whale

  12. An adaptive technique for estimating the atmospheric density profile during the AE mission

    NASA Technical Reports Server (NTRS)

    Argentiero, P.

    1973-01-01

    A technique is presented for processing accelerometer data obtained during the AE missions in order to estimate the atmospheric density profile. A minimum variance, adaptive filter is utilized. The trajectory of the probe and the probe parameters are treated in a consider mode, in which their estimates are not improved but their associated uncertainties are allowed to influence filter behavior. Simulations indicate that the technique is effective in estimating a density profile to within a few percentage points.

  13. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    PubMed

    Kidney, Darren; Rawson, Benjamin M; Borchers, David L; Stevenson, Ben C; Marques, Tiago A; Thomas, Len

    2016-01-01

    Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will make this method an attractive option in many situations where populations can be surveyed acoustically by humans.
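
    A minimal sketch of the SECR logic described above, with listening posts as the detector array and a half-normal detection function. The detection parameters are fixed here purely for illustration (in the paper they are estimated jointly with density by maximizing the SECR likelihood), and the post layout, grid and call count are hypothetical.

```python
import numpy as np

posts = np.array([[0.0, 0.0], [1000.0, 0.0], [2000.0, 0.0]])   # listening posts (m), hypothetical
g0, sigma = 0.8, 600.0                                         # assumed detection parameters

# grid of candidate calling-group locations (activity centres)
gx, gy = np.meshgrid(np.arange(-3000.0, 5000.0, 100.0),
                     np.arange(-3000.0, 3000.0, 100.0))
centres = np.column_stack([gx.ravel(), gy.ravel()])
cell_area_km2 = 0.1 * 0.1                                      # 100 m x 100 m cells

# half-normal probability that each post detects a group at each centre
d = np.linalg.norm(centres[:, None, :] - posts[None, :, :], axis=2)
p_each = g0 * np.exp(-d ** 2 / (2.0 * sigma ** 2))
p_any = 1.0 - np.prod(1.0 - p_each, axis=1)       # detected by at least one post

effective_area_km2 = p_any.sum() * cell_area_km2  # effective sampling area
n_detected_groups = 14                            # hypothetical number of detected groups
print(f"effective sampling area = {effective_area_km2:.2f} km^2")
print(f"calling-group density   = {n_detected_groups / effective_area_km2:.2f} groups per km^2")
```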

  14. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.

  15. Estimations of population density for selected periods between the Neolithic and AD 1800.

    PubMed

    Zimmermann, Andreas; Hilpert, Johanna; Wendt, Karl Peter

    2009-04-01

    We describe a combination of methods applied to obtain reliable estimations of population density using archaeological data. The combination is based on a hierarchical model of scale levels. The necessary data and methods used to obtain the results are chosen so as to define transfer functions from one scale level to another. We apply our method to data sets from western Germany that cover early Neolithic, Iron Age, Roman, and Merovingian times as well as historical data from AD 1800. Error margins and natural and historical variability are discussed. Our results for nonstate societies are always lower than conventional estimations compiled from the literature, and we discuss the reasons for this finding. At the end, we compare the calculated local and global population densities with other estimations from different parts of the world.

  16. Practical technique to quantify small, dense low-density lipoprotein cholesterol using dynamic light scattering

    NASA Astrophysics Data System (ADS)

    Trirongjitmoah, Suchin; Iinaga, Kazuya; Sakurai, Toshihiro; Chiba, Hitoshi; Sriyudthsak, Mana; Shimizu, Koichi

    2016-04-01

    Quantification of small, dense low-density lipoprotein (sdLDL) cholesterol is clinically significant. We propose a practical technique to estimate the amount of sdLDL cholesterol using dynamic light scattering (DLS). A new closed-form analytical solution has been obtained to estimate the weight fraction of one species of scatterers in the DLS measurement of two species of scatterers. Using this solution, we can quantify the sdLDL cholesterol amount from the amounts of the low-density lipoprotein cholesterol and the high-density lipoprotein (HDL) cholesterol, which are commonly obtained through clinical tests. The accuracy of the proposed technique was confirmed experimentally using latex spheres with known size distributions. The applicability of the proposed technique was examined using samples of human blood serum. The possibility of estimating the sdLDL amount using the HDL data was demonstrated. These results suggest that the quantitative estimation of sdLDL amounts using DLS is feasible for point-of-care testing in clinical practice.

  17. Ant-inspired density estimation via random walks.

    PubMed

    Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A

    2017-10-03

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
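
    A crude simulation of the encounter-rate idea described above: agents take independent random-walk steps on a torus grid, a focal agent counts how many others share its cell at each step, and the per-step encounter rate serves as its density estimate. This only illustrates the intuition; the paper's contribution is the analysis of the dependencies between repeated collisions.

```python
import numpy as np

rng = np.random.default_rng(6)
grid = 50                                  # 50 x 50 torus
n_agents = 250                             # true density = 250 / 2500 = 0.1 agents per cell
steps = 20_000

pos = rng.integers(0, grid, size=(n_agents, 2))
moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])
encounters = 0

for _ in range(steps):
    pos = (pos + moves[rng.integers(0, 4, n_agents)]) % grid
    # count how many *other* agents share the focal agent's (index 0) cell
    encounters += int(np.sum(np.all(pos[1:] == pos[0], axis=1)))

print("true density      :", n_agents / grid ** 2)
print("estimated density :", encounters / steps)
```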

  18. Stochastic sediment property inversion in Shallow Water 06.

    PubMed

    Michalopoulou, Zoi-Heleni

    2017-11-01

    Time-series received at a short distance from the source allow the identification of distinct paths; four of these are the direct arrival, the surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed, along with linearization, for the estimation of source range and depth, water column depth, and sound speed in the water. By propagating the arrival-time densities through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because those densities express the uncertainty in the inversion for sediment properties.

  19. Camera traps and activity signs to estimate wild boar density and derive abundance indices.

    PubMed

    Massei, Giovanna; Coats, Julia; Lambert, Mark Simon; Pietravalle, Stephane; Gill, Robin; Cowan, Dave

    2018-04-01

    Populations of wild boar and feral pigs are increasing worldwide, in parallel with their significant environmental and economic impact. Reliable methods of monitoring trends and estimating abundance are needed to measure the effects of interventions on population size. The main aims of this study, carried out in five English woodlands, were: (i) to compare wild boar abundance indices obtained from camera trap surveys and from activity signs; and (ii) to assess the precision of density estimates in relation to different densities of camera traps. For each woodland, we calculated a passive activity index (PAI) based on camera trap surveys, rooting activity and wild boar trails on transects, and estimated absolute densities based on camera trap surveys. PAIs obtained using different methods showed similar patterns. We found significant between-year differences in abundance of wild boar using PAIs based on camera trap surveys and on trails on transects, but not on signs of rooting on transects. The density of wild boar from camera trap surveys varied between 0.7 and 7 animals/km². Increasing the density of camera traps above nine per km² did not increase the precision of the estimate of wild boar density. PAIs based on number of wild boar trails and on camera trap data appear to be more sensitive to changes in population size than PAIs based on signs of rooting. For wild boar densities similar to those recorded in this study, nine camera traps per km² are sufficient to estimate the mean density of wild boar. © 2017 Crown copyright. Pest Management Science © 2017 Society of Chemical Industry.
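
    A minimal worked example of a passive activity index of the kind compared above: independent detections per camera-trap day, summarised per woodland and year. The records are hypothetical, and the study's exact definition of an independent detection event may differ.

```python
from collections import defaultdict

# (woodland, year, camera, detections, trap-days) -- hypothetical records
records = [
    ("Wood A", 2015, "cam01", 12, 30), ("Wood A", 2015, "cam02", 7, 28),
    ("Wood A", 2016, "cam01", 20, 30), ("Wood A", 2016, "cam02", 15, 30),
]

totals = defaultdict(lambda: [0, 0])
for wood, year, _cam, detections, trap_days in records:
    totals[(wood, year)][0] += detections
    totals[(wood, year)][1] += trap_days

for (wood, year), (detections, trap_days) in sorted(totals.items()):
    print(f"{wood} {year}: PAI = {detections / trap_days:.2f} detections per trap-day")
```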

  20. A method of estimating log weights.

    Treesearch

    Charles N. Mann; Hilton H. Lysons

    1972-01-01

    This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...
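
    A minimal worked example of the approach described above: weighed and scaled truckloads give a local density index (pounds per cubic foot), which is then applied to the measured volume of an unweighed log to estimate its weight before yarding. All numbers are hypothetical.

```python
# weighed and measured truckloads for one species (hypothetical)
truckloads = [
    {"weight_lb": 78_000, "volume_ft3": 1450},
    {"weight_lb": 81_500, "volume_ft3": 1510},
    {"weight_lb": 76_200, "volume_ft3": 1400},
]

density_index = (sum(t["weight_lb"] for t in truckloads)
                 / sum(t["volume_ft3"] for t in truckloads))
print(f"local density index: {density_index:.1f} lb per cubic foot")

# estimate the weight of an unweighed log from its measured volume
log_volume_ft3 = 95.0
print(f"estimated log weight: {density_index * log_volume_ft3:.0f} lb")
```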

  1. Calculation of the time resolution of the J-PET tomograph using kernel density estimation

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    2017-06-01

    In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.

  2. A strategy for analysis of (molecular) equilibrium simulations: Configuration space density estimation, clustering, and visualization

    NASA Astrophysics Data System (ADS)

    Hamprecht, Fred A.; Peter, Christine; Daura, Xavier; Thiel, Walter; van Gunsteren, Wilfred F.

    2001-02-01

    We propose an approach for summarizing the output of long simulations of complex systems, affording a rapid overview and interpretation. First, multidimensional scaling techniques are used in conjunction with dimension reduction methods to obtain a low-dimensional representation of the configuration space explored by the system. A nonparametric estimate of the density of states in this subspace is then obtained using kernel methods. The free energy surface is calculated from that density, and the configurations produced in the simulation are then clustered according to the topography of that surface, such that all configurations belonging to one local free energy minimum form one class. This topographical cluster analysis is performed using basin spanning trees which we introduce as subgraphs of Delaunay triangulations. Free energy surfaces obtained in dimensions lower than four can be visualized directly using iso-contours and -surfaces. Basin spanning trees also afford a glimpse of higher-dimensional topographies. The procedure is illustrated using molecular dynamics simulations on the reversible folding of peptide analogues. Finally, we emphasize the intimate relation of density estimation techniques to modern enhanced sampling algorithms.
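
    A minimal sketch of the density-to-free-energy step described above: after projecting the trajectory onto a low-dimensional subspace, a kernel density estimate of the sampled configurations gives a relative free energy surface via F = -kT ln(rho / rho_max). The two-basin synthetic "trajectory" is a stand-in for real simulation output, and a plain Gaussian kernel replaces the paper's specific estimator and the subsequent basin-spanning-tree clustering.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
# fake projected trajectory: two metastable basins in a 2-D subspace
basin_a = rng.normal([0.0, 0.0], 0.4, size=(3000, 2))
basin_b = rng.normal([2.0, 1.0], 0.3, size=(1500, 2))
proj = np.vstack([basin_a, basin_b])

kde = gaussian_kde(proj.T)                       # nonparametric density in the subspace

xx, yy = np.meshgrid(np.linspace(-1.5, 3.5, 80), np.linspace(-1.5, 2.5, 80))
rho = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)

kT = 2.494                                       # kJ/mol at roughly 300 K
free_energy = -kT * np.log(rho / rho.max())      # relative free energy surface

# relative free energy at the centre of basin B (row = y index, column = x index)
iy, ix = np.abs(yy[:, 0] - 1.0).argmin(), np.abs(xx[0] - 2.0).argmin()
print("free energy of basin B relative to the global minimum (kJ/mol):",
      round(float(free_energy[iy, ix]), 2))
```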

  3. Density variations and their influence on carbon stocks: case-study on two Biosphere Reserves in the Democratic Republic of Congo

    NASA Astrophysics Data System (ADS)

    De Ridder, Maaike; De Haulleville, Thalès; Kearsley, Elizabeth; Van den Bulcke, Jan; Van Acker, Joris; Beeckman, Hans

    2014-05-01

    It is commonly acknowledged that allometric equations for aboveground biomass and carbon stock estimates are improved significantly if density is included as a variable. However, not much attention is given to this variable in terms of exact, measured values and density profiles from pith to bark. Most published case-studies obtain density values from literature sources or databases, this way using large ranges of density values and possibly causing significant errors in carbon stock estimates. The use of a single fixed value for density is also not recommended if carbon stock increments are estimated. Therefore, our objective is to measure and analyze a large number of tree species occurring in two Biosphere Reserves (Luki and Yangambi). Nevertheless, the diversity of tree species in these tropical forests is too high to perform this kind of detailed analysis on all tree species (> 200/ha). Therefore, we focus on the most frequently encountered tree species with high abundance (trees/ha) and dominance (basal area/ha) for this study. Increment cores were scanned with a helical X-ray protocol to obtain density profiles from pith to bark. This way, we aim at dividing the tree species with a distinct type of density profile into separate groups. If, e.g., slopes in density values from pith to bark remain stable over larger samples of one tree species, this slope could also be used to correct for errors in carbon (increment) estimates, caused by density values from simplified density measurements or density values from literature. In summary, this is most likely the first study in the Congo Basin that focuses on density patterns in order to check their influence on carbon stocks and differences in carbon stocking based on species composition (density profiles ~ temperament of tree species).

  4. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form cx^K(1-x)^m is considered, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  5. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid

    PubMed Central

    Brassine, Eléanor; Parker, Daniel

    2015-01-01

    Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species. PMID:26698574

  6. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid.

    PubMed

    Brassine, Eléanor; Parker, Daniel

    2015-01-01

    Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100 km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species.

  7. Ant-inspired density estimation via random walks

    PubMed Central

    Musco, Cameron; Su, Hsin-Hao

    2017-01-01

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks. PMID:28928146

  8. Breast density estimation from high spectral and spatial resolution MRI

    PubMed Central

    Li, Hui; Weiss, William A.; Medved, Milica; Abe, Hiroyuki; Newstead, Gillian M.; Karczmar, Gregory S.; Giger, Maryellen L.

    2016-01-01

    A three-dimensional breast density estimation method is presented for high spectral and spatial resolution (HiSS) MR imaging. Twenty-two patients were recruited (under an Institutional Review Board-approved Health Insurance Portability and Accountability Act-compliant protocol) for high-risk breast cancer screening. Each patient received standard-of-care clinical digital x-ray mammograms and MR scans, as well as HiSS scans. The algorithm for breast density estimation includes breast mask generation, breast skin removal, and breast percentage density calculation. The inter- and intra-user variabilities of the HiSS-based density estimation were determined using correlation analysis and limits of agreement. Correlation analysis was also performed between the HiSS-based density estimation and radiologists’ breast imaging-reporting and data system (BI-RADS) density ratings. A correlation coefficient of 0.91 (p<0.0001) was obtained between left and right breast density estimations. An interclass correlation coefficient of 0.99 (p<0.0001) indicated high reliability for the inter-user variability of the HiSS-based breast density estimations. A moderate correlation coefficient of 0.55 (p=0.0076) was observed between HiSS-based breast density estimations and radiologists’ BI-RADS. In summary, an objective density estimation method using HiSS spectral data from breast MRI was developed. The high reproducibility with low inter- and low intra-user variabilities shown in this preliminary study suggests that such a HiSS-based density metric may be potentially beneficial in programs requiring breast density such as in breast cancer risk assessment and monitoring effects of therapy. PMID:28042590
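
    A minimal sketch of the percentage-density step described above: given a breast mask with the skin removed and a voxel-wise classification into fibroglandular ("dense") versus fatty tissue, percent density is the dense fraction of the masked volume. The synthetic volume, mask and threshold are illustrative assumptions, not the paper's HiSS-specific segmentation.

```python
import numpy as np

rng = np.random.default_rng(8)
volume = rng.random((40, 128, 128))                 # stand-in parametric map from MRI
breast_mask = np.zeros_like(volume, dtype=bool)
breast_mask[:, 32:96, 32:96] = True                 # stand-in breast mask, skin excluded

dense = (volume > 0.7) & breast_mask                # assumed dense-tissue classification
percent_density = 100.0 * dense.sum() / breast_mask.sum()
print(f"breast percent density: {percent_density:.1f}%")
```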

  9. A novel technique for real-time estimation of edge pedestal density gradients via reflectometer time delay data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.

    2016-11-15

    A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.

  10. Method for Estimating the Charge Density Distribution on a Dielectric Surface.

    PubMed

    Nakashima, Takuya; Suhara, Hiroyuki; Murata, Hidekazu; Shimoyama, Hiroshi

    2017-06-01

    High-quality color output from digital photocopiers and laser printers is in strong demand, motivating attempts to achieve fine dot reproducibility and stability. The resolution of a digital photocopier depends on the charge density distribution on the organic photoconductor surface; however, directly measuring the charge density distribution is impossible. In this study, we propose a new electron optical instrument that can rapidly measure the electrostatic latent image on an organic photoconductor surface, which is a dielectric surface, as well as a novel method to quantitatively estimate the charge density distribution on a dielectric surface by combining experimental data obtained from the apparatus via a computer simulation. In the computer simulation, an improved three-dimensional boundary charge density method (BCM) is used for electric field analysis in the vicinity of the dielectric material with a charge density distribution. This method enables us to estimate the profile and quantity of the charge density distribution on a dielectric surface with a resolution of the order of microns. Furthermore, the surface potential on the dielectric surface can be immediately calculated using the obtained charge density. This method enables the relation between the charge pattern on the organic photoconductor surface and toner particle behavior to be studied; an understanding regarding the same may lead to the development of a new generation of higher resolution photocopiers.

  11. Estimation of percentage breast tissue density: comparison between digital mammography (2D full field digital mammography) and digital breast tomosynthesis according to different BI-RADS categories.

    PubMed

    Tagliafico, A S; Tagliafico, G; Cavagnetto, F; Calabrese, M; Houssami, N

    2013-11-01

    To compare breast density estimated from two-dimensional full-field digital mammography (2D FFDM) and from digital breast tomosynthesis (DBT) according to different Breast Imaging-Reporting and Data System (BI-RADS) categories, using automated software. Institutional review board approval and written informed patient consent were obtained. DBT and 2D FFDM were performed in the same patients to allow within-patient comparison. A total of 160 consecutive patients (mean age: 50±14 years; mean body mass index: 22±3) were included to create paired data sets of 40 patients for each BI-RADS category. Automatic software (MedDensity(©), developed by Giulio Tagliafico) was used to compare the percentage breast density between DBT and 2D FFDM. The estimated breast percentage density obtained using DBT and 2D FFDM was examined for correlation with the radiologists' visual BI-RADS density classification. The 2D FFDM differed from DBT by 16.0% in BI-RADS Category 1, by 11.9% in Category 2, by 3.5% in Category 3 and by 18.1% in Category 4. These differences were highly significant (p<0.0001). There was a good correlation between the BI-RADS categories and the density evaluated using 2D FFDM and DBT (r=0.56, p<0.01 and r=0.48, p<0.01, respectively). Using DBT, breast density values were lower than those obtained using 2D FFDM, with a non-linear relationship across the BI-RADS categories. These data are relevant for clinical practice and research studies using density in determining the risk. On DBT, breast density values were lower than with 2D FFDM, with a non-linear relationship across the classical BI-RADS categories.

  12. [Spatial analysis of road traffic accidents with fatalities in Spain, 2008-2011].

    PubMed

    Gómez-Barroso, Diana; López-Cuadrado, Teresa; Llácer, Alicia; Palmera Suárez, Rocío; Fernández-Cuenca, Rafael

    2015-09-01

    To estimate the areas of greatest density of road traffic accidents with fatalities at 24 hours per km²/year in Spain from 2008 to 2011, using a geographic information system. Accidents were geocoded using the road and kilometer points where they occurred. The average nearest-neighbor distance was calculated to detect possible clusters and to obtain the bandwidth for kernel density estimation. A total of 4775 accidents were analyzed, of which 73.3% occurred on conventional roads. The estimated average distance between accidents was 1,242 meters, and the average expected distance was 10,738 meters. The nearest neighbor index was 0.11, indicating that there were aggregations of accidents in space. A map showing the kernel density was obtained with a resolution of 1 km², which identified the areas of highest density. This methodology allowed a better approximation to locating accident risks by taking into account kilometer points. The map shows areas where there was a greater density of accidents. This could be an advantage in decision-making by the relevant authorities. Copyright © 2014 SESPAS. Published by Elsevier España. All rights reserved.
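
    A minimal sketch of the two spatial statistics used above: the average nearest-neighbour index (observed mean nearest-neighbour distance divided by the value expected under complete spatial randomness, 1 / (2 * sqrt(n / A))) and a kernel density surface of event locations. The coordinates below are random stand-ins for geocoded accident locations, and the bandwidth is arbitrary.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import gaussian_kde

rng = np.random.default_rng(9)
pts = rng.uniform(0.0, 100_000.0, size=(500, 2))     # stand-in accident locations (m)
area = 100_000.0 ** 2                                # study area (m^2), assumed

# nearest-neighbour index (values below 1 indicate clustering)
dists, _ = cKDTree(pts).query(pts, k=2)              # k=2: self plus nearest neighbour
mean_observed = dists[:, 1].mean()
mean_expected = 1.0 / (2.0 * np.sqrt(len(pts) / area))
print(f"nearest-neighbour index: {mean_observed / mean_expected:.2f}")

# kernel density surface on a 1 km grid
kde = gaussian_kde(pts.T, bw_method=0.2)
gx, gy = np.meshgrid(np.arange(0.0, 100_000.0, 1000.0), np.arange(0.0, 100_000.0, 1000.0))
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print("grid cell with the highest estimated density:", np.unravel_index(density.argmax(), density.shape))
```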

  13. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.

  14. A photographic technique for estimating egg density of the white pine weevil, Pissodes strobi (Peck)

    Treesearch

    Roger T. Zerillo

    1975-01-01

    Compares a photographic technique with visual and dissection techniques for estimating egg density of the white pine weevil, Pissodes strobi (Peck). The relatively high correlations (.67 and .79) between counts from photographs and those obtained by dissection indicate that the non-destructive photographic technique could be a useful tool for...

  15. Cetacean Density Estimation from Novel Acoustic Datasets by Acoustic Propagation Modeling

    DTIC Science & Technology

    2014-09-30

    hydrophone, to estimate the population density of false killer whales (Pseudorca crassidens) off of the Kona coast of the Island of Hawai’i... killer whale , suffers from interaction with the fisheries industry and its population has been reported to have declined in the past 20 years. Studies...of abundance estimate of false killer whales in Hawai’i through mark recapture methods will provide comparable results to the ones obtained by this

  16. Cetacean Density Estimation from Novel Acoustic Datasets by Acoustic Propagation Modeling

    DTIC Science & Technology

    2013-09-30

    hydrophone, to estimate the population density of false killer whales (Pseudorca crassidens) off of the Kona coast of the Island of Hawai’i. OBJECTIVES...propagation due to the complexities of its environment. Moreover, the target species chosen for the proposed work, the false killer whale , suffers...estimate of false killer whales in Hawai’i through mark recapture methods will provide comparable results to the ones obtained by this project. The ultimate

  17. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGES

    Smallwood, D. O.

    1996-01-01

    It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
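
    A minimal sketch of estimating ordinary and multiple coherence from an estimated cross-spectral density matrix, the quantity the paper factorizes via Cholesky decomposition or SVD. Two inputs drive one output through a simple assumed system plus noise; the Welch cross-spectra are estimated with scipy.signal.csd, and the filter coefficients, segment length and inspected frequency bin are arbitrary.

```python
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(10)
fs, n = 1000.0, 200_000
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = 0.8 * x1 + 0.4 * np.roll(x2, 5) + 0.3 * rng.standard_normal(n)   # assumed system

def spec(a, b):
    f, s = csd(a, b, fs=fs, nperseg=1024)
    return f, s

f, s11 = spec(x1, x1); _, s22 = spec(x2, x2); _, syy = spec(y, y)
_, s12 = spec(x1, x2); _, s1y = spec(x1, y);  _, s2y = spec(x2, y)

k = 100                                    # inspect a single frequency bin
Sxx = np.array([[s11[k], s12[k]], [np.conj(s12[k]), s22[k]]])   # input CSD matrix
Sxy = np.array([s1y[k], s2y[k]])                                # input-output cross-spectra

ordinary = np.abs(s1y[k]) ** 2 / (s11[k].real * syy[k].real)
multiple = (np.conj(Sxy) @ np.linalg.solve(Sxx, Sxy)).real / syy[k].real
print(f"f = {f[k]:.1f} Hz  ordinary coherence (x1, y) = {ordinary:.2f}  multiple coherence = {multiple:.2f}")
```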

  18. Counting Cats: Spatially Explicit Population Estimates of Cheetah (Acinonyx jubatus) Using Unstructured Sampling Data

    PubMed Central

    Broekhuis, Femke; Gopalaswamy, Arjun M.

    2016-01-01

    Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed ‘hotspots’ of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species. PMID:27135614

  19. Counting Cats: Spatially Explicit Population Estimates of Cheetah (Acinonyx jubatus) Using Unstructured Sampling Data.

    PubMed

    Broekhuis, Femke; Gopalaswamy, Arjun M

    2016-01-01

    Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed 'hotspots' of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species.

  20. Feasibility of using 'lung density' values estimated from EIT images for clinical diagnosis of lung abnormalities in mechanically ventilated ICU patients.

    PubMed

    Nebuya, Satoru; Koike, Tomotaka; Imai, Hiroshi; Iwashita, Yoshiaki; Brown, Brian H; Soma, Kazui

    2015-06-01

    This paper reports on the results of a study which compares lung density values obtained from electrical impedance tomography (EIT), clinical diagnosis and CT values (HU) within a region of interest in the lung. The purpose was to assess the clinical use of lung density estimation using EIT data. In 11 patients supported by a mechanical ventilator, the consistency of regional lung density measurements as estimated by EIT was validated to assess the feasibility of its use in intensive care medicine. There were significant differences in regional lung densities recorded in the supine position between normal lungs and diseased lungs associated with pneumonia, atelectasis and pleural effusion (normal: 240 ± 71.7 kg m⁻³; pneumonia: 306 ± 38.6 kg m⁻³; atelectasis: 497 ± 130 kg m⁻³; pleural effusion: 467 ± 113 kg m⁻³; Steel-Dwass test, p < 0.05). In addition, in order to compare lung density with CT image pixels, the image resolution of CT images, which was originally 512 × 512 pixels, was changed to 16 × 16 pixels to match that of the EIT images. The results of CT and EIT images from five patients in an intensive care unit showed a correlation coefficient of 0.66 ± 0.13 between the CT values (HU) and the lung density values (kg m⁻³) obtained from EIT. These results indicate that it may be possible to obtain a quantitative value for regional lung density using EIT.

  1. Exact and Approximate Statistical Inference for Nonlinear Regression and the Estimating Equation Approach.

    PubMed

    Demidenko, Eugene

    2017-09-01

    The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.

  2. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), σ0, obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value obtain it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of σ0 are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  3. Equation of State for Isospin Asymmetric Nuclear Matter Using Lane Potential

    NASA Astrophysics Data System (ADS)

    Basu, D. N.; Chowdhury, P. Roy; Samanta, C.

    2006-10-01

    A mean field calculation for obtaining the equation of state (EOS) for symmetric nuclear matter from a density dependent M3Y interaction supplemented by a zero-range potential is described. The energy per nucleon is minimized to obtain the ground state of symmetric nuclear matter. The saturation energy per nucleon used for nuclear matter calculations is determined from the coefficient of the volume term of the Bethe-Weizsäcker mass formula, which is evaluated by fitting the recent experimental and estimated atomic mass excesses from the Audi-Wapstra-Thibault atomic mass table by minimizing the mean square deviation. The constants of density dependence of the effective interaction are obtained by reproducing the saturation energy per nucleon and the saturation density of spin and isospin symmetric cold infinite nuclear matter. The EOS of symmetric nuclear matter, thus obtained, provides a reasonably good estimate of nuclear incompressibility. Once the constants of density dependence are determined, the EOS for asymmetric nuclear matter is calculated by adding to the isoscalar part the isovector component of the M3Y interaction, which does not contribute to the EOS of symmetric nuclear matter. These EOS are then used to calculate the pressure, the energy density and the velocity of sound in symmetric as well as isospin asymmetric nuclear matter.

  4. State of charge monitoring of vanadium redox flow batteries using half cell potentials and electrolyte density

    NASA Astrophysics Data System (ADS)

    Ressel, Simon; Bill, Florian; Holtz, Lucas; Janshen, Niklas; Chica, Antonio; Flower, Thomas; Weidlich, Claudia; Struckmann, Thorsten

    2018-02-01

    The operation of vanadium redox flow batteries requires reliable in situ state of charge (SOC) monitoring. In this study, two SOC estimation approaches for the negative half cell are investigated. First, in situ open circuit potential measurements are combined with Coulomb counting in a one-step calibration of SOC and Nernst potential which does not require additional reference SOCs. In-sample and out-of-sample SOCs are estimated and analyzed, and estimation errors ≤ 0.04 are obtained. In the second approach, temperature-corrected in situ electrolyte density measurements are used for the first time in vanadium redox flow batteries for SOC estimation. In-sample and out-of-sample SOC estimation errors ≤ 0.04 demonstrate the feasibility of this approach. Both methods allow recalibration during battery operation. The actual capacity obtained from SOC calibration can be used in a state of health model.
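
    The open-circuit-potential route above rests on the Nernst relation between the negative half-cell potential and SOC. The sketch below illustrates that relation in isolation, assuming a formal potential of about -0.26 V for the V3+/V2+ couple and ideal (activity-free) behaviour; it is not the paper's one-step calibration, which additionally folds in Coulomb counting.

```python
import numpy as np

F = 96485.33  # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def ocp_from_soc(soc, e0=-0.26, temp_k=298.15):
    """Nernst open-circuit potential of the V3+/V2+ half cell (V vs SHE).

    soc = [V2+] / ([V2+] + [V3+]); e0 is an assumed formal potential."""
    soc = np.asarray(soc, dtype=float)
    return e0 - (R * temp_k / F) * np.log(soc / (1.0 - soc))

def soc_from_ocp(e_meas, e0=-0.26, temp_k=298.15):
    """Invert the Nernst relation to recover SOC from a measured half-cell OCP."""
    x = (e0 - np.asarray(e_meas, dtype=float)) * F / (R * temp_k)
    return 1.0 / (1.0 + np.exp(-x))

if __name__ == "__main__":
    for soc in (0.2, 0.5, 0.8):
        e = ocp_from_soc(soc)
        print(f"SOC {soc:.2f} -> OCP {e:.4f} V -> recovered SOC {soc_from_ocp(e):.2f}")
```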

  5. Improving the Navy’s Passive Underwater Acoustic Monitoring of Marine Mammal Populations

    DTIC Science & Technology

    2013-09-30

    passive acoustic monitoring: Correcting humpback whale call detections for site-specific and time-dependent environmental characteristics,” JASA Exp... marine mammal species using passive acoustic monitoring, with application to obtaining density estimates of transiting humpback whale populations in... minimize the variance of the density estimates, 3) to apply the numerical modeling methods for humpback whale vocalizations to understand distortions

  6. Joint constraints on galaxy bias and σ_8 through the N-pdf of the galaxy number density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnalte-Mur, Pablo; Martínez, Vicent J.; Vielva, Patricio

    We present a full description of the N-probability density function of the galaxy number density fluctuations. This N-pdf is given in terms, on the one hand, of the cold dark matter correlations and, on the other hand, of the galaxy bias parameter. The method relies on the commonly adopted assumption that the dark matter density fluctuations follow a local non-linear transformation of the initial energy density perturbations. The N-pdf of the galaxy number density fluctuations allows for an optimal estimation of the bias parameter (e.g., via maximum-likelihood estimation, or Bayesian inference if there exists any a priori information on the bias parameter), and of those parameters defining the dark matter correlations, in particular its amplitude (σ_8). It also provides the proper framework to perform model selection between two competing hypotheses. The parameter estimation capabilities of the N-pdf are proved by SDSS-like simulations (both ideal log-normal simulations and mocks obtained from the LasDamas simulations), showing that our estimator is unbiased. We apply our formalism to the 7th release of the SDSS main sample (for a volume-limited subset with absolute magnitudes M_r ≤ −20). We obtain b̂ = 1.193 ± 0.074 and σ̄_8 = 0.862 ± 0.080, for galaxy number density fluctuations in cells of the size of 30 h^−1 Mpc. Different model selection criteria show that galaxy biasing is clearly favoured.

  7. Double sampling to estimate density and population trends in birds

    USGS Publications Warehouse

    Bart, Jonathan; Earnst, Susan L.

    2002-01-01

    We present a method for estimating density of nesting birds based on double sampling. The approach involves surveying a large sample of plots using a rapid method such as uncorrected point counts, variable circular plot counts, or the recently suggested double-observer method. A subsample of those plots is also surveyed using intensive methods to determine actual density. The ratio of the mean count on those plots (using the rapid method) to the mean actual density (as determined by the intensive searches) is used to adjust results from the rapid method. The approach works well when results from the rapid method are highly correlated with actual density. We illustrate the method with three years of shorebird surveys from the tundra in northern Alaska. In the rapid method, surveyors covered ~10 ha h-1 and surveyed each plot a single time. The intensive surveys involved three thorough searches, required ~3 h ha-1, and took 20% of the study effort. Surveyors using the rapid method detected an average of 79% of birds present. That detection ratio was used to convert the index obtained in the rapid method into an essentially unbiased estimate of density. Trends estimated from several years of data would also be essentially unbiased. Other advantages of double sampling are that (1) the rapid method can be changed as new methods become available, (2) domains can be compared even if detection rates differ, (3) total population size can be estimated, and (4) valuable ancillary information (e.g. nest success) can be obtained on intensive plots with little additional effort. We suggest that double sampling be used to test the assumption that rapid methods, such as variable circular plot and double-observer methods, yield density estimates that are essentially unbiased. The feasibility of implementing double sampling in a range of habitats needs to be evaluated.
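
    As a minimal illustration of the ratio adjustment described above, the sketch below simulates rapid counts that detect a fraction of the birds present, treats intensive searches on a subsample of plots as truth, and scales the rapid counts by the resulting detection ratio. All numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated survey: true densities (birds/plot) on 200 plots; rapid counts detect ~79%.
true_density = rng.poisson(10, size=200).astype(float)
rapid_count = rng.binomial(true_density.astype(int), 0.79).astype(float)

# Intensive searches on a random subsample are assumed to recover the true density.
sub = rng.choice(200, size=40, replace=False)

detection_ratio = rapid_count[sub].mean() / true_density[sub].mean()
adjusted_density = rapid_count.mean() / detection_ratio

print(f"detection ratio  : {detection_ratio:.2f}")
print(f"naive mean count : {rapid_count.mean():.2f} birds/plot")
print(f"adjusted density : {adjusted_density:.2f} birds/plot (true {true_density.mean():.2f})")
```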

  8. Volumes and bulk densities of forty asteroids from ADAM shape modeling

    NASA Astrophysics Data System (ADS)

    Hanuš, J.; Viikinkoski, M.; Marchis, F.; Ďurech, J.; Kaasalainen, M.; Delbo', M.; Herald, D.; Frappa, E.; Hayamizu, T.; Kerr, S.; Preston, S.; Timerson, B.; Dunham, D.; Talbot, J.

    2017-05-01

    Context. Disk-integrated photometric data of asteroids do not contain accurate information on shape details or size scale. Additional data such as disk-resolved images or stellar occultation measurements further constrain asteroid shapes and allow size estimates. Aims: We aim to use all the available disk-resolved images of approximately forty asteroids obtained by the Near-InfraRed Camera (Nirc2) mounted on the W.M. Keck II telescope together with the disk-integrated photometry and stellar occultation measurements to determine their volumes. We can then use the volume, in combination with the known mass, to derive the bulk density. Methods: We downloaded and processed all the asteroid disk-resolved images obtained by the Nirc2 that are available in the Keck Observatory Archive (KOA). We combined optical disk-integrated data and stellar occultation profiles with the disk-resolved images and use the All-Data Asteroid Modeling (ADAM) algorithm for the shape and size modeling. Our approach provides constraints on the expected uncertainty in the volume and size as well. Results: We present shape models and volume for 41 asteroids. For 35 of these asteroids, the knowledge of their mass estimates from the literature allowed us to derive their bulk densities. We see a clear trend of lower bulk densities for primitive objects (C-complex) and higher bulk densities for S-complex asteroids. The range of densities in the X-complex is large, suggesting various compositions. We also identified a few objects with rather peculiar bulk densities, which is likely a hint of their poor mass estimates. Asteroid masses determined from the Gaia astrometric observations should further refine most of the density estimates.

  9. A method to estimate statistical errors of properties derived from charge-density modelling

    PubMed Central

    Lecomte, Claude

    2018-01-01

    Estimating uncertainties of property values derived from a charge-density model is not straightforward. A methodology, based on calculation of sample standard deviations (SSD) of properties using randomly deviating charge-density models, is proposed with the MoPro software. The parameter shifts applied in the deviating models are generated in order to respect the variance–covariance matrix issued from the least-squares refinement. This ‘SSD methodology’ procedure can be applied to estimate uncertainties of any property related to a charge-density model obtained by least-squares fitting. This includes topological properties such as critical point coordinates, electron density, Laplacian and ellipticity at critical points and charges integrated over atomic basins. Errors on electrostatic potentials and interaction energies are also available now through this procedure. The method is exemplified with the charge density of compound (E)-5-phenylpent-1-enylboronic acid, refined at 0.45 Å resolution. The procedure is implemented in the freely available MoPro program dedicated to charge-density refinement and modelling. PMID:29724964
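
    A minimal sketch of the general SSD idea, not of the MoPro implementation: draw randomly deviating parameter sets from a multivariate normal built on the least-squares variance-covariance matrix, recompute the derived property for each draw, and take the sample standard deviation. The parameter values, covariance matrix and derived property below are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Refined parameters and their variance-covariance matrix from least squares
# (toy values; in practice these come from the refinement program).
p_hat = np.array([1.20, 0.35, 2.10])
cov = np.array([[4e-4, 1e-5, 0.0],
                [1e-5, 9e-4, 2e-5],
                [0.0,  2e-5, 1e-3]])

def derived_property(p):
    """Stand-in for any property computed from the model parameters
    (e.g. density at a critical point); here an arbitrary nonlinear combination."""
    return p[0] * np.exp(-p[1]) + p[2] ** 2

# Randomly deviating models consistent with the covariance matrix
draws = rng.multivariate_normal(p_hat, cov, size=5000)
values = np.array([derived_property(p) for p in draws])

print(f"property = {derived_property(p_hat):.4f} +/- {values.std(ddof=1):.4f} (SSD estimate)")
```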

  10. Methods for estimating population density in data-limited areas: evaluating regression and tree-based models in Peru.

    PubMed

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies.

  11. Methods for Estimating Population Density in Data-Limited Areas: Evaluating Regression and Tree-Based Models in Peru

    PubMed Central

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657
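
    A hedged sketch of the kind of tree-based regression the two records above evaluate, using scikit-learn's RandomForestRegressor on synthetic area-level covariates; the predictors, their relationship to density, and the error metric are illustrative assumptions, not the study's data or model specification.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic area-level covariates standing in for remotely sensed predictors
# (e.g. built-up fraction, night lights, elevation); values and response are made up.
n = 500
X = rng.uniform(0.0, 1.0, size=(n, 3))
log_density = (2.0 + 3.0 * X[:, 0] + 1.5 * X[:, 0] * X[:, 1]
               - 2.0 * X[:, 2] + rng.normal(0.0, 0.3, size=n))
y = np.exp(log_density)                        # persons per km^2

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print(f"MAE on held-out areas: {mean_absolute_error(y_te, rf.predict(X_te)):.1f} persons/km^2")
```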

  12. Estimation of proportions in mixed pixels through their region characterization

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    A region of mixed pixels can be characterized through the probability density function of proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
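
    The two-class shortcut mentioned above hinges on a transformation that simultaneously diagonalizes both class covariance matrices. One standard way to obtain such a matrix, sketched below with toy covariances, is the symmetric-definite generalized eigenproblem solved by scipy.linalg.eigh.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)

def random_spd(d):
    """Random symmetric positive-definite matrix used as a toy covariance."""
    a = rng.normal(size=(d, d))
    return a @ a.T + d * np.eye(d)

sigma1, sigma2 = random_spd(4), random_spd(4)   # class covariance matrices (toy values)

# Generalized symmetric-definite eigenproblem: sigma1 w = lam * sigma2 w.
# eigh returns W with W.T @ sigma2 @ W = I and W.T @ sigma1 @ W = diag(lam),
# i.e. W simultaneously diagonalizes both covariance matrices.
lam, W = eigh(sigma1, sigma2)

print(np.allclose(W.T @ sigma1 @ W, np.diag(lam), atol=1e-6))   # True
print(np.allclose(W.T @ sigma2 @ W, np.eye(4), atol=1e-6))      # True
```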

  13. Estimation of kinetic parameters from list-mode data using an indirect approach

    NASA Astrophysics Data System (ADS)

    Ortiz, Joseph Christian

    This dissertation explores the possibility of using an imaging approach to model classical pharmacokinetic (PK) problems. The kinetic parameters, which describe the uptake rates of a drug within a biological system, are the parameters of interest. Knowledge of the drug uptake in a system is useful in expediting the drug development process, as well as providing a dosage regimen for patients. Traditionally, the uptake rate of a drug in a system is obtained via sampling the concentration of the drug in a central compartment, usually the blood, and fitting the data to a curve. In a system consisting of multiple compartments, the number of kinetic parameters is proportional to the number of compartments, and in classical PK experiments, the number of identifiable parameters is less than the total number of parameters. Using an imaging approach to model classical PK problems, the support region of each compartment within the system will be exactly known, and all the kinetic parameters are uniquely identifiable. To solve for the kinetic parameters, an indirect approach, which is a two-part process, was used. First the compartmental activity was obtained from the data, and next the kinetic parameters were estimated. The novel aspect of the research is using list-mode data to obtain the activity curves from a system as opposed to a traditional binned approach. Using techniques from information theoretic learning, particularly kernel density estimation, a non-parametric probability density function for the voltage outputs on each photo-multiplier tube, for each event, was generated on the fly and used in a least squares optimization routine to estimate the compartmental activity. The estimability of the activity curves for varying noise levels as well as time sample densities was explored. Once an estimate for the activity was obtained, the kinetic parameters were obtained using multiple cost functions and then compared to each other using the mean squared error as the figure of merit.
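
    A minimal example of the generic kernel-density-estimation step referred to above, using scipy.stats.gaussian_kde on toy samples; the dissertation's on-the-fly estimation of photomultiplier-voltage densities from list-mode events is more involved.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)

# Toy stand-in for detector output samples: a two-component mixture.
samples = np.concatenate([rng.normal(0.0, 1.0, 800), rng.normal(4.0, 0.5, 200)])

kde = gaussian_kde(samples)            # bandwidth chosen by Scott's rule by default
grid = np.linspace(-4, 7, 200)
pdf = kde(grid)                        # nonparametric density estimate on the grid

print(f"integral over grid ~ {np.trapz(pdf, grid):.3f}")  # should be close to 1
```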

  14. Estimating snow leopard population abundance using photography and capture-recapture techniques

    USGS Publications Warehouse

    Jackson, R.M.; Roe, J.D.; Wangchuk, R.; Hunter, D.O.

    2006-01-01

    Conservation and management of snow leopards (Uncia uncia) has largely relied on anecdotal evidence and presence-absence data due to their cryptic nature and the difficult terrain they inhabit. These methods generally lack the scientific rigor necessary to accurately estimate population size and monitor trends. We evaluated the use of photography in capture-mark-recapture (CMR) techniques for estimating snow leopard population abundance and density within Hemis National Park, Ladakh, India. We placed infrared camera traps along actively used travel paths, scent-sprayed rocks, and scrape sites within 16- to 30-km2 sampling grids in successive winters during January and March 2003-2004. We used head-on, oblique, and side-view camera configurations to obtain snow leopard photographs at varying body orientations. We calculated snow leopard abundance estimates using the program CAPTURE. We obtained a total of 66 and 49 snow leopard captures resulting in 8.91 and 5.63 individuals per 100 trap-nights during 2003 and 2004, respectively. We identified snow leopards based on the distinct pelage patterns located primarily on the forelimbs, flanks, and dorsal surface of the tail. Capture probabilities ranged from 0.33 to 0.67. Density estimates ranged from 8.49 (SE = 0.22) individuals per 100 km2 in 2003 to 4.45 (SE = 0.16) in 2004. We believe the density disparity between years is attributable to different trap density and placement rather than to an actual decline in population size. Our results suggest that photographic capture-mark-recapture sampling may be a useful tool for monitoring demographic patterns. However, we believe a larger sample size would be necessary for generating a statistically robust estimate of population density and abundance based on CMR models.

  15. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  16. Task-oriented comparison of power spectral density estimation methods for quantifying acoustic attenuation in diagnostic ultrasound using a reference phantom method.

    PubMed

    Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A

    2013-07-01

    Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0 f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the parameter estimation region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with those using the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
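
    A hedged sketch of two ingredients named above: a Welch PSD estimate (scipy.signal.welch) and a power-law fit α(f) = α0 f^β. The reference-phantom step, in which attenuation comes from log-ratios of sample and reference spectra versus depth, is only indicated in a comment; all signals and attenuation values below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import welch

rng = np.random.default_rng(0)

# --- PSD estimation step (Welch's periodogram on a toy RF-like signal) ---
fs = 40e6                                    # sampling rate, Hz (assumed)
t = np.arange(4096) / fs
rf = np.sin(2 * np.pi * 5e6 * t) + 0.5 * rng.normal(size=t.size)
freqs, psd = welch(rf, fs=fs, nperseg=512)   # PSD estimate in V^2/Hz

# --- Power-law attenuation fit: alpha(f) = alpha0 * f**beta ---
# Toy attenuation values (dB/cm) versus frequency in MHz; in practice these would
# come from log-ratios of sample and reference-phantom spectra at each depth.
f_mhz = np.linspace(2, 8, 13)
alpha_meas = 0.5 * f_mhz ** 1.1 + rng.normal(0, 0.05, f_mhz.size)

power_law = lambda fm, a0, b: a0 * fm ** b
(alpha0, beta), _ = curve_fit(power_law, f_mhz, alpha_meas, p0=(0.5, 1.0))
print(f"alpha0 = {alpha0:.3f} dB/cm/MHz^beta, beta = {beta:.3f}")
```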

  17. A photometric method for the estimation of the oil yield of oil shale

    USGS Publications Warehouse

    Cuttitta, Frank

    1951-01-01

    A method is presented for the distillation and photometric estimation of the oil yield of oil-bearing shales. The oil shale is distilled in a closed test tube and the oil extracted with toluene. The optical density of the toluene extract is used in the estimation of oil content and is converted to percentage of oil by reference to a standard curve. This curve is obtained by relating the oil yields determined by the Fischer assay method to the optical density of the toluene extract of the oil evolved by the new procedure. The new method gives results similar to those obtained by the Fischer assay method in a much shorter time. The applicability of the new method to oil-bearing shale and phosphatic shale has been tested.

  18. Possibilities for Estimating Horizontal Electrical Currents in Active Regions on the Sun

    NASA Astrophysics Data System (ADS)

    Fursyak, Yu. A.; Abramenko, V. I.

    2017-12-01

    Part of the "free" magnetic energy associated with electrical current systems in the active region (AR) is released during solar flares. This proposition is widely accepted and it has stimulated interest in detecting electrical currents in active regions. The vertical component of an electric current in the photosphere can be found by observing the transverse magnetic field. At present, however, there are no direct methods for calculating transverse electric currents based on these observations. These calculations require information on the field vector measured simultaneously at several levels in the photosphere, which has not yet been done with solar instrumentation. In this paper we examine an approach to calculating the structure of the square of the density of a transverse electrical current based on a magnetogram of the vertical component of the magnetic field in the AR. Data obtained with the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamic Observatory (SDO) for the AR of NOAA AR 11283 are used. It is shown that (1) the observed variations in the magnetic field of a sunspot and the proposed estimate of the density of an annular horizontal current around the spot are consistent with Faraday's law and (2) the resulting estimates of the magnitude of the square of the density of the horizontal current {j}_{\\perp}^2 = (0.002- 0.004) A2/m4 are consistent with previously obtained values of the density of a vertical current in the photosphere. Thus, the proposed estimate is physically significant and this method can be used to estimate the density and structure of transverse electrical currents in the photosphere.

  19. Uncertainty estimation of Intensity-Duration-Frequency relationships: A regional analysis

    NASA Astrophysics Data System (ADS)

    Mélèse, Victor; Blanchet, Juliette; Molinié, Gilles

    2018-03-01

    We propose in this article a regional study of uncertainties in IDF curves derived from point-rainfall maxima. We develop two generalized extreme value models based on the simple scaling assumption, first in the frequentist framework and second in the Bayesian framework. Within the frequentist framework, uncertainties are obtained i) from the Gaussian density stemming from the asymptotic normality theorem of the maximum likelihood and ii) with a bootstrap procedure. Within the Bayesian framework, uncertainties are obtained from the posterior densities. We confront these two frameworks on the same database covering a large region of 100,000 km2 in southern France with contrasted rainfall regimes, in order to be able to draw conclusions that are not specific to the data. The two frameworks are applied to 405 hourly stations with data back to the 1980s, accumulated in the range 3 h-120 h. We show that i) the Bayesian framework is more robust than the frequentist one to the starting point of the estimation procedure, ii) the posterior and the bootstrap densities are able to better adjust uncertainty estimation to the data than the Gaussian density, and iii) the bootstrap density gives unreasonable confidence intervals, in particular for return levels associated with large return periods. Therefore our recommendation goes towards the use of the Bayesian framework to compute uncertainty.
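
    A minimal sketch of the frequentist ingredients discussed above: a maximum-likelihood GEV fit to annual maxima and a nonparametric bootstrap of a return level. The simple-scaling link across durations and the Bayesian counterpart are not included, and the data are synthetic.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(10)

# Toy "annual maxima" of hourly rainfall (mm); real data would come from the gauges.
maxima = genextreme.rvs(c=-0.1, loc=20, scale=5, size=40, random_state=42)

def return_level(sample, period_yr):
    """ML-fit a GEV and return the level exceeded on average once per `period_yr`."""
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(1.0 - 1.0 / period_yr, c, loc=loc, scale=scale)

rl100 = return_level(maxima, 100)

# Nonparametric bootstrap of the 100-year return level
boot = np.array([return_level(rng.choice(maxima, maxima.size, replace=True), 100)
                 for _ in range(500)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"100-yr return level: {rl100:.1f} mm  (bootstrap 95% CI: {lo:.1f}-{hi:.1f})")
```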

  20. The effect of respiratory induced density variations on non-TOF PET quantitation in the lung.

    PubMed

    Holman, Beverley F; Cuplov, Vesna; Hutton, Brian F; Groves, Ashley M; Thielemans, Kris

    2016-04-21

    Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant (18)F-FDG and (18)F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.

  1. The effect of respiratory induced density variations on non-TOF PET quantitation in the lung

    NASA Astrophysics Data System (ADS)

    Holman, Beverley F.; Cuplov, Vesna; Hutton, Brian F.; Groves, Ashley M.; Thielemans, Kris

    2016-04-01

    Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant 18F-FDG and 18F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.

  2. Population density estimated from locations of individuals on a passive detector array

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.

    2009-01-01

    The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.
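
    A hedged sketch of the signal-strength detection idea stated above: detection occurs when received strength exceeds a threshold, so with Gaussian variability the detection probability at distance d is a probit function of expected strength minus the threshold. The coefficients and threshold below are illustrative assumptions, not fitted values.

```python
import numpy as np
from scipy.stats import norm

def detection_prob(dist_m, beta0=150.0, beta1=20.0, cutoff=95.0, sd=5.0):
    """P(detect) when received strength beta0 - beta1*log10(d) + N(0, sd) exceeds cutoff.

    All parameter values here are illustrative (toy dB-like units)."""
    expected = beta0 - beta1 * np.log10(dist_m)
    return norm.cdf((expected - cutoff) / sd)

for d in (50, 200, 500, 1000):
    print(f"distance {d:4d} m -> P(detect) = {detection_prob(d):.2f}")
```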

  3. Electron diffusion deduced from eiscat

    NASA Astrophysics Data System (ADS)

    Roettger, J.; Fukao, S.

    The EISCAT Svalbard Radar (ESR) operates on 500 MHz; collocated with it is the SOUSY Svalbard Radar (SSR), which operates on 53.5 MHz. We have used both radars during Polar Mesosphere Summer Echoes (PMSE) coherent scatter conditions, where the ESR can also detect incoherent scatter and thus allows the electron density to be estimated. We describe observations during two observing periods in summer 1999 and 2000. Well-calibrated signal power was obtained with both radars, from which we deduced the radar reflectivity. Estimating the turbulence dissipation rate from the narrow beam observations of PMSE with the ESR, and using the estimate of the electron density and the radar reflectivity at both frequencies, we can obtain estimates of the Schmidt number by comparing our observational results with the model of Cho and Kelley (1993). Schmidt numbers of at least 100 are necessary to obtain the measured radar reflectivities, which basically supports the model of Cho and Kelley claiming that the inertial-viscous subrange in the electron gas can extend down to small scales of some ten centimeters (namely, the Bragg scale of the ESR).

  4. The use of photographic rates to estimate densities of tigers and other cryptic mammals: a comment on misleading conclusions

    USGS Publications Warehouse

    Jennelle, C.S.; Runge, M.C.; MacKenzie, D.I.

    2002-01-01

    The search for easy-to-use indices that substitute for direct estimation of animal density is a common theme in wildlife and conservation science, but one fraught with well-known perils (Nichols & Conroy, 1996; Yoccoz, Nichols & Boulinier, 2001; Pollock et al., 2002). To establish the utility of an index as a substitute for an estimate of density, one must: (1) demonstrate a functional relationship between the index and density that is invariant over the desired scope of inference; (2) calibrate the functional relationship by obtaining independent measures of the index and the animal density; (3) evaluate the precision of the calibration (Diefenbach et al., 1994). Carbone et al. (2001) argue that the number of camera-days per photograph is a useful index of density for large, cryptic, forest-dwelling animals, and proceed to calibrate this index for tigers (Panthera tigris). We agree that a properly calibrated index may be useful for rapid assessments in conservation planning. However, Carbone et al. (2001), who desire to use their index as a substitute for density, do not adequately address the three elements noted above. Thus, we are concerned that others may view their methods as justification for not attempting directly to estimate animal densities, without due regard for the shortcomings of their approach.

  5. On the asymptotic improvement of supervised learning by utilizing additional unlabeled samples - Normal mixture density case

    NASA Technical Reports Server (NTRS)

    Shahshahani, Behzad M.; Landgrebe, David A.

    1992-01-01

    The effect of additional unlabeled samples in improving the supervised learning process is studied in this paper. Three learning processes (supervised, unsupervised, and combined supervised-unsupervised) are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that under a normal mixture density assumption for the probability density function of the feature space, the combined supervised-unsupervised learning is always superior to the supervised learning in achieving better estimates. Experimental results are provided to verify the theoretical concepts.

  6. Evaluating sampling strategies for larval cisco (Coregonus artedi)

    USGS Publications Warehouse

    Myers, J.T.; Stockwell, J.D.; Yule, D.L.; Black, J.A.

    2008-01-01

    To improve our ability to assess larval cisco (Coregonus artedi) populations in Lake Superior, we conducted a study to compare several sampling strategies. First, we compared density estimates of larval cisco concurrently captured in surface waters with a 2 x 1-m paired neuston net and a 0.5-m (diameter) conical net. Density estimates obtained from the two gear types were not significantly different, suggesting that the conical net is a reasonable alternative to the more cumbersome and costly neuston net. Next, we assessed the effect of tow pattern (sinusoidal versus straight tows) to examine if propeller wash affected larval density. We found no effect of propeller wash on the catchability of larval cisco. Given the availability of global positioning systems, we recommend sampling larval cisco using straight tows to simplify protocols and facilitate straightforward measurements of volume filtered. Finally, we investigated potential trends in larval cisco density estimates by sampling four time periods during the light period of a day at individual sites. Our results indicate no significant trends in larval density estimates during the day. We conclude estimates of larval cisco density across space are not confounded by time at a daily timescale. Well-designed, cost effective surveys of larval cisco abundance will help to further our understanding of this important Great Lakes forage species.

  7. Planar imaging of OH density distributions in a supersonic combustion tunnel

    NASA Technical Reports Server (NTRS)

    Quagliaroli, T. M.; Laufer, G.; Krauss, R. H.; Mcdaniel, J. C., Jr.

    1993-01-01

    Images of absolute OH number density were obtained using planar laser-induced fluorescence (PLIF) in a supersonic H2-air combustion tunnel. A tunable KrF excimer laser was used to excite the Q2(11) ro-vibronic line. Calibration of the PLIF images was obtained by referencing the signal measured in the flame to that obtained by the excitation of OH produced by thermal dissociation of H2O in an atmospheric furnace. Measurement errors due to uncertainty in internal furnace atmospheric conditions and image temperature correction are estimated.

  8. Modeling Earth's surface topography: decomposition of the static and dynamic components

    NASA Astrophysics Data System (ADS)

    Guerri, M.; Cammarano, F.; Tackley, P. J.

    2017-12-01

    Isolating the portion of topography supported by mantle convection, the so-called dynamic topography, would give us precious information on the vigor and style of the convection itself. Contrasting results on the estimate of dynamic topography motivate us to analyse the sources of uncertainties affecting its modeling. We obtain models of mantle and crust density, leveraging seismic and mineral physics constraints. We use the models to compute isostatic topography and residual topography maps. Estimates of dynamic topography and the associated synthetic geoid are obtained by instantaneous mantle flow modeling. We test various viscosity profiles and 3D viscosity distributions accounting for inferred lateral variations in temperature. We find that the patterns of residual and dynamic topography are robust, with average correlation coefficients of 0.74 and 0.71, respectively. The amplitudes are however poorly constrained. For the static component, the considered lithospheric mantle density models result in topographies that differ, on average, by 720 m, with peaks reaching 1.7 km. The crustal density models produce variations in isostatic topography averaging 350 m, with peaks of 1 km. For the dynamic component, we obtain peak-to-peak topography amplitudes exceeding 3 km for all the tested mantle density and viscosity models. Such values of dynamic topography produce geoid undulations that are not in agreement with observations. Assuming chemical heterogeneities in the lower mantle, in correspondence with the LLSVPs (Large Low Shear wave Velocity Provinces), helps to decrease the amplitudes of dynamic topography and geoid, but reduces the correlation between the synthetic and observed geoid. The correlation coefficient between the residual and dynamic topography maps is always less than 0.55. In general, our results indicate that i) current knowledge of crust density, mantle density and mantle viscosity is still limited, and ii) it is important to account for all the various sources of uncertainties when computing static and dynamic topography. In conclusion, a multidisciplinary approach, which involves multiple geophysical observations and constraints from mineral physics, is necessary for obtaining robust density models and, consequently, for properly estimating the dynamic topography.

  9. Studies on spectral analysis of randomly sampled signals: Application to laser velocimetry data

    NASA Technical Reports Server (NTRS)

    Sree, David

    1992-01-01

    Spectral analysis is very useful in determining the frequency characteristics of many turbulent flows, for example, vortex flows, tail buffeting, and other pulsating flows. It is also used for obtaining turbulence spectra from which the time and length scales associated with the turbulence structure can be estimated. These estimates, in turn, can be helpful for validation of theoretical/numerical flow turbulence models. Laser velocimetry (LV) is being extensively used in the experimental investigation of different types of flows because of its inherent advantages: nonintrusive probing, high frequency response, no calibration requirements, etc. Typically, the output of an individual realization laser velocimeter is a set of randomly sampled velocity data. Spectral analysis of such data requires special techniques to obtain reliable estimates of correlation and power spectral density functions that describe the flow characteristics. FORTRAN codes for obtaining the autocorrelation and power spectral density estimates using the correlation-based slotting technique were developed. Extensive studies have been conducted on simulated first-order spectrum and sine signals to improve the spectral estimates. A first-order spectrum was chosen because it represents the characteristics of a typical one-dimensional turbulence spectrum. Digital prefiltering techniques were applied to improve the spectral estimates obtained from randomly sampled data. Studies show that reliable spectral estimates can be obtained at frequencies up to about five times the mean sampling rate.
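
    A minimal sketch of the correlation-based slotting idea described above: products of sample pairs are accumulated into lag-time slots and averaged to estimate the autocorrelation of randomly sampled data (a PSD estimate would then follow by Fourier transform). The simulated sampling times and signal below are invented, and no prefiltering is applied.

```python
import numpy as np

rng = np.random.default_rng(5)

# Randomly (Poisson) sampled record of a smooth process, mimicking individual-
# realization LV data: sample times t and velocities u at those times (all synthetic).
t = np.cumsum(rng.exponential(1e-3, size=4000))          # mean sampling rate ~1 kHz
grid = np.arange(0.0, t[-1], 1e-4)
smooth = np.convolve(rng.normal(size=grid.size), np.ones(50) / 50, mode="same")
u = np.interp(t, grid, smooth)

def slotted_autocorr(t, u, dt_slot, n_slots):
    """Slotted autocorrelation: average products u_i*u_j over pairs whose lag
    t_j - t_i falls into each slot of width dt_slot."""
    u = u - u.mean()
    num = np.zeros(n_slots)
    cnt = np.zeros(n_slots)
    for i in range(len(t)):
        lags = t[i:] - t[i]
        k = (lags / dt_slot).astype(int)
        ok = k < n_slots
        np.add.at(num, k[ok], u[i] * u[i:][ok])
        np.add.at(cnt, k[ok], 1)
    return num / np.maximum(cnt, 1)

acf = slotted_autocorr(t, u, dt_slot=5e-4, n_slots=40)
print(np.round(acf[:5] / acf[0], 3))   # normalized autocorrelation in the first slots
```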

  10. Hierarchical models for estimating density from DNA mark-recapture studies

    USGS Publications Warehouse

    Gardner, B.; Royle, J. Andrew; Wegan, M.T.

    2009-01-01

    Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps (e.g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS.
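
    A hedged sketch of the distance-dependent detection model at the heart of the hierarchical approach above, using the common half-normal form p(d) = g0 exp(−d²/2σ²); the parameter values and trap layout are illustrative, not the bear study's estimates.

```python
import numpy as np

def p_detect(d_km, g0=0.05, sigma_km=2.0):
    """Half-normal detection function: probability a trap detects an individual
    whose activity centre is d_km away, per sampling occasion (toy parameters)."""
    return g0 * np.exp(-d_km ** 2 / (2.0 * sigma_km ** 2))

# Expected detections of one individual over a small trap array and 10 occasions
traps = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0]])  # trap coords, km
centre = np.array([1.0, 1.5])                                        # activity centre, km
dists = np.linalg.norm(traps - centre, axis=1)
expected = 10 * p_detect(dists).sum()
print(f"expected detections over 10 occasions: {expected:.2f}")
```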

  11. Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.

    PubMed

    Joshi, Niranjan; Kadir, Timor; Brady, Michael

    2011-08-01

    Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.

  12. Mark-recapture using tetracycline and genetics reveal record-high bear density

    USGS Publications Warehouse

    Peacock, E.; Titus, K.; Garshelis, D.L.; Peacock, M.M.; Kuc, M.

    2011-01-01

    We used tetracycline biomarking, augmented with genetic methods to estimate the size of an American black bear (Ursus americanus) population on an island in Southeast Alaska. We marked 132 and 189 bears that consumed remote, tetracycline-laced baits in 2 different years, respectively, and observed 39 marks in 692 bone samples subsequently collected from hunters. We genetically analyzed hair samples from bait sites to determine the sex of marked bears, facilitating derivation of sex-specific population estimates. We obtained harvest samples from beyond the study area to correct for emigration. We estimated a density of 155 independent bears/100 km2, which is equivalent to the highest recorded for this species. This high density appears to be maintained by abundant, accessible natural food. Our population estimate (approx. 1,000 bears) could be used as a baseline and to set hunting quotas. The refined biomarking method for abundance estimation is a useful alternative where physical captures or DNA-based estimates are precluded by cost or logistics. Copyright © 2011 The Wildlife Society.
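
    The marked-fraction logic behind a biomarking estimate can be illustrated with the Chapman-corrected Lincoln-Petersen estimator, sketched below with invented numbers; the published analysis additionally used sex-specific estimates and an emigration correction, so this is only a back-of-envelope analogue.

```python
def chapman_estimate(n_marked, n_sampled, n_recaptured):
    """Chapman-corrected Lincoln-Petersen abundance estimator."""
    return (n_marked + 1) * (n_sampled + 1) / (n_recaptured + 1) - 1

# Toy numbers (not the study's data): 150 bears biomarked, 500 harvest samples
# screened, 75 of them showing the mark.
n_hat = chapman_estimate(150, 500, 75)
area_km2 = 1000.0   # assumed effective sampling area
print(f"abundance ~ {n_hat:.0f} bears, ~{100 * n_hat / area_km2:.0f} bears/100 km^2")
```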

  13. Improving the Accuracy of Mapping Urban Vegetation Carbon Density by Combining Shadow Remove, Spectral Unmixing Analysis and Spatial Modeling

    NASA Astrophysics Data System (ADS)

    Qie, G.; Wang, G.; Wang, M.

    2016-12-01

    Mixed pixels and shadows due to buildings in urban areas impede accurate estimation and mapping of city vegetation carbon density. In most previous studies, these factors are often ignored, which results in underestimation of city vegetation carbon density. In this study we present an integrated methodology to improve the accuracy of mapping city vegetation carbon density. Firstly, we applied a linear shadow remove analysis (LSRA) on remotely sensed Landsat 8 images to reduce the shadow effects on carbon estimation. Secondly, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), and utilized and compared the integrated models on shadow-removed images to map vegetation carbon density. This methodology was examined in Shenzhen City of Southeast China. A data set from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables statistically significantly contributing to improving the fit of the models to the data and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from LSUA was then added into the models as an important independent variable. The estimates obtained were evaluated using a cross-validation method. Our results showed that higher accuracies were obtained from the integrated models compared with the ones using traditional methods which ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: Urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images
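
    A hedged sketch of a linear spectral unmixing step of the kind the LSUA performs: per-pixel endmember fractions from nonnegative least squares. The endmember spectra, band count and mixing proportions below are invented, and the study's actual constraints may differ.

```python
import numpy as np
from scipy.optimize import nnls

# Made-up endmember spectra (rows: 6 bands; columns: vegetation, impervious, soil).
E = np.array([[0.05, 0.30, 0.25],
              [0.08, 0.28, 0.27],
              [0.04, 0.25, 0.30],
              [0.45, 0.22, 0.35],
              [0.30, 0.35, 0.45],
              [0.15, 0.40, 0.50]])

rng = np.random.default_rng(0)
pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2] + rng.normal(0, 0.005, 6)

fractions, _ = nnls(E, pixel)          # nonnegative least-squares unmixing
fractions /= fractions.sum()           # renormalize so fractions sum to one
print(np.round(fractions, 2))          # ~[0.6, 0.3, 0.1]; first entry is vegetation fraction
```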

  14. The Two-Component Model for Calculating Total Body Fat from Body Density: An Evaluation in Healthy Women before, during and after Pregnancy

    PubMed Central

    Forsum, Elisabet; Henriksson, Pontus; Löf, Marie

    2014-01-01

    The ability to assess body composition during pregnancy is often important. Estimating body density (DB) and using the two-component model (2CM) to calculate total body fat (TBF) represents one option. However, this approach has been insufficiently evaluated during pregnancy. We evaluated the 2CM, and estimated fat-free mass (FFM) density and variability, in 17 healthy women before pregnancy, in gestational weeks 14 and 32, and 2 weeks postpartum, based on DB (underwater weighing), total body water (deuterium dilution) and body weight, assessed on these four occasions. TBF, calculated using the 2CM and published FFM density (TBF2CM), was compared to reference estimates obtained using the three-component model (TBF3CM). TBF2CM minus TBF3CM (mean ± 2SD) was −1.63 ± 5.67 (p = 0.031), −1.39 ± 7.75 (p = 0.16), −0.38 ± 4.44 (p = 0.49) and −1.39 ± 5.22 (p = 0.043) % before pregnancy, in gestational weeks 14 and 32, and 2 weeks postpartum, respectively. The effect of pregnancy on the variability of FFM density was larger in gestational week 14 than in gestational week 32. The 2CM, based on DB and published FFM density, assessed body composition as accurately in gestational week 32 as in non-pregnant adults. Corresponding values in gestational week 14 were slightly less accurate than those obtained before pregnancy. PMID:25526240
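
    A minimal example of the generic two-component calculation: the fat fraction follows from body density once the densities of fat and fat-free mass are assumed. The constants below (0.9007 and 1.100 g/mL) are the classic non-pregnant values, which is precisely the assumption the study above re-examines for pregnancy.

```python
def fat_fraction(body_density, d_fat=0.9007, d_ffm=1.100):
    """Two-component model: solve 1/DB = f/d_fat + (1 - f)/d_ffm for the fat fraction f.

    d_fat and d_ffm (g/mL) are assumed constants; d_ffm changes during pregnancy."""
    return (d_fat * d_ffm / (d_ffm - d_fat)) / body_density - d_fat / (d_ffm - d_fat)

for db in (1.020, 1.040, 1.060):
    print(f"DB = {db:.3f} g/mL -> TBF = {100 * fat_fraction(db):.1f} %")
```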

  15. Estimating large carnivore populations at global scale based on spatial predictions of density and distribution – Application to the jaguar (Panthera onca)

    PubMed Central

    Robinson, Hugh S.; Abarca, Maria; Zeller, Katherine A.; Velasquez, Grisel; Paemelaere, Evi A. D.; Goldberg, Joshua F.; Payan, Esteban; Hoogesteijn, Rafael; Boede, Ernesto O.; Schmidt, Krzysztof; Lampo, Margarita; Viloria, Ángel L.; Carreño, Rafael; Robinson, Nathaniel; Lukacs, Paul M.; Nowak, J. Joshua; Salom-Pérez, Roberto; Castañeda, Franklin; Boron, Valeria; Quigley, Howard

    2018-01-01

    Broad scale population estimates of declining species are desired for conservation efforts. However, for many secretive species including large carnivores, such estimates are often difficult. Based on published density estimates obtained through camera trapping, presence/absence data, and globally available predictive variables derived from satellite imagery, we modelled density and occurrence of a large carnivore, the jaguar, across the species’ entire range. We then combined these models in a hierarchical framework to estimate the total population. Our models indicate that potential jaguar density is best predicted by measures of primary productivity, with the highest densities in the most productive tropical habitats and a clear declining gradient with distance from the equator. Jaguar distribution, in contrast, is determined by the combined effects of human impacts and environmental factors: probability of jaguar occurrence increased with forest cover, mean temperature, and annual precipitation and declined with increases in human footprint index and human density. Probability of occurrence was also significantly higher for protected areas than outside of them. We estimated the world’s jaguar population at 173,000 (95% CI: 138,000–208,000) individuals, mostly concentrated in the Amazon Basin; elsewhere, populations tend to be small and fragmented. The high number of jaguars results from the large total area still occupied (almost 9 million km2) and low human densities (< 1 person/km2) coinciding with high primary productivity in the core area of jaguar range. Our results show the importance of protected areas for jaguar persistence. We conclude that combining modelling of density and distribution can reveal ecological patterns and processes at global scales, can provide robust estimates for use in species assessments, and can guide broad-scale conservation actions. PMID:29579129

  16. Precision Orbit Derived Atmospheric Density: Development and Performance

    NASA Astrophysics Data System (ADS)

    McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.

    2012-09-01

    Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities and considering ballistic coefficient estimation results. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-2000 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities; in particular, catalog maintenance. Generally, density is the largest error source in satellite drag calculations and current empirical density models such as Jacchia 71 and NRLMSISE-2000 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available using Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.
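
    The core physical relation linking the estimated drag signal to density is a_drag = ½ ρ v² (Cd A/m). The sketch below simply inverts that relation for illustrative CHAMP-like values; the POE approach itself estimates density and ballistic coefficient inside an orbit-determination filter, which this snippet does not attempt.

```python
import numpy as np

def density_from_drag(a_drag, v_rel, cd=2.2, area=0.74, mass=522.0):
    """Invert a_drag = 0.5 * rho * v_rel**2 * (cd * area / mass) for density rho.

    cd, area (m^2) and mass (kg) are illustrative CHAMP-like values, not mission data."""
    return 2.0 * a_drag / (v_rel ** 2 * (cd * area / mass))

a_meas = 2.0e-7        # along-track drag acceleration, m/s^2 (toy value)
v_rel = 7600.0         # speed relative to the co-rotating atmosphere, m/s
rho = density_from_drag(a_meas, v_rel)
print(f"rho ~ {rho:.2e} kg/m^3")
```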

  17. Covariance and correlation estimation in electron-density maps.

    PubMed

    Altomare, Angela; Cuocci, Corrado; Giacovazzo, Carmelo; Moliterni, Anna; Rizzi, Rosanna

    2012-03-01

    Quite recently two papers have been published [Giacovazzo & Mazzone (2011). Acta Cryst. A67, 210-218; Giacovazzo et al. (2011). Acta Cryst. A67, 368-382] which calculate the variance in any point of an electron-density map at any stage of the phasing process. The main aim of the papers was to associate a standard deviation to each pixel of the map, in order to obtain a better estimate of the map reliability. This paper deals with the covariance estimate between points of an electron-density map in any space group, centrosymmetric or non-centrosymmetric, no matter the correlation between the model and target structures. The aim is as follows: to verify if the electron density in one point of the map is amplified or depressed as an effect of the electron density in one or more other points of the map. High values of the covariances are usually connected with undesired features of the map. The phases are the primitive random variables of our probabilistic model; the covariance changes with the quality of the model and therefore with the quality of the phases. The conclusive formulas show that the covariance is also influenced by the Patterson map. Uncertainty on measurements may influence the covariance, particularly in the final stages of the structure refinement; a general formula is obtained taking into account both phase and measurement uncertainty, valid at any stage of the crystal structure solution.

  18. Robust Estimation of Electron Density From Anatomic Magnetic Resonance Imaging of the Brain Using a Unifying Multi-Atlas Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Shangjie; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California; Hara, Wendy

    Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.

  19. [Population estimates and conservation of felids (Carnivora: Felidae) in Northern Quintana Roo, Mexico].

    PubMed

    Ávila-Nájera, Dulce María; Chávez, Cuauhtémoc; Lazcano-Barrero, Marco A; Pérez-Elizalde, Sergio; Alcántara-Carbajal, José Luis

    2015-09-01

    Wildlife density estimates provide an idea of the current state of populations and, in some cases, reflect the conservation status of ecosystems, essential aspects for effective management actions. In Mexico, several regions have been identified as high-priority areas for the conservation of species that face some level of risk, like the Yucatan Peninsula (YP), which holds the country's largest population of jaguars. However, little is known about the current status of the threatened and endangered felids that coexist in the Northeastern portion of the Peninsula. Our objective was to estimate wild cat population densities over time at El Eden Ecological Reserve (EEER) and its surrounding areas. Camera-trap surveys were conducted over four years (2008, 2010, 2011 and 2012), and densities were estimated using capture-recapture models for closed populations (CAPTURE + MMDM or 1/2 MMDM) and the spatially explicit capture-recapture model implemented in SPACECAP. The species studied were jaguar (Panthera onca), puma (Puma concolor), ocelot (Leopardus pardalis), jaguarundi (Puma yaguaroundi) and margay (Leopardus wiedii). Capture frequencies were obtained for all five species and densities (individuals/100 km²) for three. Densities estimated in CAPTURE with the Mean Maximum Distance Moved (MMDM) ranged from 1.2 to 2.6 for jaguars, from 1.7 to 4.3 for pumas and from 1.4 to 13.8 for ocelots. Density estimates in SPACECAP ranged from 0.7 to 3.6 for jaguars, from 1.8 to 5.2 for pumas and from 2.1 to 5.1 for ocelots. Spatially explicit capture-recapture (SECR) methods in SPACECAP were less likely to overestimate densities, making them a useful tool in the planning and decision-making process for the conservation of these species. The Northeastern portion of the Yucatan Peninsula maintains high populations of wild cats; the EEER and its surrounding areas are valuable sites for the conservation of this group of predators.
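
    A minimal sketch of the MMDM-style conversion from a closed-population abundance estimate to a density, assuming the effective trapping area is the convex hull of the camera stations buffered by half the mean maximum distance moved (the shapely package is used for the buffered-hull area). The coordinates, distances and abundance below are illustrative, not values from the study.

```python
import numpy as np
from shapely.geometry import MultiPoint

def density_per_100km2(n_hat, trap_xy_km, max_dists_km, half_buffer=True):
    """Convert a closed-population abundance estimate into a density by
    buffering the convex hull of the trap array with (half of) the mean
    maximum distance moved (MMDM) by recaptured individuals."""
    mmdm = float(np.mean(max_dists_km))          # mean of per-individual maxima
    w = mmdm / 2.0 if half_buffer else mmdm      # 1/2 MMDM or full MMDM buffer
    hull = MultiPoint([tuple(p) for p in trap_xy_km]).convex_hull
    effective_area = hull.buffer(w).area         # km^2 when inputs are in km
    return 100.0 * n_hat / effective_area        # individuals per 100 km^2

# Toy usage: 6 camera stations on a 4 km x 2 km layout (illustrative values)
traps = [(0, 0), (2, 0), (4, 0), (0, 2), (2, 2), (4, 2)]
print(round(density_per_100km2(n_hat=5, trap_xy_km=traps,
                               max_dists_km=[1.8, 2.4, 3.0]), 1))
```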

  20. Emission measures derived from far ultraviolet spectra of T Tauri stars

    NASA Astrophysics Data System (ADS)

    Cram, L. E.; Giampapa, M. S.; Imhoff, C. L.

    1980-06-01

    Spectroscopic diagnostics based on UV emission line observations have been developed to study the solar chromosphere, transition region, and corona. The atmospheric properties that can be inferred from observations of total line intensities include the temperature, by identifying the ionic species present; the temperature distribution of the emission measure, from the absolute intensities; and the electron density of the source, from line intensity ratios sensitive to the electron density. In the present paper, the temperature distribution of the emission measure is estimated from observations of far UV emission line fluxes of the T Tauri stars, RW Aurigae and RU Lupi, made on the IUE. A crude estimate of the electron density of one star is obtained, using density-sensitive line ratios.

  1. Variability of dental cone beam CT grey values for density estimations

    PubMed Central

    Pauwels, R; Nackaerts, O; Bellaiche, N; Stamatakis, H; Tsiklakis, K; Walker, A; Bosmans, H; Bogaerts, R; Jacobs, R; Horner, K

    2013-01-01

    Objective The aim of this study was to investigate the use of dental cone beam CT (CBCT) grey values for density estimations by calculating the correlation with multislice CT (MSCT) values and the grey value error after recalibration. Methods A polymethyl methacrylate (PMMA) phantom was developed containing inserts of different density: air, PMMA, hydroxyapatite (HA) 50 mg cm−3, HA 100, HA 200 and aluminium. The phantom was scanned on 13 CBCT devices and 1 MSCT device. Correlation between CBCT grey values and CT numbers was calculated, and the average error of the CBCT values was estimated in the medium-density range after recalibration. Results Pearson correlation coefficients ranged between 0.7014 and 0.9996 in the full-density range and between 0.5620 and 0.9991 in the medium-density range. The average error of CBCT voxel values in the medium-density range was between 35 and 1562. Conclusion Even though most CBCT devices showed a good overall correlation with CT numbers, large errors can be seen when using the grey values in a quantitative way. Although it could be possible to obtain pseudo-Hounsfield units from certain CBCTs, alternative methods of assessing bone tissue should be further investigated. Advances in knowledge The suitability of dental CBCT for density estimations was assessed, involving a large number of devices and protocols. The possibility for grey value calibration was thoroughly investigated. PMID:23255537
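
    A minimal sketch of the kind of correlation and recalibration analysis described above: CBCT grey values for the phantom inserts are regressed against the corresponding MSCT numbers, and the residual error after the linear recalibration is reported. The paired values are illustrative placeholders, not the study's data.

```python
import numpy as np

# Paired phantom measurements for the same inserts: CBCT grey values and MSCT
# numbers (HU).  The values below are illustrative placeholders, not study data.
cbct_grey = np.array([-980.0, -120.0, -60.0, 40.0, 180.0, 950.0])
msct_hu = np.array([-1000.0, -100.0, -50.0, 60.0, 200.0, 1000.0])

# Pearson correlation between the two modalities
r = np.corrcoef(cbct_grey, msct_hu)[0, 1]

# Linear recalibration of CBCT grey values to pseudo-Hounsfield units
slope, intercept = np.polyfit(cbct_grey, msct_hu, 1)
recalibrated = slope * cbct_grey + intercept

# Average residual error after recalibration (cf. the errors reported above)
mean_abs_error = np.mean(np.abs(recalibrated - msct_hu))
print(round(r, 4), round(mean_abs_error, 1))
```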

  2. Phase diagram and universality of the Lennard-Jones gas-liquid system.

    PubMed

    Watanabe, Hiroshi; Ito, Nobuyasu; Hu, Chin-Kun

    2012-05-28

    The gas-liquid phase transition of the three-dimensional Lennard-Jones particles system is studied by molecular dynamics simulations. The gas and liquid densities in the coexisting state are determined with high accuracy. The critical point is determined by the block density analysis of the Binder parameter with the aid of the law of rectilinear diameter. From the critical behavior of the gas-liquid coexisting density, the critical exponent of the order parameter is estimated to be β = 0.3285(7). Surface tension is estimated from interface broadening behavior due to capillary waves. From the critical behavior of the surface tension, the critical exponent of the correlation length is estimated to be ν = 0.63(4). The obtained values of β and ν are consistent with those of the Ising universality class.
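
    The sketch below illustrates, under stated assumptions, the two fits mentioned in the abstract: the order parameter rho_l - rho_g is fitted to B(Tc - T)^beta to estimate the critical exponent, and the law of rectilinear diameter is applied to the coexistence-curve midpoint. The coexistence densities are rough illustrative numbers, not the simulation data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Coexisting liquid/gas densities vs. temperature (rough illustrative values,
# not the paper's simulation data)
T = np.array([1.10, 1.15, 1.20, 1.25, 1.28])
rho_liq = np.array([0.600, 0.565, 0.520, 0.465, 0.420])
rho_gas = np.array([0.080, 0.100, 0.130, 0.170, 0.210])

# Order parameter: rho_l - rho_g = B * (Tc - T)**beta
def order_param(Tv, Tc, B, beta):
    return B * (Tc - Tv) ** beta

(Tc, B, beta), _ = curve_fit(order_param, T, rho_liq - rho_gas,
                             p0=[1.31, 1.0, 0.33],
                             bounds=([1.281, 0.1, 0.1], [1.6, 5.0, 1.0]))

# Law of rectilinear diameter: (rho_l + rho_g)/2 = rho_c + A * (Tc - T)
A, rho_c = np.polyfit(Tc - T, 0.5 * (rho_liq + rho_gas), 1)
print(round(Tc, 3), round(beta, 3), round(rho_c, 3))
```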

  3. Estimating population density for disease risk assessment: The importance of understanding the area of influence of traps using wild pigs as an example.

    PubMed

    Davis, Amy J; Leland, Bruce; Bodenchuk, Michael; VerCauteren, Kurt C; Pepin, Kim M

    2017-06-01

    Population density is a key driver of disease dynamics in wildlife populations. Accurate disease risk assessment and determination of management impacts on wildlife populations require an ability to estimate population density alongside management actions. A common management technique for controlling wildlife populations to monitor and mitigate disease transmission risk is trapping (e.g., box traps, corral traps, drop nets). Although abundance can be estimated from trapping actions using a variety of analytical approaches, inference is limited by the spatial extent to which a trap attracts animals on the landscape. If this "area of influence" were known, abundance estimates could be converted to densities. In addition to being an important predictor of contact rate and thus disease spread, density is more informative because it is comparable across sites of different sizes. The goal of our study is to demonstrate the importance of determining the area sampled by traps (the area of influence) so that density can be estimated from management-based trapping designs that do not employ a trapping grid. To provide one example of how the area of influence could be calculated alongside management, we conducted a small pilot study on wild pigs (Sus scrofa) using two removal methods, (1) trapping followed by (2) aerial gunning, at three sites in northeast Texas in 2015. We estimated abundance from trapping data with a removal model. We calculated empirical densities as aerial counts divided by the area searched by air (based on aerial flight tracks). We inferred the area of influence of traps by assuming consistent densities across the larger spatial scale and then solving for the area impacted by the traps. Based on our pilot study we estimated the area of influence for corral traps in late summer in Texas to be ∼8.6 km². Future work showing the effects of behavioral and environmental factors on the area of influence will help managers obtain estimates of density from management data and determine conditions where trap attraction is strongest. The ability to estimate density alongside population control activities will improve risk assessment and response operations against disease outbreaks. Published by Elsevier B.V.
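
    The area-of-influence arithmetic described above reduces to two divisions; a back-of-envelope version with illustrative numbers (not the Texas pilot-study data) is sketched below.

```python
# Back-of-envelope version of the logic described above (illustrative numbers,
# not the Texas pilot-study data): assume the density observed from the air
# also applies to the area the traps sample, then solve for that area.
trap_abundance = 60.0        # removal-model abundance attributed to the traps
aerial_count = 150.0         # animals counted during the aerial survey
area_searched_km2 = 35.0     # area covered by the flight tracks (km^2)

empirical_density = aerial_count / area_searched_km2      # animals per km^2
area_of_influence = trap_abundance / empirical_density    # km^2 sampled by traps
print(round(area_of_influence, 1))
```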

  4. Estimating the number of terrestrial organisms on the moon.

    NASA Technical Reports Server (NTRS)

    Dillon, R. T.; Gavin, W. R.; Roark, A. L.; Trauth, C. A., Jr.

    1973-01-01

    Methods used to obtain estimates for the biological loadings on moon-bound spacecraft prior to launch are reviewed, along with the mathematical models used to calculate the microorganism density on the lunar surface (as it results from contamination deposited by manned and unmanned flights) and the probability of lunar soil sample contamination. Some of the results obtained by the use of a lunar inventory system based on these models are presented.

  5. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
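
    For the beta-prior, binomial-amplitude case discussed above, the MMSE (posterior-mean) estimate reduces to the standard conjugate beta-binomial result, which is indeed a linear function of the observed counts; the sketch below states that formula with hypothetical prior parameters, and should be read as an illustration of the linearity claim rather than the paper's exact derivation.

```python
def mmse_rate_estimate(a, b, jump_counts, trials_per_step):
    """Posterior-mean (MMSE) estimate of a jump rate under a Beta(a, b) prior
    with binomially distributed jump amplitudes.  The conjugate update makes
    the estimate a linear function of the observed counts, consistent with
    the linearity result quoted in the abstract."""
    k_total = sum(jump_counts)       # total observed jumps
    n_total = sum(trials_per_step)   # total binomial trials
    return (a + k_total) / (a + b + n_total)

# Toy usage: Beta(2, 8) prior (hypothetical) and three observation epochs
print(mmse_rate_estimate(2.0, 8.0, jump_counts=[1, 0, 2], trials_per_step=[5, 5, 5]))
```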

  6. Density of Jatropha curcas Seed Oil and its Methyl Esters: Measurement and Estimations

    NASA Astrophysics Data System (ADS)

    Veny, Harumi; Baroutian, Saeid; Aroua, Mohamed Kheireddine; Hasan, Masitah; Raman, Abdul Aziz; Sulaiman, Nik Meriam Nik

    2009-04-01

    Density data as a function of temperature have been measured for Jatropha curcas seed oil, as well as biodiesel jatropha methyl esters at temperatures from above their melting points to 90 °C. The data obtained were used to validate the method proposed by Spencer and Danner using a modified Rackett equation. The experimental and estimated density values using the modified Rackett equation gave almost identical values with average absolute percent deviations less than 0.03% for the jatropha oil and 0.04% for the jatropha methyl esters. The Janarthanan empirical equation was also employed to predict jatropha biodiesel densities. This equation performed equally well with average absolute percent deviations within 0.05%. Two simple linear equations for densities of jatropha oil and its methyl esters are also proposed in this study.
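
    A minimal sketch of the Spencer-Danner modified Rackett calculation referred to above; the pseudo-critical properties passed in are illustrative placeholders rather than the fitted jatropha values.

```python
def rackett_density(T, Tc, Pc, Zra, M):
    """Saturated-liquid density from the Spencer-Danner modified Rackett
    equation: Vs = (R*Tc/Pc) * Zra**(1 + (1 - T/Tc)**(2/7)), rho = M / Vs.
    Units: T, Tc in K; Pc in Pa; M in kg/mol; returned density in kg/m^3."""
    R = 8.314  # J/(mol K)
    Vs = (R * Tc / Pc) * Zra ** (1.0 + (1.0 - T / Tc) ** (2.0 / 7.0))
    return M / Vs

# Toy usage with placeholder pseudo-critical properties for a methyl-ester mix
# (not the fitted jatropha values): gives roughly 830 kg/m^3 at 60 degrees C.
print(round(rackett_density(T=333.15, Tc=785.0, Pc=1.2e6, Zra=0.23, M=0.295), 1))
```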

  7. Estimating the spatial distribution of soil organic matter density and geochemical properties in a polygonal shaped Arctic Tundra using core sample analysis and X-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Soom, F.; Ulrich, C.; Dafflon, B.; Wu, Y.; Kneafsey, T. J.; López, R. D.; Peterson, J.; Hubbard, S. S.

    2016-12-01

    The Arctic tundra, with its permafrost-dominated soils, is one of the regions most affected by global climate change and, in turn, can also influence the changing climate through biogeochemical processes, including greenhouse gas release or storage. Characterization of the distribution and properties of shallow permafrost is required for predicting ecosystem feedbacks to a changing climate over decadal to century timescales, because they can drive active layer deepening and land surface deformation, which in turn can significantly affect hydrological and biogeochemical responses, including greenhouse gas dynamics. In this study, part of the Next-Generation Ecosystem Experiment (NGEE-Arctic), we use X-ray computed tomography (CT) to estimate the wet bulk density of cores extracted from a field site near Barrow, AK, which extend 2-3 m through the active layer into the permafrost. We use multi-dimensional relationships inferred from destructive core sample analysis to infer organic matter density, dry bulk density and ice content, along with some geochemical properties, from the nondestructive CT scans along the entire length of the cores, information that was not obtained by the spatially limited destructive laboratory analysis. Multi-parameter cross-correlations showed good agreement between soil properties estimated from CT scans and properties obtained through destructive sampling. Soil properties estimated from cores located in different types of polygons provide valuable information about the vertical distribution of soil and permafrost properties as a function of geomorphology.

  8. A bottom up approach to on-road CO2 emissions estimates: improved spatial accuracy and applications for regional planning.

    PubMed

    Gately, Conor K; Hutyra, Lucy R; Wing, Ian Sue; Brondfield, Max N

    2013-03-05

    On-road transportation is responsible for 28% of all U.S. fossil-fuel CO2 emissions. Mapping vehicle emissions at regional scales is challenging due to data limitations. Existing emission inventories use spatial proxies such as population and road density to downscale national or state-level data. Such procedures introduce errors where the proxy variables and actual emissions are weakly correlated, and limit analysis of the relationship between emissions and demographic trends at local scales. We develop an on-road emission inventory product for Massachusetts based on roadway-level traffic data obtained from the Highway Performance Monitoring System (HPMS). We provide annual estimates of on-road CO2 emissions at a 1 × 1 km grid scale for the years 1980 through 2008. We compared our results with on-road emissions estimates from the Emissions Database for Global Atmospheric Research (EDGAR), with the Vulcan Product, and with estimates derived from state fuel consumption statistics reported by the Federal Highway Administration (FHWA). Our model differs from FHWA estimates by less than 8.5% on average, and is within 5% of Vulcan estimates. We found that EDGAR estimates systematically exceed FHWA by an average of 22.8%. Panel regression analysis of per-mile CO2 emissions on population density at the town scale shows a statistically significant correlation that varies systematically in sign and magnitude as population density increases. Population density has a positive correlation with per-mile CO2 emissions for densities below 2000 persons km⁻², above which increasing density correlates negatively with per-mile emissions.

  9. A tool for the estimation of the distribution of landslide area in R

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.

    2012-04-01

    We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result, for convergence problems. The two tested models (Double Pareto and Inverse Gamma) resulted in very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
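
    A minimal Python sketch (the published tool itself is written in R) of the two kinds of estimate the abstract describes: a non-parametric kernel density estimate of landslide area and a maximum-likelihood fit of an Inverse Gamma model. The synthetic areas are placeholders for a real inventory.

```python
import numpy as np
from scipy import stats

# Synthetic landslide areas in m^2 (placeholders for a real inventory)
areas = stats.invgamma.rvs(a=1.4, scale=800.0, size=500, random_state=42)

# Non-parametric estimate of the probability density of log10(area) via KDE
kde = stats.gaussian_kde(np.log10(areas))

# Parametric alternative: maximum-likelihood fit of an Inverse Gamma model
shape, loc, scale = stats.invgamma.fit(areas, floc=0.0)

x = np.logspace(np.log10(areas.min()), np.log10(areas.max()), 5)
print(round(shape, 2), round(scale, 1), np.round(kde(np.log10(x)), 3))
```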

  10. Extracting galactic structure parameters from multivariated density estimation

    NASA Technical Reports Server (NTRS)

    Chen, B.; Creze, M.; Robin, A.; Bienayme, O.

    1992-01-01

    Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification) and principal component analysis (a dimensionality-reduction method), together with nonparametric density estimation, has been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure that would otherwise be unrecognizable, and to place important constraints on the space distribution of the various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparametric density estimation to substitute for the true densities of both the simulated sample and the real sample in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a set of equations comprising n lines (where n is the total number of observed points) and m unknown parameters (where m is the number of predefined groups). A least-squares estimation allows us to determine the density law of the different groups and components in the Galaxy. The output from our software, which can be used in many research fields, also gives the systematic error between the model and the observation by a Bayes rule.

  11. Technical Note: Cortical thickness and density estimation from clinical CT using a prior thickness-density relationship

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, Ludovic, E-mail: ludohumberto@gmail.com; Hazrati Marangalou, Javad; Rietbergen, Bert van

    Purpose: Cortical thickness and density are critical components in determining the strength of bony structures. Computed tomography (CT) is one possible modality for analyzing the cortex in 3D. In this paper, a model-based approach for measuring the cortical bone thickness and density from clinical CT images is proposed. Methods: Density variations across the cortex were modeled as a function of the cortical thickness and density, location of the cortex, density of surrounding tissues, and imaging blur. High resolution micro-CT data of cadaver proximal femurs were analyzed to determine a relationship between cortical thickness and density. This thickness-density relationship was used as prior information to be incorporated in the model to obtain accurate measurements of cortical thickness and density from clinical CT volumes. The method was validated using micro-CT scans of 23 cadaver proximal femurs. Simulated clinical CT images with different voxel sizes were generated from the micro-CT data. Cortical thickness and density were estimated from the simulated images using the proposed method and compared with measurements obtained using the micro-CT images to evaluate the effect of voxel size on the accuracy of the method. Then, 19 of the 23 specimens were imaged using a clinical CT scanner. Cortical thickness and density were estimated from the clinical CT images using the proposed method and compared with the micro-CT measurements. Finally, a case-control study including 20 patients with osteoporosis and 20 age-matched controls with normal bone density was performed to evaluate the proposed method in a clinical context. Results: Cortical thickness (density) estimation errors were 0.07 ± 0.19 mm (−18 ± 92 mg/cm³) using the simulated clinical CT volumes with the smallest voxel size (0.33 × 0.33 × 0.5 mm³), and 0.10 ± 0.24 mm (−10 ± 115 mg/cm³) using the volumes with the largest voxel size (1.0 × 1.0 × 3.0 mm³). A trend for the cortical thickness and density estimation errors to increase with voxel size was observed and was more pronounced for thin cortices. Using clinical CT data for 19 of the 23 samples, mean errors of 0.18 ± 0.24 mm for the cortical thickness and 15 ± 106 mg/cm³ for the density were found. The case-control study showed that osteoporotic patients had a thinner cortex and a lower cortical density, with average differences of −0.8 mm and −58.6 mg/cm³ at the proximal femur in comparison with age-matched controls (p-value < 0.001). Conclusions: This method might be a promising approach for the quantification of cortical bone thickness and density using clinical routine imaging techniques. Future work will concentrate on investigating how this approach can improve the estimation of mechanical strength of bony structures, the prevention of fracture, and the management of osteoporosis.

  12. Studies on the ionospheric-thermospheric coupling mechanisms using SLR

    NASA Astrophysics Data System (ADS)

    Panzetta, Francesca; Erdogan, Eren; Bloßfeld, Mathis; Schmidt, Michael

    2016-04-01

    Several Low Earth Orbiters (LEOs) have been used by different research groups to model the thermospheric neutral density distribution at various altitudes performing Precise Orbit Determination (POD) in combination with satellite accelerometry. This approach is, in principle, based on satellite drag analysis, driven by the fact that the drag force is one of the major perturbing forces acting on LEOs. The satellite drag itself is physically related to the thermospheric density. The present contribution investigates the possibility to compute the thermospheric density from Satellite Laser Ranging (SLR) observations. SLR is commonly used to compute very accurate satellite orbits. As a prerequisite, a very high precise modelling of gravitational and non-gravitational accelerations is necessary. For this investigation, a sensitivity study of SLR observations to thermospheric density variations is performed using the DGFI Orbit and Geodetic parameter estimation Software (DOGS). SLR data from satellites at altitudes lower than 500 km are processed adopting different thermospheric models. The drag coefficients which describe the interaction of the satellite surfaces with the atmosphere are analytically computed in order to obtain scaling factors purely related to the thermospheric density. The results are reported and discussed in terms of estimates of scaling coefficients of the thermospheric density. Besides, further extensions and improvements in thermospheric density modelling obtained by combining a physics-based approach with ionospheric observations are investigated. For this purpose, the coupling mechanisms between the thermosphere and ionosphere are studied.

  13. Monte Carlo simulation of hard spheres near random closest packing using spherical boundary conditions

    NASA Astrophysics Data System (ADS)

    Tobochnik, Jan; Chapin, Phillip M.

    1988-05-01

    Monte Carlo simulations were performed for hard disks on the surface of an ordinary sphere and hard spheres on the surface of a four-dimensional hypersphere. Starting from the low density fluid the density was increased to obtain metastable amorphous states at densities higher than previously achieved. Above the freezing density the inverse pressure decreases linearly with density, reaching zero at packing fractions equal to 68% for hard spheres and 84% for hard disks. Using these new estimates for random closest packing and coefficients from the virial series we obtain an equation of state which fits all the data up to random closest packing. Usually, the radial distribution function showed the typical split second peak characteristic of amorphous solids and glasses. High density systems which lacked this split second peak and showed other sharp peaks were interpreted as signaling the onset of crystal nucleation.
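
    The random-closest-packing estimate described above amounts to a linear extrapolation of the inverse pressure to zero; a sketch with illustrative (not simulated) points is given below.

```python
import numpy as np

# Packing fraction vs. inverse reduced pressure on the metastable fluid branch
# (illustrative points constructed to mimic the linear trend reported above)
eta = np.array([0.52, 0.56, 0.60, 0.63, 0.65])
inv_p = np.array([0.080, 0.060, 0.040, 0.025, 0.015])

# Fit 1/P = a*eta + b and extrapolate to 1/P = 0 for the random closest packing
a, b = np.polyfit(eta, inv_p, 1)
eta_rcp = -b / a
print(round(eta_rcp, 3))   # close to the 0.68 quoted for hard spheres
```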

  14. Marine mammal tracks from two-hydrophone acoustic recordings made with a glider

    NASA Astrophysics Data System (ADS)

    Küsel, Elizabeth T.; Munoz, Tessa; Siderius, Martin; Mellinger, David K.; Heimlich, Sara

    2017-04-01

    A multinational oceanographic and acoustic sea experiment was carried out in the summer of 2014 off the western coast of the island of Sardinia, Mediterranean Sea. During this experiment, an underwater glider fitted with two hydrophones was evaluated as a potential tool for marine mammal population density estimation studies. An acoustic recording system was also tested, comprising an inexpensive, off-the-shelf digital recorder installed inside the glider. Detection and classification of sounds produced by whales and dolphins, and sometimes tracking and localization, are inherent components of population density estimation from passive acoustics recordings. In this work we discuss the equipment used as well as analysis of the data obtained, including detection and estimation of bearing angles. A human analyst identified the presence of sperm whale (Physeter macrocephalus) regular clicks as well as dolphin clicks and whistles. Cross-correlating clicks recorded on both data channels allowed for the estimation of the direction (bearing) of clicks, and realization of animal tracks. Insights from this bearing tracking analysis can aid in population density estimation studies by providing further information (bearings), which can improve estimates.
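
    A minimal sketch of the bearing-estimation step described above: the time delay between the two hydrophone channels is taken from the cross-correlation peak and converted to an angle relative to the hydrophone axis. The sampling rate, spacing and synthetic click are illustrative assumptions, not the experiment's configuration.

```python
import numpy as np

def click_bearing_deg(x1, x2, fs, spacing_m, c=1500.0):
    """Bearing of a click relative to the two-hydrophone axis: the time delay
    tau comes from the cross-correlation peak, and theta = arccos(c*tau/d).
    The estimate is ambiguous about the axis (a cone in 3D)."""
    xcorr = np.correlate(x1 - x1.mean(), x2 - x2.mean(), mode="full")
    lag = np.argmax(xcorr) - (len(x2) - 1)     # samples; positive if x1 lags x2
    tau = lag / fs
    cos_theta = np.clip(c * tau / spacing_m, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Toy usage: a transient that arrives 3 samples later on the second channel
fs, d = 96000.0, 0.30
click = np.r_[np.zeros(50), np.hanning(16), np.zeros(50)]
x1, x2 = click, np.r_[np.zeros(3), click[:-3]]
print(round(click_bearing_deg(x1, x2, fs, d), 1))
```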

  15. Measurement of wave-front aberration in a small telescope remote imaging system using scene-based wave-front sensing

    DOEpatents

    Poyneer, Lisa A; Bauman, Brian J

    2015-03-31

    Reference-free compensated imaging makes an estimation of the Fourier phase of a series of images of a target. The Fourier magnitude of the series of images is obtained by dividing the power spectral density of the series of images by an estimate of the power spectral density of atmospheric turbulence from a series of scene based wave front sensor (SBWFS) measurements of the target. A high-resolution image of the target is recovered from the Fourier phase and the Fourier magnitude.

  16. 4D computerized ionospheric tomography by using GPS measurements and IRI-Plas model

    NASA Astrophysics Data System (ADS)

    Tuna, Hakan; Arikan, Feza; Arikan, Orhan

    2016-07-01

    Ionospheric imaging is an important subject in ionospheric studies. GPS-based TEC measurements provide very accurate information about the electron density values in the ionosphere. However, since the measurements are generally very sparse and non-uniformly distributed, computation of a 3D electron density estimate from measurements alone is an ill-defined problem. Model-based 3D electron density estimations provide physically feasible distributions; however, they are not generally compliant with the TEC measurements obtained from GPS receivers. In this study, GPS-based TEC measurements and an ionosphere model known as the International Reference Ionosphere Extended to Plasmasphere (IRI-Plas) are employed together in order to obtain a physically accurate 3D electron density distribution which is compliant with the real measurements obtained from a GPS satellite-receiver network. Ionospheric parameters input to the IRI-Plas model are perturbed in the region of interest by using parametric perturbation models such that the synthetic TEC measurements calculated from the resultant 3D electron density distribution fit the real TEC measurements. The problem is treated as an optimization problem whose optimization parameters are the parameters of the parametric perturbation models. The proposed technique is applied over Turkey, on both calm and storm days of the ionosphere. Results show that the proposed technique produces 3D electron density distributions which are compliant with the IRI-Plas model, GPS TEC measurements and ionosonde measurements. The effect of the number of GPS receiver stations on the performance of the proposed technique is investigated; the results showed that 7 GPS receiver stations in a region as large as Turkey are sufficient for both calm and storm days of the ionosphere. Since the ionization levels in the ionosphere are highly correlated in time, the proposed technique is extended to the time domain by applying Kalman-based tracking and smoothing approaches to the obtained results. Combining Kalman methods with the proposed 3D CIT technique creates a robust 4D ionospheric electron density estimation model and has the advantage of decreasing the computational cost of the proposed method. Results for both calm and storm days of the ionosphere show that the new technique produces more robust solutions, especially when the number of GPS receiver stations in the region is small. This study is supported by TUBITAK 114E541, 115E915 and Joint TUBITAK 114E092 and AS CR 14/001 projects.

  17. Estimates of brown bear abundance on Kodiak Island, Alaska

    USGS Publications Warehouse

    Barnes, V.G.; Smith, R.B.

    1998-01-01

    During 1987-94 we used capture-mark-resight (CMR) methodology and rates of observation (bears/hour and bears/100 km²) of unmarked brown bears (Ursus arctos middendorffi) during intensive aerial surveys (IAS) to estimate abundance of brown bears on Kodiak Island and to establish a baseline for monitoring population trends. CMR estimates were obtained on 3 study areas; density ranged from 216-234 bears/1,000 km² for independent animals and 292-342 bears/1,000 km² including dependent offspring. Rates of observation during IAS ranged from 1.4-5.4 independent bears/hour and 2.9-18.0 independent bears/100 km². Density estimates for independent bears on each IAS area were obtained by dividing the mean number of bears observed during replicate surveys by estimated sightability (based on CMR-derived sightability in areas with similar habitat). Brown bear abundance on 21 geographic units of Kodiak Island and 3 nearby islands was estimated by extrapolation from CMR and IAS data using comparisons of habitat characteristics and sport harvest information. Population estimates for independent and total bears were 1,800 and 2,600, respectively. The CMR and IAS procedures offer alternative means, depending on management objective and available resources, of measuring population trend of brown bears on Kodiak Island.

  18. Computing the Power-Density Spectrum for an Engineering Model

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1982-01-01

    Computer program for calculating power-density spectrum (PDS) from data base generated by Advanced Continuous Simulation Language (ACSL) uses algorithm that employs fast Fourier transform (FFT) to calculate PDS of variable. Accomplished by first estimating autocovariance function of variable and then taking FFT of smoothed autocovariance function to obtain PDS. Fast-Fourier-transform technique conserves computer resources.
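
    A minimal sketch of the autocovariance-then-FFT (Blackman-Tukey) procedure the program implements, written here in Python rather than in the ACSL setting described; the lag window and test signal are illustrative choices.

```python
import numpy as np

def psd_from_autocovariance(x, max_lag, fs=1.0):
    """Power-density spectrum via the route described above: estimate the
    autocovariance, apply a smoothing lag window, then take its FFT
    (the Blackman-Tukey procedure)."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # Biased autocovariance estimate for lags 0..max_lag
    acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
    acov *= np.hanning(2 * max_lag + 1)[max_lag:]          # Hann lag window
    # Symmetric two-sided extension, then FFT to obtain the PDS
    full = np.concatenate([acov[:0:-1], acov])
    psd = np.abs(np.fft.rfft(full)) / fs
    freqs = np.fft.rfftfreq(len(full), d=1.0 / fs)
    return freqs, psd

# Toy usage: noisy 5 Hz sine sampled at 100 Hz
np.random.seed(0)
fs = 100.0
t = np.arange(0.0, 20.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5.0 * t) + 0.5 * np.random.randn(len(t))
f, p = psd_from_autocovariance(x, max_lag=128, fs=fs)
print(round(f[np.argmax(p)], 2))   # should land near 5 Hz
```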

  19. Moments of the phase-space density, coincidence probabilities, and entropies of a multiparticle system

    NASA Astrophysics Data System (ADS)

    Bialas, A.

    2006-04-01

    A method to estimate moments of the phase-space density from event-by-event fluctuations is reviewed and its accuracy analyzed. Relation of these measurements to the determination of the entropy of the system is discussed. This is a summary of the results obtained recently together with W.Czyz and K.Zalewski.

  20. A simple model to predict the biodiesel blend density as simultaneous function of blend percent and temperature.

    PubMed

    Gaonkar, Narayan; Vaidya, R G

    2016-05-01

    A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and investigated theoretically the density of different vegetable oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values of the components of the biodiesel blend at any two different temperatures. We notice that the density of the blend decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard estimate of error (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15 %) obtained using the proposed model indicate its predictive capability. The predicted values show good agreement with recently available experimental data.
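
    A minimal sketch of the model described above: each component's density is taken as linear in temperature from a pair of measurements, and Kay's mixing rule combines the components by volume fraction. The component densities below are illustrative placeholders.

```python
def blend_density(T, v_biodiesel, rho_bio_pts, rho_diesel_pts):
    """Biodiesel-blend density from Kay's mixing rule, with each component's
    density treated as linear in temperature.

    rho_*_pts: ((T1, rho1), (T2, rho2)), the component density at two
               temperatures -- the only inputs the model requires.
    v_biodiesel: biodiesel volume fraction (0..1)."""
    def linear_rho(pts, temp):
        (t1, r1), (t2, r2) = pts
        return r1 + (r2 - r1) / (t2 - t1) * (temp - t1)

    rho_b = linear_rho(rho_bio_pts, T)
    rho_d = linear_rho(rho_diesel_pts, T)
    # Kay's rule: volume-fraction-weighted sum of the component densities
    return v_biodiesel * rho_b + (1.0 - v_biodiesel) * rho_d

# Toy usage with illustrative densities in kg/m^3 at 15 C and 40 C (B20 at 30 C)
print(blend_density(T=30.0, v_biodiesel=0.20,
                    rho_bio_pts=((15.0, 880.0), (40.0, 862.0)),
                    rho_diesel_pts=((15.0, 835.0), (40.0, 817.0))))
```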

  1. An evaluation of a bioelectrical impedance analyser for the estimation of body fat content.

    PubMed Central

    Maughan, R J

    1993-01-01

    Measurement of body composition is an important part of any assessment of health or fitness. Hydrostatic weighing is generally accepted as the most reliable method for the measurement of body fat content, but is inconvenient. Electrical impedance analysers have recently been proposed as an alternative to the measurement of skinfold thickness. Both these latter methods are convenient, but give values based on estimates obtained from population studies. This study compared values of body fat content obtained by hydrostatic weighing, skinfold thickness measurement and electrical impedance on 50 (28 women, 22 men) healthy volunteers. Mean(s.e.m.) values obtained by the three methods were: hydrostatic weighing, 20.5(1.2)%; skinfold thickness, 21.8(1.0)%; impedance, 20.8(0.9)%. The results indicate that the correlation between the skinfold method and hydrostatic weighing (0.931) is somewhat higher than that between the impedance method and hydrostatic weighing (0.830). This is, perhaps, not surprising given the fact that the impedance method is based on an estimate of total body water which is then used to calculate body fat content. The skinfold method gives an estimate of body density, and the assumptions involved in the conversion from body density to body fat content are the same for both methods. PMID:8457817
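
    The abstract does not state which equation is used to convert body density to percent fat; the widely used Siri equation is one common choice and is sketched below purely for illustration.

```python
def siri_fat_percent(body_density):
    """Percent body fat from whole-body density (g/cm^3) via the Siri equation:
    %fat = (4.95 / rho - 4.50) * 100."""
    return (4.95 / body_density - 4.50) * 100.0

# Toy usage: a body density of 1.052 g/cm^3 corresponds to about 20% fat
print(round(siri_fat_percent(1.052), 1))
```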

  2. Transition from order to chaos, and density limit, in magnetized plasmas.

    PubMed

    Carati, A; Zuin, M; Maiocchi, A; Marino, M; Martines, E; Galgani, L

    2012-09-01

    It is known that a plasma in a magnetic field, conceived microscopically as a system of point charges, can exist in a magnetized state, and thus remain confined, inasmuch as it is in an ordered state of motion, with the charged particles performing gyrational motions transverse to the field. Here, we give an estimate of a threshold, beyond which transverse motions become chaotic, the electrons being unable to perform even one gyration, so that a breakdown should occur, with complete loss of confinement. The estimate is obtained by the methods of perturbation theory, taking as perturbing force acting on each electron that due to the so-called microfield, i.e., the electric field produced by all the other charges. We first obtain a general relation for the threshold, which involves the fluctuations of the microfield. Then, taking for such fluctuations, the formula given by Iglesias, Lebowitz, and MacGowan for the model of a one component plasma with neutralizing background, we obtain a definite formula for the threshold, which corresponds to a density limit increasing as the square of the imposed magnetic field. Such a theoretical density limit is found to fit pretty well the empirical data for collapses of fusion machines.

  3. When bulk density methods matter: Implications for estimating soil organic carbon pools in rocky soils

    USDA-ARS?s Scientific Manuscript database

    Resolving uncertainty in the carbon cycle is paramount to refining climate predictions. Soil organic carbon (SOC) is a major component of terrestrial C pools, and accuracy of SOC estimates are only as good as the measurements and assumptions used to obtain them. Dryland soils account for a substanti...

  4. Statistics of Sxy estimates

    NASA Technical Reports Server (NTRS)

    Freilich, M. H.; Pawka, S. S.

    1987-01-01

    The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posterior estimation is outlined.

  5. Malthusian Parameters as Estimators of the Fitness of Microbes: A Cautionary Tale about the Low Side of High Throughput.

    PubMed

    Concepción-Acevedo, Jeniffer; Weiss, Howard N; Chaudhry, Waqas Nasir; Levin, Bruce R

    2015-01-01

    The maximum exponential growth rate, the Malthusian parameter (MP), is commonly used as a measure of fitness in experimental studies of adaptive evolution and of the effects of antibiotic resistance and other genes on the fitness of planktonic microbes. Thanks to automated, multi-well optical density plate readers and computers, with little hands-on effort investigators can readily obtain hundreds of estimates of MPs in less than a day. Here we compare estimates of the relative fitness of antibiotic susceptible and resistant strains of E. coli, Pseudomonas aeruginosa and Staphylococcus aureus based on MP data obtained with automated multi-well plate readers with the results from pairwise competition experiments. This leads us to question the reliability of estimates of MP obtained with these high throughput devices and the utility of these estimates of the maximum growth rates to detect fitness differences.
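
    A minimal sketch of how a Malthusian parameter is typically extracted from optical-density readings: fit ln(OD) against time over an early, roughly exponential window. The window limits and the simulated logistic curve are illustrative assumptions, not the authors' protocol.

```python
import numpy as np

def malthusian_parameter(t, od, od_min=0.02, od_max=0.2):
    """Maximum exponential growth rate (MP) from optical-density readings:
    fit ln(OD) against time over an early, roughly exponential window."""
    t, od = np.asarray(t, float), np.asarray(od, float)
    mask = (od >= od_min) & (od <= od_max)     # crude exponential-phase window
    slope, _ = np.polyfit(t[mask], np.log(od[mask]), 1)
    return slope                               # per unit of t

# Toy usage: logistic growth with r = 0.9 per hour, read every 10 minutes;
# the fitted MP comes out a little below 0.9 because growth is already slowing.
t = np.arange(0, 12, 1 / 6)
od = 1.0 / (1.0 + (1.0 / 0.01 - 1.0) * np.exp(-0.9 * t))
print(round(malthusian_parameter(t, od), 2))
```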

  6. Comparison of accelerometer data calibration methods used in thermospheric neutral density estimation

    NASA Astrophysics Data System (ADS)

    Vielberg, Kristin; Forootan, Ehsan; Lück, Christina; Löcher, Anno; Kusche, Jürgen; Börger, Klaus

    2018-05-01

    Ultra-sensitive space-borne accelerometers on board of low Earth orbit (LEO) satellites are used to measure non-gravitational forces acting on the surface of these satellites. These forces consist of the Earth radiation pressure, the solar radiation pressure and the atmospheric drag, where the first two are caused by the radiation emitted from the Earth and the Sun, respectively, and the latter is related to the thermospheric density. On-board accelerometer measurements contain systematic errors, which need to be mitigated by applying a calibration before their use in gravity recovery or thermospheric neutral density estimations. Therefore, we improve, apply and compare three calibration procedures: (1) a multi-step numerical estimation approach, which is based on the numerical differentiation of the kinematic orbits of LEO satellites; (2) a calibration of accelerometer observations within the dynamic precise orbit determination procedure and (3) a comparison of observed to modeled forces acting on the surface of LEO satellites. Here, accelerometer measurements obtained by the Gravity Recovery And Climate Experiment (GRACE) are used. Time series of bias and scale factor derived from the three calibration procedures are found to be different in timescales of a few days to months. Results are more similar (statistically significant) when considering longer timescales, from which the results of approach (1) and (2) show better agreement to those of approach (3) during medium and high solar activity. Calibrated accelerometer observations are then applied to estimate thermospheric neutral densities. Differences between accelerometer-based density estimations and those from empirical neutral density models, e.g., NRLMSISE-00, are observed to be significant during quiet periods, on average 22 % of the simulated densities (during low solar activity), and up to 28 % during high solar activity. Therefore, daily corrections are estimated for neutral densities derived from NRLMSISE-00. Our results indicate that these corrections improve model-based density simulations in order to provide density estimates at locations outside the vicinity of the GRACE satellites, in particular during the period of high solar/magnetic activity, e.g., during the St. Patrick's Day storm on 17 March 2015.

  7. Comparison of Fatigue Life Estimation Using Equivalent Linearization and Time Domain Simulation Methods

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Dhainaut, Jean-Michel

    2000-01-01

    The Monte Carlo simulation method in conjunction with the finite element large deflection modal formulation are used to estimate fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases varying from 106 dB to 160 dB OASPL with bandwidth 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDF), power spectral densities and higher statistical moments of the maximum deflection and stress/strain. The method of moments of PSD with Dirlik's approach is employed to estimate the panel fatigue life.

  8. Extreme Mean and Its Applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.

    1979-01-01

    Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large sample distribution are derived. The distribution of this estimate even for very large samples is found to be nonnormal. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. The computer program used to obtain the density and distribution functions of the standardized unbiased estimate, and the confidence intervals of the extreme mean for any data are included for ready application. An example is included to demonstrate the usefulness of extreme mean application.
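
    For reference, the mean of the upper-p tail of a normal distribution (an "extreme mean" in the sense described above) has a simple closed form; the sketch below evaluates that population value and is not the report's unbiased sample estimator.

```python
from scipy.stats import norm

def extreme_mean(mu, sigma, p):
    """Mean of the upper-p tail of a Normal(mu, sigma) distribution:
    E[X | X > x_p] = mu + sigma * phi(z_p) / p, with z_p = Phi^{-1}(1 - p)."""
    z = norm.ppf(1.0 - p)
    return mu + sigma * norm.pdf(z) / p

# Toy usage: mean of the most extreme 5% of a standard normal population
print(round(extreme_mean(0.0, 1.0, 0.05), 3))   # about 2.063
```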

  9. Asymptotic Behavior of the Stock Price Distribution Density and Implied Volatility in Stochastic Volatility Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gulisashvili, Archil, E-mail: guli@math.ohiou.ed; Stein, Elias M., E-mail: stein@math.princeton.ed

    2010-06-15

    We study the asymptotic behavior of distribution densities arising in stock price models with stochastic volatility. The main objects of our interest in the present paper are the density of time averages of the squared volatility process and the density of the stock price process in the Stein-Stein and the Heston model. We find explicit formulas for leading terms in asymptotic expansions of these densities and give error estimates. As an application of our results, sharp asymptotic formulas for the implied volatility in the Stein-Stein and the Heston model are obtained.

  10. Eggshells as an index of aedine mosquito production. 2: Relationship of Aedes taeniorhynchus eggshell density to larval production.

    PubMed

    Addison, D S; Ritchie, S A; Webber, L A; Van Essen, F

    1992-03-01

    To test if eggshell density could be used as an index of aedine mosquito production, we compared eggshell density with the larval production of Aedes taeniorhynchus in Florida mangrove basin forests. Quantitative (n = 7) and categorical (n = 34) estimates of annual larval production were regressed against the number of eggshells per cc of soil. Significant regressions were obtained in both instances. Larval production was concentrated in zones with the highest eggshell density. We suggest that eggshell density and distribution can be used to identify oviposition sites and the sequence of larval appearance.

  11. Productivity and population density estimates of the dengue vector mosquito Aedes aegypti (Stegomyia aegypti) in Australia.

    PubMed

    Williams, C R; Johnson, P H; Ball, T S; Ritchie, S A

    2013-09-01

    New mosquito control strategies centred on the modifying of populations require knowledge of existing population densities at release sites and an understanding of breeding site ecology. Using a quantitative pupal survey method, we investigated production of the dengue vector Aedes aegypti (L.) (Stegomyia aegypti) (Diptera: Culicidae) in Cairns, Queensland, Australia, and found that garden accoutrements represented the most common container type. Deliberately placed 'sentinel' containers were set at seven houses and sampled for pupae over 10 weeks during the wet season. Pupal production was approximately constant; tyres and buckets represented the most productive container types. Sentinel tyres produced the largest female mosquitoes, but were relatively rare in the field survey. We then used field-collected data to make estimates of per premises population density using three different approaches. Estimates of female Ae. aegypti abundance per premises made using the container-inhabiting mosquito simulation (CIMSiM) model [95% confidence interval (CI) 18.5-29.1 females] concorded reasonably well with estimates obtained using a standing crop calculation based on pupal collections (95% CI 8.8-22.5) and using BG-Sentinel traps and a sampling rate correction factor (95% CI 6.2-35.2). By first describing local Ae. aegypti productivity, we were able to compare three separate population density estimates which provided similar results. We anticipate that this will provide researchers and health officials with several tools with which to make estimates of population densities. © 2012 The Royal Entomological Society.

  12. Strong consistency of nonparametric Bayes density estimation on compact metric spaces with applications to specific manifolds

    PubMed Central

    Bhattacharya, Abhishek; Dunson, David B.

    2012-01-01

    This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295

  13. An ab-initio investigation on SrLa intermetallic compound

    NASA Astrophysics Data System (ADS)

    Kumar, S. Ramesh; Jaiganesh, G.; Jayalakshmi, V.

    2018-05-01

    The electronic, elastic and thermodynamic properties of the CsCl-type SrLa intermetallic compound are investigated through density functional theory. The energy-volume relation for this compound has been obtained. The band structure, density of states and charge density in the (110) plane are also examined. The elastic constants (C11, C12 and C44) of SrLa are computed; using these elastic constants, the bulk modulus, shear modulus, Young's modulus and Poisson's ratio are derived. The calculated results show that CsCl-type SrLa is ductile at ambient conditions. Thermodynamic quantities such as the free energy, entropy and heat capacity as functions of temperature are estimated and the results obtained are discussed.
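
    For a cubic crystal such as CsCl-type SrLa, the polycrystalline moduli follow from the three elastic constants via standard Voigt-Reuss-Hill relations; the sketch below applies them to placeholder constants (not the computed SrLa values), with the Pugh B/G ratio as a rough ductility indicator.

```python
def cubic_moduli(c11, c12, c44):
    """Polycrystalline moduli from the three independent elastic constants of
    a cubic crystal, using Voigt-Reuss-Hill averaging (all in the same units,
    e.g. GPa)."""
    B = (c11 + 2.0 * c12) / 3.0                                    # bulk modulus
    Gv = (c11 - c12 + 3.0 * c44) / 5.0                             # Voigt shear
    Gr = 5.0 * (c11 - c12) * c44 / (4.0 * c44 + 3.0 * (c11 - c12)) # Reuss shear
    G = 0.5 * (Gv + Gr)                                            # Hill average
    E = 9.0 * B * G / (3.0 * B + G)                                # Young's modulus
    nu = (3.0 * B - 2.0 * G) / (2.0 * (3.0 * B + G))               # Poisson's ratio
    return B, G, E, nu, B / G        # Pugh ratio B/G > 1.75 suggests ductility

# Toy usage with placeholder constants in GPa (not the computed SrLa values)
B, G, E, nu, pugh = cubic_moduli(c11=25.0, c12=18.0, c44=14.0)
print(round(B, 1), round(G, 1), round(E, 1), round(nu, 2), round(pugh, 2))
```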

  14. Geometric characterization and simulation of planar layered elastomeric fibrous biomaterials

    PubMed Central

    Carleton, James B.; D'Amore, Antonio; Feaver, Kristen R.; Rodin, Gregory J.; Sacks, Michael S.

    2014-01-01

    Many important biomaterials are composed of multiple layers of networked fibers. While there is a growing interest in modeling and simulation of the mechanical response of these biomaterials, a theoretical foundation for such simulations has yet to be firmly established. Moreover, correctly identifying and matching key geometric features is a critically important first step for performing reliable mechanical simulations. The present work addresses these issues in two ways. First, using methods of geometric probability we develop theoretical estimates for the mean linear and areal fiber intersection densities for two-dimensional fibrous networks. These densities are expressed in terms of the fiber density and the orientation distribution function, both of which are relatively easy-to-measure properties. Secondly, we develop a random walk algorithm for geometric simulation of two-dimensional fibrous networks which can accurately reproduce the prescribed fiber density and orientation distribution function. Furthermore, the linear and areal fiber intersection densities obtained with the algorithm are in agreement with the theoretical estimates. Both theoretical and computational results are compared with those obtained by post-processing of SEM images of actual scaffolds. These comparisons reveal difficulties inherent to resolving fine details of multilayered fibrous networks. The methods provided herein can provide a rational means to define and generate key geometric features from experimentally measured or prescribed scaffold structural data. PMID:25311685

  15. Estimation of effective x-ray tissue attenuation differences for volumetric breast density measurement

    NASA Astrophysics Data System (ADS)

    Chen, Biao; Ruth, Chris; Jing, Zhenxue; Ren, Baorui; Smith, Andrew; Kshirsagar, Ashwini

    2014-03-01

    Breast density has been identified to be a risk factor of developing breast cancer and an indicator of lesion diagnostic obstruction due to masking effect. Volumetric density measurement evaluates fibro-glandular volume, breast volume, and breast volume density measures that have potential advantages over area density measurement in risk assessment. One class of volume density computing methods is based on the finding of the relative fibro-glandular tissue attenuation with regards to the reference fat tissue, and the estimation of the effective x-ray tissue attenuation differences between the fibro-glandular and fat tissue is key to volumetric breast density computing. We have modeled the effective attenuation difference as a function of actual x-ray skin entrance spectrum, breast thickness, fibro-glandular tissue thickness distribution, and detector efficiency. Compared to other approaches, our method has threefold advantages: (1) avoids the system calibration-based creation of effective attenuation differences which may introduce tedious calibrations for each imaging system and may not reflect the spectrum change and scatter induced overestimation or underestimation of breast density; (2) obtains the system specific separate and differential attenuation values of fibroglandular and fat for each mammographic image; and (3) further reduces the impact of breast thickness accuracy to volumetric breast density. A quantitative breast volume phantom with a set of equivalent fibro-glandular thicknesses has been used to evaluate the volume breast density measurement with the proposed method. The experimental results have shown that the method has significantly improved the accuracy of estimating breast density.

  16. Density Measurements of Low Silica CaO-SiO2-Al2O3 Slags

    NASA Astrophysics Data System (ADS)

    Muhmood, Luckman; Seetharaman, Seshadri

    2010-08-01

    Density measurements of a low-silica CaO-SiO2-Al2O3 system were carried out using the Archimedes principle. A Pt 30 pct Rh bob and wire arrangement was used for this purpose. The results obtained were in good agreement with those obtained from the model developed in the current group as well as with other results reported earlier. The density for the CaO-SiO2 and the CaO-Al2O3 binary slag systems also was estimated from the ternary values. The extrapolation of density values for high-silica systems also showed good agreement with previous works. An estimation for the density value of CaO was made from the current experimental data. The density decrease at high temperatures was interpreted based on the silicate structure. As the mole percent of SiO2 was below the 33 pct required for the orthosilicate composition, discrete SiO₄⁴⁻ tetrahedral units in the silicate melt would exist along with O²⁻ ions. The change in melt expansivity may be attributed to the ionic expansions in the order Al³⁺-O²⁻ < Ca²⁺-O²⁻ < Ca²⁺-O⁻. Structural changes in the ternary slag also could be correlated to a drastic change in the value of enthalpy of mixing.

  17. Unbiased estimators for spatial distribution functions of classical fluids

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.; Jarzynski, Christopher

    2005-01-01

    We use a statistical-mechanical identity closely related to the familiar virial theorem, to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.

  18. La Terra: esperimenti a scuola

    NASA Astrophysics Data System (ADS)

    Roselli, Alessandra; D'Amico, Angelalucia; Pisegna, Daniela; Palma, Francesco; di Nardo, Giustino; Cofini, Marika; Cerasani, Paolo; Cerratti, Valentina

    2006-02-01

    Easy but effective methods used in past centuries allow rediscovery and good knowledge of the planet Earth. The station latitude and the planetary radius were measured with Eratosthenes' method. The gravitational acceleration obtained from the pendulum period was used to calculate the terrestrial mass and the density of the internal planetary layers. Finally, estimates of the atmospheric density and geometric thickness complete the view of the planet's properties.
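
    The classroom calculations described above boil down to a few lines; the sketch below uses an illustrative pendulum measurement to recover g, and then the planetary mass and mean density, assuming a known Earth radius.

```python
import math

G_CONST = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
R_EARTH = 6.371e6        # mean Earth radius, m (e.g. from an Eratosthenes-style survey)

# Gravitational acceleration from a simple pendulum, T = 2*pi*sqrt(L/g)
L, T = 1.000, 2.006      # pendulum length (m) and measured period (s), illustrative
g = 4.0 * math.pi ** 2 * L / T ** 2

# Terrestrial mass from g = G*M/R^2, then the planet's mean density
M = g * R_EARTH ** 2 / G_CONST
rho_mean = M / (4.0 / 3.0 * math.pi * R_EARTH ** 3)    # kg/m^3
print(round(g, 3), f"{M:.2e}", round(rho_mean))
```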

  19. Variations of High-Latitude Geomagnetic Pulsation Frequencies: A Comparison of Time-of-Flight Estimates and IMAGE Magnetometer Observations

    NASA Astrophysics Data System (ADS)

    Sandhu, J. K.; Yeoman, T. K.; James, M. K.; Rae, I. J.; Fear, R. C.

    2018-01-01

    The fundamental eigenfrequencies of standing Alfvén waves on closed geomagnetic field lines are estimated for the region spanning 5.9≤L < 9.5 over all MLT (Magnetic Local Time). The T96 magnetic field model and a realistic empirical plasma mass density model are employed using the time-of-flight approximation, refining previous calculations that assumed a relatively simplistic mass density model. An assessment of the implications of using different mass density models in the time-of-flight calculations is presented. The calculated frequencies exhibit dependences on field line footprint magnetic latitude and MLT, which are attributed to both magnetic field configuration and spatial variations in mass density. In order to assess the validity of the time-of-flight calculated frequencies, the estimates are compared to observations of FLR (Field Line Resonance) frequencies. Using IMAGE (International Monitor for Auroral Geomagnetic Effects) ground magnetometer observations obtained between 2001 and 2012, an automated FLR identification method is developed, based on the cross-phase technique. The average FLR frequency is determined, including variations with footprint latitude and MLT, and compared to the time-of-flight analysis. The results show agreement in the latitudinal and local time dependences. Furthermore, with the use of the realistic mass density model in the time-of-flight calculations, closer agreement with the observed FLR frequencies is obtained. The study is limited by the latitudinal coverage of the IMAGE magnetometer array, and future work will aim to extend the ground magnetometer data used to include additional magnetometer arrays.
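
    A minimal sketch of the time-of-flight (WKB) estimate used above: the eigenfrequency follows from the Alfvén travel time integrated along the field line, f_n ≈ n / (2 ∫ ds / v_A). The arc-length grid and Alfvén-speed profile below are illustrative and are not taken from the T96 or mass density models.

```python
import numpy as np

def time_of_flight_frequency(s, v_alfven, harmonic=1):
    """Standing Alfven wave eigenfrequency in the time-of-flight (WKB)
    approximation: f_n ~ n / (2 * integral of ds / v_A along the field line)."""
    s = np.asarray(s, float)
    inv_v = 1.0 / np.asarray(v_alfven, float)
    # Trapezoidal rule for the one-way Alfven travel time
    travel_time = np.sum(0.5 * (inv_v[1:] + inv_v[:-1]) * np.diff(s))
    return harmonic / (2.0 * travel_time)

# Toy usage: arc length in km along a closed field line, Alfven speed in km/s
# dipping near the equatorial plane (purely illustrative profile)
s = np.linspace(0.0, 8.0e4, 200)
v_a = 2000.0 - 1400.0 * np.sin(np.pi * s / s[-1]) ** 2
f = time_of_flight_frequency(s, v_a)          # Hz, since km/(km/s) gives seconds
print(round(1000.0 * f, 1), "mHz")            # a few mHz, typical of Pc5 FLRs
```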

  20. Nonlinear PP and PS joint inversion based on the exact Zoeppritz equations: a two-stage procedure

    NASA Astrophysics Data System (ADS)

    Zhi, Lixia; Chen, Shuangquan; Song, Baoshan; Li, Xiang-yang

    2018-04-01

    S-velocity and density are very important parameters for distinguishing lithology and estimating other petrophysical properties. A reliable estimate of S-velocity and density is very difficult to obtain, even from long-offset gather data. Joint inversion of PP and PS data provides a promising strategy for stabilizing and improving the results of inversion in estimating elastic parameters and density. For 2D or 3D inversion, the trace-by-trace strategy is still the most widely used method because of its high efficiency, which is due to parallel computing, although it often suffers from a lack of clarity in the results. This paper describes a two-stage inversion method for nonlinear PP and PS joint inversion based on the exact Zoeppritz equations. Our proposed method has several advantages: (1) thanks to the exact Zoeppritz equations, our joint inversion method is applicable to wide-angle amplitude-versus-angle inversion; (2) the use of both P- and S-wave information further enhances the stability and accuracy of parameter estimation, especially for the S-velocity and density; (3) the two-stage inversion procedure proposed in this paper achieves a good compromise between efficiency and precision. On the one hand, the trace-by-trace strategy used in the first stage can be processed in parallel so that it has high computational efficiency. On the other hand, to deal with the indistinctness of and undesired disturbances to the inversion results obtained from the first stage, we apply the second stage, total variation (TV) regularization. By enforcing spatial and temporal constraints, the TV regularization stage deblurs the inversion results and leads to parameter estimation with greater precision. Notably, the computational cost of the TV regularization stage is negligible compared to the first stage because it is solved using fast split Bregman iterations. Numerical examples using a well log and the Marmousi II model show that the proposed joint inversion is a reliable method capable of accurately estimating the density parameter as well as the P-wave and S-wave velocities, even when the seismic data are noisy with a signal-to-noise ratio of 5.

  1. Density estimation in a wolverine population using spatial capture-recapture models

    USGS Publications Warehouse

    Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.; McKelvey, Kevin

    2011-01-01

    Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly-developed, capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km2 area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000 km2 (95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.

  2. Measuring atmospheric density using GPS-LEO tracking data

    NASA Astrophysics Data System (ADS)

    Kuang, D.; Desai, S.; Sibthorpe, A.; Pi, X.

    2014-01-01

    We present a method to estimate the total neutral atmospheric density from precise orbit determination of Low Earth Orbit (LEO) satellites. We derive the total atmospheric density by determining the drag force acting on the LEOs through centimeter-level reduced-dynamic precise orbit determination (POD) using onboard Global Positioning System (GPS) tracking data. The precision of the estimated drag accelerations is assessed using various metrics, including differences between estimated along-track accelerations from consecutive 30-h POD solutions which overlap by 6 h, comparison of the resulting accelerations with accelerometer measurements, and comparison against an existing atmospheric density model, DTM-2000. We apply the method to GPS tracking data from CHAMP, GRACE, SAC-C, Jason-2, TerraSAR-X and COSMIC satellites, spanning 12 years (2001-2012) and covering orbital heights from 400 km to 1300 km. Errors in the estimates, including those introduced by deficiencies in other modeled forces (such as solar radiation pressure and Earth radiation pressure), are evaluated and the signal and noise levels for each satellite are analyzed. The estimated density data from CHAMP, GRACE, SAC-C and TerraSAR-X are identified as having high signal and low noise levels. These data all have high correlations with a nominal atmospheric density model and show common features in relative residuals with respect to the nominal model in the related parameter space. In contrast, the estimated density data from COSMIC and Jason-2 show errors larger than the actual signal at the corresponding altitudes and thus have little practical value for this study. The results demonstrate that this method is applicable to data from a variety of missions and can provide useful total neutral density measurements for atmospheric study up to altitudes as high as 715 km, with precision and resolution between those derived from traditional special orbital perturbation analysis and those obtained from onboard accelerometers.
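
    At the core of this approach is the inversion of the drag equation for density once the along-track drag acceleration has been estimated from POD. A minimal sketch of that single step is given below; the satellite mass, drag coefficient, cross-sectional area, relative speed, and acceleration are assumed illustrative values, not those of any mission listed in the record.

```python
def neutral_density(a_drag, mass, cd, area, v_rel):
    """Invert the drag equation a_drag = 0.5 * rho * Cd * (A/m) * v_rel^2
    for the neutral mass density rho [kg/m^3]."""
    return 2.0 * mass * a_drag / (cd * area * v_rel**2)

# Illustrative (assumed) values for a LEO satellite near 400 km altitude:
rho = neutral_density(
    a_drag=2.0e-7,   # estimated along-track drag acceleration [m/s^2]
    mass=500.0,      # spacecraft mass [kg]
    cd=2.2,          # drag coefficient
    area=1.0,        # cross-sectional area [m^2]
    v_rel=7.6e3,     # speed relative to the co-rotating atmosphere [m/s]
)
print(f"rho ~ {rho:.2e} kg/m^3")
```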

  3. Resolvent estimates in homogenisation of periodic problems of fractional elasticity

    NASA Astrophysics Data System (ADS)

    Cherednichenko, Kirill; Waurick, Marcus

    2018-03-01

    We provide operator-norm convergence estimates for solutions to a time-dependent equation of fractional elasticity in one spatial dimension, with rapidly oscillating coefficients that represent the material properties of a viscoelastic composite medium. Assuming periodicity in the coefficients, we prove operator-norm convergence estimates for an operator fibre decomposition obtained by applying to the original fractional elasticity problem the Fourier-Laplace transform in time and Gelfand transform in space. We obtain estimates on each fibre that are uniform in the quasimomentum of the decomposition and in the period of oscillations of the coefficients as well as quadratic with respect to the spectral variable. On the basis of these uniform estimates we derive operator-norm-type convergence estimates for the original fractional elasticity problem, for a class of sufficiently smooth densities of applied forces.

  4. Trapezium Bone Density-A Comparison of Measurements by DXA and CT.

    PubMed

    Breddam Mosegaard, Sebastian; Breddam Mosegaard, Kamille; Bouteldja, Nadia; Bæk Hansen, Torben; Stilling, Maiken

    2018-01-18

    Bone density may influence the primary fixation of cementless implants, and poor bone density may increase the risk of implant failure. Before deciding on using total joint replacement as treatment in osteoarthritis of the trapeziometacarpal joint, it is valuable to determine the trapezium bone density. The aim of this study was to: (1) determine the correlation between measurements of bone mineral density of the trapezium obtained by dual-energy X-ray absorptiometry (DXA) scans by a circumference method and a new inner-ellipse method; and (2) to compare those to measurements of bone density obtained by computerized tomography (CT)-scans in Hounsfield units (HU). We included 71 hands from 59 patients with a mean age of 59 years (43-77). All patients had Eaton-Glickel stage II-IV trapeziometacarpal (TM) joint osteoarthritis, were under evaluation for trapeziometacarpal total joint replacement, and underwent DXA and CT wrist scans. There was an excellent correlation (r = 0.94) between DXA bone mineral density measures using the circumference and the inner-ellipse method. There was a moderate correlation between bone density measures obtained by DXA- and CT-scans with (r = 0.49) for the circumference method, and (r = 0.55) for the inner-ellipse method. DXA may be used in pre-operative evaluation of the trapezium bone quality, and the simpler DXA inner-ellipse measurement method can replace the DXA circumference method in estimation of bone density of the trapezium.

  5. Effects of social organization, trap arrangement and density, sampling scale, and population density on bias in population size estimation using some common mark-recapture estimators.

    PubMed

    Gupta, Manan; Joshi, Amitabh; Vidya, T N C

    2017-01-01

    Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates. Therefore, the effect of social organization on bias in population estimation could be removed by using POPAN with specific parameter combinations, to obtain population size estimates in a social species.

  6. Effects of social organization, trap arrangement and density, sampling scale, and population density on bias in population size estimation using some common mark-recapture estimators

    PubMed Central

    Joshi, Amitabh; Vidya, T. N. C.

    2017-01-01

    Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates. Therefore, the effect of social organization on bias in population estimation could be removed by using POPAN with specific parameter combinations, to obtain population size estimates in a social species. PMID:28306735

  7. Uncertain Photometric Redshifts with Deep Learning Methods

    NASA Astrophysics Data System (ADS)

    D'Isanto, A.

    2017-06-01

    The need for accurate photometric redshift estimation is of fundamental importance in astronomy, due to the necessity of efficiently obtaining redshift information without the need for spectroscopic analysis. We propose a method for determining accurate multi-modal photo-z probability density functions (PDFs) using Mixture Density Networks (MDN) and Deep Convolutional Networks (DCN). A comparison with a Random Forest (RF) is performed.

  8. Factors Affecting Salamander Density and Distribution within Four Forest Types in Southern Appalachian Mountains

    Treesearch

    Craig A. Harper; David C. Guynn

    1999-01-01

    We used a terrestrial vacuum to sample known area plots in order to obtain density estimates of salamanders and their primary prey, invertebrates of the forest floor. We sampled leaf litter and measured various vegetative and topographic parameters within four forest types (oak-pine, oak-hickory, mixed mesophytic and northern hardwoods) and three age classes (0-12,13-...

  9. Predicting defoliation by the gypsy moth using egg mass counts and a helper variable

    Treesearch

    Michael E. Montgomery

    1991-01-01

    Traditionally, counts of egg masses have been used to predict defoliation by the gypsy moth. Regardless of the method and precision used to obtain the counts, estimates of egg mass density alone often do not provide satisfactory predictions of defoliation. Although defoliation levels greater than 50% are seldom observed if egg mass densities are less than 600 per...

  10. Estimation of Ice Surface Scattering and Acoustic Attenuation in Arctic Sediments from Long-Range Propagation Data

    DTIC Science & Technology

    1984-01-01

    frequencies over the calculation at 1 meter. This was corrected by +60 dB to obtain the signature at 1 meter, and then by -160 dB to obtain the voltage...FFT with appropriate corrections for one-sided energy spectral density re 1 V^2/Hz. The spectrum was then smeared over a 4 Hz band by a running...after correcting the array estimated slownesses for slight bathymetric dip local to the receiving array. A preliminary inversion of this type is given by

  11. Determining Core Plasmaspheric Electron Densities with the Van Allen Probes

    NASA Astrophysics Data System (ADS)

    De Pascuale, S.; Hartley, D.; Kurth, W. S.; Kletzing, C.; Thaller, S. A.; Wygant, J. R.

    2016-12-01

    We survey three methods for obtaining electron densities inside of the core plasmasphere region (L < 4) to the perigee of the Van Allen Probes (L ≈ 1.1) from September 2012 to December 2014. Using the EMFISIS instrument on board the Van Allen Probes, electron densities are extracted from the upper hybrid resonance to an uncertainty of 10%. Some measurements are subject to larger errors given interpretational issues, especially at low densities (L > 4) resulting from geomagnetic activity. At high densities EMFISIS is restricted by an upper observable limit near 3000 cm-3. As this limit is encountered above perigee, we employ two additional methods validated against EMFISIS measurements to determine electron densities deep within the plasmasphere (L < 2). EMFISIS can extrapolate density estimates to lower L by calculating high densities, in good agreement with the upper hybrid technique when applicable, from plasma wave properties. Calibrated measurements, from the Van Allen Probes EFW potential instrument, also extend into this range. In comparison with the published EMFISIS database we provide a metric for the validity of core plasmaspheric density measurements obtained from these methods and an empirical density model for use in wave and particle simulations.
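
    The upper hybrid technique referred to above rests on the relation f_uh^2 = f_pe^2 + f_ce^2, from which the electron density follows from the plasma frequency. A minimal sketch of that conversion is shown below; the physical constants are standard, while the example frequency and magnetic field strength are assumed illustrative values.

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
M_E = 9.109e-31       # electron mass, kg
Q_E = 1.602e-19       # elementary charge, C

def electron_density_from_fuh(f_uh_hz, b_tesla):
    """Electron number density [cm^-3] from the upper hybrid resonance
    frequency, using f_pe^2 = f_uh^2 - f_ce^2."""
    f_ce = Q_E * b_tesla / (2.0 * math.pi * M_E)        # electron gyrofrequency
    f_pe_sq = f_uh_hz**2 - f_ce**2                      # plasma frequency squared
    n_e_m3 = 4.0 * math.pi**2 * EPS0 * M_E * f_pe_sq / Q_E**2
    return n_e_m3 * 1e-6                                # convert m^-3 to cm^-3

# Illustrative values: f_uh = 300 kHz in a 1000 nT field (inner plasmasphere)
print(f"n_e ~ {electron_density_from_fuh(300e3, 1000e-9):.0f} cm^-3")
```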

  12. Thermospheric density estimation from SLR observations of LEO satellites - A case study with the ANDE-Pollux satellite

    NASA Astrophysics Data System (ADS)

    Blossfeld, M.; Schmidt, M.; Erdogan, E.

    2016-12-01

    The thermospheric neutral density plays a crucial role within the equation of motion of Earth-orbiting objects, since drag, lift, and side forces are among the largest non-gravitational perturbations acting on a satellite. Precise Orbit Determination (POD) methods can be used to estimate thermospheric density variations from the measured orbits. One method which provides highly accurate measurements of the satellite position is Satellite Laser Ranging (SLR). Within the POD process, scaling factors are estimated frequently. These scaling factors can be used to scale either the satellite-specific drag (ballistic) coefficients or the integrated thermospheric neutral density. We present a method to analytically model the drag coefficient based on a few physical assumptions and key parameters. In this paper, we investigate the possibility of using SLR observations of the very low Earth-orbiting satellite ANDE-Pollux (at approximately 350 km altitude) to determine scaling factors for different a priori thermospheric density models. We perform a POD for ANDE-Pollux covering 49 days between August 2009 and September 2009, the time span containing the largest number of observations during the short lifetime of the satellite. Finally, we compare the obtained scaled thermospheric densities with each other.

  13. Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.

    PubMed

    Herzallah, Randa

    2015-03-01

    Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper a generalised probabilistic controller design for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed loop control system, and an ideal joint pdf is presented emphasising how the uncertainty can be systematically incorporated in the absence of reliable systems models. To achieve this objective all probabilistic models of the system are estimated from process data using mixture density networks (MDNs) where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations to the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm and encouraging results are obtained. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Remote sensing estimation of isoprene and monoterpene emissions generated by natural vegetation in Monterrey, Mexico.

    PubMed

    Gastelum, Sandra L; Mejía-Velázquez, G M; Lozano-García, D Fabián

    2016-06-01

    In addition to oxygen, hydrocarbons are the most reactive chemical compounds emitted by plants into the atmosphere. These compounds are part of the family of volatile organic compounds (VOCs) and are released in a great variety of forms. Among the VOCs produced by natural sources such as vegetation, the most studied to date are isoprene and the monoterpenes. These substances can play an important role in the chemical balance of the atmosphere of a region. In this project, we developed a methodology to estimate the natural (vegetation) emissions of isoprene and monoterpenes and applied it to the Monterrey Metropolitan Area, Mexico, and its surrounding areas. Landsat-TM data were used to identify the dominant vegetation communities, and field work was used to determine the foliage biomass density of key species. The studied communities were submontane scrub, oak, and pine forests and a combination of both. We carried out the estimation of emissions of isoprene and monoterpene compounds in the different plant communities with two different criteria: (1) taking into account the average foliage biomass density obtained from the various sample points in each vegetation community, and (2) using the foliage biomass density obtained for each transect, associated with an individual spectral class within a particular vegetation type. With this information, we obtained emission maps for each case. The results show that the main producers of isoprene are the communities that include species of the genus Quercus, located mainly on the Sierra Madre Oriental and Sierra de Picachos, with average isoprene emissions of 314.6 ton/day and 207.3 ton/day for the two methods utilized. The higher estimates of monoterpenes were found in the submontane scrub areas distributed along the valley of the metropolitan zone, with estimated average emissions of 47.1 ton/day and 181.4 ton/day for the two methods, respectively.

  15. Forest descriptions and photographs of forested areas along the breaks of the Missouri River in eastern Montana, USA

    Treesearch

    Theresa B. Jain; Molly Juillerat; Jonathan Sandquist; Brad Sauer; Robert Mitchell; Scott McAvoy; Justin Hanley; John David

    2007-01-01

    This handbook presents information and photographs obtained from forest lands along the breaks of the Missouri River in eastern Montana. Forest characteristics summarized in tables with accompanying photographs can be used to provide quick estimates of species composition and densities within similar landscape features. These estimates may be useful to foresters,...

  16. Processing Satellite Data for Slant Total Electron Content Measurements

    NASA Technical Reports Server (NTRS)

    Stephens, Philip John (Inventor); Komjathy, Attila (Inventor); Wilson, Brian D. (Inventor); Mannucci, Anthony J. (Inventor)

    2016-01-01

    A method, system, and apparatus provide the ability to estimate ionospheric observables using space-borne observations. Space-borne global positioning system (GPS) data of ionospheric delay are obtained from a satellite. The space-borne GPS data are combined with ground-based GPS observations. The combination is utilized in a model to estimate a global three-dimensional (3D) electron density field.

  17. Oxidation of gallium arsenide in a plasma multipole device. Study of the MOS structures obtained

    NASA Technical Reports Server (NTRS)

    Gourrier, S.; Mircea, A.; Simondet, F.

    1980-01-01

    The oxygen plasma oxidation of GaAs was studied in order to obtain extremely high frequency responses with MOS devices. In the multipole system a homogeneous oxygen plasma of high density can easily be obtained in a large volume. This system is thus convenient for the study of plasma oxidation of GaAs. The electrical properties of the MOS diodes obtained in this way are controlled by interface states, located mostly in the upper half of the band gap, where densities in the 10^13 cm^-2 eV^-1 range can be estimated. Despite these interface states the possibility of fabricating MOSFET transistors working mostly in the depletion mode for a higher frequency cut-off still exists.

  18. Pre-Bombing Population Density in Hiroshima and Nagasaki: Its Measurement and Impact on Radiation Risk Estimates in the Life Span Study of Atomic Bomb Survivors.

    PubMed

    French, Benjamin; Funamoto, Sachiyo; Sugiyama, Hiromi; Sakata, Ritsu; Cologne, John; Cullings, Harry M; Mabuchi, Kiyohiko; Preston, Dale L

    2018-03-29

    In the Life Span Study of atomic bomb survivors, differences in urbanicity between high-dose and low-dose survivors could confound the association between radiation dose and adverse outcomes. We obtained data on the pre-bombing population distribution in Hiroshima and Nagasaki, and quantified the impact of adjustment for population density on radiation risk estimates for mortality (1950-2003) and incident solid cancer (1958-2009). Population density ranged from 4,671-14,378 and 5,748-19,149 people/km2 in urban regions of Hiroshima and Nagasaki, respectively. Radiation risk estimates for solid cancer mortality were attenuated by 5.1%, but those for all-cause mortality and incident solid cancer were unchanged. There was no overall association between population density and adverse outcomes, but there was evidence that the association between density and mortality differed by age at exposure. Among survivors 10-14 years old in 1945, there was a positive association between population density and risk of all-cause mortality (relative risk, 1.053 per 5,000 people/km2 increase, 95% confidence interval: 1.027, 1.079) and solid cancer mortality (relative risk, 1.069 per 5,000 people/km2 increase, 95% confidence interval: 1.025, 1.115). Our results suggest that radiation risk estimates from the Life Span Study are not sensitive to unmeasured confounding by urban-rural differences.

  19. Estimation of Enthalpy of Formation of Liquid Transition Metal Alloys: A Modified Prescription Based on Macroscopic Atom Model of Cohesion

    NASA Astrophysics Data System (ADS)

    Raju, Subramanian; Saibaba, Saroja

    2016-09-01

    The enthalpy of formation $\Delta^{o}H_{f}$ is an important thermodynamic quantity, which sheds significant light on fundamental cohesive and structural characteristics of an alloy. However, since it is difficult to determine accurately through experiments, simple estimation procedures are often desirable. In the present study, a modified prescription for estimating $\Delta^{o}H_{f}^{L}$ of liquid transition metal alloys is outlined, based on the Macroscopic Atom Model of cohesion. This prescription relies on self-consistent estimation of liquid-specific model parameters, namely electronegativity ($\phi^{L}$) and bonding electron density ($n_{b}^{L}$). Such unique identification is made through the use of well-established relationships connecting surface tension, compressibility, and molar volume of a metallic liquid with bonding charge density. The electronegativity is obtained through a consistent linear scaling procedure. The preliminary set of values for $\phi^{L}$ and $n_{b}^{L}$, together with other auxiliary model parameters, is subsequently optimized to obtain a good numerical agreement between calculated and experimental values of $\Delta^{o}H_{f}^{L}$ for sixty liquid transition metal alloys. It is found that, with few exceptions, the use of liquid-specific model parameters in the Macroscopic Atom Model yields a physically consistent methodology for reliable estimation of mixing enthalpies of liquid alloys.

  20. Accretion Rates for T Tauri Stars Using Nearly Simultaneous Ultraviolet and Optical Spectra

    NASA Astrophysics Data System (ADS)

    Ingleby, Laura; Calvet, Nuria; Herczeg, Gregory; Blaty, Alex; Walter, Frederick; Ardila, David; Alexander, Richard; Edwards, Suzan; Espaillat, Catherine; Gregory, Scott G.; Hillenbrand, Lynne; Brown, Alexander

    2013-04-01

    We analyze the accretion properties of 21 low-mass T Tauri stars using a data set of contemporaneous near-UV (NUV) through optical observations obtained with the Hubble Space Telescope Imaging Spectrograph and the ground-based Small and Medium Aperture Research Telescope System, a unique data set because of the nearly simultaneous broad wavelength coverage. Our data set includes accreting T Tauri stars in Taurus, Chamaeleon I, η Chamaeleon, and the TW Hydra Association. For each source we calculate the accretion rate ($\dot{M}$) by fitting the NUV and optical excesses above the photosphere, produced in the accretion shock, introducing multiple accretion components characterized by a range in energy flux (or density) for the first time. This treatment is motivated by models of the magnetospheric geometry and accretion footprints, which predict that high-density, low filling factor accretion spots coexist with low-density, high filling factor spots. By fitting the UV and optical spectra with multiple accretion components, we can explain excesses which have been observed in the near-IR. Comparing our estimates of $\dot{M}$ to previous estimates, we find some discrepancies; however, they may be accounted for when considering assumptions for the amount of extinction and variability in optical spectra. Therefore, we confirm many previous estimates of the accretion rate. Finally, we measure emission line luminosities from the same spectra used for the $\dot{M}$ estimates, to produce correlations between accretion indicators (Hβ, Ca II K, C II], and Mg II) and accretion properties obtained simultaneously.

  1. Dielectric properties of organic solvents from non-polarizable molecular dynamics simulation with electronic continuum model and density functional theory.

    PubMed

    Lee, Sanghun; Park, Sung Soo

    2011-11-03

    Dielectric constants of electrolytic organic solvents are calculated employing the nonpolarizable Molecular Dynamics simulation with Electronic Continuum (MDEC) model and Density Functional Theory. The molecular polarizabilities are obtained at the B3LYP/6-311++G(d,p) level of theory to estimate high-frequency refractive indices, while the densities and dipole moment fluctuations are computed using nonpolarizable MD simulations. The dielectric constants obtained from this procedure are shown to provide a reliable approach to estimating the experimental data. As an additional feature, two representative solvents with similar molecular weights but different dielectric properties, i.e., ethyl methyl carbonate and propylene carbonate, are compared using MD simulations, and distinctly different dielectric behaviors are observed at short times as well as at long times.
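
    For non-polarizable simulations, the static dielectric constant is commonly computed from the fluctuation of the total cell dipole moment, with the electronic (refractive-index) contribution added separately, which is the spirit of the MDEC approach described above. The sketch below shows only that generic fluctuation formula; the synthetic dipole series, cell volume, and eps_inf value are assumed for illustration, and no claim is made that this reproduces the authors' exact MDEC prescription (e.g. any charge-scaling factors are omitted).

```python
import numpy as np

K_B = 1.380649e-23     # Boltzmann constant, J/K
EPS0 = 8.854e-12       # vacuum permittivity, F/m

def static_dielectric(dipoles_cm, volume_m3, temperature_k, eps_inf):
    """Static dielectric constant from the fluctuation of the total cell
    dipole moment M (conducting boundary conditions), with the electronic
    contribution eps_inf (~ refractive index squared) added to the MD term."""
    m = np.asarray(dipoles_cm)                   # shape (n_frames, 3), in C*m
    fluct = np.mean(np.sum(m * m, axis=1)) - np.sum(np.mean(m, axis=0)**2)
    return eps_inf + fluct / (3.0 * EPS0 * volume_m3 * K_B * temperature_k)

# Illustrative usage with a synthetic dipole time series (assumed values):
rng = np.random.default_rng(1)
dipoles = rng.normal(0.0, 2.0e-28, size=(5000, 3))   # total dipole per frame, C*m
print(static_dielectric(dipoles, volume_m3=3.0e-26, temperature_k=298.0, eps_inf=2.0))
```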

  2. Hölder Regularity of the 2D Dual Semigeostrophic Equations via Analysis of Linearized Monge-Ampère Equations

    NASA Astrophysics Data System (ADS)

    Le, Nam Q.

    2018-05-01

    We obtain the Hölder regularity of time derivative of solutions to the dual semigeostrophic equations in two dimensions when the initial potential density is bounded away from zero and infinity. Our main tool is an interior Hölder estimate in two dimensions for an inhomogeneous linearized Monge-Ampère equation with right hand side being the divergence of a bounded vector field. As a further application of our Hölder estimate, we prove the Hölder regularity of the polar factorization for time-dependent maps in two dimensions with densities bounded away from zero and infinity. Our applications improve previous work by G. Loeper who considered the cases of densities sufficiently close to a positive constant.

  3. Ore Reserve Estimation of Saprolite Nickel Using Inverse Distance Method in PIT Block 3A Banggai Area Central Sulawesi

    NASA Astrophysics Data System (ADS)

    Khaidir Noor, Muhammad

    2018-03-01

    Reserve estimation is one of the important tasks in evaluating a mining project. It is the estimation of the quality and quantity of minerals that have economic value. The reserve calculation method plays an important role in determining the efficiency of commercial exploitation of a deposit. This study was intended to calculate the ore reserves contained in the study area, particularly Pit Block 3A. Nickel ore reserves were estimated from detailed exploration data, processed with Surpac 6.2 using the Inverse Distance Weighting (squared power) estimation method. The ore estimate obtained from 30 drill holes was 76,453.5 tons of saprolite with a density of 1.5 ton/m3 and a COG (Cut Off Grade) of Ni ≥ 1.6 %, while the overburden was 112,570.8 tons with a waste rock density of 1.2 ton/m3. The stripping ratio (SR) was 1.47 : 1, smaller than the target stripping ratio of 1.60 : 1.
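
    The estimation step named above, Inverse Distance Weighting with a squared power, can be sketched as follows. This is a generic illustration, not the Surpac 6.2 workflow: the drill-hole coordinates, grades, block size, and the function name are assumed example values (only the 1.5 t/m3 saprolite density echoes the record).

```python
import numpy as np

def idw_grade(block_xyz, hole_xyz, hole_grades, power=2.0, eps=1e-9):
    """Inverse Distance Weighting estimate of grade at a block centroid."""
    d = np.linalg.norm(hole_xyz - block_xyz, axis=1)
    if np.any(d < eps):                       # block coincides with a sample
        return float(hole_grades[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * hole_grades) / np.sum(w))

# Illustrative (assumed) drill-hole composites: x, y, z [m] and Ni grade [%]
holes = np.array([[0.0, 0.0, -5.0], [25.0, 0.0, -6.0], [0.0, 25.0, -4.0]])
grades = np.array([1.8, 1.5, 2.1])

block = np.array([10.0, 10.0, -5.0])
ni = idw_grade(block, holes, grades)

# Tonnage of a 12.5 x 12.5 x 1 m block of saprolite at 1.5 t/m^3:
tonnes = 12.5 * 12.5 * 1.0 * 1.5
print(f"estimated grade {ni:.2f} % Ni, block tonnage {tonnes:.1f} t")
```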

  4. Three-dimensional holoscopic image coding scheme using high-efficiency video coding with kernel-based minimum mean-square-error estimation

    NASA Astrophysics Data System (ADS)

    Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai

    2016-07-01

    Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
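
    The kernel density estimation step referred to above can be illustrated with a plain 1-D Gaussian KDE. The sketch uses Silverman's rule of thumb for the bandwidth as a stand-in, since the paper's kernel-trick bandwidth estimator is not reproduced here; the residual data are synthetic.

```python
import numpy as np

def gaussian_kde(samples, query, bandwidth=None):
    """1-D Gaussian kernel density estimate.  If no bandwidth is given,
    Silverman's rule of thumb is used (a stand-in for the paper's
    kernel-trick bandwidth estimation)."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    if bandwidth is None:
        bandwidth = 1.06 * np.std(x, ddof=1) * n ** (-1.0 / 5.0)
    u = (np.asarray(query)[:, None] - x[None, :]) / bandwidth
    kernel = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return kernel.sum(axis=1) / (n * bandwidth)

# Usage: estimate the density of synthetic prediction residuals
rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 4.0, size=2000)
grid = np.linspace(-15.0, 15.0, 61)
pdf = gaussian_kde(residuals, grid)
print(f"integral ~ {np.trapz(pdf, grid):.3f}")   # should be close to 1
```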

  5. Reader variability in breast density estimation from full-field digital mammograms: the effect of image postprocessing on relative and absolute measures.

    PubMed

    Keller, Brad M; Nathan, Diane L; Gavenonis, Sara C; Chen, Jinbo; Conant, Emily F; Kontos, Despina

    2013-05-01

    Mammographic breast density, a strong risk factor for breast cancer, may be measured as either a relative percentage of dense (ie, radiopaque) breast tissue or as an absolute area from either raw (ie, "for processing") or vendor postprocessed (ie, "for presentation") digital mammograms. Given the increasing interest in the incorporation of mammographic density in breast cancer risk assessment, the purpose of this study is to determine the inherent reader variability in breast density assessment from raw and vendor-processed digital mammograms, because inconsistent estimates could lead to misclassification of an individual woman's risk for breast cancer. Bilateral, mediolateral-oblique view, raw, and processed digital mammograms of 81 women were retrospectively collected for this study (N = 324 images). Mammographic percent density and absolute dense tissue area estimates for each image were obtained from two radiologists using a validated, interactive software tool. The variability of interreader agreement was not found to be affected by the image presentation style (ie, raw or processed, F-test: P > .5). Interreader estimates of relative and absolute breast density are strongly correlated (Pearson r > 0.84, P < .001) but systematically different (t-test, P < .001) between the two readers. Our results show that mammographic density may be assessed with equal reliability from either raw or vendor postprocessed images. Furthermore, our results suggest that the primary source of density variability comes from the subjectivity of the individual reader in assessing the absolute amount of dense tissue present in the breast, indicating the need to use standardized tools to mitigate this effect. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.

  6. Detectability of auditory signals presented without defined observation intervals

    NASA Technical Reports Server (NTRS)

    Watson, C. S.; Nichols, T. L.

    1976-01-01

    Ability to detect tones in noise was measured without defined observation intervals. Latency density functions were estimated for the first response following a signal and, separately, for the first response following randomly distributed instances of background noise. Detection performance was measured by the maximum separation between the cumulative latency density functions for signal-plus-noise and for noise alone. Values of the index of detectability, estimated by this procedure, were approximately those obtained with a 2-dB weaker signal and defined observation intervals. Simulation of defined- and non-defined-interval tasks with an energy detector showed that this device performs very similarly to the human listener in both cases.

  7. Measurements of surface-pressure fluctuations on the XB-70 airplane at local Mach numbers up to 2.45

    NASA Technical Reports Server (NTRS)

    Lewis, T. L.; Dods, J. B., Jr.; Hanly, R. D.

    1973-01-01

    Measurements of surface-pressure fluctuations were made at two locations on the XB-70 airplane for nine flight-test conditions encompassing a local Mach number range from 0.35 to 2.45. These measurements are presented in the form of estimated power spectral densities, coherence functions, and narrow-band-convection velocities. The estimated power spectral densities compared favorably with wind-tunnel data obtained by other experimenters. The coherence function and convection velocity data supported conclusions by other experimenters that low-frequency surface-pressure fluctuations consist of small-scale turbulence components with low convection velocity.

  8. Characteristics of dust voids in a strongly coupled laboratory dusty plasma

    NASA Astrophysics Data System (ADS)

    Bailung, Yoshiko; Deka, T.; Boruah, A.; Sharma, S. K.; Pal, A. R.; Chutia, Joyanti; Bailung, H.

    2018-05-01

    A void is produced in a strongly coupled dusty plasma by inserting a cylindrical pin (~0.1 mm diameter) into a radiofrequency discharge argon plasma. The pin is biased externally below the plasma potential to generate the dust void. The Debye sheath model is used to obtain the sheath potential profile and hence to estimate the electric field around the pin. The electric field force and the ion drag force on the dust particles are estimated and their balance accounts well for the maintenance of the size of the void. The effects of neutral density as well as dust density on the void size are studied.

  9. On the mean radiative efficiency of accreting massive black holes in AGNs and QSOs

    NASA Astrophysics Data System (ADS)

    Zhang, XiaoXia; Lu, YouJun

    2017-10-01

    Radiative efficiency is an important physical parameter that describes the fraction of accreted material converted to radiative energy during accretion onto massive black holes (MBHs). With the simplest Sołtan argument, the radiative efficiency of MBHs can be estimated by matching the mass density of MBHs in the local universe to the mass density accreted by MBHs during AGN/QSO phases. In this paper, we estimate the local MBH mass density through a combination of various determinations of the correlations between the masses of MBHs and the properties of MBH host galaxies, with the distribution functions of those galaxy properties. We also estimate the total energy density radiated by AGNs and QSOs by using various AGN/QSO X-ray luminosity functions in the literature. We then obtain several hundred estimates of the mean radiative efficiency of AGNs/QSOs. Under the assumption that those estimates are independent of each other and free of systematic effects, we apply the median statistics as described by Gott et al. and find that the mean radiative efficiency of AGNs/QSOs is $\epsilon = 0.105^{+0.006}_{-0.008}$, which is consistent with the canonical value of 0.1. Considering that about 20% Compton-thick objects may be missed from currently available X-ray surveys, the true mean radiative efficiency may actually be about 0.12.
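
    The Sołtan-type argument underlying this estimate can be written compactly: the locally measured MBH mass density is the accreted mass less the radiated fraction, so the mean efficiency follows from the ratio of the radiated energy density to the remnant mass density. The relation below is the standard schematic form, not necessarily the exact expression used by the authors.

```latex
% Schematic Soltan argument for the mean radiative efficiency \epsilon
\rho_{\rm BH} = (1-\epsilon)\,\rho_{\rm acc},\qquad
U_{\rm rad}  = \epsilon\,\rho_{\rm acc}\,c^{2}
\;\Longrightarrow\;
\epsilon = \frac{U_{\rm rad}}{U_{\rm rad} + \rho_{\rm BH}\,c^{2}}
```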

  10. Computer-assisted stereology and automated image analysis for quantification of tumor infiltrating lymphocytes in colon cancer.

    PubMed

    Eriksen, Ann C; Andersen, Johnnie B; Kristensson, Martin; dePont Christensen, René; Hansen, Torben F; Kjær-Frifeldt, Sanne; Sørensen, Flemming B

    2017-08-29

    Precise prognostic and predictive variables allowing improved post-operative treatment stratification are missing in patients treated for stage II colon cancer (CC). Investigation of tumor infiltrating lymphocytes (TILs) may be rewarding, but the lack of a standardized analytic technique is a major concern. Manual stereological counting is considered the gold standard, but digital pathology with image analysis is preferred due to time efficiency. The purpose of this study was to compare manual stereological estimates of TILs with automatic counts obtained by image analysis, and at the same time investigate the heterogeneity of TILs. From 43 patients treated for stage II CC in 2002 three paraffin embedded, tumor containing tissue blocks were selected one of them representing the deepest invasive tumor front. Serial sections from each of the 129 blocks were immunohistochemically stained for CD3 and CD8, and the slides were scanned. Stereological estimates of the numerical density and area fraction of TILs were obtained using the computer-assisted newCAST stereology system. For the image analysis approach an app-based algorithm was developed using Visiopharm Integrator System software. For both methods the tumor areas of interest (invasive front and central area) were manually delineated by the observer. Based on all sections, the Spearman's correlation coefficients for density estimates varied from 0.9457 to 0.9638 (p < 0.0001), whereas the coefficients for area fraction estimates ranged from 0.9400 to 0.9603 (P < 0.0001). Regarding heterogeneity, intra-class correlation coefficients (ICC) for CD3+ TILs varied from 0.615 to 0.746 in the central area, and from 0.686 to 0.746 in the invasive area. ICC for CD8+ TILs varied from 0.724 to 0.775 in the central area, and from 0.746 to 0.765 in the invasive area. Exact objective and time efficient estimates of numerical densities and area fractions of CD3+ and CD8+ TILs in stage II colon cancer can be obtained by image analysis and are highly correlated to the corresponding estimates obtained by the gold standard based on stereology. Since the intra-tumoral heterogeneity was low, this method may be recommended for quantifying TILs in only one histological section representing the deepest invasive tumor front.

  11. Estimating neuronal connectivity from axonal and dendritic density fields

    PubMed Central

    van Pelt, Jaap; van Ooyen, Arjen

    2013-01-01

    Neurons innervate space by extending axonal and dendritic arborizations. When axons and dendrites come in close proximity of each other, synapses between neurons can be formed. Neurons vary greatly in their morphologies and synaptic connections with other neurons. The size and shape of the arborizations determine the way neurons innervate space. A neuron may therefore be characterized by the spatial distribution of its axonal and dendritic “mass.” A population mean “mass” density field of a particular neuron type can be obtained by averaging over the individual variations in neuron geometries. Connectivity in terms of candidate synaptic contacts between neurons can be determined directly on the basis of their arborizations but also indirectly on the basis of their density fields. To decide when a candidate synapse can be formed, we previously developed a criterion defining that axonal and dendritic line pieces should cross in 3D and have an orthogonal distance less than a threshold value. In this paper, we developed new methodology for applying this criterion to density fields. We show that estimates of the number of contacts between neuron pairs calculated from their density fields are fully consistent with the number of contacts calculated from the actual arborizations. However, the estimation of the connection probability and the expected number of contacts per connection cannot be calculated directly from density fields, because density fields do not carry anymore the correlative structure in the spatial distribution of synaptic contacts. Alternatively, these two connectivity measures can be estimated from the expected number of contacts by using empirical mapping functions. The neurons used for the validation studies were generated by our neuron simulator NETMORPH. An example is given of the estimation of average connectivity and Euclidean pre- and postsynaptic distance distributions in a network of neurons represented by their population mean density fields. PMID:24324430

  12. Representation of Probability Density Functions from Orbit Determination using the Particle Filter

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell

    2012-01-01

    Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using the Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining higher order statistical information obtained using the PF. Methods such as the Principal Component Analysis (PCA) are based on utilizing up to second order statistics, hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.

  13. Compositional cokriging for mapping the probability risk of groundwater contamination by nitrates.

    PubMed

    Pardo-Igúzquiza, Eulogio; Chica-Olmo, Mario; Luque-Espinar, Juan A; Rodríguez-Galiano, Víctor

    2015-11-01

    Contamination by nitrates is an important cause of groundwater pollution and represents a potential risk to human health. Management decisions must be made using probability maps that assess the potential of the nitrate concentration exceeding regulatory thresholds. However, these maps are obtained with only a small number of sparse monitoring locations where the nitrate concentrations have been measured. It is therefore of great interest to have an efficient methodology for obtaining those probability maps. In this paper, we make use of the fact that the discrete probability density function is a compositional variable. The spatial discrete probability density function is estimated by compositional cokriging. There are several advantages in using this approach: (i) problems of classical indicator cokriging, like estimates outside the interval (0,1) and order-relation violations, are avoided; (ii) secondary variables (e.g. aquifer parameters) can be included in the estimation of the probability maps; (iii) uncertainty maps of the probability maps can be obtained; (iv) finally, there are modelling advantages, because the variograms and cross-variograms are those of real variables, which do not have the restrictions of indicator variograms and indicator cross-variograms. The methodology was applied to the Vega de Granada aquifer in Southern Spain and the advantages of the compositional cokriging approach were demonstrated. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. A cost-efficient method to assess carbon stocks in tropical peat soil

    NASA Astrophysics Data System (ADS)

    Warren, M. W.; Kauffman, J. B.; Murdiyarso, D.; Anshari, G.; Hergoualc'h, K.; Kurnianto, S.; Purbopuspito, J.; Gusmayanti, E.; Afifudin, M.; Rahajoe, J.; Alhamd, L.; Limin, S.; Iswandi, A.

    2012-11-01

    Estimation of belowground carbon stocks in tropical wetland forests requires funding for laboratory analyses and suitable facilities, which are often lacking in developing nations where most tropical wetlands are found. It is therefore beneficial to develop simple analytical tools to assist belowground carbon estimation where financial and technical limitations are common. Here we use published and original data to describe soil carbon density (kg C m-3; Cd) as a function of bulk density (g cm-3; Bd), which can be used to rapidly estimate belowground carbon storage using Bd measurements only. Predicted carbon densities and stocks are compared with those obtained from direct carbon analysis for ten peat swamp forest stands in three national parks of Indonesia. Analysis of soil carbon density and bulk density from the literature indicated a strong linear relationship (Cd = Bd × 495.14 + 5.41, R2 = 0.93, n = 151) for soils with organic C content > 40%. As organic C content decreases, the relationship between Cd and Bd becomes less predictable as soil texture becomes an important determinant of Cd. The equation predicted belowground C stocks to within 0.92% to 9.57% of observed values. Average bulk density of collected peat samples was 0.127 g cm-3, which is in the upper range of previous reports for Southeast Asian peatlands. When original data were included, the revised equation Cd = Bd × 468.76 + 5.82, with R2 = 0.95 and n = 712, was slightly below the lower 95% confidence interval of the original equation, and tended to decrease Cd estimates. We recommend this last equation for a rapid estimation of soil C stocks for well-developed peat soils where C content > 40%.
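
    Applying the recommended regression to a measured bulk-density profile is simple arithmetic; a minimal sketch is given below. The regression coefficients come from the record (Cd = Bd × 468.76 + 5.82), while the layer thicknesses and bulk-density values are assumed example inputs.

```python
def carbon_density(bulk_density_g_cm3):
    """Soil carbon density [kg C m^-3] from bulk density [g cm^-3], using the
    pooled regression reported in the record (well-developed peat, C > 40%)."""
    return 468.76 * bulk_density_g_cm3 + 5.82

def carbon_stock_mg_ha(bulk_densities, layer_thicknesses_m):
    """Belowground C stock [Mg C ha^-1] from per-layer bulk densities and
    layer thicknesses."""
    stock_kg_m2 = sum(carbon_density(bd) * dz
                      for bd, dz in zip(bulk_densities, layer_thicknesses_m))
    return stock_kg_m2 * 10.0          # kg C m^-2 -> Mg C ha^-1

# Illustrative (assumed) 3 m peat profile sampled in three 1 m layers:
print(f"{carbon_stock_mg_ha([0.10, 0.127, 0.15], [1.0, 1.0, 1.0]):.0f} Mg C ha^-1")
```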

  15. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    In this paper, a H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system fault using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities, uncertainties. A weighting mean value is given as an integral function of the square root PDF along space direction, which leads a function only about time and can be used to construct residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose the fault in system. A feasible detection criterion is obtained at first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. Simulation example is given to demonstrate the effectiveness of the proposed approaches.

  16. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew

    2009-03-01

    In this paper, a H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system fault using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities, uncertainties. A weighting mean value is given as an integral function of the square root PDF along space direction, which leads a function only about time and can be used to construct residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose the fault in system. A feasible detection criterion is obtained at first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. Simulation example is given to demonstrate the effectiveness of the proposed approaches.

  17. Density and Biomass Estimates by Removal for an Amazonian Crocodilian, Paleosuchus palpebrosus.

    PubMed

    Campos, Zilca; Magnusson, William E

    2016-01-01

    Direct counts of crocodilians are rarely feasible and it is difficult to meet the assumptions of mark-recapture methods for most species in most habitats. Catch-out experiments are also usually not logistically or morally justifiable because it would be necessary to destroy the habitat in order to be confident that most individuals had been captured. We took advantage of the draining and filling of a large area of flooded forest during the building of the Santo Antônio dam on the Madeira River to obtain accurate estimates of the density and biomass of Paleosuchus palpebrosus. The density, 28.4 non-hatchling individuals per km2, is one of the highest reported for any crocodilian, except for species that are temporarily concentrated in small areas during dry-season drought. The biomass estimate of 63.15 kg km-2 is higher than that for most or even all mammalian carnivores in tropical forest. P. palpebrosus may be one of the world's most abundant crocodilians.

  18. Hybrid asymptotic-numerical approach for estimating first-passage-time densities of the two-dimensional narrow capture problem.

    PubMed

    Lindsay, A E; Spoonmore, R T; Tzou, J C

    2016-10-01

    A hybrid asymptotic-numerical method is presented for obtaining an asymptotic estimate for the full probability distribution of capture times of a random walker by multiple small traps located inside a bounded two-dimensional domain with a reflecting boundary. As motivation for this study, we calculate the variance in the capture time of a random walker by a single interior trap and determine this quantity to be comparable in magnitude to the mean. This implies that the mean is not necessarily reflective of typical capture times and that the full density must be determined. To solve the underlying diffusion equation, the method of Laplace transforms is used to obtain an elliptic problem of modified Helmholtz type. In the limit of vanishing trap sizes, each trap is represented as a Dirac point source that permits the solution of the transform equation to be represented as a superposition of Helmholtz Green's functions. Using this solution, we construct asymptotic short-time solutions of the first-passage-time density, which captures peaks associated with rapid capture by the absorbing traps. When numerical evaluation of the Helmholtz Green's function is employed followed by numerical inversion of the Laplace transform, the method reproduces the density for larger times. We demonstrate the accuracy of our solution technique with a comparison to statistics obtained from a time-dependent solution of the diffusion equation and discrete particle simulations. In particular, we demonstrate that the method is capable of capturing the multimodal behavior in the capture time density that arises when the traps are strategically arranged. The hybrid method presented can be applied to scenarios involving both arbitrary domains and trap shapes.

  19. Characterizing Complexity of Containerized Cargo X-ray Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Guangxing; Martz, Harry; Glenn, Steven

    X-ray imaging can be used to inspect cargos imported into the United States. In order to better understand the performance of X-ray inspection systems, the X-ray characteristics (density, complexity) of cargo need to be quantified. In this project, an image complexity measure called integrated power spectral density (IPSD) was studied using both DNDO engineered cargos and stream-of-commerce (SOC) cargos. A joint distribution of cargo density and complexity was obtained. A support vector machine was used to classify the SOC cargos into four categories to estimate the relative fractions.
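
    The integrated power spectral density measure is not specified in detail here; the sketch below shows one plausible way to compute such an image-complexity metric from a 2-D radiograph, by integrating the normalized power spectrum above a low-frequency cutoff. The cutoff and normalization are assumptions for illustration, not the project's definition.

```python
import numpy as np

def integrated_psd(image, f_min=0.05):
    """Fraction of image power above spatial frequency f_min (cycles/pixel);
    higher values indicate more fine-scale texture (a simple complexity proxy)."""
    img = image - image.mean()
    psd2d = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    ny, nx = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny))
    fx = np.fft.fftshift(np.fft.fftfreq(nx))
    fr = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)   # radial frequency map
    return psd2d[fr >= f_min].sum() / psd2d.sum()

# Example with synthetic "cargo" images: smooth background vs. added clutter.
rng = np.random.default_rng(1)
smooth = np.outer(np.hanning(256), np.hanning(256))
cluttered = smooth + 0.3 * rng.standard_normal((256, 256))
print(f"smooth IPSD:    {integrated_psd(smooth):.3f}")
print(f"cluttered IPSD: {integrated_psd(cluttered):.3f}")
```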

  20. Density-dependent host choice by disease vectors: epidemiological implications of the ideal free distribution.

    PubMed

    Basáñez, María-Gloria; Razali, Karina; Renz, Alfons; Kelly, David

    2007-03-01

    The proportion of vector blood meals taken on humans (the human blood index, h) appears as a squared term in classical expressions of the basic reproduction ratio (R(0)) for vector-borne infections. Consequently, R(0) varies non-linearly with h. Estimates of h, however, constitute mere snapshots of a parameter that is predicted, from evolutionary theory, to vary with vector and host abundance. We test this prediction using a population dynamics model of river blindness assuming that, before initiation of vector control or chemotherapy, recorded measures of vector density and human infection accurately represent endemic equilibrium. We obtain values of h that satisfy the condition that the effective reproduction ratio (R(e)) must equal 1 at equilibrium. Values of h thus obtained decrease with vector density, decrease with the vector:human ratio and make R(0) respond non-linearly rather than increase linearly with vector density. We conclude that if vectors are less able to obtain human blood meals as their density increases, antivectorial measures may not lead to proportional reductions in R(0) until very low vector levels are achieved. Density dependence in the contact rate of infectious diseases transmitted by insects may be an important non-linear process with implications for their epidemiology and control.
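
    For context, a classical Ross-Macdonald-type expression for vector-borne transmission (a standard textbook form, not a formula quoted from this paper) makes the squared dependence explicit: a vector must bite a human once to acquire and once to transmit infection, so the human-biting rate a, which is proportional to the human blood index h, enters twice. Here m is the vector:human ratio, b and c are the transmission probabilities per bite, mu the vector mortality rate, tau the extrinsic incubation period, and r the human recovery rate.

```latex
R_0 = \frac{m\, a^{2}\, b\, c\, e^{-\mu \tau}}{r\, \mu},
\qquad a \propto h
\quad\Longrightarrow\quad
R_0 \propto h^{2}.
```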

  1. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
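
    As an illustration of the general approach (a sketch, not the authors' code), the snippet below discretizes a Fredholm integral of the first kind as a linear system and applies Tikhonov regularization; the grids, noise level, and regularization parameter are assumed for demonstration, whereas the paper proposes its own procedure for choosing the Tikhonov parameter.

```python
import numpy as np

# Discretize g(x) = \int K(x, v) f(v) dv  as  g = A f  on uniform grids.
v = np.linspace(1.0, 400.0, 200)          # "true" velocity grid (assumed)
x = np.linspace(1.0, 400.0, 200)          # observed v sin i grid (assumed)
dv = v[1] - v[0]

def kernel(x_i, v_j):
    # Projection kernel for isotropic inclinations: p(x | v) = x / (v*sqrt(v^2 - x^2)), x < v.
    with np.errstate(divide="ignore", invalid="ignore"):
        k = x_i / (v_j * np.sqrt(v_j**2 - x_i**2))
    return np.where(v_j > x_i, k, 0.0)

A = kernel(x[:, None], v[None, :]) * dv

# Synthetic "true" distribution and noisy projected data (illustrative).
rng = np.random.default_rng(2)
f_true = np.exp(-0.5 * ((v - 150.0) / 40.0) ** 2)
g_obs = A @ f_true + 0.01 * rng.standard_normal(x.size)

# Tikhonov solution: minimize ||A f - g||^2 + lam * ||f||^2.
lam = 1e-3                                 # regularization parameter (assumed)
f_hat = np.linalg.solve(A.T @ A + lam * np.eye(v.size), A.T @ g_obs)
print("peak of recovered distribution near v =", v[np.argmax(f_hat)])
```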

  2. Forest Canopy Cover and Height from MISR in Topographically Complex Southwestern US Landscape Assessed with High Quality Reference Data

    NASA Technical Reports Server (NTRS)

    Chopping, Mark; North, Malcolm; Chen, Jiquan; Schaaf, Crystal B.; Blair, J. Bryan; Martonchik, John V.; Bull, Michael A.

    2012-01-01

    This study addresses the retrieval of spatially contiguous canopy cover and height estimates in southwestern US forests via inversion of a geometric-optical (GO) model against surface bidirectional reflectance factor (BRF) estimates from the Multi-angle Imaging SpectroRadiometer (MISR). Model inversion can provide such maps if good estimates of the background bidirectional reflectance distribution function (BRDF) are available. The study area is in the Sierra National Forest in the Sierra Nevada of California. Tree number density, mean crown radius, and fractional cover reference estimates were obtained via analysis of QuickBird 0.6 m spatial resolution panchromatic imagery using the CANopy Analysis with Panchromatic Imagery (CANAPI) algorithm, while RH50, RH75, and RH100 (50%, 75%, and 100% energy return) height data were obtained from the NASA Laser Vegetation Imaging Sensor (LVIS), a full-waveform light detection and ranging (lidar) instrument. These canopy parameters were used to drive a modified version of the simple GO model (SGM), accurately reproducing patterns of MISR 672 nm band surface reflectance (mean RMSE 0.011, mean R2 0.82, N = 1048). Cover and height maps were obtained through model inversion against MISR 672 nm reflectance estimates on a 250 m grid. The free parameters were tree number density and mean crown radius. RMSE values with respect to reference data for the cover and height retrievals were 0.05 and 6.65 m, respectively, with R2 values of 0.54 and 0.49. MISR can thus provide maps of forest cover and height in areas of topographic variation, although refinements are required to improve retrieval precision.

  3. A citizen science based survey method for estimating the density of urban carnivores.

    PubMed

    Scott, Dawn M; Baker, Rowenna; Charman, Naomi; Karlsson, Heidi; Yarnell, Richard W; Mill, Aileen C; Smith, Graham C; Tolhurst, Bryony A

    2018-01-01

    Globally there are many examples of synanthropic carnivores exploiting growth in urbanisation. As carnivores can come into conflict with humans and are potential vectors of zoonotic disease, assessing densities in suburban areas and identifying factors that influence them are necessary to aid management and mitigation. However, fragmented, privately owned land restricts the use of conventional carnivore surveying techniques in these areas, requiring development of novel methods. We present a method that combines questionnaire distribution to residents with field surveys and GIS to determine the relative density of two urban carnivores in England, Great Britain. We determined the density of red fox (Vulpes vulpes) social groups in 14 approximately 1 km2 suburban areas in 8 different towns and cities, and of Eurasian badger (Meles meles) social groups in three suburban areas of one city. Average relative fox group density (FGD) was 3.72 km-2, which was double the estimates for cities with resident foxes in the 1980s. Density was comparable to an alternative estimate derived from trapping and GPS-tracking, indicating the validity of the method. However, FGD did not correlate with a national dataset based on fox sightings, indicating that the national data are unreliable for determining actual densities or extrapolating a national population estimate. Using species-specific clustering units that reflect social organisation, the method was additionally applied to suburban badgers to derive relative badger group density (BGD) for one city (Brighton, 2.41 km-2). We demonstrate that citizen science approaches can effectively obtain data to assess suburban carnivore density; however, publicly derived national data sets need to be locally validated before extrapolations can be undertaken. The method we present for assessing densities of foxes and badgers in British towns and cities is also adaptable to other urban carnivores elsewhere. However, this transferability is contingent on species traits meeting particular criteria, and on resident responsiveness.

  4. A citizen science based survey method for estimating the density of urban carnivores

    PubMed Central

    Baker, Rowenna; Charman, Naomi; Karlsson, Heidi; Yarnell, Richard W.; Mill, Aileen C.; Smith, Graham C.; Tolhurst, Bryony A.

    2018-01-01

    Globally there are many examples of synanthropic carnivores exploiting growth in urbanisation. As carnivores can come into conflict with humans and are potential vectors of zoonotic disease, assessing densities in suburban areas and identifying factors that influence them are necessary to aid management and mitigation. However, fragmented, privately owned land restricts the use of conventional carnivore surveying techniques in these areas, requiring development of novel methods. We present a method that combines questionnaire distribution to residents with field surveys and GIS to determine the relative density of two urban carnivores in England, Great Britain. We determined the density of red fox (Vulpes vulpes) social groups in 14 approximately 1 km2 suburban areas in 8 different towns and cities, and of Eurasian badger (Meles meles) social groups in three suburban areas of one city. Average relative fox group density (FGD) was 3.72 km-2, which was double the estimates for cities with resident foxes in the 1980s. Density was comparable to an alternative estimate derived from trapping and GPS-tracking, indicating the validity of the method. However, FGD did not correlate with a national dataset based on fox sightings, indicating that the national data are unreliable for determining actual densities or extrapolating a national population estimate. Using species-specific clustering units that reflect social organisation, the method was additionally applied to suburban badgers to derive relative badger group density (BGD) for one city (Brighton, 2.41 km-2). We demonstrate that citizen science approaches can effectively obtain data to assess suburban carnivore density; however, publicly derived national data sets need to be locally validated before extrapolations can be undertaken. The method we present for assessing densities of foxes and badgers in British towns and cities is also adaptable to other urban carnivores elsewhere. However, this transferability is contingent on species traits meeting particular criteria, and on resident responsiveness. PMID:29787598

  5. Radar Investigations of Asteroids

    NASA Technical Reports Server (NTRS)

    Ostro, S. J.

    1984-01-01

    Radar investigations of asteroids, including observations during 1984 to 1985 of at least 8 potential targets and continued analyses of radar data obtained during 1980 to 1984 for 30 other asteroids, are proposed. The primary scientific objectives include estimation of echo strength, polarization, spectral shape, spectral bandwidth, and Doppler shift. These measurements yield estimates of target size, shape, and spin vector; place constraints on topography, morphology, density, and composition of the planetary surface; yield refined estimates of target orbital parameters; and reveal the presence of asteroidal satellites.

  6. A Fourier approach to cloud motion estimation

    NASA Technical Reports Server (NTRS)

    Arking, A.; Lo, R. C.; Rosenfield, A.

    1977-01-01

    A Fourier technique is described for estimating cloud motion from pairs of pictures using the phase of the cross spectral density. The method allows motion estimates to be made for individual spatial frequencies, which are related to cloud pattern dimensions. Results obtained are presented and compared with the results of a Fourier-domain cross-correlation scheme. Tests using both artificial and real cloud data show that the technique is relatively sensitive to the presence of mixtures of motions, changes in cloud shape, and edge effects.
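
    To make the idea concrete (a generic sketch of cross-spectral phase displacement estimation, not the paper's exact algorithm), the snippet below estimates a translation between two images from the phase of their cross power spectrum; in this simplest form, a single global shift is recovered from the phase-correlation peak.

```python
import numpy as np

def estimate_shift(img1, img2):
    """Estimate the integer translation d such that img1(x) ~ img2(x - d),
    from the phase of the cross spectral density (phase correlation)."""
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12        # keep only the phase information
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices above N/2 to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Example: a synthetic cloud field shifted by (5, -3) pixels.
rng = np.random.default_rng(3)
base = rng.random((128, 128))
shifted = np.roll(base, shift=(5, -3), axis=(0, 1))
print("estimated (row, col) shift:", estimate_shift(shifted, base))
```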

  7. A shock-capturing SPH scheme based on adaptive kernel estimation

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime

    2006-02-01

    Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
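
    As a generic illustration of the adaptive kernel estimation idea the scheme builds on (a sketch under assumed parameters, not the authors' SPH implementation), the code below combines a fixed-kernel pilot estimate with locally adapted bandwidths so that sparse, low-density regions receive more smoothing.

```python
import numpy as np

def adaptive_kde(samples, grid, alpha=0.5):
    """1-D adaptive (variable-bandwidth) Gaussian kernel density estimate.
    Bandwidths shrink where the pilot density is high and grow where it is low."""
    n = samples.size
    h0 = 1.06 * samples.std() * n ** (-1 / 5)          # Silverman pilot bandwidth
    # Pilot (fixed-kernel) density evaluated at the sample points.
    d = samples[:, None] - samples[None, :]
    pilot = np.exp(-0.5 * (d / h0) ** 2).sum(axis=1) / (n * h0 * np.sqrt(2 * np.pi))
    # Local bandwidth factors (Abramson-style, sensitivity alpha).
    g = np.exp(np.mean(np.log(pilot)))
    h_i = h0 * (pilot / g) ** (-alpha)
    # Adaptive estimate on the evaluation grid.
    u = (grid[:, None] - samples[None, :]) / h_i[None, :]
    return (np.exp(-0.5 * u ** 2) / (h_i * np.sqrt(2 * np.pi))).sum(axis=1) / n

rng = np.random.default_rng(4)
samples = np.concatenate([rng.normal(0, 0.2, 800), rng.normal(3, 1.0, 200)])
grid = np.linspace(-2, 7, 400)
density = adaptive_kde(samples, grid)
print("estimated density peak near x =", grid[np.argmax(density)])
```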

  8. Estimation of option-implied risk-neutral into real-world density by using calibration function

    NASA Astrophysics Data System (ADS)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-04-01

    Option prices contain crucial information that can be used as a reflection of the future development of an underlying asset's price. The main objective of this study is to extract the risk-neutral density (RND) and the real-world density (RWD) from option prices. A volatility function technique is applied, using a fourth-order polynomial interpolation, to obtain the RNDs. Then, a calibration function is used to convert the RNDs into RWDs. There are two types of calibration function: parametric and non-parametric. The densities are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity from January 2009 until December 2015. The performance of the extracted RNDs and RWDs is evaluated by using a density forecasting test. This study found that the RWDs obtained provide more accurate information regarding the future price of the underlying asset than the RNDs. In addition, empirical evidence suggests that RWDs from a non-parametric calibration have better accuracy than the other densities.
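
    A common route from a fitted volatility function to an RND (sketched here in generic form; the smile coefficients, rate, and maturity are illustrative assumptions rather than the study's data) is the Breeden-Litzenberger relation, which takes the second derivative of the call price with respect to strike.

```python
import numpy as np
from scipy.stats import norm

S0, r, T = 100.0, 0.01, 1.0 / 12.0        # spot, rate, one-month maturity (assumed)

def smile_vol(K):
    """Fourth-order polynomial volatility function of log-moneyness (assumed coefficients)."""
    m = np.log(K / S0)
    return 0.20 + 0.05 * m + 0.40 * m**2 - 0.10 * m**3 + 0.30 * m**4

def bs_call(K, sigma):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Breeden-Litzenberger: q(K) = exp(rT) * d^2 C / dK^2, via finite differences.
K = np.linspace(70.0, 130.0, 601)
C = bs_call(K, smile_vol(K))
rnd = np.exp(r * T) * np.gradient(np.gradient(C, K), K)
print("RND integrates to ~", np.trapz(rnd, K).round(3))
print("RND mode near K =", K[np.argmax(rnd)].round(2))
```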

  9. Density conversion factor determined using a cone-beam computed tomography unit NewTom QR-DVT 9000.

    PubMed

    Lagravère, M O; Fang, Y; Carey, J; Toogood, R W; Packota, G V; Major, P W

    2006-11-01

    The purpose of this study was to determine a conversion coefficient for Hounsfield Units (HU) to material density (g cm(-3)) obtained from cone-beam computed tomography (CBCT-NewTom QR-DVT 9000) data. Six cylindrical models of materials with different densities were made and scanned using the NewTom QR-DVT 9000 Volume Scanner. The raw data were converted into DICOM format and analysed using Merge eFilm and AMIRA to determine the HU of different areas of the models. There was no significant difference (P = 0.846) between the HU given by each piece of software. A linear regression was performed using the density, rho (g cm(-3)), as the dependent variable in terms of the HU (H). The regression equation obtained was rho = 0.002H - 0.381, with an R2 value of 0.986. The standard error of the estimation is 27.104 HU in the case of the Hounsfield Units and 0.064 g cm(-3) in the case of density. CBCT provides an effective option for determination of material density expressed as Hounsfield Units.
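
    Applied as reported, the regression gives a direct conversion from a CBCT grey value in Hounsfield units to material density; the small helper below simply evaluates that published fit (the uncertainty figure quoted in the comment is the paper's standard error of estimation).

```python
def hu_to_density(hu):
    """Convert a NewTom QR-DVT 9000 Hounsfield unit value to density (g/cm^3)
    using the reported linear fit rho = 0.002*H - 0.381 (R^2 = 0.986,
    standard error ~0.064 g/cm^3)."""
    return 0.002 * hu - 0.381

# Example: a voxel reading of 700 HU.
print(f"700 HU -> {hu_to_density(700):.3f} g/cm^3")   # about 1.019 g/cm^3
```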

  10. Bone mass density estimation: Archimede’s principle versus automatic X-ray histogram and edge detection technique in ovariectomized rats treated with germinated brown rice bioactives

    PubMed Central

    Muhammad, Sani Ismaila; Maznah, Ismail; Mahmud, Rozi Binti; Esmaile, Maher Faik; Zuki, Abu Bakar Zakaria

    2013-01-01

    Background Bone mass density is an important parameter used in the estimation of the severity and depth of lesions in osteoporosis. Estimation of bone density using existing methods in experimental models has its advantages as well as drawbacks. Materials and methods In this study, the X-ray histogram edge detection technique was used to estimate the bone mass density in ovariectomized rats treated orally with germinated brown rice (GBR) bioactives, and the results were compared with estimated results obtained using Archimede’s principle. New bone cell proliferation was assessed by histology and immunohistochemical reaction using polyclonal nuclear antigen. Additionally, serum alkaline phosphatase activity, serum and bone calcium and zinc concentrations were detected using a chemistry analyzer and atomic absorption spectroscopy. Rats were divided into groups of six as follows: sham (nonovariectomized, nontreated); ovariectomized, nontreated; and ovariectomized and treated with estrogen, or Remifemin®, GBR-phenolics, acylated steryl glucosides, gamma oryzanol, and gamma amino-butyric acid extracted from GBR at different doses. Results Our results indicate a significant increase in alkaline phosphatase activity, serum and bone calcium, and zinc and ash content in the treated groups compared with the ovariectomized nontreated group (P < 0.05). Bone density increased significantly (P < 0.05) in groups treated with estrogen, GBR, Remifemin®, and gamma oryzanol compared to the ovariectomized nontreated group. Histological sections revealed more osteoblasts in the treated groups when compared with the untreated groups. A polyclonal nuclear antigen reaction showing proliferating new cells was observed in groups treated with estrogen, Remifemin®, GBR, acylated steryl glucosides, and gamma oryzanol. There was a good correlation between bone mass densities estimated using Archimede’s principle and the edge detection technique between the treated groups (r2 = 0.737, P = 0.004). Conclusion Our study shows that GBR bioactives increase bone density, which might be via the activation of zinc formation and increased calcium content, and that X-ray edge detection technique is effective in the measurement of bone density and can be employed effectively in this respect. PMID:24187491

  11. Using CTX Image Features to Predict HiRISE-Equivalent Rock Density

    NASA Technical Reports Server (NTRS)

    Serrano, Navid; Huertas, Andres; McGuire, Patrick; Mayer, David; Ardvidson, Raymond

    2010-01-01

    Methods have been developed to quantitatively assess rock hazards at candidate landing sites with the aid of images from the HiRISE camera onboard NASA's Mars Reconnaissance Orbiter. HiRISE is able to resolve rocks as small as 1 m in diameter. Some sites of interest do not have adequate coverage with the highest resolution sensors and there is a need to infer relevant information (like site safety or underlying geomorphology). The proposed approach would make it possible to obtain rock density estimates at a level close to or equal to those obtained from high-resolution sensors where individual rocks are discernable.

  12. Measurement of lung function using Electrical Impedance Tomography (EIT) during mechanical ventilation

    NASA Astrophysics Data System (ADS)

    Nebuya, Satoru; Koike, Tomotaka; Imai, Hiroshi; Noshiro, Makoto; Brown, Brian H.; Soma, Kazui

    2010-04-01

    The consistency of regional lung density measurements as estimated by Electrical Impedance Tomography (EIT), in eleven patients supported by a mechanical ventilator, was validated to verify the feasibility of its use in intensive care medicine. There were significant differences in regional lung densities between the normal lung and diseased lungs associated with pneumonia, atelectasis and pleural effusion (Steel-Dwass test, p < 0.05). Temporal changes in regional lung density of patients with atelectasis were observed to be in good agreement with the results of clinical diagnosis. These results indicate that it is feasible to obtain a quantitative value for regional lung density using EIT.

  13. LINDENS: A program for lineament length and density analysis*1

    NASA Astrophysics Data System (ADS)

    Casas, Antonio M.; Cortés, Angel L.; Maestro, Adolfo; Soriano, M. Asunción; Riaguas, Andres; Bernal, Javier

    2000-11-01

    Analysis of lineaments from satellite images normally includes the determination of their orientation and density. The spatial variation in the orientation and/or number of lineaments must be obtained by means of a network of cells, the lineaments included in each cell being analysed separately. The program presented in this work, LINDENS, allows the density of lineaments (number of lineaments per km2 and length of lineaments per km2) to be estimated. It also provides a tool for classifying the lineaments contained in different cells, so that their orientation can be represented in frequency histograms and/or rose diagrams. The input file must contain the planar coordinates of the beginning and end of each lineament. The density analysis is done by creating a network of square cells, and counting the number of lineaments that are contained within each cell, that have one of their ends within the cell or that cross-cut the cell boundary. The lengths of lineaments are then calculated. To obtain a representative density map the cell size must be fixed according to: (1) the average lineament length; (2) the distance between the lineaments; and (3) the boundaries of zones with low densities due to lithology or outcrop features. An example from the Neogene Duero Basin (Northern Spain) is provided to test the reliability of the density maps obtained with different cell sizes.
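
    The cell-based density analysis can be illustrated with a short sketch (not the LINDENS source, which is not reproduced here): given lineament endpoint coordinates, it counts the lineaments that touch each square cell and accumulates length per cell, approximating the in-cell length by sampling points along each segment.

```python
import numpy as np

def lineament_density(segments, xmin, ymin, cell, nx, ny, n_samples=200):
    """Number of lineaments and total lineament length per square cell.
    segments: array (N, 4) of (x1, y1, x2, y2) endpoint coordinates (e.g. km)."""
    counts = np.zeros((ny, nx))
    lengths = np.zeros((ny, nx))
    for x1, y1, x2, y2 in segments:
        t = np.linspace(0.0, 1.0, n_samples)
        xs, ys = x1 + t * (x2 - x1), y1 + t * (y2 - y1)
        ix = ((xs - xmin) // cell).astype(int)
        iy = ((ys - ymin) // cell).astype(int)
        inside = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
        seg_len = np.hypot(x2 - x1, y2 - y1)
        for cy, cx in set(zip(iy[inside], ix[inside])):   # lineament touches cell
            counts[cy, cx] += 1
        for cy, cx in zip(iy[inside], ix[inside]):        # apportion length
            lengths[cy, cx] += seg_len / n_samples
    area = cell ** 2
    return counts / area, lengths / area                  # per-km2 densities

segs = np.array([[0.2, 0.2, 3.8, 3.5], [1.0, 3.0, 3.0, 1.0]])
n_per_km2, len_per_km2 = lineament_density(segs, 0.0, 0.0, cell=2.0, nx=2, ny=2)
print(n_per_km2)
print(len_per_km2.round(2))
```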

  14. Information on estimating local government highway bonds

    DOT National Transportation Integrated Search

    1973-06-01

    The theory of traffic flow following a lane blockage on a multi-lane freeway has been developed. Numerical results have been obtained and are presented both for the steady state case where the traffic density remains constant and the non-steady state...

  15. Analysis of percent density estimates from digital breast tomosynthesis projection images

    NASA Astrophysics Data System (ADS)

    Bakic, Predrag R.; Kontos, Despina; Zhang, Cuiping; Yaffe, Martin J.; Maidment, Andrew D. A.

    2007-03-01

    Women with dense breasts have an increased risk of breast cancer. Breast density is typically measured as the percent density (PD), the percentage of non-fatty (i.e., dense) tissue in breast images. Mammographic PD estimates vary, in part, due to the projective nature of mammograms. Digital breast tomosynthesis (DBT) is a novel radiographic method in which 3D images of the breast are reconstructed from a small number of projection (source) images, acquired at different positions of the x-ray focus. DBT provides superior visualization of breast tissue and has improved sensitivity and specificity as compared to mammography. Our long-term goal is to test the hypothesis that PD obtained from DBT is superior in estimating cancer risk compared with other modalities. As a first step, we have analyzed the PD estimates from DBT source projections since the results would be independent of the reconstruction method. We estimated PD from MLO mammograms (PDM) and from individual DBT projections (PDT). We observed good agreement between PDM and PDT from the central projection images of 40 women. This suggests that variations in breast positioning, dose, and scatter between mammography and DBT do not negatively affect PD estimation. The PDT estimated from individual DBT projections of nine women varied with the angle between the projections. This variation is caused by the 3D arrangement of the breast dense tissue and the acquisition geometry.

  16. Near surface bulk density estimates of NEAs from radar observations and permittivity measurements of powdered geologic material

    NASA Astrophysics Data System (ADS)

    Hickson, Dylan; Boivin, Alexandre; Daly, Michael G.; Ghent, Rebecca; Nolan, Michael C.; Tait, Kimberly; Cunje, Alister; Tsai, Chun An

    2018-05-01

    The variations in near-surface properties and regolith structure of asteroids are currently not well constrained by remote sensing techniques. Radar is a useful tool for such determinations of Near-Earth Asteroids (NEAs) as the power of the reflected signal from the surface is dependent on the bulk density, ρbd, and dielectric permittivity. In this study, high precision complex permittivity measurements of powdered aluminum oxide and dunite samples are used to characterize the change in the real part of the permittivity with the bulk density of the sample. In this work, we use silica aerogel for the first time to increase the void space in the samples (and decrease the bulk density) without significantly altering the electrical properties. We fit various mixing equations to the experimental results. The Looyenga-Landau-Lifshitz mixing formula has the best fit, while the Lichtenecker mixing formula, which is typically used to approximate planetary regolith, does not model the results well. We find that the Looyenga-Landau-Lifshitz formula adequately matches lunar regolith permittivity measurements, and we incorporate it into an existing model for obtaining asteroid regolith bulk density from radar returns, which is then used to estimate the bulk density in the near surface of NEAs (101955) Bennu and (25143) Itokawa. Constraints on the material properties appropriate for either asteroid give average estimates of ρbd = 1.27 ± 0.33 g/cm3 for Bennu and ρbd = 1.68 ± 0.53 g/cm3 for Itokawa. We conclude that our data suggest that the Looyenga-Landau-Lifshitz mixing model, in tandem with an appropriate radar scattering model, is the best method for estimating bulk densities of regoliths from radar observations of airless bodies.
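
    The Looyenga-Landau-Lifshitz rule mentioned above has a simple closed form; for a two-phase mixture of solid grains and vacuum (a standard statement of the rule, with the solid-grain permittivity and grain density treated as known inputs rather than values from this paper) it reads:

```latex
\varepsilon_{\mathrm{mix}}^{1/3}
  = \phi\,\varepsilon_{\mathrm{solid}}^{1/3} + (1-\phi)
\quad\Longrightarrow\quad
\rho_{\mathrm{bd}} = \rho_{\mathrm{solid}}\,\phi
  = \rho_{\mathrm{solid}}\,
    \frac{\varepsilon_{\mathrm{mix}}^{1/3} - 1}{\varepsilon_{\mathrm{solid}}^{1/3} - 1}.
```

    Inverting the rule in this way is how a radar-derived estimate of the near-surface permittivity, combined with an assumed solid-grain density, yields a bulk density.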

  17. Indirect measurement of lung density and air volume from electrical impedance tomography (EIT) data.

    PubMed

    Nebuya, Satoru; Mills, Gary H; Milnes, Peter; Brown, Brian H

    2011-12-01

    This paper describes a method for estimating lung density, air volume and changes in fluid content from a non-invasive measurement of the electrical resistivity of the lungs. Resistivity in Ω m was found by fitting measured electrical impedance tomography (EIT) data to a finite difference model of the thorax. Lung density was determined by comparing the resistivity of the lungs, measured at a relatively high frequency, with values predicted from a published model of lung structure. Lung air volume can then be calculated if total lung weight is also known. Temporal changes in lung fluid content will produce proportional changes in lung density. The method was implemented on EIT data, collected using eight electrodes placed in a single plane around the thorax, from 46 adult male subjects and 36 adult female subjects. Mean lung densities (±SD) of 246 ± 67 and 239 ± 64 kg m(-3), respectively, were obtained. In seven adult male subjects estimates of 1.68 ± 0.30, 3.42 ± 0.49 and 4.40 ± 0.53 l in residual volume, functional residual capacity and vital capacity, respectively, were obtained. Sources of error are discussed. It is concluded that absolute differences in lung density of about 30% and changes over time of less than 30% should be detected using the current technology in normal subjects. These changes would result from approximately 300 ml increase in lung fluid. The method proposed could be used for non-invasive monitoring of total lung air and fluid content in normal subjects but needs to be assessed in patients with lung disease.

  18. Estimation of Δ R/ R values by benchmark study of the Mössbauer Isomer shifts for Ru, Os complexes using relativistic DFT calculations

    NASA Astrophysics Data System (ADS)

    Kaneko, Masashi; Yasuhara, Hiroki; Miyashita, Sunao; Nakashima, Satoru

    2017-11-01

    The present study applies all-electron relativistic DFT calculations with the Douglas-Kroll-Hess (DKH) Hamiltonian to ten sets each of Ru and Os compounds. We perform a benchmark investigation of three density functionals (BP86, B3LYP and B2PLYP) using the segmented all-electron relativistically contracted (SARC) basis set against the experimental Mössbauer isomer shifts for the 99Ru and 189Os nuclides. Geometry optimizations at the BP86 level of theory locate the structures in local minima. We calculate the contact density from the wavefunction obtained by a single-point calculation. All functionals show a good linear correlation with the experimental isomer shifts for both 99Ru and 189Os. In particular, the B3LYP functional gives a stronger correlation than the BP86 and B2PLYP functionals. The comparison of contact densities between SARC and the well-tempered basis set (WTBS) indicated that numerical convergence of the contact density cannot be obtained, but that the reproducibility is not very sensitive to the choice of basis set. We also estimate the values of ΔR/R, an important nuclear constant, for the 99Ru and 189Os nuclides by using the benchmark results. The sign of the calculated ΔR/R values is consistent with the predicted data for 99Ru and 189Os. We obtain computationally the ΔR/R values of 99Ru and 189Os (36.2 keV) as 2.35×10-4 and -0.20×10-4, respectively, at the B3LYP level with the SARC basis set.
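
    The benchmark route to ΔR/R rests on the standard proportionality for the Mössbauer isomer shift (a textbook relation in its nonrelativistic point-nucleus form, not reproduced from the paper): the shift between absorber A and reference S scales with both ΔR/R and the difference in electron contact densities, so the slope of a linear fit of measured shifts against calculated contact densities gives access to ΔR/R.

```latex
\delta \;\propto\; Z e^{2} R^{2}\,\frac{\Delta R}{R}\,
\bigl[\rho_{A}(0) - \rho_{S}(0)\bigr].
```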

  19. Analysis of Ion Composition Estimation Accuracy for Incoherent Scatter Radars

    NASA Astrophysics Data System (ADS)

    Martínez Ledesma, M.; Diaz, M. A.

    2017-12-01

    The Incoherent Scatter Radar (ISR) is one of the most powerful sounding methods developed to probe the ionosphere. This radar system determines the plasma parameters by sending powerful electromagnetic pulses into the ionosphere and analyzing the received backscatter. This analysis provides information about parameters such as electron and ion temperatures, electron densities, ion composition, and ion drift velocities. Nevertheless, in some cases the ISR analysis has ambiguities in the determination of the plasma characteristics. Of particular relevance is the ion composition and temperature ambiguity obtained between the F1 and the lower F2 layers. In this case, very similar signals are obtained with different mixtures of molecular ions (NO2+ and O2+) and atomic oxygen ions (O+), and consequently it is not possible to completely discriminate between them. The most common solution to this problem is the use of empirical or theoretical models of the ionosphere when fitting ambiguous data. More recent works make use of parameters estimated from the plasma line band of the radar to reduce the number of parameters to determine. In this work we propose to determine the error of the ion composition estimate when using plasma line electron density measurements. The sensitivity of the ion composition estimation has also been calculated as a function of the accuracy of the ionospheric model, showing that a correct estimation is highly dependent on the capacity of the model to approximate the real values. Monte Carlo simulations of data fitting at different signal-to-noise ratios (SNRs) have been performed to obtain valid and invalid estimation probability curves. This analysis provides a method to determine the probability of erroneous estimation for different signal fluctuations. It can also be used as an empirical method to compare the efficiency of different algorithms and methods when solving the ion composition ambiguity.

  20. Feasibility of anomaly detection and characterization using trans-admittance mammography with 60 × 60 electrode array

    NASA Astrophysics Data System (ADS)

    Zhao, Mingkang; Wi, Hun; Lee, Eun Jung; Woo, Eung Je; In Oh, Tong

    2014-10-01

    Electrical impedance imaging has the potential to detect an early stage of breast cancer owing to the higher admittivity values of tumors compared with those of normal breast tissues. The tumor size and the extent of axillary lymph node involvement are important parameters for evaluating the breast cancer survival rate. Additionally, anomaly characterization is required to distinguish a malignant tumor from a benign one. In order to overcome the limitations of breast cancer detection using impedance measurement probes, we developed a high-density trans-admittance mammography (TAM) system with a 60 × 60 electrode array and produced trans-admittance maps obtained at several frequency pairs. We applied an anomaly detection algorithm to the high-density TAM system for estimating the volume and position of breast tumors. We tested four different sizes of anomaly with three different conductivity contrasts at four different depths. From multifrequency trans-admittance maps, we can readily observe the transversal position and estimate the volume and depth of an anomaly. In particular, the depth estimates were accurate and independent of anomaly size and conductivity contrast when applying a new formula using the Laplacian of the trans-admittance map. The volume estimate depended on the conductivity contrast between the anomaly and the background in the breast phantom. We characterized two test anomalies using frequency-difference trans-admittance data to eliminate the dependency on anomaly position and size. We confirmed the anomaly detection and characterization algorithm with the high-density TAM system on bovine breast tissue. Both results show the feasibility of detecting the size and position of an anomaly, and of tissue characterization, for breast cancer screening.

  1. SOMKE: kernel density estimation over data streams by sequences of self-organizing maps.

    PubMed

    Cao, Yuan; He, Haibo; Man, Hong

    2012-08-01

    In this paper, we propose a novel method, SOMKE, for kernel density estimation (KDE) over data streams based on sequences of self-organizing maps (SOMs). In many stream data mining applications, the traditional KDE methods are infeasible because of the high computational cost, processing time, and memory requirement. To reduce the time and space complexity, we propose a SOM structure to obtain well-defined data clusters for estimating the underlying probability distributions of incoming data streams. The main idea is to build a series of SOMs over the data streams via two operations, namely creating and merging the SOM sequences. The creation phase produces SOM sequence entries for windows of the data, which capture the clustering information of the incoming data streams. The size of the SOM sequences can be further reduced by combining consecutive entries in the sequence based on the Kullback-Leibler divergence. Finally, the probability density functions over arbitrary time periods along the data streams can be estimated using such SOM sequences. We compare SOMKE with two other KDE methods for data streams, the M-kernel approach and the cluster kernel approach, in terms of accuracy and processing time for various stationary data streams. Furthermore, we also investigate the use of SOMKE over nonstationary (evolving) data streams, including a synthetic nonstationary data stream, a real-world financial data stream and a group of network traffic data streams. The simulation results illustrate the effectiveness and efficiency of the proposed approach.

  2. Sap Flow Estimate Of Watershed-Scale Transpiration

    NASA Astrophysics Data System (ADS)

    Kumagai, T.; Aoki, S.; Shimizu, T.; Otsuki, K.

    2006-12-01

    The present study examined how to obtain sufficient information to extrapolate watershed-scale transpiration in a Japanese cedar (Cryptomeria japonica D. Don) forest from sap flow measurements of available individual trees. In this study, we conducted measurements of tree biometrics and tree-to-tree and radial variations in xylem sap flux density (Fd) in two different stand plots, an upper slope plot (UP) and lower slope plot (LP), during the growing season with significant variations in environmental factors. The manner in which the mean stand sap flux density (JS) and tree stem allometric relationship (diameter at breast height (DBH) versus sapwood area (AS_tree)) vary between the two stands along the slope of the watershed was then investigated. After these analyses, appropriate sample sizes for estimations of representative JS values in the stand were also determined. The results demonstrated that a unique or general function allowed description of the allometric relationship along the slope, but the data for its formulation needed to be obtained in both UP and LP. They also revealed that JS in UP and LP were almost the same during the study period despite differences in tree density and size between the two plots. This implies that JS measured in a partial stand within a watershed is a reasonable estimator of the values of other stands, and that stand sapwood area calculated by AS_tree is a strong determinant of water-use in a forest watershed. To estimate JS in both an UP and LP, at least 10 trees should be sampled, but not necessarily more than this.

  3. Thermoelectric properties of bismuth telluride nanoplate thin films determined using combined infrared spectroscopy and first-principles calculation

    NASA Astrophysics Data System (ADS)

    Wada, Kodai; Tomita, Koji; Takashiri, Masayuki

    2018-06-01

    The thermoelectric properties of bismuth telluride (Bi2Te3) nanoplate thin films were estimated using combined infrared spectroscopy and first-principles calculation, followed by comparing the estimated properties with those obtained using the standard electrical probing method. Hexagonal single-crystalline Bi2Te3 nanoplates were first prepared using solvothermal synthesis, followed by preparing Bi2Te3 nanoplate thin films using the drop-casting technique. The nanoplates were joined by thermally annealing them at 250 °C in Ar (95%)–H2 (5%) gas (atmospheric pressure). The electronic transport properties were estimated by infrared spectroscopy using the Drude model, with the effective mass being determined from the band structure using first-principles calculations based on the density functional theory. The electrical conductivity and Seebeck coefficient obtained using the combined analysis were higher than those obtained using the standard electrical probing method, probably because the contact resistance between the nanoplates was excluded from the estimation procedure of the combined analysis method.

  4. Assessment of the reliability of human corneal endothelial cell-density estimates using a noncontact specular microscope.

    PubMed

    Doughty, M J; Müller, A; Zaman, M L

    2000-03-01

    We sought to determine the variance in endothelial cell density (ECD) estimates for human corneal endothelia. Noncontact specular micrographs were obtained from white subjects without any history of contact lens wear, or major eye disease or surgery; subjects were within four age groups (children, young adults, older adults, senior citizens). The endothelial image was scanned, and the areas of ≥75 cells measured from an overlay by planimetry. The cell-area values were used to calculate the ECD repeatedly so that the intra- and intersubject variation in an average ECD estimate could be assessed using different numbers of cells (5, 10, 15, etc.). An average ECD of 3,519 cells/mm2 (range, 2,598-5,312 cells/mm2) was obtained from counts of 75 cells/endothelium from individuals aged 6-83 years. Average ECD estimates in each age group were 4,124, 3,457, 3,360, and 3,113 cells/mm2, respectively. Analysis of intersubject variance revealed that ECD estimates would be expected to be no better than ±10% if only 25 cells were measured per endothelium, but approach ±2% if 75 cells are measured. In assessing the corneal endothelium by noncontact specular microscopy, the cell count should be given, and this should be ≥75/endothelium for the expected variance to be at a level close to that recommended for monitoring age-, stress-, or surgery-related changes.
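
    The dependence of ECD precision on the number of cells measured can be illustrated with a small resampling sketch (illustrative only; the cell-area distribution below is simulated, not the study's data): ECD is computed as the number of cells divided by their summed area, and its spread is examined for different sample sizes.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated endothelial cell areas (mm^2) with mean ~1/3500 mm^2 (assumed spread).
true_mean_area = 1.0 / 3500.0
cell_areas = rng.gamma(shape=5.0, scale=true_mean_area / 5.0, size=5000)

def ecd_estimate(areas):
    """Endothelial cell density: number of cells divided by the area they cover."""
    return areas.size / areas.sum()

for n_cells in (5, 25, 75):
    estimates = np.array([
        ecd_estimate(rng.choice(cell_areas, size=n_cells, replace=False))
        for _ in range(2000)
    ])
    spread = 100.0 * estimates.std() / estimates.mean()
    print(f"{n_cells:3d} cells measured -> ECD spread ~ +/-{spread:.1f}%")
```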

  5. A new approach to evaluate gamma-ray measurements

    NASA Technical Reports Server (NTRS)

    Dejager, O. C.; Swanepoel, J. W. H.; Raubenheimer, B. C.; Vandervalt, D. J.

    1985-01-01

    Misunderstandings about the term random sample and its implications may easily arise. Conditions under which the phases obtained from arrival times do not form a random sample, and the dangers involved, are discussed. Watson's U2 test for uniformity is recommended for light curves with duty cycles larger than 10%. Under certain conditions, non-parametric density estimation may be used to determine estimates of the true light curve and its parameters.

  6. Plasma dynamics near critical density inferred from direct measurements of laser hole boring

    NASA Astrophysics Data System (ADS)

    Gong, Chao; Tochitsky, Sergei Ya.; Fiuza, Frederico; Pigeon, Jeremy J.; Joshi, Chan

    2016-06-01

    We have used multiframe picosecond optical interferometry to make direct measurements of the hole boring velocity, vHB, of the density cavity pushed forward by a train of CO2 laser pulses in a near critical density helium plasma. As the pulse train intensity rises, the increasing radiation pressure of each pulse pushes the density cavity forward and the plasma electrons are strongly heated. After the peak laser intensity, the plasma pressure exerted by the heated electrons strongly impedes the hole boring process and the vHB falls rapidly as the laser pulse intensity falls at the back of the laser pulse train. A heuristic theory is presented that allows the estimation of the plasma electron temperature from the measurements of the hole boring velocity. The measured values of vHB, and the estimated values of the heated electron temperature as a function of laser intensity are in reasonable agreement with those obtained from two-dimensional numerical simulations.

  7. Plasma dynamics near critical density inferred from direct measurements of laser hole boring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Chao; Tochitsky, Sergei Ya.; Fiuza, Frederico

    Here, we use multiframe picosecond optical interferometry to make direct measurements of the hole boring velocity, vHB, of the density cavity pushed forward by a train of CO2 laser pulses in a near critical density helium plasma. As the pulse train intensity rises, the increasing radiation pressure of each pulse pushes the density cavity forward and the plasma electrons are strongly heated. After the peak laser intensity, the plasma pressure exerted by the heated electrons strongly impedes the hole boring process and the vHB falls rapidly as the laser pulse intensity falls at the back of the laser pulse train. We present a heuristic theory that allows the estimation of the plasma electron temperature from the measurements of the hole boring velocity. Furthermore, the measured values of vHB, and the estimated values of the heated electron temperature as a function of laser intensity are in reasonable agreement with those obtained from two-dimensional numerical simulations.

  8. The spatial distribution of fixed mutations within genes coding for proteins

    NASA Technical Reports Server (NTRS)

    Holmquist, R.; Goodman, M.; Conroy, T.; Czelusniak, J.

    1983-01-01

    An examination has been conducted of the extensive amino acid sequence data now available for five protein families - the alpha crystallin A chain, myoglobin, alpha and beta hemoglobin, and the cytochromes c - with the goal of estimating the true spatial distribution of base substitutions within genes that code for proteins. In every case the commonly used Poisson density failed to even approximate the experimental pattern of base substitution. For the 87 species of beta hemoglobin examined, for example, the probability that the observed results were from a Poisson process was the minuscule 10^-44. Analogous results were obtained for the other functional families. All the data were reasonably, but not perfectly, described by the negative binomial density. In particular, most of the data were described by one of the very simple limiting forms of this density, the geometric density. The implications of this for evolutionary inference are discussed. It is evident that most estimates of total base substitutions between genes are badly in need of revision.

  9. Plasma dynamics near critical density inferred from direct measurements of laser hole boring.

    PubMed

    Gong, Chao; Tochitsky, Sergei Ya; Fiuza, Frederico; Pigeon, Jeremy J; Joshi, Chan

    2016-06-01

    We have used multiframe picosecond optical interferometry to make direct measurements of the hole boring velocity, vHB, of the density cavity pushed forward by a train of CO2 laser pulses in a near critical density helium plasma. As the pulse train intensity rises, the increasing radiation pressure of each pulse pushes the density cavity forward and the plasma electrons are strongly heated. After the peak laser intensity, the plasma pressure exerted by the heated electrons strongly impedes the hole boring process and the vHB falls rapidly as the laser pulse intensity falls at the back of the laser pulse train. A heuristic theory is presented that allows the estimation of the plasma electron temperature from the measurements of the hole boring velocity. The measured values of vHB, and the estimated values of the heated electron temperature as a function of laser intensity are in reasonable agreement with those obtained from two-dimensional numerical simulations.

  10. Plasma dynamics near critical density inferred from direct measurements of laser hole boring

    DOE PAGES

    Gong, Chao; Tochitsky, Sergei Ya.; Fiuza, Frederico; ...

    2017-06-24

    Here, we use multiframe picosecond optical interferometry to make direct measurements of the hole boring velocity, vHB, of the density cavity pushed forward by a train of CO2 laser pulses in a near critical density helium plasma. As the pulse train intensity rises, the increasing radiation pressure of each pulse pushes the density cavity forward and the plasma electrons are strongly heated. After the peak laser intensity, the plasma pressure exerted by the heated electrons strongly impedes the hole boring process and the vHB falls rapidly as the laser pulse intensity falls at the back of the laser pulse train. We present a heuristic theory that allows the estimation of the plasma electron temperature from the measurements of the hole boring velocity. Furthermore, the measured values of vHB, and the estimated values of the heated electron temperature as a function of laser intensity are in reasonable agreement with those obtained from two-dimensional numerical simulations.

  11. Total photoelectron yield spectroscopy of energy distribution of electronic states density at GaN surface and SiO2/GaN interface

    NASA Astrophysics Data System (ADS)

    Ohta, Akio; Truyen, Nguyen Xuan; Fujimura, Nobuyuki; Ikeda, Mitsuhisa; Makihara, Katsunori; Miyazaki, Seiichi

    2018-06-01

    The energy distribution of the electronic state density of wet-cleaned epitaxial GaN surfaces and SiO2/GaN structures has been studied by total photoelectron yield spectroscopy (PYS). By X-ray photoelectron spectroscopy (XPS) analysis, the energy band diagram of a wet-cleaned epitaxial GaN surface, including the energy level of the valence band top and the electron affinity, has been determined to obtain a better understanding of the measured PYS signals. The electronic state density of GaN surfaces with different carrier concentrations has been evaluated in the energy region corresponding to the GaN bandgap. The interface defect state density of SiO2/GaN structures was also estimated not only from PYS analysis but also from capacitance–voltage (C–V) characteristics. We have demonstrated that PYS analysis enables the evaluation of the density of defect states filled with electrons at the SiO2/GaN interface in the energy region corresponding to the GaN midgap, which is difficult to estimate by C–V measurement of MOS capacitors.

  12. Notes on the birth-death prior with fossil calibrations for Bayesian estimation of species divergence times.

    PubMed

    Dos Reis, Mario

    2016-07-19

    Constructing a multi-dimensional prior on the times of divergence (the node ages) of species in a phylogeny is not a trivial task, in particular, if the prior density is the result of combining different sources of information such as a speciation process with fossil calibration densities. Yang & Rannala (2006 Mol. Biol. Evol 23, 212-226. (doi:10.1093/molbev/msj024)) laid out the general approach to combine the birth-death process with arbitrary fossil-based densities to construct a prior on divergence times. They achieved this by calculating the density of node ages without calibrations conditioned on the ages of the calibrated nodes. Here, I show that the conditional density obtained by Yang & Rannala is misspecified. The misspecified density can sometimes be quite strange-looking and can lead to unintentionally informative priors on node ages without fossil calibrations. I derive the correct density and provide a few illustrative examples. Calculation of the density involves a sum over a large set of labelled histories, and so obtaining the density in a computer program seems hard at the moment. A general algorithm that may provide a way forward is given.This article is part of the themed issue 'Dating species divergences using rocks and clocks'. © 2016 The Author(s).

  13. Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation

    NASA Astrophysics Data System (ADS)

    Demir, Uygar; Toker, Cenk; Çenet, Duygu

    2016-07-01

    Statistical analysis of the ionosphere, specifically of the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as Gaussian, Exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase the estimation error, and all information extracted from such a pdf will continue to contain this error. With such techniques, it is highly likely that artificial characteristics not present in the original data will appear in the estimated pdf. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained compared to the techniques mentioned above. KDE is particularly good at representing tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey, where the TEC values are estimated from GNSS measurements from the TNPGN-Active (Turkish National Permanent GNSS Network) network. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR14/001 projects.
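
    A minimal example of the non-parametric estimate described above (a generic Gaussian KDE applied to synthetic data, not the study's TEC measurements) is shown below, alongside the sample mean, variance, and kurtosis.

```python
import numpy as np
from scipy.stats import gaussian_kde, kurtosis

# Synthetic "TEC-like" sample (TECU): a skewed, heavy-tailed mixture (assumed).
rng = np.random.default_rng(6)
tec = np.concatenate([rng.normal(20, 3, 900), rng.normal(35, 6, 100)])

kde = gaussian_kde(tec)                      # bandwidth chosen by Scott's rule
grid = np.linspace(tec.min() - 5, tec.max() + 5, 500)
pdf = kde(grid)

print(f"mean     = {tec.mean():.2f} TECU")
print(f"variance = {tec.var():.2f} TECU^2")
print(f"kurtosis = {kurtosis(tec):.2f} (excess)")
print(f"pdf mode near {grid[np.argmax(pdf)]:.1f} TECU")
```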

  14. Estimating black bear population density and genetic diversity at Tensas River, Louisiana using microsatellite DNA markers

    USGS Publications Warehouse

    Boersen, Mark R.; Clark, Joseph D.; King, Tim L.

    2003-01-01

    The Recovery Plan for the federally threatened Louisiana black bear (Ursus americanus luteolus) mandates that remnant populations be estimated and monitored. In 1999 we obtained genetic material with barbed-wire hair traps to estimate bear population size and genetic diversity at the 329-km2 Tensas River Tract, Louisiana. We constructed and monitored 122 hair traps, which produced 1,939 hair samples. Of those, we randomly selected 116 subsamples for genetic analysis and used up to 12 microsatellite DNA markers to obtain multilocus genotypes for 58 individuals. We used Program CAPTURE to compute estimates of population size using multiple mark-recapture models. The area of study was almost entirely circumscribed by agricultural land, thus the population was geographically closed. Also, study-area boundaries were biologically discreet, enabling us to accurately estimate population density. Using model Chao Mh to account for possible effects of individual heterogeneity in capture probabilities, we estimated the population size to be 119 (SE=29.4) bears, or 0.36 bears/km2. We were forced to examine a substantial number of loci to differentiate between some individuals because of low genetic variation. Despite the probable introduction of genes from Minnesota bears in the 1960s, the isolated population at Tensas exhibited characteristics consistent with inbreeding and genetic drift. Consequently, the effective population size at Tensas may be as few as 32, which warrants continued monitoring or possibly genetic augmentation.

  15. Electron density distribution and solar plasma correction of radio signals using MGS, MEX, and VEX spacecraft navigation data and its application to planetary ephemerides

    NASA Astrophysics Data System (ADS)

    Verma, A. K.; Fienga, A.; Laskar, J.; Issautier, K.; Manche, H.; Gastineau, M.

    2013-02-01

    The Mars Global Surveyor (MGS), Mars Express (MEX), and Venus Express (VEX) experienced several superior solar conjunctions. These conjunctions cause severe degradations of radio signals when the line of sight between the Earth and the spacecraft passes near to the solar corona region. The primary objective of this work is to deduce a solar corona model from the spacecraft navigation data acquired at the time of solar conjunctions and to estimate its average electron density. The corrected or improved data are then used to fit the dynamical modeling of the planet motions, called planetary ephemerides. We analyzed the radio science raw data of the MGS spacecraft using the orbit determination software GINS. The range bias, obtained from GINS and provided by ESA for MEX and VEX, are then used to derive the electron density profile. These profiles are obtained for different intervals of solar distances: from 12 R⊙ to 215 R⊙ for MGS, 6 R⊙ to 152 R⊙ for MEX, and from 12 R⊙ to 154 R⊙ for VEX. They are acquired for each spacecraft individually, for ingress and egress phases separately and both phases together, for different types of solar winds (fast, slow), and for solar activity phases (minimum, maximum). We compared our results with the previous estimations that were based on in situ measurements, and on solar type III radio and radio science studies made at different phases of solar activity and at different solar wind states. Our results are consistent with estimations obtained by these different methods. Moreover, fitting the planetary ephemerides including complementary data that were corrected for the solar corona perturbations, noticeably improves the extrapolation capability of the planetary ephemerides and the estimation of the asteroids masses. Tables 5, 6 and Appendix A are available in electronic form at http://www.aanda.org

  16. SOME ASPECTS OF RECENT FINDINGS PERTAINING TO THE BODY COMPOSITION OF ATHLETES, OBESE INDIVIDUALS AND PATIENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behnke, A.R.; Taylor, W.L.

    Whole-body density determinations are reported for a small group of athletes (weight lifters), and an analysis is presented of data derived from several investigations in which similar techniques were employed to measure total body water and the total exchangeable sodium, chloride (bromine space), and potassium in the body. The mean value for body density (1.080) obtained on the athletes was similar to that obtained previously on professional football players and much higher than the mean value usually obtained on young men (~1.060). In addition to the low body fat content characteristic of the athletes, the ratio of exchangeable K_e to Cl_e was higher in these men than in men of average physique. In turn, the values for K_e/Cl_e were even lower in obese individuals and in patients. In healthy individuals, the sum (K_e + Cl_e) is highly correlated (r = 0.99) with total body water, and this finding provides an independent estimate of lean body weight. In patients afflicted with certain types of chronic diseases, particularly those associated with the edematous state, the exchangeable Na_e to K_e ratio is strikingly higher than it is in healthy individuals. Estimates of the amount of transudate in edematous patients may be made from analyses of total body water and total exchangeable Na_e and K_e. Additional determinations, such as whole-body density and red cell mass, are required to assess accurately the size of the lean body mass in these patients. Normal adult lean body size prior to illness may be estimated from skeletal measurements. (auth)

  17. Dense solar wind cloud geometries deduced from comparisons of radio signal delay and in situ plasma measurements

    NASA Technical Reports Server (NTRS)

    Landt, J. A.

    1974-01-01

    The geometries of dense solar wind clouds are estimated by comparing single-location measurements of the solar wind plasma with the average of the electron density obtained by radio signal delay measurements along a radio path between earth and interplanetary spacecraft. Several of these geometries agree with the current theoretical spatial models of flare-induced shock waves. A new class of spatially limited structures that contain regions with densities greater than any observed in the broad clouds is identified. The extent of a cloud was found to be approximately inversely proportional to its density.

  18. Density, refractive index, interfacial tension, and viscosity of ionic liquids [EMIM][EtSO4], [EMIM][NTf2], [EMIM][N(CN)2], and [OMA][NTf2] in dependence on temperature at atmospheric pressure.

    PubMed

    Fröba, Andreas P; Kremer, Heiko; Leipertz, Alfred

    2008-10-02

    The density, refractive index, interfacial tension, and viscosity of ionic liquids (ILs) [EMIM][EtSO4] (1-ethyl-3-methylimidazolium ethylsulfate), [EMIM][NTf2] (1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide), [EMIM][N(CN)2] (1-ethyl-3-methylimidazolium dicyanimide), and [OMA][NTf2] (trioctylmethylammonium bis(trifluoromethylsulfonyl)imide) were studied in dependence on temperature at atmospheric pressure both by conventional techniques and by surface light scattering (SLS). A vibrating tube densimeter was used for the measurement of density at temperatures from (273.15 to 363.15) K and the results have an expanded uncertainty (k = 2) of +/-0.02%. Using an Abbe refractometer, the refractive index was measured for temperatures between (283.15 and 313.15) K with an expanded uncertainty (k = 2) of about +/-0.0005. The interfacial tension was obtained from the pendant drop technique at a temperature of 293.15 K with an expanded uncertainty (k = 2) of +/-1%. For higher and lower temperatures, the interfacial tension was estimated by an adequate prediction scheme based on the datum at 293.15 K and the temperature dependence of density. For the ILs studied within this work, to a first-order approximation, the quantity directly accessible by the SLS technique was the ratio of surface tension to dynamic viscosity. By combining the experimental results of the SLS technique with density and interfacial tension from conventional techniques, the dynamic viscosity could be obtained for temperatures between (273.15 and 333.15) K with an estimated expanded uncertainty (k = 2) of less than +/-3%. The measured density, refractive index, and viscosity are represented by interpolating expressions with differences between the experimental and calculated values that are comparable with but always smaller than the expanded uncertainties (k = 2). Besides a comparison with the literature, the influence of structural variations on the thermophysical properties of the ILs is discussed in detail. The viscosities mostly agree with values reported in the literature within the combined estimated expanded uncertainties (k = 2) of the measurements, while our density and interfacial tension data differ from some literature values by more than +/-1% and +/-5%, respectively.

  19. Geometric characterization and simulation of planar layered elastomeric fibrous biomaterials

    DOE PAGES

    Carleton, James B.; D’Amore, Antonio; Feaver, Kristen R.; ...

    2014-10-13

    Many important biomaterials are composed of multiple layers of networked fibers. While there is a growing interest in modeling and simulation of the mechanical response of these biomaterials, a theoretical foundation for such simulations has yet to be firmly established. Moreover, correctly identifying and matching key geometric features is a critically important first step for performing reliable mechanical simulations. This paper addresses these issues in two ways. First, using methods of geometric probability, we develop theoretical estimates for the mean linear and areal fiber intersection densities for 2-D fibrous networks. These densities are expressed in terms of the fiber density and the orientation distribution function, both of which are relatively easy-to-measure properties. Second, we develop a random walk algorithm for geometric simulation of 2-D fibrous networks which can accurately reproduce the prescribed fiber density and orientation distribution function. Furthermore, the linear and areal fiber intersection densities obtained with the algorithm are in agreement with the theoretical estimates. Both theoretical and computational results are compared with those obtained by post-processing of scanning electron microscope images of actual scaffolds. These comparisons reveal difficulties inherent to resolving fine details of multilayered fibrous networks. Finally, the methods presented herein provide a rational means to define and generate key geometric features from experimentally measured or prescribed scaffold structural data.
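
    As a toy version of the kind of geometric simulation described above, the sketch below scatters straight fibres with a prescribed orientation distribution in a unit square and counts pairwise intersections, comparing the result with the classical isotropic estimate (total fibre length per unit area squared, divided by π). All parameters are illustrative, and the algorithm is not the authors' random-walk generator.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_fibres(n, length, theta_sampler):
    """Random straight fibres of fixed length in the unit square:
    centres uniform, orientations drawn from theta_sampler(n)."""
    c = rng.uniform(0.0, 1.0, size=(n, 2))
    th = theta_sampler(n)
    d = 0.5 * length * np.column_stack([np.cos(th), np.sin(th)])
    return c - d, c + d                      # segment end points

def seg_intersect(p1, p2, q1, q2):
    """True if segments p1-p2 and q1-q2 cross (collinear edge cases ignored)."""
    def cross(o, u, v):
        return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])
    return (cross(p1, p2, q1)*cross(p1, p2, q2) < 0 and
            cross(q1, q2, p1)*cross(q1, q2, p2) < 0)

n_fib, length = 400, 0.1
# isotropic orientation distribution as an example
starts, ends = make_fibres(n_fib, length, lambda n: rng.uniform(0, np.pi, n))

hits = sum(seg_intersect(starts[i], ends[i], starts[j], ends[j])
           for i in range(n_fib) for j in range(i + 1, n_fib))

# edge effects (fibres crossing the square boundary) are ignored here
line_density = n_fib * length               # total fibre length per unit area
print(f"simulated intersections per unit area: {hits:.0f}")
print(f"isotropic theoretical estimate:        {line_density**2 / np.pi:.0f}")
```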

  20. Drinking, driving, and crashing: a traffic-flow model of alcohol-related motor vehicle accidents.

    PubMed

    Gruenewald, Paul J; Johnson, Fred W

    2010-03-01

    This study examined the influence of on-premise alcohol-outlet densities and of drinking-driver densities on rates of alcohol-related motor vehicle crashes. A traffic-flow model is developed to represent geographic relationships between residential locations of drinking drivers, alcohol outlets, and alcohol-related motor vehicle crashes. Cross-sectional and time-series cross-sectional spatial analyses were performed using data collected from 144 geographic units over 4 years. Data were obtained from archival and survey sources in six communities. Archival data were obtained within community areas and measured activities of either the resident population or persons visiting these communities. These data included local and highway traffic flow, locations of alcohol outlets, population density, network density of the local roadway system, and single-vehicle nighttime (SVN) crashes. Telephone-survey data obtained from residents of the communities were used to estimate the size of the resident drinking and driving population. Cross-sectional analyses showed that effects relating on-premise densities to alcohol-related crashes were moderated by highway traffic flow. Depending on levels of highway traffic flow, 10% greater densities were related to 0% to 150% greater rates of SVN crashes. Time-series cross-sectional analyses showed that changes in the population pool of drinking drivers and on-premise densities interacted to increase SVN crash rates. A simple traffic-flow model can assess the effects of on-premise alcohol-outlet densities and of drinking-driver densities as they vary across communities to produce alcohol-related crashes. Analyses based on these models can usefully guide policy decisions on the siting of on-premise alcohol outlets.

  1. Degradation analysis in the estimation of photometric redshifts from non-representative training sets

    NASA Astrophysics Data System (ADS)

    Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.

    2018-07-01

    We perform an analysis of photometric redshifts estimated by using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations and in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by using both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split into samples with r-band cuts between 19.4 < r < 20.8. We obtain slightly better results with GPz on single-point z-phot estimates in the complete training set case; however, the photometric redshifts estimated with the ANNz2 algorithm allow us to obtain mildly better results in deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits well the spectroscopic distribution of galaxies in the mock testing set, but with a larger scatter. To complete this work, we perform an analysis of the impact on the detection of clusters via density of galaxies in a field by using the photometric redshifts obtained with a non-representative training set.

  2. Degradation analysis in the estimation of photometric redshifts from non-representative training sets

    NASA Astrophysics Data System (ADS)

    Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.

    2018-04-01

    We perform an analysis of photometric redshifts estimated by using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations and in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by using both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split into samples with r-band cuts between 19.4 < r < 20.8. We obtain slightly better results with GPz on single-point z-phot estimates in the complete training set case; however, the photometric redshifts estimated with the ANNz2 algorithm allow us to obtain mildly better results in deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits well the spectroscopic distribution of galaxies in the mock testing set, but with a larger scatter. To complete this work, we perform an analysis of the impact on the detection of clusters via density of galaxies in a field by using the photometric redshifts obtained with a non-representative training set.

  3. Density, temperature, and composition of the North American lithosphere—New insights from a joint analysis of seismic, gravity, and mineral physics data: 2. Thermal and compositional model of the upper mantle

    NASA Astrophysics Data System (ADS)

    Tesauro, Magdala; Kaban, Mikhail K.; Mooney, Walter D.; Cloetingh, Sierd A. P. L.

    2014-12-01

    Temperature and compositional variations of the North American (NA) lithospheric mantle are estimated using a new inversion technique introduced in Part 1, which allows us to jointly interpret seismic tomography and gravity data, taking into account depletion of the lithospheric mantle beneath the cratonic regions. The technique is tested using two tomography models (NA07 and SL2013sv) and different lithospheric density models. The first density model (Model I) reproduces the typical compositionally stratified lithospheric mantle, which is consistent with xenolith samples from the central Slave craton, while the second one (Model II) is based on the direct inversion of the residual gravity and residual topography. The results obtained, both in terms of temperature and composition, are more strongly influenced by the input models derived from seismic tomography than by the choice of lithospheric density Model I versus Model II. The final temperatures estimated in the Archean lithospheric root are up to 150°C higher than in the initial thermal models obtained using a laterally and vertically uniform "fertile" compositional model and are in agreement with temperatures derived from xenolith data. Therefore, the effect of the compositional variations cannot be neglected when temperatures of the cratonic lithospheric mantle are estimated. Strong negative compositional density anomalies (<-0.03 g/cm3), corresponding to Mg # (100 × Mg/(Mg + Fe)) >92, characterize the lithospheric mantle of the northwestern part of the Superior craton and the central parts of the Slave and Churchill cratons, according to both tomographic models. The largest discrepancies between the results based on different tomography models are observed in the Proterozoic regions, such as the Trans Hudson Orogen (THO), Rocky Mountains, and Colorado Plateau, which appear weakly depleted (>-0.025 g/cm3, corresponding to Mg # ˜91) when model NA07 is used, or locally characterized by high-density bodies when model SL2013sv is used. The former results are in agreement with those based on the interpretation of xenolith data. The high-density bodies might be interpreted as fragments of subducted slabs or as advection of the lithospheric mantle induced by the eastward-directed flat slab subduction. The selection of a seismic tomography model plays a significant role when estimating lithospheric density, temperature, and compositional heterogeneity. The consideration of the results of more than one model gives a more complete picture of the possible compositional variations within the NA lithospheric mantle.

  4. Motor unit number estimation based on high-density surface electromyography decomposition.

    PubMed

    Peng, Yun; He, Jinbao; Yao, Bo; Li, Sheng; Zhou, Ping; Zhang, Yingchun

    2016-09-01

    To advance the motor unit number estimation (MUNE) technique using high-density surface electromyography (EMG) decomposition. The K-means clustering convolution kernel compensation algorithm was employed to detect the single motor unit potentials (SMUPs) from high-density surface EMG recordings of the biceps brachii muscles in eight healthy subjects. Contraction forces were controlled at 10%, 20% and 30% of the maximal voluntary contraction (MVC). Achieved MUNE results and the representativeness of the SMUP pools were evaluated using a high-density weighted-average method. Mean numbers of motor units were estimated as 288±132, 155±87, 107±99 and 132±61 using the new MUNE method at 10%, 20%, 30% and 10-30% MVCs, respectively. Over 20 SMUPs were obtained at each contraction level, and the mean residual variances were lower than 10%. The new MUNE method allows convenient and non-invasive collection of a large, representative SMUP pool. It provides a useful tool for estimating the motor unit number of proximal muscles. The present MUNE method successfully avoids the use of intramuscular electrodes or multiple electrical stimuli that are required in currently available MUNE techniques; as such, it can minimize patient discomfort during MUNE tests. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  5. Sensor Fusion Based on an Integrated Neural Network and Probability Density Function (PDF) Dual Kalman Filter for On-Line Estimation of Vehicle Parameters and States.

    PubMed

    Vargas-Melendez, Leandro; Boada, Beatriz L; Boada, Maria Jesus L; Gauchia, Antonio; Diaz, Vicente

    2017-04-29

    Vehicles with a high center of gravity (COG), such as light trucks and heavy vehicles, are prone to rollover. This kind of accident causes nearly 33% of all deaths from passenger vehicle crashes. Nowadays, these vehicles are incorporating roll stability control (RSC) systems to improve their safety. Most of the RSC systems require the vehicle roll angle as a known input variable to predict the lateral load transfer. The vehicle roll angle can be directly measured by a dual antenna global positioning system (GPS), but it is expensive. For this reason, it is important to estimate the vehicle roll angle from sensors installed onboard in current vehicles. On the other hand, knowledge of the vehicle's parameter values is essential to obtain an accurate vehicle response. Some vehicle parameters cannot be easily obtained, and they can vary over time. In this paper, an algorithm for the simultaneous on-line estimation of the vehicle's roll angle and parameters is proposed. This algorithm uses a probability density function (PDF)-based truncation method in combination with a dual Kalman filter (DKF) to guarantee that both the vehicle's states and parameters remain within bounds that have a physical meaning, using the information obtained from sensors mounted on vehicles. Experimental results show the effectiveness of the proposed algorithm.

  6. Sensor Fusion Based on an Integrated Neural Network and Probability Density Function (PDF) Dual Kalman Filter for On-Line Estimation of Vehicle Parameters and States

    PubMed Central

    Vargas-Melendez, Leandro; Boada, Beatriz L.; Boada, Maria Jesus L.; Gauchia, Antonio; Diaz, Vicente

    2017-01-01

    Vehicles with a high center of gravity (COG), such as light trucks and heavy vehicles, are prone to rollover. This kind of accident causes nearly 33% of all deaths from passenger vehicle crashes. Nowadays, these vehicles are incorporating roll stability control (RSC) systems to improve their safety. Most of the RSC systems require the vehicle roll angle as a known input variable to predict the lateral load transfer. The vehicle roll angle can be directly measured by a dual antenna global positioning system (GPS), but it is expensive. For this reason, it is important to estimate the vehicle roll angle from sensors installed onboard in current vehicles. On the other hand, knowledge of the vehicle’s parameter values is essential to obtain an accurate vehicle response. Some vehicle parameters cannot be easily obtained, and they can vary over time. In this paper, an algorithm for the simultaneous on-line estimation of the vehicle’s roll angle and parameters is proposed. This algorithm uses a probability density function (PDF)-based truncation method in combination with a dual Kalman filter (DKF) to guarantee that both the vehicle’s states and parameters remain within bounds that have a physical meaning, using the information obtained from sensors mounted on vehicles. Experimental results show the effectiveness of the proposed algorithm. PMID:28468252
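
    The core constraint-handling idea referred to above can be illustrated on a scalar example: after each Kalman update, the Gaussian estimate is replaced by the mean and variance of that Gaussian truncated to the physically admissible interval. The sketch below is a generic illustration of this truncation step, not the authors' vehicle model; the bounds, noise levels, and measurements are invented for the example.

```python
import numpy as np
from scipy.stats import norm

def truncate_gaussian(mu, var, lo, hi):
    """Mean and variance of a Gaussian N(mu, var) truncated to [lo, hi];
    the scalar version of the PDF-truncation idea."""
    s = np.sqrt(var)
    a, b = (lo - mu) / s, (hi - mu) / s
    z = norm.cdf(b) - norm.cdf(a)
    pa, pb = norm.pdf(a), norm.pdf(b)
    mu_t = mu + s * (pa - pb) / z
    var_t = var * (1 + (a * pa - b * pb) / z - ((pa - pb) / z) ** 2)
    return mu_t, var_t

# scalar Kalman filter for a roll-angle-like state, truncated to +/-10 deg
x_est, p_est = 0.0, 4.0          # initial state estimate and variance
q, r = 0.01, 1.0                 # process and measurement noise variances
bounds = (-10.0, 10.0)           # assumed physical limits on the state

for z_meas in [2.0, 11.5, 9.0, 8.5]:          # hypothetical measurements
    p_pred = p_est + q                        # predict (random-walk model)
    k = p_pred / (p_pred + r)                 # Kalman gain
    x_est = x_est + k * (z_meas - x_est)      # measurement update
    p_est = (1 - k) * p_pred
    x_est, p_est = truncate_gaussian(x_est, p_est, *bounds)
    print(f"state {x_est:6.2f} deg, variance {p_est:5.3f}")
```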

  7. A new estimation of HD/2H2 at high redshift using the spectrum of the quasar J 2123-0050

    NASA Astrophysics Data System (ADS)

    Klimenko, V. V.; Balashev, S. A.; Ivanchik, A. V.; Varshalovich, D. A.

    2015-12-01

    We present a new analysis of the spectrum of the quasar J 2123-0050 obtained using VLT/UVES. The H2/HD absorption system at z = 2.059 was analysed. This system consists of two subsystems with zA = 2.05933 and zB = 2.05955. The HD lines have been detected only in subsystem A, with a column density of log N = 13.87 ± 0.06. We have determined the column density of H2 in this subsystem, log N = 17.93 ± 0.01, which is about three times larger than the estimate derived earlier from an analysis of the quasar spectrum obtained using Keck/HIRES [1]. The derived ratio HD/2H2 = (4.28 ± 0.60) × 10⁻⁵ is the largest measured in quasar spectra; nevertheless, it is consistent with the primordial deuterium abundance within the 2σ error. Additionally, we have found some evidence of a partial covering effect for the H2 system.
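
    The quoted ratio and its uncertainty follow directly from the two column densities; as a worked check, the short script below propagates the errors of the logarithms, assuming they are Gaussian and independent.

```python
import numpy as np

log_hd, sig_hd = 13.87, 0.06      # log10 column densities quoted in the abstract
log_h2, sig_h2 = 17.93, 0.01

ratio = 10 ** (log_hd - log_h2) / 2.0          # HD / 2H2
sig_log = np.hypot(sig_hd, sig_h2)             # error of the log of the ratio
sig_ratio = ratio * np.log(10) * sig_log       # first-order error propagation

print(f"HD/2H2 = ({ratio*1e5:.2f} +/- {sig_ratio*1e5:.2f}) x 10^-5")
# prints ~ (4.35 +/- 0.61) x 10^-5, consistent with the quoted
# (4.28 +/- 0.60) x 10^-5 given rounding of the logarithms
```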

  8. Can pelagic forage fish and spawning cisco (Coregonus artedi) biomass in the western arm of Lake Superior be assessed with a single summer survey?

    USGS Publications Warehouse

    Yule, D.L.; Stockwell, J.D.; Schreiner, D.R.; Evrard, L.M.; Balge, M.; Hrabik, T.R.

    2009-01-01

    Management efforts to rehabilitate lake trout Salvelinus namaycush in Lake Superior have been successful and the recent increase in their numbers has led to interest in measuring biomass of pelagic prey fish species important to these predators. Lake Superior cisco Coregonus artedi currently support roe fisheries and determining the sustainability of these fisheries is an important management issue. We conducted acoustic and midwater trawl surveys of the western arm of Lake Superior during three periods: summer (July-August), October, and November 2006 to determine if a single survey can be timed to estimate biomass of both prey fish and spawning cisco. We evaluated our methods by comparing observed trawl catches of small (<250 mm total length) and large fish to expected trawl catches based on acoustic densities in the trawl path. We found the relationship between observed and expected catches approached unity over a wide range of densities, suggesting that our acoustic method provided reasonable estimates of fish density, and that midwater trawling methods were free of species- and size-selectivity issues. Rainbow smelt Osmerus mordax was by number the most common species captured in the nearshore (<80 m bathymetric depth) stratum during all three surveys, while kiyi Coregonus kiyi was predominant offshore except during November. Total biomass estimates of rainbow smelt in the western arm were similar during all three surveys, while total biomass of kiyi was similar between summer and October, but was lower in November. Total biomass of large cisco increased substantially in November, while small bloater Coregonus hoyi biomass was lower. We compared our summer 2006 estimates of total fish biomass to the results of a summer survey in 1997 and obtained similar results. We conclude that the temporal window for obtaining biomass estimates of pelagic prey species in the western arm of Lake Superior is wide (July through October), but estimating spawning cisco abundance is best done with a November survey.

  9. Influence of entanglements on glass transition temperature of polystyrene

    NASA Astrophysics Data System (ADS)

    Ougizawa, Toshiaki; Kinugasa, Yoshinori

    2013-03-01

    Chain entanglement is an essential feature of polymeric molecules and affects many physical properties, including not only the melt viscosity but also the glass transition temperature (Tg). Quantitative estimation of this effect has been difficult, however, because the entanglement density is usually regarded as an intrinsic property of the polymer melt that depends on its chemical structure. Freeze-drying is known as one of the few ways to prepare samples with different entanglement densities from dilute solution. In this study, the influence of entanglements on the Tg of polystyrene prepared by freeze-drying was estimated quantitatively. The freeze-dried samples showed a Tg depression that increased with decreasing concentration of the precursor solution, reflecting the lower entanglement density, and the depression saturated when almost no intermolecular entanglements remained. The molecular weight dependence of the maximum Tg depression is also discussed.

  10. Can roads be used as transects for primate population surveys?.

    PubMed

    Hilário, Renato R; Rodrigues, Flávio H G; Chiarello, Adriano G; Mourthé, Italo

    2012-01-01

    Line transect distance sampling (LTDS) can be applied to either trails or roads. However, it is likely that sampling along roads might result in biased density estimates. In this paper, we compared the results obtained with LTDS applied on trails and roads for two primate species (Callithrix penicillata and Callicebus nigrifrons) to clarify whether roads are appropriate transects to estimate densities. We performed standard LTDS surveys in two nature reserves in south-eastern Brazil. Effective strip width and population density were different between trails and roads for C. penicillata, but not for C. nigrifrons. The results suggest that roads are not appropriate for use as transects in primate surveys, at least for some species. Further work is required to fully understand this issue, but in the meantime we recommend that researchers avoid using roads as transects or treat roads and trails as covariates when sampling on roads is unavoidable. Copyright © 2012 S. Karger AG, Basel.

  11. On the Correlation Between Biomass and the P-Band Polarisation Phase Difference, and Its Potential for Biomass and Tree Number Density Estimation

    NASA Astrophysics Data System (ADS)

    Soja, Maciej J.; Blomberg, Erik; Ulander, Lars M. H.

    2015-04-01

    In this paper, a significant correlation between the HH/VV phase difference (polarisation phase difference, PPD) and the above-ground biomass (AGB) is observed for incidence angles above 30° in airborne P-band SAR data acquired over two boreal test sites in Sweden. A geometric model is used to explain the dependence of the AGB on tree height, stem radius, and tree number density, whereas a cylinder-over-ground model is used to explain the dependence of the PPD on the same three forest parameters. The models show that forest anisotropy needs to be accounted for at P-band in order to obtain a linear relationship between the PPD and the AGB. An approach to the estimation of tree number density is proposed, based on a comparison between the modelled and observed PPDs.

  12. Ionospheric Plasma Drift Analysis Technique Based On Ray Tracing

    NASA Astrophysics Data System (ADS)

    Ari, Gizem; Toker, Cenk

    2016-07-01

    Ionospheric drift measurements provide important information about the variability in the ionosphere, which can be used to quantify ionospheric disturbances caused by natural phenomena such as solar, geomagnetic, gravitational and seismic activities. One prominent approach to drift measurement relies on dedicated instruments such as ionosondes. An ionosonde estimates drift by measuring the Doppler shift of the received signal, the main cause of which is the change in the length of the propagation path between the transmitter and the receiver. Unfortunately, ionosondes are expensive devices and their installation and maintenance require special care. Furthermore, the global, or even the European, ionosonde network is not dense enough to obtain a global or continental drift map. To overcome these difficulties, we propose a technique to perform ionospheric drift estimation based on ray tracing. First, a two-dimensional TEC map is constructed by using the IONOLAB-MAP tool, which spatially interpolates the VTEC estimates obtained from the EUREF CORS network. Next, a three-dimensional electron density profile is generated by inputting the TEC estimates to the IRI-2015 model. The result is a close-to-realistic electron density profile in which ray tracing can be performed. These profiles can be constructed periodically, with a period as short as 30 seconds. By processing two consecutive snapshots together and calculating the propagation paths, we estimate the drift over any coordinate of interest. We test our technique by comparing the results to the drift measurements taken at the DPS ionosonde at Pruhonice, Czech Republic. This study is supported by TUBITAK 115E915 and Joint TUBITAK 114E092 and AS CR14/001 projects.

  13. Estimation of laser beam pointing parameters in the presence of atmospheric turbulence.

    PubMed

    Borah, Deva K; Voelz, David G

    2007-08-10

    The problem of estimating mechanical boresight and jitter performance of a laser pointing system in the presence of atmospheric turbulence is considered. A novel estimator based on maximizing an average probability density function (pdf) of the received signal is presented. The proposed estimator uses a Gaussian far-field mean irradiance profile, and the irradiance pdf is assumed to be lognormal. The estimates are obtained using a sequence of return signal values from the intended target. Alternatively, one can think of the estimates being made by a cooperative target using the received signal samples directly. The estimator does not require sample-to-sample atmospheric turbulence parameter information. The approach is evaluated using wave optics simulation for both weak and strong turbulence conditions. Our results show that very good boresight and jitter estimation performance can be obtained under the weak turbulence regime. We also propose a novel technique to include the effect of very low received intensity values that cannot be measured well by the receiving device. The proposed technique provides significant improvement over a conventional approach where such samples are simply ignored. Since our method is derived from the lognormal irradiance pdf, the performance under strong turbulence is degraded. However, the ideas can be extended with appropriate pdf models to obtain more accurate results under strong turbulence conditions.

  14. Land use, forest density, soil mapping, erosion, drainage, salinity limitations

    NASA Technical Reports Server (NTRS)

    Yassoglou, N. J. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. The results of analyses show that it is possible to obtain information of practical significance as follows: (1) A quick and accurate estimate of the proper use of the valuable land can be made on the basis of temporal and spectral characteristics of the land features. (2) A rather accurate delineation of the major forest formations in the test areas was achieved on the basis of spatial and spectral characteristics of the studied areas. The forest stands were separated into two density classes: dense forest and broken forest. On the basis of ERTS-1 data and the existing ground truth information, a rather accurate mapping of the major vegetational forms of the mountain ranges can be made. (3) Major soil formations are mappable from ERTS-1 data: recent alluvial soils; soil on quaternary deposits; severely eroded soil and lithosol; and wet soils. (4) An estimation of cost benefits cannot be made accurately at this stage of the investigation. However, a rough estimate of the ratio of the cost of obtaining the same amount of information from ERTS-1 data and from conventional operations would be approximately 1:6 to 1:10, in favor of ERTS-1.

  15. Estimation of body density based on hydrostatic weighing without head submersion in young Japanese adults.

    PubMed

    Demura, S; Sato, S; Kitabayashi, T

    2006-06-01

    This study examined a method of predicting body density based on hydrostatic weighing without head submersion (HWwithoutHS). Donnelly and Sintek (1984) developed a method to predict body density based on hydrostatic weight without head submersion. This method predicts the difference (D) between HWwithoutHS and hydrostatic weight with head submersion (HWwithHS) from anthropometric variables (head length and head width), and then calculates body density using D as a correction factor. We developed several prediction equations to estimate D based on head anthropometry and differences between the sexes, and compared their prediction accuracy with Donnelly and Sintek's equation. Thirty-two males and 32 females aged 17-26 years participated in the study. Multiple linear regression analysis was performed to obtain the prediction equations, and the systematic errors of their predictions were assessed by Bland-Altman plots. The best prediction equations obtained were: Males: D(g) = -164.12X1 - 125.81X2 - 111.03X3 + 100.66X4 + 6488.63, where X1 = head length (cm), X2 = head circumference (cm), X3 = head breadth (cm), X4 = head thickness (cm) (R = 0.858, R² = 0.737, adjusted R² = 0.687, standard error of the estimate = 224.1); Females: D(g) = -156.03X1 - 14.03X2 - 38.45X3 - 8.87X4 + 7852.45, where X1 = head circumference (cm), X2 = body mass (g), X3 = head length (cm), X4 = height (cm) (R = 0.913, R² = 0.833, adjusted R² = 0.808, standard error of the estimate = 137.7). The effective predictors in these prediction equations differed from those of Donnelly and Sintek's equation, and head circumference and head length were included in both equations. The prediction accuracy was improved by statistically selecting effective predictors. Since we did not assess cross-validity, the equations cannot be used to generalize to other populations, and further investigation is required.
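
    For reference, the two quoted regression equations transcribe directly into code; the correction D they return would then be combined with the measured hydrostatic weight in the usual body-density calculation, which is not reproduced in the abstract. The example call uses invented measurements, and the units are exactly as quoted above.

```python
def correction_d_male(head_length_cm, head_circumference_cm,
                      head_breadth_cm, head_thickness_cm):
    """Predicted D (g) = hydrostatic weight without head submersion minus
    hydrostatic weight with head submersion, male equation as quoted."""
    return (-164.12 * head_length_cm - 125.81 * head_circumference_cm
            - 111.03 * head_breadth_cm + 100.66 * head_thickness_cm + 6488.63)

def correction_d_female(head_circumference_cm, body_mass_g,
                        head_length_cm, height_cm):
    """Predicted D (g), female equation as quoted."""
    return (-156.03 * head_circumference_cm - 14.03 * body_mass_g
            - 38.45 * head_length_cm - 8.87 * height_cm + 7852.45)

# hypothetical male subject (head dimensions in cm, as in the quoted equation)
print(correction_d_male(18.5, 56.0, 15.2, 21.0), "g")
```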

  16. A generalised random encounter model for estimating animal density with remote sensor data.

    PubMed

    Lucas, Tim C D; Moorcroft, Elizabeth A; Freeman, Robin; Rowcliffe, J Marcus; Jones, Kate E

    2015-05-01

    Wildlife monitoring technology is advancing rapidly and the use of remote sensors such as camera traps and acoustic detectors is becoming common in both the terrestrial and marine environments. Current methods to estimate abundance or density require individual recognition of animals or knowing the distance of the animal from the sensor, which is often difficult. A method without these requirements, the random encounter model (REM), has been successfully applied to estimate animal densities from count data generated from camera traps. However, count data from acoustic detectors do not fit the assumptions of the REM due to the directionality of animal signals. We developed a generalised REM (gREM), to estimate absolute animal density from count data from both camera traps and acoustic detectors. We derived the gREM for different combinations of sensor detection widths and animal signal widths (a measure of directionality). We tested the accuracy and precision of this model using simulations of different combinations of sensor detection widths and animal signal widths, number of captures and models of animal movement. We find that the gREM produces accurate estimates of absolute animal density for all combinations of sensor detection widths and animal signal widths. However, larger sensor detection and animal signal widths were found to be more precise. While the model is accurate for all capture efforts tested, the precision of the estimate increases with the number of captures. We found no effect of different animal movement models on the accuracy and precision of the gREM. We conclude that the gREM provides an effective method to estimate absolute animal densities from remote sensor count data over a range of sensor and animal signal widths. The gREM is applicable for count data obtained in both marine and terrestrial environments, visually or acoustically (e.g. big cats, sharks, birds, echolocating bats and cetaceans). As sensors such as camera traps and acoustic detectors become more ubiquitous, the gREM will be increasingly useful for monitoring unmarked animal populations across broad spatial, temporal and taxonomic scales.
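
    For orientation, the special case that the gREM reduces to for a camera-style sensor detecting a non-directional signal is the original random encounter model, in which the trapping rate equals D·v·r·(2+θ)/π for density D, animal speed v, detection radius r, and detection arc θ (Rowcliffe et al. 2008); the sketch below simply inverts that relationship. The regime-specific gREM expressions are given in the paper and are not reproduced here, and the survey numbers are invented.

```python
import math

def rem_density(captures, effort_days, speed_km_day, radius_km, theta_rad):
    """Random encounter model: trap rate = D * v * r * (2 + theta) / pi,
    solved for density D (animals per km^2)."""
    trap_rate = captures / effort_days
    return trap_rate * math.pi / (speed_km_day * radius_km * (2.0 + theta_rad))

# hypothetical survey: 40 detections over 200 camera-days, day range
# 1.2 km/day, detection radius 10 m, detection arc 40 degrees
print(rem_density(40, 200, 1.2, 0.010, math.radians(40)), "animals per km^2")
```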

  17. Estimating animal population density using passive acoustics.

    PubMed

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-05-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. © 2012 The Authors. Biological Reviews © 2012 Cambridge Philosophical Society.

  18. Estimating animal population density using passive acoustics

    PubMed Central

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-01-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. PMID:23190144
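
    One concrete instance of the fixed-sensor framework reviewed above is a cue-counting estimator, in which a count of detected cues is corrected for false positives, for the average detection probability within a maximum monitored radius, and for the cue production rate. The sketch below shows that arithmetic; the functional form is the one commonly used for single fixed hydrophones, and every number in the example is illustrative rather than taken from any study.

```python
import math

def cue_count_density(n_detected, false_pos_rate, n_sensors,
                      max_radius_km, p_detect, period_hours, cue_rate_per_hour):
    """Cue-counting density estimator for fixed passive acoustic sensors:
    animals per km^2 = n(1 - c) / (K * pi * w^2 * P * T * r)."""
    cues_per_animal = period_hours * cue_rate_per_hour
    monitored_area = n_sensors * math.pi * max_radius_km ** 2 * p_detect
    return n_detected * (1.0 - false_pos_rate) / (monitored_area * cues_per_animal)

# illustrative numbers only
print(cue_count_density(n_detected=20000, false_pos_rate=0.05, n_sensors=1,
                        max_radius_km=8.0, p_detect=0.03,
                        period_hours=144.0, cue_rate_per_hour=50.0),
      "animals per km^2")
```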

  19. A strategy to unveil transient sources of ultra-high-energy cosmic rays

    NASA Astrophysics Data System (ADS)

    Takami, Hajime

    2013-06-01

    Transient generation of ultra-high-energy cosmic rays (UHECRs) has been motivated from promising candidates of UHECR sources such as gamma-ray bursts, flares of active galactic nuclei, and newly born neutron stars and magnetars. Here we propose a strategy to unveil transient sources of UHECRs from UHECR experiments. We demonstrate that the rate of UHECR bursts and/or flares is related to the apparent number density of UHECR sources, which is the number density estimated on the assumption of steady sources, and the time-profile spread of the bursts produced by cosmic magnetic fields. The apparent number density strongly depends on UHECR energies under a given rate of the bursts, which becomes observational evidence of transient sources. It is saturated at the number density of host galaxies of UHECR sources. We also derive constraints on the UHECR burst rate and/or energy budget of UHECRs per source as a function of the apparent source number density by using models of cosmic magnetic fields. In order to obtain a precise constraint of the UHECR burst rate, high event statistics above ˜ 10²⁰ eV for evaluating the apparent source number density at the highest energies and better knowledge on cosmic magnetic fields by future observations and/or simulations to better estimate the time-profile spread of UHECR bursts are required. The estimated rate allows us to constrain transient UHECR sources by comparison with the occurrence rates of known energetic transient phenomena.

  20. Challenges of DNA-based mark-recapture studies of American black bears

    USGS Publications Warehouse

    Settlage, K.E.; Van Manen, F.T.; Clark, J.D.; King, T.L.

    2008-01-01

    We explored whether genetic sampling would be feasible to provide a region-wide population estimate for American black bears (Ursus americanus) in the southern Appalachians, USA. Specifically, we determined whether adequate capture probabilities (p >0.20) and population estimates with a low coefficient of variation (CV <20%) could be achieved given typical agency budget and personnel constraints. We extracted DNA from hair collected from baited barbed-wire enclosures sampled over a 10-week period on 2 study areas: a high-density black bear population in a portion of Great Smoky Mountains National Park and a lower density population on National Forest lands in North Carolina, South Carolina, and Georgia. We identified individual bears by their unique genotypes obtained from 9 microsatellite loci. We sampled 129 and 60 different bears in the National Park and National Forest study areas, respectively, and applied closed mark–recapture models to estimate population abundance. Capture probabilities and precision of the population estimates were acceptable only for sampling scenarios for which we pooled weekly sampling periods. We detected capture heterogeneity biases, probably because of inadequate spatial coverage by the hair-trapping grid. The logistical challenges of establishing and checking a sufficiently high density of hair traps make DNA-based estimates of black bears impractical for the southern Appalachian region. Alternatives are to estimate population size for smaller areas, estimate population growth rates or survival using mark–recapture methods, or use independent marking and recapturing techniques to reduce capture heterogeneity.

  1. Stochastic density functional theory at finite temperatures

    NASA Astrophysics Data System (ADS)

    Cytter, Yael; Rabani, Eran; Neuhauser, Daniel; Baer, Roi

    2018-03-01

    Simulations in the warm dense matter regime using finite temperature Kohn-Sham density functional theory (FT-KS-DFT), while frequently used, are computationally expensive due to the partial occupation of a very large number of high-energy KS eigenstates which are obtained from subspace diagonalization. We have developed a stochastic method for applying FT-KS-DFT that overcomes the bottleneck of calculating the occupied KS orbitals by directly obtaining the density from the KS Hamiltonian. The proposed algorithm scales as O(NT⁻¹) and is compared with the high-temperature limit scaling O(N³T³) of the deterministic approach.

  Semiautomatic estimation of breast density with DM-Scan software.

    PubMed

    Martínez Gómez, I; Casals El Busto, M; Antón Guirao, J; Ruiz Perales, F; Llobet Azpitarte, R

    2014-01-01

    To evaluate the reproducibility of the calculation of breast density with DM-Scan software, which is based on the semiautomatic segmentation of fibroglandular tissue, and to compare it with the reproducibility of estimation by visual inspection. The study included 655 direct digital mammograms acquired using craniocaudal projections. Three experienced radiologists analyzed the density of the mammograms using DM-Scan, and the inter-observer agreement between pairs of radiologists and the intra-observer agreement for the Boyd and BI-RADS® scales were calculated using the intraclass correlation coefficient. The Kappa index was used to compare the inter- and intra-observer agreements with those obtained previously for visual inspection in the same set of images. For visual inspection, the mean interobserver agreement was 0.876 (95% CI: 0.873-0.879) on the Boyd scale and 0.823 (95% CI: 0.818-0.829) on the BI-RADS® scale. The mean intraobserver agreement was 0.813 (95% CI: 0.796-0.829) on the Boyd scale and 0.770 (95% CI: 0.742-0.797) on the BI-RADS® scale. For DM-Scan, the mean inter- and intra-observer agreement was 0.92, considerably higher than the agreement for visual inspection. The semiautomatic calculation of breast density using DM-Scan software is more reliable and reproducible than visual estimation and reduces the subjectivity and variability in determining breast density. Copyright © 2012 SERAM. Published by Elsevier España. All rights reserved.

  2. Estimate of size distribution of charged MSPs measured in situ in winter during the WADIS-2 sounding rocket campaign

    NASA Astrophysics Data System (ADS)

    Asmus, Heiner; Staszak, Tristan; Strelnikov, Boris; Lübken, Franz-Josef; Friedrich, Martin; Rapp, Markus

    2017-08-01

    We present results of in situ measurements of mesosphere-lower thermosphere dusty-plasma densities including electrons, positive ions and charged aerosols conducted during the WADIS-2 sounding rocket campaign. The neutral air density was also measured, allowing for robust derivation of turbulence energy dissipation rates. A unique feature of these measurements is that they were done in a true common volume and with high spatial resolution. This allows for a reliable derivation of mean sizes and a size distribution function for the charged meteor smoke particles (MSPs). The mean particle radius derived from Schmidt numbers obtained from electron density fluctuations was ˜ 0.56 nm. We assumed a lognormal size distribution of the charged meteor smoke particles and derived the distribution width of 1.66 based on in situ-measured densities of different plasma constituents. We found that layers of enhanced meteor smoke particles' density measured by the particle detector coincide with enhanced Schmidt numbers obtained from the electron and neutral density fluctuations. Thus, we found that large particles with sizes > 1 nm were stratified in layers of ˜ 1 km thickness and lying some kilometers apart from each other.

  3. Cost and performance model for redox flow batteries

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vilayanur; Crawford, Alasdair; Stephenson, David; Kim, Soowhan; Wang, Wei; Li, Bin; Coffey, Greg; Thomsen, Ed; Graff, Gordon; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent

    2014-02-01

    A cost model is developed for all-vanadium and iron-vanadium redox flow batteries. Electrochemical performance modeling is done to estimate stack performance at various power densities as a function of state of charge and operating conditions. This is supplemented with a shunt current model and a pumping loss model to estimate actual system efficiency. The operating parameters, such as power density and flow rates, and design parameters, such as electrode aspect ratio and flow frame channel dimensions, are adjusted to maximize efficiency and minimize capital costs. Detailed cost estimates are obtained from various vendors to calculate cost estimates for present, near-term and optimistic scenarios. The most cost-effective chemistries with optimum operating conditions for power- or energy-intensive applications are determined, providing a roadmap for battery management system development for redox flow batteries. The main drivers for cost reduction for various chemistries are identified as a function of the energy-to-power ratio of the storage system. Levelized cost analysis further guides the suitability of various chemistries for different applications.

  4. Neutrino Opacity in High Density Nuclear Matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, Sergio M. dos; Razeira, Moises; Vasconcellos, Cesar A.Z.

    2004-12-02

    We estimate the contribution of the nucleon weak magnetism to the neutrino absorption mean free path inside high density nuclear matter. In the mean field approach, three different ingredients are taken into account: (a) a relativistic generalization of the approach developed by Sanjay et al.; (b) the inclusion of the nucleon weak magnetism; and (c) the pseudo-scalar interaction involving the nucleons. Our main result shows that the neutrino absorption mean free path is three times the corresponding result obtained by those authors.

  5. The densest terrestrial vertebrate

    USGS Publications Warehouse

    Rodda, G.H.; Perry, G.; Rondeau, R.J.; Lazell, J.

    2001-01-01

    An understanding of the abundance of organisms is central to understanding ecology, but many population density estimates are unrepresentative because they were obtained from study areas chosen for the high abundance of the target species. For example, from a pool of 1072 lizard density estimates that we compiled from the literature, we sampled 303 estimates and scored each for its assessment of the degree to which the study site was representative. Less than half (45%) indicated that the study area was chosen to be representative of the population or habitat. An additional 15% reported that individual plots or transects were chosen randomly, but this often indicated only that the sample points were located randomly within a study area chosen for its high abundance of the target species. The remainder of the studies either gave no information or specified that the study area was chosen because the focal species was locally abundant.

  6. Vertical Scale Height of the Topside Ionosphere Around the Korean Peninsula: Estimates from Ionosondes and the Swarm Constellation

    NASA Astrophysics Data System (ADS)

    Park, Jaeheung; Kwak, Young-Sil; Mun, Jun-Chul; Min, Kyoung-Wook

    2015-12-01

    In this study, we estimated the topside scale height of plasma density (Hm) using the Swarm constellation and ionosondes in Korea. The Hm above the Korean Peninsula is generally around 50 km. Statistical distributions of the topside scale height exhibited a complex dependence upon local time and season. The results were in general agreement with those of Tulasi Ram et al. (2009), who used the same method to calculate the topside scale height in a mid-latitude region. In contrast, our results did not fully coincide with those obtained by Liu et al. (2007), who used electron density profiles from the Arecibo Incoherent Scatter Radar (ISR) between 1966 and 2002. The disagreement may result from the limitations in our approximation method and the data coverage used for the estimations, as well as the inherent dependence of Hm on Geographic LONgitude (GLON).
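
    A simplified way to see where such a number comes from: if the topside is approximated as exponential between the F2 peak and the satellite altitude, the scale height follows from one in situ density sample and one ionosonde peak measurement. The exponential form and the numbers below are illustrative assumptions; the study itself follows the approach of Tulasi Ram et al. (2009) cited above.

```python
import math

def topside_scale_height(h_sat_km, ne_sat, hmf2_km, nmf2):
    """Scale height Hm assuming an exponential topside,
    Ne(h) = NmF2 * exp(-(h - hmF2) / Hm), between the F2 peak and the satellite."""
    return (h_sat_km - hmf2_km) / math.log(nmf2 / ne_sat)

# illustrative values: an in situ density at 460 km altitude and an
# ionosonde F2-peak measurement below it
print(topside_scale_height(h_sat_km=460.0, ne_sat=4.0e10,
                           hmf2_km=300.0, nmf2=1.0e12), "km")   # ~50 km
```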

  7. Mapping surface energy balance components by combining landsat thematic mapper and ground-based meteorological data

    USGS Publications Warehouse

    Moran, M.S.; Jackson, R. D.; Raymond, L.H.; Gay, L.W.; Slater, P.N.

    1989-01-01

    Surface energy balance components were evaluated by combining satellite-based spectral data with on-site measurements of solar irradiance, air temperature, wind speed, and vapor pressure. Maps of latent heat flux density (λE) and net radiant flux density (Rn) were produced using Landsat Thematic Mapper (TM) data for three dates: 23 July 1985, 5 April 1986, and 24 June 1986. On each date, a Bowen-ratio apparatus, located in a vegetated field, was used to measure λE and Rn at a point within the field. Estimates of λE and Rn were also obtained using radiometers aboard an aircraft flown at 150 m above ground level. The TM-based estimates differed from the Bowen-ratio and aircraft-based estimates by less than 12% over mature fields of cotton, wheat, and alfalfa, where λE and Rn ranged from 400 to 700 W m-2. © 1989.

  8. Estimating Allee dynamics before they can be observed: polar bears as a case study.

    PubMed

    Molnár, Péter K; Lewis, Mark A; Derocher, Andrew E

    2014-01-01

    Allee effects are an important component in the population dynamics of numerous species. Accounting for these Allee effects in population viability analyses generally requires estimates of low-density population growth rates, but such data are unavailable for most species and particularly difficult to obtain for large mammals. Here, we present a mechanistic modeling framework that allows estimating the expected low-density growth rates under a mate-finding Allee effect before the Allee effect occurs or can be observed. The approach relies on representing the mechanisms causing the Allee effect in a process-based model, which can be parameterized and validated from data on the mechanisms rather than data on population growth. We illustrate the approach using polar bears (Ursus maritimus), and estimate their expected low-density growth by linking a mating dynamics model to a matrix projection model. The Allee threshold, defined as the population density below which growth becomes negative, is shown to depend on age-structure, sex ratio, and the life history parameters determining reproduction and survival. The Allee threshold is thus both density- and frequency-dependent. Sensitivity analyses of the Allee threshold show that different combinations of the parameters determining reproduction and survival can lead to differing Allee thresholds, even if these differing combinations imply the same stable-stage population growth rate. The approach further shows how mate-limitation can induce long transient dynamics, even in populations that eventually grow to carrying capacity. Applying the models to the overharvested low-density polar bear population of Viscount Melville Sound, Canada, shows that a mate-finding Allee effect is a plausible mechanism for slow recovery of this population. Our approach is generalizable to any mating system and life cycle, and could aid proactive management and conservation strategies, for example, by providing a priori estimates of minimum conservation targets for rare species or minimum eradication targets for pests and invasive species.

  9. Estimating Allee Dynamics before They Can Be Observed: Polar Bears as a Case Study

    PubMed Central

    Molnár, Péter K.; Lewis, Mark A.; Derocher, Andrew E.

    2014-01-01

    Allee effects are an important component in the population dynamics of numerous species. Accounting for these Allee effects in population viability analyses generally requires estimates of low-density population growth rates, but such data are unavailable for most species and particularly difficult to obtain for large mammals. Here, we present a mechanistic modeling framework that allows estimating the expected low-density growth rates under a mate-finding Allee effect before the Allee effect occurs or can be observed. The approach relies on representing the mechanisms causing the Allee effect in a process-based model, which can be parameterized and validated from data on the mechanisms rather than data on population growth. We illustrate the approach using polar bears (Ursus maritimus), and estimate their expected low-density growth by linking a mating dynamics model to a matrix projection model. The Allee threshold, defined as the population density below which growth becomes negative, is shown to depend on age-structure, sex ratio, and the life history parameters determining reproduction and survival. The Allee threshold is thus both density- and frequency-dependent. Sensitivity analyses of the Allee threshold show that different combinations of the parameters determining reproduction and survival can lead to differing Allee thresholds, even if these differing combinations imply the same stable-stage population growth rate. The approach further shows how mate-limitation can induce long transient dynamics, even in populations that eventually grow to carrying capacity. Applying the models to the overharvested low-density polar bear population of Viscount Melville Sound, Canada, shows that a mate-finding Allee effect is a plausible mechanism for slow recovery of this population. Our approach is generalizable to any mating system and life cycle, and could aid proactive management and conservation strategies, for example, by providing a priori estimates of minimum conservation targets for rare species or minimum eradication targets for pests and invasive species. PMID:24427306
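
    The qualitative mechanism described above, a mating probability that declines with density and drags the population growth rate below replacement, can be reproduced with a generic two-stage projection matrix; the sketch below scales fecundity by a rectangular-hyperbola mate-finding term and locates the density at which the dominant eigenvalue crosses one. The stage structure, vital rates, and mating function are illustrative stand-ins, not the polar bear mating dynamics model of the study.

```python
import numpy as np

def projection_matrix(density, fecundity=0.6, s_juv=0.85, s_adult=0.93, theta=0.05):
    """Two-stage (juvenile, adult) female-only projection matrix in which
    fecundity is multiplied by a mate-finding probability d / (d + theta)."""
    p_mate = density / (density + theta)
    return np.array([[0.0,   fecundity * p_mate],
                     [s_juv, s_adult           ]])

def growth_rate(density):
    """Dominant eigenvalue (asymptotic growth rate) at a given density."""
    return max(abs(np.linalg.eigvals(projection_matrix(density))))

# Allee threshold: density at which the growth rate crosses 1, found by
# bisection on an interval that brackets the crossing
lo, hi = 1e-6, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if growth_rate(mid) < 1.0 else (lo, mid)
print(f"Allee threshold ~ {0.5*(lo+hi):.4f} (illustrative density units)")
```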

  10. Comparison of point counts and territory mapping for detecting effects of forest management on songbirds

    USGS Publications Warehouse

    Newell, Felicity L.; Sheehan, James; Wood, Petra Bohall; Rodewald, Amanda D.; Buehler, David A.; Keyser, Patrick D.; Larkin, Jeffrey L.; Beachy, Tiffany A.; Bakermans, Marja H.; Boves, Than J.; Evans, Andrea; George, Gregory A.; McDermott, Molly E.; Perkins, Kelly A.; White, Matthew; Wigley, T. Bently

    2013-01-01

    Point counts are commonly used to assess changes in bird abundance, including analytical approaches such as distance sampling that estimate density. Point-count methods have come under increasing scrutiny because effects of detection probability and field error are difficult to quantify. For seven forest songbirds, we compared fixed-radii counts (50 m and 100 m) and density estimates obtained from distance sampling to known numbers of birds determined by territory mapping. We applied point-count analytic approaches to a typical forest management question and compared results to those obtained by territory mapping. We used a before–after control impact (BACI) analysis with a data set collected across seven study areas in the central Appalachians from 2006 to 2010. Using a 50-m fixed radius, variance in error was at least 1.5 times that of the other methods, whereas a 100-m fixed radius underestimated actual density by >3 territories per 10 ha for the most abundant species. Distance sampling improved accuracy and precision compared to fixed-radius counts, although estimates were affected by birds counted outside 10-ha units. In the BACI analysis, territory mapping detected an overall treatment effect for five of the seven species, and effects were generally consistent each year. In contrast, all point-count methods failed to detect two treatment effects due to variance and error in annual estimates. Overall, our results highlight the need for adequate sample sizes to reduce variance, and skilled observers to reduce the level of error in point-count data. Ultimately, the advantages and disadvantages of different survey methods should be considered in the context of overall study design and objectives, allowing for trade-offs among effort, accuracy, and power to detect treatment effects.

  11. Probing Venus' polar upper atmosphere in situ: Preliminary results of the Venus Express Atmospheric Drag Experiment (VExADE).

    NASA Astrophysics Data System (ADS)

    Rosenblatt, Pascal; Bruinsma, Sean; Mueller-Wodarg, Ingo; Haeusler, Bernd

    On its highly elliptical 24-hour orbit around Venus, the Venus Express (VEx) spacecraft briefly reaches a pericenter altitude of nominally 250 km. Recently, however, dedicated and intense radio tracking campaigns have taken place in August 2008 (campaign 1), October 2009 (campaign 2), and February and April 2010 (campaign 3), for which the pericenter altitude was lowered to about 175 km in order to probe the upper atmosphere of Venus above the North Pole in situ for the first time. As the spacecraft experiences atmospheric drag, its trajectory is measurably perturbed during the pericenter pass, allowing us to infer the total atmospheric mass density at the pericenter altitude. The GINS software (Géodésie par Intégrations Numériques Simultanées) is used to accurately reconstruct the orbital motion of VEx through an iterative least-squares fit to the Doppler tracking data. The drag acceleration is modelled using an initial atmospheric density model (VTS model, A. Hedin). A drag scale factor is estimated for each pericenter pass, which scales Hedin's density model in order to best fit the radio tracking data. About 20 density scale factors have been obtained, mainly from the second and third VExADE campaigns, which indicate a density about one-third lower than Hedin's model predicts. These first ever polar density measurements at solar minimum have allowed us to construct a diffusive equilibrium density model for Venus' thermosphere, constrained in the lower thermosphere primarily by SPICAV-SOIR measurements and above 175 km by the VExADE drag measurements. The preliminary results of the VExADE campaigns show that it is possible to obtain reliable estimates of Venus' upper atmosphere densities at an altitude of around 175 km. Future VExADE campaigns will benefit from the planned further lowering of the VEx pericenter altitude to below 170 km.

  12. Excess density compensation of island herpetofaunal assemblages

    USGS Publications Warehouse

    Rodda, G.H.; Dean-Bradley, K.

    2002-01-01

    Aim: Some species reach extraordinary densities on islands. Island assemblages have fewer species, however, and it is possible that island species differ from their mainland counterparts in average mass. Island assemblages could be partitioned differently (fewer species or smaller individuals) from mainland sites without differing in aggregate biomass (density compensation). Our objective was to determine the generality of excess density compensation in island herpetofaunal assemblages. Location: Our bounded removal plot data were obtained from Pacific Island sites (Guam, Saipan and Rota), the West Indies (British Virgin Islands), and the Indian Ocean (Ile aux Aigrettes off Mauritius). The literature values were taken from several locales. Other island locations included Barro Colorado Island, Bonaire, Borneo, the Philippine Islands, the Seychelle Islands, Barrow Island (Australia), North Brother Island (New Zealand), Dominica and Puerto Rico. Mainland sites included Costa Rica, Ivory Coast, Cameroon, Australia, Thailand, Peru, Brazil, Panama and the USA. Method: We added our thirty-nine bounded total removal plots from sixteen island habitats to fifteen literature records to obtain seventy-five venues with estimable density and biomass of arboreal or terrestrial herpetofaunal assemblages. These biomass estimates were evaluated geographically and in relation to sampling method, insularity, latitude, disturbance regime, seasonality, community richness, vegetative structure and climate. Direct data on trophic interactions (food availability, parasitism and predation pressure) were generally unavailable. Sampling problems were frequent for arboreal, cryptic and evasive species. Results and main conclusions: We found strong evidence that herpetofaunal assemblages on small islands (mostly lizards) exhibit a much greater aggregate biomass density (kg/ha) than those of larger islands or mainland assemblages (small islands show excess density compensation). High aggregate biomass density was more strongly associated with the degree of species impoverishment on islands than with island area or insularity per se. High aggregate biomass density was not strongly associated with latitude, precipitation, canopy height or a variety of other physical characteristics of the study sites. The association between high aggregate biomass density and species-poor islands is consistent with the effects of a reduced suite of predators on depauperate islands, but other features may also contribute to excess density compensation.

  13. Remote sensing of the ionospheric F layer by use of O I 6300-A and O I 1356-A observations

    NASA Technical Reports Server (NTRS)

    Chandra, S.; Reed, E. I.; Meier, R. R.; Opal, C. B.; Hicks, G. T.

    1975-01-01

    The possibility of using airglow techniques for estimating the electron density and height of the F layer is studied on the basis of a simple relationship between the height of the F2 peak and the column emission rates of the O I 6300 A and O I 1356 A lines. The feasibility of this approach is confirmed by a numerical calculation of F2 peak heights and electron densities from simultaneous measurements of O I 6300 A and O I 1356 A obtained with earth-facing photometers carried by the Ogo 4 satellite. Good agreement is established with the F2 peak height estimates from top-side and bottom-side ionospheric sounding.

  14. Statistical γ-decay properties of 64Ni and deduced (n,γ) cross section of the s-process branch-point nucleus 63Ni

    NASA Astrophysics Data System (ADS)

    Crespo Campo, L.; Bello Garrote, F. L.; Eriksen, T. K.; Görgen, A.; Guttormsen, M.; Hadynska-Klek, K.; Klintefjord, M.; Larsen, A. C.; Renstrøm, T.; Sahin, E.; Siem, S.; Springer, A.; Tornyi, T. G.; Tveten, G. M.

    2016-10-01

    Particle-γ coincidence data have been analyzed to obtain the nuclear level density and the γ-strength function of 64Ni by means of the Oslo method. The level density found in this work is in very good agreement with known energy levels at low excitation energies as well as with data deduced from particle-evaporation measurements at excitation energies above Ex ≈ 5.5 MeV. The experimental γ-strength function presents an enhancement at γ energies below Eγ ≈ 3 MeV and possibly a resonancelike structure centered at Eγ ≈ 9.2 MeV. The obtained nuclear level density and γ-strength function have been used to estimate the (n,γ) cross section for the s-process branch-point nucleus 63Ni, of particular interest for astrophysical calculations of elemental abundances.

  15. Determining density of maize canopy. 1: Digitized photography

    NASA Technical Reports Server (NTRS)

    Stoner, E. R.; Baumgardner, M. F.; Swain, P. H.

    1972-01-01

    The relationship between different densities of maize (Zea mays L.) canopies and the energy reflected by these canopies was studied. Field plots were laid out, representing four growth stages of maize, on a dark soil and on a very light colored surface soil. Spectral and spatial data were obtained from color and color infrared photography taken from a vertical distance of 10 m above the maize canopies. Estimates of ground cover were related to field measurements of leaf area index. Ground cover was predicted from leaf area index measurements by a second order equation. Color infrared photography proved helpful in determining the density of maize canopy on dark soils. Color photography was useful for determining canopy density on light colored soils. The near infrared dye layer is the most valuable in canopy density determinations.

  16. Radiomic modeling of BI-RADS density categories

    NASA Astrophysics Data System (ADS)

    Wei, Jun; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Zhou, Chuan; Hadjiiski, Lubomir

    2017-03-01

    Screening mammography is the most effective and low-cost method to date for early cancer detection. Mammographic breast density has been shown to be highly correlated with breast cancer risk. We are developing a radiomic model for BI-RADS density categorization on digital mammography (FFDM) with a supervised machine learning approach. With IRB approval, we retrospectively collected 478 FFDMs from 478 women. As a gold standard, breast density was assessed by an MQSA radiologist based on BI-RADS categories. The raw FFDMs were used for computerized density assessment. Each raw FFDM first underwent a log-transform to approximate the x-ray sensitometric response, followed by multiscale processing to enhance the fibroglandular densities and parenchymal patterns. Three ROIs were automatically identified based on the keypoint distribution, where the keypoints were obtained as the extrema in the image Gaussian scale-space. A total of 73 features, including intensity and texture features that describe the density and the parenchymal pattern, were extracted from each breast. Our BI-RADS density estimator was constructed using a random forest classifier. We used a 10-fold cross-validation resampling approach to estimate the errors. With the random forest classifier, the computerized density categories for 412 of the 478 cases agreed with the radiologist's assessment (weighted kappa = 0.93). The machine learning method with radiomic features as predictors demonstrated high accuracy in classifying FFDMs into BI-RADS density categories. Further work is underway to improve system performance and to perform independent testing on a large unseen FFDM set.
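
    As a rough illustration of the classification step described above, the sketch below trains a random forest on a precomputed radiomic feature matrix and scores agreement against radiologist labels with 10-fold cross-validation and a weighted kappa. The feature matrix X, label vector y, tree count, and the quadratic kappa weighting are assumptions for illustration, not the authors' pipeline.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import cohen_kappa_score
      from sklearn.model_selection import cross_val_predict

      def evaluate_density_classifier(X, y, n_trees=500, seed=0):
          """X: (n_cases, 73) radiomic features; y: radiologist BI-RADS categories."""
          clf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
          y_pred = cross_val_predict(clf, X, y, cv=10)               # 10-fold cross-validated predictions
          agreement = np.mean(y_pred == y)                           # fraction of cases matching the radiologist
          kappa = cohen_kappa_score(y, y_pred, weights="quadratic")  # weighted kappa (quadratic weighting assumed)
          return agreement, kappa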

  17. An Evaluation of the Pea Pod System for Assessing Body Composition of Moderately Premature Infants.

    PubMed

    Forsum, Elisabet; Olhager, Elisabeth; Törnqvist, Caroline

    2016-04-22

    (1) BACKGROUND: Assessing the quality of growth in premature infants is important in order to be able to provide them with optimal nutrition. The Pea Pod device, based on air displacement plethysmography, is able to assess body composition of infants. However, this method has not been sufficiently evaluated in premature infants; (2) METHODS: In 14 infants aged 3-7 days, born after 32-35 completed weeks of gestation, body weight, body volume, fat-free mass density (predicted by the Pea Pod software), and total body water (isotope dilution) were assessed. Reference estimates of fat-free mass density and body composition were obtained using a three-component model; (3) RESULTS: Fat-free mass density values predicted using Pea Pod showed a bias that was not statistically significant (p > 0.05) relative to the reference estimates. Body fat (%), assessed using Pea Pod, was not significantly different from the reference estimates. The biological variability of fat-free mass density was 0.55% of the average value (1.0627 g/mL); (4) CONCLUSION: The results indicate that the Pea Pod system is accurate for groups of newborn, moderately premature infants. However, more studies where this system is used for premature infants are needed, and we provide suggestions regarding how to develop this area.

  18. Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2006-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
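
    The following minimal sketch illustrates the truncation idea for a single scalar state under an interval constraint: the unconstrained Kalman estimate (mean and variance) is replaced by the mean and variance of the same Gaussian truncated to the constraint interval. It is a generic textbook construction, not the code used in the paper.

      from scipy.stats import norm

      def truncate_gaussian_estimate(mu, var, lo, hi):
          """Mean/variance of a N(mu, var) estimate truncated to the constraint [lo, hi]."""
          s = var ** 0.5
          a, b = (lo - mu) / s, (hi - mu) / s
          z = norm.cdf(b) - norm.cdf(a)                          # probability mass inside the constraints
          mean = mu + s * (norm.pdf(a) - norm.pdf(b)) / z        # mean of the truncated Gaussian
          scale = 1.0 + (a * norm.pdf(a) - b * norm.pdf(b)) / z - ((norm.pdf(a) - norm.pdf(b)) / z) ** 2
          return mean, var * scale                               # constrained estimate and its variance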

  19. Combined natural gamma ray spectral/litho-density measurements applied to complex lithologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quirein, J.A.; Gardner, J.S.; Watson, J.T.

    1982-09-01

    Well log data have long been used to provide lithological descriptions of complex formations. Historically, most of the approaches used have been restrictive because they assumed fixed, known, and distinct lithologies for specified zones. The approach described in this paper attempts to alleviate this restriction by estimating the "probability of a model" for the models suggested as most likely by the reservoir geology. Lithological variables are simultaneously estimated from response equations for each model and combined in accordance with the probability of each respective model. The initial application of this approach has been the estimation of calcite, quartz, and dolomite in the presence of clays, feldspars, anhydrite, or salt. Estimations were made by using natural gamma ray spectra, photoelectric effect, bulk density, and neutron porosity information. For each model, response equations and parameter selections are obtained from the thorium vs potassium crossplot and the apparent matrix density vs apparent volumetric photoelectric cross section crossplot. The thorium and potassium response equations are used to estimate the volumes of clay and feldspar. The apparent matrix density and volumetric cross section response equations can then be corrected for the presence of clay and feldspar. A test ensures that the clay correction lies within the limits for the assumed lithology model. Results are presented for varying lithologies. For one test well, 6,000 feet were processed in a single pass, without zoning and without adjusting more than one parameter pick. The program recognized sand, limestone, dolomite, clay, feldspar, anhydrite, and salt without analyst intervention.

  1. Modelling the vertical distribution of canopy fuel load using national forest inventory and low-density airborne laser scanning data.

    PubMed

    González-Ferreiro, Eduardo; Arellano-Pérez, Stéfano; Castedo-Dorado, Fernando; Hevia, Andrea; Vega, José Antonio; Vega-Nieva, Daniel; Álvarez-González, Juan Gabriel; Ruiz-González, Ana Daría

    2017-01-01

    The fuel complex variables canopy bulk density and canopy base height are often used to predict crown fire initiation and spread. Direct measurement of these variables is impractical, and they are usually estimated indirectly by modelling. Recent advances in predicting crown fire behaviour require accurate estimates of the complete vertical distribution of canopy fuels. The objectives of the present study were to model the vertical profile of available canopy fuel in pine stands by using data from the Spanish national forest inventory plus low-density airborne laser scanning (ALS) metrics. In a first step, the vertical distribution of the canopy fuel load was modelled using the Weibull probability density function. In a second step, two different systems of models were fitted to estimate the canopy variables defining the vertical distributions; the first system related these variables to stand variables obtained in a field inventory, and the second system related the canopy variables to airborne laser scanning metrics. The models of each system were fitted simultaneously to compensate for the effects of the inherent cross-model correlation between the canopy variables. Heteroscedasticity was also analyzed, but no correction in the fitting process was necessary. The estimated canopy fuel load profiles from field variables explained 84% and 86% of the variation in canopy fuel load for maritime pine and radiata pine, respectively, whereas the estimated canopy fuel load profiles from ALS metrics explained 52% and 49% of the variation for the same species. The proposed models can be used to assess the effectiveness of different forest management alternatives for reducing crown fire hazard.
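
    The sketch below illustrates the first modelling step in a simplified form: a Weibull probability density, scaled by the total canopy fuel load, is fitted to a binned vertical fuel load profile. The height bins, load values, and starting parameters are hypothetical placeholders, not inventory or ALS data from the study.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import weibull_min

      def scaled_weibull(h, shape, scale, total_load):
          # Weibull density in height, scaled by the total canopy fuel load
          return total_load * weibull_min.pdf(h, shape, loc=0.0, scale=scale)

      heights = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])           # bin mid-heights (m), hypothetical
      load_profile = np.array([0.05, 0.20, 0.45, 0.55, 0.30, 0.10])  # fuel load density (kg m-2 per m), hypothetical

      params, _ = curve_fit(scaled_weibull, heights, load_profile,
                            p0=[2.0, 7.0, 2.0], bounds=(0.0, np.inf))
      shape, scale, total_load = params                              # fitted vertical profile parameters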

  2. Inter-examination Precision of Magnitude-based Magnetic Resonance Imaging for Estimation of Segmental Hepatic Proton Density Fat Fraction (PDFF) in Obese Subjects

    PubMed Central

    Negrete, Lindsey M.; Middleton, Michael S.; Clark, Lisa; Wolfson, Tanya; Gamst, Anthony C.; Lam, Jessica; Changchien, Chris; Deyoung-Dominguez, Ivan M.; Hamilton, Gavin; Loomba, Rohit; Schwimmer, Jeffrey; Sirlin, Claude B.

    2013-01-01

    Purpose: To prospectively describe magnitude-based multi-echo gradient-echo hepatic proton density fat fraction (PDFF) inter-examination precision at 3T. Materials and Methods: In this prospective, IRB-approved, HIPAA-compliant study, written informed consent was obtained from 29 subjects (body mass indexes > 30 kg/m2). Three 3T magnetic resonance imaging (MRI) examinations were obtained over 75-90 minutes. Segmental, lobar, and whole liver PDFF were estimated (using three, four, five, or six echoes) by magnitude-based multi-echo MRI in co-localized regions of interest (ROIs). For each estimate (using three, four, five, or six echoes), at each anatomic level (segmental, lobar, whole liver), three inter-examination precision metrics were computed: intra-class correlation coefficient (ICC), standard deviation (SD), and range. Results: Magnitude-based PDFF estimates using each reconstruction method showed excellent inter-examination precision for each segment (ICC ≥ 0.992; SD ≤ 0.66%; range ≤ 1.24%), lobe (ICC ≥ 0.998; SD ≤ 0.34%; range ≤ 0.64%), and the whole liver (ICC = 0.999; SD ≤ 0.24%; range ≤ 0.45%). Inter-examination precision was unaffected by whether PDFF was estimated using three, four, five, or six echoes. Conclusion: Magnitude-based PDFF estimation shows high inter-examination precision at segmental, lobar, and whole liver anatomic levels, supporting its use in clinical care or clinical trials. The results of this study suggest that a longitudinal hepatic PDFF change greater than 1.6% is likely to represent signal rather than noise. PMID:24136736
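
    A small sketch of the three precision metrics named above, computed from an (n subjects x n examinations) array of co-localized PDFF estimates. A one-way random-effects ICC(1,1) is used here as a generic choice; the exact ICC form, and how SD and range were summarized in the paper, may differ.

      import numpy as np

      def precision_metrics(pdff):
          """pdff: (n_subjects, n_exams) array of repeated PDFF estimates (%)."""
          n, k = pdff.shape
          grand = pdff.mean()
          msb = k * np.sum((pdff.mean(axis=1) - grand) ** 2) / (n - 1)                  # between-subject mean square
          msw = np.sum((pdff - pdff.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))  # within-subject mean square
          icc = (msb - msw) / (msb + (k - 1) * msw)                  # one-way random ICC(1,1)
          sd = pdff.std(axis=1, ddof=1).mean()                       # mean within-subject SD
          rng = (pdff.max(axis=1) - pdff.min(axis=1)).mean()         # mean within-subject range
          return icc, sd, rng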

  3. Magnitude and Frequency of Floods on Nontidal Streams in Delaware

    USGS Publications Warehouse

    Ries, Kernell G.; Dillow, Jonathan J.A.

    2006-01-01

    Reliable estimates of the magnitude and frequency of annual peak flows are required for the economical and safe design of transportation and water-conveyance structures. This report, done in cooperation with the Delaware Department of Transportation (DelDOT) and the Delaware Geological Survey (DGS), presents methods for estimating the magnitude and frequency of floods on nontidal streams in Delaware at locations where streamgaging stations monitor streamflow continuously and at ungaged sites. Methods are presented for estimating the magnitude of floods for return frequencies ranging from 2 through 500 years. These methods are applicable to watersheds exhibiting a full range of urban development conditions. The report also describes StreamStats, a web application that makes it easy to obtain flood-frequency estimates for user-selected locations on Delaware streams. Flood-frequency estimates for ungaged sites are obtained through a process known as regionalization, using statistical regression analysis, where information determined for a group of streamgaging stations within a region forms the basis for estimates for ungaged sites within the region. One hundred and sixteen streamgaging stations in and near Delaware with at least 10 years of non-regulated annual peak-flow data available were used in the regional analysis. Estimates for gaged sites are obtained by combining the station peak-flow statistics (mean, standard deviation, and skew) and peak-flow estimates with regional estimates of skew and flood-frequency magnitudes. Example flood-frequency estimate calculations using the methods presented in the report are given for: (1) ungaged sites, (2) gaged locations, (3) sites upstream or downstream from a gaged location, and (4) sites between gaged locations. Regional regression equations applicable to ungaged sites in the Piedmont and Coastal Plain Physiographic Provinces of Delaware are presented. The equations incorporate drainage area, forest cover, impervious area, basin storage, housing density, soil type A, and mean basin slope as explanatory variables, and have average standard errors of prediction ranging from 28 to 72 percent. Additional regression equations that incorporate drainage area and housing density as explanatory variables are presented for use in defining the effects of urbanization on peak-flow estimates throughout Delaware for the 2-year through 500-year recurrence intervals, along with suggestions for their appropriate use in predicting development-affected peak flows. Additional topics associated with the analyses performed during the study are also discussed, including: (1) the availability and description of more than 30 basin and climatic characteristics considered during the development of the regional regression equations; (2) the treatment of increasing trends in the annual peak-flow series identified at 18 gaged sites, with respect to their relations with maximum 24-hour precipitation and housing density, and their use in the regional analysis; (3) calculation of the 90-percent confidence interval associated with peak-flow estimates from the regional regression equations; and (4) a comparison of flood-frequency estimates at gages used in a previous study, highlighting the effects of various improved analytical techniques.
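
    For illustration only, the sketch below shows the general form behind regional regression equations of this kind: a power-law (log-space) regression of peak flow on positive basin characteristics. The variable names and data are hypothetical, and the published Delaware equations and coefficients are not reproduced here.

      import numpy as np

      def fit_regional_regression(peak_flow, basin_chars):
          """peak_flow: (n,) flood quantiles; basin_chars: (n, p) positive basin characteristics."""
          X = np.column_stack([np.ones(len(peak_flow)), np.log10(basin_chars)])
          coef, *_ = np.linalg.lstsq(X, np.log10(peak_flow), rcond=None)   # ordinary least squares in log space
          return coef                                                      # intercept and power-law exponents

      def predict_peak_flow(coef, basin_chars):
          X = np.column_stack([np.ones(len(basin_chars)), np.log10(basin_chars)])
          return 10.0 ** (X @ coef)                                        # back-transform to flow units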

  4. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  5. Bounding filter - A simple solution to lack of exact a priori statistics.

    NASA Technical Reports Server (NTRS)

    Nahi, N. E.; Weiss, I. M.

    1972-01-01

    Wiener and Kalman-Bucy estimation problems assume that models describing the signal and noise stochastic processes are exactly known. When this modeling information, i.e., the signal and noise spectral densities for Wiener filter and the signal and noise dynamic system and disturbing noise representations for Kalman-Bucy filtering, is inexactly known, then the filter's performance is suboptimal and may even exhibit apparent divergence. In this paper a system is designed whereby the actual estimation error covariance is bounded by the covariance calculated by the estimator. Therefore, the estimator obtains a bound on the actual error covariance which is not available, and also prevents its apparent divergence.

  6. [Fe II] emission from high-density regions in the Orion Nebula

    NASA Technical Reports Server (NTRS)

    Bautista, Manuel A.; Pradhan, Anil K.; Osterbrock, Donald E.

    1994-01-01

    Direct spectroscopic evidence of high-density regions in the Orion Nebula, N_e ≈ 10^5-10^7 cm^-3, is obtained from the forbidden optical and near-IR [Fe II] emission lines, using new atomic data. Calculations for level populations and line ratios are carried out using 16, 35, and 142 level collisional-radiative models for Fe II. Estimates of Fe^+ abundances derived from the near-infrared and the optical line intensities are consistent with a high density of 10^6 cm^-3 in the [Fe II] emitting regions. Important consequences for abundance determinations in the nebula are pointed out.

  7. Real time estimation of generation, extinction and flow of muscle fibre action potentials in high density surface EMG.

    PubMed

    Mesin, Luca

    2015-02-01

    Developing a real-time method to estimate the generation, extinction and propagation of muscle fibre action potentials from bi-dimensional and high-density surface electromyograms (EMG). A multi-frame generalization of an optical flow technique including a source term is considered. A model describing generation, extinction and propagation of action potentials is fit to epochs of surface EMG. The algorithm is tested on simulations of high-density surface EMG (inter-electrode distance equal to 5 mm) from finite-length fibres generated using a multi-layer volume conductor model. The flow and source term estimated from interference EMG reflect the anatomy of the muscle, i.e. the direction of the fibres (2° of average estimation error) and the positions of the innervation zone and tendons under the electrode grid (mean errors of about 1 and 2 mm, respectively). The global conduction velocity of the action potentials from motor units under the detection system is also obtained from the estimated flow. The processing time is about 1 ms per channel for an epoch of EMG of duration 150 ms. A new real-time image processing algorithm is proposed to investigate muscle anatomy and activity. Potential applications are proposed in prosthesis control, automatic detection of optimal channels for EMG index extraction and biofeedback. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Structure of gel phase DMPC determined by X-ray diffraction.

    PubMed Central

    Tristram-Nagle, Stephanie; Liu, Yufeng; Legleiter, Justin; Nagle, John F

    2002-01-01

    The structure of fully hydrated gel phase dimyristoylphosphatidylcholine lipid bilayers was obtained at 10 °C. Oriented lipid multilayers were used to obtain high signal-to-noise intensity data. The chain tilt angle and an estimate of the methylene electron density were obtained from wide-angle reflections. The chain tilt angle is measured to be 32.3 ± 0.6° near full hydration, and it does not change as the sample is mildly dehydrated from a repeat spacing of D = 59.9 Å to D = 56.5 Å. Low-angle diffraction peaks were obtained up to the tenth order for 17 samples with variable D and prepared by three different methods with different geometries. In addition to the usual Fourier reconstructions of the electron density profiles, model electron density profiles were fit to all the low-angle data simultaneously while constraining the model to include the wide-angle data and the measured lipid volume. Results are obtained for the area/lipid (A = 47.2 ± 0.5 Å^2), the compressibility modulus (K_A = 500 ± 100 dyn/cm), and various thicknesses, such as the hydrocarbon thickness (2D_C = 30.3 ± 0.2 Å) and the head-to-head spacing (D_HH = 40.1 ± 0.1 Å). PMID:12496100

  9. Estimation and Modeling of Enceladus Plume Jet Density Using Reaction Wheel Control Data

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.; Wang, Eric K.; Pilinski, Emily B.; Macala, Glenn A.; Feldman, Antonette

    2010-01-01

    The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of a water vapor plume in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. The first of these flybys was the 50-km Enceladus-3 (E3) flyby executed on March 12, 2008. During the E3 flyby, the spacecraft attitude was controlled by a set of three reaction wheels. During the flyby, multiple plume jets imparted disturbance torque on the spacecraft resulting in small but visible attitude control errors. Using the known and unique transfer function between the disturbance torque and the attitude control error, the collected attitude control error telemetry could be used to estimate the disturbance torque. The effectiveness of this methodology is confirmed using the E3 telemetry data. Given good estimates of spacecraft's projected area, center of pressure location, and spacecraft velocity, the time history of the Enceladus plume density is reconstructed accordingly. The 1-sigma uncertainty of the estimated density is 7.7%. Next, we modeled the density due to each plume jet as a function of both the radial and angular distances of the spacecraft from the plume source. We also conjecture that the total plume density experienced by the spacecraft is the sum of the component plume densities. By comparing the time history of the reconstructed E3 plume density with that predicted by the plume model, values of the plume model parameters are determined. Results obtained are compared with those determined by other Cassini science instruments.
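
    A heavily simplified sketch of the final inversion step described above: once a disturbance torque history has been estimated from the attitude control errors, a free-molecular drag relation converts torque to ambient density given the spacecraft speed, projected area, and center-of-pressure moment arm. The drag coefficient, geometry, and torque values below are illustrative assumptions, not Cassini values.

      import numpy as np

      def density_from_torque(torque, speed, area, arm, cd=2.2):
          """torque [N m], speed [m/s], projected area [m^2], center-of-pressure moment arm [m]."""
          drag_force = torque / arm                           # torque = drag force x moment arm
          return 2.0 * drag_force / (cd * area * speed ** 2)  # invert F = 0.5 * rho * Cd * A * v^2

      torque_history = np.array([2.0e-4, 8.0e-4, 3.0e-4])     # illustrative disturbance torques (N m)
      rho = density_from_torque(torque_history, speed=14.4e3, area=20.0, arm=1.5)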

  11. Automated side-chain model building and sequence assignment by template matching.

    PubMed

    Terwilliger, Thomas C

    2003-01-01

    An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
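
    The sketch below illustrates the alignment scoring idea in a simplified form: given a per-position probability matrix derived from the density templates, each alignment offset of the segment against the protein sequence is scored by its log-likelihood and converted to a posterior probability under a uniform prior. The matrix, sequence, and prior are assumptions for illustration, not the RESOLVE implementation.

      import numpy as np

      AA = "ACDEFGHIKLMNPQRSTVWY"                             # one-letter amino-acid alphabet

      def alignment_posteriors(P, sequence):
          """P: (n_residues, 20) probability of each amino acid at each main-chain position."""
          n = P.shape[0]
          offsets = range(len(sequence) - n + 1)
          loglik = np.array([
              sum(np.log(P[i, AA.index(sequence[off + i])]) for i in range(n))
              for off in offsets
          ])
          post = np.exp(loglik - loglik.max())                # softmax under a uniform prior over offsets
          return post / post.sum()                            # posterior probability of each alignment offset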

  12. Estimation of old field ecosystem biomass using low altitude imagery

    NASA Technical Reports Server (NTRS)

    Nor, S. M.; Safir, G.; Burton, T. M.; Hook, J. E.; Schultink, G.

    1977-01-01

    Color-infrared photography was used to evaluate the biomass of experimental plots in an old-field ecosystem that was treated with different levels of waste water from a sewage treatment facility. Cibachrome prints at a scale of approximately 1:1,600 produced from 35 mm color infrared slides were used to analyze density patterns using prepared tonal density scales and multicell grids registered to ground panels shown on the photograph. Correlation analyses between tonal density and vegetation biomass obtained from ground samples and harvests were carried out. Correlations between mean tonal density and harvest biomass data gave consistently high coefficients ranging from 0.530 to 0.896 at the 0.001 significance level. Corresponding multiple regression analysis resulted in higher correlation coefficients. The results of this study indicate that aerial infrared photography can be used to estimate standing crop biomass on waste water irrigated old field ecosystems. Combined with minimal ground truth data, this technique could enable managers of wastewater irrigation projects to precisely time harvest of such systems for maximal removal of nutrients in harvested biomass.

  13. Online Reinforcement Learning Using a Probability Density Estimation.

    PubMed

    Agostini, Alejandro; Celaya, Enric

    2017-01-01

    Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concentrated in particular convergence regions, which in the long term tend to dominate the approximation in the less sampled regions. The nonstationarity comes from the recursive nature of the estimations typical of temporal difference methods. This nonstationarity has a local profile, varying not only along the learning process but also along different regions of the state space. We propose to deal with these problems using an estimation of the probability density of samples represented with a gaussian mixture model. To deal with the nonstationarity problem, we use the common approach of introducing a forgetting factor in the updating formula. However, instead of using the same forgetting factor for the whole domain, we make it dependent on the local density of samples, which we use to estimate the nonstationarity of the function at any given input point. To address the biased sampling problem, the forgetting factor applied to each mixture component is modulated according to the new information provided in the updating, rather than forgetting depending only on time, thus avoiding undesired distortions of the approximation in less sampled regions.

  14. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    PubMed

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance 1, which is the distance between the release point and the boundary beyond which the population is absent.

  15. Application of the stage-projection model with density-dependent fecundity to the population dynamics of Spanish ibex

    USGS Publications Warehouse

    Escos, J.; Alados, C.L.; Emlen, John M.

    1994-01-01

    A stage-class population model with density-feedback term included was used to identify the most critical parameters determining the population dynamics of female Spanish ibex (Capra pyrenaica) in southern Spain. A population in the Cazorla and Segura mountains is rapidly declining, but the eastern Sierra Nevada population is growing. The stable population density obtained using estimated values of kid and adult survival (0.49 and 0.87, respectively) and with fecundity equal to 0.367 in the absence of density feedback is 12.7 or 16.82 individuals/km2, based on a non-time-lagged and a time-lagged model, respectively. Given the maximum estimate of fecundity and an adult survival rate of 0.87, a kid survival rate of at least 0.41 is required to avoid extinction. At the minimum fecundity estimate, kid survival would have to exceed 0.52. Elasticities were used to estimate the influence of variation in life-cycle parameters on the intrinsic rate of increase. Adult survival is the most critical parameter, while fecundity and juvenile survival are less important. An increase in adult survival from 0.87 to 0.91 in the Cazorla and Segura mountains population would almost stabilize the population in the absence of stochastic variation, while the same increase in the Sierra Nevada population would yield population growth of 4–5% per annum. A reduction in adult survival to 0.83 results in population decline in both cases.
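
    A minimal sketch of a two-stage (kid/adult) projection with a density-dependent fecundity term, in the spirit of the model described above. The survival and maximum fecundity values follow the abstract, but the logistic feedback form, the carrying-capacity constant K, and the starting vector are assumptions for illustration, not the authors' parameterization.

      import numpy as np

      def project(n0, years=50, s_kid=0.49, s_adult=0.87, f_max=0.367, K=15.0):
          """n0: [kids, adults] per km^2; K is an assumed carrying-capacity constant."""
          n = np.array(n0, dtype=float)
          for _ in range(years):
              f = f_max * max(0.0, 1.0 - n.sum() / K)        # density-dependent fecundity
              A = np.array([[0.0,   f],
                            [s_kid, s_adult]])               # stage-projection matrix
              n = A @ n
          return n

      print(project([2.0, 8.0]))                             # projected stage vector after 50 years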

  16. EEG source localization: Sensor density and head surface coverage.

    PubMed

    Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don

    2015-12-30

    The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using commonly adopted head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
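
    As a generic example of the linear inverse weight techniques referred to above, the sketch below forms Tikhonov-regularized minimum-norm weights from a lead-field matrix. The lead field, regularization value, and data are random placeholders; the study's forward models and inverse variants are not reproduced.

      import numpy as np

      def minimum_norm_weights(L, lam=1e-2):
          """L: (n_sensors, n_sources) lead-field matrix; lam: Tikhonov regularization."""
          n = L.shape[0]
          return L.T @ np.linalg.inv(L @ L.T + lam * np.eye(n))   # (n_sources, n_sensors) inverse weights

      rng = np.random.default_rng(0)
      L = rng.standard_normal((128, 5000))                   # placeholder dense-array lead field
      W = minimum_norm_weights(L)
      scalp = rng.standard_normal(128)                       # one sample of scalp potentials
      source_estimate = W @ scalp                            # linear source estimate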

  17. Star Formation & the Star Formation History of the Universe: Exploring the X-ray and the Multi-wavelength Point of Views

    NASA Astrophysics Data System (ADS)

    Burgarella, Denis; Ciesla, Laure; Boquien, Mederic; Buat, Veronique; Roehlly, Yannick

    2015-09-01

    The star formation rate density traces the formation of stars in the universe. To estimate the star formation rate of galaxies, we can use a wide range of star formation tracers: continuum measurements in most wavelength domains, lines, supernovae and GRBs. All of them have pros and cons. Most of the monochromatic tracers are hampered by the effects of dust, so before being able to estimate the star formation rate, we first need to obtain a reliable estimate of the dust attenuation. The advantage of the X-ray wavelength range is that we can consider it as free from the effects of dust. In this talk, we will estimate how many galaxies we could detect with ATHENA in order to obtain the star formation rate density. For this, I will use my recent Herschel paper, where the total (UV + IR) star formation rate density was evaluated up to z ~ 4, and provide quantitative figures for what ATHENA will detect as a function of redshift and luminosity. ATHENA will need predictions that are in agreement with what we observe in the other wavelength ranges. I will present the code CIGALE (http://cigale.lam.fr). The new and public version of CIGALE (released in April 2015) allows one to model the emission of galaxies from the far-ultraviolet to the radio, and it can make predictions in any of these wavelength ranges. I will show how galaxy star formation rates can be estimated in a way that combines all the advantages of monochromatic tracers but not their caveats. It should be stressed that we can model the emission of AGNs in the FUV-to-FIR range using several models. Finally, I will explain why we are seriously considering extending CIGALE to the X-ray range to predict the X-ray emission of galaxies, including any AGN.

  18. Double the dates and go for Bayes - Impacts of model choice, dating density and quality on chronologies

    NASA Astrophysics Data System (ADS)

    Blaauw, Maarten; Christen, J. Andrés; Bennett, K. D.; Reimer, Paula J.

    2018-05-01

    Reliable chronologies are essential for most Quaternary studies, but little is known about how age-depth model choice, as well as dating density and quality, affect the precision and accuracy of chronologies. A meta-analysis suggests that most existing late-Quaternary studies contain fewer than one date per millennium, and provide millennial-scale precision at best. We use existing and simulated sediment cores to estimate what dating density and quality are required to obtain accurate chronologies at a desired precision. For many sites, a doubling in dating density would significantly improve chronologies and thus their value for reconstructing and interpreting past environmental changes. Commonly used classical age-depth models stop becoming more precise after a minimum dating density is reached, but the precision of Bayesian age-depth models which take advantage of chronological ordering continues to improve with more dates. Our simulations show that classical age-depth models severely underestimate uncertainty and are inaccurate at low dating densities, and also perform poorly at high dating densities. On the other hand, Bayesian age-depth models provide more realistic precision estimates, including at low to average dating densities, and are much more robust against dating scatter and outliers. Indeed, Bayesian age-depth models outperform classical ones at all tested dating densities, qualities and time-scales. We recommend that chronologies should be produced using Bayesian age-depth models taking into account chronological ordering and based on a minimum of 2 dates per millennium.

  19. Precipitation Estimation Using L-Band and C-Band Soil Moisture Retrievals

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.; Brocca, Luca; Crow, Wade T.; Burgin, Mariko S.; De Lannoy, Gabrielle J. M.

    2016-01-01

    An established methodology for estimating precipitation amounts from satellite-based soil moisture retrievals is applied to L-band products from the Soil Moisture Active Passive (SMAP) and Soil Moisture and Ocean Salinity (SMOS) satellite missions and to a C-band product from the Advanced Scatterometer (ASCAT) mission. The precipitation estimates so obtained are evaluated against in situ (gauge-based) precipitation observations from across the globe. The precipitation estimation skill achieved using the L-band SMAP and SMOS data sets is higher than that obtained with the C-band product, as might be expected given that L-band is sensitive to a thicker layer of soil and thereby provides more information on the response of soil moisture to precipitation. The square of the correlation coefficient between the SMAP-based precipitation estimates and the observations (for aggregations to approximately 100 km and 5 days) is on average about 0.6 in areas of high rain gauge density. Satellite missions specifically designed to monitor soil moisture thus do provide significant information on precipitation variability, information that could contribute to efforts in global precipitation estimation.

  20. Shape information from a critical point analysis of calculated electron density maps: application to DNA-drug systems

    NASA Astrophysics Data System (ADS)

    Leherte, L.; Allen, F. H.; Vercauteren, D. P.

    1995-04-01

    A computational method is described for mapping the volume within the DNA double helix accessible to a groove-binding antibiotic, netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to the local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to give a good representation of the electron density function at various resolutions, while at the atomic level the ellipsoid method gives results in close agreement with those from the conventional spherical van der Waals approach.

  1. Shape information from a critical point analysis of calculated electron density maps: Application to DNA-drug systems

    NASA Astrophysics Data System (ADS)

    Leherte, Laurence; Allen, Frank H.

    1994-06-01

    A computational method is described for mapping the volume within the DNA double helix accessible to the groove-binding antibiotic netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to give a good representation of the electron density function at various resolutions. At the atomic level, the ellipsoid method gives results which are in close agreement with those from the conventional spherical van der Waals approach.

  2. The Baryonic and Dark Matter Distributions in Abell 401

    NASA Astrophysics Data System (ADS)

    Nevalainen, J.; Markevitch, M.; Forman, W.

    1999-11-01

    We combine spatially resolved ASCA temperature data with ROSAT imaging data to constrain the total mass distribution in the cluster A401, assuming that the cluster is in hydrostatic equilibrium, but without the assumption of gas isothermality. We obtain a total mass within the X-ray core (290 h_50^-1 kpc) of 1.2 (+0.1/-0.5) × 10^14 h_50^-1 M_solar at the 90% confidence level, 1.3 times larger than the isothermal estimate. The total mass within r_500 (1.7 h_50^-1 Mpc) is M_500 = 0.9 (+0.3/-0.2) × 10^15 h_50^-1 M_solar at 90% confidence, in agreement with the optical virial mass estimate, and 1.2 times smaller than the isothermal estimate. Our M_500 value is 1.7 times smaller than that estimated using the mass-temperature scaling law predicted by simulations. The best-fit dark matter density profile scales as r^-3.1 at large radii, which is consistent with the Navarro, Frenk & White (NFW) "universal profile" as well as the King profile of the galaxy density in A401. From the imaging data, the gas density profile is shallower than the dark matter profile, scaling as r^-2.1 at large radii, leading to a monotonically increasing gas mass fraction with radius. Within r_500 the gas mass fraction reaches a value of f_gas = 0.21 (+0.06/-0.05) h_50^-3/2 (90% confidence errors). Assuming that f_gas (plus an estimate of the stellar mass) is the universal value of the baryon fraction, we estimate the 90% confidence upper limit of the cosmological matter density to be Ω_m < 0.31, in conflict with an Einstein-de Sitter universe. Even though the NFW dark matter density profile is statistically consistent with the temperature data, its central temperature cusp would lead to convective instability at the center, because the gas density does not have a corresponding peak. One way to reconcile a cusp-shaped total mass profile with the observed gas density profile, regardless of the temperature data, is to introduce a significant nonthermal pressure in the center. Such a pressure must satisfy the hydrostatic equilibrium condition without inducing turbulence. Alternately, significant mass drop-out from the cooling flow would make the temperature less peaked and the NFW profile acceptable. However, the quality of data is not adequate to test this possibility.
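
    The hedged sketch below shows the standard hydrostatic-equilibrium mass estimate that underlies analyses of this kind (spherical symmetry, thermal pressure only). The gas density and temperature profiles are simple placeholders, not the A401 fits, and the mean molecular weight is an assumed value.

      import numpy as np

      G, K_B, M_P = 6.674e-11, 1.381e-23, 1.673e-27          # SI constants
      MU, KPC, MSUN = 0.6, 3.086e19, 1.989e30                # assumed mean molecular weight; unit conversions

      def hydrostatic_mass(r_kpc, n_gas, T_keV):
          """M(<r) in solar masses from log-derivatives of gas density and temperature."""
          r = r_kpc * KPC
          T = T_keV * 1.602e-16 / K_B                        # keV -> Kelvin
          dlnn = np.gradient(np.log(n_gas), np.log(r))
          dlnT = np.gradient(np.log(T), np.log(r))
          return -(K_B * T * r / (G * MU * M_P)) * (dlnn + dlnT) / MSUN

      r = np.linspace(100.0, 1700.0, 50)                     # radii (kpc)
      n = (1.0 + (r / 290.0) ** 2) ** (-3.0 * 0.7 / 2.0)     # beta-model-like gas density (arbitrary units)
      T = np.full_like(r, 8.0)                               # flat 8 keV temperature placeholder
      mass_profile = hydrostatic_mass(r, n, T)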

  3. Mechanical behaviour of the lithosphere beneath the Adamawa uplift (Cameroon, West Africa) based on gravity data

    NASA Astrophysics Data System (ADS)

    Poudjom Djomani, Y. H.; Diament, M.; Albouy, Y.

    1992-07-01

    The Adamawa massif in Central Cameroon is one of the African domal uplifts of volcanic origin. It is an elongated feature, 200 km wide. The gravity anomalies over the Adamawa uplift were studied to determine the mechanical behaviour of the lithosphere. Two approaches were used to analyse six gravity profiles that are 600 km long and run perpendicular to the Adamawa trend. First, the coherence function between topography and gravity was interpreted; second, source depth estimation by spectral analysis of the gravity data was performed. To get significant information for the interpretation of the experimental coherence function, the length of the profiles was varied from 320 km to 600 km. This treatment allows one to obtain numerical estimates of the coherence function. The coherence function analysis shows that the lithosphere is deflected and thin beneath the Adamawa uplift, and the effective elastic thickness is about 20 km. To fit the coherence, a load from below needs to be taken into account. This result for the Adamawa massif is of the same order of magnitude as those obtained for other African uplifts such as the Hoggar, Darfur and Kenya domes. For the depth estimation, three major density contrasts were found: the shallowest depth (4-15 km) can be correlated with shear zone structures and the associated sedimentary basins beneath the uplift; the second density contrast (18-38 km) corresponds to the Moho; and finally, the deepest contrast (70-90 km) would be the top of the upper mantle and denotes the low-density zone beneath the Adamawa uplift.

  4. A new catalogue of Galactic novae: investigation of the MMRD relation and spatial distribution

    NASA Astrophysics Data System (ADS)

    Özdönmez, Aykut; Ege, Ergün; Güver, Tolga; Ak, Tansel

    2018-05-01

    In this study, a new Galactic novae catalogue is introduced, collecting important parameters of these sources such as their light-curve parameters, classifications, the full width at half-maximum (FWHM) of the Hα line, distances and interstellar reddening estimates. The catalogue is also published on a website with a search option via a SQL query and an online tool to re-calculate the distance/reddening of a nova from the derived reddening-distance relations. Using the novae in the catalogue, the existence of a maximum magnitude-rate of decline (MMRD) relation in the Galaxy is investigated. Although an MMRD relation was obtained, a significant scatter in the resulting MMRD distribution still exists. We suggest that the MMRD relation likely depends on other parameters in addition to the decline time, such as the FWHM of Hα and the light-curve shape. Using two different samples depending on the distances in the catalogue and from the derived MMRD relation, the spatial distributions of Galactic novae as a function of their spectral and speed classes were studied. The investigation of the Galactic model parameters implies that the best estimates for the local outburst density are 3.6 and 4.2 × 10^-10 pc^-3 yr^-1 with scale heights of 148 and 175 pc, while the space density changes in the range of 0.4-16 × 10^-6 pc^-3. The local outburst density and scale height obtained in this study imply that the disc nova rate in the Galaxy is in the range of ~20 to ~100 yr^-1, with an average estimate of 67 (+21/-17) yr^-1.

  5. Nuclear half-lives for α-radioactivity of elements with 100 ≤ Z ≤ 130

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhury, P. Roy; Samanta, C.; Physics Department, Gottwald Science Center, University of Richmond, Richmond, VA 23173

    2008-11-15

    Theoretical estimates for the half-lives of about 1700 isotopes of heavy elements with 100 ≤ Z ≤ 130 are tabulated using theoretical Q-values. The quantum mechanical tunneling probabilities are calculated within a WKB framework using microscopic nuclear potentials. The microscopic nucleus-nucleus potentials are obtained by folding the densities of the interacting nuclei with a density-dependent M3Y effective nucleon-nucleon interaction. The α-decay half-lives calculated in this formalism using the experimental Q-values were found to be in good agreement with a wide range of experimental data spanning about 20 orders of magnitude. The theoretical Q-values used for the present calculations are extracted from three different mass estimates, viz. Myers-Swiatecki, Muntian-Hofmann-Patyk-Sobiczewski, and Koura-Tachibana-Uno-Yamada.
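
    A bare-bones numerical illustration of the WKB tunneling step, using a simple Coulomb-plus-hard-sphere barrier rather than the paper's folded M3Y potential; all numbers below (radius constant, assault frequency, sample nucleus) are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

hbarc = 197.327      # MeV fm
e2 = 1.43996         # MeV fm (e^2 / 4 pi eps0)
amu = 931.494        # MeV

def wkb_half_life(Z_parent, A_parent, Q, nu=1e21):
    """Rough alpha-decay half-life (s) from a WKB Coulomb-barrier penetrability.
    nu is an assumed assault frequency (s^-1)."""
    Z_d, A_d = Z_parent - 2, A_parent - 4          # daughter nucleus
    mu = 4.0 * A_d / (4.0 + A_d) * amu             # reduced mass (MeV/c^2)
    R = 1.2 * (4 ** (1 / 3) + A_d ** (1 / 3))      # touching radius (fm, assumed r0 = 1.2 fm)
    b = 2 * Z_d * e2 / Q                           # outer classical turning point (fm)

    def k(r):                                      # local wave number inside the barrier (fm^-1)
        return np.sqrt(2 * mu * (2 * Z_d * e2 / r - Q)) / hbarc

    action, _ = quad(k, R, b, limit=200)
    P = np.exp(-2 * action)                        # WKB penetrability
    return np.log(2) / (nu * P)

# Illustrative case only (not taken from the paper's tables)
print(f"t_1/2 ~ {wkb_half_life(Z_parent=106, A_parent=266, Q=8.8):.2e} s")
```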

  6. Advective and diapycnal diffusive oceanic flux in Tenerife - La Gomera Channel

    NASA Astrophysics Data System (ADS)

    Marrero-Díaz, A.; Rodriguez-Santana, A.; Hernández-Arencibia, M.; Machín, F.; García-Weil, L.

    2012-04-01

    During 2008, the commercial passenger ship Volcán de Tauce of the Naviera Armas company was used over several months to obtain vertical profiles of temperature from expendable bathythermograph probes at eight stations across the Tenerife - La Gomera channel. From these temperature data we estimated vertical sections of potential density and geostrophic transport with high spatial and temporal resolution (5 nm between stations and one to two months between cruises). The seasonal variability obtained for the geostrophic transport in this channel shows important differences from other Canary Islands channels. From the potential density and geostrophic velocity data we estimated vertical diffusion coefficients and diapycnal diffusive fluxes, using a parameterization that depends on the gradient Richardson number. In the center of the channel and close to La Gomera Island, we found higher values of these diffusive fluxes. The convergence and divergence of these fluxes require further study before conclusions can be drawn about their impact on the distribution of nutrients in the study area and on marine ecosystems. This work is being used in the research projects TRAMIC and PROMECA.
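
    One widely used Richardson-number-dependent form for the vertical diffusivity is the Pacanowski & Philander (1981) scheme; the abstract does not state which parameterization was used, so the sketch below is only a generic illustration with assumed parameter values.

```python
import numpy as np

def diapycnal_diffusivity(Ri, nu0=1e-2, alpha=5.0, n=2, nu_b=1e-4, kappa_b=1e-5):
    """Vertical viscosity and diffusivity (m^2/s) as functions of the gradient
    Richardson number, following the Pacanowski-Philander (1981) form."""
    Ri = np.maximum(Ri, 0.0)                 # applied to stable stratification
    nu = nu0 / (1.0 + alpha * Ri) ** n + nu_b
    kappa = nu / (1.0 + alpha * Ri) + kappa_b
    return nu, kappa

# Gradient Richardson number from an assumed buoyancy frequency and shear
N2 = 1e-5                                    # s^-2 (assumed)
shear2 = np.array([1e-6, 1e-5, 1e-4])        # s^-2 (assumed)
Ri = N2 / shear2
nu, kappa = diapycnal_diffusivity(Ri)
for r, k in zip(Ri, kappa):
    print(f"Ri = {r:6.2f}  kappa = {k:.2e} m^2/s")
```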

  7. Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy

    NASA Astrophysics Data System (ADS)

    Bucht, Curry; Söderberg, Per; Manneberg, Göran

    2009-02-01

    The corneal endothelium serves as the posterior barrier of the cornea. Factors such as the clarity and refractive properties of the cornea are in direct relationship to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor. Morphometry of the corneal endothelium is presently done by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development of fully automated analysis of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software, which automatically performed digital enhancement of the images. The digitally enhanced images of the corneal endothelium were then transformed using the fast Fourier transform (FFT). Tools were developed and applied for the identification and analysis of relevant characteristics of the Fourier-transformed images. The data obtained from each Fourier-transformed image were used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well-known diffraction theory. Estimates of the cell density of the corneal endothelium were obtained using the fully automated analysis software on images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis, and a relatively strong correlation was found.
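
    A schematic of the Fourier-based idea (not the authors' Matlab implementation): a quasi-regular cell mosaic produces a ring of energy in the 2-D power spectrum, and the ring radius gives the mean cell spacing and hence the cell density. The hexagonal-packing conversion, pixel scale and toy image below are assumptions.

```python
import numpy as np

def cell_density_from_fft(img, pixel_size_mm):
    """Estimate endothelial cell density (cells/mm^2) from the dominant spatial
    frequency of a (roughly square) specular-microscopy image."""
    img = img - img.mean()
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    n = img.shape[0]
    cy, cx = n // 2, n // 2
    y, x = np.indices(spec.shape)
    r = np.hypot(y - cy, x - cx).astype(int)

    # Radially averaged power spectrum; skip the DC-dominated centre bins
    radial = np.bincount(r.ravel(), weights=spec.ravel()) / np.bincount(r.ravel())
    r_peak = np.argmax(radial[3:]) + 3             # peak radius in pixels

    spacing_mm = n * pixel_size_mm / r_peak        # mean cell spacing
    return 2.0 / (np.sqrt(3.0) * spacing_mm ** 2)  # hexagonal packing assumption

# Toy image: crossed cosine gratings with ~25 um period, 1 um pixels (assumed)
n, pix = 256, 0.001
xx, yy = np.meshgrid(np.arange(n) * pix, np.arange(n) * pix)
img = np.cos(2 * np.pi * xx / 0.025) + np.cos(2 * np.pi * yy / 0.025)
print(f"~{cell_density_from_fft(img, pix):.0f} cells/mm^2")
```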

  8. Optimal estimation retrieval of aerosol microphysical properties from SAGE II satellite observations in the volcanically unperturbed lower stratosphere

    NASA Astrophysics Data System (ADS)

    Wurl, D.; Grainger, R. G.; McDonald, A. J.; Deshler, T.

    2010-05-01

    Stratospheric aerosol particles under non-volcanic conditions are typically smaller than 0.1 μm. Due to fundamental limitations of the scattering theory in the Rayleigh limit, these tiny particles are hard to measure by satellite instruments. As a consequence, current estimates of global aerosol properties retrieved from spectral aerosol extinction measurements tend to be strongly biased. Aerosol surface area densities, for instance, are observed to be about 40% smaller than those derived from correlative in situ measurements (Deshler et al., 2003). An accurate knowledge of the global distribution of aerosol properties is, however, essential to better understand and quantify the role they play in atmospheric chemistry, dynamics, radiation and climate. To address this need a new retrieval algorithm was developed, which employs a nonlinear Optimal Estimation (OE) method to iteratively solve for the monomodal size distribution parameters which are statistically most consistent with both the satellite-measured multi-wavelength aerosol extinction data and a priori information. By thus combining spectral extinction measurements (at visible to near infrared wavelengths) with prior knowledge of aerosol properties at background level, even the smallest particles are taken into account which are practically invisible to optical remote sensing instruments. The performance of the OE retrieval algorithm was assessed based on synthetic spectral extinction data generated from both monomodal and small-mode-dominant bimodal sulphuric acid aerosol size distributions. For monomodal background aerosol, the new algorithm was shown to fairly accurately retrieve the particle sizes and associated integrated properties (surface area and volume densities), even in the presence of large extinction uncertainty. The associated retrieved uncertainties are a good estimate of the true errors. In the case of bimodal background aerosol, where the retrieved (monomodal) size distributions naturally differ from the correct bimodal values, the associated surface area (A) and volume densities (V) are, nevertheless, fairly accurately retrieved, except at values larger than 1.0 μm^2 cm^-3 (A) and 0.05 μm^3 cm^-3 (V), where they tend to underestimate the true bimodal values. Due to the limited information content in the SAGE II spectral extinction measurements this kind of forward model error cannot be avoided here. Nevertheless, the retrieved uncertainties are a good estimate of the true errors in the retrieved integrated properties, except where the surface area density exceeds the 1.0 μm^2 cm^-3 threshold. When applied to near-global SAGE II satellite extinction measured in 1999, the retrieved OE surface area and volume densities are observed to be larger by, respectively, 20-50% and 10-40% compared to those estimates obtained by the SAGE II operational retrieval algorithm. An examination of the OE algorithm biases with in situ data indicates that the new OE aerosol property estimates tend to be more realistic than previous estimates obtained from remotely sensed data through other retrieval techniques. Based on the results of this study we therefore suggest that the new Optimal Estimation retrieval algorithm is able to contribute to an advancement in aerosol research by considerably improving current estimates of aerosol properties in the lower stratosphere under low aerosol loading conditions.
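
    The core of a nonlinear optimal estimation retrieval of this kind is the Gauss-Newton iteration that balances the measurement misfit against the a priori term. The sketch below shows that update for a generic forward model F; the toy forward model, covariances and state vector are assumptions, not the paper's aerosol model.

```python
import numpy as np

def oe_retrieval(y, x_a, S_a, S_e, forward, jacobian, n_iter=10):
    """Nonlinear optimal-estimation retrieval via Gauss-Newton iteration.
    y: measurements, x_a: a priori state, S_a/S_e: prior/measurement covariances."""
    x = x_a.copy()
    S_a_inv, S_e_inv = np.linalg.inv(S_a), np.linalg.inv(S_e)
    for _ in range(n_iter):
        K = jacobian(x)                                   # Jacobian dF/dx at current state
        A = S_a_inv + K.T @ S_e_inv @ K                   # inverse posterior covariance
        g = K.T @ S_e_inv @ (y - forward(x)) - S_a_inv @ (x - x_a)
        x = x + np.linalg.solve(A, g)
    S_hat = np.linalg.inv(S_a_inv + K.T @ S_e_inv @ K)    # retrieved-state covariance
    return x, S_hat

# Toy forward model: extinction at 3 wavelengths from 2 state parameters (illustrative only)
def forward(x):
    return np.array([x[0] * np.exp(-0.5 * x[1]),
                     x[0] * np.exp(-1.0 * x[1]),
                     x[0] * np.exp(-2.0 * x[1])])

def jacobian(x, eps=1e-6):
    # Finite-difference Jacobian of the toy forward model
    return np.column_stack([(forward(x + eps * e) - forward(x)) / eps for e in np.eye(2)])

x_true = np.array([2.0, 0.8])
y = forward(x_true) + 0.01 * np.random.default_rng(1).standard_normal(3)
x_hat, S_hat = oe_retrieval(y, x_a=np.array([1.0, 1.0]),
                            S_a=np.diag([1.0, 1.0]), S_e=np.diag([1e-4] * 3),
                            forward=forward, jacobian=jacobian)
print("retrieved state:", x_hat)
```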

  9. Volcanic explosion clouds - Density, temperature, and particle content estimates from cloud motion

    NASA Technical Reports Server (NTRS)

    Wilson, L.; Self, S.

    1980-01-01

    Photographic records of 10 vulcanian eruption clouds produced during the 1978 eruption of Fuego Volcano in Guatemala have been analyzed to determine cloud velocity and acceleration at successive stages of expansion. Cloud motion is controlled by air drag (dominant during early, high-speed motion) and buoyancy (dominant during late motion when the cloud is convecting slowly). Cloud densities in the range 0.6 to 1.2 times that of the surrounding atmosphere were obtained by fitting equations of motion for two common cloud shapes (spheres and vertical cylinders) to the observed motions. Analysis of the heat budget of a cloud permits an estimate of cloud temperature and particle weight fraction to be made from the density. Model results suggest that clouds generally reached temperatures within 10 K of that of the surrounding air within 10 seconds of formation and that dense particle weight fractions were less than 2% by this time. The maximum sizes of dense particles supported by motion in the convecting clouds range from 140 to 1700 microns.

  10. Neurite density from magnetic resonance diffusion measurements at ultrahigh field: Comparison with light microscopy and electron microscopy

    PubMed Central

    Jespersen, Sune N.; Bjarkam, Carsten R.; Nyengaard, Jens R.; Chakravarty, M. Mallar; Hansen, Brian; Vosegaard, Thomas; Østergaard, Leif; Yablonskiy, Dmitriy; Nielsen, Niels Chr.; Vestergaard-Poulsen, Peter

    2010-01-01

    Due to its unique sensitivity to tissue microstructure, diffusion-weighted magnetic resonance imaging (MRI) has found many applications in clinical and fundamental science. With few exceptions, however, a precise correspondence between physiological or biophysical properties and the obtained diffusion parameters remains uncertain due to a lack of specificity. In this work, we address this problem by comparing diffusion parameters of a recently introduced model for water diffusion in brain matter to light microscopy and quantitative electron microscopy. Specifically, we compare diffusion model predictions of neurite density in rats to optical myelin staining intensity and stereological estimation of neurite volume fraction using electron microscopy. We find that the diffusion model describes the data better and that its parameters show stronger correlation with optical and electron microscopy, and thus reflect myelinated neurite density better, than the more frequently used diffusion tensor imaging (DTI) and cumulant expansion methods. Furthermore, the estimated neurite orientations capture dendritic architecture more faithfully than DTI diffusion ellipsoids. PMID:19732836

  11. The application of a geometric optical canopy reflectance model to semiarid shrub vegetation

    NASA Technical Reports Server (NTRS)

    Franklin, Janet; Turner, Debra L.

    1992-01-01

    Estimates are obtained of the average plant size and density of shrub vegetation on the basis of SPOT High Resolution Visible Multispectral imagery from Chihuahuan desert areas, using the Li and Strahler (1985) model. The aggregated predictions for a number of stands within a class were accurate to within one or two standard errors of the observed average value. Accuracy was highest for those classes of vegetation where the nonrandom scrub pattern was characterized for the class on the basis of the average coefficient of determination of density.

  12. Neutrinophilic two Higgs doublet model with dark matter under an alternative U(1)_{B-L} gauge symmetry

    NASA Astrophysics Data System (ADS)

    Nomura, Takaaki; Okada, Hiroshi

    2018-03-01

    We propose a Dirac-type active neutrino with a rank-two mass matrix and a Majorana fermion dark matter candidate in an alternative local U(1)_{B-L} extension of the neutrinophilic two Higgs doublet model. Our dark matter candidate is stabilized by its charge assignment under the gauge symmetry, without imposing an extra discrete Z_2 symmetry, and its relic density is obtained from a Z' boson exchange process. Taking into account collider constraints on the Z' boson mass and coupling, we estimate the relic density.

  13. Asset allocation using option-implied moments

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.; Tolos, S. M.

    2017-09-01

    This study uses an option-implied distribution as the input to asset allocation. The computation of the risk-neutral densities (RND) is based on the Dow Jones Industrial Average (DJIA) index option and its constituents. Since the RND estimation does not incorporate a risk premium, the conversion of the RND into a risk-world density (RWD) is required. The RWD is obtained through parametric calibration using beta distributions. The mean, volatility, and covariance are then calculated to construct the portfolio. The performance of the portfolio is evaluated using the portfolio volatility and the Sharpe ratio.

  14. Gibbs measures based on 1d (an)harmonic oscillators as mean-field limits

    NASA Astrophysics Data System (ADS)

    Lewin, Mathieu; Nam, Phan Thành; Rougerie, Nicolas

    2018-04-01

    We prove that Gibbs measures based on 1D defocusing nonlinear Schrödinger functionals with sub-harmonic trapping can be obtained as the mean-field/large temperature limit of the corresponding grand-canonical ensemble for many bosons. The limit measure is supported on Sobolev spaces of negative regularity, and the corresponding density matrices are not trace-class. The general proof strategy is that of a previous paper of ours, but we have to complement it with Hilbert-Schmidt estimates on reduced density matrices.

  15. Estimating Bat and Bird Mortality Occurring at Wind Energy Turbines from Covariates and Carcass Searches Using Mixture Models

    PubMed Central

    Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver

    2013-01-01

    Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production. PMID:23844144
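
    As a toy illustration of the underlying idea (a greatly simplified stand-in for the authors' mixture model, ignoring searcher efficiency and carcass persistence, which the real model handles), nightly carcass counts can be treated as Poisson with a rate proportional to an acoustic activity index, and the per-activity collision rate estimated by maximum likelihood. All data below are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

# Invented nightly data: acoustic bat activity index and carcasses found
activity = np.array([3, 10, 1, 7, 0, 15, 5, 8, 2, 12], dtype=float)
carcasses = np.array([0, 2, 0, 1, 0, 3, 1, 1, 0, 2])

def neg_log_lik(c):
    """Negative log-likelihood for carcasses ~ Poisson(c * activity)."""
    lam = np.maximum(c * activity, 1e-12)
    return -poisson.logpmf(carcasses, lam).sum()

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0), method="bounded")
c_hat = res.x
print(f"Estimated collisions per unit activity: {c_hat:.3f}")
print(f"Predicted collisions on a night with activity 20: {20 * c_hat:.2f}")
```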

  16. Estimating bat and bird mortality occurring at wind energy turbines from covariates and carcass searches using mixture models.

    PubMed

    Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver

    2013-01-01

    Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production.

  17. Population Pharmacokinetic/Pharmacodynamic Analysis of Alirocumab in Healthy Volunteers or Hypercholesterolemic Subjects Using an Indirect Response Model to Predict Low-Density Lipoprotein Cholesterol Lowering: Support for a Biologics License Application Submission: Part II.

    PubMed

    Nicolas, Xavier; Djebli, Nassim; Rauch, Clémence; Brunet, Aurélie; Hurbin, Fabrice; Martinez, Jean-Marie; Fabre, David

    2018-05-03

    Alirocumab, a human monoclonal antibody against proprotein convertase subtilisin/kexin type 9 (PCSK9), significantly lowers low-density lipoprotein cholesterol levels. This analysis aimed to develop and qualify a population pharmacokinetic/pharmacodynamic model for alirocumab based on pooled data obtained from 13 phase I/II/III clinical trials. From a dataset of 2799 individuals (14,346 low-density lipoprotein-cholesterol values), individual pharmacokinetic parameters from the population pharmacokinetic model presented in Part I of this series were used to estimate alirocumab concentrations. As a second step, we then developed the current population pharmacokinetic/pharmacodynamic model using an indirect response model with a Hill coefficient, parameterized with increasing low-density lipoprotein cholesterol elimination, to relate alirocumab concentrations to low-density lipoprotein cholesterol values. The population pharmacokinetic/pharmacodynamic model allowed the characterization of the pharmacokinetic/pharmacodynamic properties of alirocumab in the target population and estimation of individual low-density lipoprotein cholesterol levels and derived pharmacodynamic parameters (the maximum decrease in low-density lipoprotein cholesterol values from baseline and the difference between baseline low-density lipoprotein cholesterol and the pre-dose value before the next alirocumab dose). Significant parameter-covariate relationships were retained in the model, with a total of ten covariates (sex, age, weight, free baseline PCSK9, total time-varying PCSK9, concomitant statin administration, total baseline PCSK9, co-administration of high-dose statins, disease status) included in the final population pharmacokinetic/pharmacodynamic model to explain between-subject variability. Nevertheless, the high number of covariates included in the model did not have a clinically meaningful impact on model-derived pharmacodynamic parameters. This model successfully allowed the characterization of the population pharmacokinetic/pharmacodynamic properties of alirocumab in its target population and the estimation of individual low-density lipoprotein cholesterol levels.
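
    For orientation, an indirect response model with stimulation of LDL-C elimination (the general structure named above) can be written as dLDL/dt = k_in - k_out * [1 + Emax * C^gamma / (EC50^gamma + C^gamma)] * LDL. The sketch below integrates that structure with purely illustrative parameter values and a toy concentration profile, not the published population estimates.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (NOT the published alirocumab estimates)
k_out = 0.05        # 1/day, first-order LDL-C elimination
baseline = 130.0    # mg/dL, baseline LDL-C
k_in = k_out * baseline
Emax, EC50, gamma = 3.0, 5.0, 1.5   # stimulation of elimination by drug concentration (mg/L)

def conc(t, dose_interval=14.0, c_max=20.0, ke=0.1):
    """Toy drug concentration profile: exponential decay after each dose (assumed)."""
    t_since_dose = t % dose_interval
    return c_max * np.exp(-ke * t_since_dose)

def dldl_dt(t, y):
    c = conc(t)
    stim = 1.0 + Emax * c ** gamma / (EC50 ** gamma + c ** gamma)
    return k_in - k_out * stim * y

sol = solve_ivp(dldl_dt, t_span=(0, 56), y0=[baseline], max_step=0.5)
print(f"LDL-C after 8 weeks: {sol.y[0, -1]:.1f} mg/dL "
      f"({100 * (1 - sol.y[0, -1] / baseline):.0f}% reduction)")
```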

  18. Estimating abundance of mountain lions from unstructured spatial sampling

    USGS Publications Warehouse

    Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.

    2012-01-01

    Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km^2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples, resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km^2 (95% CI 2.3–5.7) under the distance only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance × sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km^2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.

  19. Modelling the vertical distribution of canopy fuel load using national forest inventory and low-density airborne laser scanning data

    PubMed Central

    Castedo-Dorado, Fernando; Hevia, Andrea; Vega, José Antonio; Vega-Nieva, Daniel; Ruiz-González, Ana Daría

    2017-01-01

    The fuel complex variables canopy bulk density and canopy base height are often used to predict crown fire initiation and spread. Direct measurement of these variables is impractical, and they are usually estimated indirectly by modelling. Recent advances in predicting crown fire behaviour require accurate estimates of the complete vertical distribution of canopy fuels. The objectives of the present study were to model the vertical profile of available canopy fuel in pine stands by using data from the Spanish national forest inventory plus low-density airborne laser scanning (ALS) metrics. In a first step, the vertical distribution of the canopy fuel load was modelled using the Weibull probability density function. In a second step, two different systems of models were fitted to estimate the canopy variables defining the vertical distributions; the first system related these variables to stand variables obtained in a field inventory, and the second system related the canopy variables to airborne laser scanning metrics. The models of each system were fitted simultaneously to compensate the effects of the inherent cross-model correlation between the canopy variables. Heteroscedasticity was also analyzed, but no correction in the fitting process was necessary. The estimated canopy fuel load profiles from field variables explained 84% and 86% of the variation in canopy fuel load for maritime pine and radiata pine respectively; whereas the estimated canopy fuel load profiles from ALS metrics explained 52% and 49% of the variation for the same species. The proposed models can be used to assess the effectiveness of different forest management alternatives for reducing crown fire hazard. PMID:28448524
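
    To make the first modelling step concrete: a Weibull probability density, scaled by the total available canopy fuel load and shifted to start at the canopy base height, gives the fuel load per unit height. The parameter values below are illustrative assumptions, not the fitted Spanish NFI values.

```python
import numpy as np
from scipy.stats import weibull_min

# Illustrative stand values (assumed, not fitted): total available canopy fuel load,
# canopy base height, and Weibull shape/scale for the vertical distribution
CFL = 1.2            # kg/m^2
CBH = 4.0            # m, canopy base height
shape, scale = 2.5, 6.0

heights = np.linspace(CBH, CBH + 15.0, 150)                              # m above ground
fuel_density = CFL * weibull_min.pdf(heights - CBH, shape, scale=scale)  # kg/m^3

# Simple proxy for canopy bulk density: the peak of the vertical profile
# (operational definitions typically use the maximum of a running mean)
cbd = fuel_density.max()
print(f"Peak canopy bulk density ~ {cbd:.3f} kg/m^3")
print(f"Profile integrates back to ~ {np.trapz(fuel_density, heights):.2f} kg/m^2")
```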

  20. Compaction of forest soil by logging machinery favours occurrence of prokaryotes.

    PubMed

    Schnurr-Pütz, Silvia; Bååth, Erland; Guggenberger, Georg; Drake, Harold L; Küsel, Kirsten

    2006-12-01

    Soil compaction caused by passage of logging machinery reduces the soil air capacity. Changed abiotic factors might induce a change in the soil microbial community and favour organisms capable of tolerating anoxic conditions. The goals of this study were to resolve differences between soil microbial communities obtained from wheel-tracks (i.e. compacted) and their adjacent undisturbed sites, and to evaluate differences in potential anaerobic microbial activities of these contrasting soils. Soil samples obtained from compacted soil had a greater bulk density and a higher pH than uncompacted soil. Analyses of phospholipid fatty acids demonstrated that the eukaryotic/prokaryotic ratio in compacted soils was lower than that of uncompacted soils, suggesting that fungi were not favoured by the in situ conditions produced by compaction. Indeed, most-probable-number (MPN) estimates of nitrous oxide-producing denitrifiers, acetate- and lactate-utilizing iron and sulfate reducers, and methanogens were higher in compacted than in uncompacted soils obtained from one site that had large differences in bulk density. Compacted soils from this site yielded higher iron-reducing, sulfate-reducing and methanogenic potentials than did uncompacted soils. MPN estimates of H2-utilizing acetogens in compacted and uncompacted soils were similar. These results indicate that compaction of forest soil alters the structure and function of the soil microbial community and favours occurrence of prokaryotes.

  1. Testing the gravitational instability hypothesis?

    NASA Technical Reports Server (NTRS)

    Babul, Arif; Weinberg, David H.; Dekel, Avishai; Ostriker, Jeremiah P.

    1994-01-01

    We challenge a widely accepted assumption of observational cosmology: that successful reconstruction of observed galaxy density fields from measured galaxy velocity fields (or vice versa), using the methods of gravitational instability theory, implies that the observed large-scale structures and large-scale flows were produced by the action of gravity. This assumption is false, in that there exist nongravitational theories that pass the reconstruction tests and gravitational theories with certain forms of biased galaxy formation that fail them. Gravitational instability theory predicts specific correlations between large-scale velocity and mass density fields, but the same correlations arise in any model where (a) structures in the galaxy distribution grow from homogeneous initial conditions in a way that satisfies the continuity equation, and (b) the present-day velocity field is irrotational and proportional to the time-averaged velocity field. We demonstrate these assertions using analytical arguments and N-body simulations. If large-scale structure is formed by gravitational instability, then the ratio of the galaxy density contrast to the divergence of the velocity field yields an estimate of the density parameter Ω (or, more generally, an estimate of β ≡ Ω^0.6/b, where b is an assumed constant of proportionality between galaxy and mass density fluctuations). In nongravitational scenarios, the values of Ω or β estimated in this way may fail to represent the true cosmological values. However, even if nongravitational forces initiate and shape the growth of structure, gravitationally induced accelerations can dominate the velocity field at late times, long after the action of any nongravitational impulses. The estimated β approaches the true value in such cases, and in our numerical simulations the estimated β values are reasonably accurate for both gravitational and nongravitational models. Reconstruction tests that show correlations between galaxy density and velocity fields can rule out some physically interesting models of large-scale structure. In particular, successful reconstructions constrain the nature of any bias between the galaxy and mass distributions, since processes that modulate the efficiency of galaxy formation on large scales in a way that violates the continuity equation also produce a mismatch between the observed galaxy density and the density inferred from the peculiar velocity field. We obtain successful reconstructions for a gravitational model with peaks biasing, but we also show examples of gravitational and nongravitational models that fail reconstruction tests because of more complicated modulations of galaxy formation.
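
    In linear gravitational instability theory the velocity divergence and galaxy density contrast are related by div(v) = -H0 β δ_g (with β = Ω^0.6/b), so β can be estimated as the regression slope between the two smoothed fields. The sketch below does this for synthetic fields and is only a schematic of the reconstruction-test idea, not the paper's N-body analysis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic smoothed fields: mass density contrast and linearly biased galaxy contrast
beta_true, bias = 0.5, 1.2
delta_mass = rng.standard_normal(10_000) * 0.3
delta_gal = bias * delta_mass                      # linear biasing assumption

H0 = 100.0                                          # km/s/Mpc (h units)
# Linear theory: div(v) = -H0 * Omega^0.6 * delta_mass = -H0 * beta * delta_gal
div_v = -H0 * (beta_true * bias) * delta_mass + 5.0 * rng.standard_normal(delta_mass.size)

# Estimate beta as the slope of -div(v)/H0 against delta_gal
beta_hat = np.polyfit(delta_gal, -div_v / H0, 1)[0]
print(f"true beta = {beta_true:.2f}, estimated beta = {beta_hat:.2f}")
```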

  2. High urban population density of birds reflects their timing of urbanization.

    PubMed

    Møller, Anders Pape; Diaz, Mario; Flensted-Jensen, Einar; Grim, Tomas; Ibáñez-Álamo, Juan Diego; Jokimäki, Jukka; Mänd, Raivo; Markó, Gábor; Tryjanowski, Piotr

    2012-11-01

    Living organisms generally occur at the highest population density in the most suitable habitat. Therefore, invasion of and adaptation to novel habitats imply a gradual increase in population density, from that at or below what was found in the ancestral habitat to a density that may reach higher levels in the novel habitat following adaptation to that habitat. We tested this prediction of invasion biology by analyzing data on population density of breeding birds in their ancestral rural habitats and in matched nearby urban habitats that have been colonized recently across a continental latitudinal gradient. We estimated population density in the two types of habitats using extensive point census bird counts, and we obtained information on the year of urbanization when population density in urban habitats reached levels higher than that of the ancestral rural habitat from published records and estimates by experienced ornithologists. Both the difference in population density between urban and rural habitats and the year of urbanization were significantly repeatable when analyzing multiple populations of the same species across Europe. Population density was on average 30 % higher in urban than in rural habitats, although density reached as much as 100-fold higher in urban habitats in some species. Invasive urban bird species that colonized urban environments over a long period achieved the largest increases in population density compared to their ancestral rural habitats. This was independent of whether species were anciently or recently urbanized, providing a unique cross-validation of timing of urban invasions. These results suggest that successful invasion of urban habitats was associated with gradual adaptation to these habitats as shown by a significant increase in population density in urban habitats over time.

  3. Racial Differences in Quantitative Measures of Area and Volumetric Breast Density

    PubMed Central

    McCarthy, Anne Marie; Keller, Brad M.; Pantalone, Lauren M.; Hsieh, Meng-Kang; Synnestvedt, Marie; Conant, Emily F.; Armstrong, Katrina; Kontos, Despina

    2016-01-01

    Abstract Background: Increased breast density is a strong risk factor for breast cancer and also decreases the sensitivity of mammographic screening. The purpose of our study was to compare breast density for black and white women using quantitative measures. Methods: Breast density was assessed among 5282 black and 4216 white women screened using digital mammography. Breast Imaging-Reporting and Data System (BI-RADS) density was obtained from radiologists’ reports. Quantitative measures for dense area, area percent density (PD), dense volume, and volume percent density were estimated using validated, automated software. Breast density was categorized as dense or nondense based on BI-RADS categories or based on values above and below the median for quantitative measures. Logistic regression was used to estimate the odds of having dense breasts by race, adjusted for age, body mass index (BMI), age at menarche, menopause status, family history of breast or ovarian cancer, parity and age at first birth, and current hormone replacement therapy (HRT) use. All statistical tests were two-sided. Results: There was a statistically significant interaction of race and BMI on breast density. After accounting for age, BMI, and breast cancer risk factors, black women had statistically significantly greater odds of high breast density across all quantitative measures (eg, PD nonobese odds ratio [OR] = 1.18, 95% confidence interval [CI] = 1.02 to 1.37, P = .03, PD obese OR = 1.26, 95% CI = 1.04 to 1.53, P = .02). There was no statistically significant difference in BI-RADS density by race. Conclusions: After accounting for age, BMI, and other risk factors, black women had higher breast density than white women across all quantitative measures previously associated with breast cancer risk. These results may have implications for risk assessment and screening. PMID:27130893

  4. Computer-aided assessment of breast density: comparison of supervised deep learning and feature-based statistical learning.

    PubMed

    Li, Songfeng; Wei, Jun; Chan, Heang-Ping; Helvie, Mark A; Roubidoux, Marilyn A; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir M; Samala, Ravi K

    2018-01-09

    Breast density is one of the most significant factors that is associated with cancer risk. In this study, our purpose was to develop a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammograms (DMs). The input 'for processing' DMs was first log-transformed, enhanced by a multi-resolution preprocessing scheme, and subsampled to a pixel size of 800 µm × 800 µm from 100 µm × 100 µm. A deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD) by using a domain adaptation resampling method. The PD was estimated as the ratio of the dense area to the breast area based on the PMD. The DCNN approach was compared to a feature-based statistical learning approach. Gray level, texture and morphological features were extracted and a least absolute shrinkage and selection operator was used to combine the features into a feature-based PMD. With approval of the Institutional Review Board, we retrospectively collected a training set of 478 DMs and an independent test set of 183 DMs from patient files in our institution. Two experienced Mammography Quality Standards Act radiologists interactively segmented PD as the reference standard. Ten-fold cross-validation was used for model selection and evaluation with the training set. With cross-validation, DCNN obtained a Dice's coefficient (DC) of 0.79 ± 0.13 and Pearson's correlation (r) of 0.97, whereas feature-based learning obtained DC = 0.72 ± 0.18 and r = 0.85. For the independent test set, DCNN achieved DC = 0.76 ± 0.09 and r = 0.94, while feature-based learning achieved DC = 0.62 ± 0.21 and r = 0.75. Our DCNN approach was significantly better and more robust than the feature-based learning approach for automated PD estimation on DMs, demonstrating its potential use for automated density reporting as well as for model-based risk prediction.
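
    The final PD step described above is a simple ratio of thresholded probability-map pixels to breast-mask pixels. A minimal sketch follows; the threshold value and the toy arrays are assumptions, not the paper's trained DCNN output.

```python
import numpy as np

def percent_density(pmd, breast_mask, threshold=0.5):
    """Percent density from a probability map of dense tissue (pmd, values in [0, 1])
    and a binary breast mask; the 0.5 threshold is an assumed operating point."""
    dense = (pmd >= threshold) & breast_mask
    return 100.0 * dense.sum() / breast_mask.sum()

# Toy 'probability map': dense tissue concentrated in the image centre
rng = np.random.default_rng(0)
pmd = np.clip(rng.normal(0.3, 0.2, size=(200, 200)), 0, 1)
pmd[60:140, 60:140] += 0.4
breast_mask = np.ones_like(pmd, dtype=bool)

print(f"PD = {percent_density(pmd, breast_mask):.1f}%")
```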

  5. Enhancing active and passive remote sensing in the ocean using broadband acoustic transmissions and coherent hydrophone arrays

    NASA Astrophysics Data System (ADS)

    Tran, Duong Duy

    The statistics of broadband acoustic signal transmissions in a random continental shelf waveguide are characterized for the fully saturated regime. The probability distribution of broadband signal energies after saturated multi-path propagation is derived using coherence theory. The frequency components obtained from Fourier decomposition of a broadband signal are each assumed to be fully saturated, where the energy spectral density obeys the exponential distribution with 5.6 dB standard deviation and unity scintillation index. When the signal bandwidth and measurement time are respectively larger than the correlation bandwidth and correlation time of its energy spectral density components, the broadband signal energy obtained by integrating the energy spectral density across the signal bandwidth then follows the Gamma distribution with standard deviation smaller than 5.6 dB and scintillation index less than unity. The theory is verified with broadband transmissions in the Gulf of Maine shallow water waveguide in the 300-1200 Hz frequency range. The standard deviations of received broadband signal energies range from 2.7 to 4.6 dB for effective bandwidths up to 42 Hz, while the standard deviations of individual energy spectral density components are roughly 5.6 dB. The energy spectral density correlation bandwidths of the received broadband signals are found to be larger for signals with higher center frequency. Sperm whales in the New England continental shelf and slope were passively localized, in both range and bearing using a single low-frequency (< 2500 Hz), densely sampled, towed horizontal coherent hydrophone array system. Whale bearings were estimated using time-domain beamforming that provided high coherent array gain in sperm whale click signal-to-noise ratio. Whale ranges from the receiver array center were estimated using the moving array triangulation technique from a sequence of whale bearing measurements. The dive profile was estimated for a sperm whale in the shallow waters of the Gulf of Maine with 160 m water-column depth, located close to the array's near-field where depth estimation was feasible by employing time difference of arrival of the direct and multiply reflected click signals received on the array. The dependence of broadband energy on bandwidth and measurement time was verified employing recorded sperm whale clicks in the Gulf of Maine.

  6. Computer-aided assessment of breast density: comparison of supervised deep learning and feature-based statistical learning

    NASA Astrophysics Data System (ADS)

    Li, Songfeng; Wei, Jun; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir M.; Samala, Ravi K.

    2018-01-01

    Breast density is one of the most significant factors that is associated with cancer risk. In this study, our purpose was to develop a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammograms (DMs). The input ‘for processing’ DMs was first log-transformed, enhanced by a multi-resolution preprocessing scheme, and subsampled to a pixel size of 800 µm × 800 µm from 100 µm × 100 µm. A deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD) by using a domain adaptation resampling method. The PD was estimated as the ratio of the dense area to the breast area based on the PMD. The DCNN approach was compared to a feature-based statistical learning approach. Gray level, texture and morphological features were extracted and a least absolute shrinkage and selection operator was used to combine the features into a feature-based PMD. With approval of the Institutional Review Board, we retrospectively collected a training set of 478 DMs and an independent test set of 183 DMs from patient files in our institution. Two experienced Mammography Quality Standards Act radiologists interactively segmented PD as the reference standard. Ten-fold cross-validation was used for model selection and evaluation with the training set. With cross-validation, DCNN obtained a Dice’s coefficient (DC) of 0.79 ± 0.13 and Pearson’s correlation (r) of 0.97, whereas feature-based learning obtained DC = 0.72 ± 0.18 and r = 0.85. For the independent test set, DCNN achieved DC = 0.76 ± 0.09 and r = 0.94, while feature-based learning achieved DC = 0.62 ± 0.21 and r = 0.75. Our DCNN approach was significantly better and more robust than the feature-based learning approach for automated PD estimation on DMs, demonstrating its potential use for automated density reporting as well as for model-based risk prediction.

  7. Estimates of volumetric bone density from projectional measurements improve the discriminatory capability of dual X-ray absorptiometry

    NASA Technical Reports Server (NTRS)

    Jergas, M.; Breitenseher, M.; Gluer, C. C.; Yu, W.; Genant, H. K.

    1995-01-01

    To determine whether estimates of volumetric bone density from projectional scans of the lumbar spine have weaker associations with height and weight and stronger associations with prevalent vertebral fractures than standard projectional bone mineral density (BMD) and bone mineral content (BMC), we obtained posteroanterior (PA) dual X-ray absorptiometry (DXA), lateral supine DXA (Hologic QDR 2000), and quantitative computed tomography (QCT, GE 9800 scanner) in 260 postmenopausal women enrolled in two trials of treatment for osteoporosis. In 223 women, all vertebral levels, i.e., L2-L4 in the DXA scan and L1-L3 in the QCT scan, could be evaluated. Fifty-five women were diagnosed as having at least one mild fracture (age 67.9 ± 6.5 years) and 168 women did not have any fractures (age 62.3 ± 6.9 years). We derived three estimates of "volumetric bone density" from PA DXA (BMAD, BMAD*, and BMD*) and three from paired PA and lateral DXA (WA BMD, WA BMDHol, and eVBMD). While PA BMC and PA BMD were significantly correlated with height (r = 0.49 and r = 0.28) or weight (r = 0.38 and r = 0.37), QCT and the volumetric bone density estimates from paired PA and lateral scans were not (r = -0.083 to r = 0.050). BMAD, BMAD*, and BMD* correlated with weight but not height. The associations with vertebral fracture were stronger for QCT (odds ratio [OR] = 3.17; 95% confidence interval [CI] = 1.90-5.27), eVBMD (OR = 2.87; CI 1.80-4.57), WA BMDHol (OR = 2.86; CI 1.80-4.55) and WA BMD (OR = 2.77; CI 1.75-4.39) than for BMAD*/BMD* (OR = 2.03; CI 1.32-3.12), BMAD (OR = 1.68; CI 1.14-2.48), lateral BMD (OR = 1.88; CI 1.28-2.77), standard PA BMD (OR = 1.47; CI 1.02-2.13) or PA BMC (OR = 1.22; CI 0.86-1.74). The areas under the receiver operating characteristic (ROC) curves for QCT and all estimates of volumetric BMD were significantly higher compared with standard PA BMD and PA BMC. We conclude that, like QCT, estimates of volumetric bone density from paired PA and lateral scans are unaffected by height and weight and are more strongly associated with vertebral fracture than standard PA BMD or BMC, or estimates of volumetric density that are solely based on PA DXA scans.

  8. Identification of the population density of a species model with nonlocal diffusion and nonlinear reaction

    NASA Astrophysics Data System (ADS)

    Tuan, Nguyen Huy; Van Au, Vo; Khoa, Vo Anh; Lesnic, Daniel

    2017-05-01

    The identification of the population density of a logistic equation backwards in time, associated with nonlocal diffusion and nonlinear reaction and motivated by the biology and ecology fields, is investigated. The diffusion depends on an integral average of the population density whilst the reaction term is a global or local Lipschitz function of the population density. After discussing the ill-posedness of the problem, we apply the quasi-reversibility method to construct stable approximation problems. It is shown that the regularized solutions stemming from such a method not only depend continuously on the final data, but also strongly converge to the exact solution in the L^2-norm. New error estimates together with stability results are obtained. Furthermore, numerical examples are provided to illustrate the theoretical results.

  9. Electrofishing capture probability of smallmouth bass in streams

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.

    2007-01-01

    Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
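
    The abundance step mentioned above (dividing the catch by an estimated capture probability) amounts to a Horvitz-Thompson-style correction. A minimal sketch follows; the catch numbers and the fitted capture probability are invented, not taken from the study.

```python
import numpy as np

# Invented data: fish caught per electrofishing pass and the model-predicted
# cumulative capture probability for the reach (e.g. from logistic regression)
catch_per_pass = np.array([34, 18, 9])
p_cumulative = 0.72                      # assumed predicted cumulative capture probability

n_caught = catch_per_pass.sum()
abundance_hat = n_caught / p_cumulative  # Horvitz-Thompson-style correction
print(f"Caught {n_caught}, estimated abundance ~ {abundance_hat:.0f} fish")
```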

  10. Estimation of density of mongooses with capture-recapture and distance sampling

    USGS Publications Warehouse

    Corn, J.L.; Conroy, M.J.

    1998-01-01

    We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (mean = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (mean = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.

  11. Can we estimate molluscan abundance and biomass on the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.

    2017-11-01

    Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
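
    The survey-availability idea can be illustrated with a small Monte Carlo: sample n random stations from a patchy population, compute the survey mean, and count how often it falls outside 75-125% of the true mean. The patchy lognormal field below is a synthetic stand-in, not the surfclam or oyster data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic patchy abundance field: most cells near zero, a few dense patches
n_cells = 100_000
field = rng.lognormal(mean=0.0, sigma=2.0, size=n_cells)
true_mean = field.mean()

def availability_event_prob(n_samples, n_surveys=2_000):
    """Probability that a random survey of n_samples stations yields a mean
    outside 75-125% of the true mean (a 'survey availability event')."""
    idx = rng.integers(0, n_cells, size=(n_surveys, n_samples))
    means = field[idx].mean(axis=1)
    return np.mean((means < 0.75 * true_mean) | (means > 1.25 * true_mean))

for n in (4, 8, 15, 30, 60):
    print(f"n = {n:3d}  P(availability event) = {availability_event_prob(n):.2f}")
```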

  12. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
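
    The successive-approximations procedure referred to here (the step-size-1 case) is essentially what is now called the EM algorithm for normal mixtures. Below is a compact sketch with an over-relaxation step-size omega, written as a generic illustration rather than the paper's exact formulation; the data and starting values are invented.

```python
import numpy as np
from scipy.stats import norm

def mixture_em(x, pi, mu, sigma, omega=1.0, n_iter=200):
    """Fixed-point iteration for a two-component normal mixture.
    omega is the step-size: omega = 1 is plain EM; the paper's result concerns 0 < omega < 2."""
    theta = np.array([pi, mu[0], mu[1], sigma[0], sigma[1]], dtype=float)
    for _ in range(n_iter):
        pi_, m1, m2, s1, s2 = theta
        r1 = pi_ * norm.pdf(x, m1, s1)
        r2 = (1 - pi_) * norm.pdf(x, m2, s2)
        w = r1 / (r1 + r2)                        # E-step: posterior responsibilities
        m1_new = np.average(x, weights=w)         # M-step: weighted means and std devs
        m2_new = np.average(x, weights=1 - w)
        s1_new = np.sqrt(np.average((x - m1_new) ** 2, weights=w))
        s2_new = np.sqrt(np.average((x - m2_new) ** 2, weights=1 - w))
        new = np.array([w.mean(), m1_new, m2_new, s1_new, s2_new])
        theta = theta + omega * (new - theta)     # step-size-scaled update
    return theta

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 1.5, 600)])
print(mixture_em(x, pi=0.5, mu=(-1.0, 1.0), sigma=(1.0, 1.0)))
```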

  13. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  14. Rapid assessment of rice seed availability for wildlife in harvested fields

    USGS Publications Warehouse

    Halstead, B.J.; Miller, M.R.; Casazza, Michael L.; Coates, P.S.; Farinha, M.A.; Benjamin, Gustafson K.; Yee, J.L.; Fleskes, J.P.

    2011-01-01

    Rice seed remaining in commercial fields after harvest (waste rice) is a critical food resource for wintering waterfowl in rice-growing regions of North America. Accurate and precise estimates of the seed mass density of waste rice are essential for planning waterfowl wintering habitat extents and management. In the Sacramento Valley of California, USA, the existing method for obtaining estimates of availability of waste rice in harvested fields produces relatively precise estimates, but the labor-, time-, and machinery-intensive process is not practical for routine assessments needed to examine long-term trends in waste rice availability. We tested several experimental methods designed to rapidly derive estimates that would not be burdened with the disadvantages of the existing method. We first conducted a simulation study of the efficiency of each method and then conducted field tests. For each approach, methods did not vary in root mean squared error, although some methods did exhibit bias in both simulations and field tests. Methods also varied substantially in the time to conduct each sample and in the number of samples required to detect a standard trend. Overall, modified line-intercept methods performed well for estimating the density of rice seeds. Waste rice in the straw, although not measured directly, can be accounted for by a positive relationship with the density of rice on the ground. Rapid assessment of food availability is a useful tool to help waterfowl managers establish and implement wetland restoration and agricultural habitat-enhancement goals for wintering waterfowl. © 2011 The Wildlife Society.

  15. Estimating Brownian motion dispersal rate, longevity and population density from spatially explicit mark-recapture data on tropical butterflies.

    PubMed

    Tufto, Jarle; Lande, Russell; Ringsby, Thor-Harald; Engen, Steinar; Saether, Bernt-Erik; Walla, Thomas R; DeVries, Philip J

    2012-07-01

    1. We develop a Bayesian method for analysing mark-recapture data in continuous habitat using a model in which individuals' movement paths are Brownian motions, life spans are exponentially distributed and capture events occur at given instants in time if individuals are within a certain attractive distance of the traps. 2. The joint posterior distribution of the dispersal rate, longevity, trap attraction distances and a number of latent variables representing the unobserved movement paths and times of death of all individuals is computed using Gibbs sampling. 3. An estimate of absolute local population density is obtained simply by dividing the Poisson counts of individuals captured at given points in time by the estimated total attraction area of all traps. Our approach for estimating population density in continuous habitat avoids the need to define an arbitrary effective trapping area that characterized previous mark-recapture methods in continuous habitat. 4. We applied our method to estimate spatial demography parameters in nine species of neotropical butterflies. Path analysis of interspecific variation in demographic parameters and mean wing length revealed a simple network of strong causation. Larger wing length increases dispersal rate, which in turn increases trap attraction distance. However, higher dispersal rate also decreases longevity, thus explaining the surprising observation of a negative correlation between wing length and longevity. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.

  16. Sampling procedures for inventory of commercial volume tree species in Amazon Forest.

    PubMed

    Netto, Sylvio P; Pelissari, Allan L; Cysneiros, Vinicius C; Bonazza, Marcelo; Sanquetta, Carlos R

    2017-01-01

    The spatial distribution of tropical tree species can affect the consistency of the estimators in commercial forest inventories; therefore, appropriate sampling procedures are required to survey species with different spatial patterns in the Amazon Forest. The present study aims to evaluate conventional sampling procedures and introduce adaptive cluster sampling for volumetric inventories of Amazonian tree species, considering the hypotheses that density, spatial distribution and zero-plots affect the consistency of the estimators, and that adaptive cluster sampling allows more accurate volumetric estimates to be obtained. We use data from a census carried out in Jamari National Forest, Brazil, where trees with diameters equal to or greater than 40 cm were measured in 1,355 plots. Species with different spatial patterns were selected and sampled with simple random sampling, systematic sampling, linear cluster sampling and adaptive cluster sampling, and the accuracy of the volumetric estimation and the presence of zero-plots were evaluated. The sampling procedures applied to these species were affected by the low density of trees and the large number of zero-plots, whereas the adaptive clusters allowed the sampling effort to be concentrated in plots with trees and thus yielded more representative samples for estimating the commercial volume.
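
    The sketch below illustrates, under stated assumptions, the kind of estimator typically used with adaptive cluster sampling (a modified Hansen-Hurwitz mean over network means); it is not taken from the paper, it ignores the finite-population correction, and the example volumes are hypothetical.

    ```python
    import numpy as np

    def acs_hansen_hurwitz_mean(network_means):
        """Modified Hansen-Hurwitz estimator of the per-plot mean under adaptive
        cluster sampling.

        network_means : for each initially selected plot, the mean volume of the
                        network (cluster of above-threshold plots) containing it;
                        plots that do not trigger adaptive sampling contribute
                        their own value.
        """
        w = np.asarray(network_means, dtype=float)
        estimate = w.mean()
        variance = w.var(ddof=1) / len(w)   # simple variance, no finite-population correction
        return estimate, variance

    # Hypothetical example: 6 initial plots, volumes (m^3/plot) averaged over their networks
    est, var = acs_hansen_hurwitz_mean([0.0, 0.0, 1.8, 0.0, 2.4, 0.6])
    print(est, var ** 0.5)
    ```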

  17. Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data

    USGS Publications Warehouse

    Bakun, W.H.; Gomez Capera, A.; Stucchi, M.

    2011-01-01

    Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location, the magnitude, and the epistemic uncertainties among them. The estimates are calculated using bootstrap resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. The instrumental magnitudes for large and small earthquakes are generally consistent with the confidence intervals inferred from the distribution of bootstrap resampled magnitudes.
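
    A minimal sketch of the bootstrap summary described above: resample the intensity data, apply an intensity-based estimator (here a placeholder callable standing in for any of the three techniques), and take the median and the bounds enclosing 68% of the bootstrap magnitudes. This is generic and does not implement the authors' decision-tree weighting.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_magnitude(intensity_data, estimator, n_boot=1000):
        """Bootstrap-resample intensity assignments and summarize the magnitudes.

        estimator : callable mapping a resampled intensity data set to a magnitude
                    (a stand-in for any intensity-based technique).
        Returns the preferred magnitude (median) and the 68% uncertainty bounds.
        """
        data = np.asarray(intensity_data)
        n = len(data)
        mags = np.array([estimator(data[rng.integers(0, n, n)])  # sample with replacement
                         for _ in range(n_boot)])
        low, high = np.percentile(mags, [16, 84])                # enclose 68% of values
        return np.median(mags), (low, high)
    ```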

  18. Insights into Spray Development from Metered-Dose Inhalers Through Quantitative X-ray Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mason-Smith, Nicholas; Duke, Daniel J.; Kastengren, Alan L.

    Typical methods to study pMDI sprays employ particle sizing or visible light diagnostics, which suffer in regions of high spray density. X-ray techniques can be applied to pharmaceutical sprays to obtain information unattainable by conventional particle sizing and light-based techniques. We present a technique for obtaining quantitative measurements of spray density in pMDI sprays. A monochromatic focused X-ray beam was used to perform quantitative radiography measurements in the near-nozzle region and plume of HFA-propelled sprays. Measurements were obtained with a temporal resolution of 0.184 ms and spatial resolution of 5 µm. Steady flow conditions were reached after around 30 ms for the formulations examined with the spray device used. Spray evolution was affected by the inclusion of ethanol in the formulation and unaffected by the inclusion of 0.1% drug by weight. Estimation of the nozzle exit density showed that vapour is likely to dominate the flow leaving the inhaler nozzle during steady flow. Quantitative measurements in pMDI sprays allow the determination of nozzle exit conditions that are difficult to obtain experimentally by other means. Measurements of these nozzle exit conditions can improve understanding of the atomization mechanisms responsible for pMDI spray droplet and particle formation.
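
    For orientation only, a short sketch of the Beer-Lambert relation that underlies quantitative X-ray radiography: the path-integrated spray density follows from the measured and reference intensities and a known mass attenuation coefficient. Function and argument names are illustrative, not the authors' processing pipeline.

    ```python
    import numpy as np

    def projected_density(intensity, intensity_ref, mass_attenuation):
        """Path-integrated mass per unit area from the Beer-Lambert law:
        I = I0 * exp(-mu_m * M)  =>  M = ln(I0 / I) / mu_m,
        with mu_m the mass attenuation coefficient (e.g. cm^2/g) at the beam energy."""
        return np.log(np.asarray(intensity_ref) / np.asarray(intensity)) / mass_attenuation
    ```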

  19. Galaxy clustering with photometric surveys using PDF redshift information

    DOE PAGES

    Asorey, J.; Carrasco Kind, M.; Sevilla-Noarbe, I.; ...

    2016-03-28

    Here, photometric surveys produce large-area maps of the galaxy distribution, but with less accurate redshift information than is obtained from spectroscopic methods. Modern photometric redshift (photo-z) algorithms use galaxy magnitudes, or colors, that are obtained through multi-band imaging to produce a probability density function (PDF) for each galaxy in the map. We used simulated data to study the effect of using different photo-z estimators to assign galaxies to redshift bins in order to compare their effects on angular clustering and galaxy bias measurements. We found that if we use the entire PDF, rather than a single-point (mean or mode) estimate, the deviations are less biased, especially when using narrow redshift bins. When the redshift bin widths are Δz = 0.1, the use of the entire PDF reduces the typical measurement bias from 5%, when using single point estimates, to 3%.
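
    A minimal sketch, assuming PDFs tabulated on a common redshift grid, of how galaxies can be assigned to tomographic bins using their full photo-z PDFs (each galaxy contributes its PDF mass in each bin) rather than a single-point estimate; array names and bin edges are illustrative.

    ```python
    import numpy as np

    def pdf_bin_weights(z_grid, pdfs, z_edges):
        """Per-galaxy bin weights from full photo-z PDFs.

        z_grid  : (n_z,) redshift grid on which each PDF is tabulated
        pdfs    : (n_gal, n_z) normalized PDF values, one row per galaxy
        z_edges : bin edges, e.g. np.arange(0.0, 1.1, 0.1) for dz = 0.1 bins
        Returns an (n_gal, n_bin) array: the PDF mass of each galaxy in each bin.
        """
        weights = np.empty((pdfs.shape[0], len(z_edges) - 1))
        for b in range(len(z_edges) - 1):
            mask = (z_grid >= z_edges[b]) & (z_grid < z_edges[b + 1])
            weights[:, b] = np.trapz(pdfs[:, mask], z_grid[mask], axis=1)
        return weights

    # Binned galaxy counts follow from weights.sum(axis=0), instead of a histogram
    # of single-point (mean or mode) redshift estimates.
    ```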

  20. Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu, Renliang; Dogandžić, Aleksandar

    2015-03-31

    We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov's proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.

  1. Fish community changes in the St. Louis River estuary, Lake Superior, 1989-1996: Is it ruffe or population dynamics?

    USGS Publications Warehouse

    Bronte, Charles R.; Evrard, Lori M.; Brown, William P.; Mayo, Kathleen R.; Edwards, Andrew J.

    1998-01-01

    Ruffe (Gymnocephalus cernuus) have been implicated in density declines of native species through egg predation and competition for food in some European waters where they were introduced. Density estimates for ruffe and principal native fishes in the St. Louis River estuary (western Lake Superior) were developed for 1989 to 1996 to measure changes in the fish community in response to an unintentional introduction of ruffe. During the study, ruffe density increased and the densities of several native species decreased. The reductions of native stocks were compared to the natural population dynamics of the same species in Chequamegon Bay, Lake Superior (an area with very few ruffe), where there was a 24-year record of density. Using these data, short- and long-term variations in catch and correlations among species within years were compared, and species-specific distributions of observed trends in abundance of native fishes in Chequamegon Bay, indexed by the slopes of densities across years, were developed. From these distributions and our observed trend-line slopes from the St. Louis River, probabilities of measuring negative change of the magnitude observed in the St. Louis River were estimated. Compared with trends in Chequamegon Bay, there was a high probability of obtaining the negative slopes measured for most species, which suggests that natural population dynamics, rather than interactions with ruffe, could explain the declines. Variable recruitment, which was not related to ruffe density, and associated density-dependent changes in mortality likely were responsible for density declines of native species.
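
    A small illustrative sketch (not the authors' code) of the comparison described above: index each species' trend by a least-squares slope of density on year, then read off the empirical probability of a slope at least as negative from a reference distribution of slopes.

    ```python
    import numpy as np

    def trend_slope(years, densities):
        """Least-squares slope of density on year: the trend index for one species."""
        return np.polyfit(years, np.asarray(densities, dtype=float), 1)[0]

    def prob_at_least_as_negative(reference_slopes, observed_slope):
        """Empirical probability of drawing a slope at least as negative as the one
        observed (here, from a reference distribution of long-term slopes)."""
        reference_slopes = np.asarray(reference_slopes, dtype=float)
        return np.mean(reference_slopes <= observed_slope)
    ```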

  2. An empirical model to estimate density of sodium hydroxide solution: An activator of geopolymer concretes

    NASA Astrophysics Data System (ADS)

    Rajamane, N. P.; Nataraja, M. C.; Jeyalakshmi, R.; Nithiyanantham, S.

    2016-02-01

    Geopolymer concrete (GPC) is zero-Portland-cement concrete containing an alumino-silicate-based inorganic polymer as binder. The polymer is obtained by chemical activation of alumina- and silica-bearing materials, such as blast furnace slag, by highly alkaline solutions such as hydroxides and silicates of alkali metals. Sodium hydroxide solutions (SHS) of different concentrations are commonly used in making GPC mixes. Often, a sodium hydroxide solution of very high concentration is diluted with water to obtain SHS of the desired concentration. While doing so, it was observed that the solute particles of NaOH in SHS tend to occupy lower volumes as the degree of dilution increases. This aspect is discussed in this paper. The observed phenomenon needs to be understood when formulating GPC mixes, since it considerably influences the relationship between concentration and density of SHS. This paper suggests an empirical formula relating the density of SHS directly to its concentration expressed by w/w.

  3. A new numerical benchmark of a freshwater lens

    NASA Astrophysics Data System (ADS)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time as it can be found under real-world islands. An error analysis gave the appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses was lacking. This new benchmark was thus developed and is demonstrated to be suitable to test variable-density groundwater models applied to saltwater intrusion investigations.

  4. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    PubMed

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated for city-level planning. The optimal charging-station density is derived by minimizing the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined for various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially in areas that lack information, such as exact candidate sites for charging stations and other data related to electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles.

  5. Kepler-454b: Rocky or Not?

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-02-01

    Small exoplanets tend to fall into two categories: the smallest ones are predominantly rocky, like Earth, and the larger ones have a lower-density, more gaseous composition, similar to Neptune. The planet Kepler-454b was initially estimated to fall between these two groups in radius, so what is its composition? Small-Planet Dichotomy: Though Kepler has detected thousands of planet candidates with radii between 1 and 2.7 Earth radii, we have only obtained precise mass measurements for 12 of these planets. [Figure: mass-radius diagram for planets with radii of up to 2.7 Earth radii and well-measured masses; the six smallest planets (and Venus and Earth) fall along a single mass-radius curve of Earth-like composition, while the six larger planets (including Kepler-454b) have lower-density compositions. Gettel et al. 2016] These measurements, however, show an interesting dichotomy: planets with radii less than 1.6 Earth radii have rocky, Earth-like compositions, following a single relation between their mass and radius. Planets between 2 and 2.7 Earth radii, however, have lower densities and don't follow a single mass-radius relation. Their low densities suggest they contain a significant fraction of volatiles, likely in the form of a thick gas envelope of water, hydrogen, and/or helium. The planet Kepler-454b, discovered transiting a Sun-like star, was initially estimated to have a radius of 1.86 Earth radii, placing it between these two categories. A team of astronomers led by Sara Gettel (Harvard-Smithsonian Center for Astrophysics) has since followed up on the initial Kepler detection, hoping to determine the planet's composition. Low-Density Outcome: Gettel and collaborators obtained 63 observations of the host star's radial velocity with the HARPS-N spectrograph on the Telescopio Nazionale Galileo, and another 36 observations with the HIRES spectrograph at Keck Observatory. These observations allowed them to obtain a more accurate radius estimate for Kepler-454b (2.37 Earth radii), measure the planet's mass (roughly 6.8 Earth masses), and discover, to their surprise, two other non-transiting companions in the system: Kepler-454c, a planet with a minimum mass of ~4.5 Jupiter masses on a 524-day orbit, and Kepler-454d, a more distant (10-year orbit) brown dwarf or low-mass star. Kepler-454b's newly measured size and mass place it firmly in the category of non-rocky, larger, less dense planets (the authors calculate a density of ~2.76 g/cm3, or roughly half that of Earth). This seems to reinforce the idea that rocky planets don't grow larger than ~1.6 Earth radii, and that planets with masses greater than about 6 Earth masses are typically low-density and/or swathed in an envelope of gas. The authors point out that future observing missions like NASA's TESS (launching in 2017) will provide more targets that can be followed up to obtain mass measurements, allowing us to determine whether this trend in mass and radius holds up in a larger sample. Citation: Sara Gettel et al. 2016, ApJ, 816, 95. doi:10.3847/0004-637X/816/2/95
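
    As a quick consistency check of the numbers quoted above, the bulk density implied by ~6.8 Earth masses and 2.37 Earth radii can be computed directly; the short sketch below returns roughly 2.8 g/cm3, close to the quoted ~2.76 g/cm3.

    ```python
    import numpy as np

    M_EARTH = 5.972e27   # g
    R_EARTH = 6.371e8    # cm

    def bulk_density(mass_earth, radius_earth):
        """Bulk density (g/cm3) for a mass in Earth masses and a radius in Earth radii."""
        mass = mass_earth * M_EARTH
        radius = radius_earth * R_EARTH
        return mass / (4.0 / 3.0 * np.pi * radius ** 3)

    # Kepler-454b: ~6.8 Earth masses and 2.37 Earth radii give ~2.8 g/cm3,
    # about half Earth's bulk density.
    print(bulk_density(6.8, 2.37))
    ```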

  6. An evaluation of three-dimensional photogrammetric and morphometric techniques for estimating volume and mass in Weddell seals Leptonychotes weddellii

    PubMed Central

    Ruscher-Hill, Brandi; Kirkham, Amy L.; Burns, Jennifer M.

    2018-01-01

    Body mass dynamics of animals can indicate critical associations between extrinsic factors and population vital rates. Photogrammetry can be used to estimate mass of individuals in species whose life histories make it logistically difficult to obtain direct body mass measurements. Such studies typically use equations to relate volume estimates from photogrammetry to mass; however, most fail to identify the sources of error between the estimated and actual mass. Our objective was to identify the sources of error that prevent photogrammetric mass estimation from directly predicting actual mass, and develop a methodology to correct this issue. To do this, we obtained mass, body measurements, and scaled photos for 56 sedated Weddell seals (Leptonychotes weddellii). After creating a three-dimensional silhouette in the image processing program PhotoModeler Pro, we used horizontal scale bars to define the ground plane, then removed the below-ground portion of the animal’s estimated silhouette. We then re-calculated body volume and applied an expected density to estimate animal mass. We compared the body mass estimates derived from this silhouette slice method with estimates derived from two other published methodologies: body mass calculated using photogrammetry coupled with a species-specific correction factor, and estimates using elliptical cones and measured tissue densities. The estimated mass values (mean ± standard deviation 345±71 kg for correction equation, 346±75 kg for silhouette slice, 343±76 kg for cones) were not statistically distinguishable from each other or from actual mass (346±73 kg) (ANOVA with Tukey HSD post-hoc, p>0.05 for all pairwise comparisons). We conclude that volume overestimates from photogrammetry are likely due to the inability of photo modeling software to properly render the ventral surface of the animal where it contacts the ground. Due to logistical differences between the “correction equation”, “silhouette slicing”, and “cones” approaches, researchers may find one technique more useful for certain study programs. In combination or exclusively, these three-dimensional mass estimation techniques have great utility in field studies with repeated measures sampling designs or where logistic constraints preclude weighing animals. PMID:29320573

  7. On the use of Bayesian Monte-Carlo in evaluation of nuclear data

    NASA Astrophysics Data System (ADS)

    De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles

    2017-09-01

    As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) through Bayesian statistical inference, by comparing theory to experiment. The formal rule related to this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can be seen as an estimation of the posterior probability density of a set of parameters x, given prior information on these parameters and a likelihood that gives the probability density of observing a data set given x. To solve this problem, two major paths can be taken: add approximations and hypotheses to obtain an equation to be solved numerically (minimization of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid approximations (present in the traditional adjustment procedure based on chi-square minimization) and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal and resonance regions to the continuum, for all nuclear reaction models at these energies. Algorithms based on Monte-Carlo sampling and Markov chains will be presented. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions, and to provide a framework for finding the global minimum when several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluations, as well as multigroup cross section data assimilation, will be presented.
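
    A minimal sketch of the Markov-chain idea referred to above: a random-walk Metropolis sampler drawing from pdf(posterior) ∝ pdf(prior) × likelihood. It is a generic illustration under simple assumptions (Gaussian proposal, user-supplied log densities), not the BMC implementation described in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def bayesian_monte_carlo(log_prior, log_likelihood, x0, step, n_samples=50_000):
        """Random-walk Metropolis sampling of the posterior.

        log_prior, log_likelihood : callables returning log densities of a parameter vector
        x0   : starting parameter vector
        step : proposal standard deviations (Gaussian random-walk proposal)
        """
        x = np.asarray(x0, dtype=float)
        logp = log_prior(x) + log_likelihood(x)
        samples = []
        for _ in range(n_samples):
            proposal = x + step * rng.standard_normal(x.shape)
            logp_new = log_prior(proposal) + log_likelihood(proposal)
            if np.log(rng.uniform()) < logp_new - logp:   # Metropolis acceptance rule
                x, logp = proposal, logp_new
            samples.append(x.copy())
        return np.asarray(samples)
    ```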

  8. Technical Factors Influencing Cone Packing Density Estimates in Adaptive Optics Flood Illuminated Retinal Images

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    Purpose: To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Methods: Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. Results: The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions: The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic. PMID:25203681

  9. Technical factors influencing cone packing density estimates in adaptive optics flood illuminated retinal images.

    PubMed

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic.
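
    For illustration, a sketch of two of the quantities discussed above: cone density inside a square sampling window and the fraction of hexagonal Voronoi tiles. It assumes cone coordinates already expressed in micrometres on the retina (i.e., after any RMF correction) and a window centred on the region of interest; it is not the authors' software.

    ```python
    import numpy as np
    from scipy.spatial import Voronoi

    def cone_density(cone_xy, window_um):
        """Cones per square millimetre inside a square sampling window of side
        `window_um` (micrometres), centred on the origin of the coordinate system."""
        xy = np.asarray(cone_xy, dtype=float)
        inside = np.all(np.abs(xy) <= window_um / 2.0, axis=1)
        return inside.sum() / (window_um * 1e-3) ** 2     # window side converted to mm

    def hexagonal_fraction(cone_xy):
        """Fraction of bounded Voronoi tiles with exactly six sides (hexagonal packing)."""
        vor = Voronoi(np.asarray(cone_xy, dtype=float))
        n_hex = n_bounded = 0
        for region_idx in vor.point_region:
            region = vor.regions[region_idx]
            if -1 in region or len(region) == 0:          # skip unbounded (edge) tiles
                continue
            n_bounded += 1
            n_hex += (len(region) == 6)
        return n_hex / max(n_bounded, 1)
    ```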

  10. Density Estimation for New Solid and Liquid Explosives

    DTIC Science & Technology

    1977-02-17

    The group additivity approach was shown to be applicable to density estimation. The densities of approximately 180 explosives and related compounds of very diverse compositions were estimated, and almost all the estimates were quite reasonable. Of the 168 compounds for which direct comparisons could be made (see Table 6), 36.9% of the estimated densities were within 1% of the measured densities, 33.3% were within 1-2%, and 11.9% were within 2-3%.

  11. Bayesian inference of Calibration curves: application to archaeomagnetism

    NASA Astrophysics Data System (ADS)

    Lanos, P.

    2003-04-01

    The range of errors that occur at different stages of the archaeomagnetic calibration process are modelled using a Bayesian hierarchical model. The archaeomagnetic data obtained from archaeological structures such as hearths, kilns or sets of bricks and tiles, exhibit considerable experimental errors and are typically more or less well dated by archaeological context, history or chronometric methods (14C, TL, dendrochronology, etc.). They can also be associated with stratigraphic observations which provide prior relative chronological information. The modelling we describe in this paper allows all these observations, on materials from a given period, to be linked together, and the use of penalized maximum likelihood for smoothing univariate, spherical or three-dimensional time series data allows representation of the secular variation of the geomagnetic field over time. The smooth curve we obtain (which takes the form of a penalized natural cubic spline) provides an adaptation to the effects of variability in the density of reference points over time. Since our model takes account of all the known errors in the archaeomagnetic calibration process, we are able to obtain a functional highest-posterior-density envelope on the new curve. With this new posterior estimate of the curve available to us, the Bayesian statistical framework then allows us to estimate the calendar dates of undated archaeological features (such as kilns) based on one, two or three geomagnetic parameters (inclination, declination and/or intensity). Date estimates are presented in much the same way as those that arise from radiocarbon dating. In order to illustrate the model and inference methods used, we will present results based on German archaeomagnetic data recently published by a German team.

  12. Estimating crustal thickness and Vp/Vs ratio with joint constraints of receiver function and gravity data

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Guo, Lianghui; Ma, Yawei; Li, Yonghua; Wang, Weilai

    2018-05-01

    The technique of teleseismic receiver function H-κ stacking is popular for estimating the crustal thickness and Vp/Vs ratio. However, it has large uncertainty or ambiguity when the Moho multiples in receiver function are not easy to be identified. We present an improved technique to estimate the crustal thickness and Vp/Vs ratio by joint constraints of receiver function and gravity data. The complete Bouguer gravity anomalies, composed of the anomalies due to the relief of the Moho interface and the heterogeneous density distribution within the crust, are associated with the crustal thickness, density and Vp/Vs ratio. According to their relationship formulae presented by Lowry and Pérez-Gussinyé, we invert the complete Bouguer gravity anomalies by using a common algorithm of likelihood estimation to obtain the crustal thickness and Vp/Vs ratio, and then utilize them to constrain the receiver function H-κ stacking result. We verified the improved technique on three synthetic crustal models and evaluated the influence of selected parameters, the results of which demonstrated that the novel technique could reduce the ambiguity and enhance the accuracy of estimation. Real data test at two given stations in the NE margin of Tibetan Plateau illustrated that the improved technique provided reliable estimations of crustal thickness and Vp/Vs ratio.
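
    A sketch of conventional H-κ stacking in the Zhu & Kanamori formulation, which the technique above builds on; the gravity-derived estimates described in the abstract would enter as an additional weighting or constraint on the resulting (H, κ) grid. Variable names, phase weights, and the assumed average Vp are illustrative.

    ```python
    import numpy as np

    def hk_stack(rf_traces, times, slowness, vp, H_grid, k_grid, weights=(0.6, 0.3, 0.1)):
        """Grid search over crustal thickness H (km) and Vp/Vs ratio kappa by stacking
        receiver-function amplitudes at the predicted Ps, PpPs and PpSs+PsPs times.

        rf_traces : (n_events, n_samples) radial receiver functions
        times     : (n_samples,) time axis of the traces (s)
        slowness  : (n_events,) ray parameters (s/km)
        vp        : assumed average crustal P velocity (km/s)
        """
        w1, w2, w3 = weights
        stack = np.zeros((len(H_grid), len(k_grid)))
        for i, H in enumerate(H_grid):
            for j, k in enumerate(k_grid):
                vs = vp / k
                for rf, p in zip(rf_traces, slowness):
                    qs = np.sqrt(1.0 / vs ** 2 - p ** 2)
                    qp = np.sqrt(1.0 / vp ** 2 - p ** 2)
                    t_ps, t_ppps, t_ppss = H * (qs - qp), H * (qs + qp), 2.0 * H * qs
                    amp = np.interp([t_ps, t_ppps, t_ppss], times, rf)
                    stack[i, j] += w1 * amp[0] + w2 * amp[1] - w3 * amp[2]
        best = np.unravel_index(np.argmax(stack), stack.shape)
        return H_grid[best[0]], k_grid[best[1]], stack
    ```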

  13. Ecological Monitoring and Health Research in Luambe National Park, Zambia: Generation of Baseline Data Layers.

    PubMed

    Anderson, Neil E; Bessell, Paul R; Mubanga, Joseph; Thomas, Robert; Eisler, Mark C; Fèvre, Eric M; Welburn, Susan C

    2016-09-01

    Classifying, describing and understanding the natural environment is an important element of studies of human, animal and ecosystem health, and baseline ecological data are commonly lacking in remote environments of the world. Human African trypanosomiasis is an important constraint on human well-being in sub-Saharan Africa, and spillover transmission occurs from the reservoir community of wild mammals. Here we use robust and repeatable methodology to generate baseline datasets on vegetation and mammal density to investigate the ecology of warthogs (Phacochoerus africanus) in the remote Luambe National Park in Zambia, in order to further our understanding of their interactions with tsetse (Glossina spp.) vectors of trypanosomiasis. Fuzzy set theory is used to produce an accurate landcover classification, and distance sampling techniques are applied to obtain species and habitat level density estimates for the most abundant wild mammals. The density of warthog burrows is also estimated and their spatial distribution mapped. The datasets generated provide an accurate baseline to further ecological and epidemiological understanding of disease systems such as trypanosomiasis. This study provides a reliable framework for ecological monitoring of wild mammal densities and vegetation composition in remote, relatively inaccessible environments.

  14. Measurement of Average Aggregate Density by Sedimentation and Brownian Motion Analysis.

    PubMed

    Cavicchi, Richard E; King, Jason; Ripple, Dean C

    2018-05-01

    The spatially averaged density of protein aggregates is an important parameter that can be used to relate size distributions measured by orthogonal methods, to characterize protein particles, and perhaps to estimate the amount of protein in aggregate form in a sample. We obtained a series of images of protein aggregates exhibiting Brownian diffusion while settling under the influence of gravity in a sealed capillary. The aggregates were formed by stir-stressing a monoclonal antibody (NISTmAb). Image processing yielded particle tracks, which were then examined to determine settling velocity and hydrodynamic diameter down to 1 μm based on mean square displacement analysis. Measurements on polystyrene calibration microspheres ranging in size from 1 to 5 μm showed that the mean square displacement diameter had improved accuracy over the diameter derived from imaged particle area, suggesting a future method for correcting size distributions based on imaging. Stokes' law was used to estimate the density of each particle. It was found that the aggregates were highly porous with density decreasing from 1.080 to 1.028 g/cm3 as the size increased from 1.37 to 4.9 μm. Published by Elsevier Inc.
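
    A brief sketch of the two relations used above: the Stokes-Einstein diameter from the slope of the 2-D mean square displacement (MSD = 4Dt) and the particle density from Stokes' law for the settling velocity. SI units are assumed and the function names are illustrative.

    ```python
    import numpy as np

    G = 9.81             # gravitational acceleration, m/s^2
    K_B = 1.380649e-23   # Boltzmann constant, J/K

    def hydrodynamic_diameter(msd_slope, viscosity, temperature):
        """Stokes-Einstein diameter from the slope of the 2-D mean square displacement,
        MSD = 4*D*t, so D = msd_slope / 4."""
        diffusion = msd_slope / 4.0
        return K_B * temperature / (3.0 * np.pi * viscosity * diffusion)

    def particle_density(settling_velocity, diameter, fluid_density, viscosity):
        """Stokes' law, v = (rho_p - rho_f) * g * d^2 / (18 * mu), solved for rho_p."""
        return fluid_density + 18.0 * viscosity * settling_velocity / (G * diameter ** 2)
    ```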

  15. Monolayer-crystal streptavidin support films provide an internal standard of cryo-EM image quality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Bong-Gyoon; Watson, Zoe; Cate, Jamie H. D.

    Analysis of images of biotinylated Escherichia coli 70S ribosome particles, bound to streptavidin affinity grids, demonstrates that the image-quality of particles can be predicted by the image-quality of the monolayer crystalline support film. Also, the quality of the Thon rings is a good predictor of the image-quality of particles, but only when images of the streptavidin crystals extend to relatively high resolution. When the estimated resolution of streptavidin was 5 Å or worse, for example, the ribosomal density map obtained from 22,697 particles went to only 9.5 Å, while the resolution of the map reached 4.0 Å for the same number of particles, when the estimated resolution of the streptavidin crystal was 4 Å or better. It thus is easy to tell which images in a data set ought to be retained for further work, based on the highest resolution seen for Bragg peaks in the computed Fourier transforms of the streptavidin component. The refined density map obtained from 57,826 particles obtained in this way extended to 3.6 Å, a marked improvement over the value of 3.9 Å obtained previously from a subset of 52,433 particles obtained from the same initial data set of 101,213 particles after 3-D classification. These results are consistent with the hypothesis that interaction with the air-water interface can damage particles when the sample becomes too thin. Finally, streptavidin monolayer crystals appear to provide a good indication of when that is the case.

  16. Ionospheric irregularity characteristics from quasiperiodic structure in the radio wave scintillation

    NASA Astrophysics Data System (ADS)

    Chen, K. Y.; Su, S. Y.; Liu, C. H.; Basu, S.

    2005-06-01

    Quasiperiodic (QP) diffraction patterns in scintillation patches are known to correlate strongly with the edge structures of a plasma bubble (Franke et al., 1984). A new time-frequency analysis method, the Hilbert-Huang transform (HHT), has been applied to analyze scintillation data taken at Ascension Island in order to understand the characteristics of the corresponding ionospheric irregularities. The HHT method enables us to extract the quasiperiodic diffraction signals embedded inside the scintillation data and to obtain the characteristics of such diffraction signals. The cross correlation of the two sets of diffraction signals received by two stations at each end of Ascension Island indicates that the density irregularity pattern that causes the diffraction pattern should have an eastward drift velocity of ~130 m/s. The HHT analysis of the instantaneous frequency in the QP diffraction patterns also reveals some frequency shifts in their peak frequencies. For the QP diffraction pattern caused by the leading edge of the large density gradient at the east wall of a structured bubble, an ascending note in the peak frequency is observed, and for the trailing edge a descending note is observed. The linear change in the transient of the peak frequency in the QP diffraction pattern is consistent with the theory and the simulation results of Franke et al. An estimate of the slope of the transient frequency allows us to identify the locations of the plasma walls, and the east-west scale of the irregularity can be estimated; in our case we obtain an east-west scale of about 24 km. Furthermore, the height of the density irregularities that cause the diffraction pattern is estimated to be between 310 and 330 km, that is, around the F peak during the observation.

  17. Accuracy of estimation of genomic breeding values in pigs using low-density genotypes and imputation.

    PubMed

    Badke, Yvonne M; Bates, Ronald O; Ernst, Catherine W; Fix, Justin; Steibel, Juan P

    2014-04-16

    Genomic selection has the potential to increase genetic progress. Genotype imputation of high-density single-nucleotide polymorphism (SNP) genotypes can improve the cost efficiency of genomic breeding value (GEBV) prediction for pig breeding. Consequently, the objectives of this work were to: (1) estimate accuracy of genomic evaluation and GEBV for three traits in a Yorkshire population and (2) quantify the loss of accuracy of genomic evaluation and GEBV when genotypes were imputed under two scenarios: a high-cost, high-accuracy scenario in which only selection candidates were imputed from a low-density platform and a low-cost, low-accuracy scenario in which all animals were imputed using a small reference panel of haplotypes. Phenotypes and genotypes obtained with the PorcineSNP60 BeadChip were available for 983 Yorkshire boars. Genotypes of selection candidates were masked and imputed using tagSNP in the GeneSeek Genomic Profiler (10K). Imputation was performed with BEAGLE using 128 or 1800 haplotypes as reference panels. GEBV were obtained through an animal-centric ridge regression model using de-regressed breeding values as response variables. Accuracy of genomic evaluation was estimated as the correlation between estimated breeding values and GEBV in a 10-fold cross validation design. Accuracy of genomic evaluation using observed genotypes was high for all traits (0.65-0.68). Using genotypes imputed from a large reference panel (accuracy: R² = 0.95) for genomic evaluation did not significantly decrease accuracy, whereas a scenario with genotypes imputed from a small reference panel (R² = 0.88) did show a significant decrease in accuracy. Genomic evaluation based on imputed genotypes in selection candidates can be implemented at a fraction of the cost of a genomic evaluation using observed genotypes and still yield virtually the same accuracy. On the other hand, using a very small reference panel of haplotypes to impute training animals and candidates for selection results in lower accuracy of genomic evaluation.
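
    A minimal sketch of ridge-regression GEBV prediction and cross-validated accuracy, written in the SNP-effect (rather than animal-centric) parameterization, which yields essentially equivalent predictions; the penalty value, genotype coding and fold assignment are illustrative, and the paper's imputation step is not shown.

    ```python
    import numpy as np

    def gebv_ridge(genotypes, debv, lam):
        """Ridge-regression (SNP-BLUP-style) estimates of marker effects and GEBV.

        genotypes : (n_animals, n_snp) numeric genotype matrix (e.g. coded 0/1/2)
        debv      : (n_animals,) de-regressed breeding values used as the response
        lam       : ridge penalty (hypothetical value chosen by the user)
        """
        X = genotypes - genotypes.mean(axis=0)                  # column-centre genotypes
        beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ debv)
        return X @ beta, beta

    def cv_accuracy(genotypes, debv, lam, k=10, seed=0):
        """Accuracy as the correlation between the response and cross-validated GEBV."""
        rng = np.random.default_rng(seed)
        folds = rng.permutation(len(debv)) % k
        pred = np.empty(len(debv), dtype=float)
        for f in range(k):
            train, test = folds != f, folds == f
            _, beta = gebv_ridge(genotypes[train], debv[train], lam)
            Xc = genotypes - genotypes[train].mean(axis=0)      # centre with training means
            pred[test] = Xc[test] @ beta
        return np.corrcoef(debv, pred)[0, 1]
    ```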

  18. Early Universe synthesis of asymmetric dark matter nuggets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gresham, Moira I.; Lou, Hou Keong; Zurek, Kathryn M.

    We compute the mass function of bound states of asymmetric dark matter - nuggets - synthesized in the early Universe. We apply our results for the nugget density and binding energy computed from a nuclear model to obtain analytic estimates of the typical nugget size exiting synthesis. We numerically solve the Boltzmann equation for synthesis including two-to-two fusion reactions, estimating the impact of bottlenecks on the mass function exiting synthesis. These results provide the basis for studying the late Universe cosmology of nuggets in a future companion paper.

  19. Early Universe synthesis of asymmetric dark matter nuggets

    DOE PAGES

    Gresham, Moira I.; Lou, Hou Keong; Zurek, Kathryn M.

    2018-02-12

    We compute the mass function of bound states of asymmetric dark matter - nuggets - synthesized in the early Universe. We apply our results for the nugget density and binding energy computed from a nuclear model to obtain analytic estimates of the typical nugget size exiting synthesis. We numerically solve the Boltzmann equation for synthesis including two-to-two fusion reactions, estimating the impact of bottlenecks on the mass function exiting synthesis. These results provide the basis for studying the late Universe cosmology of nuggets in a future companion paper.

  20. Early Universe synthesis of asymmetric dark matter nuggets

    NASA Astrophysics Data System (ADS)

    Gresham, Moira I.; Lou, Hou Keong; Zurek, Kathryn M.

    2018-02-01

    We compute the mass function of bound states of asymmetric dark matter—nuggets—synthesized in the early Universe. We apply our results for the nugget density and binding energy computed from a nuclear model to obtain analytic estimates of the typical nugget size exiting synthesis. We numerically solve the Boltzmann equation for synthesis including two-to-two fusion reactions, estimating the impact of bottlenecks on the mass function exiting synthesis. These results provide the basis for studying the late Universe cosmology of nuggets in a future companion paper.

  1. Analytical methods for measuring the parameters of interstellar gas using methanol observations

    NASA Astrophysics Data System (ADS)

    Kalenskii, S. V.; Kurtz, S.

    2016-08-01

    The excitation of methanol in the absence of external radiation is analyzed, and LTE methods for probing interstellar gas are considered. It is shown that rotation diagrams correctly estimate the gas kinetic temperature only if they are constructed using lines whose upper levels are located in the same K-ladders, such as the J0-J(-1) E lines at 157 GHz, the J1-J0 E lines at 165 GHz, and the J2-J1 E lines at 25 GHz. The gas density must be no less than 10^7 cm^-3. Rotation diagrams constructed from lines with different K values for their upper levels (e.g., 2K-1K at 96 GHz, 3K-2K at 145 GHz, 5K-4K at 241 GHz) significantly underestimate the temperature, but enable estimation of the density. In addition, diagrams based on the 2K-1K lines can be used to estimate the methanol column density within a factor of about two to five. It is suggested that rotation diagrams should be used in the following manner. First, two rotation diagrams should be constructed, one from the lines at 96, 145, or 241 GHz, and another from the lines at 157, 165, or 25 GHz. The former diagram is used to estimate the gas density. If the density is about 10^7 cm^-3 or higher, the latter diagram reproduces the temperature fairly well. If the density is around 10^6 cm^-3, the temperature obtained from the latter diagram should be multiplied by a factor of 1.5-2. If the density is about 10^5 cm^-3 or lower, then the latter diagram yields a temperature that is lower than the kinetic temperature by a factor of three or more, and should be used only as a lower limit on the kinetic temperature. The errors in the methanol column density determined from the integrated intensity of a single line can be more than an order of magnitude, even when the gas temperature is well known. However, if the J0-(J-1)0 E lines, as well as the J1-(J-1)1 A+ or A- lines, are used, the relative error in the column density is no more than a factor of a few.
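
    For reference, a sketch of the rotation-diagram fit that underlies the discussion above: regressing ln(N_u/g_u) on E_u/k gives the rotation temperature from the slope and ln(N/Q) from the intercept. Deriving the level column densities N_u from integrated line intensities is assumed to have been done beforehand.

    ```python
    import numpy as np

    def rotation_diagram(upper_energies_K, level_column_densities, degeneracies):
        """Least-squares rotation-diagram fit.

        upper_energies_K       : upper-level energies E_u/k in kelvin
        level_column_densities : N_u values derived from integrated line intensities
        degeneracies           : statistical weights g_u
        Returns the rotation temperature (K) and the intercept, ln(N_total / Q(T_rot)).
        """
        y = np.log(np.asarray(level_column_densities, dtype=float)
                   / np.asarray(degeneracies, dtype=float))
        slope, intercept = np.polyfit(np.asarray(upper_energies_K, dtype=float), y, 1)
        t_rot = -1.0 / slope
        return t_rot, intercept
    ```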

  2. Intercomparison of gamma scattering, gammatography, and radiography techniques for mild steel nonuniform corrosion detection

    NASA Astrophysics Data System (ADS)

    Priyada, P.; Margret, M.; Ramar, R.; Shivaramu; Menaka, M.; Thilagam, L.; Venkataraman, B.; Raj, Baldev

    2011-03-01

    This paper focuses on mild steel (MS) corrosion detection and an intercomparison of the results obtained by gamma scattering, gammatography, and radiography techniques. The gamma scattering non-destructive evaluation (NDE) method utilizes scattered gamma radiation for the detection of corrosion, and the scattering experimental setup is an indigenously designed, automated, personal computer (PC) controlled scanning system consisting of a computerized numerical control (CNC) controlled six-axis source-detector system and a four-axis job positioning system. The system has been successfully used to quantify the magnitude of corrosion and the thickness profile of a MS plate with nonuniform corrosion, and the results are correlated with those obtained from conventional gammatography and radiography imaging measurements. A simple and straightforward reconstruction algorithm for reconstructing the densities of the objects under investigation, together with an unambiguous interpretation of the signal as a function of material density at any point of the thick object being inspected, is described. In this method the density of the target need not be known, and only knowledge of the target material's mass attenuation coefficients (composition) for the incident and scattered energies is required to reconstruct the density of each voxel of the specimen being studied. The Monte Carlo (MC) numerical simulation of the phenomena is done using the Monte Carlo N-Particle Transport Code (MCNP); quantitative estimates of the signal-to-noise ratio for different percentages of MS corrosion derived from these simulations are presented, and the spectra are compared with the experimental data. The gammatography experiments are carried out using the same PC-controlled scanning system in a narrow-beam, good-geometry setup, and the thickness loss is estimated from the measured transmitted intensity. Radiography of the MS plates is carried out using a 160 kV X-ray machine. The digitized radiographs, with a resolution of 50 μm, are processed for the detection of corrosion damage at five different locations. The thickness losses due to corrosion of the MS plate obtained by the gamma scattering method are compared with those values obtained by the gammatography and radiography techniques. The percentage thickness loss estimated at different positions of the corroded MS plate varies from 17.78 to 27.0, from 18.9 to 24.28, and from 18.9 to 24.28 by gamma scattering, gammatography, and radiography techniques, respectively. Overall, these results are consistent and in line with each other.

  3. SU-F-J-213: Feasibility Study of Using a Dual-Energy Cone Beam CT (DECBCT) in Proton Therapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, H; Xing, L; Kanehira, T

    2016-06-15

    Purpose: The aim of this study is to evaluate the feasibility of using a dual-energy CBCT (DECBCT) in proton therapy treatment planning to allow for accurate electron density estimation. Methods: For direct comparison, two scenarios were selected: a dual-energy fan-beam CT (high: 140 kVp, low: 80 kVp) and a DECBCT (high: 125 kVp, low: 80 kVp). A Gammex 467 tissue characterization phantom was used, including rods of air, water, bone (B2–30% mineral), cortical bone (SB3), lung (LN-300), brain, liver and adipose. For the CBCT, Hounsfield Unit (HU) numbers were first obtained from the reconstructed images after a calibration was made based on water (=0) and air (=−1000). For each tissue surrogate, region-of-interest (ROI) analyses were made to derive high-energy and low-energy HU values (HUhigh and HUlow), which were subsequently used to estimate electron density based on the algorithm previously described by Hunemohr N., et al. Parameters k1 and k2 are energy dependent and can be derived from calibration materials. Results: For the dual-energy FBCT, the electron density is found to be within +/−3% error relative to the values provided by the phantom vendor: −1.8% (water), 0.03% (lung), 1.1% (brain), −2.82% (adipose), −0.49% (liver) and −1.89% (cortical bones). For the DECBCT, the estimation of electron density exhibits a relatively larger variation: −1.76% (water), −36.7% (lung), −1.92% (brain), −3.43% (adipose), 8.1% (liver) and 9.5% (cortical bones). Conclusion: For DECBCT, the accuracy of electron density estimation is inferior to that of a FBCT, especially for materials of either low density (lung) or high density (cortical bone) compared to water. Such limitation arises from inaccurate HU number derivation in a CBCT. Advanced scatter-correction and HU calibration routines, as well as the deployment of photon counting CT detectors, need to be investigated to minimize the difference between FBCT and CBCT.

  4. Rényi continuous entropy of DNA sequences.

    PubMed

    Vinga, Susana; Almeida, Jonas S

    2004-12-07

    Entropy measures of DNA sequences estimate their randomness or, inversely, their repeatability. L-block Shannon discrete entropy accounts for the empirical distribution of all length-L words and has convergence problems for finite sequences. A new entropy measure that extends Shannon's formalism is proposed. Rényi's quadratic entropy, calculated with the Parzen window density estimation method applied to CGR/USM continuous maps of DNA sequences, constitutes a novel technique to evaluate sequence global randomness without some of the former method's drawbacks. The asymptotic behaviour of this new measure was analytically deduced, and entropies were calculated for several synthetic and experimental biological sequences. The results obtained were compared with the distributions of the null model of randomness obtained by simulation. The biological sequences showed different p-values according to the kernel resolution of Parzen's method, which might indicate an unknown level of organization of their patterns. This new technique can be very useful in the study of DNA sequence complexity and provides additional tools for DNA entropy estimation. The main MATLAB applications developed and additional material are available at the webpage. Specialized functions can be obtained from the authors.
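
    A compact sketch of Rényi quadratic entropy with a Gaussian Parzen window, using the closed form of the information potential (the double sum of pairwise Gaussians with variance 2σ²); this is the general formula, written in Python rather than the authors' MATLAB code, and the input points are assumed to be CGR/USM map coordinates.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def renyi_quadratic_entropy(points, sigma):
        """H2 = -log ∫ p(x)^2 dx for a Gaussian-kernel Parzen estimate; the integral
        reduces to (1/N^2) * sum_ij Gaussian(x_i - x_j; variance 2*sigma^2).

        points : (N, d) coordinates, e.g. CGR/USM map coordinates of a sequence
        sigma  : Parzen kernel width (the 'kernel resolution' referred to above)
        """
        pts = np.asarray(points, dtype=float)
        n, d = pts.shape
        sq_dists = cdist(pts, pts, "sqeuclidean")
        norm = (4.0 * np.pi * sigma ** 2) ** (-d / 2.0)
        information_potential = norm * np.exp(-sq_dists / (4.0 * sigma ** 2)).mean()
        return -np.log(information_potential)
    ```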

  5. Stochastic Model of Seasonal Runoff Forecasts

    NASA Astrophysics Data System (ADS)

    Krzysztofowicz, Roman; Watada, Leslie M.

    1986-03-01

    Each year the National Weather Service and the Soil Conservation Service issue a monthly sequence of five (or six) categorical forecasts of the seasonal snowmelt runoff volume. To describe uncertainties in these forecasts for the purposes of optimal decision making, a stochastic model is formulated. It is a discrete-time, finite, continuous-space, nonstationary Markov process. Posterior densities of the actual runoff conditional upon a forecast, and transition densities of forecasts are obtained from a Bayesian information processor. Parametric densities are derived for the process with a normal prior density of the runoff and a linear model of the forecast error. The structure of the model and the estimation procedure are motivated by analyses of forecast records from five stations in the Snake River basin, from the period 1971-1983. The advantages of supplementing the current forecasting scheme with a Bayesian analysis are discussed.
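
    A minimal sketch of the normal-linear Bayesian update implied above: with a normal prior on the seasonal runoff and a linear forecast-error model F = a + b*Q + e, the posterior of Q given a forecast is normal with the conjugate mean and variance below. Parameter names are illustrative; the full model also includes transition densities between successive forecasts, which are not shown.

    ```python
    def posterior_runoff(forecast, prior_mean, prior_var, a, b, error_var):
        """Posterior mean and variance of actual runoff Q given a forecast F,
        under Q ~ N(prior_mean, prior_var) and F = a + b*Q + e, e ~ N(0, error_var)."""
        precision = 1.0 / prior_var + b ** 2 / error_var
        post_var = 1.0 / precision
        post_mean = post_var * (prior_mean / prior_var + b * (forecast - a) / error_var)
        return post_mean, post_var
    ```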

  6. A spatial analysis of the association between restaurant density and body mass index in Canadian adults.

    PubMed

    Hollands, Simon; Campbell, M Karen; Gilliland, Jason; Sarma, Sisira

    2013-10-01

    To investigate the association between fast-food restaurant density and adult body mass index (BMI) in Canada. Individual-level BMI and confounding variables were obtained from the 2007-2008 Canadian Community Health Survey master file. Locations of the fast-food and full-service chain restaurants and other non-chain restaurants were obtained from the 2008 Infogroup Canada business database. Food outlet density (fast-food, full-service and other) per 10,000 population was calculated for each Forward Sortation Area (FSA). Global (Moran's I) and local indicators of spatial autocorrelation of BMI were assessed. Ordinary least squares (OLS) and spatial auto-regressive error (SARE) methods were used to assess the association between local food environment and adult BMI in Canada. Global and local spatial autocorrelation of BMI were found in our univariate analysis. We found that OLS and SARE estimates were very similar in our multivariate models. An additional fast-food restaurant per 10,000 people at the FSA-level is associated with a 0.022 kg/m² increase in BMI. On the other hand, other restaurant density is negatively related to BMI. Fast-food restaurant density is positively associated with BMI in Canada. Results suggest that restricting availability of fast-food in local neighborhoods may play a role in obesity prevention. © 2013.

  7. Saturn's ionosphere - Inferred electron densities

    NASA Technical Reports Server (NTRS)

    Kaiser, M. L.; Desch, M. D.; Connerney, J. E. P.

    1984-01-01

    During the two Voyager encounters with Saturn, radio bursts were detected which appear to have originated from atmospheric lightning storms. Although these bursts generally extended over frequencies from as low as 100 kHz to the upper detection limit of the instrument, 40 MHz, they often exhibited a sharp but variable low frequency cutoff below which bursts were not detected. We interpret the variable low-frequency extent of these bursts to be due to the reflection of the radio waves as they propagate through an ionosphere which varies with local time. We obtain estimates of electron densities at a variety of latitude and local time locations. These compare well with the dawn and dusk densities measured by the Pioneer 11 and Voyager Radio Science investigations, and with model predictions for dayside densities. However, we infer a two-order-of-magnitude diurnal variation of electron density, which had not been anticipated by theoretical models of Saturn's ionosphere, and an equally dramatic extinction of ionospheric electron density by Saturn's rings. Previously announced in STAR as N84-17102
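
    A one-line calculation behind the inference described above, assuming the low-frequency cutoff occurs at the local plasma frequency, f_p [Hz] ≈ 8980·sqrt(n_e [cm^-3]); cutoffs of 100 kHz and 1 MHz then bracket electron densities of roughly 10^2 to 10^4 cm^-3.

    ```python
    def electron_density_from_cutoff(f_cutoff_hz):
        """Electron density (cm^-3) implied by a radio-burst low-frequency cutoff,
        assuming the cutoff lies at the local plasma frequency."""
        return (f_cutoff_hz / 8980.0) ** 2

    # Example: a 100 kHz cutoff corresponds to n_e of about 1.2e2 cm^-3,
    # while a 1 MHz cutoff corresponds to about 1.2e4 cm^-3.
    print(electron_density_from_cutoff(1e5), electron_density_from_cutoff(1e6))
    ```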

  8. Saturn's ionosphere: Inferred electron densities

    NASA Technical Reports Server (NTRS)

    Kaiser, M. L.; Desch, M. D.; Connerney, J. E. P.

    1983-01-01

    During the two Voyager encounters with Saturn, radio bursts were detected which appear to have originated from atmospheric lightning storms. Although these bursts generally extended over frequencies from as low as 100 kHz to the upper detection limit of the instrument, 40 MHz, they often exhibited a sharp but variable low frequency cutoff below which bursts were not detected. We interpret the variable low-frequency extent of these bursts to be due to the reflection of the radio waves as they propagate through an ionosphere which varies with local time. We obtain estimates of electron densities at a variety of latitude and local time locations. These compare well with the dawn and dusk densities measured by the Pioneer 11 and Voyager Radio Science investigations, and with model predictions for dayside densities. However, we infer a two-order-of-magnitude diurnal variation of electron density, which had not been anticipated by theoretical models of Saturn's ionosphere, and an equally dramatic extinction of ionospheric electron density by Saturn's rings.

  9. [Gypsy moth Lymantria dispar L. in the South Urals: Patterns in population dynamics and modelling].

    PubMed

    Soukhovolsky, V G; Ponomarev, V I; Sokolov, G I; Tarasova, O V; Krasnoperova, P A

    2015-01-01

    The analysis is conducted on the population dynamics of the gypsy moth in different habitats of the South Urals. The pattern of cyclic changes in population density is examined, the temporal synchrony among time series of gypsy moth population dynamics from separate habitats of the South Urals is assessed, and the relationships between population density and weather conditions are studied. Based on the results obtained, a statistical model of gypsy moth population dynamics in the South Urals is designed, and estimates are given of the effects of regulatory and modifying factors on the population dynamics.

  10. Bayesian spatiotemporal analysis of zero-inflated biological population density data by a delta-normal spatiotemporal additive model.

    PubMed

    Arcuti, Simona; Pollice, Alessio; Ribecco, Nunziata; D'Onghia, Gianfranco

    2016-03-01

    We evaluate the spatiotemporal changes in the density of a particular species of crustacean known as deep-water rose shrimp, Parapenaeus longirostris, based on biological sample data collected during trawl surveys carried out from 1995 to 2006 as part of the international project MEDITS (MEDiterranean International Trawl Surveys). As is the case for many biological variables, density data are continuous and characterized by unusually large numbers of zeros, accompanied by a skewed distribution of the remaining values. Here we analyze the normalized density data by a Bayesian delta-normal semiparametric additive model including the effects of covariates, using penalized regression with low-rank thin-plate splines for nonlinear spatial and temporal effects. Modeling the zero and nonzero values by two joint processes, as we propose in this work, provides great flexibility and easy handling of complex likelihood functions, avoiding inaccurate statistical inferences due to misclassification of the high proportion of exact zeros in the model. Bayesian model estimation is obtained by Markov chain Monte Carlo simulations, suitably specifying the complex likelihood function of the zero-inflated density data. The study highlights relevant nonlinear spatial and temporal effects and the influence of the annual Mediterranean oscillations index and of the sea surface temperature on the distribution of the deep-water rose shrimp density. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
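
    The delta ("two joint processes") idea can be illustrated with a minimal log-likelihood sketch: a Bernoulli term for whether an observation is an exact zero and a normal term for the transformed nonzero densities. The function and simulated haul data below are hypothetical; the paper's full Bayesian spatiotemporal additive model with penalized splines is not reproduced here.

    ```python
    import numpy as np
    from scipy import stats

    def delta_normal_loglik(y, p, mu, sigma):
        """Log-likelihood of zero-inflated data: Bernoulli zero process + normal nonzero process."""
        y = np.asarray(y, dtype=float)
        zero = (y == 0)
        ll = np.where(zero,
                      np.log(1.0 - p),                                  # exact zeros
                      np.log(p) + stats.norm.logpdf(y, mu, sigma))      # transformed nonzero densities
        return ll.sum()

    # Hypothetical normalized haul densities: ~40% exact zeros, roughly normal nonzero values
    rng = np.random.default_rng(0)
    y = np.where(rng.random(200) < 0.4, 0.0, rng.normal(2.0, 0.5, 200))
    print(delta_normal_loglik(y, p=0.6, mu=2.0, sigma=0.5))
    ```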

  11. Feasibility of and agreement between MR imaging and spectroscopic estimation of hepatic proton density fat fraction in children with known or suspected nonalcoholic fatty liver disease.

    PubMed

    Achmad, Emil; Yokoo, Takeshi; Hamilton, Gavin; Heba, Elhamy R; Hooker, Jonathan C; Changchien, Christopher; Schroeder, Michael; Wolfson, Tanya; Gamst, Anthony; Schwimmer, Jeffrey B; Lavine, Joel E; Sirlin, Claude B; Middleton, Michael S

    2015-10-01

    To assess feasibility of and agreement between magnetic resonance imaging (MRI) and magnetic resonance spectroscopy (MRS) for estimating hepatic proton density fat fraction (PDFF) in children with known or suspected nonalcoholic fatty liver disease (NAFLD). Children were included in this study from two previous research studies in each of which three MRI and three MRS acquisitions were obtained. Sequence acceptability, and MRI- and MRS-estimated PDFF were evaluated. Agreement of MRI- with MRS-estimated hepatic PDFF was assessed by linear regression and Bland-Altman analysis. Age, sex, BMI-Z score, acquisition time, and artifact score effects on MRI- and MRS-estimated PDFF agreement were assessed by multiple linear regression. Eighty-six children (61 boys and 25 girls) were included in this study. Slope and intercept from regressing MRS-PDFF on MRI-PDFF were 0.969 and 1.591%, respectively, and the Bland-Altman bias and 95% limits of agreement were 1.17% ± 2.61%. MRI motion artifact score was higher in boys than girls (by 0.21, p = 0.021). Higher BMI-Z score was associated with lower agreement between MRS and MRI (p = 0.045). Hepatic PDFF estimation by both MRI and MRS is feasible, and MRI- and MRS-estimated PDFF agree closely in children with known or suspected NAFLD.
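
    A minimal sketch of the Bland-Altman summary used above (bias and 95% limits of agreement between paired MRI- and MRS-PDFF estimates); the paired values are hypothetical, not taken from the study.

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Bias and 95% limits of agreement between two paired measurements."""
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        diff = a - b                      # per-subject differences
        bias = diff.mean()                # mean difference (systematic offset)
        half_width = 1.96 * diff.std(ddof=1)
        return bias, bias - half_width, bias + half_width

    # Hypothetical paired PDFF values (%) for a handful of subjects
    mri = np.array([3.1, 8.4, 15.2, 22.8, 5.6])
    mrs = np.array([2.5, 7.0, 14.1, 21.0, 4.9])
    print(bland_altman(mri, mrs))
    ```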

  12. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

    NASA Astrophysics Data System (ADS)

    Kusumoto, Shigekazu

    2017-12-01

    The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation as the fault dip affects estimations of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvector of the observed or calculated gravity gradient tensor on a profile and investigating its properties through numerical simulations. From numerical simulations, it was found that the maximum eigenvector of the tensor points to the high-density causative body, and the dip of the maximum eigenvector closely follows the dip of the normal fault. It was also found that the minimum eigenvector of the tensor points to the low-density causative body and that the dip of the minimum eigenvector closely follows the dip of the reverse fault. It was shown that the eigenvector of the gravity gradient tensor for estimating fault dips is determined by fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result that corresponded to conventional fault dip estimations by geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
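
    The eigenvector idea can be sketched for a single profile point: build the symmetric 2-D gradient tensor, take its eigen-decomposition, and read the dip of the maximum (or minimum) eigenvector from the horizontal. The tensor components below are hypothetical values, not data from the Kurehayama Fault analysis.

    ```python
    import numpy as np

    def tensor_eigen_dips(gxx, gxz, gzz):
        """Dips (degrees from horizontal) of the max and min eigenvectors of a 2-D gradient tensor."""
        T = np.array([[gxx, gxz],
                      [gxz, gzz]], dtype=float)        # symmetric tensor on a profile (x horizontal, z down)
        vals, vecs = np.linalg.eigh(T)                 # eigenvalues sorted in ascending order
        def dip(v):                                    # angle of an eigenvector from the horizontal axis
            return np.degrees(np.arctan2(abs(v[1]), abs(v[0])))
        return dip(vecs[:, 1]), dip(vecs[:, 0])        # (max-eigenvector dip, min-eigenvector dip)

    # Hypothetical tensor components (in Eotvos) at one station above a fault
    max_dip, min_dip = tensor_eigen_dips(gxx=12.0, gxz=-8.0, gzz=-12.0)
    print("max-eigenvector dip:", round(max_dip, 1), "deg; min-eigenvector dip:", round(min_dip, 1), "deg")
    ```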

  13. Assessing the influence of return density on estimation of lidar-based aboveground biomass in tropical peat swamp forests of Kalimantan, Indonesia

    NASA Astrophysics Data System (ADS)

    Manuri, Solichin; Andersen, Hans-Erik; McGaughey, Robert J.; Brack, Cris

    2017-04-01

    The airborne lidar system (ALS) provides a means to efficiently monitor the status of remote tropical forests and continues to be the subject of intense evaluation. However, the cost of ALS acquisition can vary significantly depending on the acquisition parameters, particularly the return density (i.e., spatial resolution) of the lidar point cloud. This study assessed the effect of lidar return density on the accuracy of lidar metrics and regression models for estimating aboveground biomass (AGB) and basal area (BA) in tropical peat swamp forests (PSF) in Kalimantan, Indonesia. A large dataset of ALS covering an area of 123,000 ha was used in this study. This study found that cumulative return proportion (CRP) variables represent a better accumulation of AGB over tree heights than height-related variables. The CRP variables in power models explained 80.9% and 90.9% of the BA and AGB variations, respectively. Further, it was found that low-density (and low-cost) lidar should be considered as a feasible option for assessing AGB and BA in vast areas of flat, lowland PSF. The performance of the models generated using reduced return densities as low as 1/9 returns per m2 also yielded strong agreement with the original high-density data. The use of model-based statistical inference enabled relatively precise estimates of the mean AGB at the landscape scale to be obtained with a fairly low density of 1/4 returns per m2, with less than 10% standard error (SE). Further, even when very low-density lidar data were used (i.e., 1/49 returns per m2), the bias of the mean AGB estimates was still less than 10%, with a SE of approximately 15%. This study also investigated the influence of different DTM resolutions for normalizing the elevation during the generation of forest-related lidar metrics using point clouds of various return densities. We found that the high-resolution digital terrain model (DTM) had little effect on the accuracy of lidar metrics calculation in PSF. The accuracy of low-density lidar metrics in PSF was more influenced by the density of aboveground returns, rather than the last return. This is due to the flat topography of the study area. The results of this study will be valuable for future economical and feasible assessments of forest metrics over large areas of tropical peat swamp ecosystems.

  14. Survival analysis for the missing censoring indicator model using kernel density estimation techniques

    PubMed Central

    Subramanian, Sundarraman

    2008-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
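
    A minimal sketch of the kind of kernel ingredient described above: a Nadaraya-Watson estimate of the conditional probability that the censoring indicator is non-missing, which can then supply inverse-probability-of-non-missingness weights. This is an illustration under simplified assumptions (one covariate, Gaussian kernel, hypothetical data), not the paper's estimator or its asymptotic theory.

    ```python
    import numpy as np

    def prob_nonmissing(t_eval, t_obs, nonmissing, bandwidth):
        """Nadaraya-Watson kernel estimate of P(censoring indicator observed | T = t)."""
        t_eval = np.atleast_1d(np.asarray(t_eval, dtype=float))
        t_obs = np.asarray(t_obs, dtype=float)
        nonmissing = np.asarray(nonmissing, dtype=float)
        out = np.empty_like(t_eval)
        for i, t in enumerate(t_eval):
            w = np.exp(-0.5 * ((t - t_obs) / bandwidth) ** 2)   # Gaussian kernel weights
            out[i] = (w * nonmissing).sum() / w.sum()
        return out

    # Hypothetical follow-up times and indicator of whether the censoring status was recorded
    t = np.array([1.2, 2.5, 3.1, 4.0, 5.7, 6.3])
    r = np.array([1, 1, 0, 1, 0, 1])
    pi_hat = prob_nonmissing(t, t, r, bandwidth=1.0)
    ipw = r / pi_hat          # inverse-probability-of-non-missingness weights (zero where status is missing)
    print(pi_hat.round(3), ipw.round(3))
    ```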

  15. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.

  16. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    PubMed

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data according to several contexts like fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters must be estimated from small sample sizes. This small sample size induces errors in the estimated HOS parameters, restraining real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine both kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. Then, the proposed statistics are tested, using Monte Carlo simulations, on both normal and Log-normal PDFs that mimic observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to small sample size effects and more accurate in sEMG PDF shape screening applications.

  17. Detection of image structures using the Fisher information and the Rao metric.

    PubMed

    Maybank, Stephen J

    2004-12-01

    In many detection problems, the structures to be detected are parameterized by the points of a parameter space. If the conditional probability density function for the measurements is known, then detection can be achieved by sampling the parameter space at a finite number of points and checking each point to see if the corresponding structure is supported by the data. The number of samples and the distances between neighboring samples are calculated using the Rao metric on the parameter space. The Rao metric is obtained from the Fisher information which is, in turn, obtained from the conditional probability density function. An upper bound is obtained for the probability of a false detection. The calculations are simplified in the low noise case by making an asymptotic approximation to the Fisher information. An application to line detection is described. Expressions are obtained for the asymptotic approximation to the Fisher information, the volume of the parameter space, and the number of samples. The time complexity for line detection is estimated. An experimental comparison is made with a Hough transform-based method for detecting lines.

  18. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.

    PubMed

    Han, Qiyang; Wellner, Jon A

    2016-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.

  19. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES

    PubMed Central

    Han, Qiyang; Wellner, Jon A.

    2017-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998–3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave. PMID:28966410

  20. Note: Real-time monitoring via second-harmonic interferometry of a flow gas cell for laser wakefield acceleration.

    PubMed

    Brandi, F; Giammanco, F; Conti, F; Sylla, F; Lambert, G; Gizzi, L A

    2016-08-01

    The use of a gas cell as a target for laser wakefield acceleration (LWFA) offers the possibility to obtain stable and manageable laser-plasma interaction process, a mandatory condition for practical applications of this emerging technique, especially in multi-stage accelerators. In order to obtain full control of the gas particle number density in the interaction region, thus allowing for a long term stable and manageable LWFA, real-time monitoring is necessary. In fact, the ideal gas law cannot be used to estimate the particle density inside the flow cell based on the preset backing pressure and the room temperature because the gas flow depends on several factors like tubing, regulators, and valves in the gas supply system, as well as vacuum chamber volume and vacuum pump speed/throughput. Here, second-harmonic interferometry is applied to measure the particle number density inside a flow gas cell designed for LWFA. The results demonstrate that real-time monitoring is achieved and that using low backing pressure gas (<1 bar) and different cell orifice diameters (<2 mm) it is possible to finely tune the number density up to the 10(19) cm(-3) range well suited for LWFA.

  1. Note: Real-time monitoring via second-harmonic interferometry of a flow gas cell for laser wakefield acceleration

    NASA Astrophysics Data System (ADS)

    Brandi, F.; Giammanco, F.; Conti, F.; Sylla, F.; Lambert, G.; Gizzi, L. A.

    2016-08-01

    The use of a gas cell as a target for laser wakefield acceleration (LWFA) offers the possibility to obtain stable and manageable laser-plasma interaction process, a mandatory condition for practical applications of this emerging technique, especially in multi-stage accelerators. In order to obtain full control of the gas particle number density in the interaction region, thus allowing for a long term stable and manageable LWFA, real-time monitoring is necessary. In fact, the ideal gas law cannot be used to estimate the particle density inside the flow cell based on the preset backing pressure and the room temperature because the gas flow depends on several factors like tubing, regulators, and valves in the gas supply system, as well as vacuum chamber volume and vacuum pump speed/throughput. Here, second-harmonic interferometry is applied to measure the particle number density inside a flow gas cell designed for LWFA. The results demonstrate that real-time monitoring is achieved and that using low backing pressure gas (<1 bar) and different cell orifice diameters (<2 mm) it is possible to finely tune the number density up to the 10^19 cm^-3 range well suited for LWFA.

  2. Relationship between field-aligned currents and inverted-V parallel potential drops observed at midaltitudes

    NASA Astrophysics Data System (ADS)

    Sakanoi, T.; Fukunishi, H.; Mukai, T.

    1995-10-01

    The inverted-V field-aligned acceleration region existing in the altitude range of several thousand kilometers plays an essential role for the magnetosphere-ionosphere coupling system. The adiabatic plasma theory predicts a linear relationship between field-aligned current density (J∥) and parallel potential drop (Φ∥), that is, J∥=KΦ∥, where K is the field-aligned conductance. We examined this relationship using the charged particle and magnetic field data obtained from the Akebono (Exos D) satellite. The potential drop above the satellite was derived from the peak energy of downward electrons, while the potential drop below the satellite was derived from two different methods: the peak energy of upward ions and the energy-dependent widening of the electron loss cone. On the other hand, field-aligned current densities in the inverted-V region were estimated from the Akebono magnetometer data. Using these potential drops and field-aligned current densities, we estimated the linear field-aligned conductance KJΦ. Further, we obtained the corrected field-aligned conductance KCJΦ by applying the full Knight's formula to the current-voltage relationship. We also independently estimated the field-aligned conductance KTN from the number density and the thermal temperature of magnetospheric source electrons which were obtained by fitting accelerated Maxwellian functions for precipitating electrons. The results are summarized as follows: (1) The latitudinal dependence of parallel potential drops is characterized by a narrow V-shaped structure with a width of 0.4°-1.0°. (2) Although the inverted-V potential region exactly corresponds to the upward field-aligned current region, the latitudinal dependence of upward current intensity is an inverted-U shape rather than an inverted-V shape. Thus it is suggested that the field-aligned conductance KCJΦ changes with a V-shaped latitudinal dependence. In many cases, KCJΦ values at the edge of the inverted-V region are about 5-10 times larger than those at the center. (3) By comparing KCJΦ with KTN, KCJΦ is found to be about 2-20 times larger than KTN. These results suggest that low-energy electrons such as trapped electrons, secondary and back-scattered electrons, and ionospheric electrons significantly contribute to upward field-aligned currents in the inverted-V region. It is therefore inferred that non-adiabatic pitch angle scattering processes play an important role in the inverted-V region.

  3. Mapping forest canopy fuels in Yellowstone National Park using lidar and hyperspectral data

    NASA Astrophysics Data System (ADS)

    Halligan, Kerry Quinn

    The severity and size of wildland fires in the forested western U.S. have increased in recent years despite improvements in fire suppression efficiency. This, along with increased density of homes in the wildland-urban interface, has resulted in high costs for fire management and increased risks to human health, safety and property. Crown fires, in comparison to surface fires, pose an especially high risk due to their intensity and high rate of spread. Crown fire models require a range of quantitative fuel parameters which can be difficult and costly to obtain, but advances in lidar and hyperspectral sensor technologies hold promise for delivering these inputs. Further research is needed, however, to assess the strengths and limitations of these technologies and the most appropriate analysis methodologies for estimating crown fuel parameters from these data. This dissertation focuses on retrieving critical crown fuel parameters, including canopy height, canopy bulk density and proportion of dead canopy fuel, from airborne lidar and hyperspectral data. Remote sensing data were used in conjunction with detailed field data on forest parameters and surface reflectance measurements. A new method was developed for retrieving Digital Surface Model (DSM) and Digital Canopy Models (DCM) from first return lidar data. Validation data on individual tree heights demonstrated the high accuracy (r2 of 0.95) of the DCMs developed via this new algorithm. Lidar-derived DCMs were used to estimate critical crown fire parameters including available canopy fuel, canopy height and canopy bulk density with linear regression model r2 values ranging from 0.75 to 0.85. Hyperspectral data were used in conjunction with Spectral Mixture Analysis (SMA) to assess fuel quality in the form of live versus dead canopy proportions. Severity and stage of insect-caused forest mortality were estimated using the fractional abundance of green vegetation, non-photosynthetic vegetation and shade obtained from SMA. Proportion of insect attack was estimated with a linear model producing an r2 of 0.6 using SMA and bark endmembers from image and reference libraries. Fraction of red attack, with a possible link to increased crown fire risk, was estimated with an r2 of 0.45.

  4. Fishery-independent surface abundance and density estimates of swordfish (Xiphias gladius) from aerial surveys in the Central Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Lauriano, Giancarlo; Pierantonio, Nino; Kell, Laurence; Cañadas, Ana; Donovan, Gregory; Panigada, Simone

    2017-07-01

    Fishery-independent surface density and abundance estimates for the swordfish were obtained through aerial surveys carried out over a large portion of the Central Mediterranean, implementing distance sampling methodologies. Both design- and model-based abundance and density showed an uneven occurrence of the species throughout the study area, with clusters of higher density occurring near converging fronts, strong thermoclines and/or underwater features. The surface abundance was estimated for the Pelagos Sanctuary for Mediterranean Marine Mammals in the summer of 2009 (n=1152; 95%CI=669.0-1981.0; %CV=27.64), the Sea of Sardinia, the Pelagos Sanctuary and the Central Tyrrhenian Sea for the summer of 2010 (n=3401; 95%CI=2067.0-5596.0; %CV=25.51), and for the Southern Tyrrhenian Sea during the winter months of 2010-2011 (n=1228; 95%CI=578-2605; %CV=38.59). The Mediterranean swordfish stock deserves special attention in light of the heavy fishing pressures. Furthermore, the unreliability of fishery-related data has, to date, hampered our ability to effectively inform long-term conservation in the Mediterranean Region. Considering that the European countries have committed to protect the resources and all the marine-related economic and social dynamics upon which they depend, the information presented here constitutes useful data towards the international legal requirements under the Marine Strategy Framework Directive, the Common Fisheries Policy, the Habitats and Species Directive and the Directive on Maritime Spatial Planning, among others.

  5. Geometric contribution leading to anomalous estimation of two-dimensional electron gas density in GaN based heterostructures

    NASA Astrophysics Data System (ADS)

    Upadhyay, Bhanu B.; Jha, Jaya; Takhar, Kuldeep; Ganguly, Swaroop; Saha, Dipankar

    2018-05-01

    We have observed that the estimation of two-dimensional electron gas density is dependent on the device geometry. The geometric contribution leads to the anomalous estimation of the GaN based heterostructure properties. The observed discrepancy is found to originate from the anomalous area dependent capacitance of GaN based Schottky diodes, which is an integral part of the high electron mobility transistors. The areal capacitance density is found to increase for smaller radii Schottky diodes, contrary to the constant value expected intuitively. The capacitance is found to follow a second-order polynomial in the radius for all the bias voltages and frequencies considered here. In addition to the quadratic dependency corresponding to the areal component, the linear dependency indicates a peripheral component. It is further observed that the ratio of the peripheral to the areal contribution is inversely proportional to the radius, confirming the periphery as the location of the additional capacitance. The peripheral component is found to be frequency dependent and tends to saturate to a lower value for measurements at a high frequency. In addition, the peripheral component is found to vanish when the surface is passivated by a combination of N2 and O2 plasma treatments. The cumulative surface state density per unit length of the perimeter of the Schottky diodes as obtained by the integrated response over the distance between the ohmic and Schottky contacts is found to be 2.75 × 10^10 cm^-1.
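
    The decomposition into areal and peripheral components can be sketched by fitting C(r) = a r^2 + b r to capacitances measured on diodes of different radii: the quadratic term gives the areal density, the linear term the peripheral density, and their ratio scales as 1/r as noted above. The radii and capacitance values below are hypothetical.

    ```python
    import numpy as np

    # Hypothetical capacitances (pF) of circular Schottky diodes of different radii (micrometres)
    radius = np.array([25.0, 50.0, 100.0, 200.0])
    cap = np.array([0.95, 3.4, 12.5, 47.0])

    # Least-squares fit of C(r) = a*r^2 + b*r (no constant term)
    A = np.column_stack([radius**2, radius])
    (a, b), *_ = np.linalg.lstsq(A, cap, rcond=None)

    areal = a / np.pi             # capacitance per unit area:      C_area = areal * pi * r^2
    peripheral = b / (2 * np.pi)  # capacitance per unit perimeter: C_peri = peripheral * 2 * pi * r
    print("areal term:", areal, "peripheral term:", peripheral)
    print("peripheral/areal contribution ratio:", (b / a) / radius)   # decreases as 1/r, as in the abstract
    ```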

  6. Development and validation of a fixed-precision sequential sampling plan for estimating brood adult density of Dendroctonus pseudotsugae (Coleoptera: Scolytidae)

    Treesearch

    Jose F. Negron; Willis C. Schaupp; Erik Johnson

    2000-01-01

    The Douglas-fir beetle, Dendroctonus pseudotsugae Hopkins, attacks Douglas-fir, Pseudotsuga menziesii (Mirb.) Franco (Pinaceae), throughout western North America. Periodic outbreaks cause increased mortality of its host. Land managers and forest health specialists often need to determine population trends of this insect. Bark samples were obtained from 326 trees...

  7. Inventory implications of using sampling variances in estimation of growth model coefficients

    Treesearch

    Albert R. Stage; William R. Wykoff

    2000-01-01

    Variables based on stand densities or stocking have sampling errors that depend on the relation of tree size to plot size and on the spatial structure of the population. Ignoring the sampling errors of such variables, which include most measures of competition used in both distance-dependent and distance-independent growth models, can bias the predictions obtained from...

  8. An Evaluation of Population Density Mapping and Built up Area Estimates in Sri Lanka Using Multiple Methodologies

    NASA Astrophysics Data System (ADS)

    Engstrom, R.; Soundararajan, V.; Newhouse, D.

    2017-12-01

    In this study we examine how well multiple population density and built-up area estimates that utilize satellite data compare in Sri Lanka. The population relationship is examined at the Gram Niladhari (GN) level, the lowest administrative unit in Sri Lanka, from the 2011 census. For this study we have two spatial domains, the whole country and a 3,500 km2 sub-sample for which we have complete high spatial resolution imagery coverage. For both the entire country and the sub-sample, we examine how consistently the existing publicly available measures of population constructed from satellite imagery predict population density. For the sub-sample alone, we examine how well a suite of values derived from high spatial resolution satellite imagery predicts population density and how our built-up area estimate compares to other publicly available estimates. Population measures were obtained from the Sri Lankan census, and were downloaded from Facebook, WorldPop, GPW, and Landscan. Percentage built-up area at the GN level was calculated from three sources: Facebook, Global Urban Footprint (GUF), and the Global Human Settlement Layer (GHSL). For the sub-sample we derived a variety of indicators from the high spatial resolution imagery using deep learning convolutional neural networks, an object-oriented approach, and a non-overlapping block spatial feature approach. Variables calculated include cars, shadows (a proxy for building height), built-up area, buildings, roof types, roads, type of agriculture, NDVI, Pantex, Histogram of Oriented Gradients (HOG), and others. Results indicate that population estimates are accurate at the higher, DS Division level but not necessarily at the GN level. Estimates from Facebook correlated well with census population (GN correlation of 0.91) but measures from GPW and WorldPop are more weakly correlated (0.64 and 0.34). Estimates of built-up area appear to be reliable. In the 32-DSD sub-sample, Facebook's built-up area measure is highly correlated with our built-up measure (correlation of 0.9). Preliminary regression results based on variables selected from Lasso regressions indicate that satellite indicators have exceptionally strong predictive power in predicting GN-level population level and density, with out-of-sample r-squared of 0.75 and 0.72, respectively.

  9. Local ionospheric electron density reconstruction from simultaneous ground-based GNSS and ionosonde measurements

    NASA Astrophysics Data System (ADS)

    Stankov, S. M.; Warnant, R.; Stegen, K.

    2009-04-01

    The purpose of the LIEDR (Local Ionospheric Electron Density Reconstruction) system is to acquire and process data from simultaneous ground-based GNSS TEC and digital ionosonde measurements, and subsequently to deduce the vertical electron density distribution in the local ionosphere. LIEDR is primarily designed to operate in real time for service applications, and, if sufficient data from solar and geomagnetic observations are available, to provide short-term forecast as well. For research applications and further development of the system, a post-processing mode of operation is also envisaged. In essence, the reconstruction procedure consists in the following. The high-precision ionosonde measurements are used for directly obtaining the bottom part of the electron density profile. The ionospheric profiler for the lower side (i.e. below the density peak height, hmF2) is based on the Epstein layer functions using the known values of the critical frequencies, foF2 and foE, and the propagation factor, M3000F2. The corresponding bottom-side part of the total electron content is calculated from this profile and is then subtracted from the GPS TEC value in order to obtain the unknown portion of the TEC in the upper side (i.e. above the hmF2). Ionosonde data, together with the simultaneously-measured TEC and empirically obtained O+/H+ ion transition level values, are all required for the determination of the topside electron density scale height. The topside electron density is considered as a sum of the constituent oxygen and hydrogen ion densities with unknown vertical scale heights. The latter are calculated by solving a system of transcendental equations that arise from the incorporation of a suitable ionospheric profiler (Chapman, Epstein, or Exponential) into formulae describing ionospheric conditions (plasma quasi-neutrality, ion transition level). Once the topside scale heights are determined, the construction of the vertical electron density distribution in the entire altitude range is a straightforward process. As a by-product of the described procedure, the value of the ionospheric slab thickness can be easily computed. To be able to provide forecast, additional information about the current solar and geomagnetic activity is needed. For the purpose, observations available in real time -- at the Royal Institute of Meteorology (RMI), the Royal Observatory of Belgium (ROB), and the US National Oceanic and Atmospheric Administration (NOAA) -- are used. Recently, a new hybrid model for estimating and predicting the local magnetic index K has been developed. This hybrid model has the advantage of using both, ground-based (geomagnetic field components) and space-based (solar wind parameters) measurements, which results in more reliable estimates of the level of geomagnetic activity - current and future. The described reconstruction procedure has been tested on actual measurements at the RMI Dourbes Geophysics Centre (coordinates: 50.1N, 4.6E) where a GPS receiver is collocated with a digital ionosonde (code: DB049, type: Lowell DGS 256). Currently, the nominal time resolution between two consecutive reconstructions is set to 15 minutes with a forecast horizon for each reconstruction of up to 60 minutes. Several applications are envisaged. For example, the ionospheric propagation delays can be estimated and corrected much easier if the electron density profile is available at a nearby location on a real-time basis. 
Also, both the input data and the reconstruction results can be used for validation purposes in ionospheric models, maps, and services. Recent studies suggest that such ionospheric monitoring systems can help research/services related to aircraft navigation, e.g. for development of the 'ionospheric threat' methodology.
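
    A much-simplified sketch of the topside step described above: subtract the ionosonde-derived bottom-side TEC from the GNSS TEC and solve for the scale height that makes an assumed topside profiler integrate to that remainder. A single alpha-Chapman species is used here instead of the O+/H+ two-species formulation, and all input values are hypothetical.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    KM_TO_M = 1.0e3
    TECU = 1.0e16                      # electrons per m^2

    def chapman(h_km, nmf2, hmf2_km, scale_km):
        """Alpha-Chapman electron density (m^-3) with peak density nmf2 at height hmf2_km."""
        z = (h_km - hmf2_km) / scale_km
        return nmf2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

    def topside_tec_tecu(nmf2, hmf2_km, scale_km):
        """Topside TEC (TECU) of the Chapman profile, integrated from hmF2 up to hmF2 + 2000 km."""
        integral, _ = quad(chapman, hmf2_km, hmf2_km + 2000.0, args=(nmf2, hmf2_km, scale_km))
        return integral * KM_TO_M / TECU

    def topside_scale_height(tec_gnss, tec_bottom, nmf2, hmf2_km):
        """Scale height (km) whose topside TEC matches GNSS TEC minus the bottom-side TEC."""
        tec_top = tec_gnss - tec_bottom
        return brentq(lambda H: topside_tec_tecu(nmf2, hmf2_km, H) - tec_top, 20.0, 800.0)

    # Hypothetical inputs: 20 TECU total, 8 TECU below the peak, NmF2 = 1e12 m^-3, hmF2 = 300 km
    print(topside_scale_height(20.0, 8.0, 1.0e12, 300.0), "km")
    ```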

  10. Efficient Density Functional Approximation for Electronic Properties of Conjugated Systems

    NASA Astrophysics Data System (ADS)

    Caldas, Marília J.; Pinheiro, José Maximiano, Jr.; Blum, Volker; Rinke, Patrick

    2014-03-01

    There is ongoing discussion about reliable prediction of electronic properties of conjugated oligomers and polymers, such as ionization potential IP and energy gap. Several exchange-correlation (XC) functionals are being used by the density functional theory community, with different success for different properties. In this work we follow a recent proposal: a fraction α of exact exchange is added to the semi-local PBE XC aiming at consistency, for a given property, with the results obtained by many-body perturbation theory within the G0W0 approximation. We focus on the IP, taken as the negative of the highest occupied molecular orbital energy. We choose α from a study of the prototype family trans-acetylene, and apply this same α to a set of oligomers for which there is experimental data available (acenes, phenylenes and others). Our results indicate we can have excellent estimates, within 0.2 eV mean average deviation from the experimental values, better than through complete E(N-1) - E(N) calculations from the starting PBE functional. We also obtain good estimates for the electrical gap and orbital energies close to the band edge. Work supported by FAPESP, CNPq, and CAPES, Brazil, and DAAD, Germany.

  11. Prediction of magnitude of minimum horizontal stress from extended leak-off test conducted by the riser vessel CHIKYU

    NASA Astrophysics Data System (ADS)

    Lin, W.; Masago, H.; Yamamoto, K.; Kawamura, Y.; Saito, S.; Kinoshita, M.

    2007-12-01

    With the introduction of the drilling vessel 'CHIKYU', riser drilling operations using mud fluid will be carried out in NanTroSEIZE Stage 2 for the first time in ocean scientific drilling. To determine drilling operation parameters such as mud density, a downhole experiment, the leak-off test (LOT) or extended leak-off test (XLOT), is to be implemented after casing and cementing at each casing shoe during the drilling process. Data from this operationally motivated downhole experiment can also be used for an important scientific application: obtaining in-situ stress information, which is necessary for many scientific drilling projects such as seismogenic zone drilling. To examine the feasibility of this application of LOT or XLOT data, we analyzed an example of an XLOT conducted by the riser vessel CHIKYU during its Shimokita shakedown cruise in 2006, and then estimated the magnitude of the minimum principal stress in the horizontal plane, Shmin. We also propose test procedures that could improve the quality of the stress results obtained from LOT or XLOT applications. The Shimokita XLOT was conducted under the following conditions: 1180 m water depth, 525 mbsf (meters below seafloor) depth, 1030 kg/m3 fluid density (seawater) and 80 liter/min injection flow rate. The estimated magnitude of Shmin is 18.3 MPa, based on the assumption that the fracture closure pressure balances the minimum principal stress perpendicular to the fracture plane. For comparison, the vertical stress magnitude at that depth was estimated from the density profile of core samples retrieved from the same borehole and was approximately 20 MPa. These two values can be considered consistent. We can therefore say that XLOT data are valuable and practical for estimating the magnitude of the minimum horizontal stress. From the viewpoint of determining stress magnitude, the XLOT is more essential than the LOT, because it is hard to obtain a reliable Shmin magnitude from the leak-off pressure alone, which is the only stress-related parameter obtained from the latter. In addition, running the LOT/XLOT in multiple cycles (3 cycles) is preferable if possible. The first cycle, with a lower maximum injection pressure, establishes the permeability of the formation and whether pre-existing fractures are present. The second cycle is a normal XLOT, and the third one repeats the second to confirm the pressure values obtained from the XLOTs.
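
    For comparison with the quoted overburden value, a minimal sketch of estimating vertical stress from a water column plus a core-derived bulk-density profile (trapezoidal integration). The density profile below is hypothetical, chosen only to give a result of the same order as the approximately 20 MPa cited above.

    ```python
    import numpy as np

    G = 9.81  # m/s^2

    def vertical_stress_mpa(water_depth_m, depth_mbsf, bulk_density_kgm3, rho_seawater=1030.0):
        """Overburden stress (MPa): seawater column plus trapezoidal integration of a density profile."""
        sv_water = rho_seawater * G * water_depth_m
        z = np.asarray(depth_mbsf, dtype=float)
        rho = np.asarray(bulk_density_kgm3, dtype=float)
        sv_sediment = G * np.sum(0.5 * (rho[1:] + rho[:-1]) * np.diff(z))   # trapezoidal rule
        return (sv_water + sv_sediment) / 1.0e6

    # Hypothetical core bulk densities down to 525 mbsf beneath 1180 m of water
    z = [0.0, 100.0, 200.0, 300.0, 400.0, 525.0]
    rho = [1450.0, 1500.0, 1570.0, 1620.0, 1660.0, 1700.0]
    print(vertical_stress_mpa(1180.0, z, rho), "MPa")   # on the order of the ~20 MPa quoted above
    ```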

  12. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE PAGES

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...

    2017-08-25

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  13. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  14. A bias-corrected estimator in multiple imputation for missing data.

    PubMed

    Tomita, Hiroaki; Fujisawa, Hironori; Henmi, Masayuki

    2018-05-29

    Multiple imputation (MI) is one of the most popular methods to deal with missing data, and its use has been rapidly increasing in medical studies. Although MI is rather appealing in practice since it is possible to use ordinary statistical methods for a complete data set once the missing values are fully imputed, the method of imputation is still problematic. If the missing values are imputed from some parametric model, the validity of imputation is not necessarily ensured, and the final estimate for a parameter of interest can be biased unless the parametric model is correctly specified. Nonparametric methods have also been proposed for MI, but it is not straightforward to produce imputation values from nonparametrically estimated distributions. In this paper, we propose a new method for MI to obtain a consistent (or asymptotically unbiased) final estimate even if the imputation model is misspecified. The key idea is to use an imputation model from which the imputation values are easily produced and to make a proper correction in the likelihood function after the imputation by using the density ratio between the imputation model and the true conditional density function for the missing variable as a weight. Although the conditional density must be nonparametrically estimated, it is not used for the imputation. The performance of our method is evaluated by both theory and simulation studies. A real data analysis is also conducted to illustrate our method by using the Duke Cardiac Catheterization Coronary Artery Disease Diagnostic Dataset. Copyright © 2018 John Wiley & Sons, Ltd.

  15. Stochastic modelling of intermittent fluctuations in the scrape-off layer: Correlations, distributions, level crossings, and moment estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, O. E., E-mail: odd.erik.garcia@uit.no; Kube, R.; Theodorsen, A.

    A stochastic model is presented for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas. The fluctuations in the plasma density are modeled by a super-position of uncorrelated pulses with fixed shape and duration, describing radial motion of blob-like structures. In the case of an exponential pulse shape and exponentially distributed pulse amplitudes, predictions are given for the lowest order moments, probability density function, auto-correlation function, level crossings, and average times for periods spent above and below a given threshold level. Also, the mean squared errors on estimators of sample mean and variance for realizations of the process by finite time series are obtained. These results are discussed in the context of single-point measurements of fluctuations in the scrape-off layer, broad density profiles, and implications for plasma–wall interactions due to the transient transport events in fusion grade plasmas. The results may also have wide applications for modelling fluctuations in other magnetized plasmas such as basic laboratory experiments and ionospheric irregularities.
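
    The pulse superposition itself is easy to reproduce: the sketch below generates a realization with Poisson pulse arrivals, exponentially distributed amplitudes and one-sided exponential pulses, whose sample mean can be compared with the Campbell-theorem prediction. Parameter values are hypothetical and the analytic results of the paper are not re-derived here.

    ```python
    import numpy as np

    def shot_noise(t, duration, waiting_time, mean_amp, rng):
        """Superpose one-sided exponential pulses (unit duration) with exponential amplitudes."""
        n_pulses = rng.poisson(duration / waiting_time)        # Poisson number of pulses in [0, duration]
        arrivals = rng.uniform(0.0, duration, n_pulses)        # uncorrelated arrival times
        amplitudes = rng.exponential(mean_amp, n_pulses)       # exponentially distributed amplitudes
        signal = np.zeros_like(t)
        for t0, a in zip(arrivals, amplitudes):
            signal += a * np.exp(-(t - t0)) * (t >= t0)        # exponential pulse starting at t0
        return signal

    t = np.linspace(0.0, 500.0, 10_000)
    x = shot_noise(t, duration=500.0, waiting_time=2.0, mean_amp=1.0, rng=np.random.default_rng(1))
    # Campbell's theorem: mean = <A> * (pulse duration)/(waiting time) = 1 * 1/2 = 0.5 for these parameters
    print(x.mean(), x.std())
    ```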

  16. A Design Study of Onboard Navigation and Guidance During Aerocapture at Mars. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Fuhry, Douglas Paul

    1988-01-01

    The navigation and guidance of a high lift-to-drag ratio sample return vehicle during aerocapture at Mars are investigated. Emphasis is placed on integrated systems design, with guidance algorithm synthesis and analysis based on vehicle state and atmospheric density uncertainty estimates provided by the navigation system. The latter utilizes a Kalman filter for state vector estimation, with useful update information obtained through radar altimeter measurements and density altitude measurements based on IMU-measured drag acceleration. A three-phase guidance algorithm, featuring constant bank numeric predictor/corrector atmospheric capture and exit phases and an extended constant altitude cruise phase, is developed to provide controlled capture and depletion of orbital energy, orbital plane control, and exit apoapsis control. Integrated navigation and guidance systems performance are analyzed using a four degree-of-freedom computer simulation. The simulation environment includes an atmospheric density model with spatially correlated perturbations to provide realistic variations over the vehicle trajectory. Navigation filter initial conditions for the analysis are based on planetary approach optical navigation results. Results from a selection of test cases are presented to give insight into systems performance.

  17. Comparison of volatility function technique for risk-neutral densities estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-08-01

    The volatility function technique, based on an interpolation approach, plays an important role in extracting the risk-neutral density (RND) from options. The aim of this study is to compare the performances of two interpolation approaches, namely a smoothing spline and a fourth-order polynomial, in extracting the RND. The implied volatilities of options with respect to strike prices/delta are interpolated to obtain a well-behaved density. The statistical analysis and forecast accuracy are tested using moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from the Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimating the RND using a fourth-order polynomial is more appropriate than using a smoothing spline, as the fourth-order polynomial gives the lowest mean square error (MSE). The results can be used to help market participants capture market expectations of the future developments of the underlying asset.
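
    A minimal sketch of the fourth-order polynomial variant described above: fit a polynomial to the implied-volatility smile, convert back to call prices with Black-Scholes, and apply the Breeden-Litzenberger relation q(K) = exp(rT) d^2C/dK^2 to obtain the RND and its first moment. The strikes, volatilities and rates below are hypothetical, not DJIA quotes.

    ```python
    import numpy as np
    from scipy.stats import norm

    def bs_call(S, K, T, r, sigma):
        """Black-Scholes call price (vectorized over K and sigma)."""
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    # Hypothetical one-month option quotes: strikes and implied volatilities
    S0, T, r = 100.0, 1.0 / 12.0, 0.01
    strikes = np.array([80, 85, 90, 95, 100, 105, 110, 115, 120], dtype=float)
    ivols = np.array([0.26, 0.24, 0.22, 0.205, 0.20, 0.205, 0.215, 0.23, 0.25])

    # Fourth-order polynomial volatility function, fitted in moneyness K/S0 for better conditioning
    coefs = np.polyfit(strikes / S0, ivols, 4)
    k_grid = np.linspace(strikes.min(), strikes.max(), 401)
    sigma_k = np.polyval(coefs, k_grid / S0)

    # Breeden-Litzenberger: q(K) = exp(rT) * d^2 C / dK^2 (may dip slightly below zero near the edges)
    calls = bs_call(S0, k_grid, T, r, sigma_k)
    rnd = np.exp(r * T) * np.gradient(np.gradient(calls, k_grid), k_grid)

    mean_k = (k_grid * rnd).sum() / rnd.sum()   # first moment, to compare against the realized price
    print(mean_k)
    ```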

  18. Reservoir area of influence and implications for fisheries management

    USGS Publications Warehouse

    Martin, Dustin R.; Chizinski, Christopher J.; Pope, Kevin L.

    2015-01-01

    Understanding the spatial area that a reservoir draws anglers from, defined as the reservoir's area of influence, and the potential overlap of that area of influence between reservoirs is important for fishery managers. Our objective was to define the area of influence for reservoirs of the Salt Valley regional fishery in southeastern Nebraska using kernel density estimation. We used angler survey data obtained from in-person interviews at 17 reservoirs during 2009–2012. The area of influence, defined by the 95% kernel density, for reservoirs within the Salt Valley regional fishery varied, indicating that anglers use reservoirs differently across the regional fishery. Areas of influence reveal angler preferences in a regional context, indicating preferred reservoirs with a greater area of influence. Further, differences in areas of influences across time and among reservoirs can be used as an assessment following management changes on an individual reservoir or within a regional fishery. Kernel density estimation provided a clear method for creating spatial maps of areas of influence and provided a two-dimensional view of angler travel, as opposed to the traditional mean travel distance assessment.
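
    The 95% kernel-density area of influence can be sketched as follows: estimate a 2-D kernel density from anglers' home coordinates, then find the density threshold whose super-level set contains 95% of the mass and sum the grid cells above it. The coordinates below are simulated for illustration; they are not the Salt Valley interview data.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def area_of_influence(x, y, mass=0.95, grid_n=200, pad=10.0):
        """Area of the region holding `mass` of a 2-D kernel density fitted to home coordinates."""
        kde = gaussian_kde(np.vstack([x, y]))
        xg = np.linspace(x.min() - pad, x.max() + pad, grid_n)
        yg = np.linspace(y.min() - pad, y.max() + pad, grid_n)
        gx, gy = np.meshgrid(xg, yg)
        dens = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
        cell = (xg[1] - xg[0]) * (yg[1] - yg[0])
        flat = np.sort(dens.ravel())[::-1]                 # densities, highest first
        cum = np.cumsum(flat) * cell                       # accumulated probability mass
        idx = min(np.searchsorted(cum, mass), flat.size - 1)
        threshold = flat[idx]                              # density level enclosing `mass` of the KDE
        return (dens >= threshold).sum() * cell

    # Simulated angler home coordinates (km east/north of one reservoir)
    rng = np.random.default_rng(2)
    home_x = rng.normal(0.0, 15.0, 300)
    home_y = rng.normal(0.0, 10.0, 300)
    print(round(area_of_influence(home_x, home_y)), "km^2")
    ```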

  19. Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy

    NASA Astrophysics Data System (ADS)

    Bucht, Curry; Söderberg, Per; Manneberg, Göran

    2010-02-01

    The corneal endothelium serves as the posterior barrier of the cornea. Factors such as clarity and refractive properties of the cornea are in direct relationship to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor of the corneal endothelium. Pathological conditions and physical trauma may threaten the endothelial cell density to such an extent that the optical property of the cornea and thus clear eyesight is threatened. Diagnosis of the corneal endothelium through morphometry is an important part of several clinical applications. Morphometry of the corneal endothelium is presently carried out by semi automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need of operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development and use of fully automated analysis of a very large range of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images, normalizing lights and contrasts. The digitally enhanced images of the corneal endothelium were Fourier transformed, using the fast Fourier transform (FFT) and stored as new images. Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier transformed images. The data obtained from each Fourier transformed image was used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well known diffraction theory. Results in form of estimated cell density of the corneal endothelium were obtained, using fully automated analysis software on 292 images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis and a relatively large correlation was found.
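
    A rough sketch of the Fourier-domain idea: a quasi-regular endothelial mosaic produces a ring of spectral energy at a spatial frequency near one over the mean cell spacing, and the cell density scales with the square of that frequency (up to a packing-dependent constant). The synthetic test image, pixel size and simple radial-peak search below are illustrative assumptions, not the published algorithm.

    ```python
    import numpy as np

    def cell_density_from_fft(img, pixel_size_um):
        """Cells per mm^2 from the dominant ring radius of the image's Fourier amplitude spectrum."""
        img = img - img.mean()                                  # drop the DC component
        spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
        ny, nx = img.shape
        yy, xx = np.indices(img.shape)
        rad = np.hypot(xx - nx // 2, yy - ny // 2).astype(int)  # integer radius in frequency bins
        counts = np.bincount(rad.ravel())
        radial = np.bincount(rad.ravel(), spec.ravel()) / np.maximum(counts, 1)
        peak = radial[2:nx // 2].argmax() + 2                   # skip the lowest bins, stay below Nyquist
        freq_per_um = peak / (nx * pixel_size_um)               # cycles per micrometre (square image assumed)
        return (freq_per_um * 1000.0) ** 2                      # cells/mm^2, up to a packing-dependent factor

    # Synthetic mosaic with ~20 um cell spacing sampled at 1 um/pixel: expect roughly 2500 cells/mm^2
    yy, xx = np.indices((256, 256))
    test_img = np.cos(2 * np.pi * xx / 20.0) + np.cos(2 * np.pi * yy / 20.0)
    print(cell_density_from_fft(test_img, pixel_size_um=1.0))
    ```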

  20. Large Scale Density Estimation of Blue and Fin Whales: Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density

    DTIC Science & Technology

    2015-09-30

    Len Thomas & Danielle Harris, Centre... The aim is to develop and implement a new method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope

  1. Detectability in Audio-Visual Surveys of Tropical Rainforest Birds: The Influence of Species, Weather and Habitat Characteristics.

    PubMed

    Anderson, Alexander S; Marques, Tiago A; Shoo, Luke P; Williams, Stephen E

    2015-01-01

    Indices of relative abundance do not control for variation in detectability, which can bias density estimates such that ecological processes are difficult to infer. Distance sampling methods can be used to correct for detectability, but in rainforest, where dense vegetation and diverse assemblages complicate sampling, information is lacking about factors affecting their application. Rare species present an additional challenge, as data may be too sparse to fit detection functions. We present analyses of distance sampling data collected for a diverse tropical rainforest bird assemblage across broad elevational and latitudinal gradients in North Queensland, Australia. Using audio and visual detections, we assessed the influence of various factors on Effective Strip Width (ESW), an intuitively useful parameter, since it can be used to calculate an estimate of density from count data. Body size and species exerted the most important influence on ESW, with larger species detectable over greater distances than smaller species. Secondarily, wet weather and high shrub density decreased ESW for most species. ESW for several species also differed between summer and winter, possibly due to seasonal differences in calling behavior. Distance sampling proved logistically intensive in these environments, but large differences in ESW between species confirmed the need to correct for detection probability to obtain accurate density estimates. Our results suggest an evidence-based approach to controlling for factors influencing detectability, and avenues for further work including modeling detectability as a function of species characteristics such as body size and call characteristics. Such models may be useful in developing a calibration for non-distance sampling data and for estimating detectability of rare species.
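
    A minimal sketch of how an Effective Strip Width feeds a density estimate: integrate a half-normal detection function out to the truncation distance and plug the result into the standard line-transect estimator D = n / (2 L ESW). The detection scale, truncation distance and counts below are hypothetical, not values from the surveys.

    ```python
    import numpy as np
    from scipy.integrate import quad

    def effective_strip_width(sigma_m, truncation_m):
        """ESW (metres) for a half-normal detection function g(x) = exp(-x^2 / (2 sigma^2))."""
        g = lambda x: np.exp(-x**2 / (2.0 * sigma_m**2))
        esw, _ = quad(g, 0.0, truncation_m)
        return esw

    def line_transect_density(n_detections, line_length_km, esw_m):
        """Standard line-transect estimator D = n / (2 * L * ESW), returned per km^2."""
        return n_detections / (2.0 * line_length_km * (esw_m / 1000.0))

    # Hypothetical values: 25 m detection scale, 80 m truncation, 60 detections on 12 km of transect
    esw = effective_strip_width(sigma_m=25.0, truncation_m=80.0)
    print(esw, "m;", line_transect_density(60, 12.0, esw), "birds per km^2")
    ```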

  2. Annual incidence of snake bite in rural bangladesh.

    PubMed

    Rahman, Ridwanur; Faiz, M Abul; Selim, Shahjada; Rahman, Bayzidur; Basher, Ariful; Jones, Alison; d'Este, Catherine; Hossain, Moazzem; Islam, Ziaul; Ahmed, Habib; Milton, Abul Hasnat

    2010-10-26

    Snake bite is a neglected public health problem in the world and one of the major causes of mortality and morbidity in many areas, particularly in the rural tropics. It also poses substantial economic burdens on the snake bite victims due to treatment related expenditure and loss of productivity. Accurate estimates of the risk of snake bite are largely unavailable for most countries in the developing world, especially South-East Asia. We undertook a national epidemiological survey to determine the annual incidence density of snake bite among the rural Bangladeshi population. Information on frequency of snake bite and individuals' length of stay in selected households over the preceding twelve months was rigorously collected from the respondents through an interviewer administered questionnaire. Point estimates and confidence intervals of the incidence density of snake bite, weighted and adjusted for the multi-stage cluster sampling design, were obtained. Out of 18,857 study participants, a total of 98 snake bites, including one death, were reported in rural Bangladesh over one year. The estimated incidence density of snake bite is 623.4/100,000 person years (95% CI 513.4-789.2/100,000 person years). Biting occurs mostly when individuals are at work. The majority of the victims (71%) receive snake bites to their lower extremities. Eighty-six percent of the victims received some form of management within two hours of snake bite, although only three percent of the victims went directly to either a medical doctor or a hospital. Incidence density of snake bite in rural Bangladesh is substantially higher than previously estimated. This is likely due to better ascertainment of the incidence through a population based survey. Poor access to health services increases snake bite related morbidity and mortality; therefore, effective public health actions are warranted.
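
    The headline rate can be reproduced arithmetically as events divided by person-time; the sketch below does this with a simple log-normal (Poisson) confidence interval. The person-years figure is the approximate value implied by the published rate, and the calculation ignores the survey weighting and cluster-design adjustment used for the published interval.

    ```python
    import numpy as np

    def incidence_density(events, person_years, per=100_000, z=1.96):
        """Crude rate per `per` person-years with a log-normal (Poisson) confidence interval."""
        rate = events / person_years
        se_log = 1.0 / np.sqrt(events)          # SE of log(rate) for a Poisson event count
        return rate * per, rate * np.exp(-z * se_log) * per, rate * np.exp(z * se_log) * per

    # 98 bites over roughly 15,720 person-years (the person-time implied by the published rate)
    rate, lo, hi = incidence_density(98, 15_720)
    print(rate, lo, hi)   # crude interval; the published 513.4-789.2 interval is design-adjusted
    ```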

  3. Survey of Hylobates agilis albibarbis in a logged peat-swamp forest: Sabangau catchment, Central Kalimantan.

    PubMed

    Buckley, Cara; Nekaris, K A I; Husson, Simon John

    2006-10-01

    Few data are available on gibbon populations in peat-swamp forest. In order to assess the importance of this habitat for gibbon conservation, a population of Hylobates agilis albibarbis was surveyed in the Sabangau peat-swamp forest, Central Kalimantan, Indonesia. This is an area of about 5,500 km² of selectively logged peat-swamp forest, which was formally gazetted as a national park during 2005. The study was conducted during June and July 2004 using auditory sampling methods. Five sample areas were selected and each was surveyed for four consecutive days by three teams of researchers at designated listening posts. Researchers recorded compass bearings of, and estimated distances to, singing groups. Nineteen groups were located. Population density is estimated to be 2.16 (±0.46) groups/km². Sightings made either at the listening posts or obtained by tracking calling groups yielded a mean group size of 3.4 individuals, hence the individual gibbon density is estimated to be 7.4 (±1.59) individuals/km². The density estimates fall in the mid-range of those calculated for other gibbon populations, thus suggesting that peat-swamp forest is an important habitat for gibbon conservation in Borneo. A tentative extrapolation of results suggests a potential gibbon population size of 19,000 individuals within the mixed-swamp forest habitat sub-type in the Sabangau. This represents one of the largest remaining continuous populations of Bornean agile gibbons. The designation of the Sabangau forest as a national park will hopefully address the problem of illegal logging and hunting in the region. Further studies should note any difference in gibbon density post-protection.
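
    The step from auditory group density to individual density is a simple product; the arithmetic is sketched below, with the habitat area used for extrapolation treated as a placeholder rather than the study's figure.

```python
# Auditory-survey conversion: individuals/km^2 = groups/km^2 * mean group size.
# Rounding of the intermediate group density explains small differences from
# published values.
group_density = 2.16            # groups per km^2 (from the survey)
mean_group_size = 3.4           # individuals per group (from sightings)
print(group_density * mean_group_size)        # ~7.3 individuals per km^2

# Tentative extrapolation over a habitat area; the area here is a placeholder,
# not the mixed-swamp forest area actually used in the study.
habitat_area_km2 = 2_600
print(group_density * mean_group_size * habitat_area_km2)
```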

  4. Detectability in Audio-Visual Surveys of Tropical Rainforest Birds: The Influence of Species, Weather and Habitat Characteristics

    PubMed Central

    Anderson, Alexander S.; Marques, Tiago A.; Shoo, Luke P.; Williams, Stephen E.

    2015-01-01

    Indices of relative abundance do not control for variation in detectability, which can bias density estimates such that ecological processes are difficult to infer. Distance sampling methods can be used to correct for detectability, but in rainforest, where dense vegetation and diverse assemblages complicate sampling, information is lacking about factors affecting their application. Rare species present an additional challenge, as data may be too sparse to fit detection functions. We present analyses of distance sampling data collected for a diverse tropical rainforest bird assemblage across broad elevational and latitudinal gradients in North Queensland, Australia. Using audio and visual detections, we assessed the influence of various factors on Effective Strip Width (ESW), an intuitively useful parameter, since it can be used to calculate an estimate of density from count data. Body size and species exerted the most important influence on ESW, with larger species detectable over greater distances than smaller species. Secondarily, wet weather and high shrub density decreased ESW for most species. ESW for several species also differed between summer and winter, possibly due to seasonal differences in calling behavior. Distance sampling proved logistically intensive in these environments, but large differences in ESW between species confirmed the need to correct for detection probability to obtain accurate density estimates. Our results suggest an evidence-based approach to controlling for factors influencing detectability, and avenues for further work including modeling detectability as a function of species characteristics such as body size and call characteristics. Such models may be useful in developing a calibration for non-distance sampling data and for estimating detectability of rare species. PMID:26110433

  5. Demonstration of line transect methodologies to estimate urban gray squirrel density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hein, E.W.

    1997-11-01

    Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor-intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost-effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated.
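
    The survey-effort planning behind such statements is often based on the rule of thumb that the coefficient of variation scales roughly with the inverse square root of effort; the sketch below shows that scaling only, without reproducing the study's exact effort bookkeeping or cost figures.

```python
# Rule-of-thumb distance-sampling effort planning: CV is roughly proportional
# to 1 / sqrt(effort), so the effort needed for a target CV is
#   effort_new = effort_old * (cv_old / cv_new) ** 2.
# The exact effort accounting and costs used in the study are not reproduced.

def effort_for_target_cv(current_effort: float, current_cv: float, target_cv: float) -> float:
    """Required effort (same units as current_effort) to reach target_cv."""
    return current_effort * (current_cv / target_cv) ** 2

current_visits = 5.0     # illustrative effort unit (e.g. repeat visits)
print(effort_for_target_cv(current_visits, current_cv=0.30, target_cv=0.20))
# -> 11.25: lowering the CV from 30% to 20% requires roughly 2.25x the effort.
```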

  6. Detection limit for rate fluctuations in inhomogeneous Poisson processes

    NASA Astrophysics Data System (ADS)

    Shintani, Toshiaki; Shinomoto, Shigeru

    2012-04-01

    Estimations of an underlying rate from data points are inevitably disturbed by the irregular occurrence of events. Proper estimation methods are designed to avoid overfitting by discounting the irregular occurrence of data, and to determine a constant rate from irregular data derived from a constant probability distribution. However, it can occur that rapid or small fluctuations in the underlying density are undetectable when the data are sparse. For an estimation method, the maximum degree of undetectable rate fluctuations is uniquely determined as a phase transition, when considering an infinitely long series of events drawn from a fluctuating density. In this study, we analytically examine an optimized histogram and a Bayesian rate estimator with respect to their detectability of rate fluctuation, and determine whether their detectable-undetectable phase transition points are given by an identical formula defining a degree of fluctuation in an underlying rate. In addition, we numerically examine the variational Bayes hidden Markov model in its detectability of rate fluctuation, and determine whether the numerically obtained transition point is comparable to those of the other two methods. Such consistency among these three principled methods suggests the presence of a theoretical limit for detecting rate fluctuations.
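
    A purely numerical way to see the detectability question is sketched below: events are drawn from a modulated Poisson rate by thinning, and a constant-rate model is compared with a piecewise-constant (histogram) model using AIC. This is a toy illustration only, not the paper's analytical phase-transition treatment; all parameters are arbitrary.

```python
import numpy as np

# Toy illustration of rate-fluctuation detectability: simulate an
# inhomogeneous Poisson process by thinning, then ask whether a 100-bin
# histogram model beats a constant-rate model after an AIC penalty.
rng = np.random.default_rng(0)

T, mean_rate, amp, period = 1000.0, 5.0, 3.0, 100.0        # seconds, Hz, Hz, seconds
lam = lambda t: mean_rate + amp * np.sin(2 * np.pi * t / period)
lam_max = mean_rate + amp

# Thinning: candidate events at the maximal rate, kept with probability lam(t)/lam_max.
cand = rng.uniform(0, T, rng.poisson(lam_max * T))
events = np.sort(cand[rng.uniform(0, 1, cand.size) < lam(cand) / lam_max])

n_bins = 100
counts, _ = np.histogram(events, bins=n_bins, range=(0, T))

def poisson_loglik(k, mu):
    mu = np.maximum(mu, 1e-12)                  # guard against log(0)
    return np.sum(k * np.log(mu) - mu)          # constant log(k!) terms dropped

aic_const = 2 * 1 - 2 * poisson_loglik(counts, np.full(n_bins, events.size / n_bins))
aic_hist = 2 * n_bins - 2 * poisson_loglik(counts, counts.astype(float))
print(aic_hist < aic_const)   # True here; shrink amp or T and the constant model wins
```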

  8. Estimation of critical behavior from the density of states in classical statistical models

    NASA Astrophysics Data System (ADS)

    Malakis, A.; Peratzakis, A.; Fytas, N. G.

    2004-12-01

    We present a simple and efficient approximation scheme which greatly facilitates the extension of Wang-Landau sampling (or similar techniques) in large systems for the estimation of critical behavior. The method, presented in an algorithmic approach, is based on a very simple idea, familiar in statistical mechanics from the notion of thermodynamic equivalence of ensembles and the central limit theorem. It is illustrated that we can predict with high accuracy the critical part of the energy space and by using this restricted part we can extend our simulations to larger systems and improve the accuracy of critical parameters. It is proposed that the extensions of the finite-size critical part of the energy space, determining the specific heat, satisfy a scaling law involving the thermal critical exponent. The method is applied successfully for the estimation of the scaling behavior of specific heat of both square and simple cubic Ising lattices. The proposed scaling law is verified by estimating the thermal critical exponent from the finite-size behavior of the critical part of the energy space. The density of states of the zero-field Ising model on these lattices is obtained via a multirange Wang-Landau sampling.
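
    The step from a density of states to the specific heat, whose critical part the restricted-energy scheme targets, is standard canonical reweighting; a short sketch follows, using a toy g(E) rather than an actual Wang-Landau output.

```python
import numpy as np

# Once a density of states g(E) is known (e.g. from Wang-Landau sampling),
# canonical averages follow by reweighting, and the specific heat per site is
#   C(T) = (<E^2> - <E>^2) / (N * T^2)   with k_B = 1.
# The g(E) below is a toy Gaussian stand-in, not a Wang-Landau result.

def specific_heat(energies, log_g, temperatures, n_sites):
    c = []
    for T in temperatures:
        w = log_g - energies / T
        w -= w.max()                        # stabilize the exponentials
        p = np.exp(w)
        p /= p.sum()                        # canonical weights at temperature T
        e1 = np.sum(p * energies)
        e2 = np.sum(p * energies ** 2)
        c.append((e2 - e1 ** 2) / (n_sites * T ** 2))
    return np.array(c)

N = 32 * 32
E = np.linspace(-2.0 * N, 0.0, 2000)                  # toy energy grid
log_g = -0.5 * ((E + N) / (0.3 * N)) ** 2             # toy log density of states
T = np.linspace(1.0, 4.0, 61)
print(T[np.argmax(specific_heat(E, log_g, T, N))])    # location of the C(T) peak
```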

  9. Measuring bulrush culm relationships to estimate plant biomass within a southern California treatment wetland

    USGS Publications Warehouse

    Daniels, Joan S. (Thullen); Cade, Brian S.; Sartoris, James J.

    2010-01-01

    Assessment of emergent vegetation biomass can be time-consuming and labor-intensive. To establish a less onerous, yet accurate, method for determining emergent plant biomass than direct measurement, we collected vegetation data over a six-year period and modeled biomass using easily obtained variables: culm (stem) diameter, culm height and culm density. From 1998 through 2005, we collected emergent vegetation samples (Schoenoplectus californicus and Schoenoplectus acutus) at a constructed treatment wetland in San Jacinto, California during spring and fall. Various statistical models were run on the data to determine the strongest relationships. We found that the nonlinear relationship CB = β0 (DH)^β1 · 10^ε, where CB was dry culm biomass (g m⁻²), DH was density of culms × average height of culms in a plot, and β0 and β1 were parameters to estimate, proved to be the best fit for predicting dried-live above-ground biomass of the two Schoenoplectus species. The random error distribution, ε, was either assumed to be normally distributed for mean regression estimates or assumed to be an unspecified continuous distribution for quantile regression estimates.
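
    Because the reported relationship is linear on a log10 scale, one hedged way to fit its mean-regression version is ordinary least squares on log10-transformed data, as sketched below with synthetic placeholder data; the study's quantile-regression fits would need a different estimator.

```python
import numpy as np

# The reported model, CB = b0 * (DH)**b1 * 10**eps, is linear in log10 space:
#   log10(CB) = log10(b0) + b1 * log10(DH) + eps,
# so a mean-regression fit can be done by ordinary least squares on log10 data.
# Data below are synthetic placeholders, not the wetland measurements.
rng = np.random.default_rng(1)

b0_true, b1_true = 0.05, 1.2                         # synthetic "truth"
DH = rng.uniform(50, 5000, 200)                      # culm density x mean height per plot
CB = b0_true * DH ** b1_true * 10 ** rng.normal(0, 0.1, DH.size)

slope, intercept = np.polyfit(np.log10(DH), np.log10(CB), 1)
print("b1 =", slope, " b0 =", 10 ** intercept)       # recovered parameters
```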

  10. A method for vibrational assessment of cortical bone

    NASA Astrophysics Data System (ADS)

    Song, Yan; Gunaratne, Gemunu H.

    2006-09-01

    Large bones from many anatomical locations of the human skeleton consist of an outer shaft (cortex) surrounding a highly porous internal region (trabecular bone) whose structure is reminiscent of a disordered cubic network. Age related degradation of cortical and trabecular bone takes different forms. Trabecular bone weakens primarily by loss of connectivity of the porous network, and recent studies have shown that vibrational response can be used to obtain reliable estimates for loss of its strength. In contrast, cortical bone degrades via the accumulation of long fractures and changes in the level of mineralization of the bone tissue. In this paper, we model cortical bone by an initially solid specimen with uniform density to which long fractures are introduced; we find that, as in the case of trabecular bone, vibrational assessment provides more reliable estimates of residual strength in cortical bone than is possible using measurements of density or porosity.

  11. The instantaneous frequency rate spectrogram

    NASA Astrophysics Data System (ADS)

    Czarnecki, Krzysztof

    2016-01-01

    An accelerogram of the instantaneous phase of signal components, referred to as an instantaneous frequency rate spectrogram (IFRS), is presented as a joint time-frequency distribution. The distribution is directly obtained by processing the short-time Fourier transform (STFT) locally. A novel approach to amplitude demodulation based upon the reassignment method is introduced as a useful by-product. Additionally, an estimator of energy density versus the instantaneous frequency rate (IFR) is proposed and referred to as the IFR profile. The energy density is estimated based upon both the classical energy spectrogram and the IFRS smoothed by a median filter. Moreover, the impact of the analysis window width, additive white Gaussian noise, and observation time is tested. Finally, the introduced method is used for the analysis of the acoustic emission of an automotive engine. The recording of the engine of a Lamborghini Gallardo is analyzed as an example.

  12. Temperature effects in contacts between a metal and a semiconductor nanowire near the degenerate doping

    NASA Astrophysics Data System (ADS)

    Sun, Zhuting; Burgess, Tim; Tan, H. H.; Jagadish, Chennupati; Kogan, Andrei

    2018-04-01

    We have investigated the nonlinear conductance in diffusion-doped Si:GaAs nanowires contacted by patterned metal films in a wide range of temperatures T. The wire resistance R_W and the zero-bias resistance R_C, dominated by the contacts, exhibit very different responses to temperature changes. While R_W shows almost no dependence on T, R_C varies by several orders of magnitude as the devices are cooled from room temperature to T = 5 K. We develop a model that employs a sharp donor level very low in the GaAs conduction band and show that our observations are consistent with the model predictions. We then demonstrate that such measurements can be used to estimate carrier properties in nanostructured semiconductors and obtain an estimate for N_D, the doping density in our samples. We also discuss the effects of surface states and dielectric confinement on carrier density in semiconductor nanowires.

  13. Atmospheric Modeling Using Accelerometer Data During Mars Atmosphere and Volatile Evolution (MAVEN) Flight Operations

    NASA Technical Reports Server (NTRS)

    Tolson, Robert H.; Lugo, Rafael A.; Baird, Darren T.; Cianciolo, Alicia D.; Bougher, Stephen W.; Zurek, Richard M.

    2017-01-01

    The Mars Atmosphere and Volatile EvolutioN (MAVEN) spacecraft is a NASA orbiter designed to explore the Mars upper atmosphere, typically from 140 to 160 km altitude. In addition to the nominal science mission, MAVEN has performed several Deep Dip campaigns in which the orbit's closest point of approach, also called periapsis, was lowered to an altitude range of 115 to 135 km. MAVEN accelerometer data were used during mission operations to estimate atmospheric parameters such as density, scale height, along-track gradients, and wave structures. Density and scale height estimates were compared against those obtained from the Mars Global Reference Atmospheric Model and used to aid the MAVEN navigation team in planning maneuvers to raise and lower periapsis during Deep Dip operations. This paper describes the processes used to reconstruct atmospheric parameters from accelerometer data and presents the results of their comparison to model and navigation-derived values.
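
    The core of accelerometer-based density reconstruction is the textbook drag relation; a hedged sketch is given below, with spacecraft constants as placeholders rather than MAVEN values, and without the filtering and attitude corrections a real reconstruction requires.

```python
# Textbook drag relation behind accelerometer-based density reconstruction:
#   a_drag = 0.5 * rho * v**2 * Cd * A / m   =>   rho = 2 * m * a_drag / (Cd * A * v**2)
# The constants below are placeholders, not actual MAVEN values, and real
# processing also handles attitude, winds, and sensor bias.

def density_from_drag(a_drag, speed, mass, cd, area):
    """Atmospheric mass density (kg/m^3) from the sensed drag deceleration (m/s^2)."""
    return 2.0 * mass * a_drag / (cd * area * speed ** 2)

rho = density_from_drag(a_drag=1.0e-3, speed=4200.0, mass=2500.0, cd=2.2, area=20.0)
print(rho)   # order 1e-9 kg/m^3; a scale height follows from the slope of ln(rho) vs altitude
```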

  14. Ionospheric responses during equinox and solstice periods over Turkey

    NASA Astrophysics Data System (ADS)

    Karatay, Secil; Cinar, Ali; Arikan, Feza

    2017-11-01

    Ionospheric electron density is the determining variable for investigation of the spatial and temporal variations in the ionosphere. Total Electron Content (TEC) is the integral of the electron density along a ray path and indicates the total variability through the ionosphere. Global Positioning System (GPS) recordings can be utilized to estimate TEC, making GPS a useful tool for monitoring the total variability of the electron distribution within the ionosphere. This study focuses on the analysis of ionospheric variations over Turkey, which can be grouped into anomalies during equinox and solstice periods, using TEC estimates obtained by a regional GPS network. It is observed that noon-time depletions in TEC distributions predominantly occur in winter for minimum Sun Spot Numbers (SSN) in the central regions of Turkey, which also exhibit high variability due to the midlatitude winter anomaly. TEC values and ionospheric variations in solstice periods demonstrate significant enhancements compared to those in equinox periods.
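
    For reference, the standard dual-frequency (geometry-free) relation used to obtain slant TEC from GPS observables is sketched below; receiver and satellite biases, phase leveling, and the slant-to-vertical mapping used in practice are omitted.

```python
# Standard dual-frequency relation for slant TEC from the differential delay of
# the two GPS carriers (biases, leveling, and mapping to vertical TEC omitted):
#   STEC = f1^2 * f2^2 / (40.3 * (f1^2 - f2^2)) * (P2 - P1)   [electrons/m^2]
F1, F2 = 1575.42e6, 1227.60e6        # GPS L1 and L2 frequencies, Hz
TECU = 1.0e16                        # 1 TEC unit = 1e16 electrons/m^2

def slant_tec(p1_m: float, p2_m: float) -> float:
    """Uncalibrated slant TEC in TECU from L1/L2 pseudoranges in meters."""
    return (F1**2 * F2**2 / (40.3 * (F1**2 - F2**2))) * (p2_m - p1_m) / TECU

print(slant_tec(p1_m=22_000_000.0, p2_m=22_000_004.0))   # ~4 m differential delay -> ~38 TECU
```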

  15. Estimating the Occupational Morbidity for Migrant and Seasonal Farmworkers in New York State: a Comparison of Two Methods

    PubMed Central

    Earle-Richardson, Giulia B.; Brower, Melissa A.; Jones, Amanda M.; May, John J.; Jenkins, Paul L.

    2008-01-01

    Purpose To compare occupational morbidity estimates for migrant and seasonal farmworkers obtained from survey methods versus chart review methods, and to estimate the proportion of morbidity treated at federally recognized migrant health centers (MHCs) in a highly agricultural region of New York. Methods Researchers simultaneously conducted: a) an occupational injury and illness survey among agricultural workers; b) MHC chart review; and c) hospital emergency room (ER) chart reviews. Results Of the 24 injuries reported by 550 survey subjects, 54.2% received treatment at MHCs, 16.7% at ERs, 16.7% at some other facility, and 12.5% were untreated. For injuries treated at MHCs or ERs, the incidence density based on survey methods was 29.3 injuries per 10,000 worker-weeks versus 27.4 by chart review. The standardized morbidity ratio (SMR) for this comparison was 1.07 (95% CI = 0.65-1.77). Conclusion Survey data indicate that 71% of agricultural injury and illness can be captured with MHC and ER chart review. MHC and ER incidence density estimates show strong correspondence between the two methods. A chart review-based surveillance system, in conjunction with a correction factor based on periodic worker surveys, would provide a cost-effective estimate of the occupational illness and injury rate in this population. PMID:18063238

  16. An Investigation of the Mechanical Properties of Some Martian Regolith Simulants with Respect to the Surface Properties at the InSight Mission Landing Site

    NASA Astrophysics Data System (ADS)

    Delage, Pierre; Karakostas, Foivos; Dhemaied, Amine; Belmokhtar, Malik; Lognonné, Philippe; Golombek, Matt; De Laure, Emmanuel; Hurst, Ken; Dupla, Jean-Claude; Kedar, Sharon; Cui, Yu Jun; Banerdt, Bruce

    2017-10-01

    In support of the InSight mission in which two instruments (the SEIS seismometer and the HP3 heat flow probe) will interact directly with the regolith on the surface of Mars, a series of mechanical tests were conducted on three different regolith simulants to better understand the observations of the physical and mechanical parameters that will be derived from InSight. The mechanical data obtained were also compared to data on terrestrial sands. The density of the regolith strongly influences its mechanical properties, as determined from the data on terrestrial sands. The elastoplastic compression volume changes were investigated through oedometer tests that also provided estimates of possible changes in density with depth. The results of direct shear tests provided values of friction angles that were compared with that of a terrestrial sand, and an extrapolation to lower density provided a friction angle compatible with that estimated from previous observations on the surface of Mars. The importance of the contracting/dilating shear volume changes of sands on the dynamic penetration of the mole was determined, with penetration facilitated by the ˜1.3 Mg/m3 density estimated at the landing site. Seismic velocities, measured by means of piezoelectric bender elements in triaxial specimens submitted to various isotropic confining stresses, show the importance of the confining stress, with lesser influence of density changes under compression. A power law relation of velocity as a function of confining stress with an exponent of 0.3 was identified from the tests, allowing an estimate of the surface seismic velocity of 150 m/s. The effect on the seismic velocity of a 10% proportion of rock in the regolith was also studied. These data will be compared with in situ data measured by InSight after landing.

  17. Effects of LiDAR point density and landscape context on estimates of urban forest biomass

    NASA Astrophysics Data System (ADS)

    Singh, Kunwar K.; Chen, Gang; McCarter, James B.; Meentemeyer, Ross K.

    2015-03-01

    Light Detection and Ranging (LiDAR) data is being increasingly used as an effective alternative to conventional optical remote sensing to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and improved data accuracies accompanied by challenges for procuring and processing voluminous LiDAR data for large-area assessments. Reducing point density lowers data acquisition costs and overcomes computational challenges for large-area forest assessments. However, how does lower point density impact the accuracy of biomass estimation in forests containing a great level of anthropogenic disturbance? We evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing region of Charlotte, North Carolina, USA. We used multiple linear regression to establish a statistical relationship between field-measured biomass and predictor variables derived from LiDAR data with varying densities. We compared the estimation accuracies between a general Urban Forest type and three Forest Type models (evergreen, deciduous, and mixed) and quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest model, using adjusted R2, was consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of Forest Type biomass models outperformed the Urban Forest models at the representative point densities (100% and 40%). The Urban Forest biomass model with development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, highlighting a distance impact of development on biomass estimation. Our evaluation suggests that reducing LiDAR point density is a viable solution to regional-scale forest assessment without compromising the accuracy of biomass estimates, and these estimates can be further improved using development density.
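
    The thinning-and-refit loop described above can be sketched in a few lines: subsample the point cloud to a target fraction, recompute simple plot-level height metrics, and refit a multiple linear regression against field biomass. The metrics, data, and coefficients below are synthetic placeholders, not the study's variables.

```python
import numpy as np

# Sketch of a point-density sensitivity test: thin each plot's point cloud,
# recompute height metrics, refit a multiple linear regression against biomass,
# and compare adjusted R^2 across densities. Everything here is synthetic.
rng = np.random.default_rng(7)

def adjusted_r2(y, yhat, n_predictors):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    n = y.size
    return 1 - (ss_res / (n - n_predictors - 1)) / (ss_tot / (n - 1))

n_plots = 60
heights = [rng.gamma(shape=4.0, scale=5.0, size=2000) for _ in range(n_plots)]   # per-plot point heights
biomass = np.array([50 + 8 * h.mean() + 2 * np.percentile(h, 95) for h in heights])
biomass = biomass + rng.normal(0, 10, n_plots)

for fraction in (1.0, 0.4, 0.01):                       # 100%, 40%, 1% point density
    thinned = [h[rng.random(h.size) < fraction] for h in heights]
    X = np.column_stack([np.ones(n_plots),
                         [t.mean() for t in thinned],
                         [np.percentile(t, 95) for t in thinned]])
    coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)
    print(fraction, adjusted_r2(biomass, X @ coef, n_predictors=2))
```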

  18. Regional model-based computerized ionospheric tomography using GPS measurements: IONOLAB-CIT

    NASA Astrophysics Data System (ADS)

    Tuna, Hakan; Arikan, Orhan; Arikan, Feza

    2015-10-01

    Three-dimensional imaging of the electron density distribution in the ionosphere is a crucial task for investigating ionospheric effects. Dual-frequency Global Positioning System (GPS) satellite signals can be used to estimate the slant total electron content (STEC) along the propagation path between a GPS satellite and a ground-based receiver station. However, the estimated GPS-STEC is too sparse and nonuniformly distributed to obtain reliable 3-D electron density distributions from the measurements alone. Standard tomographic reconstruction techniques are not accurate or reliable enough to represent the full complexity of the variable ionosphere. On the other hand, model-based electron density distributions are produced according to the general trends of the ionosphere, and these distributions do not agree with measurements, especially for geomagnetically active hours. In this study, a regional 3-D electron density distribution reconstruction method, namely IONOLAB-CIT, is proposed to assimilate GPS-STEC into physical ionospheric models. The proposed method is based on an iterative optimization framework that tracks the deviations from the ionospheric model in terms of F2 layer critical frequency and maximum ionization height resulting from the comparison of International Reference Ionosphere extended to Plasmasphere (IRI-Plas) model-generated STEC and GPS-STEC. The suggested tomography algorithm is applied successfully for the reconstruction of electron density profiles over Turkey during quiet and disturbed hours of the ionosphere, using the Turkish National Permanent GPS Network.

  19. Novel Approach to Evaluation of Charging on Semiconductor Surface by Noncontact, Electrode-Free Capacitance/Voltage Measurement

    NASA Astrophysics Data System (ADS)

    Hirae, Sadao; Kohno, Motohiro; Okada, Hiroshi; Matsubara, Hideaki; Nakatani, Ikuyoshi; Kusuda, Tatsufumi; Sakai, Takamasa

    1994-04-01

    This paper describes a novel approach to the quantitative characterization of semiconductor surface charging caused by plasma exposures and ion implantations. The problems in conventional evaluation of charging are also discussed. Following these discussions, the necessity of unified criteria is suggested for efficient development of systems or processes without charging damage. Hence, the charging saturation voltage between a top oxide surface and substrate, V_s, and the charging density per unit area per second, ρ_0, should be taken as criteria of charging behavior, which effectively represent the charging characteristics of both processes. The unified criteria can be obtained from the exposure time dependence of a net charging density on the thick field oxide. In order to determine V_s and ρ_0, the analysis using the C-V curve measured in a noncontact method with the metal-air-insulator-semiconductor (MAIS) technique is employed. The total space-charge density in oxide and its centroid can be determined at the same time by analyzing the flat-band voltage (V_fb) of the MAIS capacitor as a function of the air gap. The net charge density can be obtained by analyzing the difference between the total space-charge density in oxide before and after charging. Finally, it is shown that charging damage of the large-area metal-oxide-semiconductor (MOS) capacitor can be estimated from both V_s and ρ_0, which are obtained from results for a thick field oxide implanted with As+ and exposed to oxygen plasma.

  20. Multiparametric evaluation of hindlimb ischemia using time-series indocyanine green fluorescence imaging.

    PubMed

    Guang, Huizhi; Cai, Chuangjian; Zuo, Simin; Cai, Wenjuan; Zhang, Jiulou; Luo, Jianwen

    2017-03-01

    Peripheral arterial disease (PAD) can further cause lower limb ischemia. Quantitative evaluation of the vascular perfusion in the ischemic limb contributes to the diagnosis of PAD and the preclinical development of new drugs. In vivo time-series indocyanine green (ICG) fluorescence imaging can noninvasively monitor blood flow and has deep tissue penetration. The perfusion rate estimated from the time-series ICG images is not sufficient on its own for the evaluation of hindlimb ischemia. Information relevant to the vascular density is also important, because angiogenesis is an essential mechanism for post-ischemic recovery. In this paper, a multiparametric evaluation method is proposed for simultaneous estimation of multiple vascular perfusion parameters, including not only the perfusion rate but also the vascular perfusion density and the time-varying ICG concentration in veins. The proposed method is based on a mathematical model of ICG pharmacokinetics in the mouse hindlimb. The regression analysis was performed on time-series ICG images obtained from a dynamic reflectance fluorescence imaging system. The results demonstrate that the estimated parameters are effective for quantitatively evaluating vascular perfusion and distinguishing hypo-perfused tissues from well-perfused tissues in the mouse hindlimb. The proposed multiparametric evaluation method could be useful for PAD diagnosis. (Graphical abstract: estimated perfusion rate and vascular perfusion density maps, and the time-varying ICG concentration in veins of the ankle region, for the normal and ischemic hindlimbs.) © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Chrisos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system, taking into account uncertainties in the design variables; common results are estimates of a response density, which also imply estimates of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of the many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS, and part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when used to estimate the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. LHS also required fewer calculations than MC to obtain low-error answers with a high degree of confidence. NESSUS is therefore an important reliability tool with a variety of sound probabilistic methods a user can employ, and the newest LHS module is a valuable enhancement of the program.
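
    The kind of MC-versus-LHS comparison described can be illustrated on a toy nonlinear response, estimating the mean, standard deviation, and 99th percentile from each sampling scheme; the sketch below mirrors that comparison only and has nothing to do with NESSUS itself.

```python
import numpy as np
from scipy.stats import norm, qmc

# Toy comparison of plain Monte Carlo vs. Latin hypercube sampling when
# estimating the mean, standard deviation, and 99th percentile of a nonlinear
# response of two standard-normal random variables. Illustrative only.
rng = np.random.default_rng(0)

def response(x):
    return x[:, 0] ** 2 + 3.0 * np.exp(0.5 * x[:, 1])

def summarize(y):
    return y.mean(), y.std(ddof=1), np.percentile(y, 99)

n = 500
y_mc = response(rng.standard_normal((n, 2)))              # plain Monte Carlo
u = qmc.LatinHypercube(d=2, seed=0).random(n)              # stratified uniforms in [0, 1)^2
y_lhs = response(norm.ppf(u))                              # mapped through the normal inverse CDF

print("MC :", summarize(y_mc))
print("LHS:", summarize(y_lhs))
# Repeating this over many replications would show the LHS estimates scattering
# less around the true values, i.e. lower estimation error for the same n.
```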

  2. Vocalization behavior and response of black rails

    USGS Publications Warehouse

    Legare, M.L.; Eddleman, W.R.; Buckley, P.A.; Kelly, C.

    1999-01-01

    We measured the vocal responses and movements of radio-tagged black rails (Laterallus jamaicensis) (n = 43, 26 males, 17 females) to playback of vocalizations at 2 sites in Florida during the breeding seasons of 1992-95. We used regression coefficients from logistic regression equations to model the probability of a response conditional to the birds' sex, nesting status, distance to playback source, and the time of survey. With a probability of 0.811, non-nesting male black rails were most likely to respond to playback, while nesting females were the least likely to respond (probability = 0.189). Linear regression was used to determine daily, monthly, and annual variation in response from weekly playback surveys along a fixed route during the breeding seasons of 1993-95. Significant sources of variation in the linear regression model were month (F = 3.89, df = 3, p = 0.0140), year (F = 9.37, df = 2, p = 0.0003), temperature (F = 5.44, df=1, p = 0.0236), and month*year (F = 2.69, df = 5, p = 0.0311). The model was highly significant (p < 0.0001) and explained 53% of the variation of mean response per survey period (R2 = 0.5353). Response probability data obtained from the radio-tagged black rails and data from the weekly playback survey route were combined to provide a density estimate of 0.25 birds/ha for the St. Johns National Wildlife Refuge. Density estimates for black rails may be obtained from playback surveys, and fixed radius circular plots. Circular plots should be considered as having a radius of 80 m and be located so the plot centers are 150 m apart. Playback tapes should contain one series of Kic-kic-kerr and Growl vocalizations recorded within the same geographic region as the study area. Surveys should be conducted from 0-2 hours after sunrise or 0-2 hours before sunset, during the pre-nesting season, and when wind velocity is < 20 kph. Observers should listen for 3-4 minutes after playing the survey tape and record responses heard during that time. Observers should be trained to identify black rail vocalizations and should have acceptable hearing ability. Given the number of variables that may have large effects on the response behavior of black rails to tape playback, we recommend that future studies using playback surveys should be cautious when presenting estimates of 'absolute' density. Though results did account for variation in response behavior, we believe that additional variation in vocal response between sites, with breeding status, and bird density remains in question. Playback surveys along fixed routes providing a simple index of abundance would be useful to monitor populations over large geographic areas, and over time. Considering the limitations of most agency resources for webless waterbirds, index surveys may be more appropriate. Future telemetry studies of this type on other species and at other sites would be useful to calibrate information obtained from playback surveys whether reporting an index of abundance or density estimate.
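
    The way a fitted logistic model turns covariates into a response probability is sketched below; the coefficients are hypothetical placeholders chosen only to reproduce the qualitative pattern (non-nesting males most responsive, nesting females least), not the published regression coefficients.

```python
import math

# Logistic model for the probability that a black rail responds to playback:
#   p = 1 / (1 + exp(-(b0 + b1*male + b2*nesting + b3*distance + b4*hour))).
# The coefficients below are hypothetical placeholders, NOT the published values.
COEF = {"intercept": 1.0, "male": 0.8, "nesting": -1.5, "distance_m": -0.01, "hour": -0.1}

def response_probability(male: int, nesting: int, distance_m: float, hour: float) -> float:
    z = (COEF["intercept"] + COEF["male"] * male + COEF["nesting"] * nesting
         + COEF["distance_m"] * distance_m + COEF["hour"] * hour)
    return 1.0 / (1.0 + math.exp(-z))

# Non-nesting male near the playback source vs. nesting female farther away.
print(response_probability(male=1, nesting=0, distance_m=50, hour=1))
print(response_probability(male=0, nesting=1, distance_m=150, hour=1))
```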

  3. Inferences of Strength of Soil Deposits along MER Rover Traverses

    NASA Astrophysics Data System (ADS)

    Richter, L.; Schmitz, N.; Weiss, S.; Mer/Athena Team

    As the two MER Mars Exploration Rovers 'Spirit' and 'Opportunity' traverse terrains within Gusev crater and at Meridiani Planum, respectively, they leave behind wheel tracks that are routinely imaged by the different sets of cameras as part of the Athena instrument suite. Stereo observations of these tracks reveal wheel sinkage depths which are diagnostic of the strength of the soil-like deposits crossed by the vehicles, and observations of track morphology at different imaging scales - including that of the Microscopic Imager - allow estimations of soil grain size distributions. This presentation will discuss results of systematic analyses of MER-A and -B wheel track observations with regard to solutions for soil bearing strength and soil shear strength. Data are analyzed in the context of wheel-soil theory calibrated to the shape of the MER wheel and by consulting comparisons with terrestrial soils. Results are applicable to the top ~20-30 cm of the soil deposits, the depth primarily affected by the stress distribution under the wheels. The large number of wheel track observations per distance travelled enables investigations of variations of soil physical properties as a function of spatial scale, type of surface feature encountered, and local topography. Exploiting relationships between soil strength and degree of soil consolidation known from lunar regolith and dry terrestrial soils allows one to relate inferred soil strengths to bulk density. This provides a means to ground-truth radar Fresnel reflection coefficients obtained for the landing sites from Earth-based observations. Moreover, bulk density is correlated with soil dielectric constant, a parameter of direct relevance also for Mars-orbiting radars. The obtained estimates for soil bulk density are also used to determine local thermal conductivity of near-surface materials, based on correlations between the two quantities, and to subsequently estimate thermal inertia. This represents an independent method to provide ground truth to thermal inertia determined from orbital thermal measurements of the MER landing sites (MGS TES, MODY THEMIS, MEX PFS & OMEGA), in addition to thermal inertia retrievals from the Athena Mini-TES instrument. Key results suggest different types of soils as judged from their strength, with most materials encountered being similar in consistency to terrestrial sandy loams. Relatively looser soils have been identified on the slopes of crater walls and in local soil patches of smooth appearance, being interpreted as deposits of unconsolidated dust-like soils. Bulk densities for the different soils vary between ~1100 and ~1500 kg m⁻³. Results of chemical measurements are currently being exploited to relate soil strength to inferred enrichments in salts possibly acting as cementing agents. Thermal inertias of the soil component obtained from the bulk density estimates range between ~130 and ~150 J m⁻² s⁻¹/² K⁻¹ for the MER-A Gusev site and between ~130 and ~140 J m⁻² s⁻¹/² K⁻¹ for the MER-B Meridiani site.

  4. Relationship between symbiont density and photosynthetic carbon acquisition in the temperate coral Cladocora caespitosa

    NASA Astrophysics Data System (ADS)

    Hoogenboom, M.; Beraud, E.; Ferrier-Pagès, C.

    2010-03-01

    This study quantified variation in net photosynthetic carbon gain in response to natural fluctuations in symbiont density for the Mediterranean coral Cladocora caespitosa, and evaluated which density maximized photosynthetic carbon acquisition. To do this, carbon acquisition was modeled as an explicit function of symbiont density. The model was parameterized using measurements of rates of photosynthesis and respiration for small colonies with a broad range of zooxanthella concentrations. Results demonstrate that rates of net photosynthesis increase asymptotically with symbiont density, whereas rates of respiration increase linearly. In combination, these functional responses meant that colony energy acquisition decreased at both low and very high zooxanthella densities. However, there was a wide range of symbiont densities for which net daily photosynthesis was approximately equivalent. Therefore, significant changes in symbiont density do not necessarily cause a change in autotrophic energy acquisition by the colony. Model estimates of the optimal range of cell densities corresponded well with independent observations of symbiont concentrations obtained from field and laboratory studies of healthy colonies. Overall, this study demonstrates that the seasonal fluctuations in symbiont numbers observed in healthy colonies of the Mediterranean coral investigated do not have a strong effect on photosynthetic energy acquisition.
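
    A hedged sketch of the saturating-photosynthesis, linear-respiration structure described above is given below; with these functional forms the density that maximizes net carbon gain has a closed form, but the parameter values shown are illustrative placeholders, not the fitted model.

```python
import math

# Saturating gross photosynthesis minus linear respiration:
#   P_net(d) = P_max * d / (K + d) - (r0 + r1 * d).
# Setting dP_net/dd = 0 gives the optimal symbiont density in closed form:
#   d_opt = sqrt(P_max * K / r1) - K.
# Parameter values below are illustrative placeholders, not fitted values.
P_max, K = 100.0, 0.5e6       # asymptotic gross photosynthesis; half-saturation density
r0, r1 = 5.0, 2.0e-5          # baseline respiration; respiration per symbiont

def net_photosynthesis(d):
    return P_max * d / (K + d) - (r0 + r1 * d)

d_opt = math.sqrt(P_max * K / r1) - K
print(d_opt, net_photosynthesis(d_opt))
# The curve is flat near d_opt, so a wide range of densities gives nearly the
# same net gain, consistent with the pattern described in the abstract.
```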

  5. Long-term study of longitudinal dependence in primary particle precipitation in the north Jovian aurora

    NASA Technical Reports Server (NTRS)

    Livengood, T. A.; Strobel, D. F.; Moos, H. W.

    1990-01-01

    The wavelength-dependent absorption apparent in IUE spectra of the north Jovian aurora is analyzed to determine the column density of hydrocarbons above the altitude of the FUV auroral emission. Both the magnetotail and torus auroral zone models are considered in estimating zenith angles, with very similar results obtained for both models. It is found that the hydrocarbon column density above the FUV emission displays a consistent dependence on magnetic longitude, with the peak density occurring approximately coincident with the peak in the observed auroral intensity. Two distinct scenarios for the longitude dependence of the column density are discussed. In one, the Jovian upper atmosphere is longitudinally homogeneous, and the variation in optical depth is due to a variation in penetration, and thus energy, of the primary particles. In the other, the energy of the primaries is longitudinally homogeneous, and it is aeronomic properties which change, probably due to auroral heating.

  6. Using satellite remote sensing to model and map the distribution of Bicknell's thrush (Catharus bicknelli) in the White Mountains of New Hampshire

    NASA Astrophysics Data System (ADS)

    Hale, Stephen Roy

    Landsat-7 Enhanced Thematic Mapper satellite imagery was used to model Bicknell's Thrush (Catharus bicknelli) distribution in the White Mountains of New Hampshire. The proof-of-concept was established for using satellite imagery in species-habitat modeling, where for the first time imagery spectral features were used to estimate a species-habitat model variable. The model predicted rising probabilities of thrush presence with decreasing dominant vegetation height, increasing elevation, and decreasing distance to nearest Fir Sapling cover type. To solve the model at all locations required regressor estimates at every pixel, which were not available for the dominant vegetation height and elevation variables. Topographically normalized imagery features Normalized Difference Vegetation Index and Band 1 (blue) were used to estimate dominant vegetation height using multiple linear regression; and a Digital Elevation Model was used to estimate elevation. Distance to nearest Fir Sapling cover type was obtained for each pixel from a land cover map specifically constructed for this project. The Bicknell's Thrush habitat model was derived using logistic regression, which produced the probability of detecting a singing male based on the pattern of model covariates. Model validation using Bicknell's Thrush data not used in model calibration, revealed that the model accurately estimated thrush presence at probabilities ranging from 0 to <0.40 and from 0.50 to <0.60. Probabilities from 0.40 to <0.50 and greater than 0.60 significantly underestimated and overestimated presence, respectively. Applying the model to the study area illuminated an important implication for Bicknell's Thrush conservation. The model predicted increasing numbers of presences and increasing relative density with rising elevation, with which exists a concomitant decrease in land area. Greater land area of lower density habitats may account for more total individuals and reproductive output than higher density less abundant land area. Efforts to conserve areas of highest individual density under the assumption that density reflects habitat quality could target the smallest fraction of the total population.

  7. Comparison of cumulant expansion and q-space imaging estimates for diffusional kurtosis in brain.

    PubMed

    Mohanty, Vaibhav; McKinnon, Emilie T; Helpern, Joseph A; Jensen, Jens H

    2018-05-01

    To compare estimates for the diffusional kurtosis in brain as obtained from a cumulant expansion (CE) of the diffusion MRI (dMRI) signal and from q-space (QS) imaging. For the CE estimates of the kurtosis, the CE was truncated to quadratic order in the b-value and fit to the dMRI signal for b-values from 0 up to 2000 s/mm². For the QS estimates, b-values ranging from 0 up to 10,000 s/mm² were used to determine the diffusion displacement probability density function (dPDF) via Stejskal's formula. The kurtosis was then calculated directly from the second and fourth order moments of the dPDF. These two approximations were studied for in vivo human data obtained on a 3T MRI scanner using three orthogonal diffusion encoding directions. The whole brain mean values for the CE and QS kurtosis estimates differed by 16% or less in each of the considered diffusion encoding directions, and the Pearson correlation coefficients all exceeded 0.85. Nonetheless, there were large discrepancies in many voxels, particularly those with either very high or very low kurtoses relative to the mean values. Estimates of the diffusional kurtosis in brain obtained using CE and QS approximations are strongly correlated, suggesting that they encode similar information. However, for the choice of b-values employed here, there may be substantial differences, depending on the properties of the diffusion microenvironment in each voxel. Copyright © 2018 Elsevier Inc. All rights reserved.
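
    The two estimators, as commonly written, are a quadratic-in-b cumulant fit of ln S(b) and an excess kurtosis computed from the second and fourth moments of a displacement distribution; both are sketched below on synthetic data, without reproducing the paper's acquisition or Stejskal-formula step.

```python
import numpy as np

# Two common routes to an apparent diffusional kurtosis (synthetic data only):
# (1) cumulant expansion: ln S(b) ~ ln S0 - b*D + (1/6) * b**2 * D**2 * K,
#     fitted as a quadratic in b over a low-b range;
# (2) q-space style: excess kurtosis of a displacement distribution,
#     K = <x**4> / <x**2>**2 - 3.
rng = np.random.default_rng(3)

D_true, K_true = 1.0e-3, 0.9                    # mm^2/s and dimensionless (synthetic)
b = np.linspace(0, 2000, 11)                    # s/mm^2
signal = np.exp(-b * D_true + (b * D_true) ** 2 * K_true / 6)
signal *= 1 + rng.normal(0, 0.002, b.size)      # mild noise

c2, c1, _ = np.polyfit(b, np.log(signal), 2)    # quadratic fit of ln S(b)
D_ce = -c1
K_ce = 6.0 * c2 / D_ce ** 2
print("CE estimate:", D_ce, K_ce)

# Moment-based (QS-style) kurtosis from samples of a heavy-tailed displacement dPDF.
x = rng.standard_t(df=10, size=200_000)         # synthetic non-Gaussian displacements
K_qs = np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0
print("QS-style excess kurtosis:", K_qs)        # ~1.0 for a t(10) distribution
```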

  8. Leaf-on canopy closure in broadleaf deciduous forests predicted during winter

    USGS Publications Warehouse

    Twedt, Daniel J.; Ayala, Andrea J.; Shickel, Madeline R.

    2015-01-01

    Forest canopy influences light transmittance, which in turn affects tree regeneration and survival, thereby having an impact on forest composition and habitat conditions for wildlife. Because leaf area is the primary impediment to light penetration, quantitative estimates of canopy closure are normally made during summer. Studies of forest structure and wildlife habitat that occur during winter, when deciduous trees have shed their leaves, may inaccurately estimate canopy closure. We estimated percent canopy closure during both summer (leaf-on) and winter (leaf-off) in broadleaf deciduous forests in Mississippi and Louisiana using gap light analysis of hemispherical photographs that were obtained during repeat visits to the same locations within bottomland and mesic upland hardwood forests and hardwood plantation forests. We used mixed-model linear regression to predict leaf-on canopy closure from measurements of leaf-off canopy closure, basal area, stem density, and tree height. Competing predictive models all included leaf-off canopy closure (relative importance = 0.93), whereas basal area and stem density, more traditional predictors of canopy closure, had relative model importance of ≤ 0.51.

  9. Using spatial capture–recapture to elucidate population processes and space-use in herpetological studies

    USGS Publications Warehouse

    Muñoz, David J.; Miller, David A.W.; Sutherland, Chris; Grant, Evan H. Campbell

    2016-01-01

    The cryptic behavior and ecology of herpetofauna make estimating the impacts of environmental change on demography difficult; yet, the ability to measure demographic relationships is essential for elucidating mechanisms leading to the population declines reported for herpetofauna worldwide. Recently developed spatial capture–recapture (SCR) methods are well suited to standard herpetofauna monitoring approaches. Individually identifying animals and their locations allows accurate estimates of population densities and survival. Spatial capture–recapture methods also allow estimation of parameters describing space-use and movement, which generally are expensive or difficult to obtain using other methods. In this paper, we discuss the basic components of SCR models, the available software for conducting analyses, and the experimental designs based on common herpetological survey methods. We then apply SCR models to Red-backed Salamander (Plethodon cinereus), to determine differences in density, survival, dispersal, and space-use between adult male and female salamanders. By highlighting the capabilities of SCR, and its advantages compared to traditional methods, we hope to give herpetologists the resource they need to apply SCR in their own systems.

  10. Electrostatic Estimation of Intercalant Jump-Diffusion Barriers Using Finite-Size Ion Models.

    PubMed

    Zimmermann, Nils E R; Hannah, Daniel C; Rong, Ziqin; Liu, Miao; Ceder, Gerbrand; Haranczyk, Maciej; Persson, Kristin A

    2018-02-01

    We report on a scheme for estimating intercalant jump-diffusion barriers that are typically obtained from demanding density functional theory-nudged elastic band calculations. The key idea is to relax a chain of states in the field of the electrostatic potential that is averaged over a spherical volume using different finite-size ion models. For magnesium migrating in typical intercalation materials such as transition-metal oxides, we find that the optimal model is a relatively large shell. This data-driven result parallels typical assumptions made in models based on Onsager's reaction field theory to quantitatively estimate electrostatic solvent effects. Because of its efficiency, our potential of electrostatics-finite ion size (PfEFIS) barrier estimation scheme will enable rapid identification of materials with good ionic mobility.

  11. Development of theoretical oxygen saturation calibration curve based on optical density ratio and optical simulation approach

    NASA Astrophysics Data System (ADS)

    Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia

    2017-09-01

    The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either empirically, using animals as experimental subjects, or derived from mathematical equations. However, determining the calibration curve using animals is time-consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been widely used in the biomedical optics field due to their capability to reproduce real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratios (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behaviors were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration curve studies using other wavelength pairs.

  12. Symmetry Energy and Its Components in Finite Nuclei

    NASA Astrophysics Data System (ADS)

    Antonov, A. N.; Gaidarov, M. K.; Kadrev, D. N.; Sarriguren, P.; Moya de Guerra, E.

    2018-05-01

    We derive the volume and surface components of the nuclear symmetry energy (NSE) and their ratio within the coherent density fluctuation model. The estimates use the results of the model for the NSE in finite nuclei based on the Brueckner and Skyrme energy-density functionals for nuclear matter. The obtained values of the volume and surface contributions to the NSE and their ratio for the Ni, Sn, and Pb isotopic chains are compared with estimates from other approaches that have used available experimental data on binding energies, neutron-skin thicknesses, and excitation energies to isobaric analog states (IAS). Apart from the density dependence investigated in our previous works, we also study the temperature dependence of the symmetry energy in finite nuclei in the framework of the local density approximation, combining it with the self-consistent Skyrme-HFB method using the cylindrical transformed deformed harmonic-oscillator basis. The results for the thermal evolution of the NSE in the interval T = 0–4 MeV show that its values decrease with temperature. The investigations of the T-dependence of the neutron and proton root-mean-square radii and the corresponding neutron skin thickness indicate that the effect of temperature leads mainly to a substantial increase of the neutron radii and skins, especially in more neutron-rich nuclei.

  13. Electric field measurement in the dielectric tube of helium atmospheric pressure plasma jet

    NASA Astrophysics Data System (ADS)

    Sretenović, Goran B.; Guaitella, Olivier; Sobota, Ana; Krstić, Ivan B.; Kovačević, Vesna V.; Obradović, Bratislav M.; Kuraica, Milorad M.

    2017-03-01

    The results of the electric field measurements in the capillary of the helium plasma jet are presented in this article. Distributions of the electric field for the streamers are determined for different gas flow rates. It is found that the electric field strength in front of the ionization wave decreases as it approaches the exit of the tube. The values obtained under the presented experimental conditions are in the range of 5-11 kV/cm. It was found that an increase in gas flow above 1500 SCCM could induce substantial changes in the discharge operation. This is reflected in the formation of a brighter discharge region and the appearance of electric field maxima. Furthermore, using the measured values of the electric field strength in the streamer head, it was possible to estimate electron densities in the streamer channel. A maximal density of 4 × 10¹¹ cm⁻³ is obtained in the vicinity of the grounded ring electrode. The electron density distributions behave similarly to the electric field strength distributions under the studied experimental conditions.

  14. Intrinsic physical conditions and structure of relativistic jets in active galactic nuclei

    NASA Astrophysics Data System (ADS)

    Nokhrina, E. E.; Beskin, V. S.; Kovalev, Y. Y.; Zheltoukhov, A. A.

    2015-03-01

    The analysis of the frequency dependence of the observed shift of the cores of relativistic jets in active galactic nuclei (AGNs) allows us to evaluate the number density of the outflowing plasma n_e and, hence, the multiplicity parameter λ = n_e/n_GJ, where n_GJ is the Goldreich-Julian number density. We have obtained a median value λ_med = 3 × 10¹³ and a median value of the Michel magnetization parameter σ_M,med = 8 from an analysis of 97 sources. Since the magnetization parameter can be interpreted as the maximum possible Lorentz factor Γ of the bulk motion which can be obtained for relativistic magnetohydrodynamic (MHD) flow, this estimate is in agreement with the observed superluminal motion of bright features in AGN jets. Moreover, knowing these key parameters, one can determine the transverse structure of the flow. We show that the poloidal magnetic field and particle number density are much larger in the centre of the jet than near the jet boundary. The MHD model can also explain the typical observed level of jet acceleration. Finally, causal connectivity of strongly collimated jets is discussed.

  15. Automated Breast Density Computation in Digital Mammography and Digital Breast Tomosynthesis: Influence on Mean Glandular Dose and BIRADS Density Categorization.

    PubMed

    Castillo-García, Maria; Chevalier, Margarita; Garayoa, Julia; Rodriguez-Ruiz, Alejandro; García-Pinto, Diego; Valverde, Julio

    2017-07-01

    The study aimed to compare the breast density estimates from two algorithms on full-field digital mammography (FFDM) and digital breast tomosynthesis (DBT) and to analyze the clinical implications. We selected 561 FFDM and DBT examinations from patients without breast pathologies. Two versions of a commercial software (Quantra 2D and Quantra 3D) calculated the volumetric breast density automatically in FFDM and DBT, respectively. Other parameters such as area breast density and total breast volume were evaluated. We compared the results from both algorithms using the Mann-Whitney U non-parametric test and the Spearman's rank coefficient for data correlation analysis. Mean glandular dose (MGD) was calculated following the methodology proposed by Dance et al. Measurements with both algorithms are well correlated (r ≥ 0.77). However, there are statistically significant differences between the medians (P < 0.05) of most parameters. The volumetric and area breast density median values from FFDM are, respectively, 8% and 77% higher than DBT estimations. Both algorithms classify 35% and 55% of breasts into BIRADS (Breast Imaging-Reporting and Data System) b and c categories, respectively. There are no significant differences between the MGD calculated using the breast density from each algorithm. DBT delivers higher MGD than FFDM, with a lower difference (5%) for breasts in the BIRADS d category. MGD is, on average, 6% higher than values obtained with the breast glandularity proposed by Dance et al. Breast density measurements from both algorithms lead to equivalent BIRADS classification and MGD values, hence showing no difference in clinical outcomes. The median MGD values of FFDM and DBT examinations are similar for dense breasts (BIRADS d category). Published by Elsevier Inc.

  16. Northern elephant seals adjust gliding and stroking patterns with changes in buoyancy: validation of at-sea metrics of body density.

    PubMed

    Aoki, Kagari; Watanabe, Yuuki Y; Crocker, Daniel E; Robinson, Patrick W; Biuw, Martin; Costa, Daniel P; Miyazaki, Nobuyuki; Fedak, Mike A; Miller, Patrick J O

    2011-09-01

    Many diving animals undergo substantial changes in their body density that are the result of changes in lipid content over their annual fasting cycle. Because the size of the lipid stores reflects an integration of foraging effort (energy expenditure) and foraging success (energy assimilation), measuring body density is a good way to track net resource acquisition of free-ranging animals while at sea. Here, we experimentally altered the body density and mass of three free-ranging elephant seals by remotely detaching weights and floats while monitoring their swimming speed, depth and three-axis acceleration with a high-resolution data logger. Cross-validation of three methods for estimating body density from hydrodynamic gliding performance of freely diving animals showed strong positive correlation with body density estimates obtained from isotope dilution body composition analysis over a density range of 1015 to 1060 kg m^-3. All three hydrodynamic models were within 1% of, but slightly greater than, body density measurements determined by isotope dilution, and therefore have the potential to track changes in body condition of a wide range of freely diving animals. Gliding during ascent and descent clearly increased and stroke rate decreased when buoyancy manipulations aided the direction of vertical transit, but ascent and descent speed were largely unchanged. The seals adjusted stroking intensity to maintain swim speed within a narrow range, despite changes in buoyancy. During active swimming, all three seals increased the amplitude of lateral body accelerations and two of the seals altered stroke frequency in response to the need to produce thrust required to overcome combined drag and buoyancy forces.

  17. Estimating the rates of mass change, ice volume change and snow volume change in Greenland from ICESat and GRACE data

    NASA Astrophysics Data System (ADS)

    Slobbe, D. C.; Ditmar, P.; Lindenbergh, R. C.

    2009-01-01

    The focus of this paper is on the quantification of ongoing mass and volume changes over the Greenland ice sheet. For that purpose, we used elevation changes derived from the Ice, Cloud, and land Elevation Satellite (ICESat) laser altimetry mission and monthly variations of the Earth's gravity field as observed by the Gravity Recovery and Climate Experiment (GRACE) mission. Based on a stand-alone processing scheme of ICESat data, the most probable estimate of the mass change rate from 2003 February to 2007 April equals -139 ± 68 Gton yr^-1. Here, we used a density of 600 ± 300 kg m^-3 to convert the estimated elevation change rate in the region above 2000 m into a mass change rate. For the region below 2000 m, we used a density of 900 ± 300 kg m^-3. Based on GRACE gravity models from mid-2002 to mid-2007 as processed by CNES, CSR, DEOS and GFZ, the estimated mass change rate for the whole of Greenland ranges between -128 and -218 Gton yr^-1. Most GRACE solutions show much stronger mass losses than obtained with ICESat, which might be related to a local undersampling of the mass loss by ICESat and to uncertainties in the snow/ice densities used. To address the uncertainties in the snow and ice densities, two independent joint inversion concepts are proposed to profit from both GRACE and ICESat observations simultaneously. The first concept, developed to reduce the uncertainty of the mass change rate, estimates this rate in combination with an effective snow/ice density. However, it turns out that the uncertainties are not reduced, which is probably caused by the unrealistic assumption that the effective density is constant in space and time. The second concept is designed to convert GRACE and ICESat data into two new products: variations of ice volume and variations of snow volume, separately. Such an approach is expected to lead to new insights into ongoing mass change processes over the Greenland ice sheet. For the different GRACE solutions, our results show a snow volume change of -11 to 155 km^3 yr^-1 and an ice loss rate of -136 to -292 km^3 yr^-1.
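
    A minimal sketch of the density conversion described above: a volume change rate is multiplied by an assumed snow/ice density to obtain a mass change rate, with the density and volume uncertainties propagated in quadrature. All numbers are illustrative placeholders, not the paper's data.

        # Illustrative conversion of a volume change rate to a mass change rate
        # using an assumed snow/ice density; placeholder numbers only.
        import numpy as np

        def mass_rate(volume_rate_km3_per_yr, rho_kg_m3, sigma_v, sigma_rho):
            """Convert a volume change rate (km^3/yr) to a mass change rate (Gton/yr),
            propagating the volume and density uncertainties in quadrature."""
            v_m3 = volume_rate_km3_per_yr * 1e9                 # km^3 -> m^3
            sigma_v_m3 = sigma_v * 1e9
            m_kg = v_m3 * rho_kg_m3                             # mass rate in kg/yr
            # relative uncertainties of a product of independent quantities add in quadrature
            sigma_m_kg = abs(m_kg) * np.hypot(sigma_v_m3 / v_m3, sigma_rho / rho_kg_m3)
            return m_kg / 1e12, sigma_m_kg / 1e12               # kg -> Gton (1 Gton = 1e12 kg)

        # e.g. a hypothetical -200 +/- 50 km^3/yr loss converted with 900 +/- 300 kg/m^3
        print(mass_rate(-200.0, 900.0, 50.0, 300.0))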

  18. The Minimum-Mass Surface Density of the Solar Nebula using the Disk Evolution Equation

    NASA Technical Reports Server (NTRS)

    Davis, Sanford S.

    2005-01-01

    The Hayashi minimum-mass power-law representation of the pre-solar nebula (Hayashi 1981, Prog. Theor. Phys. 70, 35) is revisited using analytic solutions of the disk evolution equation. A new cumulative-planetary-mass model (an integrated form of the surface density) is shown to predict a smoother surface density than methods based on direct estimates of surface density from planetary data. First, a best-fit transcendental function is applied directly to the cumulative planetary mass data, with the surface density obtained by direct differentiation. Next, a solution to the time-dependent disk evolution equation is parametrically adapted to the planetary data. The latter model indicates a decay rate of r^-1/2 in the inner disk followed by a rapid decay, which results in a sharper outer boundary than predicted by the minimum-mass model. The model is shown to be a good approximation to the finite-size early Solar Nebula and, by extension, to extrasolar protoplanetary disks.
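
    A sketch of the relation behind the cumulative-mass approach: if M(r) is the cumulative planetary mass inside radius r, the implied surface density follows from sigma(r) = (1/(2*pi*r)) dM/dr. The smooth cumulative-mass function below is an arbitrary placeholder, not the paper's fit.

        # Surface density by direct differentiation of a smooth cumulative-mass fit.
        # The fitted function and units here are placeholders.
        import numpy as np

        r = np.linspace(0.4, 30.0, 200)                  # AU (hypothetical radial grid)
        m_cum = 1.0 - np.exp(-(r / 5.0) ** 1.5)          # hypothetical cumulative mass (arbitrary units)

        sigma = np.gradient(m_cum, r) / (2.0 * np.pi * r)   # sigma(r) = dM/dr / (2*pi*r)
        print(sigma[:5])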

  19. Uranium distribution and 'excessive' U-He ages in iron meteoritic troilite

    NASA Technical Reports Server (NTRS)

    Fisher, D. E.

    1985-01-01

    Fission-track techniques were used to measure the uranium distribution in meteoritic troilite and graphite. The data showed a heterogeneous distribution of tracks, with a significant portion of the track density present in the form of uranium clusters at least 10 microns in size. The matrix containing the clusters was also heterogeneous in composition, with U concentrations of about 0.2-4.7 ppb. U/He ages could not be estimated on the basis of such heterogeneous U distributions, so previously reported estimates of U/He ages in the presolar range are probably invalid.

  20. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
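
    A minimal sketch of the L1-median (geometric median) mentioned above, computed with the classical Weiszfeld fixed-point iteration. This is a generic illustration of the robust location estimate, not the authors' estimator.

        # Geometric (L1) median via Weiszfeld's iteration: a robust location
        # estimate of the kind used above in place of the sample mean.
        import numpy as np

        def l1_median(points, tol=1e-8, max_iter=500):
            """points: (n, d) array.  Returns the geometric median as a (d,) array."""
            y = points.mean(axis=0)                      # start from the ordinary mean
            for _ in range(max_iter):
                d = np.linalg.norm(points - y, axis=1)
                d = np.where(d < 1e-12, 1e-12, d)        # guard against division by zero
                w = 1.0 / d
                y_new = (w[:, None] * points).sum(axis=0) / w.sum()
                if np.linalg.norm(y_new - y) < tol:
                    break
                y = y_new
            return y

        # The median barely moves when a gross outlier is added, unlike the mean.
        rng = np.random.default_rng(1)
        x = rng.normal(size=(200, 2))
        x_out = np.vstack([x, [[100.0, 100.0]]])
        print(l1_median(x), l1_median(x_out))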

  1. Uncertainty quantification and propagation of errors of the Lennard-Jones 12-6 parameters for n-alkanes

    PubMed Central

    Knotts, Thomas A.

    2017-01-01

    Molecular simulation has the ability to predict various physical properties that are difficult to obtain experimentally. For example, we implement molecular simulation to predict the critical constants (i.e., critical temperature, critical density, critical pressure, and critical compressibility factor) for large n-alkanes that thermally decompose experimentally (as large as C48). Historically, molecular simulation has been viewed as a tool that is limited to providing qualitative insight. One key reason for this perceived weakness in molecular simulation is the difficulty of quantifying the uncertainty in the results. This is because molecular simulations have many sources of uncertainty that propagate and are difficult to quantify. We investigate one of the most important sources of uncertainty, namely, the intermolecular force field parameters. Specifically, we quantify the uncertainty in the Lennard-Jones (LJ) 12-6 parameters for the CH4, CH3, and CH2 united-atom interaction sites. We then demonstrate how the uncertainties in the parameters lead to uncertainties in the saturated liquid density and critical constant values obtained from Gibbs Ensemble Monte Carlo simulation. Our results suggest that the uncertainties attributed to the LJ 12-6 parameters are small enough that quantitatively useful estimates of the saturated liquid density and the critical constants can be obtained from molecular simulation. PMID:28527455

  2. Relationship between wave energy and free energy from pickup ions in the Comet Halley environment

    NASA Technical Reports Server (NTRS)

    Huddleston, D. E.; Johnstone, A. D.

    1992-01-01

    The free energy available from the implanted heavy ion population at Comet Halley is calculated by assuming that the initial unstable velocity space ring distribution of the ions evolves toward a bispherical shell. Ultimately this free energy adds to the turbulence in the solar wind. Upstream and downstream free energies are obtained separately for the conditions observed along the Giotto spacecraft trajectory. The results indicate that the waves are mostly upstream propagating in the solar wind frame. The total free energy density always exceeds the measured wave energy density because, as expected in the nonlinear process of ion scattering, the available energy is not all immediately released. An estimate of the amount which has been released can be obtained from the measured oxygen ion distributions and again it exceeds that observed. The theoretical analysis is extended to calculate the k spectrum of the cometary-ion-generated turbulence.

  3. Glass Formation of n-Butanol: Coarse-grained Molecular Dynamics Simulations Using Gay-Berne Potential Model

    NASA Astrophysics Data System (ADS)

    Xie, Gui-long; Zhang, Yong-hong; Huang, Shi-ping

    2012-04-01

    Using coarse-grained molecular dynamics simulations based on the Gay-Berne potential model, we have simulated the cooling process of liquid n-butanol. A new set of GB parameters is obtained by fitting the results of density functional theory calculations. The simulations are carried out in the range of 290-50 K with temperature decrements of 10 K. The cooling characteristics are determined from the variations of the density, the potential energy and the orientational order parameter with temperature, whose slopes all show a discontinuity. Both the radial distribution function curves and the second-rank orientational correlation function curves exhibit splitting of the second peak. Using the discontinuous change of these thermodynamic and structural properties, we estimate the glass transition temperature to be Tg = 120±10 K, in good agreement with the experimental value of 110±1 K.

  4. Fluid inclusion geothermometry

    USGS Publications Warehouse

    Cunningham, C.G.

    1977-01-01

    Fluid inclusions trapped within crystals either during growth or at a later time provide many clues to the histories of rocks and ores. Estimates of fluid-inclusion homogenization temperature and density can be obtained using a petrographic microscope with thin sections, and they can be refined using heating and freezing stages. Fluid inclusion studies, used in conjunction with paragenetic studies, can provide direct data on the time and space variations of parameters such as temperature, pressure, density, and composition of fluids in geologic environments. Changes in these parameters directly affect the fugacity, composition, and pH of fluids, thus directly influencing localization of ore metals. © 1977 Ferdinand Enke Verlag Stuttgart.

  5. Clinical evaluation of melanomas and common nevi by spectral imaging

    PubMed Central

    Diebele, Ilze; Kuzmina, Ilona; Lihachev, Alexey; Kapostinsh, Janis; Derjabo, Alexander; Valeine, Lauma; Spigulis, Janis

    2012-01-01

    A clinical trial on multi-spectral imaging of malignant and non-malignant skin pathologies comprising 17 melanomas and 65 pigmented common nevi was performed. Optical density data of skin pathologies were obtained in the spectral range 450–950 nm using the multispectral camera Nuance EX. An image parameter and maps capable of distinguishing melanoma from pigmented nevi were proposed. The diagnostic criterion is based on skin optical density differences at three fixed wavelengths: 540 nm, 650 nm and 950 nm. The sensitivity and specificity of this method were estimated to be 94% and 89%, respectively. The proposed methodology and potential clinical applications are discussed. PMID:22435095

  6. Observations of core-mantle boundary Stoneley modes

    NASA Astrophysics Data System (ADS)

    Koelemeijer, Paula; Deuss, Arwen; Ritsema, Jeroen

    2013-06-01

    Core-mantle boundary (CMB) Stoneley modes represent a unique class of normal modes with extremely strong sensitivity to wave speed and density variations in the D" region. We measure splitting functions of eight CMB Stoneley modes using modal spectra from 93 events with Mw > 7.4 between 1976 and 2011. The obtained splitting function maps correlate well with the predicted splitting calculated for S20RTS+Crust5.1 structure and the distribution of Sdiff and Pdiff travel time anomalies, suggesting that they are robust. We illustrate how our new CMB Stoneley mode splitting functions can be used to estimate density variations in the Earth's lowermost mantle.

  7. Large Scale Density Estimation of Blue and Fin Whales (LSD)

    DTIC Science & Technology

    2015-09-30

    ...sensors, or both. The goal of this research is to develop and implement a new method for estimating blue and fin whale density that is effective over... develop and implement a density estimation methodology for quantifying blue and fin whale abundance from passive acoustic data recorded on sparse...

  8. Estimating Small-Body Gravity Field from Shape Model and Navigation Data

    NASA Technical Reports Server (NTRS)

    Park, Ryan S.; Werner, Robert A.; Bhaskaran, Shyam

    2008-01-01

    This paper presents a method to model the external gravity field and to estimate the internal density variation of a small-body. We first discuss the modeling problem, where we assume the polyhedral shape and internal density distribution are given, and model the body interior using finite elements definitions, such as cubes and spheres. The gravitational attractions computed from these approaches are compared with the true uniform-density polyhedral attraction and the level of accuracies are presented. We then discuss the inverse problem where we assume the body shape, radiometric measurements, and a priori density constraints are given, and estimate the internal density variation by estimating the density of each finite element. The result shows that the accuracy of the estimated density variation can be significantly improved depending on the orbit altitude, finite-element resolution, and measurement accuracy.
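
    A toy sketch of the forward-modeling idea described above: the body interior is divided into small elements, each assigned a density, and the external attraction is the superposition of the element contributions. Simple point-mass elements are used here for brevity; the paper's cube and sphere elements are more elaborate, and all values below are illustrative.

        # Toy forward model: gravity of a body approximated by point-mass finite
        # elements of possibly different density.  Illustrative only.
        import numpy as np

        G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

        def gravity(field_point, centers, volumes, densities):
            """Superpose the attraction (m/s^2) at field_point from point-mass elements
            at `centers` with the given volumes (m^3) and densities (kg/m^3)."""
            masses = volumes * densities
            dr = centers - field_point                    # vectors from the field point to each element
            r = np.linalg.norm(dr, axis=1)
            return (G * masses[:, None] * dr / r[:, None] ** 3).sum(axis=0)

        # cubic grid of 10 m elements filling a 100 m block, uniform density 2000 kg/m^3
        xs = np.arange(-45.0, 50.0, 10.0)
        centers = np.array(np.meshgrid(xs, xs, xs)).reshape(3, -1).T
        volumes = np.full(len(centers), 10.0 ** 3)
        densities = np.full(len(centers), 2000.0)

        print(gravity(np.array([0.0, 0.0, 500.0]), centers, volumes, densities))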

  9. Impact of density information on Rayleigh surface wave inversion results

    NASA Astrophysics Data System (ADS)

    Ivanov, Julian; Tsoflias, Georgios; Miller, Richard D.; Peterie, Shelby; Morton, Sarah; Xia, Jianghai

    2016-12-01

    We assessed the impact of density on the estimation of inverted shear-wave velocity (Vs) using the multi-channel analysis of surface waves (MASW) method. We considered the forward modeling theory, evaluated model sensitivity, and tested the effect of density information on the inversion of seismic data acquired in the Arctic. Theoretical review, numerical modeling and inversion of modeled and real data indicated that the density ratios between layers, not the actual density values, impact the determination of surface-wave phase velocities. Application to real data compared surface-wave inversion results using: a) constant density, the most common approach in practice, b) indirect density estimates derived from refraction compressional-wave velocity observations, and c) direct density measurements in a borehole. The use of indirect density estimates reduced the final shear-wave velocity (Vs) results typically by 6-7% and the use of densities from a borehole reduced the final Vs estimates by 10-11% compared to those from assumed constant density. In addition to the improved absolute Vs accuracy, the resulting overall Vs changes were unevenly distributed laterally when viewed on a 2-D section, leading to an overall Vs model structure that was more representative of the subsurface environment. It was observed that the use of constant density instead of density increasing with depth can not only lead to Vs overestimation but also create inaccurate model structures, such as a spurious low-velocity layer. Thus, optimal Vs estimations can best be achieved using field estimates of subsurface density ratios.

  10. Use of spatial capture–recapture to estimate density of Andean bears in northern Ecuador

    USGS Publications Warehouse

    Molina, Santiago; Fuller, Angela K.; Morin, Dana J.; Royle, J. Andrew

    2017-01-01

    The Andean bear (Tremarctos ornatus) is the only extant species of bear in South America and is considered threatened across its range and endangered in Ecuador. Habitat loss and fragmentation is considered a critical threat to the species, and there is a lack of knowledge regarding its distribution and abundance. The species is thought to occur at low densities, making field studies designed to estimate abundance or density challenging. We conducted a pilot camera-trap study to estimate Andean bear density in a recently identified population of Andean bears northwest of Quito, Ecuador, during 2012. We compared 12 candidate spatial capture–recapture models including covariates on encounter probability and density and estimated a density of 7.45 bears/100 km2 within the region. In addition, we estimated that approximately 40 bears used a recently named Andean bear corridor established by the Secretary of Environment, and we produced a density map for this area. Use of a rub-post with vanilla scent attractant allowed us to capture numerous photographs for each event, improving our ability to identify individual bears by unique facial markings. This study provides the first empirically derived density estimate for Andean bears in Ecuador and should provide direction for future landscape-scale studies interested in conservation initiatives requiring spatially explicit estimates of density.

  11. Pressure Balance at Mars and Solar Wind Interaction with the Martian Atmosphere

    NASA Technical Reports Server (NTRS)

    Krymskii, A. M.; Ness, N. F.; Crider, D. H.; Breus, T. K.; Acuna, M. H.; Hinson, D.

    2003-01-01

    The strongest crustal fields are located in certain regions in the Southern hemisphere. In the Northern hemisphere, the crustal fields are rather weak and usually do not prevent direct interaction between the SW and the Martian ionosphere/atmosphere. Exceptions occur in the isolated mini-magnetospheres formed by the crustal anomalies. Electron density profiles of the ionosphere of Mars derived from radio occultation data obtained by the Radio Science Mars Global Surveyor (MGS) experiment have been compared with the crustal magnetic fields measured by the MGS Magnetometer/Electron Reflectometer (MAG/ER) experiment. A study of 523 electron density profiles obtained at latitudes from +67 deg. to +77 deg. has been conducted. The effective scale-height of the electron density for two altitude ranges, 145-165 km and 165-185 km, and the effective scale-height of the neutral atmosphere density in the vicinity of the ionization peak have been derived for each of the profiles studied. For the regions outside of the potential mini-magnetospheres, the thermal pressure of the ionospheric plasma for the altitude range 145-185 km has been estimated. In the high latitude ionosphere at Mars, the total pressure at altitudes 160 and 180 km has been mapped. The solar wind interaction with the ionosphere of Mars and origin of the sharp drop of the electron density at the altitudes 200-210 km will be discussed.

  12. [Rapid prediction of annual ring density of Paulownia elongate standing tress using near infrared spectroscopy].

    PubMed

    Jiang, Ze-Hui; Wang, Yu-Rong; Fei, Ben-Hua; Fu, Feng; Hse, Chung-Yun

    2007-06-01

    Rapid prediction of the annual ring density of Paulownia elongate standing trees using near-infrared (NIR) spectroscopy was studied. Sample collection was non-destructive: wood cores 5 mm in diameter were extracted at breast height from standing trees rather than from felled trees. The spectral data were then collected with the autoscan method of the NIR instrument. Annual ring density was determined by mercury immersion. Models were built and analyzed by partial least squares (PLS) regression with full cross-validation in the 350-2500 nm wavelength range. The results showed high correlations between annual ring density and the NIR-fitted data. The correlation coefficient of the prediction model was 0.88 and 0.91 for the middle-diameter and larger-diameter trees, respectively. Moreover, high correlation coefficients were also obtained between laboratory-determined annual ring density and the NIR-fitted data for the middle-diameter Paulownia elongate standing trees: the correlation coefficients of the calibration and prediction models were 0.90 and 0.83, and the standard errors of calibration (SEC) and prediction (SEP) were 0.012 and 0.016, respectively. The method can simply, rapidly and non-destructively estimate the annual ring density of Paulownia elongate standing trees close to the cutting age.
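
    A generic sketch of the modeling step described above (PLS regression evaluated by cross-validation), using scikit-learn on synthetic spectra; it is not the authors' pipeline and all values are hypothetical.

        # Generic PLS-with-cross-validation sketch for spectra -> wood density,
        # using synthetic data; not the authors' pipeline.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        n_samples, n_wavelengths = 60, 500
        spectra = rng.normal(size=(n_samples, n_wavelengths))                     # hypothetical NIR spectra
        density = spectra[:, 100] * 0.05 + 0.3 + rng.normal(0, 0.01, n_samples)   # hypothetical ring densities

        pls = PLSRegression(n_components=5)
        predicted = cross_val_predict(pls, spectra, density, cv=10).ravel()       # full cross-validation

        r = np.corrcoef(density, predicted)[0, 1]
        sep = np.sqrt(np.mean((density - predicted) ** 2))
        print(f"correlation coefficient: {r:.2f}, SEP: {sep:.3f}")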

  13. Estimations of pollution emissions by the Moscow megapolis basing on in-situ measurements and optical remote sensing

    NASA Astrophysics Data System (ADS)

    Elansky, N.; Postylyakov, O.; Verevkin, Y.; Volobuev, L.; Ponomarev, N.

    2017-11-01

    A large amount of data on direct measurements of pollution and the thermodynamic state of the atmosphere in the Moscow region has by now been accumulated at stations of Roshydromet, Mosecomonitoring, the A.M. Obukhov Institute of Atmospheric Physics (OIAP), M.V. Lomonosov Moscow State University, and NPO Typhoon. This allows pollution emissions to be estimated from measurements and existing emission inventories, which are built mainly on indirect data such as population density and fuel consumption, to be corrected. Within the framework of the project, the whole volume of data on the ground-level concentrations of CO, NOx, SO2 and CH4 obtained at regularly operating Moscow Ecological Monitoring stations and at OIAP stations from 2005 to 2014 was systematized. The surface concentration data are supplemented by measurements of the integral content in the atmospheric boundary layer, obtained by differential spectroscopy methods (MAX-DOAS, ZDOAS) at stationary stations and by circling Moscow with a DOAS-equipped car. The paper presents preliminary estimates of pollution emissions in the Moscow region obtained from the collected array of experimental data. Emission estimates for Moscow were obtained experimentally in several ways: (1) from network observations of surface concentrations, (2) from measurements in the 0-348 m atmospheric layer at the Ostankino TV tower, (3) from the integral NO2 content in the ABL obtained by the DOAS technique at stationary stations, and (4) using a car with DOAS equipment traveling a closed route around Moscow (for NO2). All experimental approaches yielded close values of the pollution emissions for Moscow. The trends in CO, NOx and CH4 emissions are negative, and the trend in SO2 emissions is positive, over 2005 to 2014.

  14. Investigating the Role of Gravity Wave on Equatorial Ionospheric Irregularities using SABER and C/NOFS Satellites Observations

    NASA Astrophysics Data System (ADS)

    Nigussie, M.; Damtie, B.; Moldwin, M.; Yizengaw, E.; Tesema, F.; Tebabal, A.

    2017-12-01

    Theoretical simulations have shown that gravity wave (GW) seeded perturbations amplified by the Rayleigh-Taylor instability (RTI) result in ESF (equatorial spread F); however, there have been limited observational studies using simultaneous observations of GW and ionospheric parameters. In this paper, for the first time, simultaneous atmospheric temperature perturbation profiles due to GWs, obtained from the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument on board the TIMED satellite, and equatorial in-situ ion density and vertical plasma drift velocity observations with and without ESF activity, obtained from the C/NOFS satellite, are used to investigate the effect of GWs on the generation of ESF. The horizontal wavelengths of the ionospheric oscillations and the vertical wavelengths of the GWs have been estimated by applying wavelet transforms. Cross-wavelet analysis has also been applied between two closely observed temperature perturbation profiles to estimate the horizontal wavelength of the GWs. Moreover, vertically propagating GWs that dissipate energy in the upper atmosphere have been investigated using spectral analysis and compared with theoretical results. The analysis shows that when the ion density exhibits strong post-sunset irregularity between 20 and 24 LT, the vertically upward drift velocity increases between 17 and 19 LT, whereas the drift becomes vertically downward when the ion density shows smooth variation. The horizontal wavelengths estimated from the C/NOFS and SABER observations show excellent agreement when the ion density observations show strong fluctuations; otherwise, the agreement is poor. It is also found that the altitude profile of GW potential energy increases up to 90 km and then decreases significantly. The vertical wavelength of the GWs corresponding to the dominant spectral power ranges from about 7 km to 20 km regardless of the state of the ionosphere; however, GWs with vertical wavelengths between 100 m and 1 km are found to be saturated between 90 and 110 km whether or not the ionosphere exhibits irregularity. These results imply that ESF is due to the amplification, by the RTI, of perturbations resulting from energy dissipation of GWs with vertical wavelengths of 100 m to 1 km, mainly controlled by the pre-reversal enhancement of the zonal electric field.

  15. Saturn Ring Rain: New Observations and Estimates of Water Influx

    NASA Astrophysics Data System (ADS)

    Moore, L.; O'Donoghue, J.; Mueller-Wodarg, I.; Galand, M.; Mendillo, M.

    2014-04-01

    We estimate the maximum rates of water influx from Saturn's rings based on ionospheric model reproductions of derived H3+ column densities. On 17 April 2011, over two hours of near-infrared spectral data were obtained of Saturn using the Near InfraRed Spectrograph (NIRSPEC) instrument on the 10-m Keck II telescope. Two bright H3+ rotational-vibrational emission lines were visible nearly from pole to pole, allowing low-latitude ionospheric emissions to be studied for the first time, and revealing significant latitudinal structure, with local extrema in one hemisphere being mirrored at magnetically conjugate latitudes in the opposite hemisphere. In addition, those minima and maxima mapped to latitudes of increased or decreased density, respectively, in Saturn's rings, implying a direct ring-atmosphere connection in which charged water group particles from the rings are guided by magnetic field lines as they "rain" down upon the atmosphere. Water products act to quench the local ionosphere, and therefore modify the H3+ densities and their observed emissions. Using the Saturn Thermosphere Ionosphere Model (STIM), a 3-D model of Saturn's upper atmosphere, we derive the maximum rates of water influx required from the rings in order to reproduce the H3+ column densities observed on 17 April 2011. We estimate the globally averaged maximum ring-derived water influx to be (1.6-12) × 10^5 cm^-2 s^-1, which represents a maximum total global influx of water from Saturn's rings to its atmosphere of (1.0-6.8) × 10^26 s^-1. We will also present the initial findings of Keck ring rain observing campaigns from April 2013 and May 2014.

  16. Saturn’s Ring Rain: Initial Estimates of Ring Mass Loss Rates

    NASA Astrophysics Data System (ADS)

    Moore, Luke; O'Donoghue, J.; Mueller-Wodarg, I.; Mendillo, M.

    2013-10-01

    We estimate rates of mass loss from Saturn's rings based on ionospheric model reproductions of derived H3+ column densities. On 17 April 2011, over two hours of near-infrared spectral data were obtained of Saturn using the Near InfraRed Spectrograph (NIRSPEC) instrument on the 10-m Keck II telescope. Two bright H3+ rotational-vibrational emission lines were visible from nearly pole to pole, allowing low-latitude ionospheric emissions to be studied for the first time, and revealing significant latitudinal structure, with local extrema in one hemisphere being mirrored at magnetically conjugate latitudes in the opposite hemisphere. Even more striking, those minima and maxima mapped to latitudes of increased or decreased density in Saturn's rings, implying a direct ring-atmosphere connection in which charged water group particles from the rings are guided by magnetic field lines as they "rain" down upon the atmosphere. Water products act to quench the local ionosphere, and therefore modify the observed H3+ densities. Using the Saturn Thermosphere Ionosphere Model (STIM), a 3-D model of Saturn's upper atmosphere, we derive the rates of water influx required from the rings in order to reproduce the observed H3+ column densities. As a unique pair of conjugate latitudes maps to a specific radial distance in the ring plane, the derived water influxes can equivalently be described as rates of ring mass erosion as a function of radial distance in the ring plane, and therefore also allow for an improved estimate of the lifetime of Saturn's rings.

  17. Dispersion patterns and sampling plans for Diaphorina citri (Hemiptera: Psyllidae) in citrus.

    PubMed

    Sétamou, Mamoudou; Flores, Daniel; French, J Victor; Hall, David G

    2008-08-01

    The abundance and spatial dispersion of Diaphorina citri Kuwayama (Hemiptera: Psyllidae) were studied in 34 grapefruit (Citrus paradisi Macfad.) and six sweet orange [Citrus sinensis (L.) Osbeck] orchards from March to August 2006, when the pest is most abundant in southern Texas. Although flush shoot infestation levels did not vary with host plant species, densities of D. citri eggs, nymphs, and adults were significantly higher on sweet orange than on grapefruit. D. citri immatures also were found in significantly higher numbers in the southeastern quadrant of trees than in other parts of the canopy. The spatial distribution of D. citri nymphs and adults was analyzed using Iwao's patchiness regression and Taylor's power law. Taylor's power law fitted the data better than Iwao's model. Based on both regression models, the field dispersion patterns of D. citri nymphs and adults were aggregated among flush shoots in individual trees, as indicated by regression slopes that were significantly >1. For the average density of each life stage obtained during our surveys, the minimum number of flush shoots per tree needed to estimate D. citri densities varied from eight for eggs to four for adults. Projections indicated that a sampling plan consisting of 10 trees and eight flush shoots per tree would provide density estimates of the three developmental stages of D. citri that are acceptable for population studies and management decisions. A presence-absence sampling plan with a fixed precision level was developed and can be used to provide a quick estimation of D. citri populations in citrus orchards.
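
    A generic sketch of fitting Taylor's power law mentioned above: the sample variance s^2 and mean m of counts per flush shoot are related by s^2 = a*m^b, so regressing log s^2 on log m gives the aggregation index b (b > 1 indicates aggregation). The counts below are synthetic, not the survey data.

        # Taylor's power law fit (s^2 = a * m^b) on synthetic count data; a slope
        # b > 1 indicates an aggregated dispersion pattern.
        import numpy as np

        rng = np.random.default_rng(2)
        means, variances = [], []
        for mu in [0.5, 1, 2, 4, 8, 16]:                                    # hypothetical mean densities per flush shoot
            counts = rng.negative_binomial(n=2, p=2 / (2 + mu), size=200)   # aggregated (negative binomial) counts
            means.append(counts.mean())
            variances.append(counts.var(ddof=1))

        b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
        print(f"Taylor's b = {b:.2f}, a = {np.exp(log_a):.2f}")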

  18. The Correlation Between Porosity, Density and Degree of Serpentinization in Ophiolites from Point Sal, California: Implications for Strength of Oceanic Lithosphere

    NASA Astrophysics Data System (ADS)

    Karrasch, A. K.; Farough, A.; Lowell, R. P.

    2017-12-01

    Hydration and serpentinization of the oceanic lithosphere influence its strength and behavior under stress. Serpentine content is the limiting factor in deformation, and the correlation between crustal strength and the degree of serpentinization is not linear. Escartin et al. [2001] showed that the presence of only 10% serpentine results in a nominally non-dilatant mode of brittle deformation and reduces the strength of peridotites dramatically. In this study, we measured the density and porosity of ophiolite samples from Point Sal, CA, with various degrees of serpentinization. The densities ranged between 2500 and 3000 kg/m^3 and the porosities between 2.1 and 4.8%. The degree of serpentinization was estimated from mineralogical analysis, and these data were combined with those of four other samples analyzed by Farough et al. [2016], which were obtained from various localities. The degree of serpentinization varied between 0.6 and 40%. We found that the degree of serpentinization was inversely correlated with density, with a slope of 7.25 (kg/m^3)/%. Using the models of Horen et al. [1996], the estimated P-wave velocities of the samples ranged between 6.75 and 7.90 km/s and the S-wave velocities between 3.58 and 4.35 km/s. There was no distinguishable difference in the results between olivine-rich and pyroxene-rich samples. These results, along with correlations to strength and deformation style, can be used as a reference for mechanical properties of the crust at depth, for the analysis of deep drill cores, and to estimate the rate of weakening of the oceanic crust after the onset of serpentinization reactions.

  19. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles

    PubMed Central

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city-level planning. The optimal charging-station density is derived by minimizing the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined for various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially in areas that lack information, such as exact candidate sites for charging stations and other data related to electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles. PMID:26575845

  20. Constraining planetary atmospheric density: application of heuristic search algorithms to aerodynamic modeling of impact ejecta trajectories

    NASA Astrophysics Data System (ADS)

    Liu, Z. Y. C.; Shirzaei, M.

    2015-12-01

    Impact craters on the terrestrial planets are typically surrounded by a continuous ejecta blanket whose initial emplacement is via ballistic sedimentation. Following an impact event, a significant volume of material is ejected and the falling debris surrounds the crater. Aerodynamics governs the flight paths and determines the spatial distribution of the ejecta. Thus, for planets with an atmosphere, the preserved ejecta deposit directly records the interaction of the ejecta with the atmosphere at the time of impact. In this study, we develop a new framework to establish links between the distribution of the ejecta, the age of the impact, and the properties of the local atmosphere. Given the radial extent of the continuous ejecta from the crater, an inverse aerodynamic modeling approach is employed to estimate the local atmospheric drag and density, as well as the lift forces, at the time of impact. Based on earlier studies, we incorporate reasonable value ranges for ejection angle, initial velocity, aerodynamic drag, and lift in the model. A genetic algorithm is applied to solve the trajectory differential equations and obtain the best estimate of atmospheric density together with the associated uncertainties. The method is validated using synthetic data sets as well as detailed maps of impact ejecta associated with five fresh martian and two lunar impact craters, with diameters of 20-50 m and 10-20 m, respectively. The estimated air densities for the martian craters range from 0.014 to 0.028 kg/m^3, consistent with the recent surface atmospheric density measurements of 0.015-0.020 kg/m^3. This consistency indicates the robustness of the presented methodology. The inversion results for the lunar craters yield air densities of 0.003-0.008 kg/m^3, which suggests the inversion results are accurate to the second decimal place. This framework will be applied to older martian craters with preserved ejecta blankets, which is expected to constrain the long-term evolution of the martian atmosphere.
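
    A minimal sketch of the forward model underlying such an inversion: a ballistic trajectory with a quadratic drag term integrated numerically, showing how the landing range shrinks as the assumed atmospheric density increases. The parameter values are illustrative and the genetic-algorithm search itself is not shown.

        # Forward model sketch: 2-D ballistic trajectory of an ejecta particle with
        # quadratic aerodynamic drag.  Illustrative parameters only.
        import numpy as np
        from scipy.integrate import solve_ivp

        def range_with_drag(v0, angle_deg, rho_air, g=3.71, cd=1.0, radius=0.05, rho_rock=3000.0):
            """Landing distance (m) of a spherical particle launched with speed v0 (m/s)
            at angle_deg above horizontal through air of density rho_air (kg/m^3)."""
            area = np.pi * radius ** 2
            mass = rho_rock * (4.0 / 3.0) * np.pi * radius ** 3
            k = 0.5 * cd * rho_air * area / mass

            def rhs(t, s):
                x, z, vx, vz = s
                v = np.hypot(vx, vz)
                return [vx, vz, -k * v * vx, -g - k * v * vz]

            def hit_ground(t, s):          # stop integration when the particle returns to the surface
                return s[1]
            hit_ground.terminal = True
            hit_ground.direction = -1

            s0 = [0.0, 1e-3, v0 * np.cos(np.radians(angle_deg)), v0 * np.sin(np.radians(angle_deg))]
            sol = solve_ivp(rhs, (0.0, 1e4), s0, events=hit_ground, max_step=0.5)
            return sol.y[0, -1]

        # landing range shrinks as the assumed atmospheric density increases
        for rho in [0.0, 0.015, 0.03]:
            print(rho, range_with_drag(v0=100.0, angle_deg=45.0, rho_air=rho))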

  1. Stochastic seasonality and nonlinear density-dependent factors regulate population size in an African rodent

    USGS Publications Warehouse

    Leirs, H.; Stenseth, N.C.; Nichols, J.D.; Hines, J.E.; Verhagen, R.; Verheyen, W.

    1997-01-01

    Ecology has long been troubled by the controversy over how populations are regulated. Some ecologists focus on the role of environmental effects, whereas others argue that density-dependent feedback mechanisms are central. The relative importance of both processes is still hotly debated, but clear examples of both processes acting in the same population are rare. Key-factor analysis (regression of population changes on possible causal factors) and time-series analysis are often used to investigate the presence of density dependence, but such approaches may be biased and provide no information on actual demographic rates. Here we report on both density-dependent and density-independent effects in a murid rodent pest species, the multimammate rat Mastomys natalensis (Smith, 1834), using statistical capture-recapture models. Both effects occur simultaneously, but we also demonstrate that they do not affect all demographic rates in the same way. We have incorporated the obtained estimates of demographic rates in a population dynamics model and show that the observed dynamics are affected by stabilizing nonlinear density-dependent components coupled with strong deterministic and stochastic seasonal components.

  2. Systematic influences of gamma-ray spectrometry data near the decision threshold for radioactivity measurements in the environment.

    PubMed

    Zorko, Benjamin; Korun, Matjaž; Mora Canadas, Juan Carlos; Nicoulaud-Gouin, Valerie; Chyly, Pavol; Blixt Buhr, Anna Maria; Lager, Charlotte; Aquilonius, Karin; Krajewski, Pawel

    2016-07-01

    Several methods for reporting outcomes of gamma-ray spectrometric measurements of environmental samples for dose calculations are presented and discussed. The measurement outcomes can be reported as primary measurement results, primary measurement results modified according to the quantification limit, best estimates obtained by the Bayesian posterior (ISO 11929), best estimates obtained by the probability density distribution resembling shifting, or according to the procedure recommended by the European Commission (EC). The annual dose is calculated from the arithmetic average using any of these five procedures. It was shown that the primary measurement results modified according to the quantification limit could lead to an underestimation of the annual dose. On the other hand, the best estimates lead to an overestimation of the annual dose. The annual doses calculated from the measurement outcomes obtained according to the EC's recommended procedure, which does not cope with the uncertainties, fluctuate between an under- and overestimation, depending on the frequency of measurement results that are larger than the limit of detection. In the extreme case, when no measurement results above the detection limit occur, the average over primary measurement results modified according to the quantification limit underestimates the average over primary measurement results by about 80%. The average over best estimates calculated according to the procedure resembling shifting overestimates the average over primary measurement results by 35%, the average obtained by the Bayesian posterior by 85%, and the treatment according to the EC recommendation by 89%. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slater, Paul B.

    Paralleling our recent computationally intensive (quasi-Monte Carlo) work for the case N=4 (e-print quant-ph/0308037), we undertake the task for N=6 of computing to high numerical accuracy the formulas of Sommers and Zyczkowski (e-print quant-ph/0304041) for the (N^2-1)-dimensional volume and (N^2-2)-dimensional hyperarea of the (separable and nonseparable) NxN density matrices, based on the Bures (minimal monotone) metric, and also their analogous formulas (e-print quant-ph/0302197) for the (nonmonotone) flat Hilbert-Schmidt metric. With the same 7 × 10^9 well-distributed ('low-discrepancy') sample points, we estimate the unknown volumes and hyperareas based on five additional (monotone) metrics of interest, including the Kubo-Mori and Wigner-Yanase. Further, we estimate all of these seven volume and seven hyperarea (unknown) quantities when restricted to the separable density matrices. The ratios of separable volumes (hyperareas) to separable plus nonseparable volumes (hyperareas) yield estimates of the separability probabilities of generically rank-6 (rank-5) density matrices. The (rank-6) separability probabilities obtained from the 35-dimensional volumes appear to be, independently of the metric employed (each of the seven inducing Haar measure), twice as large as those (rank-5 ones) based on the 34-dimensional hyperareas. (An additional estimate, 33.9982, of the ratio of the rank-6 Hilbert-Schmidt separability probability to the rank-4 one is quite clearly close to an integer as well.) The doubling relationship also appears to hold for the N=4 case for the Hilbert-Schmidt metric, but not the others. We fit simple exact formulas to our estimates of the Hilbert-Schmidt separable volumes and hyperareas in both the N=4 and N=6 cases.

  4. Quantitative in vivo receptor binding. III. Tracer kinetic modeling of muscarinic cholinergic receptor binding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frey, K.A.; Hichwa, R.D.; Ehrenkaufer, R.L.

    1985-10-01

    A tracer kinetic method is developed for the in vivo estimation of high-affinity radioligand binding to central nervous system receptors. Ligand is considered to exist in three brain pools corresponding to free, nonspecifically bound, and specifically bound tracer. These environments, in addition to that of intravascular tracer, are interrelated by a compartmental model of in vivo ligand distribution. A mathematical description of the model is derived, which allows determination of regional blood-brain barrier permeability, nonspecific binding, the rate of receptor-ligand association, and the rate of dissociation of bound ligand from the time courses of arterial blood and tissue tracer concentrations. The term ''free receptor density'' is introduced to describe the receptor population measured by this method. The technique is applied to the in vivo determination of regional muscarinic acetylcholine receptors in the rat, with the use of (3H)scopolamine. Kinetic estimates of free muscarinic receptor density are in general agreement with binding capacities obtained from previous in vivo and in vitro equilibrium binding studies. In the striatum, however, kinetic estimates of free receptor density are less than those in the neocortex, a reversal of the rank ordering of these regions derived from equilibrium determinations. A simplified model is presented that is applicable to tracers that do not readily dissociate from specific binding sites during the experimental period.
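
    A generic sketch of the kind of compartmental tracer model described above (free, nonspecifically bound, and specifically bound pools fed by an arterial input function). The rate constants and input function below are placeholders, not values from the study.

        # Generic three-compartment tracer kinetic sketch (free, specific, nonspecific
        # pools fed by an arterial input function).  All constants are placeholders.
        import numpy as np
        from scipy.integrate import solve_ivp

        K1, k2 = 0.1, 0.05          # plasma <-> free exchange (1/min), hypothetical
        k3, k4 = 0.08, 0.02         # free <-> specifically bound (receptor) exchange
        k5, k6 = 0.04, 0.03         # free <-> nonspecifically bound exchange

        def ca(t):
            """Hypothetical arterial plasma input function."""
            return t * np.exp(-t / 2.0)

        def rhs(t, c):
            cf, cb, cns = c          # free, specifically bound, nonspecifically bound
            dcf = K1 * ca(t) - (k2 + k3 + k5) * cf + k4 * cb + k6 * cns
            dcb = k3 * cf - k4 * cb
            dcns = k5 * cf - k6 * cns
            return [dcf, dcb, dcns]

        t_eval = np.linspace(0.0, 60.0, 121)
        sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0, 0.0], t_eval=t_eval)
        tissue = sol.y.sum(axis=0)   # total tissue tracer concentration vs. time
        print(tissue[::20])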

  5. MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.

    PubMed

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-21

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  6. MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes

    NASA Astrophysics Data System (ADS)

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-01

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  7. Static compression of Fe0.83Ni0.09Si0.08 alloy to 374 GPa and Fe0.93Si0.07 alloy to 252 GPa: Implications for the Earth's inner core

    NASA Astrophysics Data System (ADS)

    Asanuma, Hidetoshi; Ohtani, Eiji; Sakai, Takeshi; Terasaki, Hidenori; Kamada, Seiji; Hirao, Naohisa; Ohishi, Yasuo

    2011-10-01

    The pressure-volume equations of state of the iron-nickel-silicon alloy Fe0.83Ni0.09Si0.08 (Fe-9.8 wt.% Ni-4.0 wt.% Si) and the iron-silicon alloy Fe0.93Si0.07 (Fe-3.4 wt.% Si) have been investigated up to 374 GPa and 252 GPa, respectively. The present compression data covered pressures of the Earth's core. We confirmed that both Fe0.83Ni0.09Si0.08 and Fe0.93Si0.07 alloys remain in the hexagonal close-packed structure at all pressures studied. We obtained the density of these alloys at the pressure of the inner core boundary (ICB), 330 GPa, at 300 K by fitting the compression data to the third-order Birch-Murnaghan equation of state. Using these density values combined with the previous data for hcp-Fe, hcp-Fe0.8Ni0.2, and hcp-Fe0.84Si0.16 alloys, and comparing with the density of the PREM inner core, we estimated the Ni and Si contents of the inner core. The Si content of the inner core estimated here is slightly greater than that estimated previously based on the sound velocity measurement of the hcp-Fe-Ni-Si alloy at high pressure.
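
    A generic sketch of the third-order Birch-Murnaghan equation of state used in such fits, P(V) = (3/2)K0[(V0/V)^(7/3) - (V0/V)^(5/3)]{1 + (3/4)(K0' - 4)[(V0/V)^(2/3) - 1]}, evaluated for illustrative parameter values (not the paper's fitted ones).

        # Third-order Birch-Murnaghan equation of state, P(V), as used in fits of
        # compression data.  Parameter values below are illustrative only.
        import numpy as np

        def birch_murnaghan_3rd(v, v0, k0, k0_prime):
            """Pressure (same units as k0) at volume v, given zero-pressure volume v0,
            bulk modulus k0, and its pressure derivative k0_prime."""
            x = (v0 / v) ** (1.0 / 3.0)
            return (1.5 * k0 * (x ** 7 - x ** 5)
                    * (1.0 + 0.75 * (k0_prime - 4.0) * (x ** 2 - 1.0)))

        # illustrative hcp-iron-like parameters (GPa, arbitrary volume units)
        v0, k0, k0p = 6.73, 165.0, 5.3
        for v in [5.0, 4.5, 4.0]:
            print(v, birch_murnaghan_3rd(v, v0, k0, k0p))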

  8. Evolution of the substructure of a novel 12% Cr steel under creep conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yadav, Surya Deo, E-mail: surya.yadav@tugraz.at; Kalácska, Szilvia, E-mail: kalacska@metal.elte.hu; Dománková, Mária, E-mail: maria.domankova@stuba.sk

    2016-05-15

    In this work we study the microstructure evolution of a newly developed 12% Cr martensitic/ferritic steel in the as-received condition and after creep at 650 °C under 130 MPa and 80 MPa. The microstructure is described as consisting of mobile dislocations, dipole dislocations, boundary dislocations, precipitates, lath boundaries, block boundaries, packet boundaries and prior austenitic grain boundaries. The material is characterized employing light optical microscopy (LOM), scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD) and electron backscatter diffraction (EBSD). TEM is used to characterize the dislocations (mobile + dipole) inside the subgrains, and XRD measurements are used to characterize the mobile dislocations. Based on the subgrain boundary misorientations obtained from EBSD measurements, the boundary dislocation density is estimated. The total dislocation density is estimated for the as-received and crept conditions by adding the mobile, boundary and dipole dislocation densities. Additionally, the subgrain size is estimated from the EBSD measurements. In this publication we propose the combined use of three characterization techniques, TEM, XRD and EBSD, as necessary to characterize all types of dislocations and quantify the total dislocation density in martensitic/ferritic steels. - Highlights: • Creep properties of a novel 12% Cr steel alloyed with Ta • Experimental characterization of different types of dislocations: mobile, dipole and boundary • Characterization and interpretation of the substructure evolution using a unique combination of TEM, XRD and EBSD.

  9. Redox potential distribution of an organic-rich contaminated site obtained by the inversion of self-potential data

    NASA Astrophysics Data System (ADS)

    Abbas, M.; Jardani, A.; Soueid Ahmed, A.; Revil, A.; Brigaud, L.; Bégassat, Ph.; Dupont, J. P.

    2017-11-01

    Mapping the redox potential of shallow aquifers impacted by hydrocarbon contaminant plumes is important for the characterization and remediation of such contaminated sites. The redox potential of groundwater is indicative of the biodegradation of hydrocarbons and is important in delineating the shapes of contaminant plumes. The self-potential method was used to reconstruct the redox potential of groundwater associated with an organic-rich contaminant plume in northern France. The self-potential technique is a passive technique consisting in recording the electrical potential distribution at the surface of the Earth. A self-potential map is essentially the sum of two contributions, one associated with groundwater flow referred to as the electrokinetic component, and one associated with redox potential anomalies referred to as the electroredox component (thermoelectric and diffusion potentials are generally negligible). A groundwater flow model was first used to remove the electrokinetic component from the observed self-potential data. Then, a residual self-potential map was obtained. The source current density generating the residual self-potential signals is assumed to be associated with the position of the water table, an interface characterized by a change in both the electrical conductivity and the redox potential. The source current density was obtained through an inverse problem by minimizing a cost function including a data misfit contribution and a regularizer. This inversion algorithm allows the determination of the vertical and horizontal components of the source current density taking into account the electrical conductivity distribution of the saturated and non-saturated zones obtained independently by electrical resistivity tomography. The redox potential distribution was finally determined from the inverted residual source current density. A redox map was successfully built and the estimated redox potential values correlated well with in-situ measurements.

  10. High current density ion beam obtained by a transition to a highly focused state in extremely low-energy region.

    PubMed

    Hirano, Y; Kiyama, S; Fujiwara, Y; Koguchi, H; Sakakita, H

    2015-11-01

    A high current density (≈3 mA/cm^2) hydrogen ion beam source operating in an extremely low-energy region (E_ib ≈ 150-200 eV) has been realized by using a transition to a highly focused state, where the beam is extracted from the ion source chamber through three concave electrodes with nominal focal lengths of ≈350 mm. The transition occurs when the beam energy exceeds a threshold value between 145 and 170 eV. Low-level hysteresis is observed in the transition when E_ib is being reduced. The radial profiles of the ion beam current density and the low temperature ion current density can be obtained separately using a Faraday cup with a grid in front. The measured profiles confirm that more than a half of the extracted beam ions reaches the target plate with a good focusing profile with a full width at half maximum of ≈3 cm. Estimation of the particle balances in beam ions, the slow ions, and the electrons indicates the possibility that the secondary electron emission from the target plate and electron impact ionization of hydrogen may play roles as particle sources in this extremely low-energy beam after the compensation of beam ion space charge.

  11. Note: Real-time monitoring via second-harmonic interferometry of a flow gas cell for laser wakefield acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandi, F., E-mail: fernando.brandi@ino.it; Istituto Italiano di Tecnologia; Giammanco, F.

    2016-08-15

    The use of a gas cell as a target for laser wakefield acceleration (LWFA) offers the possibility to obtain a stable and manageable laser-plasma interaction process, a mandatory condition for practical applications of this emerging technique, especially in multi-stage accelerators. In order to obtain full control of the gas particle number density in the interaction region, thus allowing for long-term stable and manageable LWFA, real-time monitoring is necessary. In fact, the ideal gas law cannot be used to estimate the particle density inside the flow cell based on the preset backing pressure and the room temperature, because the gas flow depends on several factors like tubing, regulators, and valves in the gas supply system, as well as vacuum chamber volume and vacuum pump speed/throughput. Here, second-harmonic interferometry is applied to measure the particle number density inside a flow gas cell designed for LWFA. The results demonstrate that real-time monitoring is achieved and that using low backing pressure gas (<1 bar) and different cell orifice diameters (<2 mm) it is possible to finely tune the number density up to the 10^19 cm^-3 range well suited for LWFA.
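
    For context, the naive ideal-gas estimate that the abstract argues is insufficient on its own is n = P/(k_B T). A quick sketch with illustrative numbers:

        # Naive ideal-gas estimate of particle number density, n = P / (kB * T);
        # the abstract's point is that the actual in-cell density deviates from this,
        # hence the need for interferometric monitoring.  Numbers are illustrative.
        kB = 1.380649e-23          # Boltzmann constant, J/K

        def number_density_cm3(pressure_pa, temperature_k):
            return pressure_pa / (kB * temperature_k) * 1e-6   # m^-3 -> cm^-3

        print(f"{number_density_cm3(1.0e5, 293.0):.2e} cm^-3")  # ~1 bar at room temperature, ~2.5e19 cm^-3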

  12. Temporal variation in bird counts within a Hawaiian rainforest

    USGS Publications Warehouse

    Simon, John C.; Pratt, T.K.; Berlin, Kim E.; Kowalsky, James R.; Fancy, S.G.; Hatfield, J.S.

    2002-01-01

    We studied monthly and annual variation in density estimates of nine forest bird species along an elevational gradient in an east Maui rainforest. We conducted monthly variable circular-plot counts for 36 consecutive months along transects running downhill from timberline. Density estimates were compared by month, year, and station for all resident bird species with sizeable populations, including four native nectarivores, two native insectivores, a non-native insectivore, and two non-native generalists. We compared densities among three elevational strata and between breeding and nonbreeding seasons. All species showed significant differences in density estimates among months and years. Three native nectarivores had higher density estimates within their breeding season (December-May) and showed decreases during periods of low nectar production following the breeding season. All insectivore and generalist species except one had higher density estimates within their March-August breeding season. Density estimates also varied with elevation for all species, and for four species a seasonal shift in population was indicated. Our data show that the best time to conduct counts for native forest birds on Maui is January-February, when birds are breeding or preparing to breed, counts are typically high, variability in density estimates is low, and the likelihood for fair weather is best. Temporal variations in density estimates documented in our study site emphasize the need for consistent, well-researched survey regimens and for caution when drawing conclusions from, or basing management decisions on, survey data.

  13. Online sequential Monte Carlo smoother for partially observed diffusion processes

    NASA Astrophysics Data System (ADS)

    Gloaguen, Pierre; Étienne, Marie-Pierre; Le Corff, Sylvain

    2018-12-01

    This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. This method relies on a new sequential Monte Carlo method which allows such approximations to be computed online, i.e., as the observations are received, and with a computational complexity growing linearly with the number of Monte Carlo samples. The original algorithm cannot be used in the case of partially observed stochastic differential equations since the transition density of the latent data is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity with an unbiased estimator obtained, for instance, using general Poisson estimators. This estimator is proved to be consistent and its performance is illustrated using data from two models.
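
    As a rough illustration of the forward-only (online) computation of a smoothed additive functional with a bootstrap particle filter, the sketch below uses a toy linear-Gaussian model in which the transition density is known in closed form; in the partially observed SDE setting of this record, that density would be replaced by an unbiased estimator (e.g., a general Poisson estimator), and the naive genealogy-based update shown here suffers from the path degeneracy that the paper's algorithm is designed to avoid:

    ```python
    # Online particle approximation of E[sum_k x_k | y_0:T-1] for a toy AR(1) model.
    import numpy as np

    rng = np.random.default_rng(1)
    T, N = 100, 500                     # time steps, particles
    phi, sx, sy = 0.9, 1.0, 1.0         # AR(1) coefficient, state and obs noise std

    # Simulate latent states and observations.
    x = np.zeros(T)
    for k in range(1, T):
        x[k] = phi * x[k - 1] + sx * rng.normal()
    y = x + sy * rng.normal(size=T)

    # Bootstrap particle filter carrying the running additive statistic per particle.
    particles = rng.normal(size=N)      # x_0 ~ N(0, 1)
    stats = particles.copy()            # S_0 = x_0
    for k in range(T):
        if k > 0:
            particles = phi * particles + sx * rng.normal(size=N)  # propagate
            stats = stats + particles                              # S_k = S_{k-1} + x_k
        w = np.exp(-0.5 * ((y[k] - particles) / sy) ** 2)
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)                           # resample
        particles, stats = particles[idx], stats[idx]

    smoothed_sum = stats.mean()         # online estimate of the additive functional
    ```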

  14. Spatial pattern corrections and sample sizes for forest density estimates of historical tree surveys

    Treesearch

    Brice B. Hanberry; Shawn Fraver; Hong S. He; Jian Yang; Dan C. Dey; Brian J. Palik

    2011-01-01

    The U.S. General Land Office land surveys document trees present during European settlement. However, use of these surveys for calculating historical forest density and other derived metrics is limited by uncertainty about the performance of plotless density estimators under a range of conditions. Therefore, we tested two plotless density estimators, developed by...

  15. Prey versus substrate as determinants of habitat choice in a feeding shorebird

    NASA Astrophysics Data System (ADS)

    Finn, Paul G.; Catterall, Carla P.; Driscoll, Peter V.

    2008-11-01

    Many shorebirds on their non-breeding grounds feed on macrobenthic fauna that become available at low tide in coastal intertidal flats. The Eastern Curlew (Numenius madagascariensis) in Moreton Bay, Australia, varies greatly in density among different tidal flats. This study asks: how important is the abundance of intertidal prey as a predictor of this variation? We quantified feeding curlews' diet across 12 sites (different tidal flats, each re-visited at least eight times), through 970 focal observations. We also estimated the abundance of total macrobenthic fauna, potential prey taxa and crustacean prey on each tidal flat, measured as the number of individuals and a relative biomass index per unit substrate surface area obtained from substrate core samples. We estimated curlew density at each site using low-tide surveys from every site visit. Curlew density showed a strong positive association with both the density and biomass of fauna and of potential prey (r values all around 0.70) across the 12 flats. Associations with crustacean density and biomass were also statistically significant (r values both 0.60). However, these variables also showed a strong negative correlation with a measure of substrate resistance (based on the amount of hard material in the substrate core), which was the best predictor of curlew density (r = -0.82). Curlews were most abundant at sites with the least resistant substrate, and these sites also generally had the highest faunal density and biomass. When the effect of substrate resistance was statistically removed, curlew density was no longer significantly correlated with fauna density and biomass. This suggests that macro-scale habitat choice by Eastern Curlew on their non-breeding grounds is more strongly influenced by prey availability (which is higher when substrate resistance is lower) than by prey density or biomass, although in Moreton Bay a positive correlation across sites meant that these factors were synergistic.
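
    The "statistical removal" of substrate resistance described above is essentially a partial correlation. A minimal sketch, with synthetic placeholder arrays rather than the study's data:

    ```python
    # Partial correlation of curlew density and prey density given substrate resistance.
    import numpy as np

    def partial_corr(x, y, z):
        """Correlation of x and y after regressing the covariate z out of both."""
        Z = np.column_stack([np.ones_like(z), z])
        rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        return np.corrcoef(rx, ry)[0, 1]

    rng = np.random.default_rng(2)
    resistance = rng.normal(size=12)                        # substrate resistance index
    prey = -0.8 * resistance + 0.3 * rng.normal(size=12)    # prey density (illustrative)
    curlew = -0.8 * resistance + 0.3 * rng.normal(size=12)  # curlew density (illustrative)

    raw_r = np.corrcoef(curlew, prey)[0, 1]             # strong raw correlation
    partial_r = partial_corr(curlew, prey, resistance)  # much weaker once resistance is removed
    ```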

  16. Toward accurate and precise estimates of lion density.

    PubMed

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2017-08-01

    Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km², and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and policy decisions. © 2016 Society for Conservation Biology.

  17. Propane spectral resolution enhancement by the maximum entropy method

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.

    1990-01-01

    The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2¹⁸-sample interferogram, giving a maximum unapodized resolution of 0.06 cm⁻¹. This estimate was then interpolated by zero-filling an additional 2¹⁸ points, and the final resolution was taken to be 0.06 cm⁻¹. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45 cm⁻¹ region of the spectrum for several increasing record lengths of interferogram data beginning at 2¹⁰. It is found that over this region the MEM estimate with 2¹⁶ data samples is in close agreement with the FFT estimate using 2¹⁸ samples.
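
    For readers unfamiliar with the Burg method, the sketch below implements the standard Burg recursion and the resulting maximum-entropy (autoregressive) power spectral density on a synthetic two-tone signal; it is a generic illustration, not the report's interferogram processing:

    ```python
    # Burg maximum-entropy spectral estimation (AR model fitted by the Burg recursion).
    import numpy as np

    def burg_ar(x, order):
        """Return AR coefficients a (a[0] = 1) and the residual noise power E."""
        x = np.asarray(x, dtype=float)
        f, b = x.copy(), x.copy()               # forward / backward prediction errors
        a = np.array([1.0])
        E = np.dot(x, x) / len(x)
        for m in range(order):
            ef, eb = f[m + 1:], b[m:-1]
            k = -2.0 * np.dot(ef, eb) / (np.dot(ef, ef) + np.dot(eb, eb))
            a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
            E *= (1.0 - k ** 2)
            f[m + 1:], b[m + 1:] = ef + k * eb, eb + k * ef
        return a, E

    def burg_psd(x, order, nfft=8192):
        """Maximum-entropy PSD on normalized frequencies [0, 0.5]."""
        a, E = burg_ar(x, order)
        A = np.fft.rfft(a, nfft)
        return np.fft.rfftfreq(nfft), E / np.abs(A) ** 2

    # Two closely spaced tones in noise: the MEM estimate resolves them from a
    # much shorter record than an unpadded periodogram would need.
    rng = np.random.default_rng(3)
    n = np.arange(256)
    sig = (np.cos(2 * np.pi * 0.20 * n) + np.cos(2 * np.pi * 0.21 * n)
           + 0.5 * rng.normal(size=n.size))
    freqs, psd = burg_psd(sig, order=30)
    ```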

  18. Upper limit set by causality on the tidal deformability of a neutron star

    NASA Astrophysics Data System (ADS)

    Van Oeveren, Eric D.; Friedman, John L.

    2017-04-01

    A principal goal of gravitational-wave astronomy is to constrain the neutron star equation of state (EOS) by measuring the tidal deformability of neutron stars. The tidally induced departure of the waveform from that of a point particle [or a spinless binary black hole (BBH)] increases with the stiffness of the EOS. We show that causality (the requirement that the speed of sound be less than the speed of light for a perfect fluid satisfying a one-parameter equation of state) places an upper bound on tidal deformability as a function of mass. Like the upper mass limit, the limit on deformability is obtained by using an EOS with v_sound = c for high densities and matching to a low density (candidate) EOS at a matching density of order nuclear saturation density. We use these results and those of Lackey et al. [Phys. Rev. D 89, 043009 (2014), 10.1103/PhysRevD.89.043009] to estimate the resulting upper limit on the gravitational-wave phase shift of a black hole-neutron star (BHNS) binary relative to a BBH. Even for assumptions weak enough to allow a maximum mass of 4 M⊙ (a match at nuclear saturation density to an unusually stiff low-density candidate EOS), the upper limit on dimensionless tidal deformability is stringent. It leads to a still more stringent estimated upper limit on the maximum tidally induced phase shift prior to merger. We comment in an appendix on the relation among causality, the condition v_sound

  19. Langley Atmospheric Information Retrieval System (LAIRS): System description and user's guide

    NASA Technical Reports Server (NTRS)

    Boland, D. E., Jr.; Lee, T.

    1982-01-01

    This document presents the user's guide, system description, and mathematical specifications for the Langley Atmospheric Information Retrieval System (LAIRS). It also includes a description of an optimal procedure for operational use of LAIRS. The primary objective of the LAIRS Program is to make it possible to obtain accurate estimates of atmospheric pressure, density, temperature, and winds along Shuttle reentry trajectories for use in postflight data reduction.

  20. The distribution of seismic velocities and attenuation in the earth. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hart, R. S.

    1977-01-01

    Estimates of the radial distribution of seismic velocities and density and of seismic attenuation within the earth are obtained through inversion of body wave, surface wave, and normal mode data. The effect of attenuation related dispersion on gross earth structure, and on the reliability of eigenperiod identifications is discussed. The travel time baseline discrepancies between body waves and free oscillation models are examined and largely resolved.

  1. A Far-ultraviolet Fluorescent Molecular Hydrogen Emission Map of the Milky Way Galaxy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jo, Young-Soo; Min, Kyoung-Wook; Seon, Kwang-Il

    We present the far-ultraviolet (FUV) fluorescent molecular hydrogen (H₂) emission map of the Milky Way Galaxy obtained with FIMS/SPEAR covering ∼76% of the sky. The extinction-corrected intensity of the fluorescent H₂ emission has a strong linear correlation with the well-known tracers of the cold interstellar medium (ISM), including color excess E(B–V), neutral hydrogen column density N(H I), and Hα emission. The all-sky H₂ column density map was also obtained using a simple photodissociation region model and interstellar radiation fields derived from UV star catalogs. We estimated the fraction of H₂ (f_H2) and the gas-to-dust ratio (GDR) of the diffuse ISM. The f_H2 gradually increases from <1% at optically thin regions where E(B–V) < 0.1 to ∼50% for E(B–V) = 3. The estimated GDR is ∼5.1 × 10²¹ atoms cm⁻² mag⁻¹, in agreement with the standard value of 5.8 × 10²¹ atoms cm⁻² mag⁻¹.

  2. Fully Automated Quantitative Estimation of Volumetric Breast Density from Digital Breast Tomosynthesis Images: Preliminary Results and Comparison with Digital Mammography and MR Imaging.

    PubMed

    Pertuz, Said; McDonald, Elizabeth S; Weinstein, Susan P; Conant, Emily F; Kontos, Despina

    2016-04-01

    To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board-approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration-cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging-based estimates with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VBD estimates from DBT and MR imaging were not significant (P = .26). Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment.
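
    A minimal sketch of the statistical comparison described above (pairwise Pearson correlations plus a multiple-comparison test with a Tukey correction), using synthetic placeholder arrays rather than patient data; the Tukey-Kramer adjustment is approximated here with statsmodels' Tukey HSD routine:

    ```python
    # Correlation and post hoc comparison of VBD estimates from three modalities.
    import numpy as np
    from scipy.stats import pearsonr
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(4)
    vbd_mr = np.clip(rng.normal(16.6, 11.2, 68), 1.0, None)   # VBD (%) per woman
    vbd_dbt = vbd_mr + rng.normal(3.0, 5.0, 68)               # correlated with MR
    vbd_ffdm = 0.6 * vbd_mr + rng.normal(1.0, 3.0, 68)

    r_dbt_mr, p_value = pearsonr(vbd_dbt, vbd_mr)             # pairwise correlation

    values = np.concatenate([vbd_ffdm, vbd_dbt, vbd_mr])
    modality = np.repeat(["FFDM", "DBT", "MR"], 68)
    print(pairwise_tukeyhsd(values, modality, alpha=0.05))    # post hoc mean comparison
    ```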

  3. Synchronic inverse seasonal rhythmus of energy density of food intake and sleep quality: a contribution to chrono-nutrition from a Polish adult population.

    PubMed

    Stelmach-Mardas, M; Iqbal, K; Mardas, M; Schwingshackl, L; Walkowiak, J; Tower, R J; Boeing, H

    2017-06-01

    There is evidence which suggests that sleep behavior and dietary intake are interlinked. Thus, we investigated whether a seasonal rhythm in food-energy density exists, and how this relates to quality of sleep. Two hundred and thirty adult volunteers were investigated across the four seasons. Anthropometrical measurements were obtained, and the Pittsburgh Sleep Quality Index was used to assess sleep quality and disturbances. Dietary intake was evaluated using a 24 h dietary recall. Generalized estimating equations were used to estimate seasonal changes in energy density and sleep quality, as well as the association of energy density with sleep quality. All analyses were adjusted for age, sex, education, occupation and shift-work. Mean food energy density was significantly higher in winter as compared with other seasons (P<0.05), although no seasonal variations were observed in macronutrient intake (fat and protein). Overall, the sleep quality was low (score value >5) in all seasons, with the lowest quality occurring in winter and the highest in spring (P<0.05). The components of the sleep quality score showed that winter had statistically poorer (P<0.05) subjective sleep quality, sleep latency and sleep disturbances, but lower daytime dysfunction, compared with spring and summer. After adjusting for seasonal effects (correlated outcome data) and shift-work, energy density was found to be inversely associated (P<0.0001) with sleep quality. An inverse association between the seasonal fluctuation of food energy density and sleep quality was found, with winter associated with the intake of more energy-dense food products and the lowest sleep quality.
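
    A minimal sketch of a generalized estimating equations (GEE) fit of the kind described above, accounting for repeated measures on the same subject across seasons; the DataFrame, variable names, and covariates are synthetic placeholders, not the study's data:

    ```python
    # GEE with an exchangeable working correlation for repeated seasonal measurements.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n_subj, seasons = 230, ["winter", "spring", "summer", "autumn"]
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n_subj), len(seasons)),
        "season": np.tile(seasons, n_subj),
        "age": np.repeat(rng.integers(20, 70, n_subj), len(seasons)),
        "sleep_quality": rng.integers(3, 12, n_subj * len(seasons)),
    })
    df["energy_density"] = (1.1 + 0.15 * (df["season"] == "winter")
                            + 0.02 * df["sleep_quality"]
                            + 0.1 * rng.normal(size=len(df)))

    model = smf.gee("energy_density ~ C(season) + sleep_quality + age",
                    groups="subject", data=df,
                    cov_struct=sm.cov_struct.Exchangeable(),
                    family=sm.families.Gaussian())
    print(model.fit().summary())
    ```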

  4. Comparison of breast percent density estimation from raw versus processed digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina

    2011-03-01

    We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumView™ algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same data-set. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R²=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.

  5. Cervical vertebral bone mineral density changes in adolescents during orthodontic treatment.

    PubMed

    Crawford, Bethany; Kim, Do-Gyoon; Moon, Eun-Sang; Johnson, Elizabeth; Fields, Henry W; Palomo, J Martin; Johnston, William M

    2014-08-01

    The cervical vertebral maturation (CVM) stages have been used to estimate facial growth status. In this study, we examined whether cone-beam computed tomography images can be used to detect changes of CVM-related parameters and bone mineral density distribution in adolescents during orthodontic treatment. Eighty-two cone-beam computed tomography images were obtained from 41 patients before (14.47 ± 1.42 years) and after (16.15 ± 1.38 years) orthodontic treatment. Two cervical vertebral bodies (C2 and C3) were digitally isolated from each image, and their volumes, means, and standard deviations of gray-level histograms were measured. The CVM stages and mandibular lengths were also estimated after converting the cone-beam computed tomography images. Significant changes for the examined variables were detected during the observation period (P ≤0.018) except for C3 vertebral body volume (P = 0.210). The changes of CVM stage had significant positive correlations with those of vertebral body volume (P ≤0.021). The change of the standard deviation of bone mineral density (variability) showed significant correlations with those of vertebral body volume and mandibular length for C2 (P ≤0.029). The means and variability of the gray levels account for bone mineral density and active remodeling, respectively. Our results indicate that bone mineral density distribution and the volume of the cervical vertebral body changed because of active bone remodeling during maturation. Copyright © 2014 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  6. The evolution of solid density within a thermal explosion. I. Proton radiography of pre-ignition expansion, material motion, and chemical decomposition

    NASA Astrophysics Data System (ADS)

    Smilowitz, L.; Henson, B. F.; Romero, J. J.; Asay, B. W.; Saunders, A.; Merrill, F. E.; Morris, C. L.; Kwiatkowski, K.; Grim, G.; Mariam, F.; Schwartz, C. L.; Hogan, G.; Nedrow, P.; Murray, M. M.; Thompson, T. N.; Espinoza, C.; Lewis, D.; Bainbridge, J.; McNeil, W.; Rightley, P.; Marr-Lyon, M.

    2012-05-01

    We report proton transmission images obtained during direct heating of a sample of PBX 9501 (a plastic bonded formulation of the explosive nitramine octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX)) prior to the ignition of a thermal explosion. We describe the application of proton radiography using the 800 MeV proton accelerator at Los Alamos National Laboratory to obtain transmission images in these thermal explosion experiments. We have obtained images at two spatial magnifications and viewing both the radial and the transverse axes of a solid cylindrical sample encased in aluminum. During heating we observe the slow evolution of proton transmission through the samples, with particular detail during material flow associated with the HMX β-δ phase transition. We also directly observe the loss of solid density to decomposition associated with elevated temperatures in the volume defining the ignition location in these experiments. We measure a diameter associated with this volume of 1-2 mm, in agreement with previous estimations of the diameter using spatially resolved fast thermocouples.

  7. A semi-analytical study of positive corona discharge in wire-plane electrode configuration

    NASA Astrophysics Data System (ADS)

    Yanallah, K.; Pontiga, F.; Chen, J. H.

    2013-08-01

    Wire-to-plane positive corona discharge in air has been studied using an analytical model of two species (electrons and positive ions). The spatial distributions of electric field and charged species are obtained by integrating Gauss's law and the continuity equations of species along the Laplacian field lines. The experimental values of corona current intensity and applied voltage, together with Warburg's law, have been used to formulate the boundary condition for the electron density on the corona wire. To test the accuracy of the model, the approximate electric field distribution has been compared with the exact numerical solution obtained from a finite element analysis. A parametrical study of wire-to-plane corona discharge has then been undertaken using the approximate semi-analytical solutions. Thus, the spatial distributions of electric field and charged particles have been computed for different values of the gas pressure, wire radius and electrode separation. Also, the two dimensional distribution of ozone density has been obtained using a simplified plasma chemistry model. The approximate semi-analytical solutions can be evaluated in a negligible computational time, yet provide precise estimates of corona discharge variables.

  8. Novel multireceiver communication systems configurations based on optimal estimation theory

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra

    1992-01-01

    A novel multireceiver configuration for carrier arraying and/or signal arraying is presented. The proposed configuration is obtained by formulating the carrier and/or signal arraying problem as an optimal estimation problem, and it consists of two stages. The first stage optimally estimates various phase processes received at different receivers with coupled phase-locked loops wherein the individual loops acquire and track their respective receivers' phase processes but are aided by each other in an optimal manner via LF error signals. The proposed configuration results in the minimization of the effective radio loss at the combiner output, and thus maximization of the energy-per-bit to noise power spectral density ratio is achieved. A novel adaptive algorithm for the estimator of the signal model parameters when these are not known a priori is also presented.

  9. The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-)root-MUSIC.

    PubMed

    Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik

    2014-11-11

    Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. According to Monte-Carlo simulations, this novel approach yields a speedup by a factor of more than 100 in terms of CPU-time for a one-dimensional case and by a factor of 56 for a two-dimensional case.

  10. Equation of state for detonation product gases

    NASA Astrophysics Data System (ADS)

    Nagayama, Kunihito; Kubota, Shiro

    2003-03-01

    A thermodynamic analysis procedure of the detonation product equation of state (EOS) together with the experimental data set of the detonation velocity as a function of initial density has been formulated. The Chapman-Jouguet (CJ) state [W. Fickett and W. C. Davis, Detonation: Theory and Experiment (University of California Press, Berkeley, 1979)] on the p-ν plane is found to be well approximated by the envelope function formed by the collection of Rayleigh lines with many different initial density states. The Jones-Stanyukovich-Manson relation [W. Fickett and W. C. Davis, Detonation: Theory and Experiment (University of California Press, Berkeley, 1979)] is used to estimate the error included in this approximation. Based on this analysis, a simplified integration method to calculate the Grüneisen parameter along the CJ state curve with different initial densities utilizing the cylinder expansion data has been presented. The procedure gives a simple way of obtaining the EOS function, compatible with the detonation velocity data. Theoretical analysis has been performed for the precision of the estimated EOS function. EOS of the pentaerythritol tetranitrate (PETN) explosive is calculated and compared with some of the experimental data such as CJ pressure data and cylinder expansion data.

  11. Cortical cell and neuron density estimates in one chimpanzee hemisphere.

    PubMed

    Collins, Christine E; Turner, Emily C; Sawyer, Eva Kille; Reed, Jamie L; Young, Nicole A; Flaherty, David K; Kaas, Jon H

    2016-01-19

    The density of cells and neurons in the neocortex of many mammals varies across cortical areas and regions. This variability is, perhaps, most pronounced in primates. Nonuniformity in the composition of cortex suggests regions of the cortex have different specializations. Specifically, regions with densely packed neurons contain smaller neurons that are activated by relatively few inputs, thereby preserving information, whereas regions that are less densely packed have larger neurons that have more integrative functions. Here we present the numbers of cells and neurons for 742 discrete locations across the neocortex in a chimpanzee. Using isotropic fractionation and flow fractionation methods for cell and neuron counts, we estimate that neocortex of one hemisphere contains 9.5 billion cells and 3.7 billion neurons. Primary visual cortex occupies 35 cm² of surface, 10% of the total, and contains 737 million densely packed neurons, 20% of the total neurons contained within the hemisphere. Other areas of high neuron packing include secondary visual areas, somatosensory cortex, and prefrontal granular cortex. Areas of low levels of neuron packing density include motor and premotor cortex. These values reflect those obtained from more limited samples of cortex in humans and other primates.

  12. Influence of the volume and density functions within geometric models for estimating trunk inertial parameters.

    PubMed

    Wicke, Jason; Dumas, Genevieve A

    2010-02-01

    The geometric method combines a volume and a density function to estimate body segment parameters and has the best opportunity for developing the most accurate models. In the trunk, there are many different tissues that greatly differ in density (e.g., bone versus lung). Thus, the density function for the trunk must be particularly sensitive to capture this diversity, such that accurate inertial estimates are possible. Three different models were used to test this hypothesis by estimating trunk inertial parameters of 25 female and 24 male college-aged participants. The outcome of this study indicates that the inertial estimates for the upper and lower trunk are most sensitive to the volume function and not very sensitive to the density function. Although it appears that the uniform density function has a greater influence on inertial estimates in the lower trunk region than in the upper trunk region, this is likely due to the (overestimated) density value used. When geometric models are used to estimate body segment parameters, care must be taken in choosing a model that can accurately estimate segment volumes. Researchers wanting to develop accurate geometric models should focus on the volume function, especially in unique populations (e.g., pregnant or obese individuals).

  13. Spatially explicit inference for open populations: estimating demographic parameters from camera-trap studies

    USGS Publications Warehouse

    Gardner, Beth; Reppucci, Juan; Lucherini, Mauro; Royle, J. Andrew

    2010-01-01

    We develop a hierarchical capture–recapture model for demographically open populations when auxiliary spatial information about location of capture is obtained. Such spatial capture–recapture data arise from studies based on camera trapping, DNA sampling, and other situations in which a spatial array of devices records encounters of unique individuals. We integrate an individual-based formulation of a Jolly-Seber type model with recently developed spatially explicit capture–recapture models to estimate density and demographic parameters for survival and recruitment. We adopt a Bayesian framework for inference under this model using the method of data augmentation, which is implemented in the software program WinBUGS. The model was motivated by a camera trapping study of Pampas cats Leopardus colocolo from Argentina, which we present as an illustration of the model in this paper. We provide estimates of density and the first quantitative assessment of vital rates for the Pampas cat in the High Andes. The precision of these estimates is poor, likely due to the sparse data set. Unlike conventional inference methods which usually rely on asymptotic arguments, Bayesian inferences are valid in arbitrary sample sizes, and thus the method is ideal for the study of rare or endangered species for which small data sets are typical.

  14. Estimating metallicities with isochrone fits to photometric data of open clusters

    NASA Astrophysics Data System (ADS)

    Monteiro, H.; Oliveira, A. F.; Dias, W. S.; Caetano, T. C.

    2014-10-01

    The metallicity is a critical parameter that affects the correct determination of a stellar cluster's fundamental characteristics and has important implications in Galactic and Stellar evolution research. Fewer than 10% of the 2174 currently catalogued open clusters have their metallicity determined in the literature. In this work we present a method for estimating the metallicity of open clusters via non-subjective isochrone fitting using the cross-entropy global optimization algorithm applied to UBV photometric data. The free parameters distance, reddening, age, and metallicity are simultaneously determined by the fitting method. The fitting procedure uses weights for the observational data based on the estimation of membership likelihood for each star, which considers the observational magnitude limit, the density profile of stars as a function of radius from the center of the cluster, and the density of stars in multi-dimensional magnitude space. We present results of [Fe/H] for well-studied open clusters based on distinct UBV data sets. The [Fe/H] values obtained in the ten cases for which spectroscopic determinations were available in the literature agree with the spectroscopic values, indicating that our method provides a good alternative for estimating [Fe/H] via objective isochrone fitting. Our results show that the typical precision is about 0.1 dex.
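
    The cross-entropy optimization loop referred to above can be sketched generically as follows; the four parameters and the toy objective stand in for the real weighted isochrone-to-photometry misfit and are purely illustrative:

    ```python
    # Cross-entropy method for continuous global optimization (generic sketch).
    import numpy as np

    def cross_entropy_minimize(objective, lo, hi, n_samples=200, n_elite=20,
                               n_iter=50, seed=0):
        rng = np.random.default_rng(seed)
        mu, sigma = 0.5 * (lo + hi), (hi - lo) / 4.0
        for _ in range(n_iter):
            theta = np.clip(rng.normal(mu, sigma, size=(n_samples, lo.size)), lo, hi)
            scores = np.array([objective(t) for t in theta])
            elite = theta[np.argsort(scores)[:n_elite]]          # best candidates
            mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
        return mu

    # Toy objective standing in for the photometric misfit; parameters are
    # (distance modulus, E(B-V), log age, [Fe/H]).
    target = np.array([10.2, 0.3, 8.9, -0.1])
    objective = lambda t: float(np.sum((t - target) ** 2))
    lo = np.array([8.0, 0.0, 7.0, -1.0])
    hi = np.array([14.0, 1.0, 10.0, 0.5])
    best_fit = cross_entropy_minimize(objective, lo, hi)
    ```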

  15. Spatially explicit inference for open populations: estimating demographic parameters from camera-trap studies.

    PubMed

    Gardner, Beth; Reppucci, Juan; Lucherini, Mauro; Royle, J Andrew

    2010-11-01

    We develop a hierarchical capture-recapture model for demographically open populations when auxiliary spatial information about location of capture is obtained. Such spatial capture-recapture data arise from studies based on camera trapping, DNA sampling, and other situations in which a spatial array of devices records encounters of unique individuals. We integrate an individual-based formulation of a Jolly-Seber type model with recently developed spatially explicit capture-recapture models to estimate density and demographic parameters for survival and recruitment. We adopt a Bayesian framework for inference under this model using the method of data augmentation, which is implemented in the software program WinBUGS. The model was motivated by a camera trapping study of Pampas cats Leopardus colocolo from Argentina, which we present as an illustration of the model in this paper. We provide estimates of density and the first quantitative assessment of vital rates for the Pampas cat in the High Andes. The precision of these estimates is poor, likely due to the sparse data set. Unlike conventional inference methods which usually rely on asymptotic arguments, Bayesian inferences are valid in arbitrary sample sizes, and thus the method is ideal for the study of rare or endangered species for which small data sets are typical.

  16. Signature of Europa's Ocean Density on Gravity Data

    NASA Astrophysics Data System (ADS)

    Castillo, J. C.; Rambaux, N.

    2015-12-01

    Observations by the Galileo mission at Europa and Cassini-Huygens mission at Europa, Ganymede, Callisto, Enceladus, and Titan have found deep oceans at these objects with evidence for the presence of salts. Salt compounds are the products of aqueous alteration of the rock phase under hydrothermal conditions and have been predicted theoretically for these objects per analogy with carbonaceous chondrite parent bodies. Evidence for salt enrichment comes from magnetometer measurements (Galilean satellites), direct detection in the case of Enceladus, and inversion of the gravity data obtained at Titan. While there is direct detection for the presence of chlorides in icy grains ejected from Enceladus, the chemistry of the oceans detected so far, or even their densities, remain mostly unconstrained. However the increased ocean density impacts the interpretation of the tidal Love number k2 and this may introduce confusion in the inference of the icy shell thickness from that parameter. We will present estimates of k2 for a range of assumptions on Europa's hydrospheric structure that build on geophysical observations obtained by the Galileo mission combined with new models of Europa's interior. These models keep track of the compositions of the hydrated core and oceanic composition in a self-consistent manner. We will also estimate the electrical conductivity corresponding to the modeled oceanic composition. Finally we will explore how combining electromagnetic, topographic, and gravity data can decouple the signatures of the shell thickness and ocean composition on these geophysical observations. Acknowledgement: This work is being carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract to NASA. Government sponsorship acknowledged.

  17. The Significance of the Record Length in Flood Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Senarath, S. U.

    2013-12-01

    Of all of the potential natural hazards, flood is the most costly in many regions of the world. For example, floods cause over a third of Europe's average annual catastrophe losses and affect about two thirds of the people impacted by natural catastrophes. Increased attention is being paid to determining flow estimates associated with pre-specified return periods so that flood-prone areas can be adequately protected against floods of particular magnitudes or return periods. Flood frequency analysis, which is conducted by using an appropriate probability density function that fits the observed annual maximum flow data, is frequently used for obtaining these flow estimates. Consequently, flood frequency analysis plays an integral role in determining the flood risk in flood prone watersheds. A long annual maximum flow record is vital for obtaining accurate estimates of discharges associated with high return period flows. However, in many areas of the world, flood frequency analysis is conducted with limited flow data or short annual maximum flow records. These inevitably lead to flow estimates that are subject to error. This is especially the case with high return period flow estimates. In this study, several statistical techniques are used to identify errors caused by short annual maximum flow records. The flow estimates used in the error analysis are obtained by fitting a log-Pearson III distribution to the flood time-series. These errors can then be used to better evaluate the return period flows in data limited streams. The study findings, therefore, have important implications for hydrologists, water resources engineers and floodplain managers.
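
    A minimal sketch of the log-Pearson III fit described above, applied to a synthetic annual-maximum-flow series (placeholder numbers, not gauge data) and used to read off a 100-year flow:

    ```python
    # Log-Pearson III flood-frequency fit and return-period quantile.
    import numpy as np
    from scipy.stats import pearson3

    rng = np.random.default_rng(6)
    annual_max_flow = np.exp(rng.normal(6.0, 0.5, size=40))    # 40 years of peaks, m^3/s

    log_q = np.log10(annual_max_flow)
    skew, loc, scale = pearson3.fit(log_q)                     # fit in log space

    return_period = 100.0
    q100 = 10 ** pearson3.ppf(1.0 - 1.0 / return_period, skew, loc=loc, scale=scale)
    # With short records the sampling error on the skew is large, and it
    # propagates strongly into high-return-period quantiles such as q100.
    ```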

  18. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
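
    For concreteness, a small sketch of evaluating a noncentral chi-square density and its confidence limits for an averaged spectral bin containing a tone in Gaussian noise; the degrees of freedom, noncentrality, and scaling below are illustrative assumptions, not the report's calibration:

    ```python
    # Noncentral chi-square model for the spread of an ensemble-averaged spectral bin.
    import numpy as np
    from scipy.stats import ncx2

    K = 16                       # number of averaged spectral estimates
    dof = 2 * K                  # 2 degrees of freedom per complex periodogram bin
    snr_per_bin = 4.0            # tone-to-noise power ratio in the bin
    nc = 2 * K * snr_per_bin     # noncentrality parameter

    x = np.linspace(0.0, 300.0, 1000)
    pdf = ncx2.pdf(x, dof, nc)                        # density of the summed bin power
    lo, hi = ncx2.ppf([0.025, 0.975], dof, nc)        # 95% confidence limits
    ```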

  19. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    PubMed

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody's. However, it has a fatal defect that it can't fit the bimodal or multimodal distributions such as recovery rates of corporate loans and bonds as Moody's new data show. In order to overcome this flaw, the kernel density estimation is introduced and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution really better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
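
    A minimal sketch of the two fits being compared above, on a synthetic bimodal sample of recovery rates in (0, 1) standing in for the loan/bond data:

    ```python
    # Unimodal Beta fit versus Gaussian kernel density estimate on bimodal data.
    import numpy as np
    from scipy.stats import beta, gaussian_kde

    rng = np.random.default_rng(7)
    recovery = np.concatenate([rng.beta(8, 2, 300),      # high-recovery mode
                               rng.beta(2, 8, 200)])     # low-recovery mode

    a, b, loc, scale = beta.fit(recovery, floc=0, fscale=1)   # Beta fit on (0, 1)
    kde = gaussian_kde(recovery)                              # kernel density fit

    grid = np.linspace(0.01, 0.99, 99)
    beta_pdf = beta.pdf(grid, a, b, loc=loc, scale=scale)     # single-humped by construction
    kde_pdf = kde(grid)                                       # tracks both modes
    ```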

  20. Curve Fitting of the Corporate Recovery Rates: The Comparison of Beta Distribution Estimation and Kernel Density Estimation

    PubMed Central

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio’s loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody’s. However, it has a fatal defect that it can’t fit the bimodal or multimodal distributions such as recovery rates of corporate loans and bonds as Moody’s new data show. In order to overcome this flaw, the kernel density estimation is introduced and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution really better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558

  1. Ponderomotive force on solitary structures created during radiation pressure acceleration of thin foils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Vipin K.; Sharma, Anamika

    2013-05-15

    We estimate the ponderomotive force on an expanded inhomogeneous electron density profile, created in the later phase of a laser-irradiated diamond-like ultrathin foil. When ions are uniformly distributed along the plasma slab and the electron density obeys Poisson's equation with a space charge potential equal to the negative of the ponderomotive potential, φ = −φ_p = −(mc²/e)(γ − 1), where γ = (1 + |a|²)^(1/2) and |a| is the normalized local laser amplitude inside the slab, the net ponderomotive force on the slab per unit area is demonstrated analytically to be equal to the radiation pressure force for both overdense and underdense plasmas. In the case where the electron density is taken to be frozen as a Gaussian profile with peak density close to the relativistic critical density, the ponderomotive force has a non-monotonic spatial variation and sums over all electrons per unit area to equal the radiation pressure force at all laser intensities. The same result is obtained for the case of a Gaussian ion density profile and a self-consistent electron density profile obeying Poisson's equation with φ = −φ_p.

  2. The determination of ionospheric electron content and distribution from satellite observations. Part 2. Results of the analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garriott, O K

    1960-04-01

    The results of observations of the radio transmissions from Sputnik III (1958 δ 2) in an 8-month period are presented. The measurements of integrated electron density are made in two ways, described in part 1. The measurements reveal the diurnal variation of the total ionospheric electron content; and the ratio of the total content to the content of the lower ionosphere below the height of maximum density in the F layer is obtained. An estimate of the average electron-density profile above the F-layer peak is made possible by the slow variation in the height of the satellite due to rotation of the perigee position. The gross effects of large magnetic storms on the electron content and distribution are found.

  3. Magnetic levitation in the analysis of foods and water.

    PubMed

    Mirica, Katherine A; Phillips, Scott T; Mace, Charles R; Whitesides, George M

    2010-06-09

    This paper describes a method and a sensor that use magnetic levitation (MagLev) to characterize samples of food and water on the basis of measurements of density. The sensor comprises two permanent NdFeB magnets positioned on top of each other in a configuration with like poles facing and a container filled with a solution of paramagnetic ions. Measurements of density are obtained by suspending a diamagnetic object in the container filled with the paramagnetic fluid, placing the container between the magnets, and measuring the vertical position of the suspended object. MagLev was used to estimate the salinity of water, to compare a variety of vegetable oils on the basis of the ratio of polyunsaturated fat to monounsaturated fat, to compare the contents of fat in milk, cheese, and peanut butter, and to determine the density of grains.

  4. A New Determination of the Luminosity Function of the Galactic Halo.

    NASA Astrophysics Data System (ADS)

    Dawson, Peter Charles

    The luminosity function of the galactic halo is determined by subtracting from the observed numbers of proper motion stars in the LHS Catalogue the expected numbers of main-sequence, degenerate, and giant stars of the disk population. Selection effects are accounted for by Monte Carlo simulations based upon realistic colour-luminosity relations and kinematic models. The catalogue is shown to be highly complete, and a calibration of the magnitude estimates therein is presented. It is found that, locally, the ratio of disk to halo material is close to 950, and that the mass density in main sequence and subgiant halo stars with 3 < M_V < 14 is about 2 × 10⁻⁵ M_⊙ pc⁻³. With due allowance for white dwarfs and binaries, and taking into account the possibility of a moderate rate of halo rotation, it is argued that the total density does not much exceed 5 × 10⁻⁵ M_⊙ pc⁻³, in which case the total mass interior to the sun is of the order of 5 × 10⁸ M_⊙ for a density distribution which projects to a de Vaucouleurs r^(1/4) law. It is demonstrated that if the Wielen luminosity function is a faithful representation of the stellar distribution in the solar neighbourhood, then the observed numbers of large proper motion stars are inconsistent with the presence of an intermediate population at the level, and with the kinematics advocated recently by Gilmore and Reid. The initial mass function (IMF) of the halo is considered, and weak evidence is presented that its slope is at least not shallower than that of the disk population IMF. A crude estimate of the halo's age, based on a comparison of the main sequence turnoff in the reduced proper motion diagram with theoretical models, is obtained; a tentative lower limit is 15 Gyr, with a best estimate of between 15 and 18 Gyr. Finally, the luminosity function obtained here is compared with those determined in other investigations.

  5. Constraints on CDM cosmology from galaxy power spectrum, CMB and SNIa evolution

    NASA Astrophysics Data System (ADS)

    Ferramacho, L. D.; Blanchard, A.; Zolnierowski, Y.

    2009-05-01

    Aims: We examine the constraints that can be obtained on standard cold dark matter models from the most currently used data set: CMB anisotropies, type Ia supernovae and the SDSS luminous red galaxies. We also examine how these constraints are widened when the equation of state parameter w and the curvature parameter Ω_k are left as free parameters. Finally, we investigate the impact on these constraints of a possible form of evolution in SNIa intrinsic luminosity. Methods: We obtained our results from MCMC analysis using the full likelihood of each data set. Results: For the ΛCDM model, our “vanilla” model, cosmological parameters are tightly constrained and consistent with current estimates from various methods. When the dark energy parameter w is free we find that the constraints remain mostly unchanged, i.e. changes are smaller than the 1 sigma uncertainties. Similarly, relaxing the assumption of a flat universe leads to nearly identical constraints on the dark energy density parameter of the universe Ω_Λ, baryon density of the universe Ω_b, the optical depth τ, the index of the power spectrum of primordial fluctuations n_S, with most one sigma uncertainties better than 5%. More significant changes appear on other parameters: while preferred values are almost unchanged, uncertainties for the physical dark matter density Ω_c h², Hubble constant H_0 and σ_8 are typically twice as large. The constraint on the age of the Universe, which is very accurate for the vanilla model, is the most degraded. We found that different methodological approaches on large scale structure estimates lead to appreciable differences in preferred values and uncertainty widths. We found that possible evolution in SNIa intrinsic luminosity does not alter these constraints by much, except for w, for which the uncertainty is twice as large. At the same time, this possible evolution is severely constrained. Conclusions: We conclude that systematic uncertainties for some estimated quantities are similar or larger than statistical ones.

  6. Evaluation of trapping-web designs

    USGS Publications Warehouse

    Lukacs, P.M.; Anderson, D.R.; Burnham, K.P.

    2005-01-01

    The trapping web is a method for estimating the density and abundance of animal populations. A Monte Carlo simulation study is performed to explore performance of the trapping web for estimating animal density under a variety of web designs and animal behaviours. The trapping performs well when animals have home ranges, even if the home ranges are large relative to trap spacing. Webs should contain at least 90 traps. Trapping should continue for 5-7 occasions. Movement rates have little impact on density estimates when animals are confined to home ranges. Estimation is poor when animals do not have home ranges and movement rates are rapid. The trapping web is useful for estimating the density of animals that are hard to detect and occur at potentially low densities. ?? CSIRO 2005.

  7. Estimating Crustal Properties Directly from Satellite Tracking Data by Using a Topography-based Constraint

    NASA Astrophysics Data System (ADS)

    Goossens, S. J.; Sabaka, T. J.; Genova, A.; Mazarico, E. M.; Nicholas, J. B.; Neumann, G. A.; Lemoine, F. G.

    2017-12-01

    The crust of a terrestrial planet is formed by differentiation processes in its early history, followed by magmatic evolution of the planetary surface. It is further modified through impact processes. Knowledge of the crustal structure can thus place constraints on the planet's formation and evolution. In particular, the average bulk density of the crust is a fundamental parameter in geophysical studies, such as the determination of crustal thickness, studies of the mechanisms of topography support, and the planet's thermo-chemical evolution. Yet even with in-situ samples available, the crustal density is difficult to determine unambiguously, as exemplified by the results for the Gravity Recovery and Interior Laboratory (GRAIL) mission, which found an average crustal density for the Moon that was lower than generally assumed. The GRAIL results were possible owing to the combination of its high-resolution gravity and high-resolution topography obtained by the Lunar Orbiter Laser Altimeter (LOLA) onboard the Lunar Reconnaissance Orbiter (LRO), and high correlations between the two datasets. The crustal density can be determined by its contribution to the gravity field of a planet, but at long wavelengths flexure effects can dominate. On the other hand, short-wavelength gravity anomalies are difficult to measure, and either not determined well enough (other than at the Moon), or their power is suppressed by the standard 'Kaula' regularization constraint applied during inversion of the gravity field from satellite tracking data. We introduce a new constraint that has infinite variance in one direction, called xa. For constraint damping factors that go to infinity, it can be shown that the solution x becomes equal to a scale factor times xa. This scale factor is completely determined by the data, and we call our constraint rank-minus-1 (RM1). If we choose xa to be topography-induced gravity, then we can estimate the average bulk crustal density directly from the data (assuming uncompensated topography). We validate our constraint with pre-GRAIL lunar data, showing that we obtain the same bulk density from data of much lower resolution than GRAIL's. We will present the results of our new methodology applied to the case of Mars. We will discuss the results, namely an average crustal density lower than generally assumed.
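
    A small numerical sketch of the rank-minus-1 idea described above (regularize only the component of the model orthogonal to a preferred direction, so that strong damping collapses the solution onto a data-determined multiple of that direction); the matrices and the direction xa are random placeholders, not gravity kernels:

    ```python
    # RM1-style constraint: penalize only the part of m orthogonal to xa.
    import numpy as np

    rng = np.random.default_rng(8)
    n_data, n_model = 60, 20
    G = rng.normal(size=(n_data, n_model))   # forward (sensitivity) matrix
    xa = rng.normal(size=n_model)            # preferred direction (e.g., topography-induced gravity)
    d = G @ (2.5 * xa) + 0.01 * rng.normal(size=n_data)

    P = np.eye(n_model) - np.outer(xa, xa) / (xa @ xa)   # projector orthogonal to xa

    for alpha in (1e0, 1e3, 1e6):
        m = np.linalg.solve(G.T @ G + alpha * P.T @ P, G.T @ d)
        scale = (m @ xa) / (xa @ xa)         # component of m along xa

    # In the limit alpha -> infinity, m -> scale * xa with the scale fixed by the data:
    limit_scale = (G @ xa) @ d / np.linalg.norm(G @ xa) ** 2
    ```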

  8. Automatic Cell Segmentation Using a Shape-Classification Model in Immunohistochemically Stained Cytological Images

    NASA Astrophysics Data System (ADS)

    Shah, Shishir

    This paper presents a segmentation method for detecting cells in immunohistochemically stained cytological images. A two-phase approach to segmentation is used where an unsupervised clustering approach coupled with cluster merging based on a fitness function is used as the first phase to obtain a first approximation of the cell locations. A joint segmentation-classification approach incorporating ellipse as a shape model is used as the second phase to detect the final cell contour. The segmentation model estimates a multivariate density function of low-level image features from training samples and uses it as a measure of how likely each image pixel is to be a cell. This estimate is constrained by the zero level set, which is obtained as a solution to an implicit representation of an ellipse. Results of segmentation are presented and compared to ground truth measurements.

  9. Density estimates of monarch butterflies overwintering in central Mexico

    PubMed Central

    Diffendorfer, Jay E.; López-Hoffman, Laura; Oberhauser, Karen; Pleasants, John; Semmens, Brice X.; Semmens, Darius; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha⁻¹. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha⁻¹ (95% CI [2.4–80.7] million ha⁻¹); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha⁻¹). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations. PMID:28462031

  10. Density estimates of monarch butterflies overwintering in central Mexico

    USGS Publications Warehouse

    Thogmartin, Wayne E.; Diffendorfer, James E.; Lopez-Hoffman, Laura; Oberhauser, Karen; Pleasants, John M.; Semmens, Brice X.; Semmens, Darius J.; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9 to 60.9 million ha⁻¹. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha⁻¹ (95% CI [2.4–80.7] million ha⁻¹); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha⁻¹). Based upon assumptions regarding the amount of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate that >1.8 billion stems are needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.

  11. Investigation of Aerosol Surface Area Estimation from Number and Mass Concentration Measurements: Particle Density Effect

    PubMed Central

    Ku, Bon Ki; Evans, Douglas E.

    2015-01-01

    For nanoparticles with nonspherical morphologies, e.g., open agglomerates or fibrous particles, it is expected that the actual density of agglomerates may be significantly different from the bulk material density. It is further expected that using the material density may upset the relationship between surface area and mass when a method for estimating aerosol surface area from number and mass concentrations (referred to as “Maynard’s estimation method”) is used. Therefore, it is necessary to quantitatively investigate how much Maynard’s estimation method depends on particle morphology and density. In this study, aerosol surface area estimated from number and mass concentration measurements was evaluated and compared with values from two reference methods: a method proposed by Lall and Friedlander for agglomerates and a mobility based method for compact nonspherical particles, using well-defined polydisperse aerosols with known particle densities. Polydisperse silver aerosol particles were generated by an aerosol generation facility. Generated aerosols had a range of morphologies, count median diameters (CMD) between 25 and 50 nm, and geometric standard deviations (GSD) between 1.5 and 1.8. The surface area estimates from number and mass concentration measurements correlated well with the two reference values when gravimetric mass was used. The aerosol surface area estimates from Maynard’s estimation method were comparable to the reference method for all particle morphologies, within surface area ratios of 3.31 and 0.19 for assumed GSDs of 1.5 and 1.8, respectively, when the bulk material density of silver was used. The difference between Maynard’s estimation method and the surface area measured by the reference method for fractal-like agglomerates decreased from 79% to 23% when the measured effective particle density was used, while the difference for nearly spherical particles decreased from 30% to 24%. The results indicate that the use of the particle density of agglomerates improves the accuracy of Maynard’s estimation method and that an effective density should be taken into account, when known, when estimating the aerosol surface area of nonspherical aerosols such as open agglomerates and fibrous particles. PMID:26526560
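    A hedged sketch of the general idea behind estimating surface area from simultaneous number and mass concentrations, assuming spherical particles and a lognormal size distribution with an assumed GSD (Hatch-Choate relations); the function name and all numerical inputs are illustrative, not values from the study.

    ```python
    import numpy as np

    def estimate_surface_area(number_conc, mass_conc, density, gsd):
        """Surface-area concentration from number and mass concentrations.

        Assumes spherical particles and a lognormal size distribution with an
        assumed geometric standard deviation (Hatch-Choate relations).
        number_conc : particles per m^3
        mass_conc   : kg per m^3
        density     : particle (effective) density, kg per m^3
        gsd         : assumed geometric standard deviation
        """
        ln2 = np.log(gsd) ** 2
        # Mass concentration: M = N * rho * (pi/6) * CMD^3 * exp(4.5 ln^2 GSD)
        cmd = (6 * mass_conc / (np.pi * density * number_conc * np.exp(4.5 * ln2))) ** (1 / 3)
        # Surface-area concentration: S = N * pi * CMD^2 * exp(2 ln^2 GSD)
        return number_conc * np.pi * cmd ** 2 * np.exp(2 * ln2)

    # Illustrative numbers only: 1e11 particles/m^3, 50 ug/m^3, GSD 1.8.
    # Swapping the bulk density of silver for a lower effective agglomerate
    # density changes the estimate, which is the paper's central point.
    for rho in (10_490.0, 3_000.0):
        s = estimate_surface_area(1e11, 50e-9, rho, 1.8)
        print(f"rho = {rho:7.0f} kg/m^3  ->  S = {s * 1e6:.0f} um^2/cm^3")
    ```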

  12. Investigation of Aerosol Surface Area Estimation from Number and Mass Concentration Measurements: Particle Density Effect.

    PubMed

    Ku, Bon Ki; Evans, Douglas E

    2012-04-01

    For nanoparticles with nonspherical morphologies, e.g., open agglomerates or fibrous particles, it is expected that the actual density of agglomerates may be significantly different from the bulk material density. It is further expected that using the material density may upset the relationship between surface area and mass when a method for estimating aerosol surface area from number and mass concentrations (referred to as "Maynard's estimation method") is used. Therefore, it is necessary to quantitatively investigate how much Maynard's estimation method depends on particle morphology and density. In this study, aerosol surface area estimated from number and mass concentration measurements was evaluated and compared with values from two reference methods: a method proposed by Lall and Friedlander for agglomerates and a mobility based method for compact nonspherical particles, using well-defined polydisperse aerosols with known particle densities. Polydisperse silver aerosol particles were generated by an aerosol generation facility. Generated aerosols had a range of morphologies, count median diameters (CMD) between 25 and 50 nm, and geometric standard deviations (GSD) between 1.5 and 1.8. The surface area estimates from number and mass concentration measurements correlated well with the two reference values when gravimetric mass was used. The aerosol surface area estimates from Maynard's estimation method were comparable to the reference method for all particle morphologies, within surface area ratios of 3.31 and 0.19 for assumed GSDs of 1.5 and 1.8, respectively, when the bulk material density of silver was used. The difference between Maynard's estimation method and the surface area measured by the reference method for fractal-like agglomerates decreased from 79% to 23% when the measured effective particle density was used, while the difference for nearly spherical particles decreased from 30% to 24%. The results indicate that the use of the particle density of agglomerates improves the accuracy of Maynard's estimation method and that an effective density should be taken into account, when known, when estimating the aerosol surface area of nonspherical aerosols such as open agglomerates and fibrous particles.

  13. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    USGS Publications Warehouse

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
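    For reference, a minimal sketch of the grid-based calculation D̂ = N̂/Â, with the effective area obtained by adding a boundary strip around the trapping grid; the strip-width convention and all numbers below are illustrative assumptions, not values from the study.

    ```python
    import math

    def effective_area(grid_length_m, grid_width_m, strip_width_m):
        """Trapping-grid area plus a surrounding boundary strip (rounded corners)."""
        return (grid_length_m * grid_width_m
                + 2 * strip_width_m * (grid_length_m + grid_width_m)
                + math.pi * strip_width_m ** 2)

    # Illustrative values: abundance estimate from a capture-recapture model and
    # a boundary strip set (here) to the full mean maximum distance moved (MMDM).
    n_hat = 42                                 # estimated population size on the grid
    mmdm = 30.0                                # metres, species-specific
    a_hat = effective_area(90.0, 90.0, mmdm)   # m^2, e.g. a 10 x 10 grid at 10-m spacing
    density_per_ha = n_hat / (a_hat / 10_000.0)
    print(f"A_hat = {a_hat / 10_000:.2f} ha, D_hat = {density_per_ha:.1f} animals/ha")
    ```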

  14. Radiation pressure dynamics in planetary exospheres - A 'natural' framework

    NASA Technical Reports Server (NTRS)

    Bishop, James; Chamberlain, Joseph W.

    1989-01-01

    Exospheric theory is reformulated to provide for the analysis of dynamical underpinning of exospheric features. The formulation is based on the parabolic-cylindrical separability of the Hamiltonian that describes particle motions in the combined fields of planetary gravity and solar radiation pressure. An approximate solution for trajectory evolution in terms of orbital elements is derived and the role of the exopause in the tail phenomenon is discussed. Also, an expression is obtained for the bound constituent atom densities at outer planetocoronal positions along the planet-sun axis for the case of an evaporative, uniform exobase. This expression is used to estimate midnight density enhancements as a function of radial distance for the terrestrial planets.

  15. On a self-consistent representation of earth models, with an application to the computing of internal flattening

    NASA Astrophysics Data System (ADS)

    Denis, C.; Ibrahim, A.

    Self-consistent parametric earth models are discussed in terms of a flexible numerical code. The density profile of each layer is represented as a polynomial, and figures for gravity, mass, mean density, hydrostatic pressure, and moment of inertia are derived. The polynomial representation also allows computation of the first-order flattening of the internal strata of some models, using a Gauss-Legendre quadrature with a rapidly converging iteration technique. Agreement with measured geophysical data is obtained, and an algorithm for estimating the geometric flattening of any equidense surface, identified by its fractional radius, is developed. The program can also be applied in studies of planetary and stellar models.
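    A minimal sketch of the kind of quantities such a code derives from a polynomial density profile, here a single hypothetical layer integrated with the Gauss-Legendre quadrature mentioned above (the profile coefficients are illustrative only, not from a published model).

    ```python
    import numpy as np

    # Hypothetical quadratic density profile rho(r) = a0 + a1*(r/R) + a2*(r/R)^2,
    # roughly Earth-like in magnitude; real models use one polynomial per layer.
    R = 6.371e6                                 # planetary radius, m
    coeffs = [13_000.0, -2_000.0, -8_000.0]     # kg/m^3, illustrative only

    def rho(r):
        return np.polynomial.polynomial.polyval(r / R, coeffs)

    # Gauss-Legendre nodes/weights on [-1, 1], mapped to [0, R].
    nodes, weights = np.polynomial.legendre.leggauss(32)
    r = 0.5 * R * (nodes + 1.0)
    w = 0.5 * R * weights

    mass    = 4.0 * np.pi * np.sum(w * rho(r) * r ** 2)          # M = 4*pi*int rho r^2 dr
    inertia = (8.0 * np.pi / 3.0) * np.sum(w * rho(r) * r ** 4)  # I = (8*pi/3)*int rho r^4 dr
    print(f"M         = {mass:.3e} kg")
    print(f"mean rho  = {mass / (4 / 3 * np.pi * R ** 3):.0f} kg/m^3")
    print(f"I/(M R^2) = {inertia / (mass * R ** 2):.3f}")
    ```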

  16. Interference of Tail Surfaces and Wing and Fuselage from Tests of 17 Combinations in the N.A.C.A. Variable-Density Tunnel

    NASA Technical Reports Server (NTRS)

    Sherman, Albert

    1939-01-01

    An investigation of the interference associated with tail surfaces added to wing-fuselage combinations was included in the interference program in progress in the NACA variable-density tunnel. The results indicate that, in aerodynamically clean combinations, the increment to the high-speed drag can be estimated from section characteristics within useful limits of accuracy. The interference appears mainly as effects on the downwash angle and as losses in the tail. An interference burble, which markedly increases the glide-path angle and the stability in pitch before the actual stall, may be considered a means of obtaining satisfactory stalling characteristics for a complete combination.

  17. Layered uranium(VI) hydroxides: structural and thermodynamic properties of dehydrated schoepite α-UO₂(OH)₂.

    PubMed

    Weck, Philippe F; Kim, Eunja

    2014-12-07

    The structure of dehydrated schoepite, α-UO₂(OH)₂, was investigated using computational approaches that go beyond standard density functional theory and include van der Waals dispersion corrections (DFT-D). Thermal properties of α-UO₂(OH)₂ were also obtained from phonon frequencies calculated with density functional perturbation theory (DFPT) including van der Waals dispersion corrections. While the isobaric heat capacity computed from first principles reproduces available calorimetric data to within 5% up to 500 K, some entropy estimates based on calorimetric measurements for UO₃·0.85H₂O were found to overestimate by up to 23% the values computed in this study.

  18. The distribution of stars around the Milky Way's central black hole. II. Diffuse light from sub-giants and dwarfs

    NASA Astrophysics Data System (ADS)

    Schödel, R.; Gallego-Cano, E.; Dong, H.; Nogueras-Lara, F.; Gallego-Calvente, A. T.; Amaro-Seoane, P.; Baumgardt, H.

    2018-01-01

    Context. This is the second of three papers that search for the predicted stellar cusp around the Milky Way's central black hole, Sagittarius A*, with new data and methods. Aims: We aim to infer the distribution of the faintest stellar population currently accessible through observations around Sagittarius A*. Methods: We used adaptive optics assisted high angular resolution images obtained with the NACO instrument at the ESO VLT. Through optimised PSF fitting we removed the light from all detected stars above a given magnitude limit. Subsequently we analysed the remaining, diffuse light density. Systematic uncertainties were constrained by the use of data from different observing epochs and obtained with different filters. We show that it is necessary to correct for the diffuse emission from the mini-spiral, which would otherwise lead to a systematically biased light density profile. We used a Paschen α map obtained with the Hubble Space Telescope for this purpose. Results: The azimuthally averaged diffuse surface light density profile within a projected distance of R ≲ 0.5 pc from Sagittarius A* can be described consistently by a single power law with an exponent of Γ = 0.26 ± 0.02 (stat) ± 0.05 (sys), similar to what has been found for the surface number density of faint stars in Paper I. Conclusions: The analysed diffuse light arises from sub-giant and main-sequence stars with Ks ≈ 19-22 with masses of 0.8-1.5 M⊙. These stars can be old enough to be dynamically relaxed. The observed power-law profile and its slope are consistent with the existence of a relaxed stellar cusp around the Milky Way's central black hole. We find that a Nuker law provides an adequate description of the nuclear cluster's intrinsic shape (assuming spherical symmetry). The 3D power-law slope near Sgr A* is γ = 1.13 ± 0.03 (model) ± 0.05 (sys). The stellar density decreases more steeply beyond a break radius of about 3 pc, which corresponds roughly to the radius of influence of the massive black hole. At a distance of 0.01 pc from the black hole, we estimate a stellar mass density of 2.6 ± 0.3 × 10⁷ M⊙ pc⁻³ and a total enclosed stellar mass of 180 ± 30 M⊙. These estimates assume a constant mass-to-light ratio and do not take stellar remnants into account. The fact that a flat projected surface density is observed for old giants at projected distances R ≲ 0.3 pc implies that some mechanism may have altered their appearance or distribution.

  19. Can Sgr A* flares reveal the molecular gas density PDF?

    NASA Astrophysics Data System (ADS)

    Churazov, E.; Khabibullin, I.; Sunyaev, R.; Ponti, G.

    2017-11-01

    Illumination of dense gas in the Central Molecular Zone by powerful X-ray flares from Sgr A* leads to prominent structures in the reflected emission that can be observed long after the end of the flare. By studying this emission, we learn about past activity of the supermassive black hole in our Galactic Center and, at the same time, we obtain unique information on the structure of molecular clouds that is essentially impossible to get by other means. Here we discuss how X-ray data can improve our knowledge of both sides of the problem. Existing data already provide (I) an estimate of the flare age, (II) a model-independent lower limit on the luminosity of Sgr A* during the flare, and (III) an estimate of the total energy emitted during the Sgr A* flare. On the molecular clouds side, the data clearly show a voids-and-walls structure of the clouds and can provide an almost unbiased probe of the mass/density distribution of the molecular gas with hydrogen column densities lower than a few × 10²³ cm⁻². For instance, the probability distribution function of the gas density, PDF(ρ), can be measured this way. Future high energy resolution X-ray missions will provide information on the gas velocities, allowing, for example, a reconstruction of the velocity field structure functions and cross-matching of the X-ray and molecular data based on positions and velocities.

  20. Mathematical estimation of melt depth in conduction mode of laser spot remelting process

    NASA Astrophysics Data System (ADS)

    Hadi, Iraj

    2012-12-01

    A one-dimensional mathematical model based on the front tracking method was developed to predict the melt depth as a function of internal and external parameters of the laser spot remelting process in conduction mode. Power density, pulse duration, and thermophysical properties of the material, including thermal diffusivity, melting point, latent heat, and absorption coefficient, have been taken into account in the model of this article. The validity of the developed model was examined by comparing the theoretical results with experimental welding data for commercially pure nickel and titanium plates. The comparison shows reasonably good agreement between theory and experiment. For the sake of simplicity, a graphical technique was presented to obtain the melt depth of various materials at any arbitrary amount of power density and pulse duration. In the graphical technique, two dimensionless constants are used: the Stefan number (Ste) and an introduced constant named the laser power factor (LPF). Indeed, all of the internal and external parameters have been gathered in the LPF. The effect of power density and pulse duration on the variation of melt depth for different materials such as aluminum, copper, and stainless steel was investigated. Additionally, appropriate expressions were extracted to describe the minimum power density and time to reach the melting point in terms of process parameters. A simple expression is also extracted to estimate the thickness of the mushy zone for alloys.

  1. The distribution of seabirds and fish in relation to ocean currents in the southeastern Chukchi Sea

    USGS Publications Warehouse

    Piatt, John F.; Wells, John L.; MacCharles, Andrea; Fadely, Brian S.; Montevecchi, W.A.; Gaston, A.J.

    1991-01-01

    In late August 1988, we studied the distribution of seabirds in the southeastern Chukchi Sea, particularly in waters near a major seabird colony at Cape Thompson. Foraging areas were characterized using hydrographic data and hydroacoustic surveys for fish. Murres (Uria spp.) and Black-legged Kittiwakes Rissa tridactyla breeding at Cape Thompson fed mostly on Arctic cod, which are known from previous studies to be the most abundant pelagic fish in the region. Our hydroacoustic surveys revealed that pelagic fish were distributed widely, but densities were estimated to be low (e.g., 0.1–10 g·m⁻³) throughout the study area and a few schools were recorded. Large feeding flocks of murres and kittiwakes were observed over fish schools with densities estimated to exceed 15 g·m⁻³. Fish densities were higher in shallow Alaska Coastal Current waters than offshore in Bering Sea waters, and most piscivorous seabirds foraged in coastal waters. Poor kittiwake breeding success and a low frequency of fish in murre and kittiwake stomachs in late August suggested that fish densities were marginal for sustaining breeding seabirds at that time. Planktivorous Least Auklets Aethia pusilla and Parakeet Auklets Cyclorrhynchus psittacula foraged almost exclusively in Bering Sea waters. Short-tailed Shearwaters Puffinus tenuirostris and Tufted Puffins Fratercula cirrhata foraged in transitional waters at the front between Coastal and Bering Sea currents.

  2. Quantum state tomography and fidelity estimation via Phaselift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Yiping; Liu, Huan; Zhao, Qing, E-mail: qzhaoyuping@bit.edu.cn

    Experiments of multi-photon entanglement have been performed by several groups. Obviously, an increase in the photon number for fidelity estimation and quantum state tomography causes a dramatic increase in the elements of the positive operator valued measures (POVMs), which results in a great consumption of time in measurements. In practice, we wish to obtain a good estimation of fidelity and quantum states through as few measurements as possible for multi-photon entanglement. Phaselift provides such a chance to estimate fidelity for entangling states based on less data. In this paper, we would like to show how the Phaselift works for six qubits in comparison to the data given by Pan’s group, i.e., we use a fraction of the data as input to estimate the rest of the data through the obtained density matrix, and thus go beyond simple fidelity analysis. The fidelity bound is also provided for a general Schrödinger cat state. Based on the fidelity bound, we propose an optimal measurement approach which could both reduce the number of copies and keep the fidelity bound gap small. The results demonstrate that the Phaselift can help decrease the measured elements of POVMs for six qubits. Our conclusion is based on the prior knowledge that a pure state is the target state prepared by experiments.

  3. Investigation of estimators of probability density functions

    NASA Technical Reports Server (NTRS)

    Speed, F. M.

    1972-01-01

    Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.

  4. Time-Average Molecular Rayleigh Scattering Technique for Measurement of Velocity, Density, Temperature, and Turbulence Intensity in High Speed Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Mielke, Amy F.; Seasholtz, Richard G.; Elam, Kristie A.; Panda, Jayanta

    2004-01-01

    A molecular Rayleigh scattering based flow diagnostic is developed to measure time average velocity, density, temperature, and turbulence intensity in a 25.4-mm diameter nozzle free jet facility. The spectrum of the Rayleigh scattered light is analyzed using a Fabry-Perot interferometer operated in the static imaging mode. The resulting fringe pattern containing spectral information of the scattered light is recorded using a low noise CCD camera. Nonlinear least squares analysis of the fringe pattern using a kinetic theory model of the Rayleigh scattered light provides estimates of density, velocity, temperature, and turbulence intensity of the gas flow. Resulting flow parameter estimates are presented for an axial scan of subsonic flow at Mach 0.95 for comparison with previously acquired pitot tube data, and axial scans of supersonic flow in an underexpanded screeching jet. The issues related to obtaining accurate turbulence intensity measurements using this technique are discussed.

  5. A case study of the Thunderstorm Research International Project storm of July 11, 1978. I - Analysis of the data base

    NASA Technical Reports Server (NTRS)

    Nisbet, John S.; Barnard, Theresa A.; Forbes, Gregory S.; Krider, E. Philip; Lhermitte, Roger

    1990-01-01

    The data obtained at the time of the Thunderstorm Research International Project storm at the Kennedy Space Center on July 11, 1978, are analyzed in a model-independent manner. The data base included data from three Doppler radars, a lightning detection and ranging system, a network of 25 electric field mills, and rain gages. Electric field measurements were used to analyze the charge moments transferred by lightning flashes, and the data were fitted to Weibull distributions; these were used to estimate statistical parameters of the lightning for both intracloud and cloud-to-ground flashes and to estimate the fraction of the flashes which were below the observation threshold. The displacement and conduction current densities were calculated from electric field measurements between flashes. These values were used to derive the magnitudes and locations of dipole and monopole generators by least-squares fitting the measured Maxwell current densities to the displacement-dominated equations.

  6. Shape Models of Asteroids as a Missing Input for Bulk Density Determinations

    NASA Astrophysics Data System (ADS)

    Hanuš, Josef

    2015-07-01

    To determine a meaningful bulk density of an asteroid, both accurate volume and mass estimates are necessary. The volume can be computed by scaling the size of the 3D shape model to fit the disk-resolved images or stellar occultation profiles, which are available in the literature or through collaborations. This work provides a list of asteroids for which (i) there are already mass estimates with reported uncertainties better than 20% or their mass will most likely be determined in the future from Gaia astrometric observations, and (ii) their 3D shape models are currently unknown. Additional optical lightcurves are necessary to determine the convex shape models of these asteroids. The main aim of this article is to motivate observers to obtain lightcurves of these asteroids, and thus contribute to their shape model determinations. Moreover, a web page, https://asteroid-obs.oca.eu, was created that maintains an up-to-date list of these objects, to ensure efficiency and to avoid any overlapping efforts.

  7. Calculations of Hubbard U from first-principles

    NASA Astrophysics Data System (ADS)

    Aryasetiawan, F.; Karlsson, K.; Jepsen, O.; Schönberger, U.

    2006-09-01

    The Hubbard U of the 3d transition metal series as well as SrVO₃, YTiO₃, Ce, and Gd has been estimated using a recently proposed scheme based on the random-phase approximation. The values obtained are generally in good accord with the values often used in model calculations, but for some cases the estimated values are somewhat smaller than those used in the literature. We have also calculated the frequency-dependent U for some of the materials. The strong frequency dependence of U in some of the cases considered in this paper suggests that the static value of U may not be the most appropriate one to use in model calculations. We have also made comparison with the constrained local density approximation (LDA) method and found some discrepancies in a number of cases. We emphasize that our scheme and the constrained LDA method theoretically ought to give similar results, and the discrepancies may be attributed to technical difficulties in performing calculations based on currently implemented constrained LDA schemes.

  8. Interaction cross sections and matter radii of oxygen isotopes using the Glauber model

    NASA Astrophysics Data System (ADS)

    Ahmad, Suhel; Usmani, A. A.; Ahmad, Shakeb; Khan, Z. A.

    2017-05-01

    Using the Coulomb modified correlation expansion for the Glauber model S matrix, we calculate the interaction cross sections of oxygen isotopes (¹⁶⁻²⁶O) on ¹²C at 1.0 GeV/nucleon. The densities of ¹⁶⁻²⁶O are obtained using (i) the Slater determinants consisting of the harmonic oscillator single-particle wave functions (SDHO) and (ii) the relativistic mean-field approach (RMF). Retaining up to the two-body density term in the correlation expansion, the calculations are performed employing the free as well as the in-medium nucleon-nucleon (NN) scattering amplitude. The in-medium NN amplitude considers the effects arising due to phase variation, higher momentum transfer components, and Pauli blocking. Our main focus in this work is to reveal how one could make the best use of SDHO densities with reference to the RMF one. The results demonstrate that the SDHO densities, along with the in-medium NN amplitude, are able to provide a satisfactory explanation of the experimental data. It is found that, except for ²³,²⁴O, the predicted SDHO matter rms radii of oxygen isotopes closely agree with those obtained using the RMF densities. However, for ²³,²⁴O, our results require reasonably larger SDHO matter rms radii than the RMF values, thereby predicting thicker neutron skins in ²³O and ²⁴O as compared to the RMF ones. In conclusion, the results of the present analysis establish the utility of SDHO densities in predicting fairly reliable estimates of the matter rms radii of neutron-rich nuclei.

  9. Point prevalence and incidence of Mycobacterium tuberculosis complex in captive elephants in the United States of America.

    PubMed

    Feldman, Melissa; Isaza, Ramiro; Prins, Cindy; Hernandez, Jorge

    2013-01-01

    Captive elephants infected with tuberculosis are implicated as an occupational source of zoonotic tuberculosis. However, accurate estimates of prevalence and incidence of elephant tuberculosis from well-defined captive populations are lacking in the literature. Studies published in recent years contain a wide range of prevalence estimates calculated from summary data. Incidence estimates of elephant tuberculosis in captive elephants are not available. This study estimated the annual point prevalence, annual incidence, cumulative incidence, and incidence density of tuberculosis in captive elephants within the USA during the past 52 years. We combined existing elephant census records from captive elephants in the USA with tuberculosis culture results obtained from trunk washes or at necropsy. This data set included 15 years where each elephant was screened annually. Between 1960 and 1996, the annual point prevalence of tuberculosis complex mycobacteria for both species was 0. From 1997 through 2011, the median point prevalence within the Asian elephant population was 5.1%, with a range from 0.3% to 6.7%. The incidence density was 9.7 cases/1000 elephant years (95% CI: 7.0-13.4). In contrast, the annual point prevalence during the same time period within the African elephant population remained 0 and the incidence density was 1.5 cases/1000 elephant years (95% CI: 0.7-4.0). The apparent increase in new cases noted after 1996 resulted from a combination of both index cases and the initiation of mandatory annual tuberculosis screening in 1997 for all the elephants. This study found lower annual point prevalence estimates than previously reported in the literature. These discrepancies in prevalence estimates are primarily due to differences in terminology and calculation methods. Using the same intensive testing regime, the incidence of tuberculosis differed significantly between Asian and African elephants. Accurate and species specific knowledge of prevalence and incidence will inform our efforts to mitigate occupational risks associated with captive elephants in the USA.
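    For readers unfamiliar with the incidence-density calculation used above, a short sketch with an exact Poisson confidence interval; the case count and elephant-years below are hypothetical, not the study's data.

    ```python
    from scipy.stats import chi2

    def incidence_density(cases, time_at_risk, alpha=0.05):
        """Incidence density (rate) with an exact Poisson confidence interval."""
        rate = cases / time_at_risk
        lower = 0.0 if cases == 0 else chi2.ppf(alpha / 2, 2 * cases) / (2 * time_at_risk)
        upper = chi2.ppf(1 - alpha / 2, 2 * (cases + 1)) / (2 * time_at_risk)
        return rate, lower, upper

    # Hypothetical data: 15 incident cases over 1,200 elephant-years of follow-up.
    rate, lo, hi = incidence_density(15, 1_200)
    print(f"{1000 * rate:.1f} cases/1000 elephant-years "
          f"(95% CI: {1000 * lo:.1f}-{1000 * hi:.1f})")
    ```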

  10. The appearance and effects of metallic implants in CT images.

    PubMed

    Kairn, T; Crowe, S B; Fogg, P; Trapp, J V

    2013-06-01

    The computed tomography (CT) imaging artefacts that metallic medical implants produce in surrounding tissues are usually contoured and over-ridden during radiotherapy treatment planning. In cases where radiotherapy treatment beams unavoidably pass though implants, it is especially important to understand the imaging artefacts that may occur within the implants themselves. This study examines CT images of a set of simple metallic objects, immersed in water, in order to evaluate reliability and variability of CT numbers (Hounsfield units, HUs) within medical implants. Model implants with a range of sizes (heights from 2.2 to 49.6 mm), electron densities (from 2.3 to 7.7 times the electron density of water) and effective atomic numbers (from 3.9 to 9.0 times the effective atomic number of water in a CT X-ray beam) were created by stacking metal coins from several currencies. These 'implants' were CT scanned within a large (31.0 cm across) and a small (12.8 cm across) water phantom. Resulting HU values are as much as 50% lower than the result of extrapolating standard electron density calibration data (obtained for tissue and bone densities) up to the metal densities, and there is a 6% difference between the results obtained by scanning with 120 and 140 kVp tube potentials. Profiles through the implants show localised cupping artefacts, within the implants, as well as a gradual decline in HU outside the implants that can cause the implants' sizes to be overestimated by 1.3-9.0 mm. These effects are exacerbated when the implants are scanned in the small phantom or at the side of the large phantom, due to reduced pre-hardening of the X-ray beam in these configurations. These results demonstrate the necessity of over-riding the densities of metallic implants, as well as their artefacts in tissue, in order to obtain accurate radiotherapy dose calculations.

  11. Geostatistical estimation of forest biomass in interior Alaska combining Landsat-derived tree cover, sampled airborne lidar and field observations

    NASA Astrophysics Data System (ADS)

    Babcock, Chad; Finley, Andrew O.; Andersen, Hans-Erik; Pattison, Robert; Cook, Bruce D.; Morton, Douglas C.; Alonzo, Michael; Nelson, Ross; Gregoire, Timothy; Ene, Liviu; Gobakken, Terje; Næsset, Erik

    2018-06-01

    The goal of this research was to develop and examine the performance of a geostatistical coregionalization modeling approach for combining field inventory measurements, strip samples of airborne lidar, and Landsat-based remote sensing data products to predict aboveground biomass (AGB) in interior Alaska's Tanana Valley. The proposed modeling strategy facilitates pixel-level mapping of AGB density predictions across the entire spatial domain. Additionally, the coregionalization framework allows for statistically sound estimation of total AGB for arbitrary areal units within the study area, a key advance in support of diverse management objectives in interior Alaska. This research focuses on appropriate characterization of prediction uncertainty in the form of posterior predictive coverage intervals and standard deviations. Using the framework detailed here, it is possible to quantify estimation uncertainty for any spatial extent, ranging from pixel-level predictions of AGB density to estimates of AGB stocks for the full domain. The lidar-informed coregionalization models consistently outperformed their counterpart lidar-free models in terms of point-level predictive performance and total AGB precision. Additionally, the inclusion of Landsat-derived forest cover as a covariate further improved estimation precision in regions with lower lidar sampling intensity. Our findings also demonstrate that model-based approaches that do not explicitly account for residual spatial dependence can grossly underestimate uncertainty, resulting in falsely precise estimates of AGB. On the other hand, in a geostatistical setting, residual spatial structure can be modeled within a Bayesian hierarchical framework to obtain statistically defensible assessments of uncertainty for AGB estimates.

  12. The mechanics of unrest at Long Valley caldera, California. 2. Constraining the nature of the source using geodetic and micro-gravity data

    USGS Publications Warehouse

    Battaglia, Maurizio; Segall, P.; Roberts, C.

    2003-01-01

    We model the source of inflation of Long Valley caldera by combining geodetic and micro-gravity data. Uplift from GPS and leveling, two-color EDM measurements, and residual gravity change determinations are used to estimate the intrusion geometry, assuming a vertical prolate ellipsoidal source. The U.S. Geological Survey occupied the Long Valley gravity network six times from 1980 to 1985. We reoccupied this network twice, in the summer of 1998 (33 stations), and the summer of 1999 (37 stations). Before gravity data can be used to estimate the density of the intrusion, they must be corrected for the effect of vertical deformation (the free-air effect) and changes in the water table. We use geostatistical techniques to interpolate uplift and water table changes at the gravity stations. The inflation source (a vertical prolate ellipsoid) is located 5.9 km beneath the resurgent dome with an aspect ratio equal to 0.475, a volume change from 1982 to 1999 of 0.136 km³ and a density of around 1700 kg/m³. A bootstrap method was employed to estimate 95% confidence bounds for the parameters of the inflation model. We obtained a range of 0.105-0.187 km³ for the volume change, and 1180-2330 kg/m³ for the density. Our results do not support hydrothermal fluid intrusion as the primary cause of unrest, and confirm the intrusion of silicic magma beneath Long Valley caldera. Failure to account for the ellipsoidal nature of the source biases the estimated source depth by 2.9 km (a 33% increase), the volume change by 0.019 km³ (a 14% increase) and the density by about 1200 kg/m³ (a 40% increase). © 2003 Elsevier B.V. All rights reserved.
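    A heavily simplified sketch of the gravity side of such an analysis: correct the measured gravity change for the free-air effect of uplift (water-table effects are ignored here), convert the residual to an intruded mass with a point-source approximation, and divide by the modeled volume change. The study itself used a prolate ellipsoidal source and geostatistical corrections; the point-source formula and all numbers below are illustrative assumptions, not the study's data.

    ```python
    G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
    FREE_AIR_GRADIENT = -308.6    # microGal per metre of uplift

    # Hypothetical observations at a benchmark directly above the source.
    uplift_m         = 0.25       # uplift between surveys
    observed_dg_uGal = -40.0      # measured gravity change
    depth_m          = 5.9e3      # assumed source depth
    delta_volume_m3  = 0.14e9     # volume change from the deformation model

    # Remove the free-air effect of the uplift to isolate the mass contribution.
    residual_uGal = observed_dg_uGal - FREE_AIR_GRADIENT * uplift_m
    residual_ms2  = residual_uGal * 1e-8          # 1 microGal = 1e-8 m/s^2

    # Point-source approximation: dg = G * dM / depth^2 directly above the source.
    delta_mass = residual_ms2 * depth_m ** 2 / G
    print(f"residual gravity change = {residual_uGal:.1f} microGal")
    print(f"intruded mass           = {delta_mass:.2e} kg")
    print(f"implied density         = {delta_mass / delta_volume_m3:.0f} kg/m^3")
    ```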

  13. Annual Incidence of Snake Bite in Rural Bangladesh

    PubMed Central

    Rahman, Ridwanur; Faiz, M. Abul; Selim, Shahjada; Rahman, Bayzidur; Basher, Ariful; Jones, Alison; d'Este, Catherine; Hossain, Moazzem; Islam, Ziaul; Ahmed, Habib; Milton, Abul Hasnat

    2010-01-01

    Background: Snake bite is a neglected public health problem in the world and one of the major causes of mortality and morbidity in many areas, particularly in the rural tropics. It also poses substantial economic burdens on the snake bite victims due to treatment related expenditure and loss of productivity. An accurate estimate of the risk of snake bite is largely unknown for most countries in the developing world, especially South-East Asia. Methodology/Principal Findings: We undertook a national epidemiological survey to determine the annual incidence density of snake bite among the rural Bangladeshi population. Information on frequency of snake bite and individuals' length of stay in selected households over the preceding twelve months was rigorously collected from the respondents through an interviewer administered questionnaire. Point estimates and confidence intervals of the incidence density of snake bite, weighted and adjusted for the multi-stage cluster sampling design, were obtained. Out of 18,857 study participants, over one year a total of 98 snake bites, including one death, were reported in rural Bangladesh. The estimated incidence density of snake bite is 623.4/100,000 person-years (95% CI 513.4–789.2/100,000 person-years). Biting occurs mostly when individuals are at work. The majority of the victims (71%) receive snake bites to their lower extremities. Eighty-six percent of the victims received some form of management within two hours of snake bite, although only three percent of the victims went directly to either a medical doctor or a hospital. Conclusions/Significance: Incidence density of snake bite in rural Bangladesh is substantially higher than previously estimated. This is likely due to better ascertainment of the incidence through a population based survey. Poor access to health services increases snake bite related morbidity and mortality; therefore, effective public health actions are warranted. PMID:21049056

  14. Spatially explicit models for inference about density in unmarked or partially marked populations

    USGS Publications Warehouse

    Chandler, Richard B.; Royle, J. Andrew

    2013-01-01

    Recently developed spatial capture–recapture (SCR) models represent a major advance over traditional capture–recapture (CR) models because they yield explicit estimates of animal density instead of population size within an unknown area. Furthermore, unlike nonspatial CR methods, SCR models account for heterogeneity in capture probability arising from the juxtaposition of animal activity centers and sample locations. Although the utility of SCR methods is gaining recognition, the requirement that all individuals can be uniquely identified excludes their use in many contexts. In this paper, we develop models for situations in which individual recognition is not possible, thereby allowing SCR concepts to be applied in studies of unmarked or partially marked populations. The data required for our model are spatially referenced counts made on one or more sample occasions at a collection of closely spaced sample units such that individuals can be encountered at multiple locations. Our approach includes a spatial point process for the animal activity centers and uses the spatial correlation in counts as information about the number and location of the activity centers. Camera-traps, hair snares, track plates, sound recordings, and even point counts can yield spatially correlated count data, and thus our model is widely applicable. A simulation study demonstrated that while the posterior mean exhibits frequentist bias on the order of 5–10% in small samples, the posterior mode is an accurate point estimator as long as adequate spatial correlation is present. Marking a subset of the population substantially increases posterior precision and is recommended whenever possible. We applied our model to avian point count data collected on an unmarked population of the northern parula (Parula americana) and obtained a density estimate (posterior mode) of 0.38 (95% CI: 0.19–1.64) birds/ha. Our paper challenges sampling and analytical conventions in ecology by demonstrating that neither spatial independence nor individual recognition is needed to estimate population density—rather, spatial dependence can be informative about individual distribution and density.
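    A minimal simulation of the data-generating model for such spatially referenced counts of unmarked individuals: activity centers from a uniform point process and Poisson counts at traps whose expected values decay with distance under a half-normal kernel. Parameter names and values are illustrative, not those of the northern parula analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # State space: a 10 x 10 unit square; activity centers of N individuals.
    N, side = 50, 10.0
    centers = rng.uniform(0.0, side, size=(N, 2))

    # Traps (e.g., point-count stations) on a 6 x 6 grid, one sampling occasion.
    g = np.linspace(2.0, 8.0, 6)
    traps = np.array([(x, y) for x in g for y in g])

    lam0, sigma = 0.5, 0.6          # baseline encounter rate and spatial scale
    d2 = ((traps[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    lam = lam0 * np.exp(-d2 / (2 * sigma ** 2))   # expected count per individual

    # Observed data: total (unmarked) count at each trap; identities are unknown,
    # so only the spatially correlated totals carry information about density.
    counts = rng.poisson(lam.sum(axis=1))
    print(counts.reshape(6, 6))
    print("true density:", N / side ** 2, "individuals per unit area")
    ```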

  15. A new estimator method for GARCH models

    NASA Astrophysics Data System (ADS)

    Onody, R. N.; Favaro, G. M.; Cazaroto, E. R.

    2007-06-01

    The GARCH(p, q) model is a very interesting stochastic process with widespread applications and a central role in empirical finance. The Markovian GARCH(1, 1) model has only three control parameters, and a much-discussed question is how to estimate them when a time series of some financial asset is given. Besides the maximum likelihood estimator technique, there is another method which uses the variance, the kurtosis, and the autocorrelation time to determine them. We propose here to use the standardized 6th moment. The set of parameters obtained in this way produces a very good probability density function and a much better time autocorrelation function. This is true for both studied indices: the NYSE Composite and the FTSE 100. The probability of return to the origin is investigated at different time horizons for both Gaussian and Laplacian GARCH models. In spite of the fact that these models show almost identical performances with respect to the final probability density function and to the time autocorrelation function, their scaling properties are, however, very different. The Laplacian GARCH model gives a better scaling exponent for the NYSE time series, whereas the Gaussian dynamics fits better the FTSE scaling exponent.
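    A short sketch of the quantities a moment-based estimator of this kind would match: simulate a GARCH(1, 1) series with Gaussian innovations and compute the sample variance, kurtosis, standardized 6th moment, and first autocorrelation of squared returns (parameter values are illustrative only).

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_garch11(omega, alpha, beta, n):
        """Simulate returns r_t = sigma_t * z_t with
        sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
        r = np.empty(n)
        var = omega / (1.0 - alpha - beta)        # start at the stationary variance
        for t in range(n):
            r[t] = np.sqrt(var) * rng.standard_normal()
            var = omega + alpha * r[t] ** 2 + beta * var
        return r

    r = simulate_garch11(omega=1e-6, alpha=0.05, beta=0.90, n=200_000)
    s = r / r.std()

    # Sample moments a moment-matching estimator would use.
    print("variance             :", r.var())
    print("kurtosis             :", (s ** 4).mean())
    print("standardized 6th mom :", (s ** 6).mean())
    r2 = r ** 2 - (r ** 2).mean()
    acf1 = (r2[1:] * r2[:-1]).mean() / (r2 ** 2).mean()
    print("ACF(1) of r^2        :", acf1)
    ```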

  16. Modelling detectability of kiore (Rattus exulans) on Aguiguan, Mariana Islands, to inform possible eradication and monitoring efforts

    USGS Publications Warehouse

    Adams, A.A.Y.; Stanford, J.W.; Wiewel, A.S.; Rodda, G.H.

    2011-01-01

    Estimating the detection probability of introduced organisms during the pre-monitoring phase of an eradication effort can be extremely helpful in informing eradication and post-eradication monitoring efforts, but this step is rarely taken. We used data collected during 11 nights of mark-recapture sampling on Aguiguan, Mariana Islands, to estimate introduced kiore (Rattus exulans Peale) density and detection probability, and evaluated factors affecting detectability to help inform possible eradication efforts. Modelling of 62 captures of 48 individuals resulted in a model-averaged density estimate of 55 kiore/ha. Kiore detection probability was best explained by a model allowing neophobia to diminish linearly (i.e. capture probability increased linearly) until occasion 7, with additive effects of sex and cumulative rainfall over the prior 48 hours. Detection probability increased with increasing rainfall and females were up to three times more likely than males to be trapped. In this paper, we illustrate the type of information that can be obtained by modelling mark-recapture data collected during pre-eradication monitoring and discuss the potential of using these data to inform eradication and post-eradication monitoring efforts. © New Zealand Ecological Society.

  17. Sampling in rugged terrain

    USGS Publications Warehouse

    Dawson, D.K.; Ralph, C. John; Scott, J. Michael

    1981-01-01

    Work in rugged terrain poses some unique problems that should be considered before research is initiated. Besides the obvious physical difficulties of crossing uneven terrain, topography can influence the bird species' composition of a forest and the observer's ability to detect birds and estimate distances. Census results can also be affected by the slower rate of travel on rugged terrain. Density figures may be higher than results obtained from censuses in similar habitat on level terrain because of the greater likelihood of double-recording of individuals and of recording species that sing infrequently. In selecting a census technique, the researcher should weigh the efficiency and applicability of a technique for the objectives of his study in light of the added difficulties posed by rugged terrain. The variable circular-plot method is probably the most effective technique for estimating bird numbers. Bird counts and distance estimates are facilitated because the observer is stationary, and calculations of species' densities take into account differences in effective area covered amongst stations due to variability in terrain or vegetation structure. Institution of precautions that minimize the risk of injury to field personnel can often enhance the observer's ability to detect birds.

  18. Recombination dynamics in InₓGa₁₋ₓN quantum wells—Contribution of excited subband recombination to carrier leakage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulz, T.; Markurt, T.; Albrecht, M.

    2014-11-03

    The recombination dynamics of InₓGa₁₋ₓN single quantum wells are investigated. By comparing the photoluminescence (PL) decay spectra with simulated emission spectra obtained by a Schrödinger-Poisson approach, we give evidence that recombination from higher subbands contributes to the emission of the quantum well at high excitation densities. This recombination path appears as a shoulder on the high-energy side of the spectrum at high charge carrier densities and exhibits decay on the timescale of picoseconds. Due to the lower confinement of the excited subband states, a distinct proportion of the probability density function lies outside the quantum well, thus contributing to charge carrier loss. By estimating the current density in our time-resolved PL experiments, we show that the onset of this loss mechanism occurs in the droop-relevant regime above 20 A/cm².

  19. Thermophysical properties of heat-treated U-7Mo/Al dispersion fuel

    NASA Astrophysics Data System (ADS)

    Cho, Tae Won; Kim, Yeon Soo; Park, Jong Man; Lee, Kyu Hong; Kim, Sunghwan; Lee, Chong Tak; Yang, Jae Ho; Oh, Jang Soo; Sohn, Dong-Seong

    2018-04-01

    In this study, the effects of the interaction layer (IL) on the thermophysical properties of U-7Mo/Al dispersion fuel were examined. Microstructural analyses revealed that ILs were formed uniformly on U-Mo particles during heating of U-7Mo/Al samples. The IL volume fraction was measured by applying image analysis methods. The uranium loadings of the samples were calculated based on the measured meat densities at 298 K. The density of the IL was estimated by using the measured density and IL volume fraction. The thermal diffusivity and heat capacity of the samples after the heat treatment were measured as functions of temperature and of the volume fractions of U-Mo and IL. The thermal conductivity of IL-formed U-7Mo/Al was derived by using the measured thermal diffusivity, heat capacity, and density. The thermal conductivity obtained in the present study was lower than that predicted by the modified Hashin-Shtrikman model, due to the theoretical model's inability to account for the thermal resistance at interfaces between the meat constituents.
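    The conductivity derivation mentioned above rests on the relation k = α·ρ·c_p; a one-line sketch with illustrative (not measured) values:

    ```python
    def thermal_conductivity(diffusivity_m2_s, density_kg_m3, heat_capacity_J_kgK):
        """k = alpha * rho * c_p."""
        return diffusivity_m2_s * density_kg_m3 * heat_capacity_J_kgK

    # Illustrative values only, not measurements from the study.
    k = thermal_conductivity(2.0e-5, 6_500.0, 300.0)
    print(f"k = {k:.0f} W/(m K)")
    ```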

  20. Characterization of a maximum-likelihood nonparametric density estimator of kernel type

    NASA Technical Reports Server (NTRS)

    Geman, S.; Mcclure, D. E.

    1982-01-01

    Kernel-type density estimators are calculated by the method of sieves. Proofs are presented for the characterization theorem: let x(1), x(2), ..., x(n) be a random sample from a population with density f(0); for a bandwidth (sieve) parameter sigma, estimators f of f(0) are defined by (1).
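    A minimal Gaussian-kernel (Parzen-type) estimator of the kind discussed, with the bandwidth playing the role of the sieve parameter; this is a generic sketch, not the specific estimator defined by (1) in the paper.

    ```python
    import numpy as np

    def kernel_density_estimate(sample, x, sigma):
        """Gaussian-kernel estimate of the density at points x, bandwidth sigma."""
        sample = np.asarray(sample)[:, None]          # shape (n, 1)
        x = np.asarray(x)[None, :]                    # shape (1, m)
        k = np.exp(-0.5 * ((x - sample) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        return k.mean(axis=0)                         # average kernel over the sample

    rng = np.random.default_rng(5)
    sample = rng.normal(size=500)                     # data from an unknown density f0
    x = np.linspace(-3, 3, 7)
    print(kernel_density_estimate(sample, x, sigma=0.3).round(3))
    ```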
