Science.gov

Sample records for accurately estimate excess

  1. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...

    EPA Pesticide Factsheets

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test”, and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%).
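
The test-ranking step described above is plain univariate regression: fit bioavailability against each in vitro test's bioaccessibility, then score the test by slope and coefficient of determination. A minimal sketch with illustrative numbers (not the study's data):

```python
import numpy as np

# Hypothetical paired measurements for one in vitro test (values invented):
# bioaccessibility (%) from the extraction test vs. relative bioavailability (%).
bioaccessibility = np.array([35.0, 42.0, 50.0, 58.0, 61.0])
bioavailability = np.array([33.0, 40.0, 48.0, 55.0, 63.0])

# Ordinary least-squares fit: bioavailability ~ slope * bioaccessibility + intercept.
slope, intercept = np.polyfit(bioaccessibility, bioavailability, 1)

# Coefficient of determination (R^2), used alongside the slope to rank tests.
predicted = slope * bioaccessibility + intercept
ss_res = np.sum((bioavailability - predicted) ** 2)
ss_tot = np.sum((bioavailability - bioavailability.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"slope={slope:.2f}, R^2={r_squared:.3f}")
```

A test with a slope near 1 and a high R² predicts bioavailability well from bioaccessibility alone.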

  3. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constrains on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  4. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Astrophysics Data System (ADS)

    Wheeler, K.; Knuth, K.; Castle, P.

    2005-12-01

    and IKONOS imagery and the 3-D volume estimates. The combination of these then allow for a rapid and hopefully very accurate estimation of biomass.

  5. ESTIMATING EXCESS DIETARY EXPOSURES OF YOUNG CHILDREN

    EPA Science Inventory

    Nine children in a daycare facility where the pesticide esfenvalerate was routinely applied were studied to assess excess dietary exposures. Surface wipes, a standard food item (a processed American cheese slice pressed on the surface and handled by the child), an accelerometer reading, and ...

  7. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay

    2005-01-01

    The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.

  8. Preparing Rapid, Accurate Construction Cost Estimates with a Personal Computer.

    ERIC Educational Resources Information Center

    Gerstel, Sanford M.

    1986-01-01

    An inexpensive and rapid method for preparing accurate cost estimates of construction projects in a university setting, using a personal computer, purchased software, and one estimator, is described. The case against defined estimates, the rapid estimating system, and adjusting standard unit costs are discussed. (MLW)

  9. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  11. Accurate point spread function (PSF) estimation for coded aperture cameras

    NASA Astrophysics Data System (ADS)

    Yang, Jingyu; Jiang, Bin; Ma, Jinlong; Sun, Yi; Di, Ming

    2014-10-01

    Accurate point spread function (PSF) estimation for coded aperture cameras is key to deblurring defocused images. There are two main kinds of approaches to estimating the PSF: blind-deconvolution-based methods, and measurement-based methods using point light sources. Neither kind provides accurate and convenient PSFs, owing to the limitations of blind deconvolution and the imperfection of physical point light sources; moreover, PSF measurement is inconvenient in many situations. Inaccurate PSF estimation introduces pseudo-ripple and ringing artifacts that degrade image deconvolution. This paper proposes a novel method of PSF estimation for coded aperture cameras. It is observed and verified that the spatially varying point spread functions are well modeled by the convolution of the aperture pattern with a Gaussian blur of appropriate scale and bandwidth. We use the coded aperture camera to capture a point light source to get a rough estimate of the PSF. PSF estimation is then formulated as the optimization of the scale and bandwidth of a Gaussian blurring kernel so that the blurred coded pattern fits the observed PSF. We also investigate PSF estimation at arbitrary distances from a few observed PSF kernels, which allows us to fully characterize the response of coded imaging systems with limited measurements. Experimental results show that our method accurately estimates PSF kernels and significantly improves deblurring performance.
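
The model described in the abstract (PSF ≈ aperture pattern convolved with a Gaussian) can be sketched as a small fitting problem. Everything below is a synthetic stand-in: the aperture mask is invented, the "observed" PSF is simulated, and only the Gaussian bandwidth is optimized for brevity (the paper fits scale as well):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize_scalar

# Toy coded-aperture pattern (any binary mask serves for this sketch).
aperture = np.zeros((15, 15))
aperture[4:11, 4:11] = 1.0
aperture[6:9, 6:9] = 0.0
aperture /= aperture.sum()

def model_psf(bandwidth):
    """PSF model: aperture pattern blurred by an isotropic Gaussian."""
    return gaussian_filter(aperture, sigma=bandwidth)

# "Observed" PSF from a point-source capture, here simulated with sigma = 1.5.
observed = gaussian_filter(aperture, sigma=1.5)

# Fit the Gaussian bandwidth so the blurred pattern matches the observation.
result = minimize_scalar(
    lambda s: np.sum((model_psf(s) - observed) ** 2),
    bounds=(0.1, 5.0), method="bounded")
print(f"estimated bandwidth: {result.x:.2f}")
```

On this noiseless toy problem the minimizer recovers the generating bandwidth of 1.5.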

  12. Estimates of electronic coupling for excess electron transfer in DNA

    NASA Astrophysics Data System (ADS)

    Voityuk, Alexander A.

    2005-07-01

    Electronic coupling Vda is one of the key parameters that determine the rate of charge transfer through DNA. While there have been several computational studies of Vda for hole transfer, estimates of electronic couplings for excess electron transfer (ET) in DNA remain unavailable. In this paper, an efficient strategy is established for calculating the ET matrix elements between base pairs in a π stack. Two approaches are considered. First, we employ the diabatic-state (DS) method, in which donor and acceptor are represented by radical anions of the canonical base pairs adenine-thymine (AT) and guanine-cytosine (GC). In this approach, similar values of Vda are obtained with the standard 6-31G* and extended 6-31++G** basis sets. Second, the electronic couplings are derived from the lowest unoccupied molecular orbitals (LUMOs) of neutral systems by using the generalized Mulliken-Hush or fragment charge methods. Because the radical-anion states of AT and GC are well reproduced by LUMOs of the neutral base pairs calculated without diffuse functions, the estimated values of Vda are in good agreement with the couplings obtained for radical-anion states using the DS method. However, when the calculation of a neutral stack is carried out with diffuse functions, the LUMOs of the system exhibit dipole-bound character and cannot be used for estimating electronic couplings. Our calculations suggest that the ET matrix elements Vda for models containing intrastrand thymine and cytosine bases are substantially larger than the couplings in complexes with interstrand pyrimidine bases. The matrix elements for excess electron transfer are found to be considerably smaller than the corresponding values for hole transfer and to be very responsive to structural changes in a DNA stack.

  13. Accurate genome relative abundance estimation based on shotgun metagenomic reads.

    PubMed

    Xia, Li C; Cram, Jacob A; Chen, Ting; Fuhrman, Jed A; Sun, Fengzhu

    2011-01-01

    Accurate estimation of microbial community composition based on metagenomic sequencing data is fundamental for subsequent metagenomics analysis. Prevalent estimation methods are mainly based on directly summarizing alignment results or their variants, and often yield biased and/or unstable estimates. We have developed a unified probabilistic framework (named GRAMMy) that explicitly models read assignment ambiguities, genome size biases, and read distributions along the genomes. The maximum likelihood method is employed to compute the Genome Relative Abundance of microbial communities using the Mixture Model theory (GRAMMy). GRAMMy has been demonstrated to give estimates that are accurate and robust across both simulated and real read benchmark datasets. We applied GRAMMy to a collection of 34 metagenomic read sets from four metagenomics projects and identified 99 frequent species (minimally 0.5% abundant in at least 50% of the datasets) in the human gut samples. Our results show substantial improvements over previous studies, such as adjusting the over-estimated abundance of Bacteroides species in human gut samples, by providing a new reference-based strategy for metagenomic sample comparisons. GRAMMy can be used flexibly with many read assignment tools (mapping, alignment, or composition-based), even with low-sensitivity mapping results from huge short-read datasets. It will be increasingly useful as an accurate and robust tool for abundance estimation with the growing size of read sets and the expanding database of reference genomes.
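
The mixture-model idea behind GRAMMy can be sketched as a small EM loop: iterate between assigning ambiguous reads to genomes in proportion to current mixing weights, and re-estimating those weights, then correct for genome length to get relative abundances. The compatibility matrix and genome lengths below are invented for illustration; the real GRAMMy implementation models read distributions and many more details:

```python
import numpy as np

# compat[r, g] = 1 if read r is compatible with genome g (e.g. it maps there).
compat = np.array([
    [1.0, 1.0, 0.0],   # read 0 maps ambiguously to genomes 0 and 1
    [1.0, 0.0, 0.0],   # read 1 maps uniquely to genome 0
    [0.0, 1.0, 1.0],   # read 2 maps ambiguously to genomes 1 and 2
    [0.0, 0.0, 1.0],   # read 3 maps uniquely to genome 2
])
genome_lengths = np.array([2.0e6, 4.0e6, 1.0e6])   # bases

# EM for the mixing proportions pi_g (probability a read comes from genome g).
pi = np.full(3, 1.0 / 3.0)
for _ in range(200):
    weights = compat * pi                                # E-step numerator
    resp = weights / weights.sum(axis=1, keepdims=True)  # read responsibilities
    pi = resp.sum(axis=0) / compat.shape[0]              # M-step update

# Longer genomes emit more reads, so divide by genome length and renormalize
# to turn mixing proportions into genome relative abundances.
abundance = pi / genome_lengths
abundance /= abundance.sum()
print(np.round(abundance, 3))
```

Here the short genome 2 ends up with the highest relative abundance even though it receives no more reads than the others, which is exactly the genome-size correction the abstract describes.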

  14. Accurate absolute GPS positioning through satellite clock error estimation

    NASA Astrophysics Data System (ADS)

    Han, S.-C.; Kwon, J. H.; Jekeli, C.

    2001-05-01

    An algorithm for very accurate absolute positioning through Global Positioning System (GPS) satellite clock estimation has been developed. Using International GPS Service (IGS) precise orbits and measurements, GPS clock errors were estimated at 30-s intervals. Compared to values determined by the Jet Propulsion Laboratory, the agreement was at the level of about 0.1 ns (3 cm). The clock error estimates were then applied to an absolute positioning algorithm in both static and kinematic modes. For the static case, an IGS station was selected and the coordinates were estimated every 30 s. The estimated absolute position coordinates and the known values had a mean difference of up to 18 cm with standard deviation less than 2 cm. For the kinematic case, data obtained every second from a GPS buoy were tested and the result from the absolute positioning was compared to a differential GPS (DGPS) solution. The mean differences between the coordinates estimated by the two methods are less than 40 cm and the standard deviations are less than 25 cm. It was verified that this poorer standard deviation on 1-s position results is due to the clock error interpolation from 30-s estimates with Selective Availability (SA). After SA was turned off, higher-rate clock error estimates (such as 1 s) could be obtained by a simple interpolation with negligible corruption. Therefore, the proposed absolute positioning technique can be used to within a few centimeters' precision at any rate by estimating 30-s satellite clock errors and interpolating them.
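
The final step described above, interpolating 30-s clock-error estimates to a 1-s rate, reduces to simple linear interpolation once SA is off and the errors vary smoothly. A sketch with hypothetical values (not IGS data):

```python
import numpy as np

# Hypothetical 30-s satellite clock-error estimates (nanoseconds).
epochs_30s = np.array([0.0, 30.0, 60.0, 90.0])     # seconds of epoch
clock_err_ns = np.array([2.10, 2.14, 2.19, 2.25])

# With SA off the clock errors vary smoothly, so linear interpolation to a
# 1-s rate introduces negligible corruption (the abstract's finding).
epochs_1s = np.arange(0.0, 91.0, 1.0)
clock_1s = np.interp(epochs_1s, epochs_30s, clock_err_ns)

print(round(clock_1s[15], 2))  # ≈ 2.12, midway between the first two estimates
```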

  15. Accounting for reporting fatigue is required to accurately estimate incidence in voluntary reporting health schemes.

    PubMed

    Gittins, Matthew; McNamee, Roseanne; Holland, Fiona; Carter, Lesley-Anne

    2017-01-01

    Accurate estimation of the true incidence of ill-health is a goal of many surveillance systems. In surveillance schemes including zero reporting to remove ambiguity with nonresponse, reporter fatigue might increase the likelihood of a false zero case report in turn underestimating the true incidence rate and creating a biased downward trend over time. Multilevel zero-inflated negative binomial models were fitted to incidence case reports of three surveillance schemes running between 1996 and 2012 in the United Kingdom. Estimates of the true annual incidence rates were produced by weighting the reported number of cases by the predicted excess zero rate in addition to the within-scheme standard adjustment for response rate and the participation rate. Time since joining the scheme was associated with the odds of excess zero case reports for most schemes, resulting in weaker calendar trends. Estimated incidence rates (95% confidence interval) per 100,000 person years, were approximately doubled to 30 (21-39), 137 (116-157), 33 (27-39), when excess zero-rate adjustment was applied. If we accept that excess zeros are in reality nonresponse by busy reporters, then usual estimates of incidence are likely to be significantly underestimated and previously thought strong downward trends overestimated. Copyright © 2016 Elsevier Inc. All rights reserved.
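
The reweighting step can be illustrated with invented numbers; the actual excess-zero rates come from multilevel zero-inflated negative binomial fits, not the constants used here:

```python
# Illustrative numbers only (not the schemes' data).
reported_cases = 120
response_rate = 0.80        # fraction of report cards actually returned
excess_zero_rate = 0.35     # predicted share of "zero cases" reports that are
                            # really fatigue-driven nonresponse
person_years = 450_000

# Standard response-rate adjustment, plus the excess-zero reweighting:
adjusted_cases = reported_cases / response_rate / (1.0 - excess_zero_rate)
incidence_per_100k = adjusted_cases / person_years * 100_000
print(round(incidence_per_100k, 1))  # ≈ 51.3, vs. 33.3 with the
                                     # response-rate adjustment alone
```

Dividing by (1 − excess-zero rate) is what roughly doubles the estimated incidence rates quoted in the abstract.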

  16. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  17. Fast and accurate estimation for astrophysical problems in large databases

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.

    2010-10-01

    A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems

  18. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.

  19. Towards SI-traceable radio occultation excess phase processing with integrated uncertainty estimation for climate applications

    NASA Astrophysics Data System (ADS)

    Innerkofler, Josef; Pock, Christian; Kirchengast, Gottfried; Schwaerz, Marc; Jaeggi, Adrian; Schwarz, Jakob

    2016-04-01

    The GNSS Radio Occultation (RO) measurement technique is highly valuable for climate monitoring of the atmosphere as it provides accurate and precise measurements in the troposphere and stratosphere regions with global coverage, long-term stability, and virtually all-weather capability. The novel Reference Occultation Processing System (rOPS), currently under development at the WEGC at the University of Graz, aims to process raw RO measurements into essential climate variables, such as temperature, pressure, and tropospheric water vapor, in a way which is SI-traceable to the universal time standard and which includes rigorous uncertainty propagation. As part of this rOPS climate-quality processing system, accurate atmospheric excess phase profiles with new approaches integrating uncertainty propagation are derived from the raw occultation tracking data and orbit data. Regarding the latter, highly accurate orbit positions and velocities of the GNSS transmitter satellites and the RO receiver satellites in low Earth orbit (LEO) need to be determined, in order to enable high accuracy of the excess phase profiles. Using several representative test days of GPS orbit data from the CODE and IGS archives, which are available at accuracies of about 3 cm (position) / 0.03 mm/s (velocity), and employing the Bernese 5.2 and Napeos 3.3.1 software packages for the LEO orbit determination of the CHAMP, GRACE, and MetOp RO satellites, we achieved robust SI-traced LEO orbit uncertainty estimates of about 5 cm (position) / 0.05 mm/s (velocity) for the daily orbits, including estimates of systematic uncertainty bounds and of propagated random uncertainties. For COSMIC RO satellites, we found decreased accuracy estimates near 10-15 cm (position) / 0.1-0.15 mm/s (velocity), since the characteristics of the small COSMIC satellite platforms and antennas provide somewhat less favorable orbit determination conditions. We present the setup of how we (I) used the Bernese and Napeos package in mutual

  20. Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.

    2008-01-01

    Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present day O3 radiative forcing produced by models.

  1. Towards accurate and precise estimates of lion density.

    PubMed

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2016-12-13

    Reliable estimates of animal density are fundamental to our understanding of ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation biology since wildlife authorities rely on these figures to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging species such as carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores. African lions (Panthera leo) provide an excellent example as although abundance indices have been shown to produce poor inferences, they continue to be used to estimate lion density and inform management and policy. In this study we adapt a Bayesian spatially explicit capture-recapture model to estimate lion density in the Maasai Mara National Reserve (MMNR) and surrounding conservancies in Kenya. We utilize sightings data from a three-month survey period to produce statistically rigorous spatial density estimates. Overall posterior mean lion density was estimated to be 16.85 (posterior standard deviation = 1.30) lions over one year of age per 100 km² with a sex ratio of 2.2♀:1♂. We argue that such methods should be developed, improved and favored over less reliable methods such as track and call-up surveys. We caution against trend analyses based on surveys of differing reliability and call for a unified framework to assess lion numbers across their range in order for better informed management and policy decisions to be made. This article is protected by copyright. All rights reserved.

  2. Accurate position estimation methods based on electrical impedance tomography measurements

    NASA Astrophysics Data System (ADS)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers.

  3. Accurate estimators of correlation functions in Fourier space

    NASA Astrophysics Data System (ADS)

    Sefusatti, E.; Crocce, M.; Scoccimarro, R.; Couchman, H. M. P.

    2016-08-01

    Efficient estimators of Fourier-space statistics for large numbers of objects rely on fast Fourier transforms (FFTs), which are affected by aliasing from unresolved small-scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy in the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates for, e.g., the power spectrum when the desired systematic biases are well below the per cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher-order interpolation kernels than the standard Cloud-In-Cell algorithm results in a significant reduction of the remaining images. We show that combining fourth-order interpolation with interlacing gives very accurate Fourier amplitudes and phases of density perturbations. This results in power spectrum and bispectrum estimates that have systematic biases below 0.01 per cent all the way to the Nyquist frequency of the grid, thus maximizing the use of unbiased Fourier coefficients for a given grid size and greatly reducing systematics for applications to large cosmological data sets.
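
The interlacing idea (combine a standard grid with one shifted by half a cell; a phase factor in Fourier space then cancels the odd alias images) can be sketched in one dimension. This is a schematic with Cloud-In-Cell assignment and random particle positions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
box, ngrid = 1.0, 64
npart = 1000
pos = rng.random(npart) * box
H = box / ngrid  # grid spacing

def cic_assign(positions, shift=0.0):
    """1D Cloud-In-Cell mass assignment, optionally on a half-cell-shifted grid."""
    delta = np.zeros(ngrid)
    x = (positions / H - shift) % ngrid
    i = np.floor(x).astype(int)
    frac = x - i
    np.add.at(delta, i % ngrid, 1.0 - frac)      # weight to the left cell
    np.add.at(delta, (i + 1) % ngrid, frac)      # weight to the right cell
    return delta

# Interlacing: average the FFT of the standard grid with the FFT of the
# half-cell-shifted grid, including the phase factor exp(i k H / 2); odd
# alias images enter the two grids with opposite sign and cancel.
d0 = np.fft.rfft(cic_assign(pos))
d1 = np.fft.rfft(cic_assign(pos, shift=0.5))
m = np.arange(d0.size)                       # mode index
phase = np.exp(1j * np.pi * m / ngrid)       # exp(i k H / 2) in grid units
d_interlaced = 0.5 * (d0 + d1 * phase)
```

Dividing the result by the Fourier transform of the CIC kernel (and using a higher-order kernel) would complete the estimator described in the abstract.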

  4. Accurate age estimation in small-scale societies

    PubMed Central

    Smith, Daniel; Gerbault, Pascale; Dyble, Mark; Migliano, Andrea Bamberg; Thomas, Mark G.

    2017-01-01

    Precise estimation of age is essential in evolutionary anthropology, especially to infer population age structures and understand the evolution of human life history diversity. However, in small-scale societies, such as hunter-gatherer populations, time is often not referred to in calendar years, and accurate age estimation remains a challenge. We address this issue by proposing a Bayesian approach that accounts for age uncertainty inherent to fieldwork data. We developed a Gibbs sampling Markov chain Monte Carlo algorithm that produces posterior distributions of ages for each individual, based on a ranking order of individuals from youngest to oldest and age ranges for each individual. We first validate our method on 65 Agta foragers from the Philippines with known ages, and show that our method generates age estimations that are superior to previously published regression-based approaches. We then use data on 587 Agta collected during recent fieldwork to demonstrate how multiple partial age ranks coming from multiple camps of hunter-gatherers can be integrated. Finally, we exemplify how the distributions generated by our method can be used to estimate important demographic parameters in small-scale societies: here, age-specific fertility patterns. Our flexible Bayesian approach will be especially useful to improve cross-cultural life history datasets for small-scale societies for which reliable age records are difficult to acquire. PMID:28696282
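
The core sampling idea (draw each individual's age from its elicited range, truncated by the current ages of its neighbours in the youngest-to-oldest ranking) can be sketched as a toy Gibbs sweep. The ranges below are invented, and the real method uses a full Bayesian model over multiple partial rankings:

```python
import random

random.seed(42)

# Hypothetical individuals ordered youngest -> oldest, each with an elicited
# age range in years (invented for illustration).
ranges = [(10, 20), (15, 30), (25, 40)]
ages = [12.0, 22.0, 30.0]                 # feasible starting configuration
samples = [[] for _ in ranges]

for sweep in range(5000):
    for i, (lo, hi) in enumerate(ranges):
        # Conditional support: inside the individual's own range AND between
        # the current ages of the rank-order neighbours.
        lower = max(lo, ages[i - 1] if i > 0 else lo)
        upper = min(hi, ages[i + 1] if i < len(ranges) - 1 else hi)
        ages[i] = random.uniform(lower, upper)
        samples[i].append(ages[i])

# Posterior mean age per individual, from the collected Gibbs samples.
posterior_means = [sum(s) / len(s) for s in samples]
print([round(m, 1) for m in posterior_means])
```

Every sample respects both the age ranges and the ranking, so the posterior means are automatically ordered; the distributions (not just the means) are what feed the demographic analyses mentioned above.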

  5. How utilities can achieve more accurate decommissioning cost estimates

    SciTech Connect

    Knight, R.

    1999-07-01

    The number of commercial nuclear power plants that are undergoing decommissioning coupled with the economic pressure of deregulation has increased the focus on adequate funding for decommissioning. The introduction of spent-fuel storage and disposal of low-level radioactive waste into the cost analysis places even greater concern as to the accuracy of the fund calculation basis. The size and adequacy of the decommissioning fund have also played a major part in the negotiations for transfer of plant ownership. For all of these reasons, it is important that the operating plant owner reduce the margin of error in the preparation of decommissioning cost estimates. To date, all of these estimates have been prepared via the building block method. That is, numerous individual calculations defining the planning, engineering, removal, and disposal of plant systems and structures are performed. These activity costs are supplemented by the period-dependent costs reflecting the administration, control, licensing, and permitting of the program. This method will continue to be used in the foreseeable future until adequate performance data are available. The accuracy of the activity cost calculation is directly related to the accuracy of the inventory of plant system components, piping and equipment, and plant structural composition. Typically, it is left up to the cost-estimating contractor to develop this plant inventory. The data are generated by searching and analyzing property asset records, plant databases, piping and instrumentation drawings, piping system isometric drawings, and component assembly drawings. However, experience has shown that these sources may not be up to date, discrepancies may exist, there may be missing data, and the level of detail may not be sufficient. Again, typically, the time constraints associated with the development of the cost estimate preclude perfect resolution of the inventory questions. Another problem area in achieving accurate cost

  6. Accurate bolus arrival time estimation using piecewise linear model fitting

    NASA Astrophysics Data System (ADS)

    Abdou, Elhassan; de Mey, Johan; De Ridder, Mark; Vandemeulebroucke, Jef

    2017-02-01

    Dynamic contrast-enhanced computed tomography (DCE-CT) is an emerging radiological technique, which consists in acquiring a rapid sequence of CT images, shortly after the injection of an intravenous contrast agent. The passage of the contrast agent in a tissue results in a varying CT intensity over time, recorded in time-attenuation curves (TACs), which can be related to the contrast supplied to that tissue via the supplying artery to estimate the local perfusion and permeability characteristics. The time delay between the arrival of the contrast bolus in the feeding artery and the tissue of interest, called the bolus arrival time (BAT), needs to be determined accurately to enable reliable perfusion analysis. Its automated identification is however highly sensitive to noise. We propose an accurate and efficient method for estimating the BAT from DCE-CT images. The method relies on a piecewise linear TAC model with four segments and suitable parameter constraints for limiting the range of possible values. The model is fitted to the acquired TACs in a multiresolution fashion using an iterative optimization approach. The performance of the method was evaluated on simulated and real perfusion data of lung and rectum tumours. In both cases, the method was found to be stable, leading to average accuracies in the order of the temporal resolution of the dynamic sequence. For reasonable levels of noise, the results were found to be comparable to those obtained using a previously proposed method, employing a full search algorithm, but requiring an order of magnitude more computation time.
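The piecewise linear fitting idea can be sketched in a few lines. The paper fits a four-segment TAC model with parameter constraints and a multiresolution iterative optimization; the simplified sketch below uses only two segments (a flat baseline followed by a linear upslope) and an exhaustive search over the breakpoint, with an invented synthetic TAC.

```python
# Simplified two-segment sketch of bolus-arrival-time (BAT) estimation by
# piecewise linear fitting of a time-attenuation curve (TAC). The paper's
# model has four segments and constrained parameters; this is illustrative.

def fit_bat(times, tac):
    """Return (bat, sse) for the breakpoint minimizing the two-segment SSE."""
    n = len(times)
    best_bat, best_sse = None, float("inf")
    for k in range(1, n - 1):              # candidate breakpoint index
        base = sum(tac[:k]) / k            # flat baseline level
        xs = [times[i] - times[k] for i in range(k, n)]
        ys = [tac[i] - base for i in range(k, n)]
        sxx = sum(x * x for x in xs)
        slope = sum(x * y for x, y in zip(xs, ys)) / sxx if sxx else 0.0
        sse = sum((tac[i] - base) ** 2 for i in range(k))
        sse += sum((y - slope * x) ** 2 for x, y in zip(xs, ys))
        if sse < best_sse:
            best_bat, best_sse = times[k], sse
    return best_bat, best_sse

# synthetic TAC: flat at 10 HU until frame 5, then rising 8 HU per frame
times = list(range(12))
tac = [10.0] * 5 + [10.0 + 8.0 * (t - 5) for t in range(5, 12)]
bat, sse = fit_bat(times, tac)
```

On this noiseless synthetic curve the breakpoint is recovered exactly; on real, noisy TACs the extra segments and constraints of the full model are what keep the estimate stable.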

  7. Efficient floating diffuse functions for accurate characterization of the surface-bound excess electrons in water cluster anions.

    PubMed

    Zhang, Changzhe; Bu, Yuxiang

    2017-01-25

    In this work, the effect of diffuse function types (atom-centered diffuse functions versus floating functions and s-type versus p-type diffuse functions) on the structures and properties of three representative water cluster anions featuring a surface-bound excess electron is studied and we find that an effective combination of such two kinds of diffuse functions can not only reduce the computational cost but also, most importantly, considerably improve the accuracy of results and even avoid incorrect predictions of spectra and the EE shape. Our results indicate that (a) simple augmentation of atom-centered diffuse functions is beneficial for the vertical detachment energy convergence, but it leads to very poor descriptions for the singly occupied molecular orbital (SOMO) and lowest unoccupied molecular orbital (LUMO) distributions of the water cluster anions featuring a surface-bound excess electron and thus a significant ultraviolet spectrum redshift; (b) the ghost-atom-based floating diffuse functions can not only contribute to accurate electronic calculations of the ground state but also avoid poor and even incorrect descriptions of the SOMO and the LUMO induced by excessive augmentation of atom-centered diffuse functions; (c) the floating functions can be realized by ghost atoms and their positions could be determined through an optimization routine along the dipole moment vector direction. In addition, both the s- and p-type floating functions are necessary to supplement in the basis set which are responsible for the ground (s-type character) and excited (p-type character) states of the surface-bound excess electron, respectively. The exponents of the diffuse functions should also be determined to make the diffuse functions cover the main region of the excess electron distribution. Note that excessive augmentation of such diffuse functions is redundant and even can lead to unreasonable LUMO characteristics.

  8. Estimation of bone permeability using accurate microstructural measurements.

    PubMed

    Beno, Thoma; Yoon, Young-June; Cowin, Stephen C; Fritton, Susannah P

    2006-01-01

    While interstitial fluid flow is necessary for the viability of osteocytes, it is also believed to play a role in bone's mechanosensory system by shearing bone cell membranes or causing cytoskeleton deformation and thus activating biochemical responses that lead to the process of bone adaptation. However, the fluid flow properties that regulate bone's adaptive response are poorly understood. In this paper, we present an analytical approach to determine the degree of anisotropy of the permeability of the lacunar-canalicular porosity in bone. First, we estimate the total number of canaliculi emanating from each osteocyte lacuna based on published measurements from parallel-fibered shaft bones of several species (chick, rabbit, bovine, horse, dog, and human). Next, we determine the local three-dimensional permeability of the lacunar-canalicular porosity for these species using recent microstructural measurements and adapting a previously developed model. Results demonstrated that the number of canaliculi per osteocyte lacuna ranged from 41 for human to 115 for horse. Permeability coefficients were found to be different in three local principal directions, indicating local orthotropic symmetry of bone permeability in parallel-fibered cortical bone for all species examined. For the range of parameters investigated, the local lacunar-canalicular permeability varied more than three orders of magnitude, with the osteocyte lacunar shape and size along with the 3-D canalicular distribution determining the degree of anisotropy of the local permeability. This two-step theoretical approach to determine the degree of anisotropy of the permeability of the lacunar-canalicular porosity will be useful for accurate quantification of interstitial fluid movement in bone.

  9. Infiltration-Excess Overland Flow Estimated by TOPMODEL for the Conterminous United States

    USGS Publications Warehouse

    Wolock, David M.

    2003-01-01

    This 5-kilometer resolution raster (grid) dataset for the conterminous United States represents the average percentage of infiltration-excess overland flow in total streamflow estimated by the watershed model TOPMODEL. Infiltration-excess overland flow is simulated in TOPMODEL as precipitation that exceeds the infiltration capacity of the soil and enters the stream channel.
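The infiltration-excess (Hortonian) rule described above is simple to state as code: rainfall in excess of the soil's infiltration capacity becomes overland flow. This is an illustrative simplification, not the TOPMODEL implementation, and the numbers are invented.

```python
# Minimal sketch of infiltration-excess overland flow: precipitation that
# exceeds the infiltration capacity of the soil enters the stream channel.

def infiltration_excess(precip_mm, capacity_mm):
    """Overland flow per time step: rainfall in excess of capacity."""
    return [max(0.0, p - capacity_mm) for p in precip_mm]

precip = [2.0, 12.0, 30.0, 5.0]      # mm per step (illustrative)
capacity = 10.0                      # soil infiltration capacity, mm/step
overland = infiltration_excess(precip, capacity)
# fraction of total water contributed by overland flow, assuming the
# remainder infiltrates and reaches the stream as subsurface flow
fraction = sum(overland) / sum(precip)
```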

  10. Fast and Accurate Learning When Making Discrete Numerical Estimates

    PubMed Central

    Sanborn, Adam N.; Beierholm, Ulrik R.

    2016-01-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
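The two decision functions contrasted above can be sketched directly: converting a discrete posterior into a single response either by drawing a sample from it or by taking its maximum. The toy posterior values below are invented for illustration.

```python
# Two decision rules for turning an uncertain (discrete) posterior into a
# numerical estimate: posterior sampling versus posterior maximum (MAP).
import random

posterior = {3: 0.1, 4: 0.2, 5: 0.5, 6: 0.2}   # toy P(count | data)

def map_estimate(post):
    """Maximum of the posterior distribution."""
    return max(post, key=post.get)

def sample_estimate(post, rng):
    """Draw a single response from the posterior distribution."""
    values, weights = zip(*post.items())
    return rng.choices(values, weights=weights, k=1)[0]

rng = random.Random(0)
map_resp = map_estimate(posterior)               # always the mode
samples = [sample_estimate(posterior, rng) for _ in range(10000)]
```

The MAP rule always returns the mode, while repeated sampling reproduces the posterior's spread, which is what lets experiments distinguish between the two.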

  11. Experimental Demonstration of a Cheap and Accurate Phase Estimation

    NASA Astrophysics Data System (ADS)

    Rudinger, Kenneth; Kimmel, Shelby; Lobser, Daniel; Maunz, Peter

    2017-05-01

    We demonstrate an experimental implementation of robust phase estimation (RPE) to learn the phase of a single-qubit rotation on a trapped Yb+ ion qubit. We show this phase can be estimated with an uncertainty below 4 × 10⁻⁴ rad using as few as 176 total experimental samples, and our estimates exhibit Heisenberg scaling. Unlike standard phase estimation protocols, RPE neither assumes perfect state preparation and measurement, nor requires access to ancillae. We cross-validate the results of RPE with the more resource-intensive protocol of gate set tomography.
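The core of a robust-phase-estimation protocol can be sketched without the experimental details: at stage j the rotation is applied 2^j times, its phase is measured only modulo 2π, and the integer winding is chosen to stay consistent with the previous stage's estimate, halving the uncertainty each stage (the origin of Heisenberg scaling). The sketch below is noiseless and purely illustrative; the experiment additionally tolerates state-preparation and measurement error.

```python
# Noiseless sketch of the robust-phase-estimation update rule.
import math

def rpe(theta_true, depth=10):
    """Recover theta from phases of 2^j-fold repetitions, measured mod 2*pi."""
    est = 0.0
    for j in range(depth):
        k = 2 ** j
        # "measurement": phase of k repetitions, known only in (-pi, pi]
        phi = math.atan2(math.sin(k * theta_true), math.cos(k * theta_true))
        # choose winding m so (phi + 2*pi*m)/k is nearest the running estimate
        m = round((k * est - phi) / (2 * math.pi))
        est = (phi + 2 * math.pi * m) / k
    return est

theta = 1.2345
est = rpe(theta)
```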

  12. Experimental Demonstration of a Cheap and Accurate Phase Estimation

    DOE PAGES

    Rudinger, Kenneth; Kimmel, Shelby; Lobser, Daniel; ...

    2017-05-11

    We demonstrate an experimental implementation of robust phase estimation (RPE) to learn the phase of a single-qubit rotation on a trapped Yb+ ion qubit. Here, we show this phase can be estimated with an uncertainty below 4 × 10-4 rad using as few as 176 total experimental samples, and our estimates exhibit Heisenberg scaling. Unlike standard phase estimation protocols, RPE neither assumes perfect state preparation and measurement, nor requires access to ancillae. We crossvalidate the results of RPE with the more resource-intensive protocol of gate set tomography.

  13. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    EPA Science Inventory

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  14. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    USDA-ARS?s Scientific Manuscript database

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...

  15. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    EPA Science Inventory

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  16. Using inpainting to construct accurate cut-sky CMB estimators

    NASA Astrophysics Data System (ADS)

    Gruetjen, H. F.; Fergusson, J. R.; Liguori, M.; Shellard, E. P. S.

    2017-02-01

    The direct evaluation of manifestly optimal, cut-sky cosmic microwave background (CMB) power spectrum and bispectrum estimators is numerically very costly, due to the presence of inverse-covariance filtering operations. This justifies the investigation of alternative approaches. In this work, we mostly focus on an inpainting algorithm that was introduced in recent CMB analyses to cure cut-sky suboptimalities of bispectrum estimators. First, we show that inpainting can equally be applied to the problem of unbiased estimation of power spectra. We then compare the performance of a novel inpainted CMB temperature power spectrum estimator to the popular apodized pseudo-Cl (PCL) method and demonstrate, both numerically and with analytic arguments, that inpainted power spectrum estimates significantly outperform PCL estimates. Finally, we study the case of cut-sky bispectrum estimators, comparing the performance of three different approaches: inpainting, apodization and a novel low-l cleaning scheme. Providing an analytic argument for why the local shape is typically most affected, we mainly focus on local-type non-Gaussianity. Our results show that inpainting allows us to achieve optimality also for bispectrum estimation, but interestingly also demonstrate that appropriate apodization, in conjunction with low-l cleaning, can lead to comparable accuracy.

  17. Accurate estimation of solvation free energy using polynomial fitting techniques.

    PubMed

    Shyu, Conrad; Ytreberg, F Marty

    2011-01-15

    This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data to improve the accuracy of the free energy difference estimates. Previously, we introduced the use of a polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem, 2009, 30, 2297). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and nonequidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest that these polynomial techniques, especially with use of nonequidistant λ values, improve the accuracy for ΔF estimates without demanding additional simulations. We also provide general guidelines for use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation are provided via http://www.phys.uidaho.edu/ytreberg/software. Copyright © 2010 Wiley Periodicals, Inc.
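The polynomial-fitting idea reduces to: fit a polynomial to the measured slopes dF/dλ at a few (possibly nonequidistant) λ values, then integrate that polynomial analytically over [0, 1] to obtain ΔF. The sketch below fits a quadratic exactly through three synthetic slope samples by solving the small linear system directly; the data come from a known quadratic, so the recovered ΔF is exact. All values are illustrative, not from the paper.

```python
# Fit a quadratic a0 + a1*x + a2*x^2 through three (lambda, slope) points
# and integrate it analytically over [0, 1] to estimate Delta F.

def solve3(A, b):
    """Gaussian elimination for a 3x3 system (no pivoting safeguards)."""
    A = [row[:] for row in A]
    b = b[:]
    n = 3
    for i in range(n):
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= f * A[i][k]
            b[j] -= f * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][k] * x[k] for k in range(i + 1, n))) / A[i][i]
    return x

lams = [0.0, 0.4, 1.0]                        # nonequidistant lambda values
slopes = [1 + 2 * l + 3 * l * l for l in lams]  # synthetic dF/dlambda data
A = [[1.0, l, l * l] for l in lams]
a0, a1, a2 = solve3(A, slopes)
# analytic integral of a0 + a1*x + a2*x^2 over [0, 1]
delta_F = a0 + a1 / 2 + a2 / 3
```

Because the integration is done on the fitted polynomial rather than by a quadrature rule over the raw samples, nonequidistant λ spacing costs nothing, which is the property the abstract highlights.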

  18. 49 CFR Appendix G to Part 222 - Excess Risk Estimates for Public Highway-Rail Grade Crossings

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 4 2013-10-01 2013-10-01 false Excess Risk Estimates for Public Highway-Rail... HIGHWAY-RAIL GRADE CROSSINGS Pt. 222, App. G Appendix G to Part 222—Excess Risk Estimates for Public Highway-Rail Grade Crossings Ban Effects/Train Horn Effectiveness Warning type Excess risk estimate Nation...

  19. 49 CFR Appendix G to Part 222 - Excess Risk Estimates for Public Highway-Rail Grade Crossings

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 4 2012-10-01 2012-10-01 false Excess Risk Estimates for Public Highway-Rail... HIGHWAY-RAIL GRADE CROSSINGS Pt. 222, App. G Appendix G to Part 222—Excess Risk Estimates for Public Highway-Rail Grade Crossings Ban Effects/Train Horn Effectiveness Warning type Excess risk estimate Nation...

  20. How Accurately Do Spectral Methods Estimate Effective Elastic Thickness?

    NASA Astrophysics Data System (ADS)

    Perez-Gussinye, M.; Lowry, A. R.; Watts, A. B.; Velicogna, I.

    2002-12-01

    The effective elastic thickness, Te, is an important parameter that has the potential to provide information on the long-term thermal and mechanical properties of the lithosphere. Previous studies have estimated Te using both forward and inverse (spectral) methods. While there is generally good agreement between the results obtained using these methods, spectral methods are limited because they depend on the spectral estimator and the window size chosen for analysis. In order to address this problem, we have used a multitaper technique which yields optimal estimates of the bias and variance of the Bouguer coherence function relating topography and gravity anomaly data. The technique has been tested using realistic synthetic topography and gravity. Synthetic data were generated assuming surface and sub-surface (buried) loading of an elastic plate with fractal statistics consistent with real data sets. The cases of uniform and spatially varying Te are examined. The topography and gravity anomaly data consist of 2000x2000 km grids sampled at 8 km interval. The bias in the Te estimate is assessed from the difference between the true Te value and the mean from analyzing 100 overlapping windows within the 2000x2000 km data grids. For the case in which Te is uniform, the bias and variance decrease with window size and increase with increasing true Te value. In the case of a spatially varying Te, however, there is a trade-off between spatial resolution and variance. With increasing window size the variance of the Te estimate decreases, but the spatial changes in Te are smeared out. We find that for a Te distribution consisting of a strong central circular region of Te=50 km (radius 600 km) and progressively smaller Te towards its edges, the 800x800 and 1000x1000 km window gave the best compromise between spatial resolution and variance.
Our studies demonstrate that assumed stationarity of the relationship between gravity and topography data yields good results even in

  1. Accurate feature detection and estimation using nonlinear and multiresolution analysis

    NASA Astrophysics Data System (ADS)

    Rudin, Leonid; Osher, Stanley

    1994-11-01

    A program for feature detection and estimation using nonlinear and multiscale analysis was completed. The state-of-the-art edge detection was combined with multiscale restoration (as suggested by the first author) and robust results in the presence of noise were obtained. Successful applications to numerous images of interest to DOD were made. Also, a new market in the criminal justice field was developed, based in part on this work.

  2. Fast and Accurate Estimates of Divergence Times from Big Data.

    PubMed

    Mello, Beatriz; Tao, Qiqing; Tamura, Koichiro; Kumar, Sudhir

    2017-01-01

    Ongoing advances in sequencing technology have led to an explosive expansion in the molecular data available for building increasingly larger and more comprehensive timetrees. However, Bayesian relaxed-clock approaches frequently used to infer these timetrees impose a large computational burden and discourage critical assessment of the robustness of inferred times to model assumptions, influence of calibrations, and selection of optimal data subsets. We analyzed eight large, recently published, empirical datasets to compare time estimates produced by RelTime (a non-Bayesian method) with those reported by using Bayesian approaches. We find that RelTime estimates are very similar to Bayesian approaches, yet RelTime requires orders of magnitude less computational time. This means that the use of RelTime will enable greater rigor in molecular dating, because faster computational speeds encourage more extensive testing of the robustness of inferred timetrees to prior assumptions (models and calibrations) and data subsets. Thus, RelTime provides a reliable and computationally thrifty approach for dating the tree of life using large-scale molecular datasets.

  3. Accurate tempo estimation based on harmonic + noise decomposition

    NASA Astrophysics Data System (ADS)

    Alonso, Miguel; Richard, Gael; David, Bertrand

    2006-12-01

    We present an innovative tempo estimation system that processes acoustic audio signals and does not use any high-level musical knowledge. Our proposal relies on a harmonic + noise decomposition of the audio signal by means of a subspace analysis method. Then, a technique to measure the degree of musical accentuation as a function of time is developed and separately applied to the harmonic and noise parts of the input signal. This is followed by a periodicity estimation block that calculates the salience of musical accents for a large number of potential periods. Next, a multipath dynamic programming searches among all the potential periodicities for the most consistent prospects through time, and finally the most energetic candidate is selected as tempo. Our proposal is validated using a manually annotated test database containing 961 music signals from various musical genres. In addition, the performance of the algorithm under different configurations is compared. The robustness of the algorithm when processing signals of degraded quality is also measured.
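The periodicity-estimation step above scores many candidate periods of a musical-accent signal. A common textbook simplification (not the paper's subspace-based method) is autocorrelation of an onset-strength envelope: the lag with the largest correlation is taken as the beat period. The envelope below is a synthetic impulse train, purely for illustration.

```python
# Autocorrelation-based period picking over an onset-strength envelope.

def best_period(onsets, min_lag, max_lag):
    """Lag in [min_lag, max_lag] with maximal autocorrelation."""
    n = len(onsets)
    best, best_score = min_lag, float("-inf")
    for lag in range(min_lag, max_lag + 1):
        score = sum(onsets[i] * onsets[i - lag] for i in range(lag, n))
        if score > best_score:
            best, best_score = lag, score
    return best

# synthetic accent envelope: an impulse every 8 frames (illustrative)
onsets = [1.0 if i % 8 == 0 else 0.0 for i in range(128)]
period = best_period(onsets, 2, 32)
```

Real systems must additionally resolve octave ambiguities (lag 16 scores nearly as high as lag 8 here), which is what the multipath dynamic programming stage in the abstract addresses.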

  4. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    USGS Publications Warehouse

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.
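The study's validation step regresses in vitro bioaccessibility results on in vivo bioavailability and judges each test by its slope and coefficient of determination. A minimal ordinary-least-squares sketch of that comparison is below; the paired percentages are invented illustrative values, not the published soil measurements.

```python
# Simple least-squares regression with slope, intercept, and R^2.

def ols(x, y):
    """Slope, intercept, and R^2 of a simple least-squares fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

# hypothetical paired values: relative bioavailability (%) measured in
# quail vs in vitro bioaccessibility (%) from one extraction test
bioavailability = [33, 41, 50, 55, 63]
bioaccessibility = [40, 47, 58, 62, 70]
slope, intercept, r2 = ols(bioavailability, bioaccessibility)
```

A positive slope with high R^2, as in the RBALP pH 2.5 and OSU IVG tests, is what qualifies an in vitro test as a usable proxy for the animal feeding study.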

  5. Bioaccessibility tests accurately estimate bioavailability of lead to quail.

    PubMed

    Beyer, W Nelson; Basta, Nicholas T; Chaney, Rufus L; Henry, Paula F P; Mosby, David E; Rattner, Barnett A; Scheckel, Kirk G; Sprague, Daniel T; Weber, John S

    2016-09-01

    Hazards of soil-borne lead (Pb) to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, the authors measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from 5 Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of approximately 50%. Treatment of 2 of the soils with phosphorus (P) significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in 6 in vitro tests and regressed on bioavailability: the relative bioavailability leaching procedure at pH 1.5, the same test conducted at pH 2.5, the Ohio State University in vitro gastrointestinal method, the urban soil bioaccessible lead test, the modified physiologically based extraction test, and the waterfowl physiologically based extraction test. All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the relative bioavailability leaching procedure at pH 2.5 and Ohio State University in vitro gastrointestinal tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite, and tertiary Pb phosphate) and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb, and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb. Environ Toxicol Chem 2016;35:2311-2319. Published 2016 Wiley Periodicals Inc. on behalf of

  6. 49 CFR Appendix G to Part 222 - Excess Risk Estimates for Public Highway-Rail Grade Crossings

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false Excess Risk Estimates for Public Highway-Rail... HIGHWAY-RAIL GRADE CROSSINGS Pt. 222, App. G Appendix G to Part 222—Excess Risk Estimates for Public Highway-Rail Grade Crossings Ban Effects/Train Horn Effectiveness [Summary table] Warning type Excess risk...

  7. Revised estimates of influenza-associated excess mortality, United States, 1995 through 2005

    PubMed Central

    Foppa, Ivo M; Hossain, Md Monir

    2008-01-01

    Background Excess mortality due to seasonal influenza is thought to be substantial. However, influenza may often not be recognized as cause of death. Imputation methods are therefore required to assess the public health impact of influenza. The purpose of this study was to obtain estimates of monthly excess mortality due to influenza that are based on an epidemiologically meaningful model. Methods and Results U.S. monthly all-cause mortality, 1995 through 2005, was hierarchically modeled as a Poisson variable with a mean that linearly depends both on seasonal covariates and on influenza-certified mortality. It also allowed for overdispersion to account for extra variation that is not captured by the Poisson error. The coefficient associated with influenza-certified mortality was interpreted as the ratio of total influenza mortality to influenza-certified mortality. Separate models were fitted for four age categories (<18, 18–49, 50–64, 65+). Bayesian parameter estimation was performed using Markov Chain Monte Carlo methods. For the eleven year study period, a total of 260,814 (95% CI: 201,011–290,556) deaths was attributed to influenza, corresponding to an annual average of 23,710, or 0.91% of all deaths. Conclusion Annual estimates for influenza mortality were highly variable from year to year, but they were systematically lower than previously published estimates. The excellent fit of our model with the data suggests the validity of our estimates. PMID:19116016
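The structural idea of the model above can be sketched: monthly all-cause deaths are (approximately) Poisson with a mean linear in a seasonal baseline and in influenza-certified deaths, and the coefficient on certified deaths is read as the ratio of total to certified influenza mortality. The paper fits this hierarchically with MCMC and overdispersion; the sketch below instead recovers the coefficient from synthetic data by least squares with the seasonal baseline treated as known. All numbers are invented.

```python
# Recover beta (total/certified influenza mortality ratio) from synthetic
# monthly counts generated under the linear-mean model described above.
import math
import random

rng = random.Random(42)
beta_true = 4.0
months = 132                                   # 11 years of monthly data
baseline = [180000 + 20000 * math.cos(2 * math.pi * m / 12)
            for m in range(months)]            # known seasonal baseline
# winter months carry most influenza-certified deaths
certified = [rng.randrange(0, 3000) if m % 12 in (0, 1, 2, 11)
             else rng.randrange(0, 100) for m in range(months)]
# Poisson counts with large means are well approximated by a normal draw
deaths = [rng.gauss(b + beta_true * c, math.sqrt(b + beta_true * c))
          for b, c in zip(baseline, certified)]

beta_hat = (sum(c * (d - b) for b, c, d in zip(baseline, certified, deaths))
            / sum(c * c for c in certified))
excess_flu_deaths = beta_hat * sum(certified)  # influenza-attributable total
```

The full Bayesian treatment matters in practice because the seasonal baseline is not known and the extra-Poisson variation widens the credible intervals; this sketch only shows why the regression coefficient has the "multiplier on certified deaths" interpretation.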

  8. 49 CFR Appendix G to Part 222 - Excess Risk Estimates for Public Highway-Rail Grade Crossings

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Excess Risk Estimates for Public Highway-Rail Grade Crossings G Appendix G to Part 222 Transportation Other Regulations Relating to Transportation... HIGHWAY-RAIL GRADE CROSSINGS Pt. 222, App. G Appendix G to Part 222—Excess Risk Estimates for Public...

  9. [Estimation of the excess of lung cancer mortality risk associated to environmental tobacco smoke exposure of hospitality workers].

    PubMed

    López, M José; Nebot, Manel; Juárez, Olga; Ariza, Carles; Salles, Joan; Serrahima, Eulàlia

    2006-01-14

    To estimate the excess lung cancer mortality risk associated with environmental tobacco smoke (ETS) exposure among hospitality workers. The estimation was done using objective measures in several hospitality settings in Barcelona. Vapour phase nicotine was measured in several hospitality settings. These measurements were used to estimate the excess lung cancer mortality risk associated with ETS exposure for a 40-year working life, using the formula developed by Repace and Lowrey. Excess lung cancer mortality risk associated with ETS exposure was higher than 145 deaths per 100,000 workers in all places studied, except for cafeterias in hospitals, where excess lung cancer mortality risk was 22 per 100,000. In discotheques, for comparison, excess lung cancer mortality risk was 1,733 deaths per 100,000 workers. Hospitality workers are exposed to ETS levels related to a very high excess lung cancer mortality risk. These data confirm that ETS control measures are needed to protect hospitality workers.

  10. Comparison of 2 methods for estimating the prevalences of inadequate and excessive iodine intakes

    PubMed Central

    Trumbo, Paula R; Spungen, Judith H; Dwyer, Johanna T; Carriquiry, Alicia L; Zimmerman, Thea P; Swanson, Christine A; Murphy, Suzanne P

    2016-01-01

    Background: Prevalences of iodine inadequacy and excess are usually evaluated by comparing the population distribution of urinary iodine concentration (UIC) in spot samples with established UIC cutoffs. To our knowledge, until now, dietary intake data have not been assessed for this purpose. Objective: Our objective was to compare 2 methods for evaluating the prevalence of iodine inadequacy and excess in sex- and life stage–specific subgroups of the US population: one that uses UIC cutoffs, and one that uses iodine intake cutoffs. Design: By using the iodine concentrations of foods measured in the US Food and Drug Administration’s Total Diet Study (TDS), dietary intake data from the NHANES 2003–2010, and a file that maps each NHANES food to a TDS food with similar ingredients, we estimated each NHANES participant’s iodine intake from each NHANES food as the mean iodine concentration of the corresponding TDS food in samples gathered over the same 2-y period. We calculated prevalences of iodine inadequacy and excess in each sex- and life stage–specific subgroup by both the UIC cutoff method and the iodine intake cutoff method—using the UIC values and dietary intakes reported for NHANES participants who provided both types of data—and compared the prevalences across methods. Results: We found lower prevalences of iodine inadequacy across all sex- and life stage–specific subgroups with the iodine intake cutoff method than with the UIC cutoff method; for pregnant females, the respective prevalences were 5.0% and 37.9%. For children aged ≤8 y, the prevalence of excessive iodine intake was high by either method. Conclusions: The consideration of dietary iodine intake from all sources may provide a more complete understanding of population prevalences of iodine inadequacy and excess and thus better inform dietary guidance than consideration of UIC alone. Methods of adjusting UIC for within-person variation are needed to improve the accuracy of prevalence

  11. Comparison of 2 methods for estimating the prevalences of inadequate and excessive iodine intakes.

    PubMed

    Juan, WenYen; Trumbo, Paula R; Spungen, Judith H; Dwyer, Johanna T; Carriquiry, Alicia L; Zimmerman, Thea P; Swanson, Christine A; Murphy, Suzanne P

    2016-09-01

    Prevalences of iodine inadequacy and excess are usually evaluated by comparing the population distribution of urinary iodine concentration (UIC) in spot samples with established UIC cutoffs. To our knowledge, until now, dietary intake data have not been assessed for this purpose. Our objective was to compare 2 methods for evaluating the prevalence of iodine inadequacy and excess in sex- and life stage-specific subgroups of the US population: one that uses UIC cutoffs, and one that uses iodine intake cutoffs. By using the iodine concentrations of foods measured in the US Food and Drug Administration's Total Diet Study (TDS), dietary intake data from the NHANES 2003-2010, and a file that maps each NHANES food to a TDS food with similar ingredients, we estimated each NHANES participant's iodine intake from each NHANES food as the mean iodine concentration of the corresponding TDS food in samples gathered over the same 2-y period. We calculated prevalences of iodine inadequacy and excess in each sex- and life stage-specific subgroup by both the UIC cutoff method and the iodine intake cutoff method-using the UIC values and dietary intakes reported for NHANES participants who provided both types of data-and compared the prevalences across methods. We found lower prevalences of iodine inadequacy across all sex- and life stage-specific subgroups with the iodine intake cutoff method than with the UIC cutoff method; for pregnant females, the respective prevalences were 5.0% and 37.9%. For children aged ≤8 y, the prevalence of excessive iodine intake was high by either method. The consideration of dietary iodine intake from all sources may provide a more complete understanding of population prevalences of iodine inadequacy and excess and thus better inform dietary guidance than consideration of UIC alone. Methods of adjusting UIC for within-person variation are needed to improve the accuracy of prevalence assessments based on UIC. © 2016 American Society for Nutrition.
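The intake-cutoff method described above amounts to a simple calculation once usual intakes are estimated: the prevalence of inadequacy is the fraction of intakes below the Estimated Average Requirement (EAR) and the prevalence of excess is the fraction above the Tolerable Upper Intake Level (UL). The cutoffs shown are the US adult values for iodine (EAR 95 µg/day, UL 1,100 µg/day); the intake list is invented for illustration.

```python
# Prevalence of inadequate and excessive iodine intake by the cutoff method.

EAR_UG, UL_UG = 95.0, 1100.0       # US adult iodine EAR and UL, ug/day

def prevalences(intakes_ug):
    """Fractions of usual intakes below the EAR and above the UL."""
    n = len(intakes_ug)
    inadequate = sum(1 for i in intakes_ug if i < EAR_UG) / n
    excessive = sum(1 for i in intakes_ug if i > UL_UG) / n
    return inadequate, excessive

# hypothetical usual intakes (ug/day) for one sex/life-stage subgroup
intakes = [60, 110, 150, 90, 300, 1250, 80, 200, 500, 1400]
inadequate, excessive = prevalences(intakes)
```

In the study the equivalent UIC-based calculation compares spot urinary iodine concentrations against UIC cutoffs instead; the abstract's point is that the two methods can disagree sharply, e.g. 5.0% versus 37.9% inadequacy for pregnant females.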

  12. Parental and Child Factors Associated with Under-Estimation of Children with Excess Weight in Spain.

    PubMed

    de Ruiter, Ingrid; Olmedo-Requena, Rocío; Jiménez-Moleón, José Juan

    2017-07-10

    Objective Understanding obesity misperception and associated factors can improve strategies to increase obesity identification and intervention. We investigate underestimation of child excess weight with a broader perspective, incorporating perceptions, views, and psychosocial aspects associated with obesity. Methods This study used cross-sectional data from the Spanish National Health Survey in 2011-2012 for children aged 2-14 years who are overweight or obese. Percentages of parental misperceived excess weight were calculated. Crude and adjusted analyses were performed for both child and parental factors analyzing associations with underestimation. Results Two- to five-year-olds had the highest prevalence of misperceived overweight or obesity, at around 90%. In the 10-14-year-old age group, approximately 63% of overweight teens were misperceived as normal weight, as were 35.7% of obese males and 40% of obese females. Child gender did not affect underestimation, whereas a younger age did. Aspects of child social and mental health were associated with underestimation, as was short sleep duration. Exercise, weekend TV and videogames, and food habits had no effect on underestimation. Fathers were more likely to misperceive their child's weight status; however, parents' age had no effect. Smokers and parents with excess weight were less likely to misperceive their child's weight status. Parents being on a diet also decreased the odds of underestimation. Conclusions for practice This study identifies some characteristics of both parents and children that are associated with underestimation of child excess weight. These characteristics can be considered in primary care, in prevention strategies, and in further research.

  13. Accurate Non-parametric Estimation of Recent Effective Population Size from Segments of Identity by Descent.

    PubMed

    Browning, Sharon R; Browning, Brian L

    2015-09-03

    Existing methods for estimating historical effective population size from genetic data have been unable to accurately estimate effective population size during the most recent past. We present a non-parametric method for accurately estimating recent effective population size by using inferred long segments of identity by descent (IBD). We found that inferred segments of IBD contain information about effective population size from around 4 generations to around 50 generations ago for SNP array data and to over 200 generations ago for sequence data. In human populations that we examined, the estimates of effective size were approximately one-third of the census size. We estimate the effective population size of European-ancestry individuals in the UK four generations ago to be eight million and the effective population size of Finland four generations ago to be 0.7 million. Our method is implemented in the open-source IBDNe software package.

  14. Accurate Non-parametric Estimation of Recent Effective Population Size from Segments of Identity by Descent

    PubMed Central

    Browning, Sharon R.; Browning, Brian L.

    2015-01-01

    Existing methods for estimating historical effective population size from genetic data have been unable to accurately estimate effective population size during the most recent past. We present a non-parametric method for accurately estimating recent effective population size by using inferred long segments of identity by descent (IBD). We found that inferred segments of IBD contain information about effective population size from around 4 generations to around 50 generations ago for SNP array data and to over 200 generations ago for sequence data. In human populations that we examined, the estimates of effective size were approximately one-third of the census size. We estimate the effective population size of European-ancestry individuals in the UK four generations ago to be eight million and the effective population size of Finland four generations ago to be 0.7 million. Our method is implemented in the open-source IBDNe software package. PMID:26299365

  15. Estimation of potential excess cancer incidence in pediatric 201Tl imaging.

    PubMed

    Kaste, Sue C; Waszilycsak, George L; McCarville, M Beth; Daw, Najat C

    2010-01-01

    Little information is available regarding doses of ionizing radiation from medical imaging in the growing population of children undergoing therapy for cancer who are at risk of developing second cancers. The purpose of our study was to estimate the potential excess lifetime cancer incidence and mortality associated with thallium bone imaging in pediatric patients. We retrospectively reviewed the medical records of pediatric patients treated between August 1991 and December 2003 for newly diagnosed osteosarcoma who underwent 201Tl imaging as part of the treatment protocol. According to age at diagnosis and doses of 201Tl, we estimated the excess cancer incidence and cancer mortality for boys and girls at 5 and 15 years old. The study cohort consisted of 73 patients, 32 males (median age at diagnosis, 14.8 years; age range, 8.1-20.1 years) and 41 females (median age at diagnosis, 13.3 years; age range, 6.0-20.7 years). Patients underwent a total of three 201Tl studies with a median dose of 4.4 mCi (162.8 MBq) (range, 2.2-8.4 mCi [81.4-310.8 MBq]) per study. Total median cumulative patient radiation dose for 201Tl studies was 18.6 rem (186 mSv) (range, 8.4-44.2 rem [84-442 mSv]) for males and 21.5 rem (215 mSv) (range, 7.0-43.8 rem [70-438 mSv]) for females. Estimated excess cancer incidence was 6.0 per 100 (male) and 13.0 per 100 (female) if exposed by 5 years of age; 2.0 per 100 (male) and 3.1 per 100 (female) by 15 years of age. Estimated excess cancer mortality was 3.0 per 100 for males and 5.2 per 100 for females at 5 years of age; 1.0 per 100 (male) and 1.4 per 100 (female) exposed at 15 years of age. Further reduction of doses in younger patients is needed to consider 201Tl a viable option for imaging osteosarcoma.

  16. LSimpute: accurate estimation of missing values in microarray data with least squares methods.

    PubMed

    Bø, Trond Hellem; Dysvik, Bjarte; Jonassen, Inge

    2004-02-20

    Microarray experiments generate data sets with information on the expression levels of thousands of genes in a set of biological samples. Unfortunately, such experiments often produce multiple missing expression values, normally due to various experimental problems. As many algorithms for gene expression analysis require a complete data matrix as input, the missing values have to be estimated in order to analyze the available data. Alternatively, genes and arrays can be removed until no missing values remain. However, for genes or arrays with only a small number of missing values, it is desirable to impute those values. For the subsequent analysis to be as informative as possible, it is essential that the estimates for the missing gene expression values are accurate. A small amount of badly estimated missing values in the data might be enough for clustering methods, such as hierarchical clustering or K-means clustering, to produce misleading results. Thus, accurate methods for missing value estimation are needed. We present novel methods for estimation of missing values in microarray data sets that are based on the least squares principle, and that utilize correlations between both genes and arrays. For this set of methods, we use the common reference name LSimpute. We compare the estimation accuracy of our methods with the widely used KNNimpute on three complete data matrices from public data sets by randomly knocking out data (labeling as missing). From these tests, we conclude that our LSimpute methods produce estimates that consistently are more accurate than those obtained using KNNimpute. Additionally, we examine a more classic approach to missing value estimation based on expectation maximization (EM). We refer to our EM implementations as EMimpute, and the estimate errors using the EMimpute methods are compared with those our novel methods produce. The results indicate that on average, the estimates from our best performing LSimpute method are at least as
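
The least-squares idea behind LSimpute can be illustrated with a minimal single-value sketch (my own paraphrase, not the published algorithm, which pools several regressor genes and also exploits array correlations):

```python
import numpy as np

def ls_impute_one(data, gene, sample):
    """Impute data[gene, sample] via least squares on the best-correlated gene.

    Minimal sketch of the least-squares principle behind LSimpute; the
    published method combines many regressor genes and array correlations.
    """
    obs = ~np.isnan(data[gene])              # samples where the target is observed
    best_r, best_g = 0.0, None
    for g in range(data.shape[0]):
        if g == gene or np.isnan(data[g]).any():
            continue                         # only complete genes may regress
        r = np.corrcoef(data[gene, obs], data[g, obs])[0, 1]
        if abs(r) > abs(best_r):
            best_r, best_g = r, g
    # ordinary least squares: target ~ a * regressor + b on observed samples
    a, b = np.polyfit(data[best_g, obs], data[gene, obs], 1)
    return a * data[best_g, sample] + b

# toy matrix: gene 1 is linearly related to gene 0; knock out one value
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
data = np.vstack([2 * x + 1, 4 * x + 3, rng.normal(size=10)])
data[1, 5] = np.nan
imputed = ls_impute_one(data, gene=1, sample=5)
```

Because gene 0 is perfectly correlated with the target's observed values here, the regression recovers the knocked-out value exactly; with noisy regressors the fit is only approximate, which is why LSimpute averages over several of them.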

  17. Simultaneous transmission of accurate time and stable frequency through bidirectional channel over telecommunication infrastructure with excessive spans

    NASA Astrophysics Data System (ADS)

    Vojtech, Josef; Smotlacha, Vladimir; Skoda, Pavel

    2015-09-01

    In this paper, we present simultaneous transmission of accurate time and stable frequency over a 306 km long fiber link. The fiber link belongs to the Time and Frequency infrastructure that is being gradually developed and that shares its fiber footprint with a data network. The link was originally deployed with wavelength-division multiplexing systems for the C and L bands, but it has recently been upgraded to support an 800 GHz wide super-channel with a single signal path for both directions. This bidirectional super-channel spans two long segments with attenuations of 28 and 25 dB.

  18. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique is more efficient than conventional searching methods for achieving the coarse frequency estimate (locating the peak of the FFT amplitude spectrum). Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
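
The coarse-then-fine strategy can be sketched as follows. This is my illustration, not the paper's algorithm: the coarse step is the FFT peak, and the fine step paraphrases the modified zero-crossing idea as a plain crossing count on the band-pass-isolated fundamental (the published method is more refined and reaches the quoted 0.032 Hz accuracy).

```python
import numpy as np

def estimate_frequency(signal, fs):
    """Coarse FFT-peak estimate refined by a zero-crossing count (sketch)."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal))
    spec[0] = 0.0                                # ignore the DC bin
    k = int(np.argmax(spec))                     # coarse step: FFT peak bin
    f_coarse = k * fs / n

    # fine step: keep a narrow band around the peak, invert, count crossings
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    keep = np.abs(freqs - f_coarse) < 2.0 * fs / n
    fund = np.fft.irfft(np.where(keep, np.fft.rfft(signal), 0.0), n)
    sign = np.signbit(fund)
    crossings = np.count_nonzero(sign[1:] != sign[:-1])
    f_fine = crossings * fs / (2.0 * n)          # two crossings per cycle
    return f_coarse, f_fine

# synthetic test tone with a 3rd-harmonic component (values are made up)
fs, f0 = 1000.0, 50.3
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 3 * f0 * t)
f_coarse, f_fine = estimate_frequency(x, fs)
```

Both estimates land within one FFT-bin width of the true 50.3 Hz; the fine step avoids the full interpolation search that conventional refinement methods require.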

  19. Estimating the binary fraction of central stars of planetary nebulae using the infrared excess method

    NASA Astrophysics Data System (ADS)

    Douchin, D.; De Marco, O.; Frew, D. J.; Jacoby, G. H.; Fitzgerald, M.; Jasniewicz, G.; Moe, M.; Passy, J. C.; Hillwig, T.; Harmer, D.

    2014-04-01

    There is no quantitative theory to explain why as many as 80% of all planetary nebulae are non-spherical. The Binary Hypothesis states that a companion to the progenitor of a central star of a planetary nebula is required to shape nebulae that are not spherical or mildly elliptical, implying that many single post-AGB stars do not make a PN at all. A way to test this hypothesis is to estimate the binary fraction of central stars of planetary nebulae and to compare it with that of the main-sequence population. Preliminary results from the infrared excess technique indicate that the binary fraction of central stars of planetary nebulae is higher than that of the main sequence, implying that PNe could preferentially form via a binary channel. I will present new results from a search for red and infrared flux excess in an extended sample of central stars of planetary nebulae and compare the improved estimate of the PN binary fraction with that of main-sequence stars.

  20. Sample Size Requirements for Accurate Estimation of Squared Semi-Partial Correlation Coefficients.

    ERIC Educational Resources Information Center

    Algina, James; Moulder, Bradley C.; Moser, Barry K.

    2002-01-01

    Studied the sample size requirements for accurate estimation of squared semi-partial correlation coefficients through simulation studies. Results show that the sample size necessary for adequate accuracy depends on: (1) the population squared multiple correlation coefficient (p squared); (2) the population increase in p squared; and (3) the…

  2. Children can accurately monitor and control their number-line estimation performance.

    PubMed

    Wall, Jenna L; Thompson, Clarissa A; Dunlosky, John; Merriman, William E

    2016-10-01

    Accurate monitoring and control are essential for effective self-regulated learning. These metacognitive abilities may be particularly important for developing math skills, such as when children are deciding whether a math task is difficult or whether they made a mistake on a particular item. The present experiments investigate children's ability to monitor and control their math performance. Experiment 1 assessed task- and item-level monitoring while children performed a number line estimation task. Children in 1st, 2nd, and 4th grade (N = 59) estimated the location of numbers on small- and large-scale number lines and judged their confidence in each estimate. Consistent with their performance, children were more confident in their small-scale estimates than their large-scale estimates. Experiments 2 (N = 54) and 3 (N = 85) replicated this finding in new samples of 1st, 2nd, and 4th graders and assessed task- and item-level control. When asked which estimates they wanted the experimenter to evaluate for a reward, children tended to select estimates associated with lower error and higher confidence. Thus, children can accurately monitor their performance during number line estimation and use their monitoring to control their subsequent performance. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Do We Know Whether Researchers and Reviewers are Estimating Risk and Benefit Accurately?

    PubMed

    Hey, Spencer Phillips; Kimmelman, Jonathan

    2016-10-01

    Accurate estimation of risk and benefit is integral to good clinical research planning, ethical review, and study implementation. Some commentators have argued that various actors in clinical research systems are prone to biased or arbitrary risk/benefit estimation. In this commentary, we suggest the evidence supporting such claims is very limited. Most prior work has imputed risk/benefit beliefs based on past behavior or goals, rather than directly measuring them. We describe an approach, forecast analysis, that would enable direct and effective measurement of the quality of risk/benefit estimation. We then consider some objections and limitations to the forecasting approach.

  4. Towards an accurate estimation of the isosteric heat of adsorption - A correlation with the potential theory.

    PubMed

    Askalany, Ahmed A; Saha, Bidyut B

    2017-03-15

    Accurate estimation of the isosteric heat of adsorption is mandatory for good modeling of adsorption processes. In this paper, a thermodynamic formalism for the adsorbed-phase volume, treated as a function of adsorption pressure and temperature, is proposed for precise estimation of the isosteric heat of adsorption. The isosteric heat of adsorption estimated using the new correlation has been compared with measured values for several carefully selected adsorbent-refrigerant pairs from the open literature. Results showed that the proposed isosteric heat of adsorption correlation fits the experimentally measured values better than the Clausius-Clapeyron equation.
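
For context, the standard thermodynamic route that the Clausius-Clapeyron estimate truncates (textbook derivation; the paper's specific adsorbed-phase volume model v_a(P,T) is not reproduced here):

```latex
% Clapeyron-type relation at constant uptake W, with gas- and
% adsorbed-phase specific volumes v_g and v_a:
Q_{st} = T\,(v_g - v_a)\left(\frac{\partial P}{\partial T}\right)_{W}
% Substituting the ideal-gas v_g = RT/P while retaining v_a:
Q_{st} = -R\left(\frac{\partial \ln P}{\partial (1/T)}\right)_{W}
         - T\,v_a\left(\frac{\partial P}{\partial T}\right)_{W}
% Setting v_a = 0 recovers the Clausius-Clapeyron estimate that the
% proposed correlation is compared against.
```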

  5. Requirements for accurate estimation of anisotropic material parameters by magnetic resonance elastography: A computational study.

    PubMed

    Tweten, D J; Okamoto, R J; Bayly, P V

    2017-01-17

    To establish the essential requirements for characterization of a transversely isotropic material by magnetic resonance elastography (MRE), three methods for characterizing nearly incompressible, transversely isotropic (ITI) materials were used to analyze data from closed-form expressions for traveling waves, finite-element (FE) simulations of waves in homogeneous ITI material, and FE simulations of waves in heterogeneous material. Key properties are the complex shear modulus μ2, the shear anisotropy ϕ = μ1/μ2 - 1, and the tensile anisotropy ζ = E1/E2 - 1. Each method provided good estimates of ITI parameters when both slow and fast shear waves with multiple propagation directions were present. No method gave accurate estimates when the displacement field contained only slow shear waves, only fast shear waves, or waves with only a single propagation direction. Methods based on directional filtering are robust to noise and include explicit checks of propagation and polarization. Curl-based methods led to more accurate estimates in low-noise conditions. Parameter estimation in heterogeneous materials is challenging for all methods. Multiple shear waves, both slow and fast, with different propagation directions must be present in the displacement field for accurate parameter estimates in ITI materials. Experimental design and data analysis can ensure that these requirements are met. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  6. Modified transfer path analysis considering transmissibility functions for accurate estimation of vibration source

    NASA Astrophysics Data System (ADS)

    Kim, Ba-Leum; Jung, Jin-Young; Oh, Il-Kwon

    2017-06-01

    In this study, we developed a modified transfer path analysis (MTPA) method to more accurately estimate the operational force of the main vibration source in a complicated system subjected to multiple vibration sources, base excitation, and several disturbances. In the proposed method, transmissibility functions are adopted to compensate for disturbances due to base excitation and to reject forces transferred from other vibration sources. The MTPA method was verified numerically using a simple beam model and was applied in practice to estimate the vibration forces of a compressor in an outdoor air-conditioner unit. The present results show that the MTPA method can predict the pure operational forces of the compressor regardless of the vibration sources due to a rotating fan and base excitations. The proposed MTPA method has the important advantage that it estimates the operational force of the main vibration source more accurately by properly rejecting other vibration sources and disturbances.

  7. On the accurate estimation of gap fraction during daytime with digital cover photography

    NASA Astrophysics Data System (ADS)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method for obtaining gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated raw DCP image, which is corrected for scattering effects by canopies, and a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived by the novel method to diverse REVs, solar zenith angles, and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REVs from 0 to -5. The novel method showed little variation in gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. The perforated-panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method yielded accurate and consistent gap fractions across different hole sizes, gap fractions, and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring leaf area index (LAI) precisely and validating satellite remote sensing LAI products efficiently.

  8. [A proposal for a new definition of excess mortality associated with influenza-epidemics and its estimation].

    PubMed

    Takahashi, M; Tango, T

    2001-05-01

    As methods for estimating excess mortality associated with influenza epidemics, the Serfling cyclical regression model and the Kawai and Fukutomi model with seasonal indices have been proposed. Excess mortality under the old definition (i.e., the number of deaths actually recorded in excess of the number expected on the basis of past seasonal experience) includes random error, the portion of variation attributable to chance, and it disregards the seasonal range of random variation in mortality. In this paper, we propose a new definition of excess mortality associated with influenza epidemics and a new estimation method that address these questions within the Kawai and Fukutomi framework. The new definition and estimation method were developed as follows. Factors producing variation in mortality in months with influenza epidemics may be divided into two groups: (1) influenza itself, and (2) other factors (practically random variation). The range of variation in mortality due to the latter (the normal range) can be estimated from months without influenza epidemics. Excess mortality is then defined as deaths above the normal range. Because the new method accounts for the variation in mortality in months without influenza epidemics, it provides reasonable estimates of excess mortality by separating out the portion due to random variation. A further feature is that the proposed estimate can serve as a criterion for a test of statistical significance.
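
The "deaths above the normal range" definition can be sketched numerically. This is my own illustration of the idea, with an assumed ~95% normal range (z = 1.96); the paper's estimation of the range from seasonal indices is more elaborate.

```python
import numpy as np

def excess_mortality(observed, baseline_samples, z=1.96):
    """Excess deaths under the 'beyond the normal range' definition (sketch).

    The normal range is estimated from mortality in influenza-free months;
    only deaths above its upper bound count as excess. z = 1.96 (a ~95%
    range) is an assumption of this sketch, not the paper's choice.
    """
    mu = np.mean(baseline_samples)
    upper = mu + z * np.std(baseline_samples, ddof=1)
    return max(0.0, observed - upper)

baseline = [100.0, 102.0, 98.0, 101.0, 99.0]   # influenza-free months (made up)
excess = excess_mortality(150.0, baseline)     # well above the normal range
```

Observed counts inside the normal range yield zero excess, which is exactly how the new definition separates chance variation from the influenza signal.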

  9. [Guidelines for Accurate and Transparent Health Estimates Reporting: the GATHER Statement].

    PubMed

    Stevens, Gretchen A; Alkema, Leontine; Black, Robert E; Boerma, J Ties; Collins, Gary S; Ezzati, Majid; Grove, John T; Hogan, Daniel R; Hogan, Margaret C; Horton, Richard; Lawn, Joy E; Marušic, Ana; Mathers, Colin D; Murray, Christopher J L; Rudan, Igor; Salomon, Joshua A; Simpson, Paul J; Vos, Theo; Welch, Vivian

    2017-01-01

    Measurements of health indicators are rarely available for every population and period of interest, and available data may not be comparable. The Guidelines for Accurate and Transparent Health Estimates Reporting (GATHER) define best reporting practices for studies that calculate health estimates for multiple populations (in time or space) using multiple information sources. Health estimates that fall within the scope of GATHER include all quantitative population-level estimates (including global, regional, national, or subnational estimates) of health indicators, including indicators of health status, incidence and prevalence of diseases, injuries, and disability and functioning; and indicators of health determinants, including health behaviours and health exposures. GATHER comprises a checklist of 18 items that are essential for best reporting practice. A more detailed explanation and elaboration document, describing the interpretation and rationale of each reporting item along with examples of good reporting, is available on the GATHER website (http://gather-statement.org).

  10. Accurate Estimation of the Entropy of Rotation-Translation Probability Distributions.

    PubMed

    Fogolari, Federico; Dongmo Foumthuim, Cedrix Jurgal; Fortuna, Sara; Soler, Miguel Angel; Corazza, Alessandra; Esposito, Gennaro

    2016-01-12

    The estimation of rotational and translational entropies in the context of ligand binding has been the subject of long-standing investigation. The high dimensionality (six) of the problem and the limited amount of sampling often prevent the histogram method from attaining the resolution required for accurate estimates. Recently, the nearest-neighbor distance method has been applied to the problem, but the solutions provided either address rotation and translation separately, therefore lacking correlations, or use a heuristic approach. Here we address rotational-translational entropy estimation in the context of nearest-neighbor-based entropy estimation, solve the problem numerically, and provide an exact and an approximate method to estimate the full rotational-translational entropy.
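
The generic nearest-neighbor entropy estimator underlying this line of work (the Kozachenko-Leonenko k = 1 form) can be sketched in a few lines. This is a plain Euclidean illustration; the paper's contribution is extending the idea to the joint 6-D rotation-translation space with the proper metric, which this sketch does not attempt.

```python
import numpy as np
from math import gamma, pi, log

def nn_entropy(samples):
    """Kozachenko-Leonenko k=1 nearest-neighbor entropy estimate (nats)."""
    x = np.asarray(samples, dtype=float)
    n, d = x.shape
    # pairwise Euclidean distances; self-distances masked out
    diff = x[:, None, :] - x[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)
    r = dists.min(axis=1)                        # nearest-neighbor radii
    c_d = pi ** (d / 2) / gamma(d / 2 + 1)       # volume of the unit d-ball
    euler_gamma = 0.5772156649015329
    return d * np.mean(np.log(r)) + log(c_d) + log(n - 1) + euler_gamma

# sanity check: uniform samples on [0, 1] have differential entropy 0 nats
rng = np.random.default_rng(1)
h = nn_entropy(rng.uniform(size=(2000, 1)))
```

With enough samples the estimate converges on the true differential entropy without any binning, which is what makes the approach attractive in six dimensions where histograms are hopeless.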

  11. Polynomial Fitting of DT-MRI Fiber Tracts Allows Accurate Estimation of Muscle Architectural Parameters

    PubMed Central

    Damon, Bruce M.; Heemskerk, Anneriet M.; Ding, Zhaohua

    2012-01-01

    Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor MRI fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image datasets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8, and 15.3 m−1), signal-to-noise ratio (50, 75, 100, and 150), and voxel geometry (13.8 and 27.0 mm3 voxel volume with isotropic resolution; 13.5 mm3 volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to 2nd order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m−1), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation. PMID:22503094

  12. Polynomial fitting of DT-MRI fiber tracts allows accurate estimation of muscle architectural parameters.

    PubMed

    Damon, Bruce M; Heemskerk, Anneriet M; Ding, Zhaohua

    2012-06-01

    Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor magnetic resonance imaging fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image data sets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8 and 15.3 m(-1)), signal-to-noise ratio (50, 75, 100 and 150) and voxel geometry (13.8- and 27.0-mm(3) voxel volume with isotropic resolution; 13.5-mm(3) volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to second-order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m(-1)), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation.
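
The polynomial-smoothing idea can be sketched in two dimensions: fit the tract coordinates to quadratics in a point-index parameter, then evaluate curvature from the fitted derivatives. The study works with 3-D DT-MRI tracts; this planar sketch is my own illustration.

```python
import numpy as np

def tract_curvature(points):
    """Mid-tract curvature after 2nd-order polynomial fitting (2-D sketch)."""
    pts = np.asarray(points, dtype=float)
    s = np.arange(len(pts), dtype=float)
    px = np.polyfit(s, pts[:, 0], 2)             # x(s) fit to a quadratic
    py = np.polyfit(s, pts[:, 1], 2)             # y(s) fit to a quadratic
    sm = 0.5 * (s[0] + s[-1])                    # mid-tract parameter value
    dx = np.polyval(np.polyder(px), sm)
    dy = np.polyval(np.polyder(py), sm)
    ddx, ddy = 2.0 * px[0], 2.0 * py[0]          # constant 2nd derivatives
    # curvature of a planar parametric curve
    return abs(dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5

# noise-free arc of a circle of radius 10: true curvature is 0.1
theta = np.linspace(0.0, 0.5, 20)
circle = np.column_stack([10.0 * np.cos(theta), 10.0 * np.sin(theta)])
kappa = tract_curvature(circle)
```

Because the quadratic fit averages out point-wise noise before differentiation, curvature estimates from fitted tracts are far less noise-inflated than finite differences on the raw points, which is the effect the abstract reports.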

  13. [Estimating weight accurately for safe treatment: body weight estimation in patients with acute ischaemic stroke is frequently inaccurate].

    PubMed

    van de Stadt, Stephanie I W; van Schaik, Sander M; van den Berg-Vos, Renske M

    2015-01-01

    Patients with acute ischaemic stroke should receive intravenous thrombolysis with 0.9 mg/kg of recombinant tissue plasminogen activator as quickly as possible. In order to reduce the door-to-needle time, many physicians estimate the patient's body weight. However, these estimates are frequently inaccurate, and inaccuracy can lead to dosage errors. According to a meta-analysis in a Cochrane review, the risk of developing intracranial haemorrhage is almost tripled for patients treated with higher thrombolytic doses compared with patients receiving a dosage based on accurate weight measurement (odds ratio: 2.71). Only 28% of physicians estimate to within 5 kilograms of actual body weight. In order to reduce the risk of complications, patients arriving at the emergency room should be weighed on a scale. Alternatively, body weight can be estimated using a validated nomogram.

  14. Robust and accurate fundamental frequency estimation based on dominant harmonic components.

    PubMed

    Nakatani, Tomohiro; Irino, Toshio

    2004-12-01

    This paper presents a new method for robust and accurate fundamental frequency (F0) estimation in the presence of background noise and spectral distortion. Degree of dominance and dominance spectrum are defined based on instantaneous frequencies. The degree of dominance allows one to evaluate the magnitude of individual harmonic components of the speech signals relative to background noise while reducing the influence of spectral distortion. The fundamental frequency is more accurately estimated from reliable harmonic components which are easy to select given the dominance spectra. Experiments are performed using white and babble background noise with and without spectral distortion as produced by a SRAEN filter. The results show that the present method is better than previously reported methods in terms of both gross and fine F0 errors.

  15. Development of Star Tracker System for Accurate Estimation of Spacecraft Attitude

    DTIC Science & Technology

    2009-12-01

    Development of Star Tracker System for Accurate Estimation of Spacecraft Attitude, by Jack A. Tappe, December 2009. Thesis Co-Advisors: Jae Jun Kim and Brij N. Agrawal; Chairman, Department of Mechanical and Astronautical Engineering: Dr. Knox T. Millsaps.

  16. READSCAN: a fast and scalable pathogen discovery program with accurate genome relative abundance estimation.

    PubMed

    Naeem, Raeece; Rashid, Mamoon; Pain, Arnab

    2013-02-01

    READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). http://cbrc.kaust.edu.sa/readscan.
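
The notion of genome relative abundance can be sketched generically as length-normalized mapped-read counts. This is my illustration of the general idea; READSCAN's exact statistic may differ, and the genome names and counts below are made up.

```python
def relative_abundance(read_counts, genome_lengths):
    """Length-normalized relative abundance from mapped read counts (sketch).

    Counts are divided by genome length (longer genomes attract more reads
    by chance alone) and then renormalized to sum to 1.
    """
    density = {g: read_counts[g] / genome_lengths[g] for g in read_counts}
    total = sum(density.values())
    return {g: d / total for g, d in density.items()}

abundances = relative_abundance(
    {"virus_a": 100, "virus_b": 100},      # mapped reads (hypothetical)
    {"virus_a": 1000, "virus_b": 2000},    # genome lengths in bp (hypothetical)
)
```

Equal raw counts on genomes of different length yield unequal abundances, which is why the normalization matters when ranking candidate pathogens.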

  17. Estimation and evaluation of COSMIC radio occultation excess phase using undifferenced measurements

    NASA Astrophysics Data System (ADS)

    Xia, Pengfei; Ye, Shirong; Jiang, Kecai; Chen, Dezhong

    2017-05-01

    In the GPS radio occultation technique, the atmospheric excess phase (AEP) can be used to derive refractivity, an important quantity in numerical weather prediction. The AEP is conventionally estimated using GPS double-difference or single-difference techniques. These two techniques, however, rely on reference data in the processing, increasing the computational complexity. In this study, an undifferenced (ND) processing strategy is proposed to estimate the AEP. First, we use PANDA (Positioning and Navigation Data Analyst) software to perform precise orbit determination (POD) in order to acquire the position and velocity of the centre of mass of the COSMIC (Constellation Observing System for Meteorology, Ionosphere and Climate) satellites and the corresponding receiver clock offset. The bending angle, refractivity and dry temperature profiles are derived from the estimated AEP using the Radio Occultation Processing Package (ROPP) software. The ND method is validated against the COSMIC products in typical rising and setting occultation events. Results indicate that rms (root mean square) errors of the relative refractivity differences between undifferenced profiles and the atmospheric profiles (atmPrf) provided by UCAR/CDAAC (University Corporation for Atmospheric Research/COSMIC Data Analysis and Archive Centre) are better than 4% and 3% in rising and setting occultation events, respectively. In addition, we compare the relative refractivity bias between ND-derived and atmPrf profiles for 200 globally distributed COSMIC occultation events on 12 December 2013. The statistical results indicate that the average rms relative refractivity deviation between ND-derived and COSMIC profiles is better than 2% in rising occultation events and better than 1.7% in setting occultation events. Moreover, the observed COSMIC refractivity profiles from the ND processing strategy are further validated using European Centre for Medium
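    The validation metric used in this record, the rms of relative refractivity differences between derived and reference profiles, can be sketched as follows. The profile values below are synthetic stand-ins; the actual comparison uses ND-derived and UCAR/CDAAC atmPrf refractivity profiles.

```python
import numpy as np

# rms of the relative refractivity difference between a derived profile
# and a reference profile, expressed in percent. The numbers are
# illustrative, not from the paper.

def rms_relative_difference_pct(derived, reference):
    rel = (np.asarray(derived) - np.asarray(reference)) / np.asarray(reference)
    return 100.0 * float(np.sqrt(np.mean(rel ** 2)))

nd_profile = [300.0, 250.0, 200.0, 150.0]   # refractivity (N-units), assumed
ref_profile = [303.0, 249.0, 202.0, 149.0]
err_pct = rms_relative_difference_pct(nd_profile, ref_profile)
```

A profile whose rms relative difference stays below a few percent, as in the paper's 4% and 3% thresholds, would pass this check.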

  18. A method to accurately estimate the muscular torques of human wearing exoskeletons by torque sensors.

    PubMed

    Hwang, Beomsoo; Jeon, Doyoung

    2015-04-09

    In exoskeletal robots, quantifying the user's muscular effort is important for recognizing the user's motion intentions and evaluating motor abilities. In this paper, we estimate users' muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is therefore important to accurately extract the dynamic effects of the user's limb from the measured torque. The user's limb dynamics are formulated, and a convenient method for identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower-limb exoskeleton, EXOwheel, equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated with 10 healthy participants during body-weight-supported gait training. The experimental results show that the torque sensors can estimate the muscular torque accurately under both relaxed and activated muscle conditions.
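    The separation described in this abstract can be illustrated with a minimal single-joint sketch: the sensed joint torque is the sum of passive limb dynamics and active muscular torque, so the muscular contribution is recovered by subtracting a modeled dynamics term. All parameter values below are assumed for illustration, not taken from the paper, and the Coriolis/centrifugal terms that appear in a multi-link limb model vanish for a single joint.

```python
import numpy as np

I = 0.12   # limb moment of inertia about the joint [kg*m^2] (assumed)
m = 3.5    # limb mass [kg] (assumed)
l = 0.20   # joint-to-center-of-mass distance [m] (assumed)
g = 9.81   # gravitational acceleration [m/s^2]

def muscular_torque(tau_measured, q, qd, qdd):
    """Estimate active muscular torque from the sensed joint torque.

    tau_measured: sensor reading [N*m]; q, qd, qdd: joint angle [rad],
    velocity and acceleration. For one joint, the measured torque is
    modeled as inertial + gravitational + muscular.
    """
    tau_inertial = I * qdd                 # inertial torque
    tau_gravity = m * g * l * np.sin(q)    # gravitational torque
    return tau_measured - tau_inertial - tau_gravity

# Example: sensed 8 N*m at q = 30 deg with a small angular acceleration
tau_muscle = muscular_torque(8.0, np.radians(30), 0.5, 1.0)
```

Identifying the user-specific values of `I`, `m` and `l` is exactly the parameter-identification step the abstract refers to.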

  19. A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors

    PubMed Central

    Hwang, Beomsoo; Jeon, Doyoung

    2015-01-01

    In exoskeletal robots, quantifying the user's muscular effort is important for recognizing the user's motion intentions and evaluating motor abilities. In this paper, we estimate users' muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is therefore important to accurately extract the dynamic effects of the user's limb from the measured torque. The user's limb dynamics are formulated, and a convenient method for identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower-limb exoskeleton, EXOwheel, equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated with 10 healthy participants during body-weight-supported gait training. The experimental results show that the torque sensors can estimate the muscular torque accurately under both relaxed and activated muscle conditions. PMID:25860074

  20. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction

    PubMed Central

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-01-01

    This paper describes a disturbance acceleration adaptive estimation and correction approach for an attitude reference system (ARS), intended to improve attitude estimation precision under vehicle movement conditions. The proposed approach relies on a Kalman filter in which the attitude error, the gyroscope zero-offset error, and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model across different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and the attitude estimation precision is thereby improved. The filter was tested by digital simulation in three disturbance acceleration modes (non-acceleration, vibration-acceleration, and sustained-acceleration). The proposed approach was also tested in a kinematic vehicle experiment. The simulations and kinematic vehicle experiments show that the disturbance acceleration in each mode can be accurately estimated and corrected. Moreover, compared with a complementary filter, the experimental results demonstrate that the proposed approach further improves attitude estimation precision under vehicle movement conditions. PMID:27754469

  1. The description of a method for accurately estimating creatinine clearance in acute kidney injury.

    PubMed

    Mellas, John

    2016-05-01

    Acute kidney injury (AKI) is a common and serious condition encountered in hospitalized patients. The severity of kidney injury is defined by the RIFLE, AKIN, and KDIGO criteria, which attempt to establish the degree of renal impairment. The KDIGO guidelines state that creatinine clearance should be measured whenever possible in AKI and that the serum creatinine concentration and creatinine clearance remain the best clinical indicators of renal function. Neither the RIFLE, AKIN, nor KDIGO criteria estimate actual creatinine clearance, and there are no accepted methods for accurately estimating creatinine clearance (K) in AKI. The present study describes a method for estimating K in AKI using urine creatinine excretion over an established time interval (E), an estimate of creatinine production over the same interval (P), and the estimated static glomerular filtration rate (sGFR) at time zero, obtained from the CKD-EPI formula. Using these variables, the estimated creatinine clearance is Ke = E/P * sGFR. The method was tested for validity using simulated patients, in whom actual creatinine clearance (Ka) was compared with Ke across patients of both sexes and of various ages, body weights, and degrees of renal impairment. These comparisons were made at several serum creatinine concentrations to determine the accuracy of the method in the non-steady state. In addition, E/P and Ke were calculated in hospitalized patients with AKI seen in nephrology consultation by the author. In these patients, the accuracy of the method was judged by whether E/P > 1, E/P < 1, or E = P predicted progressive azotemia, recovering azotemia, or a stable level of azotemia, respectively. It was also determined whether Ke < 10 ml/min agreed with Ka and whether patients with AKI on renal replacement therapy could safely terminate dialysis if Ke was greater than 5 ml/min. In the simulated patients there
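    The estimator stated in this abstract, Ke = E/P * sGFR, is a one-line calculation. The numbers below are hypothetical; in practice sGFR would come from the CKD-EPI formula and E and P from timed urine collection and an estimate of creatinine production.

```python
# Ke = (measured creatinine excretion / estimated production) * static GFR,
# as defined in the abstract. Inputs are illustrative values.

def estimated_clearance(E_mg, P_mg, sGFR_ml_min):
    """Return Ke in ml/min given excretion E, production P (same units)
    and the static GFR estimate at time zero."""
    return (E_mg / P_mg) * sGFR_ml_min

# A patient excreting half the creatinine they produce (E/P = 0.5)
# suggests progressive azotemia, and Ke scales the baseline down:
Ke = estimated_clearance(E_mg=600.0, P_mg=1200.0, sGFR_ml_min=60.0)  # -> 30.0
```

The sign of E/P relative to 1 is what the study uses to predict progressive, recovering, or stable azotemia.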

  2. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities.

    PubMed

    Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan

    2015-08-11

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.

  3. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities

    PubMed Central

    Helb, Danica A.; Tetteh, Kevin K. A.; Felgner, Philip L.; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R.; Beeson, James G.; Tappero, Jordan; Smith, David L.; Crompton, Peter D.; Rosenthal, Philip J.; Dorsey, Grant; Drakeley, Christopher J.; Greenhouse, Bryan

    2015-01-01

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual’s recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86–0.93), whereas responses to six antigens accurately estimated an individual’s malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs. PMID:26216993

  4. Lipase-mediated enantioselective kinetic resolution of racemic acidic drugs in non-standard organic solvents: Direct chiral liquid chromatography monitoring and accurate determination of the enantiomeric excesses.

    PubMed

    Ghanem, Ashraf; Aboul-Enein, Mohammed Nabil; El-Azzouny, Aida; El-Behairy, Mohammed F

    2010-02-12

    The enantioselective resolution of a set of racemic acidic compounds such as non-steroidal anti-inflammatory drugs (NSAIDs) of the group arylpropionic acid derivatives is demonstrated. Thus, a set of lipases were screened and manipulated in either the esterification or hydrolysis mode for the enantioselective kinetic resolution of these racemates in non-standard organic solvents. The accurate determination of the enantiomeric excesses of both substrate and product during such reaction is demonstrated. This was based on the development of a direct and reliable enantioselective high performance liquid chromatography (HPLC) procedure for the simultaneous baseline separation of both substrate and product in one run without derivatization. This was achieved using the immobilized chiral stationary phase namely Chiralpak IB, a 3,5-dimethylphenylcarbamate derivative of cellulose (the immobilized version of Chiralcel OD) which proved to be versatile for the monitoring of the lipase-catalyzed kinetic resolution of racemates in non-standard organic solvents.

  5. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

    In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system that utilizes the epipolar constraint to accurately estimate the 3D positions of scene objects in a real-world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight path.

  6. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.

    PubMed

    Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range.
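    The core idea of this abstract, sampling an image-quality fitness function at candidate permittivities, building a polynomial interpolant, and taking its maximizer, can be sketched simply. The fitness values below are synthetic, and the paper builds its interpolant with an adaptive stochastic-collocation/polynomial-chaos scheme rather than the plain least-squares `polyfit` used here.

```python
import numpy as np

# Sample a (synthetic) fitness function at a few candidate effective
# permittivities, fit a quadratic interpolant, and pick the permittivity
# that maximizes the interpolant on a fine grid.

eps_samples = np.array([6.0, 7.0, 8.0, 9.0, 10.0])
fitness = -(eps_samples - 8.2) ** 2 + 1.0   # synthetic, peaked near 8.2

coeffs = np.polyfit(eps_samples, fitness, deg=2)  # quadratic interpolant
eps_grid = np.linspace(6.0, 10.0, 401)
eps_best = eps_grid[np.argmax(np.polyval(coeffs, eps_grid))]
```

Starting from only a broad permittivity range, the interpolant localizes the most likely tissue permittivity, which is then used to reconstruct the radar image.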

  7. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images

    PubMed Central

    Lavoie, Benjamin R.; Okoniewski, Michal; Fear, Elise C.

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range. PMID:27611785

  8. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    NASA Astrophysics Data System (ADS)

    Song, N.; He, B.; Wahl, R. L.; Frey, E. C.

    2011-09-01

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum-likelihood expectation-maximization algorithm, 3D organ volumes of interest (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes a uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric-mean-based planar quantification, using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a
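    For context, the conventional geometric-mean (conjugate-view) planar quantification that serves as the CPlanar baseline here can be sketched as below. This is a textbook form of the method, not the paper's implementation, and all numbers (counts, attenuation coefficient, body thickness, calibration factor) are assumed for illustration.

```python
import math

def conjugate_view_activity(counts_ant, counts_post, mu, L, cal_factor):
    """Conjugate-view activity estimate from anterior/posterior planar counts.

    A = sqrt(Ia * Ip / exp(-mu * L)) / cal_factor, where mu is an
    effective linear attenuation coefficient [1/cm], L the body
    thickness along the projection [cm], and cal_factor the system
    sensitivity [counts per unit activity].
    """
    transmission = math.exp(-mu * L)   # attenuation through the body
    return math.sqrt(counts_ant * counts_post / transmission) / cal_factor

# Illustrative values only:
A = conjugate_view_activity(1.2e5, 0.8e5, mu=0.12, L=20.0, cal_factor=50.0)
```

The QPlanar/EQPlanar methods replace this geometric-mean formula with maximum-likelihood estimation over 3D VOIs and explicit image-degradation models.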

  9. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections.

    PubMed

    Song, N; He, B; Wahl, R L; Frey, E C

    2011-09-07

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum-likelihood expectation-maximization algorithm, 3D organ volumes of interest (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes a uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric-mean-based planar quantification, using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a

  10. Accurate estimates of age at maturity from the growth trajectories of fishes and other ectotherms.

    PubMed

    Honsey, Andrew E; Staples, David F; Venturelli, Paul A

    2017-01-01

    Age at maturity (AAM) is a key life history trait that provides insight into ecology, evolution, and population dynamics. However, maturity data can be costly to collect or may not be available. Life history theory suggests that growth is biphasic for many organisms, with a change-point in growth occurring at maturity. If so, then it should be possible to use a biphasic growth model to estimate AAM from growth data. To test this prediction, we used the Lester biphasic growth model in a likelihood profiling framework to estimate AAM from length at age data. We fit our model to simulated growth trajectories to determine minimum data requirements (in terms of sample size, precision in length at age, and the cost to somatic growth of maturity) for accurate AAM estimates. We then applied our method to a large walleye Sander vitreus data set and show that our AAM estimates are in close agreement with conventional estimates when our model fits well. Finally, we highlight the potential of our method by applying it to length at age data for a variety of ectotherms. Our method shows promise as a tool for estimating AAM and other life history traits from contemporary and historical samples. © 2016 by the Ecological Society of America.
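    The change-point idea in this abstract can be illustrated with a simplified stand-in: model growth as biphasic (fast before maturity, slower after) and profile a fit criterion over candidate change-points in synthetic length-at-age data. The paper itself fits the Lester biphasic model via likelihood profiling; the continuous piecewise-linear model, slopes, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
true_aam = 4.0                       # true age at maturity [yr] (synthetic)
ages = np.linspace(1, 10, 60)
# Synthetic lengths: 60 mm/yr before maturity, 20 mm/yr after, plus noise.
lengths = np.where(ages < true_aam,
                   60 * ages,
                   60 * true_aam + 20 * (ages - true_aam))
lengths = lengths + rng.normal(0, 5, ages.size)

def sse_at(aam):
    """SSE of a continuous piecewise-linear fit with a break at `aam`."""
    x1 = np.minimum(ages, aam)          # growth accrued before the break
    x2 = np.maximum(ages - aam, 0.0)    # growth accrued after the break
    X = np.column_stack([x1, x2, np.ones_like(ages)])
    coef, *_ = np.linalg.lstsq(X, lengths, rcond=None)
    return float(np.sum((lengths - X @ coef) ** 2))

# Profile the objective over candidate change-points (the AAM estimate).
candidates = np.arange(2.0, 8.0, 0.05)
aam_hat = candidates[np.argmin([sse_at(a) for a in candidates])]
```

The profile minimum recovers the change-point, which is the paper's estimate of age at maturity; the paper's simulations probe how sample size, length-at-age precision, and the growth cost of maturity affect this recovery.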

  11. Accurate reconstruction of viral quasispecies spectra through improved estimation of strain richness

    PubMed Central

    2015-01-01

    Background: Estimating the number of different species (richness) in a mixed microbial population has been a main focus in metagenomic research. Existing methods of species richness estimation rest on the assumption that the reads in each assembled contig correspond to only one of the microbial genomes in the population. This assumption and the underlying probabilistic formulations of existing methods are not useful for quasispecies populations, where the strains are highly genetically related. The lack of knowledge of the number of different strains in a quasispecies population is observed to hinder the precision of existing Viral Quasispecies Spectrum Reconstruction (QSR) methods due to the uncontrolled reconstruction of a large number of in silico false positives. In this work, we formulated a novel probabilistic method for strain richness estimation specifically targeting viral quasispecies. Using this approach, we improved our recently proposed spectrum reconstruction pipeline ViQuaS to achieve higher precision in reconstructed quasispecies spectra without compromising the recall rates. We also discuss how another popular existing QSR method, ShoRAH, can be improved using this new approach. Results: On benchmark data sets, our estimation method provided accurate richness estimates (<0.2 median estimation error) and improved the precision of ViQuaS by 2%-13% and its F-score by 1%-9% without compromising the recall rates. We also demonstrate that our estimation method can be used to improve the precision and F-score of ShoRAH by 0%-7% and 0%-5%, respectively. Conclusions: The proposed probabilistic estimation method can be used to estimate the richness of viral populations with quasispecies behavior and to improve the accuracy of the quasispecies spectra reconstructed by the existing methods ViQuaS and ShoRAH in the presence of a moderate level of technical sequencing errors. Availability: http://sourceforge.net/projects/viquas/ PMID:26678073

  12. Intraocular lens power estimation by accurate ray tracing for eyes underwent previous refractive surgeries

    NASA Astrophysics Data System (ADS)

    Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong

    2015-08-01

    For normal eyes without a history of ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK/T, Holladay, Haigis, and SRK-II, are all relatively accurate. However, for eyes that underwent refractive surgeries such as LASIK, or eyes diagnosed with keratoconus, these equations may produce significant postoperative refractive error and hence poor satisfaction after cataract surgery. Although some methods have been proposed to address this problem, such as the Haigis-L equation[1] or using preoperative data (from before LASIK) to estimate the K value[2], no precise equations are available for these eyes. Here, we introduce a novel intraocular lens power estimation method based on accurate ray tracing with the optical design software ZEMAX. Instead of a traditional regression formula, we use the measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens powers for a patient with keratoconus and for a LASIK postoperative patient agreed well with their visual outcomes after cataract surgery.

  13. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependence of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved MAP estimator that takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  14. Accurate estimation of cardinal growth temperatures of Escherichia coli from optimal dynamic experiments.

    PubMed

    Van Derlinden, E; Bernaerts, K; Van Impe, J F

    2008-11-30

    Prediction of the microbial growth rate as a response to changing temperatures is an important aspect of the control of food safety and food spoilage. Accurate model predictions of microbial evolution require correct model structures and reliable parameter values with good statistical quality. Given the widely accepted validity of the Cardinal Temperature Model with Inflection (CTMI) [Rosso, L., Lobry, J. R., Bajard, S. and Flandrois, J. P., 1995. Convenient model to describe the combined effects of temperature and pH on microbial growth, Applied and Environmental Microbiology, 61: 610-616], this paper focuses on the accurate estimation of its four parameters (Tmin, Topt, Tmax and μopt) by applying the technique of optimal experiment design for parameter estimation (OED/PE). This secondary model describes the influence of temperature on the microbial specific growth rate from the minimum to the maximum temperature for growth. Dynamic temperature profiles are optimized within two temperature regions ([15°C, 43°C] and [15°C, 45°C]), focusing on the minimization of the parameter estimation (co)variance (D-optimal design). The optimal temperature profiles are implemented in a computer-controlled bioreactor, and the CTMI parameters are identified from the resulting experimental data. Approximately equal CTMI parameter values were derived irrespective of the temperature region, except for Tmax, which could only be estimated accurately from the optimal experiments within [15°C, 45°C]. This observation underlines the importance of selecting the upper temperature constraint for OED/PE as close as possible to the true Tmax. Cardinal temperature estimates resulting from designs within [15°C, 45°C] correspond with values found in the literature, are characterized by a small uncertainty, and yield good results during validation. As compared to estimates from non-optimized dynamic
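    The CTMI referenced above has a closed form (Rosso et al., 1995) parameterized exactly by the four quantities the paper estimates. The sketch below implements that published form; the cardinal values used in the example call are generic E. coli-like assumptions, not the paper's estimates.

```python
def ctmi(T, Tmin, Topt, Tmax, mu_opt):
    """Cardinal Temperature Model with Inflection: specific growth rate
    at temperature T, zero outside [Tmin, Tmax], equal to mu_opt at Topt."""
    if T <= Tmin or T >= Tmax:
        return 0.0
    num = mu_opt * (T - Tmax) * (T - Tmin) ** 2
    den = (Topt - Tmin) * ((Topt - Tmin) * (T - Topt)
                           - (Topt - Tmax) * (Topt + Tmin - 2 * T))
    return num / den

# Illustrative cardinal values (assumed): growth peaks at Topt.
rate = ctmi(37.0, Tmin=5.0, Topt=40.0, Tmax=47.0, mu_opt=2.3)
```

OED/PE then designs the dynamic temperature profile T(t) so that growth-rate data identify Tmin, Topt, Tmax and μopt with minimal (co)variance; the paper's finding is that the upper design bound must sit close to the true Tmax for Tmax to be identifiable.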

  15. READSCAN: a fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    PubMed Central

    Rashid, Mamoon; Pain, Arnab

    2013-01-01

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: arnab.pain@kaust.edu.sa or raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23193222

  16. Weighing Vietnamese children: how accurate are child weights adjusted for estimates of clothing weight?

    PubMed

    Tuan, T; Marsh, D R; Ha, T T; Schroeder, D G; Thach, T D; Dung, V M; Huong, N T

    2002-12-01

    Children who are weighed for growth monitoring are frequently clothed, especially in cold weather. Health workers commonly estimate and subtract the weight of these clothes, but the accuracy of these estimates is unknown. We assessed the accuracy of child weights adjusted for estimated clothing typical of hot, cold, and extremely cold ambient temperatures. Trained field workers weighed a sample of 212 children 6 to 42 months old from the ViSION project, adjusted the weights using a job aid describing the weights of common clothing by season and age, and then weighed the clothing to calculate the actual clothing and child weights. Field worker estimates of the weight of the clothing that children wore during weighing were remarkably good. In nearly all cases (207 of 212; 97.7%), the difference between the estimated and actual clothing weight was less than the precision of the child scales (±50 g), and most (181 of 212; 84.5%) were within 25 g. Thus, the calculated child weights were, in fact, equivalent to the actual child weights. Using simulations, we found that improperly accounting for clothing weight can overestimate weight-for-age by 0.1 to 0.4 Z score. Accurate weights are possible, even under adverse conditions. Our training methods, clothing album, and job aid might benefit nutrition research and programming in Viet Nam as well as in settings with colder climates.
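    The adjustment described above is simple arithmetic: subtract the estimated clothing weight from the gross (clothed) weight, and judge the estimate against the scale precision. A minimal sketch with hypothetical values:

```python
# Sketch of the clothing-weight adjustment; all weights are illustrative.
SCALE_PRECISION_G = 50  # precision of the child scales reported in the study

def adjusted_weight(gross_g, estimated_clothing_g):
    """Child weight after subtracting the estimated clothing weight (grams)."""
    return gross_g - estimated_clothing_g

def within_scale_precision(estimated_clothing_g, actual_clothing_g,
                           precision_g=SCALE_PRECISION_G):
    """True if the clothing-weight estimate errs by less than the scale precision."""
    return abs(estimated_clothing_g - actual_clothing_g) < precision_g

# Hypothetical record: 11,240 g clothed, clothing estimated at 180 g, actually 195 g.
child_g = adjusted_weight(11240, 180)     # 11,060 g recorded weight
ok = within_scale_precision(180, 195)     # 15 g error, well under 50 g
```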

  17. The accurate estimation of physicochemical properties of ternary mixtures containing ionic liquids via artificial neural networks.

    PubMed

    Cancilla, John C; Díaz-Rodríguez, Pablo; Matute, Gemma; Torrecilla, José S

    2015-02-14

    The estimation of the density and refractive index of ternary mixtures comprising the ionic liquid (IL) 1-butyl-3-methylimidazolium tetrafluoroborate, 2-propanol, and water at a fixed temperature of 298.15 K has been attempted through artificial neural networks. The obtained results indicate that the selection of this mathematical approach was a well-suited option. The mean prediction errors obtained, after simulating with a dataset never involved in the training process of the model, were 0.050% and 0.227% for refractive index and density estimation, respectively. These accurate results, which have been attained using only the composition of the dissolutions (mass fractions), imply that ternary mixtures similar to the one analyzed can most likely be evaluated easily with this algorithmic tool. In addition, different chemical processes involving ILs can be monitored precisely, and furthermore, the purity of the compounds in the studied mixtures can be indirectly assessed thanks to the high accuracy of the model.
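    The modelling task above is a regression from mixture composition (mass fractions) to a physical property. The sketch below trains a minimal one-hidden-layer network by gradient descent on synthetic data standing in for such measurements; the architecture, training details, and target function are assumptions for illustration, not the study's network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measurements": a smooth property (stand-in for refractive index)
# as a function of two independent mass fractions w1, w2.
X = rng.uniform(0.0, 0.5, size=(200, 2))
y = 1.33 + 0.2 * X[:, 0] + 0.1 * X[:, 1] ** 2

# Network: 2 inputs -> 8 tanh hidden units -> 1 linear output.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

lr = 0.05
_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)          # mean squared error before training
for _ in range(2000):
    h, pred = forward(X)
    err = (pred - y)[:, None] / len(y)     # dL/dpred (MSE; factor 2 absorbed in lr)
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = err @ W2.T * (1 - h ** 2)         # backprop through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, pred = forward(X)
loss = np.mean((pred - y) ** 2)            # should be far below loss0
```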

  18. Toward an Accurate Estimate of the Exfoliation Energy of Black Phosphorus: A Periodic Quantum Chemical Approach.

    PubMed

    Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti

    2016-01-07

    The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems.

  19. The estimation of tumor cell percentage for molecular testing by pathologists is not accurate.

    PubMed

    Smits, Alexander J J; Kummer, J Alain; de Bruin, Peter C; Bol, Mijke; van den Tweel, Jan G; Seldenrijk, Kees A; Willems, Stefan M; Offerhaus, G Johan A; de Weger, Roel A; van Diest, Paul J; Vink, Aryan

    2014-02-01

    Molecular pathology is becoming more and more important in present day pathology. A major challenge for any molecular test is its ability to reliably detect mutations in samples consisting of mixtures of tumor cells and normal cells, especially when the tumor content is low. The minimum percentage of tumor cells required to detect genetic abnormalities is a major variable. Information on tumor cell percentage is essential for a correct interpretation of the result. In daily practice, the percentage of tumor cells is estimated by pathologists on hematoxylin and eosin (H&E)-stained slides, the reliability of which has been questioned. This study aimed to determine the reliability of estimated tumor cell percentages in tissue samples by pathologists. On 47 H&E-stained slides of lung tumors, a tumor area was marked. The percentage of tumor cells within this area was estimated independently by nine pathologists, using categories of 0-5%, 6-10%, 11-20%, 21-30%, and so on, up to 91-100%. As the gold standard, the percentage of tumor cells was counted manually. On average, the range between the lowest and the highest estimate per sample was 6.3 categories. In 33% of estimates, the deviation from the gold standard was at least three categories. The mean absolute deviation was 2.0 categories (range between observers 1.5-3.1 categories). There was a significant difference between the observers (P<0.001). If 20% of tumor cells were considered the lower limit to detect a mutation, samples with an insufficient tumor cell percentage (<20%) would have been estimated to contain enough tumor cells in 27/72 (38%) observations, possibly causing false negative results. In conclusion, estimates of tumor cell percentages on H&E-stained slides are not accurate, which could result in misinterpretation of test results. Reliability could possibly be improved by using a training set with feedback.
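    The category scheme and the agreement metric described above can be sketched directly. The observer values below are hypothetical, chosen only to exercise the functions:

```python
# Categories: 0-5%, 6-10%, then decade bins 11-20% ... 91-100% (11 categories).

def category(pct):
    """Map a tumor-cell percentage to its 0-based category index."""
    if pct <= 5:
        return 0
    if pct <= 10:
        return 1
    return 2 + min((int(pct) - 11) // 10, 8)   # 11-20 -> 2, ..., 91-100 -> 10

def mean_abs_deviation(estimates_pct, gold_pct):
    """Mean absolute deviation, in categories, from the counted gold standard."""
    g = category(gold_pct)
    devs = [abs(category(e) - g) for e in estimates_pct]
    return sum(devs) / len(devs)

# Hypothetical sample: gold standard 30% tumor cells, three observer estimates.
mad = mean_abs_deviation([15, 45, 70], 30)     # deviations 1, 2, 4 categories
```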

  20. Accurate coronary centerline extraction, caliber estimation and catheter detection in angiographies.

    PubMed

    Hernandez-Vela, Antonio; Gatta, Carlo; Escalera, Sergio; Igual, Laura; Martin-Yuste, Victoria; Sabate, Manel; Radeva, Petia

    2012-11-01

    Segmentation of coronary arteries in X-ray angiography is a fundamental tool to evaluate arterial diseases and choose proper coronary treatment. The accurate segmentation of coronary arteries has become an important topic for the registration of different modalities which allows physicians rapid access to different medical imaging information from Computed Tomography (CT) scans or Magnetic Resonance Imaging (MRI). In this paper, we propose an accurate fully automatic algorithm based on Graph-cuts for vessel centerline extraction, caliber estimation, and catheter detection. Vesselness, geodesic paths, and a new multi-scale edgeness map are combined to customize the Graph-cuts approach to the segmentation of tubular structures, by means of a global optimization of the Graph-cuts energy function. Moreover, a novel supervised learning methodology that integrates local and contextual information is proposed for automatic catheter detection. We evaluate the method performance on three datasets coming from different imaging systems. The method performs as well as the expert observer with respect to centerline detection and caliber estimation. Moreover, the method discriminates between arteries and catheter with an accuracy of 96.5%, sensitivity of 72%, and precision of 97.4%.

  1. Magnetic gaps in organic tri-radicals: From a simple model to accurate estimates

    NASA Astrophysics Data System (ADS)

    Barone, Vincenzo; Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo

    2017-03-01

    The calculation of the energy gap between the magnetic states of organic poly-radicals still represents a challenging playground for quantum chemistry, and high-level techniques are required to obtain accurate estimates. On these grounds, the aim of the present study is twofold. From the one side, it shows that, thanks to recent algorithmic and technical improvements, we are able to compute reliable quantum mechanical results for the systems of current fundamental and technological interest. From the other side, proper parameterization of a simple Hubbard Hamiltonian allows for a sound rationalization of magnetic gaps in terms of basic physical effects, unraveling the role played by electron delocalization, Coulomb repulsion, and effective exchange in tuning the magnetic character of the ground state. As case studies, we have chosen three prototypical organic tri-radicals, namely, 1,3,5-trimethylenebenzene, 1,3,5-tridehydrobenzene, and 1,2,3-tridehydrobenzene, which differ either for geometric or electronic structure. After discussing the differences among the three species and their consequences on the magnetic properties in terms of the simple model mentioned above, accurate and reliable values for the energy gap between the lowest quartet and doublet states are computed by means of the so-called difference dedicated configuration interaction (DDCI) technique, and the final results are discussed and compared to both available experimental and computational estimates.

  2. Lamb mode selection for accurate wall loss estimation via guided wave tomography

    SciTech Connect

    Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.

    2014-02-18

    Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1–2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher order modes do not exist at all thicknesses, to compare their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 when a liquid loading was present was much higher than that of S0. A0 was also less sensitive than S0 to the presence of coatings on the surface.

  3. Magnetic gaps in organic tri-radicals: From a simple model to accurate estimates.

    PubMed

    Barone, Vincenzo; Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo

    2017-03-14

    The calculation of the energy gap between the magnetic states of organic poly-radicals still represents a challenging playground for quantum chemistry, and high-level techniques are required to obtain accurate estimates. On these grounds, the aim of the present study is twofold. From the one side, it shows that, thanks to recent algorithmic and technical improvements, we are able to compute reliable quantum mechanical results for the systems of current fundamental and technological interest. From the other side, proper parameterization of a simple Hubbard Hamiltonian allows for a sound rationalization of magnetic gaps in terms of basic physical effects, unraveling the role played by electron delocalization, Coulomb repulsion, and effective exchange in tuning the magnetic character of the ground state. As case studies, we have chosen three prototypical organic tri-radicals, namely, 1,3,5-trimethylenebenzene, 1,3,5-tridehydrobenzene, and 1,2,3-tridehydrobenzene, which differ either for geometric or electronic structure. After discussing the differences among the three species and their consequences on the magnetic properties in terms of the simple model mentioned above, accurate and reliable values for the energy gap between the lowest quartet and doublet states are computed by means of the so-called difference dedicated configuration interaction (DDCI) technique, and the final results are discussed and compared to both available experimental and computational estimates.

  4. Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.

    PubMed

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet

    2016-05-01

    Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. 10-min seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) following Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 estimates from measured HR and from corrected HR (thermal component removed) were compared to VO2 measured during work and rest. Varied levels of HR thermal component (ΔHRTavg range: 0-38 bpm), originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work, were observed. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%). 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments.
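    The correction described above can be sketched in two steps: calibrate a linear HR–VO2 relation from the submaximal step test, then evaluate it at the working heart rate with the thermal component (ΔHRT) subtracted. The calibration points and ΔHRT value below are illustrative, not the study's data:

```python
def fit_hr_vo2(hr_points, vo2_points):
    """Least-squares line VO2 = a*HR + b from step-test calibration points."""
    n = len(hr_points)
    mx = sum(hr_points) / n
    my = sum(vo2_points) / n
    a = sum((x - mx) * (y - my) for x, y in zip(hr_points, vo2_points)) \
        / sum((x - mx) ** 2 for x in hr_points)
    return a, my - a * mx

def estimate_vo2(a, b, hr_work, delta_hr_thermal):
    """Work VO2 from working HR after removing the thermal component."""
    return a * (hr_work - delta_hr_thermal) + b

# Illustrative step-test calibration (HR in bpm, VO2 in L/min).
a, b = fit_hr_vo2([90, 110, 130, 150], [0.9, 1.5, 2.1, 2.7])
vo2 = estimate_vo2(a, b, 140, 20)   # raw HR 140 bpm, 20 bpm thermal component
```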

  5. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets

    NASA Astrophysics Data System (ADS)

    Granata, Daniele; Carnevale, Vincenzo

    2016-08-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset.

  6. More accurate estimation of diffusion tensor parameters using diffusion Kurtosis imaging.

    PubMed

    Veraart, Jelle; Poot, Dirk H J; Van Hecke, Wim; Blockx, Ines; Van der Linden, Annemie; Verhoye, Marleen; Sijbers, Jan

    2011-01-01

    With diffusion tensor imaging, the diffusion of water molecules through brain structures is quantified by parameters that are estimated assuming monoexponential diffusion-weighted signal attenuation. The estimated diffusion parameters, however, depend on the diffusion weighting strength, the b-value, which hampers the interpretation and comparison of various diffusion tensor imaging studies. In this study, a likelihood ratio test is used to show that the diffusion kurtosis imaging model provides a more accurate parameterization of both the Gaussian and non-Gaussian diffusion components compared with diffusion tensor imaging. As a result, the diffusion kurtosis imaging model provides a b-value-independent estimation of the widely used diffusion tensor parameters, as demonstrated with diffusion-weighted rat data acquired with eight different b-values uniformly distributed in a range of [0, 2800 sec/mm(2)]. In addition, the diffusion parameter values are significantly increased in comparison to the values estimated with the diffusion tensor imaging model in all major rat brain structures. As incorrectly assuming additive Gaussian noise on the diffusion-weighted data will result in an overestimated degree of non-Gaussian diffusion and a b-value-dependent underestimation of diffusivity measures, a Rician noise model was used in this study. © 2010 Wiley-Liss, Inc.
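    The two signal models compared above differ by a single quadratic term: DTI assumes ln S = ln S0 − bD, while DKI adds a kurtosis term, ln S = ln S0 − bD + b²D²K/6. A sketch on noise-free synthetic data (illustrative parameter values) shows why the monoexponential fit is b-value dependent while the kurtosis fit recovers D:

```python
import numpy as np

S0, D, K = 1.0, 1.0e-3, 1.2          # D in mm^2/s, K dimensionless (illustrative)
b = np.linspace(0.0, 2800.0, 8)      # eight b-values in s/mm^2, as in the study
S = S0 * np.exp(-b * D + (b ** 2) * (D ** 2) * K / 6.0)

# DTI fit: straight line in b -> diffusivity biased when kurtosis is present.
D_dti = -np.polyfit(b, np.log(S), 1)[0]

# DKI fit: quadratic in b recovers both D and K on noise-free data.
c2, c1, c0 = np.polyfit(b, np.log(S), 2)
D_dki = -c1
K_dki = 6.0 * c2 / D_dki ** 2
```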

  7. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets

    PubMed Central

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can be often described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variable needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265

  8. Do modelled or satellite-based estimates of surface solar irradiance accurately describe its temporal variability?

    NASA Astrophysics Data System (ADS)

    Bengulescu, Marc; Blanc, Philippe; Boilley, Alexandre; Wald, Lucien

    2017-02-01

    This study investigates the characteristic time-scales of variability found in long-term time-series of daily means of estimates of surface solar irradiance (SSI). The study is performed at various levels to better understand the causes of variability in the SSI. First, the variability of the solar irradiance at the top of the atmosphere is scrutinized. Then, estimates of the SSI in cloud-free conditions as provided by the McClear model are dealt with, in order to reveal the influence of the clear atmosphere (aerosols, water vapour, etc.). Lastly, the role of clouds in variability is inferred from the analysis of in-situ measurements. A description of how the atmosphere affects SSI variability is thus obtained on a time-scale basis. The analysis is also performed with estimates of the SSI provided by the satellite-derived HelioClim-3 database and by two numerical weather re-analyses: ERA-Interim and MERRA2. It is found that HelioClim-3 estimates render an accurate picture of the variability found in ground measurements, not only globally, but also with respect to individual characteristic time-scales. On the contrary, the variability found in the re-analyses correlates poorly with the variability of ground measurements at all scales.

  9. Methods for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, M.R.; Bland, R.

    2000-01-01

    Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. Methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one during a neap tide (monthly minimum tidal current). The relative magnitude of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge.
Using a comprehensive calibration method, net discharge estimates developed from the three
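    The two-step procedure described in this abstract can be sketched as: (1) rate the index velocity against the ADCP-measured mean channel velocity with a least-squares line, (2) compute instantaneous discharge and low-pass filter it to remove the tides. The rating coefficients, channel area, and tidal signal below are synthetic, and a 25-hour moving average stands in for whatever tidal filter is actually used:

```python
import numpy as np

t = np.arange(0.0, 30 * 24)                   # hourly samples over 30 days
M2 = 2 * np.pi / 12.42                        # principal lunar tide, rad/hour
v_index = 0.8 * np.sin(M2 * t) + 0.10         # synthetic index velocity, m/s
v_mean = 0.9 * v_index + 0.05                 # synthetic "ADCP" mean channel velocity

# Step 1: rating -- relate mean channel velocity to the index velocity.
slope, intercept = np.polyfit(v_index, v_mean, 1)

# Step 2: instantaneous discharge, then tidal filtering for net discharge.
area = 100.0                                  # channel cross-section, m^2
q = area * (slope * v_index + intercept)      # instantaneous discharge, m^3/s
kernel = np.ones(25) / 25                     # 25-hour moving average
q_net = np.convolve(q, kernel, mode="valid")  # residual (net) discharge
```

The filtered series hovers near the non-tidal residual (here about 14 m³/s) while the raw discharge swings over tens of m³/s with the tide.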

  10. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    PubMed Central

    Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-01-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences. PMID:27668140

  11. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    NASA Astrophysics Data System (ADS)

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.

  12. MIDAS robust trend estimator for accurate GPS station velocities without step detection.

    PubMed

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
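    The MIDAS idea described in this abstract can be sketched for gap-free daily data: form slopes only from data pairs separated by one year (so seasonal terms cancel), take their median, trim slopes far from it, and re-take the median. This is a simplified illustration, not the published implementation; in particular, the scaled-MAD cutoff below stands in for the published trimming rule, and all numbers are synthetic:

```python
import numpy as np

def midas_like_trend(t, x, pair_lag=365):
    """Median 1-year-pair slope with one round of outlier trimming (daily data)."""
    slopes = (x[pair_lag:] - x[:-pair_lag]) / (t[pair_lag:] - t[:-pair_lag])
    med = np.median(slopes)
    mad = np.median(np.abs(slopes - med))
    keep = np.abs(slopes - med) <= 2.0 * 1.4826 * max(mad, 1e-9)
    return np.median(slopes[keep])

rng = np.random.default_rng(1)
t = np.arange(0.0, 4.0, 1.0 / 365.0)            # 4 years of daily epochs
x = (2.0 * t                                    # 2 mm/yr tectonic trend
     + 3.0 * np.sin(2.0 * np.pi * t)            # annual cycle
     + 10.0 * (t >= 2.0)                        # undetected 10 mm step
     + rng.normal(0.0, 0.5, t.size))            # white noise

trend = midas_like_trend(t, x)    # resistant to the step: close to 2 mm/yr
ols = np.polyfit(t, x, 1)[0]      # ordinary least squares: biased by the step
```

Because one-year pairs cancel the annual cycle exactly and the step affects only a minority of pairs, the trimmed median stays near the true 2 mm/yr while least squares does not.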

  13. Estimates of radiation doses in tissue and organs and risk of excess cancer in the single-course radiotherapy patients treated for ankylosing spondylitis in England and Wales

    SciTech Connect

    Fabrikant, J.I.; Lyman, J.T.

    1982-02-01

    The estimates of absorbed doses of x rays and excess risk of cancer in bone marrow and heavily irradiated sites are extremely crude and are based on very limited data and on a number of assumptions. Some of these assumptions may later prove to be incorrect, but it is probable that they are correct to within a factor of 2. The excess cancer risk estimates calculated compare well with the most reliable epidemiological surveys thus far studied. This is particularly important for cancers of heavily irradiated sites with long latent periods. The mean followup period for the patients was 16.2 y, and an increase in cancers of heavily irradiated sites may appear in these patients in the 1970s in tissues and organs with long latent periods for the induction of cancer. The accuracy of these estimates is severely limited by the inadequacy of information on doses absorbed by the tissues at risk in the irradiated patients. The information on absorbed dose is essential for an accurate assessment of dose-cancer incidence analysis. Furthermore, in this valuable series of irradiated patients, the information on radiation dosimetry on the radiotherapy charts is central to any reliable determination of somatic risks of radiation with regard to carcinogenesis in man. The work necessary to obtain these data is under way; only when they are available can more precise estimates of risk of cancer induction by radiation in man be obtained.

  14. Accurate Relative Location Estimates for the North Korean Nuclear Tests Using Empirical Slowness Corrections

    NASA Astrophysics Data System (ADS)

    Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna', T.; Mykkeltveit, S.

    2016-10-01

    modified velocity gradients reduce the residuals, the relative location uncertainties, and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.

  15. Accurate relative location estimates for the North Korean nuclear tests using empirical slowness corrections

    NASA Astrophysics Data System (ADS)

    Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.

    2017-01-01

    velocity gradients reduce the residuals, the relative location uncertainties and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.

  16. How accurate is size-specific dose estimate in pediatric body CT examinations?

    PubMed

    Karmazyn, Boaz; Ai, Huisi; Klahr, Paul; Ouyang, Fangqian; Jennings, S Gregory

    2016-08-01

Size-specific dose estimate is gaining increased acceptance as the preferred index of CT dose in children. However, it was developed from non-clinical data. To compare the accuracy of size-specific dose estimates (SSDE) based on geometric and body-weight measures in pediatric chest and abdomen CT scans against the more accurate [Formula: see text] (mean SSDE based on water-equivalent diameter). We retrospectively identified 50 consecutive children (age <18 years) who underwent chest CT examination and 50 children who underwent abdomen CT. We measured the anteroposterior diameter (DAP) and lateral diameter (DLAT) at the central slice (of scan length) of each patient and calculated DAP+LAT (anteroposterior plus lateral diameter) and DED (effective diameter). We calculated the following for each child: (1) SSDEs based on DAP, DLAT, DAP+LAT, DED, and body weight, and (2) SSDE based on software calculation of the mean water-equivalent diameter ([Formula: see text], adopted as the standard within our study). We used the intraclass correlation coefficient (ICC) and Bland-Altman analysis to assess agreement between the SSDEs and [Formula: see text]. Gender and age distribution were similar between the chest and abdomen CT groups; mean body weight was 37 kg for both groups, with ranges of 6-130 kg (chest) and 8-107 kg (abdomen). SSDEs had very strong agreement (ICC>0.9) with [Formula: see text]. SSDEs based on DLAT had 95% limits of agreement of up to 43% with [Formula: see text]; SSDEs based on the other parameters (body weight, DAP, DAP+LAT, DED) had 95% limits of agreement of up to 25%. Differences between SSDEs calculated using various indications of patient size (geometric indices and patient weight) and the more accurate [Formula: see text] calculated using proprietary software were generally small, with the possible exception of lateral diameter, and provide acceptable dose estimates for body CT in children.
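Two of the quantities in this record can be sketched directly: the effective diameter DED computed from the AP and lateral diameters, and the Bland-Altman limits of agreement used to compare dose estimates. The function names and numbers below are illustrative, and the lookup that converts a diameter into an SSDE conversion factor is deliberately omitted.

```python
import math
import statistics

def effective_diameter(d_ap_cm, d_lat_cm):
    """Effective diameter (DED): diameter of the circle whose area equals
    that of an ellipse with axes d_ap and d_lat, i.e. the geometric mean."""
    return math.sqrt(d_ap_cm * d_lat_cm)

def bland_altman_limits(a, b):
    """Bland-Altman bias and 95% limits of agreement for two paired
    dose-estimate series (e.g., an SSDE variant vs. the reference SSDE)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative values: an 18 x 24 cm patient cross-section and four
# paired dose estimates (mGy); not data from the study.
ded = effective_diameter(18.0, 24.0)
bias, (lo, hi) = bland_altman_limits([4.1, 5.0, 6.3, 7.8], [4.0, 5.2, 6.1, 8.0])
```

Wide limits of agreement, as reported above for the lateral-diameter variant, indicate that individual estimates can stray far from the reference even when the average bias is small.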

  17. Excessive heat and respiratory hospitalizations in New York State: estimating current and future public health burden related to climate change.

    PubMed

    Lin, Shao; Hsu, Wan-Hsiang; Van Zutphen, Alissa R; Saha, Shubhayu; Luber, George; Hwang, Syni-An

    2012-11-01

Although many climate-sensitive environmental exposures are related to mortality and morbidity, there is a paucity of estimates of the public health burden attributable to climate change. We estimated the excess current and future public health impacts related to respiratory hospitalizations attributable to extreme heat in summer in New York State (NYS) overall, in its geographic regions, and across different demographic strata. On the basis of threshold temperature and percent risk changes identified from our study in NYS, we estimated recent and future attributable risks related to extreme heat due to climate change using a global climate model with various climate scenarios. We estimated the effects of extreme high apparent temperature in summer on respiratory admissions, days hospitalized, direct hospitalization costs, and lost productivity from days hospitalized after adjusting for inflation. The estimated respiratory disease burden attributable to extreme heat at baseline (1991-2004) in NYS was 100 hospital admissions, US$644,069 in direct hospitalization costs, and 616 days of hospitalization per year. Projections for 2080-2099 based on three different climate scenarios ranged from 206 to 607 excess hospital admissions, US$26-$76 million in hospitalization costs, and 1,299-3,744 days of hospitalization per year. Estimated impacts varied by geographic region and population demographics. We estimated that excess respiratory admissions in NYS due to excessive heat would be 2 to 6 times higher in 2080-2099 than in 1991-2004. When combined with other heat-associated diseases and mortality, the potential public health burden associated with global warming could be substantial.
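The attributable-burden arithmetic described in this record can be illustrated with a toy calculation; the linear per-degree risk scaling, the threshold, and every number below are assumptions for illustration, not the study's actual model.

```python
def excess_admissions(baseline_daily_rate, pct_risk_per_degree, temps, threshold):
    """Heat-attributable excess admissions: for each day whose apparent
    temperature exceeds the threshold, scale the baseline daily admission
    rate by a per-degree percent risk increase. All inputs illustrative."""
    excess = 0.0
    for t in temps:
        if t > threshold:
            excess += baseline_daily_rate * pct_risk_per_degree * (t - threshold)
    return excess

# Hypothetical season: 10 admissions/day baseline, 3% risk per degree
# above a 32 degree apparent-temperature threshold.
season_excess = excess_admissions(10.0, 0.03, [30.0, 33.0, 35.0], 32.0)
```

Summing such daily excesses over a season, then costing each admission, yields burden figures of the kind reported above.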

  18. Excessive Heat and Respiratory Hospitalizations in New York State: Estimating Current and Future Public Health Burden Related to Climate Change

    PubMed Central

    Hsu, Wan-Hsiang; Van Zutphen, Alissa R.; Saha, Shubhayu; Luber, George; Hwang, Syni-An

    2012-01-01

Background: Although many climate-sensitive environmental exposures are related to mortality and morbidity, there is a paucity of estimates of the public health burden attributable to climate change. Objective: We estimated the excess current and future public health impacts related to respiratory hospitalizations attributable to extreme heat in summer in New York State (NYS) overall, in its geographic regions, and across different demographic strata. Methods: On the basis of threshold temperature and percent risk changes identified from our study in NYS, we estimated recent and future attributable risks related to extreme heat due to climate change using a global climate model with various climate scenarios. We estimated the effects of extreme high apparent temperature in summer on respiratory admissions, days hospitalized, direct hospitalization costs, and lost productivity from days hospitalized after adjusting for inflation. Results: The estimated respiratory disease burden attributable to extreme heat at baseline (1991–2004) in NYS was 100 hospital admissions, US$644,069 in direct hospitalization costs, and 616 days of hospitalization per year. Projections for 2080–2099 based on three different climate scenarios ranged from 206 to 607 excess hospital admissions, US$26–$76 million in hospitalization costs, and 1,299–3,744 days of hospitalization per year. Estimated impacts varied by geographic region and population demographics. Conclusions: We estimated that excess respiratory admissions in NYS due to excessive heat would be 2 to 6 times higher in 2080–2099 than in 1991–2004. When combined with other heat-associated diseases and mortality, the potential public health burden associated with global warming could be substantial. PMID:22922791

  19. Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data.

    PubMed

    Schütt, Heiko H; Harmeling, Stefan; Macke, Jakob H; Wichmann, Felix A

    2016-05-01

The psychometric function describes how an experimental variable, such as stimulus strength, influences the behaviour of an observer. Estimation of psychometric functions from experimental data plays a central role in fields such as psychophysics, experimental psychology and the behavioural neurosciences. Experimental data may exhibit substantial overdispersion, which may result from non-stationarity in the behaviour of observers. Here we extend the standard binomial model, which is typically used for psychometric function estimation, to a beta-binomial model. We show that the use of the beta-binomial model makes it possible to determine accurate credible intervals even in data which exhibit substantial overdispersion. This goes beyond classical measures for overdispersion (goodness-of-fit), which can detect overdispersion but provide no method for correct inference on overdispersed data. We use Bayesian inference methods to estimate the posterior distribution of the parameters of the psychometric function. Unlike previous Bayesian psychometric inference methods, our software implementation, psignifit 4, performs numerical integration of the posterior within automatically determined bounds, avoiding the Markov chain Monte Carlo (MCMC) methods that typically require expert knowledge. Extensive numerical tests show the validity of the approach, and we discuss the implications of overdispersion for experimental design. A comprehensive MATLAB toolbox implementing the method is freely available; a Python implementation providing the basic capabilities is also available.
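The core modelling step, replacing the binomial likelihood with a beta-binomial that absorbs overdispersion, can be sketched as follows; the parameterisation (mean p, overdispersion scale nu) and the function name are assumptions for illustration, not the psignifit 4 API.

```python
from math import lgamma, exp

def log_betabinom(k, n, p, nu):
    """Log-likelihood of k successes in n trials under a beta-binomial
    with mean p and overdispersion scale nu (nu -> infinity recovers the
    plain binomial). Beta parameters: a = p*nu, b = (1-p)*nu."""
    a, b = p * nu, (1.0 - p) * nu
    def lbeta(x, y):
        return lgamma(x) + lgamma(y) - lgamma(x + y)
    lchoose = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return lchoose + lbeta(k + a, n - k + b) - lbeta(a, b)

# With very large nu the beta-binomial approaches the plain binomial pmf.
almost_binom = exp(log_betabinom(7, 10, 0.7, 1e7))
```

Plugging this likelihood into a Bayesian fit lets the overdispersion scale be inferred from the data alongside the psychometric-function parameters, which is what widens the credible intervals appropriately.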

  20. Accurate estimation of the RMS emittance from single current amplifier data

    SciTech Connect

    Stockli, Martin P.; Welton, R.F.; Keller, R.; Letchford, A.P.; Thomae, R.W.; Thomason, J.W.G.

    2002-05-31

This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with a Lawrence Berkeley National Laboratory and an ISIS H- ion source.

  1. Accurate Estimation of the RMS Emittance from Single Current Amplifier Data

    NASA Astrophysics Data System (ADS)

    Stockli, Martin P.; Welton, R. F.; Keller, R.; Letchford, A. P.; Thomae, R. W.; Thomason, J. W. G.

    2002-11-01

    This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with a Lawrence Berkeley National Laboratory and an ISIS H- ion source.
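A simplified sketch of the background-subtraction idea behind SCUBEEx: treat the mean current outside an exclusion boundary as a uniform background, subtract it everywhere, and evaluate the current-weighted rms emittance inside the boundary. This toy version fixes the boundary; the actual method varies it and looks for plateaus in the outside current and inside emittance. All names and data below are illustrative.

```python
import math

def rms_emittance(samples):
    """Current-weighted rms emittance from (x, xp, current) triples:
    eps = sqrt(<x^2><xp^2> - <x*xp>^2), moments taken about the
    weighted centroid."""
    w = sum(c for _, _, c in samples)
    mx = sum(x * c for x, _, c in samples) / w
    mxp = sum(xp * c for _, xp, c in samples) / w
    sxx = sum((x - mx) ** 2 * c for x, _, c in samples) / w
    spp = sum((xp - mxp) ** 2 * c for _, xp, c in samples) / w
    sxp = sum((x - mx) * (xp - mxp) * c for x, xp, c in samples) / w
    return math.sqrt(sxx * spp - sxp ** 2)

def background_subtracted_emittance(samples, inside):
    """Estimate the uniform background as the mean current outside the
    exclusion boundary (a predicate on (x, xp)), subtract it, and
    evaluate the emittance inside the boundary only."""
    outside = [c for x, xp, c in samples if not inside(x, xp)]
    bg = sum(outside) / len(outside) if outside else 0.0
    kept = [(x, xp, c - bg) for x, xp, c in samples if inside(x, xp)]
    return rms_emittance(kept)
```

Because the background is estimated from the excluded region itself, the inside emittance is unbiased once the boundary encloses all genuine beam current.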

  2. Accurate estimation of human body orientation from RGB-D sensors.

    PubMed

    Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao

    2013-10-01

Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the wide variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on RGB-D superpixels to reduce the noise of the depth data. Since it is hard to discriminate all 360° of orientation using static cues or motion cues independently, we propose a dynamic Bayesian network system (DBNS) to effectively exploit the complementary nature of both static and motion cues. In order to verify the proposed method, we built an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.
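The abstract does not detail the DBNS, but the general idea of sequentially fusing static and motion cue likelihoods over a discretised orientation can be sketched with a plain histogram Bayes filter; everything below (bin count, drift model, likelihoods) is an assumption for illustration, not the authors' model.

```python
def normalize(p):
    s = sum(p)
    return [x / s for x in p]

def predict(belief, drift=0.2):
    """Diffuse the orientation belief to neighbouring bins (circular),
    modelling uncertain rotation between frames."""
    n = len(belief)
    return normalize([
        (1 - 2 * drift) * belief[i]
        + drift * belief[(i - 1) % n]
        + drift * belief[(i + 1) % n]
        for i in range(n)
    ])

def update(belief, likelihood):
    """Fuse one cue likelihood (static or motion) into the belief."""
    return normalize([b * l for b, l in zip(belief, likelihood)])

# 8 orientation bins of 45 degrees each; a static cue peaked at bin 2.
belief = [1.0 / 8] * 8
belief = predict(belief)
static_cue = [0.05] * 8
static_cue[2] = 0.65
belief = update(belief, static_cue)
```

Alternating predict/update with both cue types lets a weak motion cue disambiguate orientations that look identical to the static cue, which is the complementarity the record describes.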

  3. Accurate estimation of motion blur parameters in noisy remote sensing image

    NASA Astrophysics Data System (ADS)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

Relative motion between a remote sensing satellite's sensor and the imaged objects is one of the most common causes of remote sensing image degradation, and it seriously hampers image interpretation and information extraction. In practice, the point spread function (PSF) must be estimated before image restoration, so identifying the motion blur direction and length accurately is crucial for constructing the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be exploited to obtain these parameters via the Radon transform. However, the severe noise present in actual remote sensing images often renders the stripes indistinct, making the parameters difficult to calculate and the results comparatively inaccurate. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive graph-based image segmentation method, GrabCut, is adopted to effectively extract the edge of the bright central region of the spectrum. The motion blur direction is then estimated by applying the Radon transform to the segmentation result. To reduce random error, a whole-column statistics method is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.

  4. Magnetic dipole moment estimation and compensation for an accurate attitude control in nano-satellite missions

    NASA Astrophysics Data System (ADS)

    Inamori, Takaya; Sako, Nobutada; Nakasuka, Shinichi

    2011-06-01

Nano-satellites provide space access to a broader range of satellite developers and are attracting interest as an application of space development. Several new nano-satellite missions have recently been proposed with sophisticated objectives such as remote sensing and observation of astronomical objects. In these advanced missions, some nano-satellites must meet strict attitude requirements to obtain scientific data or images. For a LEO nano-satellite, the magnetic attitude disturbance dominates other environmental disturbances as a result of the small moment of inertia, and this effect must be cancelled for precise attitude control. This research focuses on how to cancel the magnetic disturbance in orbit. This paper presents a unique method to estimate and compensate for the residual magnetic moment, which interacts with the geomagnetic field and causes the magnetic disturbance. An extended Kalman filter is used to estimate the magnetic disturbance. For more practical consideration of magnetic disturbance compensation, the method has been examined on PRISM (Pico-satellite for Remote-sensing and Innovative Space Missions), and it will also be used in a nano-astrometry satellite mission. This paper concludes that magnetic disturbance estimation and compensation are useful for nano-satellite missions that require highly accurate attitude control.
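The disturbance being estimated here is the torque exerted by the residual magnetic moment in the geomagnetic field, T = m × B. A minimal sketch of that relation and of the compensation step (subtracting the estimated residual from the commanded magnetorquer moment) follows; function names and values are illustrative, not the paper's implementation.

```python
def cross(a, b):
    """3D cross product."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def disturbance_torque(m_residual, b_field):
    """Magnetic disturbance torque T = m x B (N*m): the quantity an
    EKF-estimated residual moment allows the controller to cancel."""
    return cross(m_residual, b_field)

def compensated_command(m_control, m_residual_est):
    """Subtract the estimated residual moment from the commanded
    magnetorquer moment so the net magnetic torque matches the intent."""
    return tuple(c - r for c, r in zip(m_control, m_residual_est))

# Illustrative: 0.01 A*m^2 residual moment in a 30 uT field.
t_dist = disturbance_torque((0.01, 0.0, 0.0), (0.0, 3e-5, 0.0))
cmd = compensated_command((0.0, 0.0, 0.0), (0.01, 0.0, 0.0))
```

Since the torque scales with the geomagnetic field, even a small residual moment dominates the disturbance budget of a spacecraft with a small moment of inertia.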

  5. Accurate and Robust Attitude Estimation Using MEMS Gyroscopes and a Monocular Camera

    NASA Astrophysics Data System (ADS)

    Kobori, Norimasa; Deguchi, Daisuke; Takahashi, Tomokazu; Ide, Ichiro; Murase, Hiroshi

In order to estimate accurate rotations of mobile robots and vehicles, we propose a hybrid system that combines a low-cost monocular camera with gyro sensors. Gyro sensors have drift errors that accumulate over time. A camera, on the other hand, cannot obtain the rotation continuously when feature points cannot be extracted from images, although its accuracy is better than that of gyro sensors. To solve these problems we propose a method for combining these sensors based on an Extended Kalman Filter. The errors of the gyro sensors are corrected by referring to the rotations obtained from the camera. In addition, by using a reliability judgment of the camera rotations and devising the state value of the Extended Kalman Filter, the proposed method performs well even when the rotation is not continuously observable from the camera. Experimental results showed the effectiveness of the proposed method.

  6. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms

    PubMed Central

    2016-01-01

Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes’ principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of ‘unellipticity’ introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667

  7. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    PubMed

    Saccà, Alessandro

    2016-01-01

Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.
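The geometric heart of the method, the 2/3 sphere-to-circumscribed-cylinder ratio, can be checked in a few lines. For a shape with rotational symmetry about its major axis, silhouette area A and minor axis w give V = (2/3)·A·w, exact for spheres and spheroids; the scalar 'unellipticity' factor below is a simplification of the paper's coefficient, included only as a placeholder.

```python
import math

def biovolume(silhouette_area, minor_axis, unellipticity=1.0):
    """Biovolume from a 2D silhouette, assuming rotational symmetry
    about the major axis: V = (2/3) * A * w (the sphere-in-cylinder
    ratio), scaled by an 'unellipticity' coefficient (1.0 = spheroid).
    The coefficient's exact definition here is a simplification."""
    return (2.0 / 3.0) * silhouette_area * minor_axis * unellipticity

# Sanity check against a sphere of radius r: silhouette is a circle of
# area pi*r^2 and the minor axis is the diameter 2r.
r = 5.0
v_true = (4.0 / 3.0) * math.pi * r ** 3
v_est = biovolume(math.pi * r ** 2, 2 * r)
```

The same identity holds exactly for prolate spheroids (silhouette π·a·b, minor axis 2b), which is why the method is highly precise for convex, approximately rotationally symmetric cells.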

  8. Does bioelectrical impedance analysis accurately estimate the condition of threatened and endangered desert fish species?

    USGS Publications Warehouse

    Dibble, Kimberly L.; Yard, Micheal D.; Ward, David L.; Yackulic, Charles B.

    2017-01-01

    Bioelectrical impedance analysis (BIA) is a nonlethal tool with which to estimate the physiological condition of animals that has potential value in research on endangered species. However, the effectiveness of BIA varies by species, the methodology continues to be refined, and incidental mortality rates are unknown. Under laboratory conditions we tested the value of using BIA in addition to morphological measurements such as total length and wet mass to estimate proximate composition (lipid, protein, ash, water, dry mass, energy density) in the endangered Humpback Chub Gila cypha and Bonytail G. elegans and the species of concern Roundtail Chub G. robusta and conducted separate trials to estimate the mortality rates of these sensitive species. Although Humpback and Roundtail Chub exhibited no or low mortality in response to taking BIA measurements versus handling for length and wet-mass measurements, Bonytails exhibited 14% and 47% mortality in the BIA and handling experiments, respectively, indicating that survival following stress is species specific. Derived BIA measurements were included in the best models for most proximate components; however, the added value of BIA as a predictor was marginal except in the absence of accurate wet-mass data. Bioelectrical impedance analysis improved the R2 of the best percentage-based models by no more than 4% relative to models based on morphology. Simulated field conditions indicated that BIA models became increasingly better than morphometric models at estimating proximate composition as the observation error around wet-mass measurements increased. However, since the overall proportion of variance explained by percentage-based models was low and BIA was mostly a redundant predictor, we caution against the use of BIA in field applications for these sensitive fish species.

  9. Accurate Estimation of the Fine Layering Effect on the Wave Propagation in the Carbonate Rocks

    NASA Astrophysics Data System (ADS)

    Bouchaala, F.; Ali, M. Y.

    2014-12-01

The attenuation experienced by a seismic wave during propagation can be divided into two main parts: scattering and intrinsic attenuation. Scattering is an elastic redistribution of energy caused by medium heterogeneities, whereas intrinsic attenuation is an inelastic phenomenon, due mainly to fluid-grain friction during the wave's passage. Because intrinsic attenuation is directly related to the physical characteristics of the medium, it can be used for medium characterization and fluid detection, which is beneficial for the oil and gas industry. Intrinsic attenuation is estimated by subtracting the scattering from the total attenuation, so its accuracy depends directly on the accuracy of both the total attenuation and the scattering. The total attenuation can be estimated from the recorded waves using in-situ methods such as the spectral-ratio and frequency-shift methods. The scattering is estimated by treating the heterogeneities as a succession of stacked layers, each characterized by a single density and velocity. The accuracy of the scattering estimate depends strongly on the layer thicknesses, especially in media composed of carbonate rocks, which are known for their strong heterogeneity. Previous studies proposed assumptions for the choice of layer thickness, but these showed limitations, particularly for carbonate rocks. In this study we established a relationship between layer thickness and the frequency of propagation through a mathematical development of the generalized O'Doherty-Anstey formula. We validated this relationship with synthetic tests and with real data from a VSP carried out over an onshore oilfield in the emirate of Abu Dhabi, United Arab Emirates, composed primarily of carbonate rocks. The results demonstrate the utility of our relationship for an accurate estimation of the scattering.
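The spectral-ratio method mentioned here for total attenuation has a compact standard form: under a constant-Q model, the log spectral ratio of two receivers is linear in frequency with slope −πΔt/Q. A sketch under that model, with an assumed function name and synthetic numbers:

```python
import math

def q_from_spectral_ratio(freqs, amp_near, amp_far, travel_dt):
    """Spectral-ratio Q estimate: ln(A_far/A_near) = -pi*f*dt/Q + const,
    so Q = -pi*dt / slope, with the slope found by least squares on the
    log amplitude ratio versus frequency."""
    y = [math.log(af / an) for af, an in zip(amp_far, amp_near)]
    n = len(freqs)
    fm = sum(freqs) / n
    ym = sum(y) / n
    slope = (sum((f - fm) * (v - ym) for f, v in zip(freqs, y))
             / sum((f - fm) ** 2 for f in freqs))
    return -math.pi * travel_dt / slope

# Synthetic check: spectra built with a known Q of 50 over dt = 0.1 s.
Q_true, dt = 50.0, 0.1
freqs = [10.0, 20.0, 30.0, 40.0]
near = [1.0] * 4
far = [math.exp(-math.pi * f * dt / Q_true) for f in freqs]
q_est = q_from_spectral_ratio(freqs, near, far, dt)
```

This recovers the total Q; the study's contribution concerns the separate scattering term, whose estimate depends on the layer-thickness choice the abstract discusses.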

  10. Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry

    NASA Astrophysics Data System (ADS)

    van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.

    2016-03-01

Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle (θ < 20°). These methods enable accurate depth estimation of surgical tools with respect to anatomical structures. However, they are computationally expensive and time consuming, rendering them unattractive for image-guided interventions. We propose an alternative approach for depth estimation of biopsy needles during image-guided interventions, in which we split the problem into two parts and solve them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of the previous two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for angles larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.
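The triangulation step can be illustrated with the classic midpoint method for two viewing rays; this is a generic sketch, not the authors' implementation, and the rays are assumed to come from already-calibrated projection geometry.

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation of two viewing rays (origin, direction):
    solve for the ray parameters s, t minimising |o1+s*d1 - (o2+t*d2)|
    and return the midpoint of the closest-approach segment."""
    r = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b  # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = tuple(o + s * v for o, v in zip(o1, d1))
    p2 = tuple(o + t * v for o, v in zip(o2, d2))
    return tuple((x + y) / 2 for x, y in zip(p1, p2))

# Two rays from sources 1 unit apart, both aimed at the point (1, 2, 5).
needle_tip = triangulate_midpoint((0.0, 0.0, 0.0), (1.0, 2.0, 5.0),
                                  (1.0, 0.0, 0.0), (0.0, 2.0, 5.0))
```

With noisy correspondences the rays are skew rather than intersecting, and the closest-approach midpoint gives a robust estimate; a wider angular baseline shrinks the depth error, matching the 1° versus >10° results above.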

  11. Estimate of excess uranium in surface soil surrounding the Feed Materials Production Center using a requalified data base.

    PubMed

    Stevenson, K A; Hardy, E P

    1993-09-01

    A conservative estimate of the excess total uranium in the top 5 cm of soil surrounding the former Feed Materials Production Center was made using a data base compiled by the International Technology Corporation in 1986, and the requalification of that data base was completed in 1988. The results indicate that within an area of 8 km2, extending 2 km both northeast and southwest of the Feed Materials Production Center, the uranium concentration is between 2 and 5 times greater than average natural background radiation levels. More than 85% of this excess uranium is deposited within 1 km of the site boundary. The presence of any excess uranium outside of this area is indistinguishable from the natural background contribution.

  12. Spot urine sodium measurements do not accurately estimate dietary sodium intake in chronic kidney disease12

    PubMed Central

    Dougher, Carly E; Rifkin, Dena E; Anderson, Cheryl AM; Smits, Gerard; Persky, Martha S; Block, Geoffrey A; Ix, Joachim H

    2016-01-01

Background: Sodium intake influences blood pressure and proteinuria, yet the impact on long-term outcomes is uncertain in chronic kidney disease (CKD). Accurate assessment is essential for clinical and public policy recommendations, but few large-scale studies use 24-h urine collections. Recent studies that used spot urine sodium and associated estimating equations suggest that they may provide a suitable alternative, but their accuracy in patients with CKD is unknown. Objective: We compared the accuracy of 4 equations [the Nerbass, INTERSALT (International Cooperative Study on Salt, Other Factors, and Blood Pressure), Tanaka, and Kawasaki equations] that use spot urine sodium to estimate 24-h sodium excretion in patients with moderate to advanced CKD. Design: We evaluated the accuracy of spot urine sodium to predict mean 24-h urine sodium excretion over 9 mo in 129 participants with stage 3–4 CKD. Spot morning urine sodium was used in 4 estimating equations. Bias, precision, and accuracy were assessed and compared across each equation. Results: The mean age of the participants was 67 y, 52% were female, and the mean estimated glomerular filtration rate was 31 ± 9 mL · min⁻¹ · 1.73 m⁻². The mean ± SD number of 24-h urine collections was 3.5 ± 0.8/participant, and the mean 24-h sodium excretion was 168.2 ± 67.5 mmol/d. Although the Tanaka equation demonstrated the least bias (mean: −8.2 mmol/d), all 4 equations had poor precision and accuracy. The INTERSALT equation demonstrated the highest accuracy but derived an estimate within 30% of mean measured sodium excretion in only 57% of observations. Bland-Altman plots revealed systematic bias with the Nerbass, INTERSALT, and Tanaka equations, underestimating sodium excretion when intake was high. Conclusion: These findings do not support the use of spot urine specimens to estimate dietary sodium intake in patients with CKD and research studies enriched with patients with CKD. The parent data for this

  13. Spot urine sodium measurements do not accurately estimate dietary sodium intake in chronic kidney disease.

    PubMed

    Dougher, Carly E; Rifkin, Dena E; Anderson, Cheryl Am; Smits, Gerard; Persky, Martha S; Block, Geoffrey A; Ix, Joachim H

    2016-08-01

Sodium intake influences blood pressure and proteinuria, yet the impact on long-term outcomes is uncertain in chronic kidney disease (CKD). Accurate assessment is essential for clinical and public policy recommendations, but few large-scale studies use 24-h urine collections. Recent studies that used spot urine sodium and associated estimating equations suggest that they may provide a suitable alternative, but their accuracy in patients with CKD is unknown. We compared the accuracy of 4 equations [the Nerbass, INTERSALT (International Cooperative Study on Salt, Other Factors, and Blood Pressure), Tanaka, and Kawasaki equations] that use spot urine sodium to estimate 24-h sodium excretion in patients with moderate to advanced CKD. We evaluated the accuracy of spot urine sodium to predict mean 24-h urine sodium excretion over 9 mo in 129 participants with stage 3-4 CKD. Spot morning urine sodium was used in 4 estimating equations. Bias, precision, and accuracy were assessed and compared across each equation. The mean age of the participants was 67 y, 52% were female, and the mean estimated glomerular filtration rate was 31 ± 9 mL · min(-1) · 1.73 m(-2). The mean ± SD number of 24-h urine collections was 3.5 ± 0.8/participant, and the mean 24-h sodium excretion was 168.2 ± 67.5 mmol/d. Although the Tanaka equation demonstrated the least bias (mean: -8.2 mmol/d), all 4 equations had poor precision and accuracy. The INTERSALT equation demonstrated the highest accuracy but derived an estimate within 30% of mean measured sodium excretion in only 57% of observations. Bland-Altman plots revealed systematic bias with the Nerbass, INTERSALT, and Tanaka equations, underestimating sodium excretion when intake was high. These findings do not support the use of spot urine specimens to estimate dietary sodium intake in patients with CKD and research studies enriched with patients with CKD. The parent data for this study come from a clinical trial that was registered at
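The bias/precision/accuracy comparison can be sketched as below; the metric definitions (mean error, interquartile range of errors, share of estimates within 30% of measured) follow common practice rather than the study's exact protocol, and the numbers are illustrative, not study data.

```python
import statistics

def equation_performance(estimated, measured):
    """Bias, precision, and accuracy metrics for a spot-urine estimating
    equation versus measured 24-h sodium excretion (mmol/d): mean error,
    IQR of errors, and the fraction of estimates within 30% of measured."""
    errors = [e - m for e, m in zip(estimated, measured)]
    bias = statistics.mean(errors)
    q = statistics.quantiles(errors, n=4)  # default 'exclusive' method
    precision_iqr = q[2] - q[0]
    within_30 = sum(abs(e - m) <= 0.3 * m
                    for e, m in zip(estimated, measured)) / len(measured)
    return bias, precision_iqr, within_30

# Illustrative paired values (mmol/d), not data from the trial.
est = [150.0, 200.0, 100.0, 180.0]
mea = [160.0, 170.0, 150.0, 175.0]
bias, iqr, within = equation_performance(est, mea)
```

An equation can score well on bias (errors cancelling on average) while still failing the within-30% accuracy criterion, which is the pattern the study reports.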

  14. Single-cell entropy for accurate estimation of differentiation potency from a cell's transcriptome

    NASA Astrophysics Data System (ADS)

    Teschendorff, Andrew E.; Enver, Tariq

    2017-06-01

    The ability to quantify differentiation potential of single cells is a task of critical importance. Here we demonstrate, using over 7,000 single-cell RNA-Seq profiles, that differentiation potency of a single cell can be approximated by computing the signalling promiscuity, or entropy, of a cell's transcriptome in the context of an interaction network, without the need for feature selection. We show that signalling entropy provides a more accurate and robust potency estimate than other entropy-based measures, driven in part by a subtle positive correlation between the transcriptome and connectome. Signalling entropy identifies known cell subpopulations of varying potency and drug resistant cancer stem-cell phenotypes, including those derived from circulating tumour cells. It further reveals that expression heterogeneity within single-cell populations is regulated. In summary, signalling entropy allows in silico estimation of the differentiation potency and plasticity of single cells and bulk samples, providing a means to identify normal and cancer stem-cell phenotypes.
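
    At its core, the signalling-entropy idea treats the expression-weighted interaction network as a random walk and scores a cell by the walk's entropy rate. A toy sketch under that reading (the published SCENT method additionally normalises by the maximum attainable entropy rate; the adjacency matrix and expression vector here are illustrative, not real data):

```python
import math

def signalling_entropy(adj, expr, iters=200):
    """Entropy rate of an expression-weighted random walk on a network.

    Transition probabilities p_ij are proportional to A_ij * x_j; the
    stationary distribution pi is found by power iteration; the score is
    SR = sum_i pi_i * H(p_i), where H is the Shannon entropy of row i.
    """
    n = len(adj)
    p = []
    for i in range(n):
        w = [adj[i][j] * expr[j] for j in range(n)]
        s = sum(w)
        p.append([wj / s for wj in w])
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * p[i][j] for i in range(n)) for j in range(n)]
    return sum(pi[i] * -sum(q * math.log(q) for q in p[i] if q > 0)
               for i in range(n))
```

    Higher entropy means the walk is less committed to particular signalling routes, which is the sense in which promiscuity proxies potency.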

  15. Single-cell entropy for accurate estimation of differentiation potency from a cell's transcriptome

    PubMed Central

    Teschendorff, Andrew E.; Enver, Tariq

    2017-01-01

    The ability to quantify differentiation potential of single cells is a task of critical importance. Here we demonstrate, using over 7,000 single-cell RNA-Seq profiles, that differentiation potency of a single cell can be approximated by computing the signalling promiscuity, or entropy, of a cell's transcriptome in the context of an interaction network, without the need for feature selection. We show that signalling entropy provides a more accurate and robust potency estimate than other entropy-based measures, driven in part by a subtle positive correlation between the transcriptome and connectome. Signalling entropy identifies known cell subpopulations of varying potency and drug resistant cancer stem-cell phenotypes, including those derived from circulating tumour cells. It further reveals that expression heterogeneity within single-cell populations is regulated. In summary, signalling entropy allows in silico estimation of the differentiation potency and plasticity of single cells and bulk samples, providing a means to identify normal and cancer stem-cell phenotypes. PMID:28569836

  16. Greater contrast in Martian hydrological history from more accurate estimates of paleodischarge

    NASA Astrophysics Data System (ADS)

    Jacobsen, R. E.; Burr, D. M.

    2016-09-01

    Correlative width-discharge relationships from the Missouri River Basin are commonly used to estimate fluvial paleodischarge on Mars. However, hydraulic geometry provides alternative, and causal, width-discharge relationships derived from broader samples of channels, including those in reduced-gravity (submarine) environments. Comparison of these relationships implies that causal relationships from hydraulic geometry should yield more accurate and more precise discharge estimates. Our remote analysis of a Martian-terrestrial analog channel, combined with in situ discharge data, substantiates this implication. Applied to Martian features, these results imply that paleodischarges of interior channels of Noachian-Hesperian (~3.7 Ga) valley networks have been underestimated by a factor of several, whereas paleodischarges for smaller fluvial deposits of the Late Hesperian-Early Amazonian (~3.0 Ga) have been overestimated. Thus, these new paleodischarges significantly magnify the contrast between early and late Martian hydrologic activity. Width-discharge relationships from hydraulic geometry represent validated tools for quantifying fluvial input near candidate landing sites of upcoming missions.

  17. Can student health professionals accurately estimate alcohol content in commonly occurring drinks?

    PubMed Central

    Sinclair, Julia; Searle, Emma

    2016-01-01

    Objectives: Correct identification of alcohol as a contributor to, or comorbidity of, many psychiatric diseases requires health professionals to be competent and confident to take an accurate alcohol history. Being able to estimate (or calculate) the alcohol content in commonly consumed drinks is a prerequisite for quantifying levels of alcohol consumption. The aim of this study was to assess this ability in medical and nursing students. Methods: A cross-sectional survey of 891 medical and nursing students across different years of training was conducted. Students were asked the alcohol content of 10 different alcoholic drinks by seeing a slide of the drink (with picture, volume and percentage of alcohol by volume) for 30 s. Results: Overall, the mean number of correctly estimated drinks (out of the 10 tested) was 2.4, increasing to just over 3 if a 10% margin of error was used. Wine and premium strength beers were underestimated by over 50% of students. Those who drank alcohol themselves, or who were further on in their clinical training, did better on the task, but overall the levels remained low. Conclusions: Knowledge of, or the ability to work out, the alcohol content of commonly consumed drinks is poor, and further research is needed to understand the reasons for this and the impact this may have on the likelihood to undertake screening or initiate treatment. PMID:27536344
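
    For reference, the quantity the students were asked to estimate is simple to compute once volume and strength are known: in the UK system, units of alcohol equal volume (ml) times ABV (%) divided by 1000, one unit being 10 ml (about 8 g) of pure ethanol. A minimal helper:

```python
def alcohol_units(volume_ml, abv_percent):
    """UK alcohol units in a drink: volume (ml) * ABV (%) / 1000.

    One UK unit is 10 ml (about 8 g) of pure ethanol, so a 568 ml pint
    of 5% beer contains 2.84 units.
    """
    return volume_ml * abv_percent / 1000.0
```

    That a premium 5% pint is nearly 3 units, not 1, is exactly the kind of underestimate the survey found for wine and stronger beers.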

  18. Ocean Lidar Measurements of Beam Attenuation and a Roadmap to Accurate Phytoplankton Biomass Estimates

    NASA Astrophysics Data System (ADS)

    Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray

    2016-06-01

    Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of diffuse attenuation Kd and beam attenuation c (i.e., a multiple scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter are measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.

  19. Which is the most accurate formula to estimate fetal weight in women with severe preterm preeclampsia?

    PubMed

    Geerts, Lut; Widmer, Tania

    2011-02-01

    To identify the most accurate formula to estimate fetal weight (EFW) from ultrasound parameters in severe preterm preeclampsia. In a prospective study, serial ultrasound assessments were performed in 123 women with severe preterm preeclampsia. The EFW, calculated for 111 live born, normal, singleton fetuses within 7 days of delivery using 38 published formulae, was compared to the actual birth weight (ABW). Accuracy was assessed by correlations, mean absolute and signed percentage errors, the percentage of correct predictions within 5-20% of ABW, and limits of agreement. Accuracy was highly variable. Most formulae systematically overestimated ABW. Five Hadlock formulae utilizing three or four variables and the Woo 3 formula had the highest accuracy and did not differ significantly (mean absolute % errors 6.8-7.2%, SDs 5.3-5.8%, > 75% of estimations within 10% of ABW and 95% limits of agreement between -18/20% and +14/15%). They were not negatively affected by clinical variables but had some inconsistency in bias over the ABW range. All other formulae, including those targeted for small, preterm or growth restricted fetuses, were inferior and/or affected by multiple clinical variables. In this gestational age window, Hadlock formulae using three or four variables or the Woo 3 formula can be recommended.
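
    To illustrate how such formulae are applied, below is a sketch of one commonly cited three-parameter Hadlock formula. The coefficients are as usually quoted from Hadlock et al. (1985) and should be verified against the original source; the abstract does not specify which of the Hadlock variants it evaluated:

```python
import math

def hadlock_efw(hc_cm, ac_cm, fl_cm):
    """Estimated fetal weight (g) from a three-parameter Hadlock formula.

    log10(EFW) = 1.326 - 0.00326*AC*FL + 0.0107*HC + 0.0438*AC + 0.158*FL
    with head circumference (HC), abdominal circumference (AC) and femur
    length (FL) in cm. Coefficients as commonly quoted; verify before use.
    """
    log10_efw = (1.326 - 0.00326 * ac_cm * fl_cm
                 + 0.0107 * hc_cm + 0.0438 * ac_cm + 0.158 * fl_cm)
    return 10 ** log10_efw

def percent_error(efw, abw):
    """Signed percentage error of the estimate relative to actual birth weight."""
    return 100.0 * (efw - abw) / abw
```

    Systematic overestimation, as reported for most formulae in the study, shows up as a consistently positive `percent_error` across fetuses.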

  20. mBEEF: an accurate semi-local Bayesian error estimation density functional.

    PubMed

    Wellendorff, Jess; Lundgaard, Keld T; Jacobsen, Karsten W; Bligaard, Thomas

    2014-04-14

    We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations.

  1. The potential of more accurate InSAR covariance matrix estimation for land cover mapping

    NASA Astrophysics Data System (ADS)

    Jiang, Mi; Yong, Bin; Tian, Xin; Malhotra, Rakesh; Hu, Rui; Li, Zhiwei; Yu, Zhongbo; Zhang, Xinxin

    2017-04-01

    Synthetic aperture radar (SAR) and Interferometric SAR (InSAR) provide both structural and electromagnetic information for the ground surface and therefore have been widely used for land cover classification. However, relatively few studies have developed analyses that investigate SAR datasets over richly textured areas where heterogeneous land covers exist and intermingle over short distances. One of the main difficulties is that the shapes of the structures in a SAR image cannot be represented in detail, as mixed pixels are likely to occur when conventional InSAR parameter estimation methods are used. To solve this problem and further extend previous research into remote monitoring of urban environments, we address the use of accurate InSAR covariance matrix estimation to improve the accuracy of land cover mapping. The standard and updated methods were tested using the HH-polarization TerraSAR-X dataset and compared with each other using the random forest classifier. A detailed accuracy assessment compiled for six types of surfaces shows that the updated method outperforms the standard approach by around 9%, with an overall accuracy of 82.46% over areas with rich texture in Zhuhai, China. This paper demonstrates that the accuracy of land cover mapping can benefit from the enhancement of the quality of the observations, in addition to the classifier selection and multi-source data integration reported in previous studies.
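
    A central element of the InSAR covariance matrix is the complex coherence between two co-registered single-look-complex (SLC) images. A minimal sample estimator is sketched below; the "accurate" estimation methods the abstract refers to improve on it chiefly by averaging only statistically homogeneous neighbouring pixels rather than a fixed window (the pixel values here are placeholders):

```python
def coherence(s1, s2):
    """Sample complex coherence of two co-registered SLC pixel stacks.

    gamma = sum(s1 * conj(s2)) / sqrt(sum |s1|^2 * sum |s2|^2)
    The set of pixels averaged in these sums is the estimation window;
    choosing it poorly mixes heterogeneous land covers (mixed pixels).
    """
    num = sum(a * b.conjugate() for a, b in zip(s1, s2))
    den = (sum(abs(a) ** 2 for a in s1) * sum(abs(b) ** 2 for b in s2)) ** 0.5
    return num / den
```

    Identical stacks give |gamma| = 1; decorrelation between acquisitions pulls the magnitude toward 0, and it is this magnitude (and phase) that feeds the classifier as a feature.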

  2. Estimating the energetic cost of feeding excess dietary nitrogen to dairy cows

    USDA-ARS?s Scientific Manuscript database

    Feeding N in excess of requirements could require the use of additional energy to synthesize and excrete urea, however, the amount energy required is unclear. Little progress has been made on this topic in recent decades so an extension of work published in 1970 was conducted to quantify the effect ...

  3. Does venous blood gas analysis provide accurate estimates of hemoglobin oxygen affinity?

    PubMed

    Huber, Fabienne L; Latshang, Tsogyal D; Goede, Jeroen S; Bloch, Konrad E

    2013-04-01

    Alterations in hemoglobin oxygen affinity can be detected by exposing blood to different PO2 and recording oxygen saturation, a method termed tonometry. It is the gold standard to measure the PO2 associated with 50 % oxygen saturation, the index used to quantify oxygen affinity (P50Tono). P50Tono is used in the evaluation of patients with erythrocytosis suspected to have hemoglobin with abnormal oxygen affinity. Since tonometry is labor intensive and not generally available, we investigated whether accurate estimates of P50 could also be obtained by venous blood gas analysis, co-oximetry, and standard equations (P50Ven). In 50 patients referred for evaluation of erythrocytosis, pH, PO2, and oxygen saturation were measured in venous blood to estimate P50Ven; P50Tono was measured for comparison. Agreement among P50Ven and P50Tono was evaluated (Bland-Altman analysis). Mean P50Tono was 25.8 (range 17.4-34.1) mmHg. The mean difference (bias) of P50Tono-P50Ven was 0.5 mmHg; limits of agreement (95 % confidence limits) were -5.2 to +6.1 mmHg. The sensitivity and specificity of P50Ven to identify the 25 patients with P50Tono outside the normal range of 22.9-26.8 mmHg were 5 and 77 %, respectively. We conclude that estimates of P50 based on venous blood gas analysis and standard equations have a low bias compared to tonometry. However, the precision of P50Ven is not sufficiently high to replace P50Tono in the evaluation of individual patients with suspected disturbances of hemoglobin oxygen affinity.

  4. Discrete state model and accurate estimation of loop entropy of RNA secondary structures.

    PubMed

    Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie

    2008-03-28

    Conformational entropy makes important contribution to the stability and folding of RNA molecule, but it is challenging to either measure or compute conformational entropy associated with long loops. We develop optimized discrete k-state models of RNA backbone based on known RNA structures for computing entropy of loops, which are modeled as self-avoiding walks. To estimate entropy of hairpin, bulge, internal loop, and multibranch loop of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method considers excluded volume effect. It is general and can be applied to calculating entropy of loops with longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and experimental measurement. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on estimated entropy, we have developed empirical formulae for accurate calculation of entropy of long loops in different secondary structures. Our study on the effect of asymmetric size of loops suggest that loop entropy of internal loops is largely determined by the total loop length, and is only marginally affected by the asymmetric size of the two loops. Our finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html.
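
    The Jacobson-Stockmayer extrapolation the authors test predicts the entropy of a loop of length n from a measured reference loop via a logarithmic penalty. A sketch using the commonly quoted exponent 1.75 (the reference entropy and lengths below are placeholders, not values from the paper):

```python
import math

R_CAL = 1.987  # gas constant, cal/(mol*K)

def js_loop_entropy(n, n_ref, ds_ref):
    """Jacobson-Stockmayer extrapolation of loop entropy.

    dS(n) = dS(n_ref) - 1.75 * R * ln(n / n_ref)
    Extrapolates from a reference loop of length n_ref with entropy
    ds_ref (cal/(mol*K)); real parameter sets differ by loop type.
    """
    return ds_ref - 1.75 * R_CAL * math.log(n / n_ref)
```

    The abstract's finding is that this form works well for hairpins but accumulates large errors for bulge, internal, and multibranch loops, motivating their sampled-entropy empirical formulae.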

  5. Accurate Visual Heading Estimation at High Rotation Rate Without Oculomotor or Static-Depth Cues

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    It has been claimed that either oculomotor or static depth cues are needed to provide the self-rotation signals necessary for accurate heading estimation at rotation rates above approximately 1 deg/s. We tested this hypothesis by simulating self-motion along a curved path with the eyes fixed in the head (plus or minus 16 deg/s of rotation). Curvilinear motion offers two advantages: 1) heading remains constant in retinotopic coordinates, and 2) there is no visual-oculomotor conflict (both actual and simulated eye position remain stationary). We simulated 400 ms of rotation combined with 16 m/s of translation at fixed angles with respect to gaze towards two vertical planes of random dots initially 12 and 24 m away, with a field of view of 45 degrees. Four subjects were asked to fixate a central cross and to respond whether they were translating to the left or right of straight-ahead gaze. From the psychometric curves, heading bias (mean) and precision (semi-interquartile) were derived. The mean bias over 2-5 runs was 3.0, 4.0, -2.0, -0.4 deg for the first author and three naive subjects, respectively (positive indicating towards the rotation direction). The mean precision was 2.0, 1.9, 3.1, 1.6 deg, respectively. The ability of observers to make relatively accurate and precise heading judgments, despite the large rotational flow component, refutes the view that extra-flow-field information is necessary for human visual heading estimation at high rotation rates. Our results support models that process combined translational/rotational flow to estimate heading, but should not be construed to suggest that other cues do not play an important role when they are available to the observer.

  6. Accurate optical flow field estimation using mechanical properties of soft tissues

    NASA Astrophysics Data System (ADS)

    Mehrabian, Hatef; Karimi, Hirad; Samani, Abbas

    2009-02-01

    A novel optical flow based technique is presented in this paper to measure the nodal displacements of soft tissue undergoing large deformations. In hyperelasticity imaging, soft tissues may be compressed extensively [1], and the deformation may exceed the number of pixels that ordinary optical flow approaches can detect. Furthermore, in most biomedical applications there is a large amount of image information that represents the geometry of the tissue and the number of tissue types present in the organ of interest. Such information is often ignored in applications such as image registration. In this work we incorporate the information pertaining to soft tissue mechanical behavior (a Neo-Hookean hyperelastic model is used here), in addition to the tissue geometry before compression, into a hierarchical Horn-Schunck optical flow method to overcome this weakness in detecting large deformations. Applying the proposed method to a phantom at several compression levels showed that it yields reasonably accurate displacement fields. Estimated displacement results of this phantom study obtained for displacement fields of 85 pixels/frame and 127 pixels/frame are reported and discussed in this paper.

  7. Optimization of Correlation Kernel Size for Accurate Estimation of Myocardial Contraction and Relaxation

    NASA Astrophysics Data System (ADS)

    Honjo, Yasunori; Hasegawa, Hideyuki; Kanai, Hiroshi

    2012-07-01

    rates estimated using different kernel sizes were examined using the normalized mean-squared error of the estimated strain rate relative to the actual one obtained by the 1D phase-sensitive method. Compared with conventional kernel sizes, this result shows that the proposed correlation kernel may enable more accurate measurement of the strain rate. In the in vivo measurement, the regional instantaneous velocities and strain rates in the radial direction of the heart wall were analyzed in detail at an extremely high temporal resolution (frame rate of 860 Hz). In this study, the transition between contraction and relaxation was able to be detected by 2D tracking. These results indicate the potential of this method for high-accuracy estimation of strain rates and detailed analyses of the physiological function of the myocardium.

  8. How accurately can we estimate energetic costs in a marine top predator, the king penguin?

    PubMed

    Halsey, Lewis G; Fahlman, Andreas; Handrich, Yves; Schmidt, Alexander; Woakes, Anthony J; Butler, Patrick J

    2007-01-01

    King penguins (Aptenodytes patagonicus) are one of the greatest consumers of marine resources. However, while their influence on the marine ecosystem is likely to be significant, only an accurate knowledge of their energy demands will indicate their true food requirements. Energy consumption has been estimated for many marine species using the heart rate-rate of oxygen consumption (f(H) - V(O2)) technique, and the technique has been applied successfully to answer eco-physiological questions. However, previous studies on the energetics of king penguins, based on developing or applying this technique, have raised a number of issues about the degree of validity of the technique for this species. These include the predictive validity of the present f(H) - V(O2) equations across different seasons and individuals and during different modes of locomotion. In many cases, these issues also apply to other species for which the f(H) - V(O2) technique has been applied. In the present study, the accuracy of three prediction equations for king penguins was investigated based on validity studies and on estimates of V(O2) from published, field f(H) data. The major conclusions from the present study are: (1) in contrast to that for walking, the f(H) - V(O2) relationship for swimming king penguins is not affected by body mass; (2) prediction equation (1), log(V(O2)) = -0.279 + 1.24log(f(H)) + 0.0237t - 0.0157log(f(H))t, derived in a previous study, is the most suitable equation presently available for estimating V(O2) in king penguins for all locomotory and nutritional states. A number of possible problems associated with producing an f(H) - V(O2) relationship are discussed in the present study. Finally, a statistical method to include easy-to-measure morphometric characteristics, which may improve the accuracy of f(H) - V(O2) prediction equations, is explained.
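
    The prediction equation quoted above can be applied directly. The sketch below reads it with base-10 logarithms, which is an assumption; the meaning and units of f(H), t, and V(O2) follow the original paper:

```python
import math

def predict_vo2(f_h, t):
    """Rate of oxygen consumption predicted from heart rate f_H.

    Implements prediction equation (1) from the abstract,
        log(V_O2) = -0.279 + 1.24*log(f_H) + 0.0237*t - 0.0157*log(f_H)*t,
    assuming base-10 logarithms. Units of f_H, t, and V_O2 are as in
    the original study.
    """
    lf = math.log10(f_h)
    return 10 ** (-0.279 + 1.24 * lf + 0.0237 * t - 0.0157 * lf * t)
```

    The interaction term -0.0157*log(f_H)*t makes the heart-rate slope depend on t, which is why a single equation can cover the different states the authors validate.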

  9. A novel method based on two cameras for accurate estimation of arterial oxygen saturation.

    PubMed

    Liu, He; Ivanov, Kamen; Wang, Yadong; Wang, Lei

    2015-05-30

    Photoplethysmographic imaging (PPGi), which is camera based, allows acquiring a photoplethysmogram and measuring physiological parameters such as pulse rate, respiration rate and perfusion level. It has also shown potential for estimation of arterial oxygen saturation (SaO2). However, there are some technical limitations such as optical shunting, different camera sensitivity to different light spectra, different AC-to-DC ratios (the peak-to-peak amplitude to baseline ratio) of the PPGi signal for different portions of the sensor surface area, the low sampling rate and the inconsistency of contact force between the fingertip and camera lens. In this paper, we take full account of the above-mentioned design challenges and present an accurate SaO2 estimation method based on two cameras. The hardware system we used consisted of an FPGA development board (XC6SLX150T-3FGG676 from Xilinx) with two commercial cameras and an SD card connected to it. The two cameras were placed back to back; one camera acquired the PPGi signal from the right index fingertip under 660 nm light illumination, while the other camera acquired the PPGi signal from the thumb fingertip under 800 nm light illumination. Both PPGi signals were captured simultaneously, recorded in a text file on the SD card and processed offline using MATLAB®. The calculation of SaO2 was based on the principle of pulse oximetry. The AC-to-DC ratio was acquired from the ratio of powers of the AC and DC components of the PPGi signal in the time-frequency domain using the smoothed pseudo Wigner-Ville distribution. The calibration curve required for SaO2 measurement was obtained by linear regression analysis. The results of our estimation method from 12 subjects showed high correlation and accuracy compared with conventional pulse oximetry for the range from 90 to 100%. Our method is suitable for mobile applications implemented in smartphones, which could allow SaO2 measurement in a pervasive environment.
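
    The "principle of pulse oximetry" invoked above reduces to a ratio-of-ratios: the AC/DC ratio at the red wavelength (660 nm here) divided by the AC/DC ratio at the near-infrared wavelength (800 nm here), mapped to SaO2 through an empirically calibrated line. A sketch with placeholder calibration coefficients (the paper fits its own coefficients by linear regression, and derives AC/DC from time-frequency power rather than raw amplitudes):

```python
def ratio_of_ratios(ac660, dc660, ac800, dc800):
    """Ratio-of-ratios R used in pulse oximetry: (AC/DC)_660 / (AC/DC)_800."""
    return (ac660 / dc660) / (ac800 / dc800)

def sao2_from_r(r, a=110.0, b=25.0):
    """Linear calibration SaO2 (%) = a - b*R.

    a and b are illustrative placeholders; a real device derives them by
    regressing R against a reference oximeter, as the paper does.
    """
    return a - b * r
```

    Because R is a ratio of ratios, common-mode factors such as overall illumination and sensor gain cancel, which is what makes the two-camera arrangement workable despite differing spectral sensitivities.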

  10. Transthoracic echocardiography: an accurate and precise method for estimating cardiac output in the critically ill patient.

    PubMed

    Mercado, Pablo; Maizel, Julien; Beyls, Christophe; Titeca-Beauport, Dimitri; Joris, Magalie; Kontar, Loay; Riviere, Antoine; Bonef, Olivier; Soupison, Thierry; Tribouilloy, Christophe; de Cagny, Bertrand; Slama, Michel

    2017-06-09

    % yielded a sensitivity of 88% and specificity of 66% for detecting a ΔCO-PAC of more than 10%. In critically ill mechanically ventilated patients, CO-TTE is an accurate and precise method for estimating CO. Furthermore, CO-TTE can accurately track variations in CO.

  11. Estimation of sedimentation rates based on the excess of radium 228 in granitic reservoir sediments.

    PubMed

    Reyss, Jean-Louis; Mangeret, Arnaud; Courbet, Christelle; Bassot, Sylvain; Alcalde, Gilles; Thouvenot, Antoine; Guillevic, Jérôme

    2016-10-01

    Knowledge of sedimentation rates in lakes is required to understand and quantify the geochemical processes involved in scavenging and remobilization of contaminants at the Sediment-Water Interface (SWI). The well-known (210)Pb excess ((210)Pbex) method cannot be used for quantifying sedimentation rates in uranium-enriched catchments, as large amounts of (210)Pb produced by weathering and human activities may dilute the atmospheric (210)Pb. As an alternative dating method in these cases, we propose an original method based on (232)Th decay series nuclides. This study focuses on an artificial lake located in a granitic catchment downstream from a former uranium mine site. The exponential decay of (228)Ra excess ((228)Raex) with depth in two long cores yields sedimentation rates of 2.4 and 5.2 cm yr(-1) respectively. These sedimentation rates lead to the attribution of the (137)Cs activity peak observed at depth to the Chernobyl fallout event of 1986. The (228)Raex method was also applied to two short cores which did not display the (137)Cs peak, and mean sedimentation rates of 2.1 and 4.0 cm yr(-1) were deduced. The proposed method may replace the classical radiochronological methods ((210)Pbex, (137)Cs) to determine sedimentation rates in granitic catchments. Copyright © 2016 Elsevier Ltd. All rights reserved.
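
    The dating principle is that (228)Ra excess decays exponentially with depth, A(z) = A(0)·exp(-lambda·z/s), so a linear fit of ln A against depth z has slope -lambda/s and yields the sedimentation rate s. A sketch using the (228)Ra half-life of 5.75 y; the activity profile in the test is synthetic, for illustration only:

```python
import math

RA228_HALF_LIFE_Y = 5.75  # years

def sedimentation_rate(depths_cm, activities):
    """Sedimentation rate (cm/yr) from the decay of 228Ra excess with depth.

    A(z) = A(0) * exp(-lambda * z / s), so an ordinary least-squares fit
    of ln A against depth gives slope = -lambda / s, i.e. s = -lambda / slope.
    """
    lam = math.log(2) / RA228_HALF_LIFE_Y
    n = len(depths_cm)
    ln_a = [math.log(a) for a in activities]
    mean_z = sum(depths_cm) / n
    mean_y = sum(ln_a) / n
    slope = (sum((z - mean_z) * (y - mean_y) for z, y in zip(depths_cm, ln_a))
             / sum((z - mean_z) ** 2 for z in depths_cm))
    return -lam / slope
```

    The short half-life of (228)Ra (versus 22.3 y for (210)Pb) is what restricts this method to the rapidly accumulating sediments described in the abstract.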

  12. Estimation of physical activity and prevalence of excessive body mass in rural and urban Polish adolescents.

    PubMed

    Hoffmann, Karolina; Bryl, Wiesław; Marcinkowski, Jerzy T; Strażyńska, Agata; Pupek-Musialik, Danuta

    2011-01-01

    Excessive body mass and a sedentary lifestyle are well-known cardiovascular risk factors, which when present in the young population may have significant health consequences, both in the short- and long-term. The aim of the study was to evaluate the prevalence of overweight, obesity, and sedentary lifestyle in two teenage populations living in an urban or rural area. An additional aim was to compare their physical activity. The study was designed and conducted in 2009. The study population consisted of 116 students aged 15-17 years - 61 males (52.7%) and 55 females (47.3%), randomly selected from public junior grammar schools and secondary schools in the Poznań Region. There were 61 respondents from a rural area - 32 males (52.5%) and 29 females (47.5%), whereas 55 teenagers lived in an urban area - 29 males (47.5%) and 26 females (47.3%). Students were asked to complete a questionnaire, which was especially prepared for the study and contained questions concerning health and lifestyle. A basic physical examination was carried out in all 116 students, including measurements of the anthropometric features. Calculations were performed using the statistical package STATISTICA (data analysis software system), Version 8.0. When comparing these two populations, no statistically significant differences were detected in weight-height ratios, with the exception of the fact that the urban youths had a larger hip circumference (97.1 v. 94.3 cm, p<0.05). In the group of urban students there were also significantly more subjects with excessive body weight (27.3% v. 24.6%, p<0.05), with a predominant proportion of obese students (60%). Significantly more of the obese individuals were male (66.7%). In the population of rural teenagers, the obesity rate did not differ statistically significantly from the percentage of overweight (11.5% v. 13.1%, p>0.05); the problem of excessive weight affected both sexes in a similar proportion (25% boys and 24.1% girls, p>0.05). In this

  13. Crop area estimation based on remotely-sensed data with an accurate but costly subsample

    NASA Technical Reports Server (NTRS)

    Gunst, R. F.

    1983-01-01

    Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator which is viewed as especially promising is the errors-in-variable regression estimator. Investigations established the need for caution with this estimator when the ratio of two error variances is not precisely known.
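
    The errors-in-variables estimator flagged above has a closed form when the ratio of the two error variances is known (Deming regression), which makes the stated caution concrete: the fitted slope depends explicitly on that ratio. A sketch, where `delta` is the assumed ratio of the error variance in y to that in x:

```python
import math

def deming_slope(x, y, delta=1.0):
    """Errors-in-variables (Deming) regression slope.

    slope = (s_yy - delta*s_xx + sqrt((s_yy - delta*s_xx)**2
             + 4*delta*s_xy**2)) / (2*s_xy)
    where s_xx, s_yy, s_xy are sample (co)variances and delta is the
    assumed error-variance ratio. Mis-specifying delta biases the slope,
    which is the caution the abstract raises about this estimator.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    d = syy - delta * sxx
    return (d + math.sqrt(d ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
```

    With delta = infinity the formula degenerates to ordinary least squares of y on x; with delta = 1 errors in the remotely-sensed predictor and the costly ground subsample are weighted equally.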

  14. A robust and accurate center-frequency estimation (RACE) algorithm for improving motion estimation performance of SinMod on tagged cardiac MR images without known tagging parameters☆

    PubMed Central

    Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei

    2015-01-01

    A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, under the circumstance that the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Some other available CF estimation algorithms are brought out for comparison. Several validation approaches that can work on the real data without ground truths are specially designed. Experimental results on human body in vivo cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in facilitating the motion estimation performance of SinMod. PMID:25087857
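
    The mean-shift step at the heart of RACE repeatedly moves an estimate to the kernel-weighted average of nearby samples, converging on a local mode of the sample density. A 1-D illustration with a Gaussian kernel; this is a generic sketch of the technique, not the paper's implementation, and applying it to spectral energy for the centre-frequency search is the abstract's idea rather than code shown here:

```python
import math

def mean_shift_mode(samples, start, bandwidth=1.0, iters=50):
    """1-D mean shift with a Gaussian kernel.

    Each iteration replaces the current estimate x with the
    kernel-weighted mean of the samples, so x drifts uphill toward a
    nearby mode of the underlying sample density.
    """
    x = start
    for _ in range(iters):
        w = [math.exp(-0.5 * ((s - x) / bandwidth) ** 2) for s in samples]
        x = sum(wi * si for wi, si in zip(w, samples)) / sum(w)
    return x
```

    The two-direction-combination strategy the paper adds on top would run such a search along both tagging directions and fuse the results for robustness.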

  15. A robust and accurate center-frequency estimation (RACE) algorithm for improving motion estimation performance of SinMod on tagged cardiac MR images without known tagging parameters.

    PubMed

    Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei

    2014-11-01

    A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, under the circumstance that the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Some other available CF estimation algorithms are brought out for comparison. Several validation approaches that can work on the real data without ground truths are specially designed. Experimental results on human body in vivo cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in facilitating the motion estimation performance of SinMod. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Insights on the role of accurate state estimation in coupled model parameter estimation by a conceptual climate model study

    NASA Astrophysics Data System (ADS)

    Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui

    2017-03-01

    The uncertainties in values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto model parameters. The signal-to-noise ratio of the error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples the stochastic atmosphere and slow-varying ocean, this study examines the sensitivity of the state-parameter covariance to the accuracy of the estimated model states in different model components of a coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere with a chaotic nature is the major source of the inaccuracy of the estimated state-parameter covariance. Thus, enhancing the estimation accuracy of atmospheric states is very important for the success of coupled model parameter estimation, especially for the parameters in the air-sea interaction processes. The impact of the chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides a guideline for when real observations are used to optimize model parameters in a coupled general circulation model for improving climate analysis and predictions.
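The signal-to-noise argument above can be made concrete with a toy ensemble sketch (not the authors' conceptual climate model): an observation of a state variable updates an ensemble of parameter values through the sample state-parameter covariance, so a noisy covariance estimate directly corrupts the parameter update. The function name and the Kalman-like gain form are illustrative assumptions.

```python
import numpy as np

def parameter_update(param_ens, state_ens, obs, obs_err_var):
    """Project an observation increment onto an ensemble of parameter values
    through the sample state-parameter covariance (the quantity whose
    signal-to-noise ratio controls whether parameter estimation succeeds)."""
    cov_sp = ((state_ens - state_ens.mean()) * (param_ens - param_ens.mean())).mean()
    gain = cov_sp / (state_ens.var() + obs_err_var)   # Kalman-like gain
    return param_ens + gain * (obs - state_ens)
```

With a noise-free linear relation state = 2·param, the update recovers the true parameter exactly; a large observation error variance shrinks the gain toward zero and the parameters barely move.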

  17. Children Can Accurately Monitor and Control Their Number-Line Estimation Performance

    ERIC Educational Resources Information Center

    Wall, Jenna L.; Thompson, Clarissa A.; Dunlosky, John; Merriman, William E.

    2016-01-01

    Accurate monitoring and control are essential for effective self-regulated learning. These metacognitive abilities may be particularly important for developing math skills, such as when children are deciding whether a math task is difficult or whether they made a mistake on a particular item. The present experiments investigate children's ability…

  19. Bi-fluorescence imaging for estimating accurately the nuclear condition of Rhizoctonia spp.

    USDA-ARS?s Scientific Manuscript database

    Aims: To simplify the determination of the nuclear condition of pathogenic Rhizoctonia, which currently must be performed either with two fluorescent dyes (more costly and time-consuming) or with only one fluorescent dye (less accurate). Methods and Results: A red primary ...

  20. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    PubMed Central

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error. PMID:25279263
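For context, a pairwise maximum likelihood distance under the simplest symmetric substitution model (Jukes-Cantor, a special case of the general Markov models mentioned above) has a closed form, d = -3/4 ln(1 - 4p/3), where p is the fraction of mismatched sites. A minimal sketch of that distance, not the paper's covariance estimator:

```python
import math

def jukes_cantor_distance(seq1, seq2):
    """ML pairwise distance under the Jukes-Cantor model: d = -3/4 ln(1 - 4p/3)."""
    assert len(seq1) == len(seq2)
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    if p >= 0.75:
        return float("inf")  # saturation: the distance is undefined
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)
```

Identical sequences give distance 0; the estimate grows faster than p itself because multiple hits at the same site are corrected for.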

  1. Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations

    SciTech Connect

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; Dechant, Lawrence

    2016-05-31

    Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.

  3. Improved patient size estimates for accurate dose calculations in abdomen computed tomography

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Lae

    2017-07-01

    The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict the actual patient doses for different human body sizes because it relies on cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantoms. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based method and the geometry-based method were compared with the results of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was found to be similar to that of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, whereas the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on patient size. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
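The attenuation-based effective diameter can be sketched as follows, assuming each projection value is a line integral ln(I0/I) that converts to a water-equivalent path length when divided by the water attenuation coefficient; the function name and argument conventions are hypothetical, not the paper's implementation.

```python
import numpy as np

def effective_diameter_from_projection(profile, pixel_mm, mu_water_mm):
    """Estimate the effective (water-equivalent) diameter from one attenuation profile.

    profile: line integrals ln(I0/I), one value per detector pixel.
    Dividing by the water attenuation coefficient gives a water-equivalent path
    length per pixel; summing these gives a water-equivalent cross-sectional area,
    and the effective diameter is that of the equal-area circle.
    """
    water_paths_mm = np.asarray(profile) / mu_water_mm   # path length through water
    area_mm2 = water_paths_mm.sum() * pixel_mm           # water-equivalent area
    return 2.0 * np.sqrt(area_mm2 / np.pi)               # diameter of equal-area circle
```

For a water cylinder the recovered effective diameter should match the true diameter, since the summed chord lengths reproduce the circle's area.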

  4. Accurate state estimation for a hydraulic actuator via a SDRE nonlinear filter

    NASA Astrophysics Data System (ADS)

    Strano, Salvatore; Terzo, Mario

    2016-06-01

    State estimation in hydraulic actuators is a fundamental tool for the detection of faults and a valid alternative to the installation of sensors. Due to the hard nonlinearities that characterize hydraulic actuators, the performance of linear/linearization-based techniques for state estimation is strongly limited. In order to overcome these limits, this paper focuses on an alternative nonlinear estimation method based on the State-Dependent-Riccati-Equation (SDRE). The technique is able to fully take into account the system nonlinearities and the measurement noise. A fifth-order nonlinear model is derived and employed for the synthesis of the estimator. Simulations and experimental tests have been conducted and comparisons with the widely used Extended Kalman Filter (EKF) are illustrated. The results show the effectiveness of the SDRE-based technique for applications characterized by non-negligible nonlinearities such as dead zone and friction.

  5. Accurate liability estimation improves power in ascertained case-control studies.

    PubMed

    Weissbrod, Omer; Lippert, Christoph; Geiger, Dan; Heckerman, David

    2015-04-01

    Linear mixed models (LMMs) have emerged as the method of choice for confounded genome-wide association studies. However, the performance of LMMs in nonrandomly ascertained case-control studies deteriorates with increasing sample size. We propose a framework called LEAP (liability estimator as a phenotype; https://github.com/omerwe/LEAP) that tests for association with estimated latent values corresponding to severity of phenotype, and we demonstrate that this can lead to a substantial power increase.

  6. Robust and Accurate Vision-Based Pose Estimation Algorithm Based on Four Coplanar Feature Points

    PubMed Central

    Zhang, Zimiao; Zhang, Shihai; Li, Qiu

    2016-01-01

    Vision-based pose estimation is an important application of machine vision. Currently, analytical and iterative methods are used to solve the object pose. The analytical solutions generally take less computation time. However, the analytical solutions are extremely susceptible to noise. The iterative solutions minimize the distance error between feature points based on 2D image pixel coordinates. However, the non-linear optimization needs a good initial estimate of the true solution; otherwise it is more time consuming than the analytical solutions. Moreover, the image processing error grows rapidly as the measurement range increases, which leads to pose estimation errors. All the reasons mentioned above cause accuracy to decrease. To solve this problem, a novel pose estimation method based on four coplanar points is proposed. Firstly, the coordinates of the feature points are determined according to the linear constraints formed by the four points. The initial coordinates of the feature points acquired through the linear method are then optimized through an iterative method. Finally, the coordinate system of the object motion is established and a method is introduced to solve the object pose. Since the growing image processing error causes pose estimation errors as the measurement range increases, the coordinate system is used to decrease these errors. The proposed method is compared with two other existing methods through experiments. Experimental results demonstrate that the proposed method works efficiently and stably. PMID:27999338
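A standard building block for pose from four coplanar points is the homography between the object plane and the image, estimated with the Direct Linear Transform (DLT); this sketch illustrates only that step, not the paper's full coordinate-system method.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: homography mapping src -> dst (>= 4 point pairs).

    Each correspondence contributes two rows of the constraint A h = 0; the
    homography is the null-space vector of A, obtained from the SVD.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply a homography to a 2D point (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With exactly four non-degenerate correspondences and no noise, the DLT recovers the homography exactly (up to scale).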

  7. Accurate and efficient velocity estimation using Transmission matrix formalism based on the domain decomposition method

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong

    2017-03-01

    Full waveform inversion (FWI) has been regarded as an effective tool to build the velocity model for the subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are initial-model dependent, while this problem can be avoided by introducing the Transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from spatial-aperture- and frequency-bandwidth-limited seismic data using linear optimization methods. However, the full T-matrix inversion method (FTIM) is always required in order to estimate velocity perturbations, which is very time consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large scale models. However, the ITSP method is currently designed for smooth media, therefore the estimation results are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of the velocity estimation for models with large perturbations, as well as to guarantee the estimation accuracy. Numerical examples for smooth Gaussian ball models and a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM and the FTIM. The estimated velocity distributions, the relative errors and the elapsed time all demonstrate the validity of the proposed DDM.

  8. Accurate kinetic parameter estimation during progress curve analysis of systems with endogenous substrate production.

    PubMed

    Goudar, Chetan T

    2011-10-01

    We have identified an error in the published integral form of the modified Michaelis-Menten equation that accounts for endogenous substrate production. The correct solution is presented, and the errors in the substrate concentration, S, and in the kinetic parameters Vm, Km, and R resulting from the incorrect solution were characterized. The incorrect integral form resulted in substrate concentration errors as high as 50%, leading to 7-50% error in kinetic parameter estimates. To better reflect experimental scenarios, noise-containing substrate depletion data were analyzed with both the incorrect and correct integral equations. While both equations resulted in identical fits to substrate depletion data, the final estimates of Vm, Km, and R differed, and the Km and R estimates from the incorrect integral equation deviated substantially from the actual values. Another observation was that at R = 0, the incorrect integral equation reduced to the correct form of the Michaelis-Menten equation. We believe this combination of excellent fits to experimental data, albeit with incorrect kinetic parameter estimates, and the reduction to the Michaelis-Menten equation at R = 0 is primarily responsible for the error going unnoticed. However, the resulting error in kinetic parameter estimates will lead to incorrect biological interpretation, and we urge the use of the correct integral form presented in this study.
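The abstract does not reproduce the corrected integral form, but the underlying differential model with endogenous substrate production at constant rate R can be integrated numerically; a minimal Euler sketch, assuming the model dS/dt = R - Vm·S/(Km + S):

```python
def simulate_substrate(S0, Vm, Km, R, t_end, dt=1e-3):
    """Euler integration of dS/dt = R - Vm*S/(Km + S):
    Michaelis-Menten substrate depletion with constant endogenous production R."""
    S, t = S0, 0.0
    while t < t_end:
        S += dt * (R - Vm * S / (Km + S))
        t += dt
    return S
```

For R < Vm the system settles at the steady state S* = R·Km/(Vm - R), where production balances consumption; with Vm = 1, Km = 2, R = 0.5 the simulation converges to S* = 2 from any positive start.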

  9. Precision Pointing Control to and Accurate Target Estimation of a Non-Cooperative Vehicle

    NASA Technical Reports Server (NTRS)

    VanEepoel, John; Thienel, Julie; Sanner, Robert M.

    2006-01-01

    In 2004, NASA began investigating a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates in order to achieve capture by the proposed Hubble Robotic Vehicle (HRV), but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST. To generalize the situation, HST is the target vehicle and HRV is the chaser. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a control scheme. Non-cooperative in this context relates to the target vehicle no longer having the ability to maintain attitude control or transmit attitude knowledge.

  10. Alpha's standard error (ASE): an accurate and precise confidence interval estimate.

    PubMed

    Duhachek, Adam; Iacobucci, Dawn

    2004-10-01

    This research presents the inferential statistics for Cronbach's coefficient alpha on the basis of the standard statistical assumption of multivariate normality. The estimation of alpha's standard error (ASE) and confidence intervals are described, and the authors analytically and empirically investigate the effects of the components of these equations. The authors then demonstrate the superiority of this estimate compared with previous derivations of ASE in a separate Monte Carlo simulation. The authors also present a sampling error and test statistic for a test of independent sample alphas. They conclude with a recommendation that all alpha coefficients be reported in conjunction with standard error or confidence interval estimates and offer SAS and SPSS programming codes for easy implementation.
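Coefficient alpha itself is computed from the item covariance matrix, alpha = k/(k-1)·(1 - tr(C)/Σ C); the paper's ASE derivation builds on these same quantities. A minimal sketch of that standard formula (not the authors' SAS/SPSS code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's coefficient alpha for an (n_respondents, k_items) score matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    cov = np.cov(X, rowvar=False)   # k x k item covariance matrix
    return k / (k - 1) * (1.0 - np.trace(cov) / cov.sum())
```

Perfectly parallel items (every item identical up to the latent score) give alpha = 1, the theoretical maximum.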

  11. Accurate State Estimation and Tracking of a Non-Cooperative Target Vehicle

    NASA Technical Reports Server (NTRS)

    Thienel, Julie K.; Sanner, Robert M.

    2006-01-01

    Autonomous space rendezvous scenarios require knowledge of the target vehicle state in order to safely dock with the chaser vehicle. Ideally, the target vehicle state information is derived from telemetered data, or with the use of known tracking points on the target vehicle. However, if the target vehicle is non-cooperative and does not have the ability to maintain attitude control, or transmit attitude knowledge, the docking becomes more challenging. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a tracking control scheme. The approach is tested with the robotic servicing mission concept for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates, but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST.

  12. A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system

    PubMed Central

    Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob

    2013-01-01

    Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541

  13. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
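The BASH-table idea — binning keyed by a hash of the bin-index tuple, so only occupied bins consume memory regardless of dimensionality — can be sketched in a few lines; names are illustrative, not the authors' C++ implementation.

```python
from collections import Counter

def bash_density(points, bin_width):
    """Sparse multi-dimensional histogram density estimate using a hash table.

    Only occupied bins are stored, so memory scales with the number of occupied
    bins rather than exponentially with the number of dimensions.
    """
    counts = Counter()
    for p in points:
        counts[tuple(int(c // bin_width) for c in p)] += 1
    n = len(points)
    vol = bin_width ** len(points[0])   # volume of one bin
    return {key: c / (n * vol) for key, c in counts.items()}
```

The returned dict maps bin-index tuples to density values; densities times bin volume sum to 1, as for any histogram estimate.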

  14. Crop area estimation based on remotely-sensed data with an accurate but costly subsample

    NASA Technical Reports Server (NTRS)

    Gunst, R. F.

    1985-01-01

    Research activities conducted under the auspices of National Aeronautics and Space Administration Cooperative Agreement NCC 9-9 are discussed. During this contract period, research efforts were concentrated in two primary areas. The first is an investigation of the use of measurement error models as alternatives to least squares regression estimators of crop production or timber biomass. The second is the estimation of the mixing proportion of two-component mixture models. This report lists publications, technical reports, submitted manuscripts, and oral presentations generated by these research efforts. Possible areas of future research are mentioned.

  15. Precise Estimation of Cosmological Parameters Using a More Accurate Likelihood Function

    NASA Astrophysics Data System (ADS)

    Sato, Masanori; Ichiki, Kiyotomo; Takeuchi, Tsutomu T.

    2010-12-01

    The estimation of cosmological parameters from a given data set requires the construction of a likelihood function which, in general, has a complicated functional form. We adopt a Gaussian copula and construct a copula likelihood function for the convergence power spectrum from a weak lensing survey. We show that parameter estimation based on the Gaussian likelihood erroneously introduces a systematic shift in the confidence region, in particular for the dark energy equation-of-state parameter w. Thus, the copula likelihood should be used in future cosmological observations.

  16. Data Anonymization that Leads to the Most Accurate Estimates of Statistical Characteristics: Fuzzy-Motivated Approach

    PubMed Central

    Xiang, G.; Ferson, S.; Ginzburg, L.; Longpré, L.; Mayorga, E.; Kosheleva, O.

    2013-01-01

    To preserve privacy, the original data points (with exact values) are replaced by boxes containing each (inaccessible) data point. This privacy-motivated uncertainty leads to uncertainty in the statistical characteristics computed based on this data. In a previous paper, we described how to minimize this uncertainty under the assumption that we use the same standard statistical estimates for the desired characteristics. In this paper, we show that we can further decrease the resulting uncertainty if we allow fuzzy-motivated weighted estimates, and we explain how to optimally select the corresponding weights. PMID:25187183

  17. Spectral estimation from laser scanner data for accurate color rendering of objects

    NASA Astrophysics Data System (ADS)

    Baribeau, Rejean

    2002-06-01

    Estimation methods are studied for the recovery of the spectral reflectance across the visible range from sensing at just three discrete laser wavelengths. Methods based on principal component analysis and on spline interpolation are evaluated using the CIE94 color differences for several reference data sets. These include the Macbeth color checker, the OSA-UCS color charts, some artist pigments, and a collection of miscellaneous surface colors. The optimal three sampling wavelengths are also investigated. It is found that color can be estimated with average accuracy ΔE94 = 2.3 when the optimal wavelengths 455 nm, 540 nm, and 610 nm are used.
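The recovery of a full spectrum from three samples can be sketched as a least-squares fit of basis weights, assuming reflectances lie near the span of a low-dimensional (e.g. PCA) basis; the function name and the polynomial toy basis below are assumptions, not the paper's data.

```python
import numpy as np

def reconstruct_spectrum(basis, sample_idx, measurements):
    """Estimate a full reflectance spectrum from measurements at a few wavelengths.

    basis: (n_wavelengths, k) matrix of basis spectra, with k <= number of
    measurements. Fit the basis weights at the sampled wavelengths, then expand
    them over the full wavelength grid.
    """
    B_sampled = basis[sample_idx, :]                       # rows at measured wavelengths
    w, *_ = np.linalg.lstsq(B_sampled, measurements, rcond=None)
    return basis @ w
```

When the true spectrum lies exactly in the basis span and the sampled wavelengths are non-degenerate, the reconstruction is exact; real spectra only lie near the span, which is what produces the residual ΔE94 error.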

  18. Accurate radiocarbon age estimation using "early" measurements: a new approach to reconstructing the Paleolithic absolute chronology

    NASA Astrophysics Data System (ADS)

    Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru

    2014-05-01

    This paper presents new correction approaches for “early” radiocarbon ages to reconstruct the Paleolithic absolute chronology. In order to discuss the spatio-temporal distribution of the replacement of archaic humans, including Neanderthals in Europe, by modern humans, a massive dataset covering a wide area is needed. Today, several radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, all of these databases contain unreliable results that make interpretation of radiocarbon dates difficult. Most of these unreliable ages were published in the early days of radiocarbon analysis. In recent years, new analytical methods to determine highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, as new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove imperceptible contaminants and derive reliably accurate ages. In order to evaluate the reliability of “early” data, we investigated the differences and variability of radiocarbon ages under different pretreatments, and attempted to develop correction functions for assessing that reliability. The corrected ages can be expected to be more reliable and to be applicable to chronological research together with recent ages. Here, we introduce the methodological framework and archaeological applications.

  19. Accurate and unbiased estimation of power-law exponents from single-emitter blinking data.

    PubMed

    Hoogenboom, Jacob P; den Otter, Wouter K; Offerhaus, Herman L

    2006-11-28

    Single emitter blinking with a power-law distribution for the on and off times has been observed in a variety of systems including semiconductor nanocrystals, conjugated polymers, fluorescent proteins, and organic fluorophores. The origin of this behavior is still under debate. Reliable estimation of power exponents from experimental data is crucial in validating the various models under consideration. We derive a maximum likelihood estimator for power-law distributed data and analyze its accuracy as a function of data set size and power exponent, both analytically and numerically. Results are compared to least-squares fitting of the double logarithmically transformed probability density. We demonstrate that least-squares fitting introduces a severe bias in the estimation result and that the maximum likelihood procedure is superior in retrieving the correct exponent and reducing the statistical error. For a data set as small as 50 data points, the error margins of the maximum likelihood estimator are already below 7%, making it possible to quantify blinking behavior when data set size is limited, e.g., due to photobleaching.
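For the continuous power-law case the maximum likelihood exponent has a closed form, α̂ = 1 + n / Σ ln(x_i/x_min); a sketch with synthetic inverse-CDF samples (the paper's treatment of discrete blinking times may differ in detail):

```python
import numpy as np

def power_law_mle(x, x_min):
    """Maximum-likelihood exponent for a continuous power law p(x) ∝ x^-alpha, x >= x_min."""
    x = np.asarray(x, dtype=float)
    x = x[x >= x_min]
    return 1.0 + x.size / np.log(x / x_min).sum()

# Synthetic check: draw samples via the inverse CDF x = x_min * (1-u)^(-1/(alpha-1))
rng = np.random.default_rng(0)
alpha_true = 2.5
u = rng.random(5000)
samples = 1.0 * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))
alpha_hat = power_law_mle(samples, 1.0)
```

Unlike least-squares fitting of the log-log histogram, this estimator is asymptotically unbiased, with standard error roughly (α - 1)/√n.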

  20. How Accurate and Robust Are the Phylogenetic Estimates of Austronesian Language Relationships?

    PubMed Central

    Greenhill, Simon J.; Drummond, Alexei J.; Gray, Russell D.

    2010-01-01

    We recently used computational phylogenetic methods on lexical data to test between two scenarios for the peopling of the Pacific. Our analyses of lexical data supported a pulse-pause scenario of Pacific settlement in which the Austronesian speakers originated in Taiwan around 5,200 years ago and rapidly spread through the Pacific in a series of expansion pulses and settlement pauses. We claimed that there was high congruence between traditional language subgroups and those observed in the language phylogenies, and that the estimated age of the Austronesian expansion at 5,200 years ago was consistent with the archaeological evidence. However, the congruence between the language phylogenies and the evidence from historical linguistics was not quantitatively assessed using tree comparison metrics. The robustness of the divergence time estimates to different calibration points was also not investigated exhaustively. Here we address these limitations by using a systematic tree comparison metric to calculate the similarity between the Bayesian phylogenetic trees and the subgroups proposed by historical linguistics, and by re-estimating the age of the Austronesian expansion using only the most robust calibrations. The results show that the Austronesian language phylogenies are highly congruent with the traditional subgroupings, and the date estimates are robust even when calculated using a restricted set of historical calibrations. PMID:20224774

  1. Accurate estimation of influenza epidemics using Google search data via ARGO.

    PubMed

    Yang, Shihao; Santillana, Mauricio; Kou, S C

    2015-11-24

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
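The core of an ARGO-style model, autoregression on past incidence plus exogenous search-volume covariates, can be sketched with ordinary least squares; the actual ARGO uses regularization and dynamic retraining, so this is only a structural illustration with hypothetical names.

```python
import numpy as np

def fit_argo_like(y, exog, p=3):
    """Least-squares fit of y_t on an intercept, its own p lags, and exogenous
    covariates at time t (e.g. search-query volumes)."""
    rows, targets = [], []
    for t in range(p, len(y)):
        rows.append(np.concatenate(([1.0], y[t - p:t], exog[t])))
        targets.append(y[t])
    coef, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return coef

def predict_next(coef, y, exog_next, p=3):
    """One-step-ahead prediction from the last p observations and new covariates."""
    return float(np.concatenate(([1.0], y[-p:], exog_next)) @ coef)
```

On noise-free synthetic data generated by y_t = 0.6·y_{t-1} + 0.3·z_t, the fit recovers those coefficients (with zeros on the unused lags) and the one-step prediction is exact.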

  2. Do hand-held calorimeters provide reliable and accurate estimates of resting metabolic rate?

    PubMed

    Van Loan, Marta D

    2007-12-01

    This paper provides an overview of a new technique for indirect calorimetry and the assessment of resting metabolic rate. Information from the research literature includes findings on the reliability and validity of a new hand-held indirect calorimeter, as well as its use in clinical and field settings. Research findings to date are mixed. The MedGem instrument has provided more consistent results when compared to the Douglas bag method of measuring metabolic rate. The BodyGem instrument has been shown to be less accurate when compared to standard metabolic carts. Furthermore, when the BodyGem has been used with clinical patients or with undernourished individuals, the results have not been acceptable. Overall, there is not a large enough body of evidence to definitively support the use of these hand-held devices for the assessment of metabolic rate in a wide variety of clinical or research environments.

  4. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    NASA Astrophysics Data System (ADS)

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

    The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified in categories corresponding to the age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study, the age was predicted with a mean error of ~ 1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology predicts the sample age far more accurately than any previous report in the literature.
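
    The classify-with-cross-validation step above can be illustrated on entirely simulated data. The sketch below uses a simple nearest-centroid rule as a stand-in for the paper's linear discriminant analysis, with leave-one-out cross-validation over 2-day age classes; all "spectra" are invented.

```python
import math
import random

random.seed(3)

classes = list(range(0, 16, 2))     # age classes: 0, 2, ..., 14 days
# 6 simulated 20-channel "spectra" per class; mean shifts with age
samples = [(a, [0.1 * a + random.gauss(0, 0.05) for _ in range(20)])
           for a in classes for _ in range(6)]

def centroid(vectors):
    """Per-channel mean of a list of equal-length spectra."""
    return [sum(col) / len(col) for col in zip(*vectors)]

correct = 0
for i, (age_i, spec_i) in enumerate(samples):
    best = None
    for a in classes:
        # Leave-one-out: exclude the test sample from its class centroid
        train = [s for j, (age, s) in enumerate(samples) if j != i and age == a]
        d = math.dist(spec_i, centroid(train))
        if best is None or d < best[0]:
            best = (d, a)
    correct += best[1] == age_i
accuracy = correct / len(samples)
```

With well-separated class means the toy classifier reaches near-perfect accuracy; the paper's result is the non-trivial claim that real refrigerated-meat spectra separate this cleanly.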

  5. Accurate Estimation of Orientation Parameters of Uav Images Through Image Registration with Aerial Oblique Imagery

    NASA Astrophysics Data System (ADS)

    Onyango, F. A.; Nex, F.; Peter, M. S.; Jende, P.

    2017-05-01

    Unmanned Aerial Vehicles (UAVs) have gained popularity in acquiring geotagged, low cost and high resolution images. However, the images acquired by UAV-borne cameras often have poor georeferencing information, because of the low-quality on-board Global Navigation Satellite System (GNSS) receiver. In addition, lightweight UAVs have a limited payload capacity to host a high quality on-board Inertial Measurement Unit (IMU). Thus, orientation parameters of images acquired by UAV-borne cameras may not be very accurate. Poorly georeferenced UAV images can be correctly oriented using accurately oriented airborne images capturing a similar scene by finding correspondences between the images. This is not a trivial task considering the image pairs have huge variations in scale, perspective and illumination conditions. This paper presents a procedure to successfully register UAV and aerial oblique imagery. The proposed procedure implements the AKAZE interest operator for feature extraction in both images. Brute-force matching is used to find putative correspondences, and Lowe's ratio test (Lowe, 2004) is then applied to discard a significant number of wrong matches. In order to filter out the remaining mismatches, the putative correspondences are used in the computation of multiple homographies, which significantly reduce the number of outliers. In order to increase the number and improve the quality of correspondences, the impact of pre-processing the images using the Wallis filter (Wallis, 1974) is investigated. This paper presents test results for different scenarios, comparing the accuracy of the finally computed fundamental and essential matrices, which encode the orientation parameters of the UAV images with respect to the aerial images, against a manual registration.
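
    Lowe's ratio test mentioned above is simple to state: accept a putative match only when the nearest descriptor is clearly closer than the second-nearest. A minimal stdlib sketch, with descriptors as plain coordinate tuples (a real pipeline would use AKAZE descriptors and a brute-force matcher):

```python
import math

def ratio_test_matches(desc1, desc2, ratio=0.75):
    """Return (i, j) pairs passing Lowe's ratio test.

    desc2 must contain at least two descriptors so a second-nearest
    neighbour exists.
    """
    matches = []
    for i, d in enumerate(desc1):
        # Rank desc2 by Euclidean distance to this descriptor
        ranked = sorted((math.dist(d, e), j) for j, e in enumerate(desc2))
        (d1, j1), (d2, _) = ranked[0], ranked[1]
        if d1 < ratio * d2:          # nearest must clearly beat second-nearest
            matches.append((i, j1))
    return matches
```

For example, `ratio_test_matches([(0, 0)], [(0.1, 0), (5, 5), (9, 9)])` keeps the single unambiguous match, while a descriptor that is nearly equidistant to two candidates is rejected.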

  6. Techniques for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, Michael R.; Bland, Roger

    1999-01-01

    An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. The relative magnitude of equipment errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second. Typical maximum flow rates during the data-collection period averaged 750 cubic meters per second.
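
    The index-velocity calibration described above amounts to a linear rating between the meter's index velocity and the ADCP-derived mean channel velocity, with discharge then obtained by multiplying by the channel cross-sectional area. A sketch with made-up numbers (the real procedure also models area as a function of stage):

```python
# Synthetic calibration pairs: UVM index velocity vs. ADCP mean velocity (m/s)
index_v = [0.2, 0.5, 0.9, 1.3, 1.8]
mean_v  = [0.25, 0.55, 0.95, 1.35, 1.85]

# Ordinary least-squares line: mean_v = a + b * index_v
n = len(index_v)
mx = sum(index_v) / n
my = sum(mean_v) / n
b = (sum((x - mx) * (y - my) for x, y in zip(index_v, mean_v))
     / sum((x - mx) ** 2 for x in index_v))
a = my - b * mx

area = 400.0   # assumed channel cross-sectional area, m^2

def discharge(vi):
    """Net discharge (m^3/s) from an index-velocity reading."""
    return (a + b * vi) * area
```

With these synthetic pairs the rating is exactly `mean_v = index_v + 0.05`, so an index reading of 1.0 m/s yields a discharge of 420 m^3/s.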

  7. Accurate dynamic power estimation for CMOS combinational logic circuits with real gate delay model.

    PubMed

    Fadl, Omnia S; Abu-Elyazeed, Mohamed F; Abdelhalim, Mohamed B; Amer, Hassanein H; Madian, Ahmed H

    2016-01-01

    Dynamic power estimation is essential in designing VLSI circuits where many parameters are involved but the only circuit parameter that is related to the circuit operation is the nodes' toggle rate. This paper discusses a deterministic and fast method to estimate the dynamic power consumption for CMOS combinational logic circuits using gate-level descriptions based on the Logic Pictures concept to obtain the circuit nodes' toggle rate. The delay model for the logic gates is the real-delay model. To validate the results, the method is applied to several circuits and compared against exhaustive, as well as Monte Carlo, simulations. The proposed technique was shown to save up to 96% processing time compared to exhaustive simulation.
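
    The toggle-rate formulation reduces, per node, to the standard dynamic-power expression P = alpha * C * Vdd^2 * f summed over nodes (some formulations include a factor of 1/2, depending on how the toggle rate is defined). A minimal illustration with invented node capacitances and toggle rates:

```python
def dynamic_power(toggle_rates, node_caps, vdd, freq):
    """Total dynamic power: sum over nodes of alpha_i * C_i * Vdd^2 * f."""
    return sum(a * c for a, c in zip(toggle_rates, node_caps)) * vdd ** 2 * freq

# Three nodes with assumed toggle rates and capacitances, at 1.2 V and 100 MHz
p = dynamic_power([0.5, 0.1, 0.25], [10e-15, 25e-15, 8e-15], 1.2, 100e6)
```

Here the effective switched capacitance is 9.5 fF, giving roughly 1.37 microwatts; the paper's contribution is obtaining the `toggle_rates` vector deterministically from gate-level descriptions rather than by random simulation.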

  8. Accurate group velocity estimation for unmanned aerial vehicle-based acoustic atmospheric tomography.

    PubMed

    Rogers, Kevin J; Finn, Anthony

    2017-02-01

    Acoustic atmospheric tomography calculates temperature and wind velocity fields in a slice or volume of atmosphere based on travel time estimates between strategically located sources and receivers. The technique discussed in this paper uses the natural acoustic signature of an unmanned aerial vehicle as it overflies an array of microphones on the ground. The sound emitted by the aircraft is recorded on-board and by the ground microphones. The group velocities of the intersecting sound rays are then derived by comparing these measurements. Tomographic inversion is used to estimate the temperature and wind fields from the group velocity measurements. This paper describes a technique for deriving travel time (and hence group velocity) with an accuracy of 0.1% using these assets. This is shown to be sufficient to obtain highly plausible tomographic inversion results that correlate well with independent SODAR measurements.

  9. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Porter, Albert A.

    1990-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
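
    The idea of bounding reliability without estimating Beta can be illustrated with the textbook zero-failure bound: if n units each survive a test of length t0, a lower confidence bound on reliability at time t, for a given shape Beta, is R_L(t) = (1-C)^((t/t0)^Beta / n). Scanning a Beta range and keeping the worst case gives a Beta-free bound; the paper proves such minima exist and are unique under general conditions, whereas this sketch simply evaluates a grid.

```python
def rl_zero_failures(t, t0, n, conf, beta):
    """Lower confidence bound on reliability at time t, assuming zero
    failures among n units each tested to time t0, two-parameter Weibull
    with shape beta (textbook zero-failure bound)."""
    return (1.0 - conf) ** ((t / t0) ** beta / n)

betas = [0.5 + 0.1 * k for k in range(46)]    # beta grid over [0.5, 5.0]
bounds = [rl_zero_failures(50.0, 100.0, 30, 0.95, b) for b in betas]
conservative = min(bounds)                     # valid whatever beta truly is
```

For t below t0 the bound is increasing in Beta, so the minimum sits at the low end of the grid; the conservative bound here is still above 0.9, illustrating why the method is useful for no-failure data.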

  10. Multiple candidates and multiple constraints based accurate depth estimation for multi-view stereo

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Zhou, Fugen; Xue, Bindang

    2017-02-01

    In this paper, we propose a depth estimation method for multi-view image sequences. To enhance the accuracy of dense matching and reduce inaccurate matches produced by imprecise feature descriptions, we select multiple matching points to build candidate matching sets. Then we compute an optimal depth from a candidate matching set which satisfies multiple constraints (epipolar constraint, similarity constraint and depth consistency constraint). To further increase the accuracy of depth estimation, the depth consistency constraint of neighboring pixels is used to filter out inaccurate matches. On this basis, in order to obtain a more complete depth map, depth diffusion is performed using the neighboring pixels' depth consistency constraint. Through experiments on the benchmark datasets for multiple view stereo, we demonstrate the superiority of the proposed method over state-of-the-art methods in terms of accuracy.

  11. Are satellite based rainfall estimates accurate enough for crop modelling under Sahelian climate?

    NASA Astrophysics Data System (ADS)

    Ramarohetra, J.; Sultan, B.

    2012-04-01

    Agriculture is considered the most climate-dependent human activity. In West Africa, and especially in the Sudano-Sahelian zone, rain-fed agriculture - which represents 93% of cultivated areas and is the means of support of 70% of the active population - is highly vulnerable to precipitation variability. To better understand and anticipate climate impacts on agriculture, crop models - which estimate crop yield from climate information (e.g. rainfall, temperature, insolation, humidity) - have been developed. These crop models are useful (i) in ex ante analyses quantifying the impact on yields of implementing different strategies - crop management (e.g. choice of varieties, sowing date), crop insurance or medium-range weather forecasts - (ii) for early warning systems, and (iii) to assess future food security. Yet the successful application of these models depends on the accuracy of their climatic drivers. In the Sudano-Sahelian zone, the quality of precipitation estimates is therefore a key factor in understanding and anticipating climate impacts on agriculture via crop modelling and yield estimation. Different kinds of precipitation estimates can be used. Ground measurements have long time series but an insufficient network density, a large proportion of missing values, delays in reporting, and limited availability. An answer to these shortcomings may lie in the field of remote sensing, which provides satellite-based precipitation estimates. However, satellite-based rainfall estimates (SRFE) are not a direct measurement but rather an estimation of precipitation. Used as input to crop models, their accuracy determines the performance of the simulated yields; hence, SRFE require validation. The SARRAH crop model is used to model three different varieties of pearl millet (HKP, MTDO, Souna3) in a square degree centred on 13.5°N and 2.5°E, in Niger.
Eight satellite-based rainfall daily products (PERSIANN, CMORPH, TRMM 3b42-RT, GSMAP MKV+, GPCP, TRMM 3b42v6, RFEv2 and

  12. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Porter, Albert A.

    1991-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.

  13. Toward a more accurate estimate of the prevalence of hepatitis C in the United States.

    PubMed

    Edlin, Brian R; Eckhardt, Benjamin J; Shu, Marla A; Holmberg, Scott D; Swan, Tracy

    2015-11-01

    Data from the 2003-2010 National Health and Nutrition Examination Survey (NHANES) indicate that about 3.6 million people in the United States have antibodies to the hepatitis C virus, of whom 2.7 million are currently infected. NHANES, however, excludes several high-risk populations from its sampling frame, including people who are incarcerated, homeless, or hospitalized; nursing home residents; active-duty military personnel; and people living on Indian reservations. We undertook a systematic review of peer-reviewed literature and sought out unpublished presentations and data to estimate the prevalence of hepatitis C in these excluded populations and in turn improve the estimate of the number of people with hepatitis C in the United States. The available data do not support a precise result, but we estimated that 1.0 million (range 0.4 million-1.8 million) persons excluded from the NHANES sampling frame have hepatitis C virus antibody, including 500,000 incarcerated people, 220,000 homeless people, 120,000 people living on Indian reservations, and 75,000 people in hospitals. Most are men. An estimated 0.8 million (range 0.3 million-1.5 million) are currently infected. Several additional sources of underestimation, including nonresponse bias and the underrepresentation of other groups at increased risk of hepatitis C that are not excluded from the NHANES sampling frame, were not addressed in this study. The number of US residents who have been infected with hepatitis C is unknown but is probably at least 4.6 million (range 3.4 million-6.0 million), and of these, at least 3.5 million (range 2.5 million-4.7 million) are currently infected; additional sources of potential underestimation suggest that the true prevalence could well be higher. © 2015 by the American Association for the Study of Liver Diseases.

  14. Accurate distortion estimation and optimal bandwidth allocation for scalable H.264 video transmission over MIMO systems.

    PubMed

    Jubran, Mohammad K; Bansal, Manu; Kondi, Lisimachos P; Grover, Rohan

    2009-01-01

    In this paper, we propose an optimal strategy for the transmission of scalable video over packet-based multiple-input multiple-output (MIMO) systems. The scalable extension of H.264/AVC that provides a combined temporal, quality and spatial scalability is used. For given channel conditions, we develop a method for the estimation of the distortion of the received video and propose different error concealment schemes. We show the accuracy of our distortion estimation algorithm in comparison with simulated wireless video transmission with packet errors. In the proposed MIMO system, we employ orthogonal space-time block codes (O-STBC) that guarantee independent transmission of different symbols within the block code. In the proposed constrained bandwidth allocation framework, we use the estimated end-to-end decoder distortion to optimally select the application layer parameters, i.e., quantization parameter (QP) and group of pictures (GOP) size, and physical layer parameters, i.e., rate-compatible punctured turbo (RCPT) code rate and symbol constellation. Results show the substantial performance gain by using different symbol constellations across the scalable layers as compared to a fixed constellation.

  15. A practical way to estimate retail tobacco sales violation rates more accurately.

    PubMed

    Levinson, Arnold H; Patnaik, Jennifer L

    2013-11-01

    U.S. states annually estimate retailer propensity to sell adolescents cigarettes, which is a violation of law, by staging a single purchase attempt among a random sample of tobacco businesses. The accuracy of single-visit estimates is unknown. We examined this question using a novel test-retest protocol. Supervised minors attempted to purchase cigarettes at all retail tobacco businesses located in 3 Colorado counties. The attempts observed federal standards: Minors were aged 15-16 years, were nonsmokers, were free of visible tattoos and piercings, and were allowed to enter stores alone or in pairs, to purchase a small item while asking for cigarettes, and to show or not show genuine identification (ID, e.g., driver's license). Unlike federal standards, stores received a second purchase attempt within a few days unless minors were firmly told not to return. Separate violation rates were calculated for first visits, second visits, and either visit. Eleven minors attempted to purchase cigarettes 1,079 times from 671 retail businesses. One sixth of first visits (16.8%) resulted in a violation; the rate was similar for second visits (15.7%). Considering either visit, 25.3% of businesses failed the test. Factors predictive of violation were whether clerks asked for ID, whether the clerks closely examined IDs, and whether minors included snacks or soft drinks in cigarette purchase attempts. A test-retest protocol for estimating underage cigarette sales detected half again as many businesses in violation as the federally approved one-test protocol. Federal policy makers should consider using the test-retest protocol to increase accuracy and awareness of widespread adolescent access to cigarettes through retail businesses.
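
    The arithmetic behind the headline numbers is easy to reproduce. With synthetic per-store outcomes chosen to roughly match the reported rates (671 stores; the exact overlap counts below are invented for illustration), the either-visit rate necessarily exceeds either single-visit rate:

```python
# (sold_on_visit_1, sold_on_visit_2) per store; counts are illustrative only
stores = ([(False, False)] * 501 + [(True, False)] * 65
          + [(False, True)] * 57 + [(True, True)] * 48)

n = len(stores)                                    # 671 stores
first  = sum(1 for a, _ in stores if a) / n        # ~16.8% failed visit 1
second = sum(1 for _, b in stores if b) / n        # ~15.7% failed visit 2
either = sum(1 for a, b in stores if a or b) / n   # ~25.3% failed either visit
```

Because the two visits only partially overlap in which stores they catch, the either-visit rate (~25.3%) is roughly half again the single-visit rate, which is the paper's central point.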

  16. A Simple and Accurate Equation for Peak Capacity Estimation in Two Dimensional Liquid Chromatography

    PubMed Central

    Li, Xiaoping; Stoll, Dwight R.; Carr, Peter W.

    2009-01-01

    Two dimensional liquid chromatography (2DLC) is a very powerful way to greatly increase the resolving power and overall peak capacity of liquid chromatography. The traditional “product rule” for peak capacity usually overestimates the true resolving power due to neglect of the often quite severe under-sampling effect and thus provides poor guidance for optimizing the separation and biases comparisons to optimized one dimensional gradient liquid chromatography. Here we derive a simple yet accurate equation for the effective two dimensional peak capacity that incorporates a correction for under-sampling of the first dimension. The results show that not only is the speed of the second dimension separation important for reducing the overall analysis time, but it plays a vital role in determining the overall peak capacity when the first dimension is under-sampled. A surprising subsidiary finding is that for relatively short 2DLC separations (much less than a couple of hours), the first dimension peak capacity is far less important than is commonly believed and need not be highly optimized, for example through use of long columns or very small particles. PMID:19053226
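
    In its widely cited form, the corrected equation is the product rule divided by the first-dimension under-sampling factor: n'2D = n1 * n2 / sqrt(1 + 3.35 * (ts / 1w)^2), where ts is the second-dimension sampling time and 1w the first-dimension (4-sigma) peak width. A sketch of that form (the exact constant and width convention should be checked against the paper):

```python
import math

def effective_2d_peak_capacity(n1, n2, ts, w1):
    """Product-rule peak capacity corrected for first-dimension under-sampling.

    n1, n2: first- and second-dimension peak capacities
    ts:     second-dimension cycle (sampling) time
    w1:     first-dimension peak width (same time units as ts)
    """
    beta = math.sqrt(1.0 + 3.35 * (ts / w1) ** 2)   # under-sampling factor
    return n1 * n2 / beta

ideal = effective_2d_peak_capacity(50, 20, ts=0.0, w1=10.0)    # no penalty
real  = effective_2d_peak_capacity(50, 20, ts=20.0, w1=10.0)   # coarse sampling
```

With instantaneous sampling the full product (1000) is realized; sampling only once per two first-dimension peak widths cuts the effective capacity to roughly a quarter of that, which is why fast second-dimension separations matter so much.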

  17. The Remote Food Photography Method Accurately Estimates Dry Powdered Foods-The Source of Calories for Many Infants.

    PubMed

    Duhé, Abby F; Gilmore, L Anne; Burton, Jeffrey H; Martin, Corby K; Redman, Leanne M

    2016-07-01

    Infant formula is a major source of nutrition for infants, with more than half of all infants in the United States consuming infant formula exclusively or in combination with breast milk. The energy in infant powdered formula is derived from the powder and not the water, making it necessary to develop methods that can accurately estimate the amount of powder used before reconstitution. Our aim was to assess the use of the Remote Food Photography Method to accurately estimate the weight of infant powdered formula before reconstitution among the standard serving sizes. For each serving size (1 scoop, 2 scoops, 3 scoops, and 4 scoops), a set of seven test bottles and photographs was prepared as follows: the manufacturer's recommended gram weight of powdered formula for the respective serving size; three bottles and photographs containing 15%, 10%, and 5% less powdered formula than recommended; and three bottles and photographs containing 5%, 10%, and 15% more powdered formula than recommended (n=28). Ratio estimates of the test photographs as compared to standard photographs were obtained using standard Remote Food Photography Method analysis procedures. The ratio estimates and the US Department of Agriculture data tables were used to generate food and nutrient information to provide the Remote Food Photography Method estimates. Equivalence testing using the two one-sided t tests approach was used to determine equivalence between the actual gram weights and the Remote Food Photography Method estimated weights for all samples, within each serving size, and within underprepared and overprepared bottles. For all bottles, the gram weights estimated by the Remote Food Photography Method were within 5% equivalence bounds with a slight underestimation of 0.05 g (90% CI -0.49 to 0.40; P<0.001) and mean percent error ranging between 0.32% and 1.58% among the four serving sizes. The maximum observed mean error was an overestimation of 1.58% of powdered formula by the Remote

  18. Accurate Estimation of Expression Levels of Homologous Genes in RNA-seq Experiments

    NASA Astrophysics Data System (ADS)

    Paşaniuc, Bogdan; Zaitlen, Noah; Halperin, Eran

    Next generation high throughput sequencing (NGS) is poised to replace array based technologies as the experiment of choice for measuring RNA expression levels. Several groups have demonstrated the power of this new approach (RNA-seq), making significant and novel contributions and simultaneously proposing methodologies for the analysis of RNA-seq data. In a typical experiment, millions of short sequences (reads) are sampled from RNA extracts and mapped back to a reference genome. The number of reads mapping to each gene is used as proxy for its corresponding RNA concentration. A significant challenge in analyzing RNA expression of homologous genes is the large fraction of the reads that map to multiple locations in the reference genome. Currently, these reads are either dropped from the analysis, or a naïve algorithm is used to estimate their underlying distribution. In this work, we present a rigorous alternative for handling the reads generated in an RNA-seq experiment within a probabilistic model for RNA-seq data; we develop maximum likelihood based methods for estimating the model parameters. In contrast to previous methods, our model takes into account the fact that the DNA of the sequenced individual is not a perfect copy of the reference sequence. We show with both simulated and real RNA-seq data that our new method improves the accuracy and power of RNA-seq experiments.

  19. Exploiting a constellation of satellite soil moisture sensors for accurate rainfall estimation

    NASA Astrophysics Data System (ADS)

    Tarpanelli, A.; Massari, C.; Ciabatta, L.; Filippucci, P.; Amarnath, G.; Brocca, L.

    2017-10-01

    A merging procedure is applied to five daily rainfall estimates achieved via SM2RAIN applied to the soil moisture products obtained by the Advanced SCATterometer, the Advanced Microwave Scanning Radiometer 2, the Soil Moisture Active and Passive mission, the Soil Moisture and Ocean Salinity mission and backscattering observations of RapidScat. The precipitation estimates are evaluated against dense ground networks in the period ranging from April to December, 2015, in India and in Italy, at 0.25°/daily spatial/temporal resolution. The merged product derived by combining the different SM2RAIN rainfall products shows better results in terms of statistical and categorical metrics with respect to the use of the single products. Good agreement with the reference ground observations is obtained, with median correlations equal to 0.65 and 0.77 in India and in Italy, respectively. The merged dataset is found to slightly outperform the IMERG product of the Global Precipitation Measurement mission, underlining the large potential of the proposed approach.

  20. Accurate estimation of expression levels of homologous genes in RNA-seq experiments.

    PubMed

    Paşaniuc, Bogdan; Zaitlen, Noah; Halperin, Eran

    2011-03-01

    Next generation high-throughput sequencing (NGS) is poised to replace array-based technologies as the experiment of choice for measuring RNA expression levels. Several groups have demonstrated the power of this new approach (RNA-seq), making significant and novel contributions and simultaneously proposing methodologies for the analysis of RNA-seq data. In a typical experiment, millions of short sequences (reads) are sampled from RNA extracts and mapped back to a reference genome. The number of reads mapping to each gene is used as proxy for its corresponding RNA concentration. A significant challenge in analyzing RNA expression of homologous genes is the large fraction of the reads that map to multiple locations in the reference genome. Currently, these reads are either dropped from the analysis, or a naive algorithm is used to estimate their underlying distribution. In this work, we present a rigorous alternative for handling the reads generated in an RNA-seq experiment within a probabilistic model for RNA-seq data; we develop maximum likelihood-based methods for estimating the model parameters. In contrast to previous methods, our model takes into account the fact that the DNA of the sequenced individual is not a perfect copy of the reference sequence. We show with both simulated and real RNA-seq data that our new method improves the accuracy and power of RNA-seq experiments.
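
    The core difficulty — reads mapping to several homologous genes — is classically handled with an EM-style fractional assignment, which the probabilistic model above generalizes (the full method additionally models mismatches against the reference). A toy sketch with five reads and two genes:

```python
# Candidate genes per read: three reads touch gene 0, two touch gene 1,
# and two are ambiguous between the two homologs (toy data).
reads = [{0}, {0}, {0, 1}, {0, 1}, {1}]
theta = [0.5, 0.5]                         # initial abundance guess

for _ in range(50):
    counts = [0.0, 0.0]
    for cand in reads:
        z = sum(theta[g] for g in cand)
        for g in cand:
            counts[g] += theta[g] / z      # E-step: fractional read assignment
    total = sum(counts)
    theta = [c / total for c in counts]    # M-step: re-estimate abundances
```

The iteration converges to theta = (2/3, 1/3): the two ambiguous reads end up split in proportion to the abundances implied by the uniquely mapping reads, rather than being dropped or split naively 50/50.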

  1. Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    NASA Astrophysics Data System (ADS)

    Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    The success of dental implant-supported prosthesis is directly linked to the accuracy obtained during implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. Hereto, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 tridimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67+/-34μm and 108μm, and angular misfits of 0.15+/-0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implants' pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.

  2. Assessing estimates of radiative forcing for solar geoengineering starts with accurate aerosol radiative properties

    NASA Astrophysics Data System (ADS)

    Dykema, J. A.; Keith, D.; Keutsch, F. N.

    2016-12-01

    The deliberate modification of Earth's albedo as a complement to mitigation in order to slow climate change brings with it a range of risks. A range of different approaches have been studied, including the injection of aerosol particles into the stratosphere to decrease solar energy input into the climate system. Key side effects from this approach include ozone loss and radiative heating. Both of these side effects may produce dynamical changes with further consequences for stratospheric and tropospheric climate. Studies of past volcanic eruptions suggest that sulfate aerosol injection may be capable of achieving a compensating radiative forcing of -1 W m-2 or more. It is also expected that such injection of sulfate aerosols will result in loss of stratospheric ozone and significant infrared heating. The problems resulting from sulfate aerosols have motivated the investigation of alternative materials, including high refractive index solid materials. High refractive index materials have the potential to scatter more efficiently per unit mass, leading to a reduction in surface area for heterogeneous chemistry, and, depending on details of absorption, less radiative heating. Fundamentally, assessing these trade-offs requires accurate knowledge of the complex refractive index of materials being considered over the full range of wavelengths relevant to atmospheric radiative transfer, that is, from ultraviolet to far-infrared. Our survey of the relevant literature finds that such measurements are not available for all materials of interest at all wavelengths. We utilize a method developed in astrophysics to fill in spectral gaps, and find that some materials may heat the stratosphere substantially more than was found in previous work. Stratospheric heating can warm the tropical tropopause layer, increasing the flux of water vapor into the stratosphere, with further consequences for atmospheric composition and radiative forcing.
We analyze this consequence

  3. Ensemble predictive model for more accurate soil organic carbon spectroscopic estimation

    NASA Astrophysics Data System (ADS)

    Vašát, Radim; Kodešová, Radka; Borůvka, Luboš

    2017-07-01

    A myriad of signal pre-processing strategies and multivariate calibration techniques has been explored in an attempt to improve the spectroscopic prediction of soil organic carbon (SOC) over the last few decades. Coming up with a novel, more powerful and more accurate predictive approach has therefore become a challenging task. One promising route is to combine several individual predictions into a single final one (following ensemble learning theory). As this approach performs best when combining inherently different predictive algorithms calibrated with structurally different predictor variables, we tested predictors of two different kinds: 1) reflectance values (or transforms) at each wavelength and 2) absorption feature parameters. Consequently, we applied four different calibration techniques, two per each type of predictors: a) partial least squares regression and support vector machines for type 1, and b) multiple linear regression and random forest for type 2. The weights to be assigned to individual predictions within the ensemble model (constructed as a weighted average) were determined by an automated procedure that ensured the best solution among all possibilities was selected. The approach was tested on soil samples taken from the surface horizons of four sites differing in their prevailing soil units. By employing the ensemble predictive model, the prediction accuracy of SOC improved at all four sites. The coefficient of determination in cross-validation (R2cv) increased from 0.849, 0.611, 0.811 and 0.644 (the best individual predictions) to 0.864, 0.650, 0.824 and 0.698 for Sites 1, 2, 3 and 4, respectively. Generally, the ensemble model reduced the maximal deviations of predicted vs. observed values relative to the individual predictions, so the correlation cloud became thinner, as desired.
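
    The ensemble's weighted average, with weights chosen by exhaustively scanning candidates, can be sketched on synthetic data for two predictors (real SOC predictions are replaced here by noisy copies of a random target):

```python
import random

random.seed(1)
y  = [random.gauss(0, 1) for _ in range(200)]          # synthetic target (SOC)
p1 = [v + random.gauss(0, 0.3) for v in y]             # predictor 1 (less noisy)
p2 = [v + random.gauss(0, 0.5) for v in y]             # predictor 2 (noisier)

def rmse(pred):
    return (sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)) ** 0.5

# Scan all weights on a grid; the blend is w*p1 + (1-w)*p2
best_w, best_rmse = 0.0, float("inf")
for k in range(101):
    w = k / 100
    blend = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
    e = rmse(blend)
    if e < best_rmse:
        best_w, best_rmse = w, e
```

Because the grid includes w = 0 and w = 1, the blended RMSE can never be worse than the better individual predictor, mirroring the abstract's observation that the ensemble improved accuracy at every site.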

  4. Epoch length to accurately estimate the amplitude of interference EMG is likely the result of unavoidable amplitude cancellation

    PubMed Central

    Keenan, Kevin G.; Valero-Cuevas, Francisco J.

    2008-01-01

    Researchers and clinicians routinely rely on interference electromyograms (EMGs) to estimate muscle forces and command signals in the neuromuscular system (e.g., amplitude, timing, and frequency content). The amplitude cancellation intrinsic to interference EMG, however, raises important questions about how to optimize these estimates. For example, what should the length of the epoch (time window) be to average an EMG signal to reliably estimate muscle forces and command signals? Shorter epochs are most practical, and significant reductions in epoch have been reported with high-pass filtering and whitening. Given that this processing attenuates power at frequencies of interest (< 250 Hz), however, it is unclear how it improves the extraction of physiologically-relevant information. We examined the influence of amplitude cancellation and high-pass filtering on the epoch necessary to accurately estimate the “true” average EMG amplitude calculated from a 28 s EMG trace (EMGref) during simulated constant isometric conditions. Monte Carlo iterations of a motor-unit model simulating 28 s of surface EMG produced 245 simulations under 2 conditions: with and without amplitude cancellation. For each simulation, we calculated the epoch necessary to generate average full-wave rectified EMG amplitudes that settled within 5% of EMGref. For the no-cancellation EMG, the necessary epochs were short (e.g., < 100 ms). For the more realistic interference EMG (i.e., cancellation condition), epochs shortened dramatically after using high-pass filter cutoffs above 250 Hz, producing epochs short enough to be practical (i.e., < 500 ms). We conclude that the need to use long epochs to accurately estimate EMG amplitude is likely the result of unavoidable amplitude cancellation, which helps to clarify why high-pass filtering (> 250 Hz) improves EMG estimates. PMID:19081815
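The notion of an epoch "settling within 5% of EMGref" can be made concrete with a short numpy sketch. This is an illustrative reading of the criterion, not the authors' exact procedure: it finds the smallest window length whose rectified moving averages all stay within the tolerance of the whole-trace mean.

```python
import numpy as np

def settling_epoch(x, fs, ref=None, tol=0.05):
    """Smallest epoch (in seconds) such that the average full-wave
    rectified amplitude of every length-n window stays within `tol`
    of the reference amplitude `ref` (default: whole-trace mean)."""
    r = np.abs(x)                                # full-wave rectification
    if ref is None:
        ref = r.mean()
    c = np.concatenate(([0.0], np.cumsum(r)))    # prefix sums for window means
    for n in range(1, len(r) + 1):
        means = (c[n:] - c[:-n]) / n             # all length-n window averages
        if np.all(np.abs(means - ref) <= tol * ref):
            return n / fs
    return None
```

A noisier signal needs a longer epoch before its window averages settle, which mirrors the paper's finding that amplitude cancellation (which whitens less of the signal) forces longer averaging windows.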

  5. An Energy-Efficient Strategy for Accurate Distance Estimation in Wireless Sensor Networks

    PubMed Central

    Tarrío, Paula; Bernardos, Ana M.; Casar, José R.

    2012-01-01

    In line with recent research efforts made to conceive energy saving protocols and algorithms and power sensitive network architectures, in this paper we propose a transmission strategy to minimize the energy consumption in a sensor network when using a localization technique based on the measurement of the strength (RSS) or the time of arrival (TOA) of the received signal. In particular, we find the transmission power and the packet transmission rate that jointly minimize the total consumed energy, while ensuring at the same time a desired accuracy in the RSS or TOA measurements. We also propose some corrections to these theoretical results to take into account the effects of shadowing and packet loss in the propagation channel. The proposed strategy is shown to be effective in realistic scenarios providing energy savings with respect to other transmission strategies, and also guaranteeing a given accuracy in the distance estimations, which will serve to guarantee a desired accuracy in the localization result. PMID:23202218
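RSS-based ranging of the kind this strategy supports is commonly built on the log-distance path-loss model. A hedged sketch under that assumption (the reference power `p0_dbm`, reference distance `d0`, and path-loss exponent `n` are illustrative values, not parameters from the paper):

```python
import math

def rss_to_distance(rss_dbm, p0_dbm=-40.0, d0=1.0, n=2.7):
    """Invert the log-distance path-loss model
        RSS(d) = P0 - 10*n*log10(d / d0)
    to estimate distance:  d = d0 * 10**((P0 - RSS) / (10*n))."""
    return d0 * 10 ** ((p0_dbm - rss_dbm) / (10 * n))
```

Because the model is exponential in the measured power, a fixed error in dBm translates into a multiplicative distance error, which is why the paper's trade-off between transmission power, packet rate, and ranging accuracy matters.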

  6. [Research on maize multispectral image accurate segmentation and chlorophyll index estimation].

    PubMed

    Wu, Qian; Sun, Hong; Li, Min-zan; Song, Yuan-yuan; Zhang, Yan-e

    2015-01-01

In order to rapidly acquire maize growing information in the field, a non-destructive method of maize chlorophyll content index measurement was developed based on multi-spectral imaging and image processing technology. The experiment was conducted at Yangling in Shaanxi province of China; the crop was Zheng-dan 958 planted in an experiment field of about 1 000 m × 600 m. Firstly, a 2-CCD multi-spectral image monitoring system was used to acquire the canopy images. The system was based on a dichroic prism, allowing precise separation of the visible (Blue (B), Green (G), Red (R): 400-700 nm) and near-infrared (NIR, 760-1 000 nm) bands. The multispectral images were output as RGB and NIR images, with the system fixed vertically above the ground at a distance of 2 m and an angular field of 50°. The SPAD index of each sample was measured synchronously to indicate the chlorophyll content index. Secondly, after image smoothing using an adaptive smooth filtering algorithm, the NIR maize image was selected to segment the maize leaves from the background, because the gray histogram showed a large difference between plant and soil background. The NIR image segmentation algorithm followed two steps, preliminary and accurate segmentation: (1) The results of the OTSU image segmentation method and the variable threshold algorithm were compared, revealing the latter to be better at separating corn plants from weeds. As a result, the variable threshold algorithm based on local statistics was selected for the preliminary image segmentation, and dilation and erosion were used to optimize the segmented image. (2) The region labeling algorithm was used to segment corn plants from soil and weed background with an accuracy of 95.59%. The multi-spectral image of the maize canopy was then accurately segmented in the R, G and B bands separately. Thirdly, image parameters were extracted from the segmented visible and NIR images. The average gray
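A "variable threshold based on local statistics" can be sketched with an integral-image box mean: each pixel is compared against the mean of its neighborhood rather than a single global (OTSU-style) threshold. This is an illustrative numpy version; the window size and offset are assumptions, not the paper's parameters.

```python
import numpy as np

def box_mean(img, k):
    """Local mean over a (2k+1) x (2k+1) window, computed with a
    2-D integral image (edge padding keeps the output the same shape)."""
    p = np.pad(img.astype(float), k, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))              # leading zero row/column
    n = 2 * k + 1
    return (c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]) / (n * n)

def variable_threshold(img, k=7, offset=0.0):
    """Foreground mask: pixels brighter than their local mean plus an offset."""
    return img > box_mean(img, k) + offset
```

Unlike a global threshold, this adapts to uneven illumination: a plant pixel is kept because it is bright relative to its surroundings, not relative to the whole frame.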

  7. Error Estimation And Accurate Mapping Based ALE Formulation For 3D Simulation Of Friction Stir Welding

    NASA Astrophysics Data System (ADS)

    Guerdoux, Simon; Fourment, Lionel

    2007-05-01

    An Arbitrary Lagrangian Eulerian (ALE) formulation is developed to simulate the different stages of the Friction Stir Welding (FSW) process with the FORGE3® F.E. software. A splitting method is utilized: a) the material velocity/pressure and temperature fields are calculated, b) the mesh velocity is derived from the domain boundary evolution and an adaptive refinement criterion provided by error estimation, c) P1 and P0 variables are remapped. Different velocity computation and remap techniques have been investigated, providing significant improvement with respect to more standard approaches. The proposed ALE formulation is applied to FSW simulation. Steady state welding, but also transient phases are simulated, showing good robustness and accuracy of the developed formulation. Friction parameters are identified for an Eulerian steady state simulation by comparison with experimental results. Void formation can be simulated. Simulations of the transient plunge and welding phases help to better understand the deposition process that occurs at the trailing edge of the probe. Flexibility and robustness of the model finally allows investigating the influence of new tooling designs on the deposition process.

  8. Accurate estimation of normal incidence absorption coefficients with confidence intervals using a scanning laser Doppler vibrometer

    NASA Astrophysics Data System (ADS)

    Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick

    2009-06-01

When optical measurements of the sound field inside a glass tube, near the material under test, are used to estimate the reflection and absorption coefficients, confidence intervals can be determined along with these acoustical parameters. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results obtained with this technique is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible, leading to knowledge of the confidence intervals. A multi-sine constructed on the resonance frequencies of the test tube proves to be a very good alternative to the traditional periodic chirp: it offers the ability to obtain data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine compared to the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results from testing two building materials (an acoustic ceiling tile and linoleum) are presented and compared to supplier data.
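A multi-sine of the kind described, with components placed at chosen resonance frequencies, is straightforward to synthesize. A minimal sketch (random phases are one common way to keep the crest factor down; the frequencies in the test below are arbitrary examples, not the tube's actual resonances):

```python
import numpy as np

def multisine(freqs, fs, duration, rng=None):
    """Sum of unit-amplitude sines at the given frequencies, each with a
    random phase, sampled at rate fs for the given duration (seconds)."""
    if rng is None:
        rng = np.random.default_rng(0)
    t = np.arange(int(fs * duration)) / fs
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(freqs))
    return sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
```

Concentrating all the excitation energy on the frequencies of interest is exactly what gives the multi-sine its signal-to-noise advantage over a chirp, which spreads energy across the whole band.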

  9. The challenges of accurately estimating time of long bone injury in children.

    PubMed

    Pickett, Tracy A

    2015-07-01

    The ability to determine the time an injury occurred can be of crucial significance in forensic medicine and holds special relevance to the investigation of child abuse. However, dating paediatric long bone injury, including fractures, is nuanced by complexities specific to the paediatric population. These challenges include the ability to identify bone injury in a growing or only partially-calcified skeleton, different injury patterns seen within the spectrum of the paediatric population, the effects of bone growth on healing as a separate entity from injury, differential healing rates seen at different ages, and the relative scarcity of information regarding healing rates in children, especially the very young. The challenges posed by these factors are compounded by a lack of consistency in defining and categorizing healing parameters. This paper sets out the primary limitations of existing knowledge regarding estimating timing of paediatric bone injury. Consideration and understanding of the multitude of factors affecting bone injury and healing in children will assist those providing opinion in the medical-legal forum.

  10. Is bioelectrical impedance spectroscopy accurate in estimating total body water and its compartments in elite athletes?

    PubMed

    Matias, Catarina N; Santos, Diana A; Gonçalves, Ezequiel M; Fields, David A; Sardinha, Luís B; Silva, Analiza M

    2013-03-01

Bioelectrical impedance spectroscopy (BIS) provides an affordable assessment of the body's various water compartments: total body water (TBW), extracellular water (ECW) and intracellular water (ICW). However, little is known of its validity in athletes. The aim was to validate TBW, ECW and ICW assessed by BIS in elite male and female Portuguese athletes, using dilution techniques (i.e. deuterium and bromide dilution) as criterion methods. Sixty-two athletes (18.5 ± 4.1 years) had TBW, ECW and ICW assessed by BIS during their respective pre-season. BIS significantly under-estimated TBW by 1.0 ± 1.7 kg and ICW by 0.9 ± 1.9 kg in relation to the criterion methods, with no differences observed for ECW. The values for the concordance correlation coefficient were 0.98 for TBW and ECW and 0.95 for ICW. Bland-Altman analyses revealed no bias for the various water compartments, with the 95% confidence intervals ranging from - 4.8 to 2.6 kg for TBW, - 1.5 to 1.6 kg for ECW and - 4.5 to 2.7 kg for ICW. Overall, these findings demonstrate that BIS is a valid tool for assessing TBW and its compartments in both male and female athletes.
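The Bland-Altman analysis used here is simple to reproduce in outline: the bias is the mean of the method-minus-criterion differences, and the 95% limits of agreement are the bias ± 1.96 standard deviations of those differences. A minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def bland_altman(method, criterion):
    """Return (bias, (lower, upper)): mean difference between the two
    measurements and the 95% limits of agreement (bias +/- 1.96 SD)."""
    d = np.asarray(method, float) - np.asarray(criterion, float)
    bias = d.mean()
    sd = d.std(ddof=1)                 # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```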

  11. Calculating the evolutionary rates of different genes: a fast, accurate estimator with applications to maximum likelihood phylogenetic analysis.

    PubMed

    Bevan, Rachel B; Lang, B Franz; Bryant, David

    2005-12-01

    In phylogenetic analyses with combined multigene or multiprotein data sets, accounting for differing evolutionary dynamics at different loci is essential for accurate tree prediction. Existing maximum likelihood (ML) and Bayesian approaches are computationally intensive. We present an alternative approach that is orders of magnitude faster. The method, Distance Rates (DistR), estimates rates based upon distances derived from gene/protein sequence data. Simulation studies indicate that this technique is accurate compared with other methods and robust to missing sequence data. The DistR method was applied to a fungal mitochondrial data set, and the rate estimates compared well to those obtained using existing ML and Bayesian approaches. Inclusion of the protein rates estimated from the DistR method into the ML calculation of trees as a branch length multiplier resulted in a significantly improved fit as measured by the Akaike Information Criterion (AIC). Furthermore, bootstrap support for the ML topology was significantly greater when protein rates were used, and some evident errors in the concatenated ML tree topology (i.e., without protein rates) were corrected. [Bayesian credible intervals; DistR method; multigene phylogeny; PHYML; rate heterogeneity.].

  12. A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates

    NASA Astrophysics Data System (ADS)

    Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.

    2015-08-01

We describe a new iterative method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates with continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the form of the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The method, which is flexible enough to adapt to any form of object image, achieves high measurement accuracy together with low computational complexity, owing to a maximum-likelihood procedure that is implemented to obtain the best fit instead of a least-squares method and Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world for the automatic discovery of asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and to confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network as a tool for analysing observations and detecting faint moving objects in frames.

  13. Aggregate versus individual-level sexual behavior assessment: how much detail is needed to accurately estimate HIV/STI risk?

    PubMed

    Pinkerton, Steven D; Galletly, Carol L; McAuliffe, Timothy L; DiFranceisco, Wayne; Raymond, H Fisher; Chesson, Harrell W

    2010-02-01

The sexual behaviors of HIV/sexually transmitted infection (STI) prevention intervention participants can be assessed on a partner-by-partner basis, in aggregate (i.e., total numbers of sex acts, collapsed across partners), or using a combination of these two methods (e.g., assessing five partners in detail and any remaining partners in aggregate). There is a natural trade-off between the level of sexual behavior detail and the precision of HIV/STI acquisition risk estimates. The results of this study indicate that relatively simple aggregate data collection techniques suffice to adequately estimate HIV risk. For highly infectious STIs, in contrast, accurate STI risk assessment requires more intensive partner-by-partner methods.
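The aggregate-versus-detailed trade-off can be seen in a toy Bernoulli-risk model: per-partner assessment multiplies partner-specific escape probabilities, while the aggregate version pools all acts at one mean per-act risk. A hedged sketch (the per-act transmission probabilities are illustrative, not the paper's parameters):

```python
def acquisition_risk_per_partner(acts_per_partner, per_act_prob):
    """Risk of at least one transmission, assessed partner-by-partner:
    1 - product over partners j of (1 - p_j)**n_j."""
    escape = 1.0
    for n, p in zip(acts_per_partner, per_act_prob):
        escape *= (1.0 - p) ** n
    return 1.0 - escape

def acquisition_risk_aggregate(total_acts, mean_prob):
    """Aggregate approximation: all acts pooled at one mean per-act risk."""
    return 1.0 - (1.0 - mean_prob) ** total_acts
```

When per-act risk is homogeneous the two agree exactly; when risk varies strongly across partners (as with highly infectious STIs), the aggregate version diverges from the partner-by-partner result, which is the intuition behind the study's conclusion.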

  14. Estimating patient dose from CT exams that use automatic exposure control: Development and validation of methods to accurately estimate tube current values.

    PubMed

    McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F

    2017-08-01

The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current either angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64 slice multidetector row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. In addition, to assess the accuracy of each method in estimating

  15. Linear-In-The-Parameters Oblique Least Squares (LOLS) Provides More Accurate Estimates of Density-Dependent Survival

    PubMed Central

    Vieira, Vasco M. N. C. S.; Engelen, Aschwin H.; Huanel, Oscar R.; Guillemin, Marie-Laure

    2016-01-01

Survival is a fundamental demographic component and the importance of its accurate estimation goes beyond the traditional estimation of life expectancy. The evolutionary stability of isomorphic biphasic life-cycles and the occurrence of their different ploidy phases at uneven abundances are hypothesized to be driven by differences in survival rates between haploids and diploids. We monitored Gracilaria chilensis, a commercially exploited red alga with an isomorphic biphasic life-cycle, and found density-dependent survival with competition and Allee effects. While estimating the linear-in-the-parameters survival function, all model I regression methods (i.e., vertical least squares) provided biased line-fits, rendering them inappropriate for studies of ecology, evolution or population management. Hence, we developed an iterative two-step non-linear model II regression (i.e., oblique least squares), which provided improved line-fits and estimates of survival function parameters, while remaining robust to the data aspects that usually render regression methods numerically unstable. PMID:27936048
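For the linear case, a standard model II alternative to vertical least squares is the standardized major axis (SMA), which minimizes deviations oblique to the line rather than vertical ones. The sketch below illustrates the oblique-fit idea only; it is not the authors' iterative two-step non-linear estimator.

```python
import numpy as np

def sma_fit(x, y):
    """Standardized major axis (model II) line fit: the slope is
    sign(r) * sd(y)/sd(x), and the line passes through the means."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept
```

Unlike ordinary (model I) least squares, the SMA slope does not shrink toward zero when both variables carry measurement error, which is the bias the paper identifies in vertical least-squares fits.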

  16. [Estimation of the excess death associated with influenza pandemics and epidemics in Japan after world war II: relation with pandemics and the vaccination system].

    PubMed

    Ohmi, Kenichi; Marui, Eiji

    2011-10-01

To estimate the excess death associated with influenza pandemics and epidemics in Japan after World War II, and to reexamine the relationship between excess death and the vaccination system in Japan. Using the Japanese national vital statistics data for 1952-2009, we specified months with influenza epidemics, monthly mortality rates and the seasonal index for 1952-74 and for 1975-2009. We then calculated excess deaths for each month from the observed number of deaths and the 95% range of expected deaths. Lastly, we calculated age-adjusted excess death rates using the 1985 model population of Japan. The total number of excess deaths for 1952-2009 was 687,279 (95% range, 384,149-970,468), or 12,058 (95% range, 6,739-17,026) per year. The total number of excess deaths in the 6 pandemic years of 1957-58, 58-59, 1968-69, 69-70, 77-78 and 78-79 was 95,904, while that in the 51 'non-pandemic' years was 591,376, 6.17-fold larger than in pandemic years. The average number of excess deaths for pandemic years was 23,976, nearly equal to that for 'non-pandemic' years, 23,655. At the beginning of the pandemics of 1957-58, 1968-69 and 1969-70, the proportion of those aged <65 years among excess deaths rose compared with 'non-pandemic' years. In the 1970s and 1980s, when the vaccination program for schoolchildren was mandatory in Japan on the basis of the "Fukumi thesis", age-adjusted average excess mortality rates were relatively low, with an average of 6.17 per hundred thousand. In the 1990s, when group vaccination was discontinued, age-adjusted excess mortality rose to 9.42, only to drop again to 2.04 when influenza vaccination was made available to the elderly in the 2000s, suggesting that the vaccination of Japanese children prevented excess deaths from influenza pandemics and epidemics. Moreover, in the age group under 65, average excess mortality rates were lower in the 1970s and 1980s than in the 2000s, which shows that the "Social Defensive" schoolchildren vaccination program in the
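The excess-death arithmetic follows the usual pattern: observed deaths minus the seasonal expectation, counted only in months where the observation exceeds the expected range. A minimal sketch (the counting rule is an assumption about the authors' procedure, and all numbers in the test are made up for illustration):

```python
def monthly_excess(observed, expected, upper95):
    """Per-month excess deaths: observed minus expected, but only for
    months where the observation exceeds the upper bound of the 95%
    range of expected deaths; otherwise zero."""
    return [max(0, o - e) if o > u else 0
            for o, e, u in zip(observed, expected, upper95)]
```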

  17. Accurate Estimation of Fungal Diversity and Abundance through Improved Lineage-Specific Primers Optimized for Illumina Amplicon Sequencing

    PubMed Central

    Walters, William A.; Lennon, Niall J.; Bochicchio, James; Krohn, Andrew; Pennanen, Taina

    2016-01-01

    ABSTRACT While high-throughput sequencing methods are revolutionizing fungal ecology, recovering accurate estimates of species richness and abundance has proven elusive. We sought to design internal transcribed spacer (ITS) primers and an Illumina protocol that would maximize coverage of the kingdom Fungi while minimizing nontarget eukaryotes. We inspected alignments of the 5.8S and large subunit (LSU) ribosomal genes and evaluated potential primers using PrimerProspector. We tested the resulting primers using tiered-abundance mock communities and five previously characterized soil samples. We recovered operational taxonomic units (OTUs) belonging to all 8 members in both mock communities, despite DNA abundances spanning 3 orders of magnitude. The expected and observed read counts were strongly correlated (r = 0.94 to 0.97). However, several taxa were consistently over- or underrepresented, likely due to variation in rRNA gene copy numbers. The Illumina data resulted in clustering of soil samples identical to that obtained with Sanger sequence clone library data using different primers. Furthermore, the two methods produced distance matrices with a Mantel correlation of 0.92. Nonfungal sequences comprised less than 0.5% of the soil data set, with most attributable to vascular plants. Our results suggest that high-throughput methods can produce fairly accurate estimates of fungal abundances in complex communities. Further improvements might be achieved through corrections for rRNA copy number and utilization of standardized mock communities. IMPORTANCE Fungi play numerous important roles in the environment. Improvements in sequencing methods are providing revolutionary insights into fungal biodiversity, yet accurate estimates of the number of fungal species (i.e., richness) and their relative abundances in an environmental sample (e.g., soil, roots, water, etc.) remain difficult to obtain. We present improved methods for high-throughput Illumina sequencing of the

  18. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    PubMed

    Minyoo, Abel B; Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul; Lankester, Felix

    2015-12-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere.

  19. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate

    PubMed Central

    Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul

    2015-01-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821

  20. Accurate estimation of entropy in very short physiological time series: the problem of atrial fibrillation detection in implanted ventricular devices.

    PubMed

    Lake, Douglas E; Moorman, J Randall

    2011-01-01

Entropy estimation is useful but difficult in short time series. For example, automated detection of atrial fibrillation (AF) in very short heart beat interval time series would be useful in patients with cardiac implantable electronic devices that record only from the ventricle. Such devices require efficient algorithms, and the clinical situation demands accuracy. Toward these ends, we optimized the sample entropy measure, which reports the probability that short templates will match with others within the series. We developed general methods for the rational selection of the template length m and the matching tolerance r. The major innovation was to allow r to vary so that sufficient matches are found for confident entropy estimation, with conversion of the final probability to a density by dividing by the matching region volume, 2r(m). The optimized sample entropy estimate and the mean heart beat interval each contributed to accurate detection of AF in as few as 12 heartbeats. The final algorithm, called the coefficient of sample entropy (COSEn), was developed using the canonical MIT-BIH database and validated in a new and much larger set of consecutive Holter monitor recordings from the University of Virginia. In patients over 40 yr of age, COSEn has high degrees of accuracy in distinguishing AF from normal sinus rhythm in 12-beat calculations performed hourly. The most common errors are atrial or ventricular ectopy, which increase entropy despite sinus rhythm, and atrial flutter, which can have low or high entropy states depending on dynamics of atrioventricular conduction.
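The core of the measure is the sample entropy calculation itself. Below is a minimal sketch of one common SampEn variant (COSEn's additional density conversion and heart-rate term are omitted, and the values of m and r are illustrative defaults, not the paper's optimized choices):

```python
import numpy as np

def sample_entropy(x, m=1, r=0.2):
    """SampEn = -ln(A/B): B counts pairs of length-m templates within
    tolerance r (Chebyshev distance, no self-matches), and A counts
    pairs of length-(m+1) templates. Returns None if no matches."""
    x = np.asarray(x, float)
    n = len(x)

    def matches(mm):
        tpl = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        c = 0
        for i in range(len(tpl) - 1):
            # max |difference| against all later templates (no self-match)
            d = np.max(np.abs(tpl[i + 1:] - tpl[i]), axis=1)
            c += int(np.sum(d <= r))
        return c

    B, A = matches(m), matches(m + 1)
    return None if A == 0 or B == 0 else -np.log(A / B)
```

In very short records (e.g., 12 beats) A is often zero at a fixed r, which is precisely the failure mode the paper's variable-r innovation addresses.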

  1. Accurate Estimation of Effective Population Size in the Korean Dairy Cattle Based on Linkage Disequilibrium Corrected by Genomic Relationship Matrix

    PubMed Central

    Shin, Dong-Hyun; Cho, Kwang-Hyun; Park, Kyoung-Do; Lee, Hyun-Jeong; Kim, Heebal

    2013-01-01

Linkage disequilibrium between markers or genetic variants underlying traits of interest affects many genomic methodologies. In many genomic methodologies, the effective population size (Ne) is important for assessing the genetic diversity of animal populations. In this study, dairy cattle were genotyped using the Illumina BovineHD Genotyping BeadChips for over 777,000 SNPs located across all autosomes, mitochondria and sex chromosomes, and 70,000 autosomal SNPs were selected randomly for the final analysis. We characterized more accurate linkage disequilibrium in a sample of 96 dairy cattle producing milk in Korea. Estimated linkage disequilibrium was relatively high between closely linked markers (>0.6 at 10 kb) and decreased with increasing distance. Using formulae that relate the expected linkage disequilibrium to Ne, and assuming a constant actual population size, Ne was estimated to be approximately 122 in this population. Historical Ne, calculated assuming linear population growth, was suggestive of a rapid increase in Ne over the past 10 generations, with slower increase before that. Additionally, we corrected the genomic relationship structure per chromosome in calculating r2 and estimated Ne. The observed Ne based on r2 corrected by genomic relationship structure can be rationalized using current knowledge of the history of the dairy cattle breeds producing milk in Korea. PMID:25049757

  2. Accurate estimation of effective population size in the korean dairy cattle based on linkage disequilibrium corrected by genomic relationship matrix.

    PubMed

    Shin, Dong-Hyun; Cho, Kwang-Hyun; Park, Kyoung-Do; Lee, Hyun-Jeong; Kim, Heebal

    2013-12-01

    Linkage disequilibrium between markers or genetic variants underlying interesting traits affects many genomic methodologies. In many genomic methodologies, the effective population size (Ne) is important to assess the genetic diversity of animal populations. In this study, dairy cattle were genotyped using the Illumina BovineHD Genotyping BeadChips for over 777,000 SNPs located across all autosomes, mitochondria and sex chromosomes, and 70,000 autosomal SNPs were selected randomly for the final analysis. We characterized more accurate linkage disequilibrium in a sample of 96 dairy cattle producing milk in Korea. Estimated linkage disequilibrium was relatively high between closely linked markers (>0.6 at 10 kb) and decreased with increasing distance. Using formulae that related the expected linkage disequilibrium to Ne, and assuming a constant actual population size, Ne was estimated to be approximately 122 in this population. Historical Ne, calculated assuming linear population growth, was suggestive of a rapid increase in Ne over the past 10 generations, with a slower increase thereafter. Additionally, we corrected the genomic relationship structure per chromosome in calculating r2 and estimated Ne. The observed Ne based on r2 corrected by genomic relationship structure can be rationalized using current knowledge of the history of the dairy cattle breeds producing milk in Korea.

  3. Impact of interfacial high-density water layer on accurate estimation of adsorption free energy by Jarzynski's equality

    NASA Astrophysics Data System (ADS)

    Zhang, Zhisen; Wu, Tao; Wang, Qi; Pan, Haihua; Tang, Ruikang

    2014-01-01

    The interactions between proteins/peptides and materials are crucial to research and development in many biomedical engineering fields. The energetics of such interactions are key in the evaluation of new proteins/peptides and materials. Much research has recently focused on the quality of free energy profiles by Jarzynski's equality, a widely used equation in biosystems. In the present work, considerable discrepancies were observed between the results obtained by Jarzynski's equality and those derived by umbrella sampling in biomaterial-water model systems. Detailed analyses confirm that such discrepancies turn up only when the target molecule moves in the high-density water layer on a material surface. Then a hybrid scheme was adopted based on this observation. The agreement between the results of the hybrid scheme and umbrella sampling confirms the former observation, which indicates an approach to a fast and accurate estimation of adsorption free energy for large biomaterial interfacial systems.
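
    Jarzynski's equality, ΔF = -kT·ln⟨exp(-W/kT)⟩ over repeated nonequilibrium pulling realizations, is the estimator under discussion; a minimal sketch using a log-sum-exp for numerical stability follows. The poor convergence analyzed above stems from the average being dominated by rare low-work trajectories.

```python
import numpy as np

def jarzynski_free_energy(work_values, kT=1.0):
    """Jarzynski estimator dF = -kT * ln< exp(-W/kT) > over repeated
    pulling realizations; the log-sum-exp keeps the exponential average
    stable even when the work values span many kT."""
    w = np.asarray(work_values, dtype=float) / kT
    return -kT * (np.logaddexp.reduce(-w) - np.log(len(w)))
```

    By Jensen's inequality the estimate never exceeds the mean work, and with too few realizations it is biased high, which is one route to the discrepancies against umbrella sampling reported above.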

  4. Impact of interfacial high-density water layer on accurate estimation of adsorption free energy by Jarzynski's equality.

    PubMed

    Zhang, Zhisen; Wu, Tao; Wang, Qi; Pan, Haihua; Tang, Ruikang

    2014-01-21

    The interactions between proteins/peptides and materials are crucial to research and development in many biomedical engineering fields. The energetics of such interactions are key in the evaluation of new proteins/peptides and materials. Much research has recently focused on the quality of free energy profiles by Jarzynski's equality, a widely used equation in biosystems. In the present work, considerable discrepancies were observed between the results obtained by Jarzynski's equality and those derived by umbrella sampling in biomaterial-water model systems. Detailed analyses confirm that such discrepancies turn up only when the target molecule moves in the high-density water layer on a material surface. Then a hybrid scheme was adopted based on this observation. The agreement between the results of the hybrid scheme and umbrella sampling confirms the former observation, which indicates an approach to a fast and accurate estimation of adsorption free energy for large biomaterial interfacial systems.

  5. Accurate state estimation from uncertain data and models: an application of data assimilation to mathematical models of human brain tumors

    PubMed Central

    2011-01-01

    Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck). PMID:22185645
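
    The state-update idea can be conveyed with a minimal stochastic ensemble Kalman filter analysis step, a simpler cousin of the LETKF used in the paper (which adds localization and a deterministic ensemble transform). The function name, shapes, and observation-operator convention below are illustrative assumptions.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_std, rng):
    """One stochastic EnKF analysis step: each member is nudged toward a
    perturbed observation using the ensemble-estimated covariance.
    ensemble: (n_members, n_state); H: (n_obs, n_state) linear obs operator."""
    n_mem = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)        # state anomalies
    Y = X @ H.T                                 # observed-space anomalies
    R = (obs_std ** 2) * np.eye(len(obs))       # observation error covariance
    Pyy = Y.T @ Y / (n_mem - 1) + R
    Pxy = X.T @ Y / (n_mem - 1)
    K = Pxy @ np.linalg.inv(Pyy)                # Kalman gain
    perturbed = obs + obs_std * rng.standard_normal((n_mem, len(obs)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T
```

    In the tumor application, the "state" would be the discretized cell-density field and the MRI-derived observations enter through H; cycling forecast and update steps is what lets the filter shadow growth across the six 60-day cycles.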

  6. Finger counting method is more accurate than age-based weight estimation formulae in estimating the weight of Hong Kong children presenting to the emergency department.

    PubMed

    So, Jerome Lt; Chow, Eric Pf; Cattermole, Giles N; Graham, Colin A; Rainer, Timothy H

    2016-12-01

    The aim of the present study was to evaluate the finger counting method and compare its performance with four commonly used age-based weight estimation formulae in children aged 1-9 years presenting to the ED in Hong Kong. A cross-sectional, observational study of children aged 1-9 years who presented to the ED of a tertiary referral hospital in Hong Kong over a 6 month period was conducted. Actual weight was compared with estimated weight using the finger counting method and four commonly used age-based weight estimation formulae. Bland-Altman analysis was performed to evaluate the degree of agreement, in which the mean percentage difference (MPD) and 95% limits of agreement (LOA) were calculated. Root mean squared error (RMSE) and proportions of weight estimates within 10%, 15% and 20% of actual weight were determined. A total of 4178 children were included. The finger counting method was the most accurate method (MPD 0.1%; 95% LOA -34.0% to 34.2%). The original Advanced Paediatric Life Support (APLS) formula (MPD -7.0%; 95% LOA -38.4% to 24.3%) and the updated APLS formula (MPD -0.4%; 95% LOA -38.5% to 37.8%) underestimated weight, whereas the Luscombe formula (MPD 7.2%; 95% LOA -31.8% to 46.2%) and the Best Guess formula (MPD 10.6%; 95% LOA -27.3% to 48.4%) overestimated weight. The finger counting method had the smallest RMSE of 4.06 kg and estimated the largest proportion of children within 10%, 15% and 20% of actual weight. The finger counting method outperforms the commonly used age-based weight estimation formulae in children aged 1-9 years presenting to the ED in Hong Kong. © 2016 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
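
    The Bland-Altman quantities reported above (MPD and 95% LOA on percentage differences) can be sketched as below. The percentage-difference definition (denominator = actual weight) and the two example rules (original APLS, Luscombe) are as commonly cited in the literature, assumptions rather than a quotation of the study's code.

```python
import numpy as np

def bland_altman_percent(actual, estimated):
    """Mean percentage difference (MPD) and 95% limits of agreement (LOA),
    computed on 100*(estimated - actual)/actual."""
    actual = np.asarray(actual, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    pct = 100.0 * (estimated - actual) / actual
    mpd = pct.mean()
    sd = pct.std(ddof=1)
    return mpd, (mpd - 1.96 * sd, mpd + 1.96 * sd)

# Two of the age-based rules, as commonly cited (weight in kg, age in years):
def apls_original(age):
    return 2.0 * (age + 4.0)   # original APLS

def luscombe(age):
    return 3.0 * age + 7.0     # Luscombe
```

    Running `bland_altman_percent` on actual weights versus each rule's estimates reproduces the kind of MPD/LOA comparison tabulated in the abstract.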

  7. Refined Estimate of Total Variation Enables a More Accurate Parameter and Uncertainty Estimation, as Well as a New Model Selection Procedure

    NASA Astrophysics Data System (ADS)

    de Brauwere, A.; de Ridder, F.; Elskens, M.; Schoukens, J.; Pintelon, R.; Baeyens, W.

    2004-12-01

    In almost every field of science and engineering nonlinear equations are increasingly used to model experimental measurements. In this context, we address the problem of accurately estimating the model parameters and their uncertainty. For that, it is essential to correctly take into account the stochastic measurement uncertainties. For instance, if the measurements are subject to individual errors, the parameters are often estimated using a Weighted Least Squares (WLS) method. For estimating the parameter uncertainties, a linearized expression for the covariance matrix exists. Yet, both methods generally assume that the errors on the independent variable(s), also called "input", are negligible, which is often not true in reality. We propose a refinement of the abovementioned parameter and uncertainty estimation methods, which generalises their applicability to cases where input noise is not negligible. An advantage of this method is that the input noise is transformed into output noise, which makes it possible to keep the traditional WLS formalism (and software). The refined methods are evaluated and compared to the original procedures. The results reveal an improved consistency of the refined WLS estimator compared to the original one. An additional advantage of the refined WLS cost function is that its residual value can be interpreted as a sample from a chi-square distribution. This property is useful because it enables an internal quality control of the results. In addition, this property allows an objective procedure to select the most appropriate model for describing the data under study, when several competing models are available. The parameter uncertainty estimation is also clearly improved by applying the refined method. Neglecting the effect of the input noise simply ignores a (potentially) important origin of the parameter variation. Therefore, without the refinement, the parameter uncertainties are systematically underestimated.
Using the refined method
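
    For a straight-line model, the "transform input noise into output noise" idea reduces to an effective output variance s_eff^2 = s_y^2 + b^2·s_x^2, iterated because the weights depend on the slope. A sketch under that assumption follows; the method described above is more general than this linear special case.

```python
import numpy as np

def line_fit_effective_variance(x, y, sx, sy, n_iter=50):
    """Straight-line WLS, y = a + b*x, with noise on both axes: the input
    noise sx is folded into an effective output variance
    s_eff^2 = sy^2 + b^2 * sx^2, so the usual WLS machinery still applies."""
    x, y, sx, sy = (np.asarray(v, dtype=float) for v in (x, y, sx, sy))
    b = np.polyfit(x, y, 1)[0]                      # unweighted starting slope
    for _ in range(n_iter):
        w = 1.0 / (sy ** 2 + b ** 2 * sx ** 2)      # effective weights
        xb = np.sum(w * x) / np.sum(w)
        yb = np.sum(w * y) / np.sum(w)
        b = np.sum(w * (x - xb) * (y - yb)) / np.sum(w * (x - xb) ** 2)
    a = yb - b * xb
    return a, b
```

    With these weights the residual cost is approximately chi-square distributed with n - 2 degrees of freedom, which is the internal quality-control and model-selection property the abstract highlights.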

  8. Reservoir evaluation of thin-bedded turbidites and hydrocarbon pore thickness estimation for an accurate quantification of resource

    NASA Astrophysics Data System (ADS)

    Omoniyi, Bayonle; Stow, Dorrik

    2016-04-01

    One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin and medium-bedded turbidites (<10 cm and <30 cm, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
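
    The HPT summation itself is simple; a sketch assuming the standard definition HPT = Σ h·φ·(1 - Sw) over net-pay layers follows (the function name and per-layer tuple format are illustrative, not from the study).

```python
def hydrocarbon_pore_thickness(layers):
    """HPT = sum over net-pay layers of thickness * porosity * (1 - Sw).
    `layers` is an iterable of (thickness_ft, porosity_frac, water_saturation)
    tuples. Including the thin beds in this sum, rather than cutting them
    out with a coarse net-sand filter, is the point made above."""
    return sum(h * phi * (1.0 - sw) for h, phi, sw in layers)
```

    Multiplying HPT by reservoir area and a formation volume factor then yields OOIP, which is why a systematic HPT underestimate propagates directly into the reserve numbers.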

  9. Performance evaluation of ocean color satellite models for deriving accurate chlorophyll estimates in the Gulf of Saint Lawrence

    NASA Astrophysics Data System (ADS)

    Montes-Hugo, M.; Bouakba, H.; Arnone, R.

    2014-06-01

    The understanding of phytoplankton dynamics in the Gulf of the Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, the accuracy of two atmospheric correction techniques (NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical for SeaWiFS (Sea-viewing Wide Field-of-View Sensor), EC, Lee's quasi-analytical, QAA, and Garver-Siegel-Maritorena semi-empirical, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated based on SeaWiFS images and shipboard measurements obtained during May of 2000 and April 2001. In general, aph(443) estimates derived from coupling KU and QAA models presented the smallest differences with respect to in situ determinations by high-performance liquid chromatography (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) values produced up to a 43.4% increase in prediction error as inferred from the median relative bias per cruise. Likewise, the impact of applying different atmospheric correction schemes was secondary and represented an additive error of up to 24.3%. By using SeaDAS (SeaWiFS Data Analysis System) default values for the optical cross section of phytoplankton (i.e., a*ph(443) = aph(443)/chl = 0.056 m2 mg-1), the median relative bias of our chl estimates as derived from the most accurate spaceborne aph(443) retrievals and with respect to in situ determinations increased up to 29%.
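
    The final chl step quoted above, dividing the retrieved absorption by a fixed optical cross section a*ph(443) = aph(443)/chl, is a one-liner; the 0.056 m2 mg-1 default comes from the abstract, and the function name is an illustrative assumption.

```python
def chl_from_aph(aph443, aph_star=0.056):
    """Chlorophyll concentration (mg m^-3) from the phytoplankton absorption
    coefficient at 443 nm (m^-1), assuming a fixed chlorophyll-specific
    absorption a*ph(443) = aph(443)/chl (SeaDAS default 0.056 m2 mg^-1)."""
    return aph443 / aph_star
```

    Because a*ph varies regionally, holding it at the SeaDAS default transfers error directly into chl, which is one reason the chl bias exceeds the aph(443) bias above.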

  10. Simplifying ART cohort monitoring: Can pharmacy stocks provide accurate estimates of patients retained on antiretroviral therapy in Malawi?

    PubMed Central

    2012-01-01

    Background Routine monitoring of patients on antiretroviral therapy (ART) is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records as compared to estimates of retention based on standardized paper- or electronic-based cohort reports. Methods Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based system facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April–June] and quarter three [July–September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter with intent to estimate the number of patients retained on ART. Information for time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among paper-based facilities, an

  11. Can endocranial volume be estimated accurately from external skull measurements in great-tailed grackles (Quiscalus mexicanus)?

    PubMed Central

    Palmstrom, Christin R.

    2015-01-01

    There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder for male and female great-tailed grackles. We found that while females had higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. External skull measurements had no predictive accuracy for CT volumes, because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex. PMID:26082858

  12. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    NASA Astrophysics Data System (ADS)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse in space and/or time. Although this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  13. Mixture models reveal multiple positional bias types in RNA-Seq data and lead to accurate transcript concentration estimates.

    PubMed

    Tuerk, Andreas; Wiktorin, Gregor; Güler, Serhat

    2017-05-01

    Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (rd. "mixquare"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization, resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress, and PennSeq, state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R2 between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R2 between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress. We, further, observe improved repeatability across laboratory sites with a relative increase in R2 between 8% and 44% and reduced standard deviation.

  14. Estimation of the Prevalence of Inadequate and Excessive Iodine Intakes in School-Age Children from the Adjusted Distribution of Urinary Iodine Concentrations from Population Surveys.

    PubMed

    Zimmermann, Michael B; Hussein, Izzeldin; Al Ghannami, Samia; El Badawi, Salah; Al Hamad, Nawal M; Abbas Hajj, Basima; Al-Thani, Mohamed; Al-Thani, Al Anoud; Winichagoon, Pattanee; Pongcharoen, Tippawan; van der Haar, Frits; Qing-Zhen, Jia; Dold, Susanne; Andersson, Maria; Carriquiry, Alicia L

    2016-06-01

    The urinary iodine concentration (UIC), a biomarker of iodine intake, is used to assess population iodine status by deriving the median UIC, but this does not quantify the percentage of individuals with habitually deficient or excess iodine intakes. Individuals with a UIC <100 μg/L or ≥300 μg/L are often incorrectly classified as having deficient or excess intakes, but this likely overestimates the true prevalence. Our aim was to estimate the prevalence of inadequate and excess iodine intake in children (aged 4-14 y) with the distribution of spot UIC from iodine surveys. With the use of data from national iodine studies (Kuwait, Oman, Thailand, and Qatar) and a regional study (China) in children (n = 6117) in which a repeat UIC was obtained in a subsample (n = 1060), we calculated daily iodine intake from spot UICs from the relation between body weight and 24-h urine volume and within-person variation by using the repeat UIC. We also estimated pooled external within-person proportion of total variances by region. We used within-person variance proportions to obtain the prevalence of inadequate or excess usual iodine intake by using the Estimated Average Requirement (EAR)/Tolerable Upper Intake Level (UL) cutoff method. Median UICs in Kuwait, Oman, China, Thailand, and Qatar were 132, 192, 199, 262, and 333 μg/L, respectively. Internal within-person variance proportions ranged from 25.0% to 80.0%, and pooled regional external estimates ranged from 40.4% to 77.5%. The prevalence of inadequate and excess intakes as defined by the adjusted EAR/UL cutoff method was ∼45-99% lower than those defined by a spot UIC <100 μg/L or ≥300 μg/L (P < 0.01). Applying the EAR/UL cutoff method to iodine intakes from adjusted UIC distributions is a promising approach to estimate the number of individuals with deficient or excess iodine intakes. © 2016 American Society for Nutrition.
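
    The adjustment idea, removing within-person (day-to-day) variance before applying the EAR/UL cutoffs, can be sketched by shrinking each observed intake toward the group mean by the square root of the between-person variance fraction. This simple shrinkage is an illustrative stand-in for the study's full adjustment procedure, not a reproduction of it.

```python
import numpy as np

def ear_ul_prevalence(intakes, within_var_prop, ear, ul):
    """EAR/UL cutoff method on a variance-adjusted intake distribution.
    Each observed intake is shrunk toward the group mean by
    sqrt(1 - within-person variance proportion), narrowing the
    distribution before counting individuals below the EAR or above
    the UL. Returns (prevalence_inadequate, prevalence_excess)."""
    x = np.asarray(intakes, dtype=float)
    shrink = np.sqrt(1.0 - within_var_prop)
    adjusted = x.mean() + (x - x.mean()) * shrink
    return float(np.mean(adjusted < ear)), float(np.mean(adjusted > ul))
```

    Because the adjusted distribution has thinner tails, the resulting prevalences fall well below naive spot-UIC cutoff counts, mirroring the ~45-99% reduction reported above.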

  15. Challenges associated with drunk driving measurement: combining police and self-reported data to estimate an accurate prevalence in Brazil.

    PubMed

    Sousa, Tanara; Lunnen, Jeffrey C; Gonçalves, Veralice; Schmitz, Aurinez; Pasa, Graciela; Bastos, Tamires; Sripad, Pooja; Chandran, Aruna; Pechansky, Flavio

    2013-12-01

    Drunk driving is an important risk factor for road traffic crashes, injuries and deaths. After June 2008, all drivers in Brazil were subject to a "Zero Tolerance Law" with a breath alcohol concentration limit of 0.1 mg/L of air. However, a loophole in this law enabled drivers to refuse breath or blood alcohol testing on the grounds of self-incrimination. The reported prevalence of drunk driving is therefore likely a gross underestimate in many cities. To compare the prevalence of drunk driving gathered from police reports to the prevalence gathered from self-reported questionnaires administered at police sobriety roadblocks in two Brazilian capital cities, and to estimate a more accurate prevalence of drunk driving utilizing three correction techniques based upon information from those questionnaires. In August 2011 and January-February 2012, researchers from the Centre for Drug and Alcohol Research at the Universidade Federal do Rio Grande do Sul administered a roadside interview on drunk driving practices to 805 voluntary participants in the Brazilian capital cities of Palmas and Teresina. Three techniques which include measures such as the number of persons reporting alcohol consumption in the last six hours but who had refused breath testing were used to estimate the prevalence of drunk driving. The prevalence of persons testing positive for alcohol on their breath was 8.8% and 5.0% in Palmas and Teresina respectively. Utilizing a correction technique we calculated that a more accurate prevalence in these sites may be as high as 28.2% and 28.7%. In both cities, about 60% of drivers who self-reported having drank within six hours of being stopped by the police either refused breathalyser testing, fled the sobriety roadblock, or were not offered the test, compared to about 30% of drivers that said they had not been drinking. 
Despite the reduction of the legal limit for drunk driving stipulated by the "Zero Tolerance Law," loopholes in the legislation permit many

  16. Estimating the gas transfer velocity: a prerequisite for more accurate and higher resolution GHG fluxes (lower Aare River, Switzerland)

    NASA Astrophysics Data System (ADS)

    Sollberger, S.; Perez, K.; Schubert, C. J.; Eugster, W.; Wehrli, B.; Del Sontro, T.

    2013-12-01

    Currently, carbon dioxide (CO2) and methane (CH4) emissions from lakes, reservoirs and rivers are readily investigated due to the global warming potential of those gases and the role these inland waters play in the carbon cycle. However, there is a lack of high spatiotemporally-resolved emission estimates, and how to accurately assess the gas transfer velocity (K) remains controversial. In anthropogenically-impacted systems where run-of-river reservoirs disrupt the flow of sediments by increasing the erosion and load accumulation patterns, the resulting production of carbonic greenhouse gases (GH-C) is likely to be enhanced. The GH-C flux is thus counteracting the terrestrial carbon sink in these environments that act as net carbon emitters. The aim of this project was to determine the GH-C emissions from a medium-sized river heavily impacted by several impoundments and channelization through a densely-populated region of Switzerland. Estimating gas emission from rivers is not trivial and recently several models have been put forth to do so; therefore a second goal of this project was to compare the river emission models available with direct measurements. Finally, we further validated the modeled fluxes by using a combined approach with water sampling, chamber measurements, and highly temporal GH-C monitoring using an equilibrator. We conducted monthly surveys along the 120 km of the lower Aare River where we sampled for dissolved CH4 ('manual' sampling) at a 5-km sampling resolution, and measured gas emissions directly with chambers over a 35 km section. We calculated fluxes (F) via the boundary layer equation (F=K×(Cw-Ceq)) that uses the water-air GH-C concentration (C) gradient (Cw-Ceq) and K, which is the most sensitive parameter. K was estimated using 11 different models found in the literature with varying dependencies on: river hydrology (n=7), wind (2), heat exchange (1), and river width (1). 
We found that chamber fluxes were always higher than boundary
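
    The boundary layer calculation quoted above is directly computable once K is chosen; the units below are an illustrative choice, and the Cw and Ceq inputs would come from the equilibrator/manual sampling and from Henry's law at ambient conditions, respectively.

```python
def gas_flux(k_m_per_day, c_water, c_equil):
    """Boundary-layer flux F = K * (Cw - Ceq): gas transfer velocity K
    times the water-air concentration gradient. With K in m/d and
    concentrations in mmol/m^3, F comes out in mmol m^-2 d^-1.
    Supersaturated water (Cw > Ceq) gives a positive, outgassing flux."""
    return k_m_per_day * (c_water - c_equil)
```

    The abstract's point that K is the most sensitive parameter is visible here: F scales linearly with K, so an 11-fold spread in modeled K values translates one-for-one into the flux spread.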

  17. A multilevel excess hazard model to estimate net survival on hierarchical data allowing for non-linear and non-proportional effects of covariates.

    PubMed

    Charvat, Hadrien; Remontet, Laurent; Bossard, Nadine; Roche, Laurent; Dejardin, Olivier; Rachet, Bernard; Launoy, Guy; Belot, Aurélien

    2016-08-15

    The excess hazard regression model is an approach developed for the analysis of cancer registry data to estimate net survival, that is, the survival of cancer patients that would be observed if cancer was the only cause of death. Cancer registry data typically possess a hierarchical structure: individuals from the same geographical unit share common characteristics such as proximity to a large hospital that may influence access to and quality of health care, so that their survival times might be correlated. As a consequence, correct statistical inference regarding the estimation of net survival and the effect of covariates should take this hierarchical structure into account. It becomes particularly important as many studies in cancer epidemiology aim at studying the effect on the excess mortality hazard of variables, such as deprivation indexes, often available only at the ecological level rather than at the individual level. We developed here an approach to fit a flexible excess hazard model including a random effect to describe the unobserved heterogeneity existing between different clusters of individuals, and with the possibility to estimate non-linear and time-dependent effects of covariates. We demonstrated the overall good performance of the proposed approach in a simulation study that assessed the impact on parameter estimates of the number of clusters, their size and their level of unbalance. We then used this multilevel model to describe the effect of a deprivation index defined at the geographical level on the excess mortality hazard of patients diagnosed with cancer of the oral cavity. Copyright © 2016 John Wiley & Sons, Ltd.

  18. A relationship to estimate the excess entropy of mixing: Application in silicate solid solutions and binary alloys

    PubMed Central

    Benisek, Artur; Dachs, Edgar

    2012-01-01

    The paper presents new calorimetric data on the excess heat capacity and vibrational entropy of mixing of Pt–Rh and Ag–Pd alloys. The results of the latter alloy are compared to those obtained by calculations using the density functional theory. The extent of the excess vibrational entropy of mixing of these binaries and of some already investigated binary mixtures is related to the differences of the end-member volumes and the end-member bulk moduli. These quantities are used to roughly represent the changes of the bond length and stiffness in the substituted and substituent polyhedra due to compositional changes, which are assumed to be the important factors for the non-ideal vibrational behaviour in solid solutions. PMID:23471516

  19. A relationship to estimate the excess entropy of mixing: Application in silicate solid solutions and binary alloys.

    PubMed

    Benisek, Artur; Dachs, Edgar

    2012-06-25

    The paper presents new calorimetric data on the excess heat capacity and vibrational entropy of mixing of Pt-Rh and Ag-Pd alloys. The results of the latter alloy are compared to those obtained by calculations using the density functional theory. The extent of the excess vibrational entropy of mixing of these binaries and of some already investigated binary mixtures is related to the differences of the end-member volumes and the end-member bulk moduli. These quantities are used to roughly represent the changes of the bond length and stiffness in the substituted and substituent polyhedra due to compositional changes, which are assumed to be the important factors for the non-ideal vibrational behaviour in solid solutions.

  20. The new Asian modified CKD-EPI equation leads to more accurate GFR estimation in Chinese patients with CKD.

    PubMed

    Wang, Jinghua; Xie, Peng; Huang, Jian-Min; Qu, Yan; Zhang, Fang; Wei, Ling-Ge; Fu, Peng; Huang, Xiao-Jie

    2016-12-01

    To verify whether the new Asian modified CKD-EPI equation improves the performance of the original one in determining GFR in Chinese patients with CKD, a well-designed paired cohort was set up. Measured GFR (mGFR) was obtained with the (99m)Tc-diethylene triamine pentaacetic acid ((99m)Tc-DTPA) dual plasma sample clearance method. Estimated GFR (eGFR) was computed with the CKD-EPI equation (eGFR1) and the new Asian modified CKD-EPI equation (eGFR2). Comparisons were performed to evaluate the superiority of eGFR2 in bias, accuracy, precision, concordance correlation coefficient, the slope of the regression equation and measures of agreement. A total of 195 patients were enrolled and analyzed. The new Asian modified CKD-EPI equation improved the performance of the original one in bias and accuracy. However, nearly identical performance was observed with respect to precision, concordance correlation coefficient, the slope of eGFR against mGFR and the 95% limits of agreement. In the subgroup with GFR < 60 mL/min/1.73 m(2), the bias of eGFR1 was less than that of eGFR2, but they had comparable precision and accuracy. In the subgroup with GFR > 60 mL/min/1.73 m(2), eGFR2 performed better than eGFR1 in terms of bias and accuracy. The new Asian modified CKD-EPI equation can lead to more accurate GFR estimation in Chinese patients with CKD in general practice, especially in the higher GFR group.
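    For reference, the original (2009) CKD-EPI creatinine equation that the Asian modification rescales can be sketched as below. The Asian-modified version applies published recalibration coefficients that are not reproduced here; only the original equation is shown.

```python
def ckd_epi_2009(scr_mg_dl, age, female, black=False):
    """Original CKD-EPI (2009) creatinine equation.

    scr_mg_dl: serum creatinine in mg/dL; returns eGFR in mL/min/1.73 m^2.
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

egfr_m = ckd_epi_2009(0.9, 50, female=False)   # roughly 99 mL/min/1.73 m^2
egfr_f = ckd_epi_2009(1.4, 60, female=True)    # roughly 41 mL/min/1.73 m^2
```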

  1. Development of a new, robust and accurate, spectroscopic metric for scatterer size estimation in optical coherence tomography (OCT) images

    NASA Astrophysics Data System (ADS)

    Kassinopoulos, Michalis; Pitris, Costas

    2016-03-01

    The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD) was developed which is more robust and accurate, compared to previously reported techniques, in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width, offer significant advantages over other spectral analysis approaches especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.
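    A toy version of the derivative-correlation idea can be sketched on a synthetic modulated spectrum: the modulation period scales inversely with scatterer size, so the location of the first minimum of the autocorrelation of the spectrum's derivative encodes the diameter. The cosine stand-in for a Mie spectrum, the axis units and the noise level are all illustrative assumptions, not the paper's COD implementation.

```python
import numpy as np

# Synthetic modulated "backscattering spectrum" (illustrative, not Mie theory).
k = np.linspace(6.0, 9.0, 512)            # wavenumber axis, arbitrary units
d_true = 6.0                              # "scatterer diameter" in those units
rng = np.random.default_rng(2)
spectrum = 1.0 + 0.5 * np.cos(d_true * k) + 0.005 * rng.standard_normal(k.size)

deriv = np.diff(spectrum)                 # the derivative is self-normalizing
deriv -= deriv.mean()
ac = np.correlate(deriv, deriv, mode="full")[deriv.size - 1:]
ac /= ac[0]

dk = k[1] - k[0]
lag = 1 + np.argmin(ac[1:150])            # first minimum within one period
d_est = np.pi / (lag * dk)                # cosine autocorrelation dips at pi/d
```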

  2. A method for simple and accurate estimation of fog deposition in a mountain forest using a meteorological model

    NASA Astrophysics Data System (ADS)

    Katata, Genki; Kajino, Mizuo; Hiraki, Takatoshi; Aikawa, Masahide; Kobayashi, Tomiki; Nagai, Haruyasu

    2011-10-01

    To apply a meteorological model to investigate fog occurrence, acidification and deposition in mountain forests, the meteorological model WRF was modified to calculate fog deposition accurately by the simple linear function of fog deposition onto vegetation derived from numerical experiments using the detailed multilayer atmosphere-vegetation-soil model (SOLVEG). The modified version of WRF that includes fog deposition (fog-WRF) was tested in a mountain forest on Mt. Rokko in Japan. fog-WRF provided a distinctly better prediction of liquid water content of fog (LWC) than the original version of WRF. It also successfully simulated throughfall observations due to fog deposition inside the forest during the summer season that excluded the effect of forest edges. Using the linear relationship between fog deposition and altitude given by the fog-WRF calculations and the data from throughfall observations at a given altitude, the vertical distribution of fog deposition can be roughly estimated in mountain forests. A meteorological model that includes fog deposition will be useful in mapping fog deposition in mountain cloud forests.

  3. An accurate procedure for estimating the phase speed of ocean waves from observations by satellite borne altimeters

    NASA Astrophysics Data System (ADS)

    De-Leon, Yair; Paldor, Nathan

    2017-08-01

    Observations of sea surface height (SSH) fields using satellite borne altimeters were conducted starting in the 1990s in various parts of the world ocean. Currently, a long period of 20 years of calibrated and accurate altimeter observations of Sea Surface Height Anomalies (SSHA) is publicly available and ready to be examined for determining the rate of westward propagation of these anomalies, which are interpreted as a surface manifestation of linear Rossby waves that propagate westward in the ocean thermocline or as nonlinear eddies. The basis for estimating the speed of westward propagation of SSHA is time-longitude (Hovmöller) diagrams of the SSHA field at fixed latitude. In such a diagram the westward propagation is evident from a left-upward tilt of constant SSHA values (i.e. contours), and the angle between this tilt and the ordinate is directly proportional to the speed of westward propagation. In this work we use synthetically generated noisy data to examine the accuracy of three different methods that have been separately used in previous studies for estimating this slope (angle) of the time-longitude diagram: The first is the application of the Radon transform, used in image processing for detecting structures on an image. The second method is the application of a 2D Fast Fourier Transform that yields a frequency-wavenumber diagram of the amplitudes, so the frequency and wavenumber at which the maximum amplitude occurs determine the phase speed, i.e. the slope. The third method constitutes an adaptation of the Radon transform to a propagating wave in which structures of minimal variance in the image are identified. The three methods do not always yield the same phase speed value and our analysis of the synthetic data shows that an estimate of the phase speed at any given latitude should be considered valid only when at least two of the methods yield the same value. The relevance of the suggested procedure to observed signals is verified by applying it to observed
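    The 2D-FFT variant of the slope estimate can be sketched on synthetic data: build a noisy Hovmöller diagram, find the dominant peak in frequency-wavenumber space, and read off the phase speed. The wave parameters, grid and noise level below are arbitrary.

```python
import numpy as np

# Synthetic Hovmoller diagram: westward-propagating wave plus noise.
nx, nt = 128, 128
dx, dt = 1.0, 1.0                         # deg longitude, days (illustrative)
x, t = np.arange(nx) * dx, np.arange(nt) * dt
k0, f0 = 8 / 128, 4 / 128                 # cycles/deg, cycles/day (on-grid)
rng = np.random.default_rng(0)
hov = np.sin(2 * np.pi * (k0 * x[None, :] + f0 * t[:, None]))
hov += 0.3 * rng.standard_normal((nt, nx))

# For a pattern sin(2*pi*(k*x + f*t)), crests satisfy x = -(f/k) t, so c = -f/k.
F = np.abs(np.fft.fft2(hov))
F[0, 0] = 0.0                             # ignore the mean
i, j = np.unravel_index(np.argmax(F), F.shape)
f_pk = np.fft.fftfreq(nt, dt)[i]
k_pk = np.fft.fftfreq(nx, dx)[j]
speed = -f_pk / k_pk                      # deg/day; negative = westward
```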

  4. A method for estimating peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area

    USGS Publications Warehouse

    Asquith, William H.; Cleveland, Theodore G.; Roussel, Meghan C.

    2011-01-01

    Estimates of peak and time of peak streamflow for small watersheds (less than about 640 acres) in a suburban to urban, low-slope setting are needed for drainage design that is cost-effective and risk-mitigated. During 2007-10, the U.S. Geological Survey (USGS), in cooperation with the Harris County Flood Control District and the Texas Department of Transportation, developed a method to estimate peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area. To develop the method, 24 watersheds in the study area with drainage areas less than about 3.5 square miles (2,240 acres) and with concomitant rainfall and runoff data were selected. The method is based on conjunctive analysis of rainfall and runoff data in the context of the unit hydrograph method and the rational method. For the unit hydrograph analysis, a gamma distribution model of unit hydrograph shape (a gamma unit hydrograph) was chosen and parameters estimated through matching of modeled peak and time of peak streamflow to observed values on a storm-by-storm basis. Watershed mean or watershed-specific values of peak and time to peak ("time to peak" is a parameter of the gamma unit hydrograph and is distinct from "time of peak") of the gamma unit hydrograph were computed. Two regression equations to estimate peak and time to peak of the gamma unit hydrograph that are based on watershed characteristics of drainage area and basin-development factor (BDF) were developed. For the rational method analysis, a lag time (time-R), volumetric runoff coefficient, and runoff coefficient were computed on a storm-by-storm basis. Watershed-specific values of these three metrics were computed. A regression equation to estimate time-R based on drainage area and BDF was developed. Overall arithmetic means of volumetric runoff coefficient (0.41 dimensionless) and runoff coefficient (0.25 dimensionless) for the 24 watersheds were used to express the rational
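    The gamma unit hydrograph referred to above is commonly written q(t) = q_p[(t/T_p)e^(1-t/T_p)]^K, which peaks at q_p when t = T_p; a minimal sketch with invented parameter values (not the report's regression estimates):

```python
import numpy as np

def gamma_uh(t, qp, tp, K):
    """Gamma unit hydrograph q(t) = qp * [(t/tp) * exp(1 - t/tp)]**K."""
    r = np.asarray(t, dtype=float) / tp
    return qp * (r * np.exp(1.0 - r)) ** K

t = np.linspace(0.01, 10.0, 1000)         # hours (illustrative)
q = gamma_uh(t, qp=1.0, tp=2.0, K=3.0)    # peaks at q = qp when t = tp
```

Larger K gives a narrower, more peaked hydrograph, which is why K can be related to watershed characteristics such as the basin-development factor.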

  5. 49 CFR Appendix G to Part 222 - Excess Risk Estimates for Public Highway-Rail Grade Crossings

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... excess risk estimate — Nation (except Florida East Coast Railway and Chicago Region crossings): Passive, 74.9; Flashers, ... Chicago Region crossings: Passive, to be determined; Flashers only, to be determined; Flashers with gates, to ...

  6. Observing Volcanic Thermal Anomalies from Space: How Accurate is the Estimation of the Hotspot's Size and Temperature?

    NASA Astrophysics Data System (ADS)

    Zaksek, K.; Pick, L.; Lombardo, V.; Hort, M. K.

    2015-12-01

    Measuring the heat emission from active volcanic features on the basis of infrared satellite images contributes to the volcano's hazard assessment. Because these thermal anomalies only occupy a small fraction (< 1 %) of a typically resolved target pixel (e.g. from Landsat 7, MODIS) the accurate determination of the hotspot's size and temperature is however problematic. Conventionally this is overcome by comparing observations in at least two separate infrared spectral wavebands (Dual-Band method). We investigate the resolution limits of this thermal un-mixing technique by means of a uniquely designed indoor analog experiment. Therein the volcanic feature is simulated by an electrical heating alloy of 0.5 mm diameter installed on a plywood panel of high emissivity. Two thermographic cameras (VarioCam high resolution and ImageIR 8300 by Infratec) record images of the artificial heat source in wavebands comparable to those available from satellite data. These range from the short-wave infrared (1.4-3 µm) over the mid-wave infrared (3-8 µm) to the thermal infrared (8-15 µm). In the conducted experiment the pixel fraction of the hotspot was successively reduced by increasing the camera-to-target distance from 3 m to 35 m. On the basis of an individual target pixel the expected decrease of the hotspot pixel area with distance at a relatively constant wire temperature of around 600 °C was confirmed. The deviation of the hotspot's pixel fraction yielded by the Dual-Band method from the theoretically calculated one was found to be within 20 % up until a target distance of 25 m. This means that a reliable estimation of the hotspot size is only possible if the hotspot is larger than about 3 % of the pixel area, a resolution boundary most remotely sensed volcanic hotspots fall below. Future efforts will focus on the investigation of a resolution limit for the hotspot's temperature by varying the alloy's amperage. Moreover, the un-mixing results for more realistic multi
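    The Dual-Band idea, two bands and two unknowns (hotspot pixel fraction and hotspot temperature) with the background temperature assumed known, can be sketched with Planck radiances and a brute-force grid-search inversion. The band wavelengths, temperatures, fractions and grids below are illustrative, not the experiment's values.

```python
import numpy as np

H_PLANCK, C_LIGHT, K_BOLTZ = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Planck spectral radiance at wavelength lam (m) and temperature T (K)."""
    return (2.0 * H_PLANCK * C_LIGHT**2 / lam**5) / (
        np.exp(H_PLANCK * C_LIGHT / (lam * K_BOLTZ * T)) - 1.0)

# Forward model: mixed-pixel radiance in two bands.
lam_mir, lam_tir = 4.0e-6, 11.0e-6        # mid-wave and thermal infrared bands
p_true, Th_true, Tb = 0.02, 873.0, 300.0  # 2% hotspot at ~600 degC, known background
R_mir = p_true * planck(lam_mir, Th_true) + (1 - p_true) * planck(lam_mir, Tb)
R_tir = p_true * planck(lam_tir, Th_true) + (1 - p_true) * planck(lam_tir, Tb)

# Dual-Band inversion: search (p, Th) that reproduces both band radiances.
ps = np.linspace(0.001, 0.1, 200)
ths = np.linspace(500.0, 1200.0, 300)
P, TH = np.meshgrid(ps, ths)
err = ((P * planck(lam_mir, TH) + (1 - P) * planck(lam_mir, Tb) - R_mir) / R_mir) ** 2 \
    + ((P * planck(lam_tir, TH) + (1 - P) * planck(lam_tir, Tb) - R_tir) / R_tir) ** 2
i, j = np.unravel_index(np.argmin(err), err.shape)
p_est, Th_est = ps[j], ths[i]
```

With noisy radiances and very small fractions, the two-band system becomes ill-conditioned, which is the resolution limit the experiment probes.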

  7. The Magnitude of Time-Dependent Bias in the Estimation of Excess Length of Stay Attributable to Healthcare-Associated Infections.

    PubMed

    Nelson, Richard E; Nelson, Scott D; Khader, Karim; Perencevich, Eli L; Schweizer, Marin L; Rubin, Michael A; Graves, Nicholas; Harbarth, Stephan; Stevens, Vanessa W; Samore, Matthew H

    2015-09-01

    BACKGROUND Estimates of the excess length of stay (LOS) attributable to healthcare-associated infections (HAIs) in which the total LOS of patients with and without HAIs is compared are biased because of failure to account for the timing of infection. Alternate methods that appropriately treat HAI as a time-varying exposure are multistate models and cohort studies that match on the time of infection. We examined the magnitude of this time-dependent bias in published studies that compared different methodological approaches. METHODS We conducted a systematic review of the published literature to identify studies that report attributable LOS estimates using both total-LOS (time-fixed) methods and either multistate models or matching of patients with and without HAIs on the timing of infection. RESULTS Of the 7 studies that compared time-fixed methods to multistate models, conventional methods resulted in estimates of the LOS attributable to HAIs that were, on average, 9.4 days longer or 238% greater than those generated using multistate models. Of the 5 studies that compared time-fixed methods to matching on timing of infection, conventional methods resulted in estimates of the LOS attributable to HAIs that were, on average, 12.6 days longer or 139% greater than those generated by matching on timing of infection. CONCLUSION Our results suggest that estimates of the attributable LOS due to HAIs depend heavily on the methods used to generate those estimates. Overestimation of this effect can lead to incorrect assumptions about the likely cost savings from HAI prevention measures.
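    The direction and size of the time-dependent bias are easy to reproduce in a toy simulation (all distributions and numbers invented): a patient must still be in hospital to acquire an HAI, so comparing total LOS selects long-stay patients into the infected group and inflates the estimate well above the days the infection actually adds.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
base_los = rng.exponential(7.0, n)          # LOS without infection, days
inf_day = rng.exponential(10.0, n)          # day an HAI would occur, if still admitted
infected = inf_day < base_los               # only patients still in hospital get infected
true_extra = 4.0                            # days genuinely added by the HAI
los = base_los + np.where(infected, true_extra, 0.0)

# Time-fixed (biased) estimate: compare total LOS of infected vs uninfected.
biased = los[infected].mean() - los[~infected].mean()
```

Here the time-fixed comparison yields roughly 11 days of "attributable" stay even though the infection only adds 4, mirroring the 2- to 3-fold overestimates reported above.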

  8. Estimating the economic value of ice climbing in Hyalite Canyon: An application of travel cost count data models that account for excess zeros.

    PubMed

    Anderson, D Mark

    2010-01-01

    Recently, the sport of ice climbing has seen a dramatic increase in popularity. This paper uses the travel cost method to estimate the demand for ice climbing in Hyalite Canyon, Montana, one of the premier ice climbing venues in North America. Access to Hyalite and other ice climbing destinations has been put at risk due to liability issues, public land management agendas, and winter road conditions. To this point, there has been no analysis of the economic benefits of ice climbing. In addition to the novel outdoor recreation application, this study applies econometric methods designed to deal with "excess zeros" in the data. Depending upon model specification, per person per trip values are estimated to be in the range of $76 to $135.
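    A minimal count-data travel cost sketch on simulated data (the paper additionally corrects for excess zeros, which this plain Poisson fit does not): in a semi-log count model of trips on travel cost, the consumer surplus per trip is -1/beta_travel-cost.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
tc = rng.uniform(10.0, 200.0, n)               # travel cost, $ (illustrative)
b0, b1 = 1.0, -0.01                            # true coefficients
y = rng.poisson(np.exp(b0 + b1 * tc))          # annual trips

# Poisson regression by Newton-Raphson (IRLS) on the log-likelihood.
X = np.column_stack([np.ones(n), tc])
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)
    hess = X.T @ (X * mu[:, None])             # Poisson variance equals the mean
    beta += np.linalg.solve(hess, grad)

cs_per_trip = -1.0 / beta[1]                   # consumer surplus per trip, $
```

With the invented b1 = -0.01, the implied surplus is about $100 per trip; the paper's zero-inflated and truncated specifications change the fitted coefficient, which is why its estimates span $76 to $135.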

  9. Estimating the Economic Value of Ice Climbing in Hyalite Canyon: An Application of Travel Cost Count Data Models that Account for Excess Zeros*

    PubMed Central

    Anderson, D. Mark

    2009-01-01

    Recently, the sport of ice climbing has seen a drastic increase in popularity. This paper uses the travel cost method to estimate the demand for ice climbing in Hyalite Canyon, Montana, one of the premier ice climbing venues in North America. Access to Hyalite and other ice climbing destinations has been put at risk due to liability issues, public land management agendas, and winter road conditions. To this point, there has been no analysis of the economic benefits of ice climbing. In addition to the novel outdoor recreation application, this study applies econometric methods designed to deal with “excess zeros” in the data. Depending upon model specification, per person per trip values are estimated to be in the range of $76 to $135. PMID:20044202

  10. Estimating the decline in excess risk of chronic obstructive pulmonary disease following quitting smoking - a systematic review based on the negative exponential model.

    PubMed

    Lee, Peter N; Fry, John S; Forey, Barbara A

    2014-03-01

    We quantified the decline in COPD risk following quitting using the negative exponential model, as previously carried out for other smoking-related diseases. We identified 14 blocks of RRs (from 11 studies) comparing current smokers, former smokers (by time quit) and never smokers, some studies providing sex-specific blocks. Corresponding pseudo-numbers of cases and controls/at risk formed the data for model-fitting. We estimated the half-life (H, the time since quitting at which the excess risk falls to half that of a continuing smoker) for each block, except for one in which no decline with quitting was evident and H was not estimable. For the remaining 13 blocks, goodness-of-fit to the model was generally adequate, the combined estimate of H being 13.32 (95% CI 11.86-14.96) years. There was no heterogeneity in H, overall or by various studied sources. Sensitivity analyses allowing for reverse causation or different assumed times for the final quitting period had little effect on the results. The model summarizes quitting data well. The estimate of 13.32 years is substantially larger than recent estimates of 4.40 years for ischaemic heart disease and 4.78 years for stroke, and also larger than the 9.93 years for lung cancer. Heterogeneity was unimportant for COPD, unlike for the other three diseases.
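    The negative exponential model can be sketched directly: the fraction of a continuing smoker's excess risk that remains t years after quitting is modelled as exp(-t ln2 / H), and H is recovered from the slope of log-excess against time. The RRs below are invented for illustration, not the review's data.

```python
import numpy as np

# Illustrative RRs for former smokers by years since quitting (invented).
t = np.array([2.5, 7.5, 15.0, 25.0])       # midpoint years since quitting
rr_current, rr_never = 3.0, 1.0
rr_former = np.array([2.6, 2.0, 1.5, 1.2])

# Negative exponential model: excess(t) = excess(0) * exp(-ln(2) * t / H).
excess_frac = (rr_former - rr_never) / (rr_current - rr_never)
lam = np.polyfit(t, np.log(excess_frac), 1)[0]   # slope of log-excess vs time
H = -np.log(2.0) / lam                           # half-life, years
```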

  11. Estimation of excess mortality due to long-term exposure to PM2.5 in Japan using a high-resolution model for present and future scenarios

    NASA Astrophysics Data System (ADS)

    Goto, Daisuke; Ueda, Kayo; Ng, Chris Fook Sheng; Takami, Akinori; Ariga, Toshinori; Matsuhashi, Keisuke; Nakajima, Teruyuki

    2016-09-01

    Particulate matter with a diameter of less than 2.5 μm, known as PM2.5, can affect human health, especially in elderly people. Because of the imminent aging of society in most developed countries, the human health impacts of PM2.5 must be evaluated. In this study, we used a global-to-regional atmospheric transport model to simulate PM2.5 in Japan with a high-resolution stretched grid system (∼10 km for the high-resolution model, HRM) for the present (2000) and the future (2030, as proposed by the Representative Concentration Pathway 4.5, RCP4.5). We also used the same model with a low-resolution uniform grid system (∼100 km for the low-resolution model, LRM). These calculations were conducted by nudging meteorological fields obtained from an atmosphere-ocean coupled model and providing emission inventories used in the coupled model. After correcting for bias, we calculated the excess mortality due to long-term exposure to PM2.5 among the elderly (over 65 years old) at different minimum PM2.5 concentration (MINPM) levels to account for uncertainty, using the simulated PM2.5 distributions in a concentration-response function. As a result, we estimated the excess mortality for all of Japan to be 31,300 (95% confidence interval: 20,700 to 42,600) people in 2000 and 28,600 (95% confidence interval: 19,000 to 38,700) people in 2030 using the HRM with a MINPM of 5.8 μg/m3. In contrast, the LRM resulted in underestimates of approximately 30% (PM2.5 concentrations in 2000 and 2030), approximately 60% (excess mortality in 2000) and approximately 90% (excess mortality in 2030) compared to the HRM results. We also found that the uncertainty in the MINPM value, especially for low PM2.5 concentrations in the future (2030), can cause large variability in the estimates, ranging from 0 (MINPM of 15 μg/m3 in both HRM and LRM) to 95,000 (MINPM of 0 μg/m3 in HRM) people.
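    The role of the MINPM threshold can be seen in a minimal excess-mortality calculation with a log-linear concentration-response function. Every number below (relative risk, baseline rate, population, concentrations) is an invented placeholder, not the study's values.

```python
import numpy as np

# Hedged sketch of a log-linear concentration-response excess-mortality estimate.
rr_per_10 = 1.08                     # assumed relative risk per 10 ug/m3 PM2.5
beta = np.log(rr_per_10) / 10.0      # log-linear coefficient per ug/m3
y0 = 0.02                            # assumed baseline annual mortality rate (elderly)
pop = 1.0e6                          # assumed exposed elderly population
pm, minpm = 15.0, 5.8                # annual-mean PM2.5 and MINPM threshold, ug/m3

af = 1.0 - np.exp(-beta * max(pm - minpm, 0.0))   # attributable fraction
excess_deaths = y0 * pop * af
```

Raising MINPM shrinks the exposure term (pm - minpm) toward zero, which is exactly why the study's estimates range from 0 to 95,000 people across MINPM choices.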

  12. Excessive Tanning

    PubMed Central

    Sansone, Lori A.

    2010-01-01

    Excessive tanning appears to be evident in about one quarter of regular sunbathers. Susceptible individuals are likely to be young Caucasians from Western societies. Despite ongoing education by the media to the public about the risks of excessive exposure to ultraviolet radiation and the availability of potent sunscreens, there seems to be a concurrent proliferation of tanning facilities. What might be potential psychological explanations for excessive or pathological tanning? Psychopathological explanations may exist on both Axes I and II and include substance use, obsessive-compulsive, body dysmorphic, and borderline personality disorders. While there is no known treatment for pathological sunbathing, we discuss several treatment interventions from the literature that have been successfully used for the general public. PMID:20622941

  13. Impact of measurement error in radon exposure on the estimated excess relative risk of lung cancer death in a simulated study based on the French Uranium Miners' Cohort.

    PubMed

    Allodji, Rodrigue S; Leuraud, Klervi; Thiébaut, Anne C M; Henry, Stéphane; Laurier, Dominique; Bénichou, Jacques

    2012-05-01

    Measurement error (ME) can lead to bias in the analysis of epidemiologic studies. Here a simulation study is described that is based on data from the French Uranium Miners' Cohort and that was conducted to assess the effect of ME on the estimated excess relative risk (ERR) of lung cancer death associated with radon exposure. Starting from a scenario without any ME, data were generated containing successively Berkson or classical ME depending on time periods, to reflect changes in the measurement of exposure to radon ((222)Rn) and its decay products over time in this cohort. Results indicate that ME attenuated the level of association with radon exposure, with a negative bias percentage on the order of 60% on the ERR estimate. Sensitivity analyses showed the consequences of specific ME characteristics (type, size, structure, and distribution) on the ERR estimates. In the future, it appears important to correct for ME upon analyzing cohorts such as this one to decrease bias in estimates of the ERR of adverse events associated with exposure to ionizing radiation.
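    The attenuation the study reports is easy to reproduce in a toy regression (all distributions invented, and a linear response is used in place of the cohort's excess-relative-risk model): classical error biases the slope toward zero, while Berkson-type error leaves it roughly unbiased.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
beta = 0.01                                   # true slope of a linear ERR-style model

# Classical error: measured = true x independent noise -> slope is attenuated.
x_true = rng.lognormal(3.0, 0.8, n)           # "true" cumulative exposure (illustrative)
x_meas = x_true * rng.lognormal(0.0, 0.5, n)
y = 1.0 + beta * x_true + rng.normal(0.0, 0.5, n)
slope_true = np.polyfit(x_true, y, 1)[0]
slope_classical = np.polyfit(x_meas, y, 1)[0]

# Berkson error: true exposure scatters around the assigned value with mean
# equal to it -> slope is roughly unbiased, though less precise.
z = rng.lognormal(3.0, 0.6, n)                # assigned (e.g. job-period mean) exposure
x_true_b = z * rng.lognormal(-0.125, 0.5, n)  # E[true | assigned] = assigned
y_b = 1.0 + beta * x_true_b + rng.normal(0.0, 0.5, n)
slope_berkson = np.polyfit(z, y_b, 1)[0]
```

The classical-error slope comes out roughly half the true value, the same order as the ~60% attenuation of the ERR reported above.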

  14. Methodological extensions of meta-analysis with excess relative risk estimates: application to risk of second malignant neoplasms among childhood cancer survivors treated with radiotherapy.

    PubMed

    Doi, Kazutaka; Mieno, Makiko N; Shimada, Yoshiya; Yonehara, Hidenori; Yoshinaga, Shinji

    2014-09-01

    Although radiotherapy is recognized as an established risk factor for second malignant neoplasms (SMNs), the dose response of SMNs following radiotherapy has not been well characterized. In our previous meta-analysis of the risks of SMNs occurring among children who have received radiotherapy, the small number of eligible studies precluded a detailed evaluation. Therefore, to increase the number of eligible studies, we developed a method of calculating excess relative risk (ERR) per Gy estimates from studies for which the relative risk estimates for several dose categories were available. Comparing the calculated ERR with that described in several original papers validated the proposed method. This enabled us to increase the number of studies, which we used to conduct a meta-analysis. The overall ERR per Gy estimate of radiotherapy over 26 relevant studies was 0.60 (95%CI: 0.30-1.20), which is smaller than the corresponding estimate for atomic bomb survivors exposed to radiation as young children (1.7; 95% CI: 1.1-2.5). A significant decrease in ERR per Gy with increase in age at exposure (0.85 times per annual increase) was observed in the meta-regression. Heterogeneity was suggested by Cochran's Q statistic (P < 0.001), which may be partly accounted for by age at exposure.
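    One way to convert category-level relative risks into an ERR-per-Gy figure is to fit the linear model RR = 1 + beta x dose through the origin by inverse-variance weighted least squares on the excess (RR - 1). This is a generic sketch of that conversion, with invented numbers; it is not necessarily the exact calculation the authors validated.

```python
import numpy as np

# Illustrative relative risks by mean dose category (invented).
dose = np.array([0.5, 2.0, 5.0, 10.0])          # mean organ dose, Gy
rr = np.array([1.4, 2.1, 4.2, 7.0])
se_log_rr = np.array([0.3, 0.25, 0.2, 0.2])     # SE of log RR per category

# Linear ERR model RR = 1 + beta * dose, weighted by the delta-method
# variance of RR itself, var(RR) ~ (RR * SE_logRR)^2.
w = 1.0 / (rr * se_log_rr) ** 2
beta = np.sum(w * dose * (rr - 1.0)) / np.sum(w * dose ** 2)   # ERR per Gy
```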

  15. Methodological extensions of meta-analysis with excess relative risk estimates: application to risk of second malignant neoplasms among childhood cancer survivors treated with radiotherapy

    PubMed Central

    Doi, Kazutaka; Mieno, Makiko N.; Shimada, Yoshiya; Yonehara, Hidenori; Yoshinaga, Shinji

    2014-01-01

    Although radiotherapy is recognized as an established risk factor for second malignant neoplasms (SMNs), the dose response of SMNs following radiotherapy has not been well characterized. In our previous meta-analysis of the risks of SMNs occurring among children who have received radiotherapy, the small number of eligible studies precluded a detailed evaluation. Therefore, to increase the number of eligible studies, we developed a method of calculating excess relative risk (ERR) per Gy estimates from studies for which the relative risk estimates for several dose categories were available. Comparing the calculated ERR with that described in several original papers validated the proposed method. This enabled us to increase the number of studies, which we used to conduct a meta-analysis. The overall ERR per Gy estimate of radiotherapy over 26 relevant studies was 0.60 (95%CI: 0.30–1.20), which is smaller than the corresponding estimate for atomic bomb survivors exposed to radiation as young children (1.7; 95% CI: 1.1–2.5). A significant decrease in ERR per Gy with increase in age at exposure (0.85 times per annual increase) was observed in the meta-regression. Heterogeneity was suggested by Cochran's Q statistic (P < 0.001), which may be partly accounted for by age at exposure. PMID:25037101

  16. Counseling for fetal macrosomia: an estimated fetal weight of 4,000 g is excessively low.

    PubMed

    Peleg, David; Warsof, Steven; Wolf, Maya Frank; Perlitz, Yuri; Shachar, Inbar Ben

    2015-01-01

    Because of the known complications of fetal macrosomia, our hospital's policy has been to discuss the risks of shoulder dystocia and cesarean section (CS) in mothers with a sonographic estimated fetal weight (SEFW) ≥ 4,000 g at term. The present study was performed to determine the effect of this policy on CS rates and pregnancy outcome. We examined the pregnancy outcomes of the macrosomic (≥ 4,000 g) neonates in two cohorts of nondiabetic low risk women at term without preexisting indications for cesarean: (1) SEFW ≥ 4,000 g (correctly suspected macrosomia) and (2) SEFW < 4,000 g (unsuspected macrosomia). There were 238 neonates in the correctly suspected group and 205 neonates in the unsuspected macrosomia group, respectively. Vaginal delivery was accomplished in 52.1% of the suspected group and 90.7% of the unsuspected group, respectively, p < 0.001. There was no difference in the rates of shoulder dystocia. The odds ratio for CS was 9.0 (95% confidence interval, 5.3-15.4) when macrosomia was correctly suspected. The policy of discussing the risk of macrosomia with SEFW ≥ 4,000 g to women is not justified. A higher SEFW to trigger counseling for shoulder dystocia and CS, more consistent with American College of Obstetrics and Gynecology (ACOG) guidelines, should be considered.

  17. Quaternion-based unscented Kalman filter for accurate indoor heading estimation using wearable multi-sensor system.

    PubMed

    Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng

    2015-05-07

    Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, its accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the waist of the pedestrian and on the quadrotor UAV, respectively, compared to the reference path.
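    A full quaternion UKF is beyond a short sketch, but the reason fusion helps, a gyro that drifts versus a magnetometer that is noisy but drift-free, can be shown with a 1-D complementary filter on heading. All signal parameters below are invented, and this filter is a much simpler stand-in for the paper's UKF.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n = 0.01, 10_000                     # 100 Hz for 100 s
true_rate = 10.0                         # deg/s constant turn (illustrative)
true_heading = np.cumsum(np.full(n, true_rate * dt)) % 360.0

gyro = true_rate + 0.5 + 0.05 * rng.standard_normal(n)        # biased, noisy (deg/s)
mag = (true_heading + 8.0 * rng.standard_normal(n)) % 360.0   # noisy but drift-free

def wrap(a):
    """Wrap an angle difference to [-180, 180)."""
    return (a + 180.0) % 360.0 - 180.0

# Complementary filter: integrate the gyro, gently nudge toward the magnetometer.
alpha = 0.999
est = np.empty(n)
est[0] = mag[0]
for k in range(1, n):
    pred = est[k - 1] + gyro[k] * dt
    est[k] = (pred + (1.0 - alpha) * wrap(mag[k] - pred)) % 360.0

gyro_only = (mag[0] + np.cumsum(gyro * dt)) % 360.0
err_fused = np.abs(wrap(est - true_heading)).mean()
err_gyro = np.abs(wrap(gyro_only - true_heading)).mean()
```

The gyro-only heading drifts without bound because of its bias, while the fused estimate settles near a small steady-state offset, the same trade-off the UKF resolves more rigorously.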

  18. Quaternion-Based Unscented Kalman Filter for Accurate Indoor Heading Estimation Using Wearable Multi-Sensor System

    PubMed Central

    Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng

    2015-01-01

    Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, its accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the waist of the pedestrian and on the quadrotor UAV, respectively, compared to the reference path. PMID:25961384

  19. Markov chain Monte Carlo estimation of a multiparameter decision model: consistency of evidence and the accurate assessment of uncertainty.

    PubMed

    Ades, A E; Cliffe, S

    2002-01-01

    Decision models are usually populated 1 parameter at a time, with 1 item of information informing each parameter. Often, however, data may not be available on the parameters themselves but on several functions of parameters, and there may be more items of information than there are parameters to be estimated. The authors show how in these circumstances all the model parameters can be estimated simultaneously using Bayesian Markov chain Monte Carlo methods. Consistency of the information and/or the adequacy of the model can also be assessed within this framework. Statistical evidence synthesis using all available data should result in more precise estimates of parameters and functions of parameters, and is compatible with the emphasis currently placed on systematic use of evidence. To illustrate this, WinBUGS software is used to estimate a simple 9-parameter model of the epidemiology of HIV in women attending prenatal clinics, using information on 12 functions of parameters, and to thereby compute the expected net benefit of 2 alternative prenatal testing strategies, universal testing and targeted testing of high-risk groups. The authors demonstrate improved precision of estimates, and lower estimates of the expected value of perfect information, resulting from the use of all available data.
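    The core idea, estimating parameters jointly when some data inform functions of the parameters rather than the parameters themselves, fits in a tiny Metropolis sampler. The toy model and counts below are invented (the paper's application is a 9-parameter HIV model fitted in WinBUGS).

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy evidence synthesis: a and b are each informed directly, and a third
# dataset informs their product a*b, so all three datasets are used jointly.
y1, n1 = 30, 100      # informs a
y2, n2 = 60, 100      # informs b
y3, n3 = 20, 100      # informs a * b

def log_post(a, b):
    """Log posterior with flat priors on (0, 1) and binomial likelihoods."""
    if not (0.0 < a < 1.0 and 0.0 < b < 1.0):
        return -np.inf
    ab = a * b
    return (y1 * np.log(a) + (n1 - y1) * np.log(1.0 - a)
            + y2 * np.log(b) + (n2 - y2) * np.log(1.0 - b)
            + y3 * np.log(ab) + (n3 - y3) * np.log(1.0 - ab))

# Random-walk Metropolis over (a, b).
a, b = 0.5, 0.5
lp = log_post(a, b)
samples = []
for step in range(20_000):
    a_p = a + 0.05 * rng.standard_normal()
    b_p = b + 0.05 * rng.standard_normal()
    lp_p = log_post(a_p, b_p)
    if np.log(rng.uniform()) < lp_p - lp:
        a, b, lp = a_p, b_p, lp_p
    if step >= 5_000:                      # discard burn-in
        samples.append((a, b))
post = np.array(samples)
```

Because the product data also constrain a and b, the posterior for each parameter is pulled away from its direct-data estimate alone, which is how inconsistency between evidence sources can also be detected.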

  20. Developing accurate survey methods for estimating population sizes and trends of the critically endangered Nihoa Millerbird and Nihoa Finch.

    USGS Publications Warehouse

    Gorresen, P. Marcos; Camp, Richard J.; Brinck, Kevin W.; Farmer, Chris

    2012-01-01

    Point-transect surveys indicated that millerbirds were more abundant than shown by the strip-transect method, and were estimated at 802 birds in 2010 (95%CI = 652 – 964) and 704 birds in 2011 (95%CI = 579 – 837). Point-transect surveys yielded population estimates with improved precision which will permit trends to be detected in shorter time periods and with greater statistical power than is available from strip-transect survey methods. Mean finch population estimates and associated uncertainty were not markedly different among the three survey methods, but the performance of models used to estimate density and population size is expected to improve as the data from additional surveys are incorporated. Using the point-transect survey, the mean finch population size was estimated at 2,917 birds in 2010 (95%CI = 2,037 – 3,965) and 2,461 birds in 2011 (95%CI = 1,682 – 3,348). Preliminary testing of the line-transect method in 2011 showed that it would not generate sufficient detections to effectively model bird density, and consequently, relatively precise population size estimates. Both species were fairly evenly distributed across Nihoa and appear to occur in all or nearly all available habitat. The time expended and area traversed by observers was similar among survey methods; however, point-transect surveys do not require that observers walk a straight transect line, thereby allowing them to avoid culturally or biologically sensitive areas and minimize the adverse effects of recurrent travel to any particular area. In general, point-transect surveys detect more birds than strip-transect methods, thereby improving the precision of the resulting population size and trend estimates. The method is also better suited for the steep and uneven terrain of Nihoa

  1. Probability distributions of the logarithm of inter-spike intervals yield accurate entropy estimates from small datasets.

    PubMed

    Dorval, Alan D

    2008-08-15

    The maximal information that the spike train of any neuron can pass on to subsequent neurons can be quantified as the neuronal firing pattern entropy. Difficulties associated with estimating entropy from small datasets have proven an obstacle to the widespread reporting of firing pattern entropies and, more generally, the use of information theory within the neuroscience community. In the most accessible class of entropy estimation techniques, spike trains are partitioned linearly in time and entropy is estimated from the probability distribution of firing patterns within a partition. Ample previous work has focused on various techniques to minimize the finite dataset bias and standard deviation of entropy estimates from under-sampled probability distributions on spike timing events partitioned linearly in time. In this manuscript we present evidence that all distribution-based techniques would benefit from inter-spike intervals being partitioned in logarithmic time. We show that with logarithmic partitioning, firing rate changes become independent of firing pattern entropy. We delineate the entire entropy estimation process with two example neuronal models, demonstrating the robust improvements in bias and standard deviation that the logarithmic time method yields over two widely used linearly partitioned time approaches.
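
    The logarithmic-partitioning idea above can be sketched as a plug-in entropy estimate over log-binned inter-spike intervals. This is a hedged illustration, not the authors' full pipeline; the function name and default bin count are ours:

```python
import numpy as np

def isi_entropy_log(isis, n_bins=20):
    """Plug-in entropy (bits) of inter-spike intervals binned in log time."""
    log_isis = np.log(np.asarray(isis, dtype=float))
    counts, _ = np.histogram(log_isis, bins=n_bins)
    p = counts[counts > 0] / counts.sum()   # empirical bin probabilities
    return -np.sum(p * np.log2(p))
```

    With small datasets the raw plug-in estimate is biased low; the manuscript's point is that logarithmic partitioning reduces that bias relative to linear partitioning of the same data.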

  2. Genomic instability related to zinc deficiency and excess in an in vitro model: is the upper estimate of the physiological requirements recommended for children safe?

    PubMed

    Padula, Gisel; Ponzinibbio, María Virginia; Gambaro, Rocío Celeste; Seoane, Analía Isabel

    2017-08-01

    Micronutrients are important for the prevention of degenerative diseases due to their role in maintaining genomic stability. Therefore, there is international concern about the need to redefine the optimal mineral and vitamin requirements to prevent DNA damage. We analyzed the cytostatic, cytotoxic, and genotoxic effect of in vitro zinc supplementation to determine the effects of zinc deficiency and excess and whether the upper estimate of the physiological requirement recommended for children is safe. To achieve zinc deficiency, DMEM/Ham's F12 medium (HF12) was chelated (HF12Q). Lymphocytes were isolated from healthy female donors (age range, 5-10 yr) and cultured for 7 d as follows: negative control (HF12, 60 μg/dl ZnSO4); deficient (HF12Q, 12 μg/dl ZnSO4); lower level (HF12Q + 80 μg/dl ZnSO4); average level (HF12Q + 180 μg/dl ZnSO4); upper limit (HF12Q + 280 μg/dl ZnSO4); and excess (HF12Q + 380 μg/dl ZnSO4). The comet (quantitative analysis) and cytokinesis-block micronucleus cytome assays were used. Differences were evaluated with Kruskal-Wallis and ANOVA (p < 0.05). Olive tail moment, tail length, micronuclei frequency, and apoptotic and necrotic percentages were significantly higher in the deficient, upper limit, and excess cultures compared with the negative control, lower, and average limit ones. In vitro zinc supplementation at the lower and average limit (80 and 180 μg/dl ZnSO4) of the physiological requirement recommended for children proved to be the most beneficial in avoiding genomic instability, whereas the deficient, upper limit, and excess (12, 280, and 380 μg/dl) cultures increased DNA and chromosomal damage and apoptotic and necrotic frequencies.

  3. How Accurate Are German Work-Time Data? A Comparison of Time-Diary Reports and Stylized Estimates

    ERIC Educational Resources Information Center

    Otterbach, Steffen; Sousa-Poza, Alfonso

    2010-01-01

    This study compares work time data collected by the German Time Use Survey (GTUS) using the diary method with stylized work time estimates from the GTUS, the German Socio-Economic Panel, and the German Microcensus. Although on average the differences between the time-diary data and the interview data are not large, our results show that significant…

  5. The number of alleles at a microsatellite defines the allele frequency spectrum and facilitates fast accurate estimation of theta.

    PubMed

    Haasl, Ryan J; Payseur, Bret A

    2010-12-01

    Theoretical work focused on microsatellite variation has produced a number of important results, including the expected distribution of repeat sizes and the expected squared difference in repeat size between two randomly selected samples. However, closed-form expressions for the sampling distribution and frequency spectrum of microsatellite variation have not been identified. Here, we use coalescent simulations of the stepwise mutation model to develop gamma and exponential approximations of the microsatellite allele frequency spectrum, a distribution central to the description of microsatellite variation across the genome. For both approximations, the parameter of biological relevance is the number of alleles at a locus, which we express as a function of θ, the population-scaled mutation rate, based on simulated data. Discovered relationships between θ, the number of alleles, and the frequency spectrum support the development of three new estimators of microsatellite θ. The three estimators exhibit roughly similar mean squared errors (MSEs) and all are biased. However, across a broad range of sample sizes and θ values, the MSEs of these estimators are frequently lower than all other estimators tested. The new estimators are also reasonably robust to mutation that includes step sizes greater than one. Finally, our approximation to the microsatellite allele frequency spectrum provides a null distribution of microsatellite variation. In this context, a preliminary analysis of the effects of demographic change on the frequency spectrum is performed. We suggest that simulations of the microsatellite frequency spectrum under evolutionary scenarios of interest may guide investigators to the use of relevant and sometimes novel summary statistics.
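
    The paper's three new estimators are not reproduced here, but a classical stepwise-mutation-model moment estimator illustrates how θ can be read off a summary statistic of microsatellite variation: under the SMM, expected homozygosity is F = 1/√(1 + 2θ). A minimal sketch (function name is ours):

```python
def theta_smm_from_homozygosity(allele_freqs):
    """Classical SMM moment estimator, inverting F = 1/sqrt(1 + 2*theta)."""
    F = sum(f * f for f in allele_freqs)   # expected homozygosity
    return (1.0 / F ** 2 - 1.0) / 2.0
```

    For a locus with two equally frequent alleles, F = 0.5 and the estimate is θ = 1.5.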

  6. A simple method for accurate liver volume estimation by use of curve-fitting: a pilot study.

    PubMed

    Aoyama, Masahito; Nakayama, Yoshiharu; Awai, Kazuo; Inomata, Yukihiro; Yamashita, Yasuyuki

    2013-01-01

    In this paper, we describe the effectiveness of our curve-fitting method by comparing liver volumes estimated by our new technique to volumes obtained with the standard manual contour-tracing method. Hepatic parenchymal-phase images of 13 patients were obtained with multi-detector CT scanners after intravenous bolus administration of 120-150 mL of contrast material (300 mgI/mL). The liver contours of all sections were traced manually by an abdominal radiologist, and the liver volume was computed by summing the volumes inside the contours. The section number between the first and last slice was then divided into 100 equal parts, and each volume was re-sampled by use of linear interpolation. We generated 13 model profile curves by averaging 12 cases, leaving out one case, and we estimated the profile curve for each patient by fitting the volume values at 4 points using a scale and translation transform. Finally, we determined the liver volume by integrating the sampling points of the profile curve. We used Bland-Altman analysis to evaluate the agreement between the volumes estimated with our curve-fitting method and the volumes measured by the manual contour-tracing method. The correlation between the volume measured by manual tracing and that estimated with our curve-fitting method was relatively high (r = 0.98; slope 0.97; p < 0.001). The mean difference between the manual tracing and our method was -22.9 cm(3) (SD of the difference, 46.2 cm(3)). Our volume-estimating technique, which requires the tracing of only 4 images, exhibited a relatively high linear correlation with the manual tracing technique.

  7. How accurate are estimates of glacier ice thickness? Results from ITMIX, the Ice Thickness Models Intercomparison eXperiment

    NASA Astrophysics Data System (ADS)

    Farinotti, Daniel; Brinkerhoff, Douglas J.; Clarke, Garry K. C.; Fürst, Johannes J.; Frey, Holger; Gantayat, Prateek; Gillet-Chaulet, Fabien; Girard, Claire; Huss, Matthias; Leclercq, Paul W.; Linsbauer, Andreas; Machguth, Horst; Martin, Carlos; Maussion, Fabien; Morlighem, Mathieu; Mosbeux, Cyrille; Pandit, Ankur; Portmann, Andrea; Rabatel, Antoine; Ramsankaran, RAAJ; Reerink, Thomas J.; Sanchez, Olivier; Stentoft, Peter A.; Singh Kumari, Sangita; van Pelt, Ward J. J.; Anderson, Brian; Benham, Toby; Binder, Daniel; Dowdeswell, Julian A.; Fischer, Andrea; Helfricht, Kay; Kutuzov, Stanislav; Lavrentiev, Ivan; McNabb, Robert; Hilmar Gudmundsson, G.; Li, Huilin; Andreassen, Liss M.

    2017-04-01

    Knowledge of the ice thickness distribution of glaciers and ice caps is an important prerequisite for many glaciological and hydrological investigations. A wealth of approaches has recently been presented for inferring ice thickness from characteristics of the surface. With the Ice Thickness Models Intercomparison eXperiment (ITMIX) we performed the first coordinated assessment quantifying individual model performance. A set of 17 different models showed that individual ice thickness estimates can differ considerably - locally by a spread comparable to the observed thickness. Averaging the results of multiple models, however, significantly improved the results: on average over the 21 considered test cases, comparison against direct ice thickness measurements revealed deviations on the order of 10 ± 24 % of the mean ice thickness (1σ estimate). Models relying on multiple data sets - such as surface ice velocity fields, surface mass balance, or rates of ice thickness change - showed high sensitivity to input data quality. Together with the requirement of being able to handle large regions in an automated fashion, the capacity of better accounting for uncertainties in the input data will be a key for an improved next generation of ice thickness estimation approaches.
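
    The multi-model averaging reported above can be sketched as an ensemble mean with a per-cell 1σ spread (array shapes and names are illustrative; the actual ITMIX workflow is considerably more involved):

```python
import numpy as np

def ensemble_thickness(model_maps):
    """Mean and 1-sigma spread across stacked model thickness grids."""
    stack = np.asarray(model_maps, dtype=float)   # shape: (model, y, x)
    return stack.mean(axis=0), stack.std(axis=0)
```

    Cells where the spread approaches the mean thickness flag locations where individual models disagree strongly, mirroring the local spread reported in the abstract.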

  8. Is the SenseWear Armband accurate enough to quantify and estimate energy expenditure in healthy adults?

    PubMed

    Santos-Lozano, Alejandro; Hernández-Vicente, Adrián; Pérez-Isaac, Raúl; Santín-Medeiros, Fernanda; Cristi-Montero, Carlos; Casajús, Jose Antonio; Garatachea, Nuria

    2017-03-01

    The SenseWear Armband (SWA) is a monitor that can be used to estimate energy expenditure (EE); however, it has not been validated in healthy adults. The objective of this paper was to study the validity of the SWA for quantifying EE levels. Twenty-three healthy adults (age 40-55 years, mean: 48±3.42 years) performed different types of standardized physical activity (PA) for 10 minutes (rest, walking at 3 and 5 km·h(-1), running at 7 and 9 km·h(-1), and sitting/standing at a rate of 30 cycle·min(-1)). Participants wore the SWA on their right arm, and their EE was measured by indirect calorimetry (IC), the gold standard. There were significant differences between the SWA and IC, except in the group that ran at 9 km·h(-1) (>9 METs). Bland-Altman analysis showed a bias of 1.56 METs (±1.83 METs) and limits of agreement (LOA) at 95% of -2.03 to 5.16 METs. There were indications of heteroscedasticity (R(2) = 0.03; P < 0.05). Analysis of the receiver operating characteristic (ROC) curves showed that the SWA does not seem to be sensitive enough to estimate the level of EE at the highest intensities. The SWA is not as precise in estimating EE as IC, but it could be a useful tool to determine levels of EE at low intensities.
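
    The Bland-Altman statistics quoted above (bias and 95% limits of agreement) follow directly from the paired per-participant differences; a minimal sketch:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias (mean difference) and 95% limits of agreement for paired data."""
    diff = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)                     # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

    Heteroscedasticity, as reported here, is checked separately by regressing the differences on the pairwise means.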

  9. Is the SenseWear Armband accurate enough to quantify and estimate energy expenditure in healthy adults?

    PubMed Central

    Hernández-Vicente, Adrián; Pérez-Isaac, Raúl; Santín-Medeiros, Fernanda; Cristi-Montero, Carlos; Casajús, Jose Antonio; Garatachea, Nuria

    2017-01-01

    Background The SenseWear Armband (SWA) is a monitor that can be used to estimate energy expenditure (EE); however, it has not been validated in healthy adults. The objective of this paper was to study the validity of the SWA for quantifying EE levels. Methods Twenty-three healthy adults (age 40–55 years, mean: 48±3.42 years) performed different types of standardized physical activity (PA) for 10 minutes (rest, walking at 3 and 5 km·h-1, running at 7 and 9 km·h-1, and sitting/standing at a rate of 30 cycle·min-1). Participants wore the SWA on their right arm, and their EE was measured by indirect calorimetry (IC), the gold standard. Results There were significant differences between the SWA and IC, except in the group that ran at 9 km·h-1 (>9 METs). Bland-Altman analysis showed a bias of 1.56 METs (±1.83 METs) and limits of agreement (LOA) at 95% of −2.03 to 5.16 METs. There were indications of heteroscedasticity (R2 = 0.03; P<0.05). Analysis of the receiver operating characteristic (ROC) curves showed that the SWA does not seem to be sensitive enough to estimate the level of EE at the highest intensities. Conclusions The SWA is not as precise in estimating EE as IC, but it could be a useful tool to determine levels of EE at low intensities. PMID:28361062

  10. Improved age modelling and high-precision age estimates of late Quaternary tephras, for accurate palaeoclimate reconstruction

    NASA Astrophysics Data System (ADS)

    Blockley, Simon P. E.; Bronk Ramsey, C.; Pyle, D. M.

    2008-10-01

    The role of tephrochronology, as a dating and stratigraphic tool, in precise palaeoclimate and environmental reconstruction, has expanded significantly in recent years. The power of tephrochronology rests on the fact that a tephra layer can stratigraphically link records at the resolution of as little as a few years, and that the most precise age for a particular tephra can be imported into any site where it is found. In order to maximise the potential of tephras for this purpose it is necessary to have the most precise and robustly tested age estimates available for key tephras. Given the varying number and quality of dates associated with different tephras it is important to be able to build age models to test competing tephra dates. Recent advances in Bayesian age modelling of dates in sequence have radically extended our ability to build such stratigraphic age models. As an example of the potential here we use Bayesian methods, now widely applied, to examine the dating of some key Late Quaternary tephras from Italy. These are: the Agnano Monte Spina Tephra (AMST), the Neapolitan Yellow Tuff (NYT) and the Agnano Pomici Principali (APP), and all of them have multiple estimates of their true age. Further, we use the Bayesian approaches to generate a revised mixed radiocarbon/varve chronology for the important Lateglacial section of the Lago Grande Monticchio record, as a further illustration of what can be achieved by a Bayesian approach. With all three tephras we were able to produce viable model ages for the tephra, validate the proposed 40Ar/39Ar age ranges for these tephras, and provide relatively high precision age models. The results of the Bayesian integration of dating and stratigraphic information suggest that the current best 95% confidence calendar age estimates for the AMST are 4690-4300 cal BP, the NYT 14320-13900 cal BP, and the APP 12380-12140 cal BP.
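
    Full Bayesian sequence modelling (e.g., OxCal-style models with stratigraphic constraints) is beyond a snippet, but the core act of pooling multiple independent age estimates for one tephra can be sketched as an inverse-variance weighted mean. This is a deliberate simplification of the approach described above, with illustrative names:

```python
def combine_age_estimates(ages, sigmas):
    """Inverse-variance weighted mean of independent age estimates (cal BP)."""
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(a * w for a, w in zip(ages, weights)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5      # combined 1-sigma uncertainty
    return mean, sigma
```

    The pooled uncertainty is always smaller than the tightest individual estimate, which is why adding dates (and stratigraphic constraints) sharpens tephra age models.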

  11. Development of Deep Learning Based Data Fusion Approach for Accurate Rainfall Estimation Using Ground Radar and Satellite Precipitation Products

    NASA Astrophysics Data System (ADS)

    Chen, H.; Chandra, C. V.; Tan, H.; Cifelli, R.; Xie, P.

    2016-12-01

    Rainfall estimation based on onboard satellite measurements has been an important topic in satellite meteorology for decades. A number of precipitation products at multiple time and space scales have been developed based upon satellite observations. For example, NOAA Climate Prediction Center has developed a morphing technique (i.e., CMORPH) to produce global precipitation products by combining existing space based rainfall estimates. The CMORPH products are essentially derived based on geostationary satellite IR brightness temperature information and retrievals from passive microwave measurements (Joyce et al. 2004). Although the space-based precipitation products provide an excellent tool for regional and global hydrologic and climate studies as well as improved situational awareness for operational forecasts, its accuracy is limited due to the sampling limitations, particularly for extreme events such as very light and/or heavy rain. On the other hand, ground-based radar is more mature science for quantitative precipitation estimation (QPE), especially after the implementation of dual-polarization technique and further enhanced by urban scale radar networks. Therefore, ground radars are often critical for providing local scale rainfall estimation and a "heads-up" for operational forecasters to issue watches and warnings as well as validation of various space measurements and products. The CASA DFW QPE system, which is based on dual-polarization X-band CASA radars and a local S-band WSR-88DP radar, has demonstrated its excellent performance during several years of operation in a variety of precipitation regimes. The real-time CASA DFW QPE products are used extensively for localized hydrometeorological applications such as urban flash flood forecasting. In this paper, a neural network based data fusion mechanism is introduced to improve the satellite-based CMORPH precipitation product by taking into account the ground radar measurements. A deep learning system is

  12. Measurement of pelvic motion is a prerequisite for accurate estimation of hip joint work in maximum height squat jumping.

    PubMed

    Blache, Yoann; Bobbert, Maarten; Argaud, Sebastien; Pairot de Fontenay, Benoit; Monteil, Karine M

    2013-08-01

    In experiments investigating vertical squat jumping, the HAT segment is typically defined as a line drawn from the hip to some point proximally on the upper body (eg, the neck, the acromion), and the hip joint as the angle between this line and the upper legs (θUL-HAT). In reality, the hip joint is the angle between the pelvis and the upper legs (θUL-pelvis). This study aimed to estimate to what extent hip joint definition affects hip joint work in maximal squat jumping. Moreover, the initial pelvic tilt was manipulated to maximize the difference in hip joint work as a function of hip joint definition. Twenty-two male athletes performed maximum effort squat jumps in three different initial pelvic tilt conditions: backward (pelvisB), neutral (pelvisN), and forward (pelvisF). Hip joint work was calculated by integrating the hip net joint torque with respect to θUL-HAT (WUL-HAT) or with respect to θUL-pelvis (WUL-pelvis). θUL-HAT was greater than θUL-pelvis in all conditions. WUL-HAT overestimated WUL-pelvis by 33%, 39%, and 49% in conditions pelvisF, pelvisN, and pelvisB, respectively. It was concluded that θUL-pelvis should be measured when the mechanical output of the hip extensor muscles is estimated.
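
    Regardless of which joint definition is used, the work computation above is the integral of net joint torque over the corresponding joint angle; a trapezoidal-rule sketch (names are ours):

```python
import numpy as np

def joint_work(torque, angle):
    """Net joint work (J): integral of torque (N*m) over angle (rad)."""
    t = np.asarray(torque, dtype=float)
    a = np.asarray(angle, dtype=float)
    # trapezoidal rule over successive samples of the movement
    return float(np.sum(0.5 * (t[1:] + t[:-1]) * np.diff(a)))
```

    Because θUL-HAT sweeps through a larger range than θUL-pelvis, integrating the same torque over it necessarily inflates the computed work, which is the overestimation the study quantifies.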

  13. How have ART treatment programmes changed the patterns of excess mortality in people living with HIV? Estimates from four countries in East and Southern Africa

    PubMed Central

    Slaymaker, Emma; Todd, Jim; Marston, Milly; Calvert, Clara; Michael, Denna; Nakiyingi-Miiro, Jessica; Crampin, Amelia; Lutalo, Tom; Herbst, Kobus; Zaba, Basia

    2014-01-01

    Background Substantial falls in the mortality of people living with HIV (PLWH) have been observed since the introduction of antiretroviral therapy (ART) in sub-Saharan Africa. However, access and uptake of ART have been variable in many countries. We report the excess deaths observed in PLWH before and after the introduction of ART. We use data from five longitudinal studies in Malawi, South Africa, Tanzania, and Uganda, members of the network for Analysing Longitudinal Population-based HIV/AIDS data on Africa (ALPHA). Methods Individual data from five demographic surveillance sites that conduct HIV testing were used to estimate mortality attributable to HIV, calculated as the difference between the mortality rates in PLWH and HIV-negative people. Excess deaths in PLWH were standardized for age and sex differences and summarized over periods before and after ART became generally available. An exponential regression model was used to explore differences in the impact of ART over the different sites. Results 127,585 adults across the five sites contributed a total of 487,242 person years. Before the introduction of ART, HIV-attributable mortality ranged from 45 to 88 deaths per 1,000 person years. Following ART availability, this reduced to 14–46 deaths per 1,000 person years. Exponential regression modeling showed a reduction of more than 50% (HR = 0.43, 95% CI: 0.32–0.58), compared to the period before ART was available, in mortality at ages 15–54 across all five sites. Discussion Excess mortality in adults living with HIV has reduced by over 50% in five communities in sub-Saharan Africa since the advent of ART. However, mortality rates in adults living with HIV are still 10 times higher than in HIV-negative people, indicating that substantial improvements can be made to reduce mortality further. This analysis shows differences in the impact across the sites, and contrasts with developed countries where mortality among PLWH on ART can be similar to that of the
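
    The excess (HIV-attributable) mortality used above is simply the difference of the two group rates, conventionally expressed per 1,000 person-years; a minimal sketch with illustrative argument names:

```python
def excess_mortality_per_1000(deaths_pos, py_pos, deaths_neg, py_neg):
    """Mortality rate in PLWH minus rate in HIV-negative people,
    per 1,000 person-years of follow-up."""
    return (deaths_pos / py_pos - deaths_neg / py_neg) * 1000.0
```

    In practice, as the abstract notes, the two rates are first standardized for age and sex before differencing, so that compositional differences between the groups do not masquerade as excess mortality.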

  14. HIV Excess Cancers JNCI

    Cancer.gov

    In 2010, an estimated 7,760 new cancers were diagnosed among the nearly 900,000 Americans known to be living with HIV infection. According to the first comprehensive study in the United States, approximately half of these cancers were in excess of what wo

  15. "I know what you told me, but this is what I think:" perceived risk of Alzheimer disease among individuals who accurately recall their genetics-based risk estimate.

    PubMed

    Linnenbringer, Erin; Roberts, J Scott; Hiraki, Susan; Cupples, L Adrienne; Green, Robert C

    2010-04-01

    This study evaluates the Alzheimer disease risk perceptions of individuals who accurately recall their genetics-based Alzheimer disease risk assessment. Two hundred forty-six unaffected first-degree relatives of patients with Alzheimer disease were enrolled in a multisite randomized controlled trial examining the effects of communicating APOE genotype and lifetime Alzheimer disease risk information. Among the 158 participants who accurately recalled their Alzheimer disease risk assessment 6 weeks after risk disclosure, 75 (47.5%) believed their Alzheimer disease risk was more than 5 percentage points different from the Alzheimer disease risk estimate they were given. Within this subgroup, 69.3% believed that their Alzheimer disease risk was higher than what they were told (discordant high), whereas 30.7% believed that their Alzheimer disease risk was lower (discordant low). Participants with a higher baseline risk perception were more likely to have a discordant-high risk perception (P < 0.05). Participants in the discordant-low group were more likely to be APOE epsilon4 positive (P < 0.05) and to score higher on an Alzheimer disease controllability scale (P < 0.05). Our results indicate that even among individuals who accurately recall their Alzheimer disease risk assessment, many people do not take communicated risk estimates at face value. Further exploration of this clinically relevant response to risk information is warranted.

  16. How accurate is the estimation of anthropogenic carbon in the ocean? An evaluation of the ΔC* method

    NASA Astrophysics Data System (ADS)

    Matsumoto, Katsumi; Gruber, Nicolas

    2005-09-01

    The ΔC* method of Gruber et al. (1996) is widely used to estimate the distribution of anthropogenic carbon in the ocean; however, as yet, no thorough assessment of its accuracy has been made. Here we provide a critical re-assessment of the method and determine its accuracy by applying it to synthetic data from a global ocean biogeochemistry model, for which we know the "true" anthropogenic CO2 distribution. Our results indicate that the ΔC* method tends to overestimate anthropogenic carbon in relatively young waters but underestimate it in older waters. Main sources of these biases are (1) the time evolution of the air-sea CO2 disequilibrium, which is not properly accounted for in the ΔC* method, (2) a pCFC ventilation age bias that arises from mixing, and (3) errors in identifying the different end-member water types. We largely support the findings of Hall et al. (2004), who have also identified the first two bias sources. An extrapolation of the errors that we quantified on a number of representative isopycnals to the global ocean suggests a positive bias of about 7% in the ΔC*-derived global anthropogenic CO2 inventory. The magnitude of this bias is within the previously estimated 20% uncertainty of the method, but regional biases can be larger. Finally, we propose two improvements to the ΔC* method in order to account for the evolution of air-sea CO2 disequilibrium and the ventilation age mixing bias.

  17. Reliability of Cardiovascular Risk Calculators to Estimate Accurately the Risk of Cardiovascular Disease in Patients With Sarcoidosis.

    PubMed

    Ungprasert, Patompong; Matteson, Eric L; Crowson, Cynthia S

    2017-09-01

    Chronic inflammation is an independent risk factor for cardiovascular disease (CVD), but most risk calculators, including the Framingham risk score (FRS) and the American College of Cardiology (ACC)/American Heart Association (AHA) risk score do not account for it. These calculators underestimate cardiovascular risk in patients with rheumatoid arthritis and systemic lupus erythematosus. To date, how these scores perform in the estimation of CVD risk in patients with sarcoidosis has not been assessed. In this study, the FRS and the ACC/AHA risk score were calculated for a previously identified cohort of patients with incident cases of sarcoidosis in Olmsted County, Minnesota, United States, from 1989 to 2013 as well as their gender- and age-matched comparators. The standardized incidence ratio (SIR) was estimated as the ratio of the observed to the predicted number of CVD events. All CVD events were identified by diagnosis codes and were verified by individual medical record reviews. The predicted number of CVD events among 188 cases by FRS was 11.8 and the observed number of CVD events was 34, which corresponded to an SIR of 2.88 (95% confidence interval 2.06 to 4.04). FRS underestimated the risk of CVD events in patients with sarcoidosis by gender, age and severity of sarcoidosis. The predicted number of CVD events among cases by ACC/AHA risk score was 4.6 and the observed number of CVD events was 19, corresponding to an SIR of 4.11 (95% confidence interval 2.62 to 6.44). In conclusion, the FRS and the ACC/AHA risk score underestimate the risk of CVD in patients with sarcoidosis. Copyright © 2017 Elsevier Inc. All rights reserved.
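
    The SIR figures above follow from the observed-to-predicted ratio: 34 observed versus 11.8 predicted FRS events gives 2.88. A one-line sketch:

```python
def standardized_incidence_ratio(observed, predicted):
    """SIR > 1 means the risk score under-predicted events in this cohort."""
    return observed / predicted
```

    Confidence intervals for an SIR are typically obtained by treating the observed count as Poisson-distributed, which is how ranges such as 2.06 to 4.04 arise.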

  18. How many measurements are needed to estimate accurate daily and annual soil respiration fluxes? Analysis using data from a temperate rainforest

    NASA Astrophysics Data System (ADS)

    Perez-Quezada, Jorge F.; Brito, Carla E.; Cabezas, Julián; Galleguillos, Mauricio; Fuentes, Juan P.; Bown, Horacio E.; Franck, Nicolás

    2016-12-01

    Making accurate estimations of daily and annual soil respiration (Rs) fluxes is key for understanding the carbon cycle process and projecting effects of climate change. In this study we used high-frequency sampling (24 measurements per day) of Rs in a temperate rainforest during 1 year, with the objective of answering the questions of when and how often measurements should be made to obtain accurate estimations of daily and annual Rs. We randomly selected data to simulate samplings of 1, 2, 4 or 6 measurements per day (distributed either during the whole day or only during daytime), combined with 4, 6, 12, 26 or 52 measurements per year. Based on the comparison of partial-data series with the full-data series, we estimated the performance of different partial sampling strategies based on bias, precision and accuracy. In the case of annual Rs estimation, we compared the performance of interpolation vs. using non-linear modelling based on soil temperature. The results show that, under our study conditions, sampling twice a day was enough to accurately estimate daily Rs (RMSE < 10 % of average daily flux), even if both measurements were done during daytime. The highest reduction in RMSE for the estimation of annual Rs was achieved when increasing from four to six measurements per year, but reductions were still relevant when further increasing the frequency of sampling. We found that increasing the number of field campaigns was more effective than increasing the number of measurements per day, provided a minimum of two measurements per day was used. Including night-time measurements significantly reduced the bias and was relevant in reducing the number of field campaigns when a lower level of acceptable error (RMSE < 5 %) was established. Using non-linear modelling instead of linear interpolation did improve the estimation of annual Rs, but not as expected. In conclusion, given that most of the studies of Rs use manual sampling techniques and apply only one measurement per day, we
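
    The subsampling experiment above can be mimicked by drawing k of the 24 daily measurements and comparing the resulting daily means against the full-data means. This is a hedged sketch under our own simplifying assumptions (the study's exact resampling protocol differs):

```python
import numpy as np

def subsample_rmse_percent(daily_series, k, seed=0):
    """RMSE (% of mean daily flux) of k-sample daily means vs. full data."""
    rng = np.random.default_rng(seed)
    days = np.asarray(daily_series, dtype=float)      # shape (n_days, 24)
    full_mean = days.mean(axis=1)
    # draw k measurements without replacement from each day's 24 samples
    sub_mean = np.array([rng.choice(day, size=k, replace=False).mean()
                         for day in days])
    rmse = np.sqrt(np.mean((sub_mean - full_mean) ** 2))
    return 100.0 * rmse / full_mean.mean()
```

    Running this over many random seeds and values of k reproduces the kind of bias/precision trade-off curves the study uses to recommend a minimum of two measurements per day.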

  19. Accurate spike estimation from noisy calcium signals for ultrafast three-dimensional imaging of large neuronal populations in vivo

    PubMed Central

    Deneux, Thomas; Kaszas, Attila; Szalay, Gergely; Katona, Gergely; Lakner, Tamás; Grinvald, Amiram; Rózsa, Balázs; Vanzetta, Ivo

    2016-01-01

    Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where large noises often contaminate the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. It relies on a physiological model including baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can be either provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding obtained from systematic data investigation (noise level, spiking rate and so on) that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo. PMID:27432255

  20. ReFOLD: a server for the refinement of 3D protein models guided by accurate quality estimates.

    PubMed

    Shuid, Ahmad N; Kempster, Robert; McGuffin, Liam J

    2017-04-10

    ReFOLD is a novel hybrid refinement server with integrated high performance global and local Accuracy Self Estimates (ASEs). The server attempts to identify and to fix likely errors in user supplied 3D models of proteins via successive rounds of refinement. The server is unique in providing output for multiple alternative refined models in a way that allows users to quickly visualize the key residue locations, which are likely to have been improved. This is important, as global refinement of a full chain model may not always be possible, whereas local regions, or individual domains, can often be much improved. Thus, users may easily compare the specific regions of the alternative refined models in which they are most interested e.g. key interaction sites or domains. ReFOLD was used to generate hundreds of alternative refined models for the CASP12 experiment, boosting our group's performance in the main tertiary structure prediction category. Our successful refinement of initial server models combined with our built-in ASEs were instrumental to our second place ranking on Template Based Modeling (TBM) and Free Modeling (FM)/TBM targets. The ReFOLD server is freely available at: http://www.reading.ac.uk/bioinf/ReFOLD/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. Accurate Bond Lengths to Hydrogen Atoms from Single‐Crystal X‐ray Diffraction by Including Estimated Hydrogen ADPs and Comparison to Neutron and QM/MM Benchmarks

    PubMed Central

    Lübben, Jens; Mebs, Stefan; Wagner, Armin; Luger, Peter

    2017-01-01

    Amino acid structures are an ideal test set for method‐development studies in crystallography. High‐resolution X‐ray diffraction data for eight previously studied genetically encoded amino acids are provided, complemented by a non‐standard amino acid. Structures were re‐investigated to study a widely applicable treatment that permits accurate X−H bond lengths to be obtained: this treatment combines refinement of positional hydrogen‐atom parameters using aspherical scattering factors with constrained “TLS+INV” estimated hydrogen anisotropic displacement parameters (H‐ADPs). Tabulated invariom scattering factors allow rapid modeling without further computations, and unconstrained Hirshfeld atom refinement provides a computationally demanding alternative when database entries are missing. Both should incorporate estimated H‐ADPs, as free refinement frequently leads to over‐parameterization and non‐positive definite H‐ADPs irrespective of the aspherical scattering model used. Using estimated H‐ADPs, both methods yield accurate and precise X−H distances in best quantitative agreement with neutron diffraction data (available for five of the test‐set molecules). This work thus removes the last remaining obstacle to obtaining such results more routinely. Density functional QM/MM computations can serve as an alternative benchmark to neutron diffraction. PMID:28295691

  2. Combined inverse-forward artificial neural networks for fast and accurate estimation of the diffusion coefficients of cartilage based on multi-physics models.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A

    2016-09-06

    Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, minimize the need for simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs), which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variation during the training process. The required level of stochastic variation is determined by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain fast and accurate estimates of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined by direct scanning of the parameter space as the optimization approach; both approaches yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.
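
    The forward/inverse pairing described above can be sketched with deliberately simplified stand-ins: an analytic lumped uptake model replaces the paper's biphasic-solute finite element forward model, and a brute-force grid search replaces the trained inverse ANN. Both substitutions are purely for illustration of the input/output roles:

```python
import math

def forward_model(D, times, c_bath=1.0, length=1.0):
    """Toy forward model: lumped first-order uptake with rate D/length^2.
    A stand-in for the paper's multi-zone biphasic-solute FE model."""
    return [c_bath * (1.0 - math.exp(-D * t / length ** 2)) for t in times]

def inverse_estimate(curve, times, d_grid):
    """Inverse mapping: pick the diffusion coefficient whose forward curve
    best matches the measured concentration-time curve (least squares).
    The paper trains an inverse ANN for this step; grid search is used
    here only to illustrate the role of the inverse map."""
    def sse(D):
        return sum((p - c) ** 2
                   for p, c in zip(forward_model(D, times), curve))
    return min(d_grid, key=sse)
```
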

  3. Synergetic use of Sentinel-1 and 2 to improve agro-hydrological modeling. Results of groundwater pumping estimates in south-India and nitrogen excess in south-west of France

    NASA Astrophysics Data System (ADS)

    Ferrant, S.; Le Page, M.; Kerr, Y. H.; Selles, A.; Mermoz, S.; Al-Bitar, A.; Muddu, S.; Gascoin, S.; Marechal, J. C.; Durand, P.; Salmon-Monviola, J.; Ceschia, E.; Bustillo, V.

    2016-12-01

    Nitrogen transfers at the agricultural catchment level are intricately linked to water transfers. Agro-hydrological modeling approaches aim at integrating the spatial heterogeneity of catchment physical properties together with agricultural practices to spatially estimate the water and nitrogen cycles. As in hydrology, calibration schemes are designed to optimize the performance of the temporal dynamics and biases in model simulations, while ignoring the simulated spatial pattern. Yet crop uses, i.e. transpiration and nitrogen exported by harvest, are the main fluxes at the catchment scale and are highly variable in space and time. Time series of vegetation and water indices from multi-spectral optical detection (Sentinel-2), together with surface-roughness time series from C-band radar detection (Sentinel-1), are used to reset soil water holding capacity parameters (depth, porosity) and agricultural practices (sowing date, irrigated area extent) of a crop model coupled with a hydrological model. This study takes two agro-hydrological contexts as demonstrators: (1) spatial estimation of nitrogen excess in south-west France, and (2) groundwater extraction for rice irrigation in south India. The spatio-temporal patterns involved are, respectively, surface water contamination due to over-fertilization and local groundwater shortages due to over-pumping for rice paddy inundation. Optimized Leaf Area Index profiles are simulated at the satellite-image pixel level using an agro-hydrological model to reproduce the spatial and temporal crop growth dynamics in south-west France, improving simulated in-stream nitrogen fluxes by 12%. Accurate detection of irrigated area extents is obtained with a thresholding method based on optical indices, with a kappa of 0.81 for the 2016 dry season. The current monsoon season is being monitored and will be presented. These extents drive the groundwater pumping and are highly variable in time (from 2 to 8% of the total area).
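
    The irrigated-area detection accuracy above is reported as a kappa statistic. A minimal sketch of Cohen's kappa for a binary irrigated/non-irrigated pixel classification (the 0/1 masks here are hypothetical inputs):

```python
def cohens_kappa(reference, predicted):
    """Cohen's kappa for binary labels (e.g. irrigated vs non-irrigated
    pixels): observed agreement corrected for chance agreement."""
    n = len(reference)
    assert n == len(predicted) and n > 0
    observed = sum(r == p for r, p in zip(reference, predicted)) / n
    # chance agreement from the marginal class frequencies
    p_ref1 = sum(reference) / n
    p_pred1 = sum(predicted) / n
    expected = p_ref1 * p_pred1 + (1 - p_ref1) * (1 - p_pred1)
    return (observed - expected) / (1 - expected)
```
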

  4. Towards accurate estimates of the spin-state energetics of spin-crossover complexes within density functional theory: a comparative case study of cobalt(II) complexes.

    PubMed

    Vargas, Alfredo; Krivokapic, Itana; Hauser, Andreas; Lawson Daku, Latévi Max

    2013-03-21

    We report a detailed DFT study of the energetic and structural properties of the spin-crossover Co(II) complex [Co(tpy)(2)](2+) (tpy = 2,2':6',2''-terpyridine) in the low-spin (LS) and the high-spin (HS) states, using several generalized gradient approximation and hybrid functionals. In either spin state, the results obtained with the functionals are consistent with one another and in good agreement with available experimental data. Although the different functionals correctly predict the LS state as the electronic ground state of [Co(tpy)(2)](2+), they give estimates of the HS-LS zero-point energy difference which depend strongly on the functional used. This dependency on the functional was also reported for the DFT estimates of the zero-point energy difference in the HS complex [Co(bpy)(3)](2+) (bpy = 2,2'-bipyridine) [A. Vargas, A. Hauser and L. M. Lawson Daku, J. Chem. Theory Comput., 2009, 5, 97]. The comparison of the two sets of estimates showed that all functionals correctly predict an increase of the zero-point energy difference upon the bpy → tpy ligand substitution, which furthermore depends only weakly on the functional. From these results and basic thermodynamic considerations, we establish that, despite their limitations, current DFT methods can be applied to the accurate determination of the spin-state energetics of complexes of a transition metal ion, or of these complexes in different environments, provided that the spin-state energetics is accurately known in one case. Thus, making use of the availability of a highly accurate ab initio estimate of the HS-LS energy difference in the complex [Co(NCH)(6)](2+) [L. M. Lawson Daku, F. Aquilante, T. W. Robinson and A. Hauser, J. Chem. Theory Comput., 2012, 8, 4216], we obtain best estimates for [Co(tpy)(2)](2+) and [Co(bpy)(3)](2+), in good agreement with the known magnetic behaviour of the two complexes.

  5. Robust dynamic myocardial perfusion CT deconvolution for accurate residue function estimation via adaptive-weighted tensor total variation regularization: a preclinical study

    NASA Astrophysics Data System (ADS)

    Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua

    2016-11-01

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed ‘MPD-AwTTV’. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization come from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function we propose an efficient iterative optimization strategy with a fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both a digital XCAT phantom and preclinical porcine data. The preliminary experimental results demonstrate that the presented MPD-AwTTV deconvolution algorithm achieves remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation compared with existing deconvolution algorithms in the digital phantom studies, and similar gains are obtained in the porcine data experiment.
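
    The optimization framework named above, iterative shrinkage/thresholding, can be sketched in its plainest form. Here an l1 penalty stands in for the paper's AwTTV regularizer, so this is the generic ISTA template on a small dense system, not MPD-AwTTV itself:

```python
def soft(v, t):
    """Soft-thresholding (shrinkage) operator."""
    return (v - t) if v > t else (v + t) if v < -t else 0.0

def ista_deconvolve(A, b, lam=0.1, step=None, iters=500):
    """Plain ISTA for min ||Ax - b||^2/2 + lam*||x||_1 with A a dense
    list-of-rows matrix. The paper's MPD-AwTTV uses the same
    shrinkage/thresholding framework but with an adaptive-weighted
    tensor total variation penalty instead of l1."""
    m, n = len(A), len(A[0])
    if step is None:
        # crude Lipschitz bound: squared Frobenius norm of A
        step = 1.0 / sum(a * a for row in A for a in row)
    x = [0.0] * n
    for _ in range(iters):
        # gradient of the data term: A^T (A x - b)
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by soft thresholding
        x = [soft(xj - step * gj, step * lam) for xj, gj in zip(x, g)]
    return x
```

    With `A` the identity, the iterates converge to the soft-thresholded data, the known closed-form solution, which makes the template easy to sanity-check.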

  6. TU-EF-204-01: Accurate Prediction of CT Tube Current Modulation: Estimating Tube Current Modulation Schemes for Voxelized Patient Models Used in Monte Carlo Simulations

    SciTech Connect

    McMillan, K; Bostani, M; McNitt-Gray, M; McCollough, C

    2015-06-15

    Purpose: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. Methods: For 10 patients who received clinically-indicated chest (n=5) and abdomen/pelvis (n=5) scans on a Siemens CT scanner, both CT localizer radiograph (“topogram”) and image data were collected. Methods were devised to estimate the complete x-y-z TCM scheme using patient attenuation data: (a) available in the Siemens CT localizer radiograph/topogram itself (“actual-topo”) and (b) from a simulated topogram (“sim-topo”) derived from a projection of the image data. For comparison, the actual TCM scheme was extracted from the projection data of each patient. For validation, Monte Carlo simulations were performed using each TCM scheme to estimate dose to the lungs (chest scans) and liver (abdomen/pelvis scans). Organ doses from simulations using the actual TCM were compared to those using each of the estimated TCM methods (“actual-topo” and “sim-topo”). Results: For chest scans, the average differences between doses estimated using actual TCM schemes and estimated TCM schemes (“actual-topo” and “sim-topo”) were 3.70% and 4.98%, respectively. For abdomen/pelvis scans, the average differences were 5.55% and 6.97%, respectively. Conclusion: Strong agreement between doses estimated using actual and estimated TCM schemes validates the methods for simulating Siemens topograms and converting attenuation data into TCM schemes. This indicates that the methods developed in this work can be used to accurately estimate TCM schemes for any patient model or computational phantom, whether a CT localizer radiograph is available or not.
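
    A minimal sketch of the conversion step, turning a longitudinal attenuation profile into a relative tube-current scheme. The power-law relation, reference mA, and clip limits are illustrative assumptions: real vendor TCM schemes (including the x-y-z modulation estimated in this work) are considerably more involved:

```python
def z_modulation(att_profile, ref_mA=200.0, strength=0.5,
                 limits=(50.0, 500.0)):
    """Toy longitudinal tube current modulation: mA(z) tracks local
    attenuation A(z) relative to the scan average, raised to a strength
    exponent (0 = fixed mA, 1 = fully proportional), then clipped to
    assumed scanner mA limits. Illustrative only."""
    mean_att = sum(att_profile) / len(att_profile)
    lo, hi = limits
    return [min(hi, max(lo, ref_mA * (a / mean_att) ** strength))
            for a in att_profile]
```
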

  7. Excessive use of Steroid Hormone & beneficial effects of True St. 36 acupuncture on malignant brain tumors--part I; how to estimate non-invasively presence of excess dose of Steroid Hormone in patients, baseball players & other professional athletes from its toxic effects on heart & pancreas, as well as persistent or recurrent infection--part II.

    PubMed

    Omura, Yoshiaki

    2005-01-01

    Using an accurate organ representation areas map of the face, originally mapped by the author using Bi-Digital O-Ring Test resonance phenomena between two identical substances, one can make a quick non-invasive screening of diseases by visual inspection, particularly for chronic degenerative disease, as patients often develop a deep crease or creases or discoloration on the pathological organ representation area. However, even when there are no visible abnormalities in the organ representation areas, the author found that when an individual was using excessive Steroid Hormones for malignant brain tumors, other medical purposes, or competitive sports, not only did the left ventricle and pancreas become very abnormal when examined by the Bi-Digital O-Ring Test (with Steroid Hormone accumulating in these organs, abnormally increased 8-OH-dG & TXB2, and markedly reduced Folic Acid & Telomere), but the organ representation areas of the pancreas and left ventricle on the face also showed similar abnormalities. Thus, using the Bi-Digital O-Ring Test, one can quickly and non-invasively screen for Steroid Hormone induced abnormalities of the heart and pancreas via their organ representation areas on various parts of the body, including the face, tongue, ears, hands and feet. For malignant tumors including brain tumors, acupuncture on True ST. 36 or ST. 37 was found to be highly beneficial, reducing cancer cell telomere to practically 0 while increasing normal cell telomere moderately. The author's study over the past 15 years indicates that photographs of the human body, including pictures that appear in newspapers and magazines, carry almost identical information to that taken directly from the body surfaces of patients or individual athletes.
Some examples of the application of this principle for the noninvasive estimation of the presence of Steroid Hormones using a photograph of the individual receiving the Steroid Hormone for medical reasons, or for the purpose of

  8. How accurate are adolescents in portion-size estimation using the computer tool Young Adolescents' Nutrition Assessment on Computer (YANA-C)?

    PubMed

    Vereecken, Carine; Dohogne, Sophie; Covents, Marc; Maes, Lea

    2010-06-01

    Computer-administered questionnaires have received increased attention for large-scale population research on nutrition. In Belgium-Flanders, Young Adolescents' Nutrition Assessment on Computer (YANA-C) has been developed. In this tool, standardised photographs are available to assist in portion-size estimation. The purpose of the present study is to assess how accurate adolescents are in estimating portion sizes of food using YANA-C. A convenience sample, aged 11-17 years, estimated the amounts of ten commonly consumed foods (breakfast cereals, French fries, pasta, rice, apple sauce, carrots and peas, crisps, creamy velouté, red cabbage, and peas). Two procedures were followed: (1) short-term recall: adolescents (n = 73) self-served their usual portions of the ten foods and estimated the amounts later the same day; (2) real-time perception: adolescents (n = 128) estimated two sets (different portions) of pre-weighed portions displayed near the computer. Self-served portions were, on average, 8 % underestimated; significant underestimates were found for breakfast cereals, French fries, peas, and carrots and peas. Spearman's correlations between the self-served and estimated weights varied between 0.51 and 0.84, with an average of 0.72. The kappa statistics were moderate (>0.4) for all but one item. Pre-weighed portions were, on average, 15 % underestimated, with significant underestimates for fourteen of the twenty portions. Photographs of food items can serve as a good aid in ranking subjects; however, to assess the actual intake at a group level, underestimation must be considered.
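
    The ranking ability reported above is quantified with Spearman's rank correlation between served and estimated weights. A pure-Python sketch (Pearson correlation of rank-transformed data, with tied values sharing the average rank):

```python
def _ranks(values):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```
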

  9. Subcutaneous nerve activity is more accurate than the heart rate variability in estimating cardiac sympathetic tone in ambulatory dogs with myocardial infarction

    PubMed Central

    Chan, Yi-Hsin; Tsai, Wei-Chung; Shen, Changyu; Han, Seongwook; Chen, Lan S.; Lin, Shien-Fong; Chen, Peng-Sheng

    2015-01-01

    Background We recently reported that subcutaneous nerve activity (SCNA) can be used to estimate sympathetic tone. Objectives To test the hypothesis that left thoracic SCNA is more accurate than heart rate variability (HRV) in estimating cardiac sympathetic tone in ambulatory dogs with myocardial infarction (MI). Methods We used an implanted radiotransmitter to study left stellate ganglion nerve activity (SGNA), vagal nerve activity (VNA), and thoracic SCNA in 9 dogs at baseline and up to 8 weeks after MI. HRV was determined by time-domain, frequency-domain and non-linear analyses. Results The correlation coefficients between integrated SGNA and SCNA averaged 0.74 (95% confidence interval (CI), 0.41–1.06) at baseline and 0.82 (95% CI, 0.63–1.01) after MI (P<.05 for both). The absolute values of these correlation coefficients were significantly larger than those between SGNA and HRV based on time-domain, frequency-domain and non-linear analyses, respectively, at baseline (P<.05 for all) and after MI (P<.05 for all). There was a clear increment of SGNA and SCNA at 2, 4, 6 and 8 weeks after MI, while HRV parameters showed no significant changes. Significant circadian variations were noted in SCNA, SGNA and all HRV parameters at baseline and after MI, respectively. Atrial tachycardia (AT) episodes were invariably preceded by increases in SCNA and SGNA, which rose progressively from 120, 90, and 60 to 30 s before AT onset. No such changes in HRV parameters were observed before AT onset. Conclusion SCNA is more accurate than HRV in estimating cardiac sympathetic tone in ambulatory dogs with MI. PMID:25778433

  10. Embedded fiber-optic sensing for accurate internal monitoring of cell state in advanced battery management systems part 2: Internal cell signals and utility for state estimation

    NASA Astrophysics Data System (ADS)

    Ganguli, Anurag; Saha, Bhaskar; Raghavan, Ajay; Kiesel, Peter; Arakaki, Kyle; Schuh, Andreas; Schwartz, Julian; Hegyi, Alex; Sommer, Lars Wilko; Lochbaum, Alexander; Sahu, Saroj; Alamgir, Mohamed

    2017-02-01

    A key challenge hindering the mass adoption of Lithium-ion and other next-gen chemistries in advanced battery applications such as hybrid/electric vehicles (xEVs) has been management of their functional performance for more effective battery utilization and control over their life. Contemporary battery management systems (BMS) reliant on monitoring external parameters such as voltage and current to ensure safe battery operation with the required performance usually result in overdesign and inefficient use of capacity. More informative embedded sensors are desirable for internal cell state monitoring, which could provide accurate state-of-charge (SOC) and state-of-health (SOH) estimates and early failure indicators. Here we present a promising new embedded sensing option developed by our team for cell monitoring, fiber-optic (FO) sensors. High-performance large-format pouch cells with embedded FO sensors were fabricated. This second part of the paper focuses on the internal signals obtained from these FO sensors. The details of the method to isolate intercalation strain and temperature signals are discussed. Data collected under various xEV operational conditions are presented. An algorithm employing dynamic time warping and Kalman filtering was used to estimate state-of-charge with high accuracy from these internal FO signals. Their utility for high-accuracy, predictive state-of-health estimation is also explored.
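
    The abstract mentions Kalman filtering of internal fiber-optic signals for SOC estimation. A minimal scalar sketch: coulomb counting supplies the prediction, and an SOC value derived from the strain signal supplies the correction. The linear strain-to-SOC mapping and all noise parameters are assumptions for illustration, not the paper's algorithm (which also uses dynamic time warping):

```python
def kalman_soc(currents, strain_soc, dt=1.0, capacity=3600.0,
               q_proc=1e-6, r_meas=1e-2, soc0=0.5, p0=1.0):
    """Scalar Kalman filter for state of charge: predict by coulomb
    counting (current integration), correct with an SOC estimate derived
    from the fiber-optic strain signal. currents in A (discharge
    positive), capacity in A*s; noise variances are assumed values."""
    soc, p = soc0, p0
    out = []
    for i, z in zip(currents, strain_soc):
        # predict: dSOC = -I*dt/capacity
        soc = soc - i * dt / capacity
        p = p + q_proc
        # correct with the strain-derived SOC measurement z
        k = p / (p + r_meas)
        soc = soc + k * (z - soc)
        p = (1 - k) * p
        out.append(soc)
    return out
```

    With zero current and a steady strain-derived reading, the estimate converges from its initial guess to the measured SOC.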

  11. A flexible and accurate method to estimate the mode and stability of spontaneous coordinated behaviors: The index-of-stability (IS) analysis.

    PubMed

    Zelic, Gregory; Varoqui, Deborah; Kim, Jeesun; Davis, Chris

    2017-02-24

    Patterns of coordination result from the interaction between (at least) two oscillatory components. This interaction is typically understood by means of two variables: the mode that expresses the shape of the interaction, and the stability that is the robustness of the interaction in this mode. A potent method of investigating coordinated behaviors is to examine the extent to which patterns of coordination arise spontaneously. However, a prominent issue faced by researchers is that, to date, no standard methods exist to fairly assess the stability of spontaneous coordination. In the present study, we introduce a new method called the index-of-stability (IS) analysis. We developed this method from the phase-coupling (PC) analysis that has been traditionally used for examining locomotion-respiration coordinated systems. We compared the extents to which both methods estimate the stability of simulated coordinated behaviors. Computer-generated time series were used to simulate the coordination of two rhythmic components according to a selected mode m:n and a selected degree of stability. The IS analysis was superior to the PC analysis in estimating the stability of spontaneous coordinated behaviors, in three ways: First, the estimation of stability itself was found to be more accurate and more reliable with the IS analysis. Second, the IS analysis is not constrained by the limitations of the PC analysis. Third and last, the IS analysis offers more flexibility, and so can be adapted according to the user's needs.
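
    For context, a common baseline measure of m:n coordination stability (explicitly not the paper's IS analysis, nor its PC analysis) is the mean resultant length of the generalized relative phase:

```python
import math

def phase_stability(phase1, phase2, m=1, n=1):
    """Mean resultant length of the generalized relative phase
    m*phi1 - n*phi2 (phases in radians, unwrapped): 1 indicates
    perfectly locked m:n coordination, values near 0 indicate no
    coupling. A standard circular-statistics baseline."""
    rel = [m * a - n * b for a, b in zip(phase1, phase2)]
    c = sum(math.cos(r) for r in rel) / len(rel)
    s = sum(math.sin(r) for r in rel) / len(rel)
    return math.hypot(c, s)
```

    Two oscillators with frequencies in a 2:1 ratio score near 1 for the 2:1 mode and much lower for the mismatched 1:1 mode, which is how a candidate mode m:n can be selected before assessing its stability.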

  12. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    NASA Astrophysics Data System (ADS)

    Rybynok, V. O.; Kyriacou, P. A.

    2007-10-01

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications including heart disease, stroke, blindness, kidney disease and amputations make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring targets, blood glucose, and proposes a novel approach to enable accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.
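
    The spectroscopic starting point for such approaches is the Beer-Lambert law. A minimal two-analyte inversion sketch (the absorptivity matrix and absorbance values are hypothetical inputs; the paper's adaptive modelling scheme goes well beyond this linear picture):

```python
def two_analyte_concentrations(absorbance, eps, path=1.0):
    """Beer-Lambert inversion for two analytes from two wavelengths:
    A_i = path * (eps[i][0]*c0 + eps[i][1]*c1). Solved exactly with a
    2x2 matrix inverse; absorptivities eps are assumed known."""
    a, b = eps[0]
    c, d = eps[1]
    det = (a * d - b * c) * path
    A0, A1 = absorbance
    c0 = (d * A0 - b * A1) / det
    c1 = (a * A1 - c * A0) / det
    return c0, c1
```
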

  13. FastMG: a simple, fast, and accurate maximum likelihood procedure to estimate amino acid replacement rate matrices from large data sets.

    PubMed

    Dang, Cuong Cao; Le, Vinh Sy; Gascuel, Olivier; Hazes, Bart; Le, Quang Si

    2014-10-24

    Amino acid replacement rate matrices are a crucial component of many protein analysis systems such as sequence similarity search, sequence alignment, and phylogenetic inference. Ideally, the rate matrix reflects the mutational behavior of the actual data under study; however, estimating amino acid replacement rate matrices requires large protein alignments and is computationally expensive and complex. As a compromise, sub-optimal pre-calculated generic matrices are typically used for protein-based phylogeny. Sequence availability has now grown to a point where problem-specific rate matrices can often be calculated if the computational cost can be controlled. The most time-consuming step in estimating rate matrices by maximum likelihood is building maximum likelihood phylogenetic trees from protein alignments. We propose a new procedure, called FastMG, to overcome this obstacle. The key innovation is the alignment-splitting algorithm that splits alignments with many sequences into non-overlapping sub-alignments prior to estimating amino acid replacement rates. Experiments with different large data sets showed that the FastMG procedure was an order of magnitude faster than without splitting. Importantly, there was no apparent loss in matrix quality when an appropriate splitting procedure was used. FastMG is a simple, fast and accurate procedure to estimate amino acid replacement rate matrices from large data sets. It enables researchers to study the evolutionary relationships for specific groups of proteins or taxa with optimized, data-specific amino acid replacement rate matrices. The programs, data sets, and the new mammalian mitochondrial protein rate matrix are available at http://fastmg.codeplex.com.
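
    The splitting idea can be sketched as a simple partition of sequence identifiers; FastMG's actual splitting strategy may differ, so this is only the simplest contiguous variant with one practical guard:

```python
def split_alignment(seq_ids, max_size):
    """FastMG-style alignment splitting (sketch): partition the sequences
    of a large alignment into non-overlapping sub-alignments of at most
    max_size sequences, so trees are built on cheap small alignments."""
    if max_size < 2:
        raise ValueError("sub-alignments need at least 2 sequences")
    chunks = [seq_ids[i:i + max_size]
              for i in range(0, len(seq_ids), max_size)]
    # merge a trailing singleton into the previous chunk: a one-sequence
    # alignment carries no replacement information
    if len(chunks) > 1 and len(chunks[-1]) == 1:
        chunks[-2].extend(chunks.pop())
    return chunks
```
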

  14. Optimal esophageal balloon volume for accurate estimation of pleural pressure at end-expiration and end-inspiration: an in vitro bench experiment.

    PubMed

    Yang, Yan-Lin; He, Xuan; Sun, Xiu-Mei; Chen, Han; Shi, Zhong-Hua; Xu, Ming; Chen, Guang-Qiang; Zhou, Jian-Xin

    2017-12-01

    Esophageal pressure, used as a surrogate for pleural pressure, is commonly measured by an air-filled balloon, and the accuracy of measurement depends on the proper balloon volume. It has been found that a larger filling volume is required at higher surrounding pressures. In the present study, we determined the balloon pressure-volume relationship in a bench model simulating the pleural cavity during controlled ventilation. The aim was to confirm whether an optimal balloon volume range existed that could provide accurate measurement at both end-expiration and end-inspiration. We investigated three esophageal balloons with different dimensions and materials: Cooper, SmartCath-G, and Microtek catheters. The balloon was introduced into a glass chamber simulating the pleural cavity and volume-controlled ventilation was initiated. The ventilator was set to obtain respective chamber pressures of 5 and 20 cmH2O during end-expiratory and end-inspiratory occlusion. The balloon was progressively inflated, and balloon pressure and chamber pressure were measured. Balloon transmural pressure was defined as the difference between balloon and chamber pressure. The balloon pressure-volume curve was fitted by sigmoid regression, and the minimal and maximal balloon volume accurately reflecting the surrounding pressure was estimated using the lower and upper inflection point of the fitted sigmoid curve. Balloon volumes at end-expiratory and end-inspiratory occlusion were explored, and the balloon volume range that provided accurate measurement at both phases was defined as the optimal filling volume. Sigmoid regression of the balloon pressure-volume curve was justified by the dimensionless variable fitting and residual distribution analysis. All balloon transmural pressures were within ±1.0 cmH2O at the minimal and maximal balloon volumes. The minimal and maximal balloon volumes during end-inspiratory occlusion were significantly larger than those during end-expiratory occlusion, except for
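
    The sigmoid regression step can be sketched as follows. The model form, the grid-search fit, and the choice of inflection band (here the logistic's points of maximum curvature at v0 ± 1.317·s) are illustrative assumptions; the paper's fitting procedure and inflection-point convention may differ:

```python
import math

def sigmoid_pv(v, p0, a, v0, s):
    """Sigmoid balloon pressure-volume model: P(V) = p0 + a*logistic((V-v0)/s)."""
    return p0 + a / (1.0 + math.exp(-(v - v0) / s))

def fit_sigmoid(volumes, pressures, v0_grid, s_grid):
    """Fit the sigmoid by grid search over (v0, s), with a closed-form
    linear least-squares fit of (p0, a) at each grid point."""
    best = None
    for v0 in v0_grid:
        for s in s_grid:
            f = [1.0 / (1.0 + math.exp(-(v - v0) / s)) for v in volumes]
            n = len(f)
            mf, mp = sum(f) / n, sum(pressures) / n
            vf = sum((x - mf) ** 2 for x in f)
            if vf == 0:
                continue
            a = sum((x - mf) * (p - mp) for x, p in zip(f, pressures)) / vf
            p0 = mp - a * mf
            sse = sum((p0 + a * x - p) ** 2 for x, p in zip(f, pressures))
            if best is None or sse < best[0]:
                best = (sse, p0, a, v0, s)
    return best[1:]  # p0, a, v0, s

def working_volume_range(v0, s, k=1.317):
    """Volume band where the balloon transmits pressure accurately, taken
    here as v0 +/- k*s (k = 1.317 marks the logistic's points of maximum
    curvature; the exact convention varies between studies)."""
    return v0 - k * s, v0 + k * s
```
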

  15. Accurate Equilibrium Structures for trans-HEXATRIENE by the Mixed Estimation Method and for the Three Isomers of Octatetraene from Theory; Structural Consequences of Electron Delocalization

    NASA Astrophysics Data System (ADS)

    Craig, Norman C.; Demaison, Jean; Groner, Peter; Rudolph, Heinz Dieter; Vogt, Natalja

    2015-06-01

    An accurate equilibrium structure of trans-hexatriene has been determined by the mixed estimation method with rotational constants from 8 deuterium and carbon isotopologues and high-level quantum chemical calculations. In the mixed estimation method, bond parameters are fit concurrently to moments of inertia of various isotopologues and to theoretical bond parameters, each data set carrying appropriate uncertainties. The accuracy of this structure is 0.001 Å and 0.1°. Structures of similar accuracy have been computed for the cis,cis, trans,trans, and cis,trans isomers of octatetraene at the CCSD(T) level with a basis set of wCVQZ(ae) quality, adjusted in accord with the experience gained with trans-hexatriene. The structures are compared with butadiene and with cis-hexatriene to show how increasing the length of the chain in polyenes leads to increased blurring of the difference between single and double bonds in the carbon chain. In trans-hexatriene r(“C_1=C_2”) = 1.339 Å and r(“C_3=C_4”) = 1.346 Å, compared to 1.338 Å for the “double” bond in butadiene; r(“C_2-C_3”) = 1.449 Å, compared to 1.454 Å for the “single” bond in butadiene. “Double” bonds increase in length; “single” bonds decrease in length.

  16. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    DOE PAGES

    An, Zhe; Rey, Daniel; Ye, Jingxin; ...

    2017-01-16

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time-delayed measurements. We show that in certain circumstances it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
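
    The generalization to time-delayed measurements rests on delay-coordinate vectors: each scalar observation is augmented with its own past values, so the nudging term sees more of the system's history. A minimal sketch of building such vectors (embedding dimension and delay are user choices, not values from the paper):

```python
def delay_vectors(series, dim, tau):
    """Time-delay measurement vectors: each observation s(t) is extended
    with its past values [s(t), s(t-tau), ..., s(t-(dim-1)*tau)], giving
    a delay-coordinate representation of the sparse observation stream
    (cf. the time-delay nudging of Rey et al., 2014a)."""
    start = (dim - 1) * tau
    return [[series[t - k * tau] for k in range(dim)]
            for t in range(start, len(series))]
```
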

  17. Applying the quarter-hour rule: can people with insomnia accurately estimate 15-min periods during the sleep-onset phase?

    PubMed

    Harrow, Lisa; Espie, Colin

    2010-03-01

    The 'quarter-hour rule' (QHR) instructs the person with insomnia to get out of bed after 15 min of wakefulness and return to bed only when sleep feels imminent. Recent research has identified that sleep can be significantly improved using this simple intervention (Malaffo and Espie, Sleep, 27(s), 2004, 280; Sleep, 29 (s), 2006, 257), but successful implementation depends on estimating time without clock monitoring, and the insomnia literature indicates that poor time perception is a maintaining factor in primary insomnia (Harvey, Behav. Res. Ther., 40, 2002, 869). This study expands upon previous research with the aim of identifying whether people with insomnia can accurately perceive a 15-min interval during the sleep-onset period, and therefore successfully implement the QHR. A mixed-model ANOVA design was applied with the between-participants factor of group (insomnia versus good sleepers) and the within-participants factor of context (night versus day). Results indicated no differences between groups or contexts on time estimation tasks. This was despite an increase in arousal in the night context for both groups, and tentative support for the impact of arousal in inducing underestimations of time. These results provide promising support for the successful application of the QHR in people with insomnia. The results are discussed in terms of whether the design employed successfully accessed the processes that are involved in distorting time perception in insomnia. Suggestions for future research are provided and limitations of the current study discussed.

  18. Accurate estimation of seismic source parameters of induced seismicity by a combined approach of generalized inversion and genetic algorithm: Application to The Geysers geothermal area, California

    NASA Astrophysics Data System (ADS)

    Picozzi, M.; Oth, A.; Parolai, S.; Bindi, D.; De Landro, G.; Amoroso, O.

    2017-05-01

    The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is an important issue for seismic hazard assessment of induced seismicity. We propose an improved nonparametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique together with genetic algorithms. In the first step of the analysis the generalized inversion technique allows for an effective correction of waveforms for attenuation and site contributions. Then, the retrieved source spectra are inverted by a nonlinear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We therefore investigate the earthquake source characteristics of 633 induced earthquakes (Mw 2-3.8) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations, more than 17,000 velocity records). We find a non-self-similar behavior, empirical source spectra that require an ω^γ source model with γ > 2 to be well fit, and small radiation efficiency η_SW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome the friction or to create new fracture surfaces changes with earthquake size. Furthermore, we also observe two distinct families of events with peculiar source parameters that in one case suggest the reactivation of deep structures linked to the regional tectonics, while in the other support the idea of an important role of steeply dipping faults in the fluid pressure diffusion.
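
A source model of the generalized Brune type, S(f) = Ω0 / (1 + (f/fc)^γ), can be fit to an observed displacement spectrum quite simply. The sketch below recovers the corner frequency fc and falloff exponent γ from a synthetic spectrum by grid search (all numerical values are hypothetical; the paper uses a sensitivity-driven nonlinear inversion, not this brute-force scheme):

```python
import math

# Fit the omega-gamma source model S(f) = Omega0 / (1 + (f/fc)**gamma)
# to a synthetic spectrum by minimizing log-spectral misfit on a grid.

def model(f, omega0, fc, gamma):
    return omega0 / (1.0 + (f / fc) ** gamma)

freqs = [0.5 * i for i in range(1, 61)]            # 0.5-30 Hz
true = dict(omega0=1.0, fc=4.0, gamma=2.5)         # hypothetical source
obs = [model(f, **true) for f in freqs]

best, best_err = None, float("inf")
for fc in [1.0 + 0.25 * k for k in range(37)]:             # 1-10 Hz
    for gamma in [1.5 + 0.1 * k for k in range(21)]:       # 1.5-3.5
        err = sum((math.log(model(f, 1.0, fc, gamma)) - math.log(o)) ** 2
                  for f, o in zip(freqs, obs))
        if err < best_err:
            best, best_err = (fc, gamma), err

fc_hat, gamma_hat = best
```

A recovered γ > 2, as in the abstract, indicates spectra that fall off faster than the classical ω⁻² model.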

  19. Accurate estimation of global and regional cardiac function by retrospectively gated multidetector row computed tomography: comparison with cine magnetic resonance imaging.

    PubMed

    Belge, Bénédicte; Coche, Emmanuel; Pasquet, Agnès; Vanoverschelde, Jean-Louis J; Gerber, Bernhard L

    2006-07-01

    Retrospective reconstruction of ECG-gated images at different parts of the cardiac cycle allows the assessment of cardiac function by multi-detector row CT (MDCT) at the time of non-invasive coronary imaging. We compared the accuracy of such measurements by MDCT to cine magnetic resonance (MR). Forty patients underwent the assessment of global and regional cardiac function by 16-slice MDCT and cine MR. Left ventricular (LV) end-diastolic and end-systolic volumes estimated by MDCT (134+/-51 and 67+/-56 ml) were similar to those by MR (137+/-57 and 70+/-60 ml, respectively; both P=NS) and strongly correlated (r=0.92 and r=0.95, respectively; both P<0.001). Consequently, LV ejection fractions by MDCT and MR were also similar (55+/-21 vs. 56+/-21%; P=NS) and highly correlated (r=0.95; P<0.001). Regional end-diastolic and end-systolic wall thicknesses by MDCT were highly correlated (r=0.84 and r=0.92, respectively; both P<0.001), but significantly lower than by MR (8.3+/-1.8 vs. 8.8+/-1.9 mm and 12.7+/-3.4 vs. 13.3+/-3.5 mm, respectively; both P<0.001). Values of regional wall thickening by MDCT and MR were similar (54+/-30 vs. 51+/-31%; P=NS) and also correlated well (r=0.91; P<0.001). Retrospectively gated MDCT can accurately estimate LV volumes, EF and regional LV wall thickening compared to cine MR.

  20. Summary Report on the Graded Prognostic Assessment: An Accurate and Facile Diagnosis-Specific Tool to Estimate Survival for Patients With Brain Metastases

    PubMed Central

    Sperduto, Paul W.; Kased, Norbert; Roberge, David; Xu, Zhiyuan; Shanley, Ryan; Luo, Xianghua; Sneed, Penny K.; Chao, Samuel T.; Weil, Robert J.; Suh, John; Bhatt, Amit; Jensen, Ashley W.; Brown, Paul D.; Shih, Helen A.; Kirkpatrick, John; Gaspar, Laurie E.; Fiveash, John B.; Chiang, Veronica; Knisely, Jonathan P.S.; Sperduto, Christina Maria; Lin, Nancy; Mehta, Minesh

    2012-01-01

    Purpose Our group has previously published the Graded Prognostic Assessment (GPA), a prognostic index for patients with brain metastases. Updates have been published with refinements to create diagnosis-specific Graded Prognostic Assessment indices. The purpose of this report is to present the updated diagnosis-specific GPA indices in a single, unified, user-friendly report to allow ease of access and use by treating physicians. Methods A multi-institutional retrospective (1985 to 2007) database of 3,940 patients with newly diagnosed brain metastases underwent univariate and multivariate analyses of prognostic factors associated with outcomes by primary site and treatment. Significant prognostic factors were used to define the diagnosis-specific GPA prognostic indices. A GPA of 4.0 correlates with the best prognosis, whereas a GPA of 0.0 corresponds with the worst prognosis. Results Significant prognostic factors varied by diagnosis. For lung cancer, prognostic factors were Karnofsky performance score, age, presence of extracranial metastases, and number of brain metastases, confirming the original Lung-GPA. For melanoma and renal cell cancer, prognostic factors were Karnofsky performance score and the number of brain metastases. For breast cancer, prognostic factors were tumor subtype, Karnofsky performance score, and age. For GI cancer, the only prognostic factor was the Karnofsky performance score. The median survival times by GPA score and diagnosis were determined. Conclusion Prognostic factors for patients with brain metastases vary by diagnosis, and for each diagnosis, a robust separation into different GPA scores was discerned, implying considerable heterogeneity in outcome, even within a single tumor type. In summary, these indices and related worksheet provide an accurate and facile diagnosis-specific tool to estimate survival, potentially select appropriate treatment, and stratify clinical trials for patients with brain metastases. PMID:22203767

  1. Multiple automated headspace in-tube extraction for the accurate analysis of relevant wine aroma compounds and for the estimation of their relative liquid-gas transfer rates.

    PubMed

    Zapata, Julián; Lopez, Ricardo; Herrero, Paula; Ferreira, Vicente

    2012-11-30

    An automated headspace in-tube extraction (ITEX) method combined with multiple headspace extraction (MHE) has been developed to provide simultaneously accurate information about the wine content of 20 relevant aroma compounds and about their relative transfer rates to the headspace, and hence about the relative strength of their interactions with the matrix. In the method, 5 μL (for alcohols, acetates and carbonyl alcohols) or 200 μL (for ethyl esters) of wine sample was introduced into a 2 mL vial, heated at 35°C and extracted with 32 (for alcohols, acetates and carbonyl alcohols) or 16 (for ethyl esters) 0.5 mL pumping strokes in four consecutive extraction and analysis cycles. The application of the classical theory of multiple extractions makes it possible to obtain a highly reliable estimate of the total amount of volatile compound present in the sample and a second parameter, β, which is simply the proportion of volatile not transferred to the trap in one extraction cycle, but which seems to be a reliable indicator of the actual volatility of the compound in that particular wine. A study with 20 wines of different types and 1 synthetic sample has revealed the existence of significant differences in the relative volatility of 15 out of 20 odorants. Differences are particularly intense for acetaldehyde and other carbonyls, but are also notable for alcohols and long chain fatty acid ethyl esters. It is expected that these differences, likely linked to sulphur dioxide and some unknown specific compositional aspects of the wine matrix, can be responsible for relevant sensory changes, and may even explain why the same aroma composition can produce different aroma perceptions in two different wines. Copyright © 2012 Elsevier B.V. All rights reserved.
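
In classical multiple headspace extraction theory, successive extraction peak areas decay geometrically, A_i = A_1·β^(i-1), so a log-linear fit over the cycles yields β and the total analyte amount A_total = A_1 / (1 − β). A minimal sketch (the peak areas below are synthetic, not data from the paper):

```python
import math

# MHE calculation: fit ln(area) vs. cycle index by least squares, then
# recover beta (fraction NOT transferred per cycle) and the total amount.

areas = [100.0, 61.0, 36.5, 22.1]          # four consecutive ITEX cycles
n = len(areas)
xs = list(range(n))
ys = [math.log(a) for a in areas]

xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

beta = math.exp(slope)          # proportion left behind each cycle
a1 = math.exp(intercept)        # fitted first-cycle area
total = a1 / (1.0 - beta)       # geometric-series total
```

For these areas the fit gives β ≈ 0.60, i.e. roughly 40% of the volatile is transferred per cycle, and a total of about 2.5 times the first-cycle area.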

  2. High Resolution Infrared Spectroscopy in the 1200--1300 cm-1 Region and Accurate Theoretical Estimates for the Structure and Ring-Puckering Barrier of Perfluorocyclobutane

    SciTech Connect

    Blake, Thomas A; Glendening, Eric D; Sams, Robert L; Sharpe, Steven W; Xantheas, Sotiris S

    2007-11-08

    We present experimental infrared (IR) spectra and theoretical electronic structure results for the geometry, anharmonic vibrational frequencies and accurate estimates of the magnitude and the origin of the ring puckering barrier in C4F8. High-resolution (0.0015 cm-1) spectra of the ν12 and ν13 parallel bands of perfluorocyclobutane (c-C4F8) were recorded for the first time by expanding a 10% c-C4F8 in helium mixture in a supersonic jet. Both bands are observed to be rotationally resolved in a jet with a rotational temperature of 15 K. The ν12 mode has b2 symmetry under D2d that correlates to a2u symmetry under D4h and consequently has ± ← ± ring puckering selection rules. A rigid rotor fit of the ν12 band yields the origin at 1292.56031(2) cm-1 with B' = 0.0354137(3) cm-1 and B" = 0.0354363(3) cm-1. The ν13 mode is of b2 symmetry under D2d that correlates to b2g under D4h and in this case the ring puckering selection rules are ± ← ∓. Rotational transitions from the ground and first excited torsional states will be separated by the torsional splitting in the ground and excited vibrational states, and indeed we observe a splitting of each transition into strong and weak intensity components with a separation of approximately 0.0018 cm-1. The strong and weak sets of transitions were fit separately, again using a rigid rotor model, to give ν13(strong) = 1240.34858(4) cm-1, B' = 0.0354192(7) cm-1 and B" = 0.0354355(7) cm-1 and ν13(weak) = 1240.34674(5) cm-1, B' = 0.0354188(9) cm-1 and B" = 0.0354360(7) cm-1. High level electronic structure calculations at the MP2 and CCSD(T) levels of theory with the family of correlation consistent basis sets of quadruple-ζ quality, developed by Dunning and coworkers, yield best estimates for the vibrationally averaged structural parameters r(C-C)=1.568 Å, r(C-F)α=1.340 Å, r(C-F)β=1.329 Å, α(F-C-F)=110.3°, θz(C-C-C)=89.1° and δ(C-C-C-C)=14.6° and rotational constants of A=B=0.03543 cm-1, C=0.02898 cm-1, the latter
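
In the rigid-rotor approximation used for the band fits above, P- and R-branch line positions of a parallel band follow ν(m) = ν0 + (B′ + B″)m + (B′ − B″)m², with m = J + 1 for R(J) and m = −J for P(J). A minimal sketch using the ν12 constants quoted in the abstract (the helper names are ours):

```python
# Rigid-rotor parallel-band line positions (cm^-1) from the nu12 fit:
#   nu(m) = nu0 + (B' + B'')*m + (B' - B'')*m**2
nu0, B_up, B_low = 1292.56031, 0.0354137, 0.0354363

def line(m):
    return nu0 + (B_up + B_low) * m + (B_up - B_low) * m * m

r0 = line(1)          # R(0) transition
p1 = line(-1)         # P(1) transition
spacing = r0 - p1     # R(0)-P(1) separation, exactly 2(B' + B'')
```

The quadratic terms cancel in r0 − p1, so the R(0)−P(1) spacing directly measures B′ + B″ (about 4B here, since B′ ≈ B″).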

  3. Accurate Estimate of Some Propagation Characteristics for the First Higher Order Mode in Graded Index Fiber with Simple Analytic Chebyshev Method

    NASA Astrophysics Data System (ADS)

    Dutta, Ivy; Chowdhury, Anirban Roy; Kumbhakar, Dharmadas

    2013-03-01

    Using a Chebyshev power series approach, accurate descriptions of the first higher order (LP11) mode of graded index fibers having three different profile shape functions are presented in this paper and applied to predict their propagation characteristics. These characteristics include the fractional power guided through the core, the excitation efficiency, and the Petermann I and II spot sizes, with their approximate analytic formulations. We show that while approximations using two and three Chebyshev points already give fairly accurate results for the LP11 mode, the values based on our calculations involving four Chebyshev points match excellently with available exact numerical results.

  4. How accurately can students estimate their performance on an exam and how does this relate to their actual performance on the exam?

    NASA Astrophysics Data System (ADS)

    Rebello, N. Sanjay

    2012-02-01

    Research has shown students' beliefs regarding their own abilities in math and science can influence their performance in these disciplines. I investigated the relationship between students' estimated performance and actual performance on five exams in a second-semester calculus-based physics class. Students were given about 72 hours after the completion of each of the five exams to estimate their individual score and the class mean score on that exam. Students were given extra credit worth 1% of the exam points for estimating their score correctly to within 2% of the actual score, and another 1% extra credit for estimating the class mean score within 2% of the correct value. I compared students' individual and mean score estimations with the actual scores to investigate the relationship between estimation accuracies and exam performance of the students as well as trends over the semester.
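
The extra-credit rule described above can be sketched as a small scoring function. The function and variable names are ours, and we assume "within 2%" means 2% of the total exam points:

```python
# Sketch of the study's extra-credit rule: 1% of exam points for a
# self-estimate within 2% of the actual score, plus 1% for a class-mean
# estimate within 2% (interpretation of "within 2%" is our assumption).

def estimation_credit(est_own, actual_own, est_mean, actual_mean,
                      exam_points=100.0):
    credit = 0.0
    if abs(est_own - actual_own) <= 0.02 * exam_points:
        credit += 0.01 * exam_points
    if abs(est_mean - actual_mean) <= 0.02 * exam_points:
        credit += 0.01 * exam_points
    return credit
```

For example, a student who guesses their own score within 2 points but misses the class mean by 5 points earns 1 point of extra credit on a 100-point exam.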

  5. Identification of an accurate soil suspension/dispersion modeling method for use in estimating health-based soil cleanup levels of hexavalent chromium in chromite ore processing residues.

    PubMed

    Scott, P K; Finley, B L; Sung, H M; Schulze, R H; Turner, D B

    1997-07-01

    The primary health concern associated with chromite ore processing residues (COPR) at sites in Hudson County, NJ, is the inhalation of Cr(VI) suspended from surface soils. Since health-based soil standards for Cr(VI) will be derived using the inhalation pathway, soil suspension modeling will be necessary to estimate site-specific, health-based soil cleanup levels (HBSCLs). The purpose of this study was to identify the most appropriate particulate emission and air dispersion models for estimating soil suspension at these sites based on their theoretical underpinnings, scientific acceptability, and past performance. The identified modeling approach, the AP-42 particulate emission model and the fugitive dust model (FDM), was used to calculate concentrations of airborne Cr(VI) and TSP at two COPR sites. These estimated concentrations were then compared to concentrations measured at each site. The TSP concentrations calculated using the AP-42/FDM soil suspension modeling approach were all within a factor of 3 of the measured concentrations. The majority of the estimated air concentrations were greater than the measured, indicating that the AP-42/FDM approach tends to overestimate on-site concentrations. The site-specific Cr(VI) HBSCLs for these two sites calculated using this conservative soil suspension modeling approach ranged from 190 to 420 mg/kg.

  6. Publication Bias Currently Makes an Accurate Estimate of the Benefits of Enrichment Programs Difficult: A Postmortem of Two Meta-Analyses Using Statistical Power Analysis

    ERIC Educational Resources Information Center

    Warne, Russell T.

    2016-01-01

    Recently Kim (2016) published a meta-analysis on the effects of enrichment programs for gifted students. She found that these programs produced substantial effects for academic achievement (g = 0.96) and socioemotional outcomes (g = 0.55). However, given current theory and empirical research these estimates of the benefits of enrichment programs…

  7. Estimating Accurate Relative Spacecraft Angular Position from Deep Space Network Very Long Baseline Interferometry Phases Using X-Band Telemetry or Differential One-Way Ranging Tones

    NASA Astrophysics Data System (ADS)

    Bagri, D. S.; Majid, W. A.

    2008-02-01

    At present spacecraft angular position with the Deep Space Network (DSN) is determined using group delay estimates from very long baseline interferometry (VLBI) phase measurements employing differential one-way ranging (DOR) tones. Group delay measurements require high signal-to-noise ratio (SNR) to provide modest angular position accuracy. On the other hand, VLBI phases with modest SNR can be used to determine the position of a spacecraft with high accuracy, except for the interferometer fringe cycle ambiguity, which can be resolved using multiple baselines, requiring several antenna stations as is done, for example, using the Very Long Baseline Array (VLBA) (e.g., the VLBA has 10 antenna stations). As an alternative to this approach, here we propose estimating the position of a spacecraft to half-a-fringe-cycle accuracy using time variations between measured and calculated phases, using DSN VLBI baseline(s), as the Earth rotates (i.e., estimate position offset from the difference between observed and calculated phases for different spatial frequency (U,V) values). Combining the fringe location of the target with the phase information allows for an estimate of spacecraft angular position to high accuracy. One of the advantages of this scheme, in addition to the possibility of achieving a fraction of a nanoradian measurement accuracy using DSN antennas for VLBI, is that it is possible to use telemetry signals with at least a 4 to 8 Msamples/s data rate (bandwidth greater than about 8 to 16 MHz) to measure spacecraft angular position instead of using DOR tones, as is currently done. Using telemetry instead of DOR tones will eliminate the need for spacecraft coordination for angular position measurements and will minimize calibration errors due to instrumental dispersion effects.

  8. Alterations of musculoskeletal models for a more accurate estimation of lower limb joint contact forces during normal gait: A systematic review.

    PubMed

    Moissenet, F; Modenese, L; Dumas, R

    2017-09-01

    Musculoskeletal modelling is a methodology used to investigate joint contact forces during a movement. High accuracy in the estimation of the hip or knee joint contact forces can be obtained with subject-specific models. However, construction of subject-specific models remains time consuming and expensive. The purpose of this systematic review of the literature was to identify what alterations can be made on generic (i.e. literature-based, without any subject-specific measurement other than body size and weight) musculoskeletal models to obtain a better estimation of the joint contact forces. The impact of these alterations on the accuracy of the estimated joint contact forces was appraised. The systematic search yielded 141 articles, and 24 papers were included in the review. Different strategies of alteration were found: skeletal and joint model (e.g. number of degrees of freedom, knee alignment), muscle model (e.g. Hill-type muscle parameters, level of muscular redundancy), and optimisation problem (e.g. objective function, design variables, constraints). All these alterations had an impact on joint contact force accuracy, thus demonstrating the potential for improving the model predictions without necessarily involving costly and time consuming medical images. However, due to discrepancies in the reported evidence about this impact, and despite the high quality of the reviewed studies, it was not possible to highlight any trend defining which alteration had the largest impact. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Problems of Excess Capacity

    NASA Technical Reports Server (NTRS)

    Douglas, G.

    1972-01-01

    The problems of excess capacity in the airline industry are discussed with focus on the following topics: load factors; fair rate of return on investment; service-quality rivalry among airlines; pricing (fare) policies; aircraft production; and the impacts of excess capacity on operating costs. Also included is a discussion of the interrelationships among these topics.

  10. Excessive Acquisition in Hoarding

    PubMed Central

    Frost, Randy O.; Tolin, David F.; Steketee, Gail; Fitch, Kristin E.; Selbo-Bruns, Alexandra

    2009-01-01

    Compulsive hoarding (the acquisition of and failure to discard large numbers of possessions) is associated with substantial health risk, impairment, and economic burden. However, little research has examined separate components of this definition, particularly excessive acquisition. The present study examined acquisition in hoarding. Participants, 878 self-identified with hoarding and 665 family informants (not matched to hoarding participants), completed an internet survey. Among hoarding participants who met criteria for clinically significant hoarding, 61% met criteria for a diagnosis of compulsive buying and approximately 85% reported excessive acquisition. Family informants indicated that nearly 95% exhibited excessive acquisition. Those who acquired excessively had more severe hoarding; their hoarding had an earlier onset and resulted in more psychiatric work impairment days; and they experienced more symptoms of obsessive-compulsive disorder, depression, and anxiety. Two forms of excessive acquisition (buying and free things) each contributed independent variance in the prediction of hoarding severity and related symptoms. PMID:19261435

  11. Excessive acquisition in hoarding.

    PubMed

    Frost, Randy O; Tolin, David F; Steketee, Gail; Fitch, Kristin E; Selbo-Bruns, Alexandra

    2009-06-01

    Compulsive hoarding (the acquisition of and failure to discard large numbers of possessions) is associated with substantial health risk, impairment, and economic burden. However, little research has examined separate components of this definition, particularly excessive acquisition. The present study examined acquisition in hoarding. Participants, 878 self-identified with hoarding and 665 family informants (not matched to hoarding participants), completed an Internet survey. Among hoarding participants who met criteria for clinically significant hoarding, 61% met criteria for a diagnosis of compulsive buying and approximately 85% reported excessive acquisition. Family informants indicated that nearly 95% exhibited excessive acquisition. Those who acquired excessively had more severe hoarding; their hoarding had an earlier onset and resulted in more psychiatric work impairment days; and they experienced more symptoms of obsessive-compulsive disorder, depression, and anxiety. Two forms of excessive acquisition (buying and free things) each contributed independent variance in the prediction of hoarding severity and related symptoms.

  12. Estimation of cardiovascular risk on routine chest CT: Ordinal coronary artery calcium scoring as an accurate predictor of Agatston score ranges.

    PubMed

    Azour, Lea; Kadoch, Michael A; Ward, Thomas J; Eber, Corey D; Jacobi, Adam H

    Coronary artery calcium (CAC) is often identified on routine chest computed tomography (CT). The purpose of our study was to evaluate whether ordinal scoring of CAC on non-gated, routine chest CT is an accurate predictor of Agatston score ranges in a community-based population, and in particular to determine the accuracy of an ordinal score of zero on routine chest CT. Two thoracic radiologists reviewed consecutive same-day ECG-gated and routine non-gated chest CT scans of 222 individuals. CAC was quantified using the Agatston scoring on the ECG-gated scans, and using an ordinal method on routine scans, with a score from 0 to 12. The pattern and distribution of CAC was assessed. The correlation between routine exam ordinal scores and Agatston scores in ECG-gated exams, as well as the accuracy of assigning a zero calcium score on routine chest CT was determined. CAC was most prevalent in the left anterior descending coronary artery in both single and multi-vessel coronary artery disease. There was a strong correlation between the non-gated ordinal and ECG-gated Agatston scores (r = 0.811, p < 0.01). Excellent inter-reader agreement (k = 0.95) was shown for the presence (total ordinal score ≥1) or absence (total ordinal score = 0) of CAC on routine chest CT. The negative predictive value for a total ordinal score of zero on routine CT was 91.6% (95% CI, 85.1-95.9). Total ordinal scores of 0, 1-3, 4-5, and ≥6 corresponded to average Agatston scores of 0.52 (0.3-0.8), 98.7 (78.2-117.1), 350.6 (264.9-436.3) and 1925.4 (1526.9-2323.9). Visual assessment of CAC on non-gated routine chest CT accurately predicts Agatston score ranges, including the zero score, in ECG-gated CT. Inclusion of this information in radiology reports may be useful to convey important information on cardiovascular risk, particularly premature atherosclerosis in younger patients. Copyright © 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights
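
The reported correspondence between total ordinal scores and average Agatston scores can be expressed as a simple lookup. The values below are transcribed from the abstract; the function name and binning are ours:

```python
# Map the study's total ordinal CAC score to the average Agatston score
# reported for that range (0, 1-3, 4-5, >=6).

def mean_agatston_for_ordinal(total_ordinal):
    if total_ordinal == 0:
        return 0.52
    if 1 <= total_ordinal <= 3:
        return 98.7
    if 4 <= total_ordinal <= 5:
        return 350.6
    return 1925.4          # total ordinal score >= 6
```

Such a mapping lets a routine chest CT report convey an approximate Agatston range, and hence cardiovascular risk category, without a dedicated gated study.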

  13. Eclipsing Binaries as Astrophysical Laboratories: CM Draconis - Accurate Absolute Physical Properties of Low Mass Stars and an Independent Estimate of the Primordial Helium Abundance

    NASA Astrophysics Data System (ADS)

    McCook, G. P.; Guinan, E. F.; Saumon, D.; Kang, Y. W.

    1997-05-01

    CM Draconis (Gl 630.1; Vmax = +12.93) is an important eclipsing binary consisting of two dM4.5e stars with an orbital period of 1.2684 days. This binary is a high velocity star (s = 164 km/s) and the brighter member of a common proper motion pair with a cool faint white dwarf companion (LP 101-16). CM Dra and its white dwarf companion were once considered by Zwicky to belong to a class of "pygmy stars", but they turned out to be ordinary old, cool white dwarfs or faint red dwarfs. Lacy (ApJ 218, 444L) determined the first orbital and physical properties of CM Dra from the analysis of his light and radial velocity curves. In addition to providing directly measured masses, radii, and luminosities for low mass stars, CM Dra was also recognized by Lacy and later by Paczynski and Sienkiewicz (ApJ 286, 332) as an important laboratory for cosmology, as a possible old Pop II object where it may be possible to determine the primordial helium abundance. Recently, Metcalfe et al. (ApJ 456, 356) obtained accurate RV measures for CM Dra and recomputed refined elements along with its helium abundance. Starting in 1995, we have been carrying out intensive RI photoelectric photometry of CM Dra to obtain well defined, accurate light curves so that its fundamental properties can be improved, and at the same time, to search for evidence of planets around the binary from planetary transit eclipses. During 1996 and 1997 well defined light curves were secured and these were combined with the RV measures of Metcalfe et al. (1996) to determine the orbital and physical parameters of the system, including a refined orbital period. A recent version of the Wilson-Devinney program was used to analyze the data. New radii, masses, mean densities, Teff, and luminosities were found as well as a re-determination of the helium abundance (Y). The results of the recent analyses of the light and RV curves will be presented and modelling results discussed. This research is supported by NSF grants AST-9315365

  14. Polydimethylsiloxane-air partition ratios for semi-volatile organic compounds by GC-based measurement and COSMO-RS estimation: Rapid measurements and accurate modelling.

    PubMed

    Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M

    2016-08-01

    Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of PDMS: values of PDMS-to-air partition ratios or coefficients (KPDMS-Air) and the time to equilibrium for a range of SVOCs. Measured values of KPDMS-Air, Exp' at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air, Exp' and estimates made using the pp-LFER model (log KPDMS-Air, pp-LFER) and the COSMOtherm program (log KPDMS-Air, COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, indicating, as anticipated, the better performance of the pp-LFER model relative to COSMO-RS. Calculations made using measured KPDMS-Air, Exp' values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) to ∼500 years for tris (4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of KPDMS-Air.

  15. Is the predicted postoperative FEV1 estimated by planar lung perfusion scintigraphy accurate in patients undergoing pulmonary resection? Comparison of two processing methods.

    PubMed

    Caglar, Meltem; Kara, Murat; Aksoy, Tamer; Kiratli, Pinar Ozgen; Karabulut, Erdem; Dogan, Riza

    2010-07-01

    Estimation of postoperative forced expiratory volume in 1 s (FEV1) with radionuclide lung scintigraphy is frequently used to define functional operability in patients undergoing lung resection. We conducted a study to outline the reliability of planar quantitative lung perfusion scintigraphy (QLPS) with two different processing methods to estimate the postoperative lung function in patients with resectable lung disease. Forty-one patients with a mean age of 57 +/- 12 years who underwent either a pneumonectomy (n = 14) or a lobectomy (n = 27) were included in the study. QLPS with Tc-99m macroaggregated albumin was performed. For each patient, three equal zones were generated for each lung [zone method (ZM)], and more precise regions of interest were drawn according to their anatomical shape in the anterior and posterior projections [lobe mapping method (LMM)]. The predicted postoperative (ppo) FEV1 values were compared with actual FEV1 values measured on postoperative day 1 (pod1 FEV1) and day 7 (pod7 FEV1). The means of the preoperative FEV1 and ppoFEV1 values were 2.10 +/- 0.57 and 1.57 +/- 0.44 L, respectively. The mean pod1 FEV1 (1.04 +/- 0.30 L) was lower than the ppoFEV1 (p < 0.0001) but increased by day 7 (1.31 +/- 0.32 L) (p < 0.0001); however, it never reached the predicted values. The ZM and LMM estimated mean ppoFEV1 as 1.56 +/- 0.45 and 1.57 +/- 0.44 L, respectively. Both methods overestimated the actual value by 50% (ZM) and 51% (LMM) for pod1, and by 19% (ZM) and 20% (LMM) for pod7. This overestimation was more pronounced in patients with chronic lung disease and hilar tumors. No significant differences were observed between ppoFEV1 values estimated by the ZM and the LMM (p > 0.05). The ppoFEV1 values predicted by both methods overestimated the actual measured lung volumes in patients undergoing pulmonary resection in the early postoperative period. LMM is not superior to ZM.
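
The perfusion-based calculation underlying such predictions is ppoFEV1 = preoperative FEV1 × (1 − fraction of total perfusion contributed by the resected regions). A minimal sketch of the zone-style arithmetic (the perfusion counts and zone labels below are illustrative, not data from the study):

```python
# ppoFEV1 from perfusion scintigraphy: scale the preoperative FEV1 by the
# perfusion fraction that will remain after resection.

def ppo_fev1(preop_fev1, zone_counts, resected_zones):
    total = sum(zone_counts.values())
    resected = sum(zone_counts[z] for z in resected_zones)
    return preop_fev1 * (1.0 - resected / total)

# six zones (upper/middle/lower of each lung), counts in arbitrary units
counts = {"RU": 18, "RM": 17, "RL": 20, "LU": 16, "LM": 14, "LL": 15}
est = ppo_fev1(2.10, counts, ["RU", "RM", "RL"])   # right pneumonectomy
```

With these illustrative counts the right lung carries 55% of perfusion, so a 2.10 L preoperative FEV1 predicts about 0.95 L postoperatively.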

  16. Arm span and ulnar length are reliable and accurate estimates of recumbent length and height in a multiethnic population of infants and children under 6 years of age.

    PubMed

    Forman, Michele R; Zhu, Yeyi; Hernandez, Ladia M; Himes, John H; Dong, Yongquan; Danish, Robert K; James, Kyla E; Caulfield, Laura E; Kerver, Jean M; Arab, Lenore; Voss, Paula; Hale, Daniel E; Kanafani, Nadim; Hirschfeld, Steven

    2014-09-01

    Surrogate measures are needed when recumbent length or height is unobtainable or unreliable. Arm span has been used as a surrogate but is not feasible in children with shoulder or arm contractures. Ulnar length is not usually impaired by joint deformities, yet its utility as a surrogate has not been adequately studied. In this cross-sectional study, we aimed to examine the accuracy and reliability of ulnar length measured by different tools as a surrogate measure of recumbent length and height. Anthropometrics [recumbent length, height, arm span, and ulnar length by caliper (ULC), ruler (ULR), and grid (ULG)] were measured in 1479 healthy infants and children aged <6 y across 8 study centers in the United States. Multivariate mixed-effects linear regression models for recumbent length and height were developed by using ulnar length and arm span as surrogate measures. The agreement between the measured length or height and the predicted values by ULC, ULR, ULG, and arm span were examined by Bland-Altman plots. All 3 measures of ulnar length and arm span were highly correlated with length and height. The degree of precision of prediction equations for length by ULC, ULR, and ULG (R(2) = 0.95, 0.95, and 0.92, respectively) was comparable with that by arm span (R(2) = 0.97) using age, sex, and ethnicity as covariates; however, height prediction by ULC (R(2) = 0.87), ULR (R(2) = 0.85), and ULG (R(2) = 0.88) was less comparable with arm span (R(2) = 0.94). Our study demonstrates that arm span and ULC, ULR, or ULG can serve as accurate and reliable surrogate measures of recumbent length and height in healthy children; however, ULC, ULR, and ULG tend to slightly overestimate length and height in young infants and children. Further testing of ulnar length as a surrogate is warranted in physically impaired or nonambulatory children.
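
A surrogate-length model of the form used above is an ordinary least-squares fit of recumbent length on ulnar length. The sketch below fits and applies such a line; the data pairs are synthetic, and the paper's actual models also include age, sex, and ethnicity as covariates:

```python
# Simple OLS fit: recumbent length (cm) predicted from ulnar length (cm).
pairs = [(9.0, 62.0), (10.5, 70.0), (12.0, 78.5),
         (13.5, 86.0), (15.0, 94.5), (16.5, 101.0)]   # (ulna, length)
n = len(pairs)
mx = sum(u for u, _ in pairs) / n
my = sum(l for _, l in pairs) / n
slope = (sum((u - mx) * (l - my) for u, l in pairs)
         / sum((u - mx) ** 2 for u, _ in pairs))
intercept = my - slope * mx

def predict_length(ulna_cm):
    return intercept + slope * ulna_cm
```

The same machinery applies to any of the three ulnar measurement tools (caliper, ruler, or grid); only the fitted coefficients would differ.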

  17. Results from the HARPS-N 2014 Campaign to Estimate Accurately the Densities of Planets Smaller than 2.5 Earth Radii

    NASA Astrophysics Data System (ADS)

    Charbonneau, David; Harps-N Collaboration

    2015-01-01

    Although the NASA Kepler Mission has determined the physical sizes of hundreds of small planets, and we have in many cases characterized the star in detail, we know virtually nothing about the planetary masses: There are only 7 planets smaller than 2.5 Earth radii for which there exist published mass estimates with a precision better than 20 percent, the bare minimum value required to begin to distinguish between different models of composition. HARPS-N is an ultra-stable fiber-fed high-resolution spectrograph optimized for the measurement of very precise radial velocities. We have 80 nights of guaranteed time per year, of which half are dedicated to the study of small Kepler planets. In preparation for the 2014 season, we compared all available Kepler Objects of Interest to identify the ones for which our 40 nights could be used most profitably. We analyzed the Kepler light curves to constrain the stellar rotation periods, the lifetimes of active regions on the stellar surface, and the noise that would result in our radial velocities. We assumed various mass-radius relations to estimate the observing time required to achieve a mass measurement with a precision of 15%, giving preference to stars that had been well characterized through asteroseismology. We began by monitoring our long list of targets. Based on preliminary results we then selected our final short list, gathering typically 70 observations per target during summer 2014. The resulting mass measurements will have a significant impact on our understanding of these so-called super-Earths and small Neptunes. They would form a core dataset with which the international astronomical community can meaningfully seek to understand these objects and their formation in a quantitative fashion. HARPS-N was funded by the Swiss Space Office, the Harvard Origin of Life Initiative, the Scottish Universities Physics Alliance, the University of Geneva, the Smithsonian Astrophysical Observatory, the Italian National

  18. Hyperhidrosis (Excessive Sweating)

    MedlinePlus

    ... a cause (Alzheimer’s Association) Iontophoresis (the no-sweat machine) If excessive sweating affects your hands, feet, or ... this is an option, the dermatologist uses a machine that emits electromagnetic energy. This energy destroys the ...

  19. How accurate is pulse rate variability as an estimate of heart rate variability? A review on studies comparing photoplethysmographic technology with an electrocardiogram.

    PubMed

    Schäfer, Axel; Vagedes, Jan

    2013-06-05

    The usefulness of heart rate variability (HRV) as a clinical research and diagnostic tool has been verified in numerous studies. The gold standard technique comprises analyzing time series of RR intervals from an electrocardiographic signal. However, some authors have used pulse cycle intervals instead of RR intervals, as they can be determined from a pulse wave (e.g. a photoplethysmographic) signal. This option is often called pulse rate variability (PRV), and utilizing it could expand the serviceability of pulse oximeters or simplify ambulatory monitoring of HRV. We review studies investigating the accuracy of PRV as an estimate of HRV, regardless of the underlying technology (photoplethysmography, continuous blood pressure monitoring or Finapres, impedance plethysmography). Results speak in favor of sufficient accuracy when subjects are at rest, although many studies suggest that short-term variability is somewhat overestimated by PRV, which reflects coupling effects between respiration and the cardiovascular system. Physical activity and some mental stressors seem to impair the agreement of PRV and HRV, often to an unacceptable extent. Findings regarding the position of the sensor or the detection algorithm are not conclusive. Generally, quantitative conclusions are impeded by the fact that results of different studies are mostly incommensurable due to diverse experimental settings and/or methods of analysis. Copyright © 2012. Published by Elsevier Ireland Ltd.

  20. An accurate density functional theory based estimation of pK(a) values of polar residues combined with experimental data: from amino acids to minimal proteins.

    PubMed

    Matsui, Toru; Baba, Takeshi; Kamiya, Katsumasa; Shigeta, Yasuteru

    2012-03-28

    We report a scheme for estimating the acid dissociation constant (pK(a)) based on quantum-chemical calculations combined with a polarizable continuum model, where a parameter is determined for small reference molecules. We calculated the pK(a) values of variously sized molecules ranging from an amino acid to a protein consisting of 300 atoms. This scheme enabled us to derive a semiquantitative pK(a) value of specific chemical groups and discuss the influence of the surroundings on the pK(a) values. As applications, we have derived the pK(a) value of the side chain of an amino acid and almost reproduced the experimental value. By using our computing schemes, we showed the influence of hydrogen bonds on the pK(a) values in the case of tripeptides, which decreases the pK(a) value by 3.0 units for serine in comparison with those of the corresponding monopeptides. Finally, with some assumptions, we derived the pK(a) values of tyrosines and serines in chignolin and a tryptophan cage. We obtained quite different pK(a) values of adjacent serines in the tryptophan cage; the pK(a) value of the OH group of Ser13 exposed to bulk water is 14.69, whereas that of Ser14 not exposed to bulk water is 20.80 because of the internal hydrogen bonds.
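    Schemes of this kind ultimately rest on the standard thermodynamic relation pKa = ΔG_deprot / (RT ln 10). A minimal sketch of that conversion with a hypothetical free-energy value; this is not the authors' parameterized scheme, which additionally calibrates against small reference molecules:

```python
import math

R = 8.314462618e-3  # gas constant, kJ/(mol*K)

def pka_from_free_energy(delta_g_kj_mol, temperature=298.15):
    """Convert a deprotonation free energy (kJ/mol) to a pKa via
    pKa = dG / (RT ln 10). The dG value used below is hypothetical."""
    return delta_g_kj_mol / (R * temperature * math.log(10))

# Illustrative only: a dG of ~57 kJ/mol corresponds to a pKa near 10
print(round(pka_from_free_energy(57.0), 2))
```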

  1. Three calibration factors, applied to a rapid sweeping method, can accurately estimate Aedes aegypti (Diptera: Culicidae) pupal numbers in large water-storage containers at all temperatures at which dengue virus transmission occurs.

    PubMed

    Romero-Vivas, C M E; Llinás, H; Falconar, A K I

    2007-11-01

    The ability of a simple sweeping method, coupled to calibration factors, to accurately estimate the total numbers of Aedes aegypti (L.) (Diptera: Culicidae) pupae in water-storage containers (20-6412-liter capacities at different water levels) throughout their main dengue virus transmission temperature range was evaluated. Using this method, one set of three calibration factors was derived that could accurately estimate the total Ae. aegypti pupae in their principal breeding sites, large water-storage containers, found throughout the world. No significant differences were obtained using the method at different altitudes (14-1630 m above sea level) that included the range of temperatures (20-30 degrees C) at which dengue virus transmission occurs in the world. In addition, no significant differences were found in the results obtained between and within the 10 different teams that applied this method; therefore, this method was extremely robust. One person could estimate the Ae. aegypti pupae in each of the large water-storage containers in only 5 min by using this method, compared with two people requiring between 45 and 90 min to collect and count the total pupae population in each of them. Because the method was rapid to perform and did not disturb the sediment layers in these domestic water-storage containers, it was more acceptable to the residents and, therefore, ideally suited for routine surveillance purposes and to assess the efficacy of Ae. aegypti control programs in dengue virus-endemic areas throughout the world.

  2. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Importantly, error estimates are provided, and one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
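    Richardson's extrapolation combines two estimates computed at step sizes h and h/2 to cancel the leading error term. A minimal sketch, applied here to a central-difference derivative rather than the paper's Schrödinger solver:

```python
import math

def richardson(f_h, f_h2, order=2):
    """One Richardson extrapolation step: combine estimates at step h and h/2,
    assuming the leading error term is O(h**order)."""
    k = 2 ** order
    return (k * f_h2 - f_h) / (k - 1)

def central_diff(f, x, h):
    """Second-order-accurate central-difference derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

d1 = central_diff(math.sin, 1.0, 0.1)      # O(h^2) accurate
d2 = central_diff(math.sin, 1.0, 0.05)
d_extrap = richardson(d1, d2, order=2)     # leading h^2 error cancelled
# The extrapolated value beats the finer-step estimate:
print(abs(d_extrap - math.cos(1.0)) < abs(d2 - math.cos(1.0)))  # True
```

    Repeating the step on successively halved meshes gives the "repeated Richardson's extrapolation" the abstract refers to.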

  3. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    SciTech Connect

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-04-01

    traditionally used to estimate spinal cord NTCP may not apply to the dosimetry of SRS. Further research with additional NTCP models is needed.

  4. Symptom profiles of subsyndromal depression in disease clusters of diabetes, excess weight, and progressive cerebrovascular conditions: a promising new type of finding from a reliable innovation to estimate exhaustively specified multiple indicators–multiple causes (MIMIC) models

    PubMed Central

    Francoeur, Richard B

    2016-01-01

    Addressing subsyndromal depression in cerebrovascular conditions, diabetes, and obesity reduces morbidity and risk of major depression. However, depression may be masked because self-reported symptoms may not reveal dysphoric (sad) mood. In this study, the first wave (2,812 elders) from the New Haven Epidemiological Study of the Elderly (EPESE) was used. These population-weighted data combined a stratified, systematic, clustered random sample from independent residences and a census of senior housing. Physical conditions included progressive cerebrovascular disease (CVD; hypertension, silent CVD, stroke, and vascular cognitive impairment [VCI]) and co-occurring excess weight and/or diabetes. These conditions and interactions (clusters) simultaneously predicted 20 depression items and a latent trait of depression in participants with subsyndromal (including subthreshold) depression (11≤ Center for Epidemiologic Studies Depression Scale [CES-D] score ≤27). The option for maximum likelihood estimation with standard errors that are robust to non-normality and non-independence in complex random samples (MLR) in Mplus and an innovation created by the author were used for estimating unbiased effects from latent trait models with exhaustive specification. Symptom profiles reveal masked depression in 1) older males, related to the metabolic syndrome (hypertension–overweight–diabetes; silent CVD–overweight; and silent CVD–diabetes) and 2) older females or the full sample, related to several diabetes and/or overweight clusters that involve stroke or VCI. Several other disease clusters are equivocal regarding masked depression; a couple do emphasize dysphoric mood. Replicating findings could identify subgroups for cost-effective screening of subsyndromal depression. PMID:28003768

  5. Nonaccommodative convergence excess.

    PubMed

    von Noorden, G K; Avilla, C W

    1986-01-15

    Nonaccommodative convergence excess is a condition in which a patient has orthotropia or a small-angle esophoria or esotropia at distance and a large-angle esotropia at near, not significantly reduced by the addition of spherical plus lenses. The AC/A ratio, determined with the gradient method, is normal or subnormal. Tonic convergence is suspected of causing the convergence excess in these patients. Nonaccommodative convergence excess must be distinguished from esotropia with a high AC/A ratio and from hypoaccommodative esotropia. In 24 patients treated with recession of both medial recti muscles with and without posterior fixation or by posterior fixation alone, the mean correction of esotropia was 7.4 prism diopters at distance and 17 prism diopters at near.

  6. [Excessive daytime sleepiness].

    PubMed

    Bittencourt, Lia Rita Azeredo; Silva, Rogério Santos; Santos, Ruth Ferreira; Pires, Maria Laura Nogueira; Mello, Marco Túlio de

    2005-05-01

    Sleepiness is a physiological function, and can be defined as an increased propensity to fall asleep. However, excessive sleepiness (ES) or hypersomnia refers to an abnormal increase in the probability of falling asleep, taking involuntary naps, or having sleep attacks, when sleep is not desired. The main causes of excessive sleepiness are chronic sleep deprivation, sleep apnea syndrome, narcolepsy, movement disorders during sleep, circadian sleep disorders, use of drugs and medications, or idiopathic hypersomnia. Social, familial, work, and cognitive impairment are among the consequences of hypersomnia. Moreover, an increased risk of accidents has also been reported. The treatment of excessive sleepiness includes treating the primary cause, whenever identified. Sleep hygiene for sleep deprivation, continuous positive airway pressure (CPAP) for sleep apnea, dopaminergic agents and exercises for sleep-related movement disorders, phototherapy and/or melatonin for circadian disorders, and use of stimulants are the treatment modalities of first choice.

  7. Addiction as excessive appetite.

    PubMed

    Orford, J

    2001-01-01

    The excessive appetite model of addiction is summarized. The paper begins by considering the forms of excessive appetite which a comprehensive model should account for: principally, excessive drinking, smoking, gambling, eating, sex and a diverse range of drugs including at least heroin, cocaine and cannabis. The model rests, therefore, upon a broader concept of what constitutes addiction than the traditional, more restricted, and arguably misleading definition. The core elements of the model include: very skewed consumption distribution curves; restraint, control or deterrence; positive incentive learning mechanisms which highlight varied forms of rapid emotional change as rewards, and wide cue conditioning; complex memory schemata; secondary, acquired emotional regulation cycles, of which 'chasing', 'the abstinence violation effect' and neuroadaptation are examples; and the consequences of conflict. These primary and secondary processes, occurring within diverse sociocultural contexts, are sufficient to account for the development of a strong attachment to an appetitive activity, such that self-control is diminished, and behaviour may appear to be disease-like. Giving up excess is a natural consequence of conflict arising from strong and troublesome appetite. There is much supportive evidence that change occurs outside expert treatment, and that when it occurs within treatment the change processes are more basic and universal than those espoused by fashionable expert theories.

  8. Software Estimation: Developing an Accurate, Reliable Method

    DTIC Science & Technology

    2011-08-01

    level 5 organizations. Defects identified here for CMM level 1 and level 5 are captured from Capers Jones, who has identified software delivered... Jones, Capers, “Software Assessments, Benchmarks, and Best Practices”, Addison-Wesley Professional, April 2000. 1. At the AV-8B Joint System Support

  9. The otherness of sexuality: excess.

    PubMed

    Stein, Ruth

    2008-03-01

    The present essay, the second of a series of three, aims at developing an experience-near account of sexuality by rehabilitating the idea of excess and its place in sexual experience. It is suggested that various types of excess, such as excess of excitation (Freud), the excess of the other (Laplanche), excess beyond symbolization and the excess of the forbidden object of desire (Leviticus; Lacan) work synergistically to constitute the compelling power of sexuality. In addition to these notions, further notions of excess touch on its transformative potential. Such notions address excess that shatters psychic structures and that is actively sought so as to enable new ones to evolve (Bersani). Work is quoted that regards excess as a way of dealing with our lonely, discontinuous being by using the "excessive" cosmic energy circulating through us to achieve continuity against death (Bataille). Two contemporary analytic thinkers are engaged who deal with the object-relational and intersubjective vicissitudes of excess.

  10. Excess flow shutoff valve

    DOEpatents

    Kiffer, Micah S.; Tentarelli, Stephen Clyde

    2016-02-09

    Excess flow shutoff valve comprising a valve body, a valve plug, a partition, and an activation component where the valve plug, the partition, and activation component are disposed within the valve body. A suitable flow restriction is provided to create a pressure difference between the upstream end of the valve plug and the downstream end of the valve plug when fluid flows through the valve body. The pressure difference exceeds a target pressure difference needed to activate the activation component when fluid flow through the valve body is higher than a desired rate, and thereby closes the valve.

  11. Excess and unlike interaction second virial coefficients and excess enthalpy of mixing of (carbon monoxide + pentane)

    SciTech Connect

    McElroy, P.J.; Buchanan, S.

    1995-03-01

    Carbon monoxide and pentane are minor components of natural gas. The excess second virial coefficient of the mixture carbon monoxide + pentane has been determined at 299.5, 313.15, 328.15, and 343.15 K using the pressure change on mixing method. Unlike interaction second virial coefficients were derived and compared with the predictions of the Tsonopoulos correlation. The excess enthalpy of mixing was also estimated.

  12. Characterization of Methane Excess and Absolute Adsorption in Various Clay Nanopores from Molecular Simulation.

    PubMed

    Tian, Yuanyuan; Yan, Changhui; Jin, Zhehui

    2017-09-20

    In this work, we use grand canonical Monte Carlo (GCMC) simulation to study methane adsorption in various clay nanopores and analyze different approaches to characterize the absolute adsorption. As an important constituent of shale, clay minerals can contain a significant amount of nanopores, which greatly contribute to the gas-in-place in shale. In previous works, absolute adsorption is often calculated from the excess adsorption and the bulk liquid-phase density of the adsorbate. We find that the methane adsorbed-phase density keeps increasing with pressure up to 80 MPa. Even with the updated adsorbed-phase density from GCMC, there is a significant error in the absolute adsorption calculation. Thus, we propose to use the excess adsorption and the adsorbed-phase volume to calculate absolute adsorption, reducing the discrepancy to less than 3% at high-pressure conditions. We also find that the supercritical Dubinin-Radushkevich (SDR) fitting method, which is commonly used in experiments to convert the excess adsorption to absolute adsorption, may not have a solid physical foundation for methane adsorption. The methane excess and absolute adsorptions per specific surface area are similar for different clay minerals, in line with previous experimental data. In mesopores, the excess and absolute adsorptions per specific surface area become insensitive to pore size. Our work should provide important fundamental understandings and insights into accurate estimation of gas-in-place in shale reservoirs.
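    The conversion proposed above amounts to n_abs = n_ex + ρ_bulk·V_ads, where V_ads is the adsorbed-phase volume. A minimal sketch with hypothetical numbers (units chosen for illustration only):

```python
def absolute_adsorption(n_excess, rho_bulk, v_adsorbed):
    """Absolute adsorption from excess adsorption, bulk gas density, and
    adsorbed-phase volume: n_abs = n_ex + rho_bulk * V_ads.
    The excess quantity omits the gas that would occupy the adsorbed-phase
    volume at bulk density; adding that term back gives the absolute amount."""
    return n_excess + rho_bulk * v_adsorbed

# Hypothetical values: mmol/g, mmol/cm^3, cm^3/g
n_ex = 0.80
rho_bulk = 2.5
v_ads = 0.10
print(absolute_adsorption(n_ex, rho_bulk, v_ads))  # 0.80 + 0.25
```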

  13. Excess mortality in Harlem.

    PubMed

    McCord, C; Freeman, H P

    1990-01-18

    In recent decades mortality rates have declined for both white and nonwhite Americans, but national averages obscure the extremely high mortality rates in many inner-city communities. Using data from the 1980 census and from death certificates in 1979, 1980, and 1981, we examined mortality rates in New York City's Central Harlem health district, where 96 percent of the inhabitants are black and 41 percent live below the poverty line. For Harlem, the age-adjusted rate of mortality from all causes was the highest in New York City, more than double that of U.S. whites and 50 percent higher than that of U.S. blacks. Almost all the excess mortality was among those less than 65 years old. With rates for the white population as the basis for comparison, the standardized (adjusted for age) mortality ratios (SMRs) for deaths under the age of 65 in Harlem were 2.91 for male residents and 2.70 for female residents. The highest ratios were for women 25 to 34 years old (SMR, 6.13) and men 35 to 44 years old (SMR, 5.98). The chief causes of this excess mortality were cardiovascular disease (23.5 percent of the excess deaths; SMR, 2.23), cirrhosis (17.9 percent; SMR, 10.5), homicide (14.9 percent; SMR, 14.2), and neoplasms (12.6 percent; SMR, 1.77). Survival analysis showed that black men in Harlem were less likely to reach the age of 65 than men in Bangladesh. Of the 353 health areas in New York, 54 (with a total population of 650,000) had mortality rates for persons under 65 years old that were at least twice the expected rate. All but one of these areas of high mortality were predominantly black or Hispanic. We conclude that Harlem and probably other inner-city areas with largely black populations have extremely high mortality rates that justify special consideration analogous to that given to natural-disaster areas.
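    A standardized mortality ratio of the kind reported here is observed deaths divided by the deaths expected if reference age-specific rates applied to the study population. An illustrative sketch with hypothetical rates and population counts (not the Harlem data):

```python
def smr(observed_deaths, age_specific_rates, population_by_age):
    """Standardized mortality ratio: observed deaths divided by the deaths
    expected under reference (e.g. U.S. white) age-specific rates."""
    expected = sum(rate * pop for rate, pop in zip(age_specific_rates,
                                                   population_by_age))
    return observed_deaths / expected

# Hypothetical reference rates (deaths per person-year) and person-years at risk
ref_rates  = [0.001, 0.002, 0.005]   # e.g. ages 25-34, 35-44, 45-54
population = [10000, 8000, 6000]
observed = 168
print(round(smr(observed, ref_rates, population), 2))  # expected = 56, SMR = 3.0
```

    An SMR of 3.0 would mean three times as many deaths as the reference rates predict, comparable in magnitude to the under-65 ratios the study reports.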

  14. Implementation of DOE/NFDI D&D Cost Estimating Tool (POWERtool) for Initiative Facilities at the Savannah River Site

    SciTech Connect

    Austin, W. E.; WSRC; Baker, S. B. III, Cutshall, C. M.; Crouse, J. L.

    2003-02-26

    The Savannah River Site (SRS) has embarked on an aggressive D&D program to reduce the footprint of excess facilities. Key to the success of this effort is the preparation of accurate cost estimates for decommissioning. SRS traditionally uses "top-down" rough order-of-magnitude (ROM) estimating for decommissioning cost estimates. A second cost estimating method (POWERtool) using a "bottom-up" approach has been applied to many of the SRS excess facilities in the T- and D-Areas. This paper describes the use of both estimating methods and compares the estimated costs to the actual costs of five facilities that were decommissioned in 2002.

  15. Consequences of excess iodine

    PubMed Central

    Leung, Angela M.; Braverman, Lewis E.

    2014-01-01

    Iodine is a micronutrient that is essential for the production of thyroid hormones. The primary source of iodine is the diet via consumption of foods that have been fortified with iodine, including salt, dairy products and bread, or that are naturally abundant in the micronutrient, such as seafood. Recommended daily iodine intake is 150 μg in adults who are not pregnant or lactating. Ingestion of iodine or exposure above this threshold is generally well-tolerated. However, in certain susceptible individuals, including those with pre-existing thyroid disease, the elderly, fetuses and neonates, or patients with other risk factors, the risk of developing iodine-induced thyroid dysfunction might be increased. Hypothyroidism or hyperthyroidism as a result of supraphysiologic iodine exposure might be either subclinical or overt, and the source of the excess iodine might not be readily apparent. PMID:24342882

  16. Spectroscopic analysis of KISO ultraviolet-excess galaxies

    NASA Astrophysics Data System (ADS)

    Maehara, Hideo; Noguchi, Takeshi; Takase, Bunshiro; Handa, Toshihiro

    Spectroscopic properties of 57 ultraviolet-excess galaxies (KUGs), which were selected from the Kiso survey by Takase et al. (1983), are presented. The observational data are low-resolution spectra taken with the Cassegrain image-intensifier spectrograph of the Okayama 188-cm telescope. About 85 percent of this sample exhibit conspicuous emission lines similar to those of galactic nebulae. The radial velocities of the objects have been obtained from their emission lines to an accuracy of ±90 km/s. The absolute magnitudes estimated from the radial velocities indicate that a wide range exists in the blue luminosity of irregular galaxies, and that this sample includes less luminous spiral galaxies. Equivalent widths of emission lines have been measured against the local continuum, and a diagram of the emission-line ratio [O III] 5007 Å/H-beta versus [N II] 6584 Å/H-alpha is applied to classify these objects. The diagram suggests that most KUGs are those which have giant H II regions or H II complexes, where bursts of star formation take place on enhanced scales. On the other hand, Seyfert galaxies and other kinds of peculiar galaxies are possibly included as minor members of KUGs. It is shown that the Kiso survey includes far more ultraviolet-excess galaxies of fainter magnitudes than the first Markarian survey.

  17. High-resolution infrared spectroscopy in the 1,200-1,300 cm(-1) region and accurate theoretical estimates for the structure and ring-puckering barrier of perfluorocyclobutane.

    PubMed

    Blake, Thomas A; Glendening, Eric D; Sams, Robert L; Sharpe, Steven W; Xantheas, Sotiris S

    2007-11-08

    We present experimental infrared spectra and theoretical electronic structure results for the geometry, anharmonic vibrational frequencies, and accurate estimates of the magnitude and the origin of the ring-puckering barrier in C4F8. High-resolution (0.0015 cm⁻¹) spectra of the ν12 and ν13 parallel bands of perfluorocyclobutane (c-C4F8) were recorded for the first time by expanding a 10% c-C4F8 in helium mixture in a supersonic jet. Both bands are observed to be rotationally resolved in a jet with a rotational temperature of 15 K. The ν12 mode has b2 symmetry under D2d, which correlates to a2u symmetry under D4h, and consequently has ± ← ± ring-puckering selection rules. A rigid-rotor fit of the ν12 band yields the origin at 1292.56031(2) cm⁻¹ with B′ = 0.0354137(3) cm⁻¹ and B″ = 0.0354363(3) cm⁻¹. The ν13 mode is of b2 symmetry under D2d, which correlates to b2g under D4h, and in this case the ring-puckering selection rules are ± ← ∓. Rotational transitions from the ground and first excited torsional states will be separated by the torsional splitting in the ground and excited vibrational states, and indeed we observe a splitting of each transition into strong and weak intensity components with a separation of approximately 0.0018 cm⁻¹. The strong and weak sets of transitions were fit separately, again using a rigid-rotor model, to give ν13(strong) = 1240.34858(4) cm⁻¹, B′ = 0.0354192(7) cm⁻¹, and B″ = 0.0354355(7) cm⁻¹, and ν13(weak) = 1240.34674(5) cm⁻¹, B′ = 0.0354188(9) cm⁻¹, and B″ = 0.0354360(7) cm⁻¹. High-level electronic structure calculations at the MP2 and CCSD(T) levels of theory with the family of correlation-consistent basis sets of quadruple-zeta quality, developed by Dunning and co-workers, yield best estimates for the vibrationally averaged structural parameters r(C-C) = 1.568 Å, r(C-F)α = 1.340 Å, r(C-F)β = 1.329 Å, α(F-C-F) = 110.3°, θz(C-C-C) = 89.1°, and δ(C-C-C-C) = 14.6° and

  18. Excess attenuation of an acoustic beam by turbulence.

    PubMed

    Pan, Naixian

    2003-12-01

    A theory based on the concept of a spatial sinusoidal diffraction grating is presented for the estimation of the excess attenuation of an acoustic beam. The equation for the excess attenuation coefficient shows that the excess attenuation of an acoustic beam depends not only on the turbulence but also on application parameters such as the beam width, the beam orientation, and whether propagation is forward or backscatter. Analysis shows that the excess attenuation appears to have a cube-root frequency dependence. The expression for the excess attenuation coefficient has been used in estimations of the temperature structure coefficient, C_T^2, in sodar sounding. Correcting C_T^2 values for excess attenuation greatly reduces their errors. Published profiles of the temperature structure coefficient and the velocity structure coefficient in convective conditions are used to test our theory, which is compared with the theory of Brown and Clifford. The excess attenuation due to scattering from turbulence and atmospheric absorption are both taken into account in sodar data processing for deducing the contribution of the lower atmosphere to seeing, i.e. the sharpness of a telescope image as determined by the degree of turbulence in the Earth's atmosphere. The comparison between the contribution of the lowest 300-m layer to seeing and that of the whole atmosphere supports the reasonableness of our estimation of excess attenuation.

  19. The single water-surface sweep estimation method accurately estimates very low (n = 4) to low-moderate (n = 25-100) and high (n > 100) Aedes aegypti (Diptera: Culicidae) pupae numbers in large water containers up to 13 times faster than the exhaustive sweep and total count method and without any sediment contamination.

    PubMed

    Romero-Vivas, C M; Llinás, H; Falconar, A K

    2015-03-01

    To confirm that a single water-surface sweep-net collection coupled with three calibration factors (2.6, 3.0 and 3.5 for 1/3, 2/3 and 3/3 water levels, respectively) (WSCF) could accurately estimate very low to high Aedes aegypti pupae numbers in water containers more rapidly than the exhaustive 5-sweep and total count (ESTC) method recommended by WHO. Both methods were compared in semi-field trials using low (n = 25) to moderate (n = 50-100) pupae numbers in a 250-l drum at 1/3, 2/3 and 3/3 water levels, and by their mean-time determinations using 200 pupae in three 220- to 1024-l water containers at these water levels. Accuracy was further assessed using 69.1% (393/569) of the field-based drums and tanks which contained <100 pupae. The WSCF method accurately estimated total populations in the semi-field trials up to 13.0 times faster than the ESTC method (all P < 0.001); no significant differences (all P-values ≥ 0.05) were obtained between the methods for very low (n = 4) to low-moderate (n = 25-100) and high (n > 100) pupae numbers/container and without sediment disturbance. The simple WSCF method sensitively, accurately and robustly estimated total pupae numbers in their principal breeding sites worldwide, containers with >20 l water volumes, significantly (2.7- to 13.0-fold: all P-values <0.001) faster than the ESTC method for very low to high pupae numbers/container without contaminating the clean water by sediment disturbance which is generated using the WHO-recommended ESTC method. The WSCF method seems ideal for global community-based surveillance and control programmes. © 2014 John Wiley & Sons Ltd.
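    The estimation step itself is a single multiplication: the one-sweep count times the calibration factor for the container's water level (2.6, 3.0, or 3.5 per the study). A minimal sketch:

```python
# Calibration factors reported in the study for 1/3, 2/3 and full water levels
CALIBRATION = {"1/3": 2.6, "2/3": 3.0, "3/3": 3.5}

def estimate_pupae(sweep_count, water_level):
    """Estimate total Ae. aegypti pupae in a large water container from a
    single water-surface sweep-net count and the container's water level."""
    return sweep_count * CALIBRATION[water_level]

# e.g. 20 pupae collected in one sweep of a two-thirds-full container
print(estimate_pupae(20, "2/3"))  # -> 60.0 estimated total pupae
```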

  20. Molar heat capacity and molar excess enthalpy measurements in aqueous amine solutions

    NASA Astrophysics Data System (ADS)

    Poozesh, Saeed

    Experimental measurements of molar heat capacity and molar excess enthalpy for 1, 4-dimethyl piperazine (1, 4-DMPZ), 1-(2-hydroxyethyl) piperazine (1, 2-HEPZ), I-methyl piperazine (1-MPZ), 3-morpholinopropyl amine (3-MOPA), and 4-(2-hydroxy ethyl) morpholine (4, 2-HEMO) aqueous solutions were carried out in a C80 heat flow calorimeter over a range of temperatures from (298.15 to 353.15) K and for the entire range of the mole fractions. The estimated uncertainty in the measured values of the molar heat capacity and molar excess enthalpy was found to be +/- 2%. Among the five amines studied, 3-MOPA had the highest values of the molar heat capacity and 1-MPZ the lowest. Values of molar heat capacities of amines were dominated by --CH 2, --N, --OH, --O, --NH2 groups and increased with increasing temperature, and contributions of --NH and --CH 3 groups decreased with increasing temperature for these cyclic amines. Molar excess heat capacities were calculated from the measured molar heat capacities and were correlated as a function of the mole fractions employing the Redlich-Kister equation. The molar excess enthalpy values were also correlated as a function of the mole fractions employing the Redlich-Kister equation. Molar enthalpies at infinite dilution were derived. Molar excess enthalpy values were modeled using the solution theory models: NRTL (Non Random Two Liquid) and UNIQUAC (UNIversal QUAsi Chemical) and the modified UNIFAC (UNIversal quasi chemical Functional group Activity Coefficients - Dortmund). The modified UNIFAC was found to be the most accurate and reliable model for the representation and prediction of the molar excess enthalpy values. Among the five amines, the 1-MPZ + water system exhibited the highest values of molar excess enthalpy on the negative side. This study confirmed the conclusion made by Maham et al. (71) that -CH3 group contributed to higher molar excess enthalpies. 
The negative excess enthalpies were reduced due to the contribution of
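The Redlich-Kister correlation used above reduces to a linear least-squares fit once the mole-fraction prefactor is pulled into the design matrix. A minimal sketch with hypothetical function names and synthetic data (not values from the thesis):

```python
import numpy as np

def redlich_kister(x1, coeffs):
    """Excess property H_E = x1*x2 * sum_i A_i * (x1 - x2)**i."""
    x1 = np.asarray(x1, dtype=float)
    x2 = 1.0 - x1
    return x1 * x2 * sum(a * (x1 - x2) ** i for i, a in enumerate(coeffs))

def fit_redlich_kister(x1, h_excess, order=3):
    """Least-squares fit of the Redlich-Kister coefficients A_0..A_order."""
    x1 = np.asarray(x1, dtype=float)
    x2 = 1.0 - x1
    # design matrix: one column x1*x2*(x1 - x2)**i per coefficient
    design = np.column_stack([x1 * x2 * (x1 - x2) ** i
                              for i in range(order + 1)])
    coeffs, *_ = np.linalg.lstsq(design, np.asarray(h_excess, dtype=float),
                                 rcond=None)
    return coeffs
```

Because the model is linear in the coefficients, no iterative solver is needed; the polynomial order is usually chosen by inspecting the fit residuals.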

  1. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order because of the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
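The basic mechanism of monotone cubic interpolation can be sketched with Fritsch-Carlson-style harmonic-mean slope limiting on a cubic Hermite interpolant; this is a simplified illustration of the class of methods, not Huynh's median-based algorithm:

```python
import numpy as np

def monotone_cubic(x, y, xq):
    """Monotone piecewise-cubic Hermite interpolation (a sketch using
    harmonic-mean slope limiting, not the paper's exact scheme)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xq = np.asarray(xq, dtype=float)
    h = np.diff(x)
    delta = np.diff(y) / h                        # secant slopes
    d = np.zeros_like(y)
    num = 2.0 * delta[:-1] * delta[1:]
    den = delta[:-1] + delta[1:]
    # harmonic-mean slope where adjacent secants share a sign, else zero
    d[1:-1] = np.where(num > 0.0, num / np.where(den != 0.0, den, 1.0), 0.0)
    d[0], d[-1] = delta[0], delta[-1]
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(h) - 1)
    t = (xq - x[i]) / h[i]
    h00 = (1 + 2 * t) * (1 - t) ** 2              # cubic Hermite basis
    h10 = t * (1 - t) ** 2
    h01 = t ** 2 * (3 - 2 * t)
    h11 = t ** 2 * (t - 1)
    return h00 * y[i] + h10 * h[i] * d[i] + h01 * y[i + 1] + h11 * h[i] * d[i + 1]
```

Limiting the nodal slopes is exactly what caps accuracy at second order near extrema; Huynh's contribution is relaxing that limiter geometrically so third- and fourth-order accuracy survive.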

  2. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
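The order of accuracy claimed for such schemes can be verified numerically: halving the grid spacing should reduce the error of an order-p scheme by about 2**p. A small convergence check using standard second- and fourth-order central differences (illustrative stencils, not the paper's specific algorithms):

```python
import numpy as np

def derivative_error(n, order):
    """Max error of a periodic central-difference first derivative of
    sin(x) on [0, 2*pi) with n grid points."""
    h = 2.0 * np.pi / n
    x = np.arange(n) * h
    u = np.sin(x)
    if order == 2:
        # 2nd-order stencil: (u_{i+1} - u_{i-1}) / (2h)
        du = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * h)
    elif order == 4:
        # 4th-order stencil: (8(u_{i+1}-u_{i-1}) - (u_{i+2}-u_{i-2})) / (12h)
        du = (8.0 * (np.roll(u, -1) - np.roll(u, 1))
              - (np.roll(u, -2) - np.roll(u, 2))) / (12.0 * h)
    else:
        raise ValueError("order must be 2 or 4")
    return float(np.abs(du - np.cos(x)).max())

# halving h should divide the error by about 2**order
rate_2 = np.log2(derivative_error(32, 2) / derivative_error(64, 2))  # ~2
rate_4 = np.log2(derivative_error(32, 4) / derivative_error(64, 4))  # ~4
```

The same refinement test applied over O(10^6) propagation periods is what distinguishes high-resolution schemes from low-order ones at eight points per wavelength.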

  3. Consequences of Excessive Educational Planning

    ERIC Educational Resources Information Center

    Benveniste, Guy

    1974-01-01

    Discusses three issues that raise serious questions about the behavioral norms and moral obligations of educational planners. From a cost-benefit point of view, excessive planning is reached when overall societal costs exceed overall societal benefits. (Author/WM)

  4. [Excessive sweating related to hydromorphone].

    PubMed

    Vinit, J; Devilliers, H; Audia, S; Leguy, V; Mura, H; Falvo, N; Berthier, S; Besancenot, J-F; Bonnotte, B; Lorcerie, B

    2009-02-01

    Diffuse and abundant sweating in a middle-aged patient, evolving over several weeks, should raise suspicion of malignant lymphoma and infectious or neuroendocrine disorders before a drug origin is considered. We report a patient who presented with severe and disabling excessive sweating related to hydromorphone therapy for vertebral pain. Among their many reported side effects, excessive sweating that disappears on discontinuation of the drug has been reported with some opiates.

  5. Resolution of Genetic Map Expansion Caused by Excess Heterozygosity in Plant Recombinant Inbred Populations

    PubMed Central

    Truong, Sandra K.; McCormick, Ryan F.; Morishige, Daryl T.; Mullet, John E.

    2014-01-01

    Recombinant inbred populations of many plant species exhibit more heterozygosity than expected under the Mendelian model of segregation. This segregation distortion causes the overestimation of recombination frequencies and consequent genetic map expansion. Here we build upon existing genetic models of differential zygotic viability to model a heterozygote fitness term and calculate expected genotypic proportions in recombinant inbred populations propagated by selfing. We implement this model using the existing open-source genetic map construction code base for R/qtl to estimate recombination fractions. Finally, we show that accounting for excess heterozygosity in a sorghum recombinant inbred mapping population shrinks the genetic map by 213 cM (a 13% decrease corresponding to 4.26 fewer recombinations per meiosis). More accurate estimates of linkage benefit linkage-based analyses used in the identification and utilization of causal genetic variation. PMID:25128435
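The differential zygotic viability idea can be sketched by propagating expected genotype frequencies through generations of selfing with a heterozygote fitness term; this is a simplified illustration, not the exact parameterization implemented by the authors in R/qtl:

```python
def ril_genotype_freqs(generations, w=1.0):
    """Expected genotype frequencies (aa, ab, bb) after selfing an F1
    heterozygote, with relative heterozygote viability w.

    w = 1 recovers Mendelian expectations (heterozygosity halves each
    generation); w > 1 produces the excess heterozygosity the paper
    models. A simplified sketch, not the authors' exact model.
    """
    p_aa, p_ab, p_bb = 0.0, 1.0, 0.0
    for _ in range(generations):
        # Mendelian selfing: heterozygotes yield 1/4 aa, 1/2 ab, 1/4 bb
        p_aa, p_ab, p_bb = p_aa + p_ab / 4, p_ab / 2, p_bb + p_ab / 4
        # differential zygotic viability of heterozygotes, then renormalize
        p_ab *= w
        total = p_aa + p_ab + p_bb
        p_aa, p_ab, p_bb = p_aa / total, p_ab / total, p_bb / total
    return p_aa, p_ab, p_bb
```

Ignoring the viability term (assuming w = 1 when w > 1 in truth) inflates apparent recombination between linked markers, which is the map-expansion effect the paper corrects.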

  6. How accurate are sphygmomanometers?

    PubMed

    Mion, D; Pierin, A M

    1998-04-01

    The objective of this study was to assess the accuracy and reliability of mercury and aneroid sphygmomanometers. Calibration accuracy and physical condition were evaluated in 524 sphygmomanometers: 351 from a hospital setting and 173 from private medical offices. Mercury sphygmomanometers were considered inaccurate if the meniscus was not at '0' at rest. Aneroid sphygmomanometers were tested against a properly calibrated mercury manometer and were considered calibrated when the error was < or =3 mm Hg. Both types of sphygmomanometers were evaluated for the condition of the cuff/bladder, bulb, pump and valve. Of the mercury sphygmomanometers tested, 21% were found to be inaccurate. Of this group, unreliability was noted due to: excessive bouncing (14%), illegibility of the gauge (7%), blockage of the filter (6%), and lack of mercury in the reservoir (3%). Bladder damage was noted in 10% of the hospital devices and in 6% of private medical practice devices. Rubber aging occurred in 34% and 25%, leaks/holes in 19% and 18%, and leaks in the pump bulb in 16% and 30% of hospital devices and private practice devices, respectively. Of the aneroid sphygmomanometers tested, 44% in the hospital setting and 61% in private medical practices were found to be inaccurate. Of these, the magnitude of inaccuracy was 4-6 mm Hg in 32%, 7-12 mm Hg in 19% and >13 mm Hg in 7%. In summary, a large proportion of the mercury and aneroid sphygmomanometers showed inaccuracy (21% vs 58%) and unreliability (64% vs 70%).

  7. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  8. Origin of Excess 176Hf in Meteorites

    NASA Astrophysics Data System (ADS)

    Thrane, Kristine; Connelly, James N.; Bizzarro, Martin; Meyer, Bradley S.; The, Lih-Sin

    2010-07-01

    After considerable controversy regarding the 176Lu decay constant (λ176Lu), there is now widespread agreement that (1.867 ± 0.008) × 10-11 yr-1, as confirmed by various terrestrial objects and a 4557 Myr meteorite, is correct. This leaves the 176Hf excesses that are correlated with Lu/Hf elemental ratios in meteorites older than ~4.56 Ga unresolved. We attribute the 176Hf excess in older meteorites to an accelerated decay of 176Lu caused by excitation of the long-lived 176Lu ground state to a short-lived 176mLu isomer. The energy needed to cause this transition is ascribed to a post-crystallization spray of cosmic rays accelerated by nearby supernova(e) that occurred after 4564.5 Ma. The majority of these cosmic rays are estimated to penetrate accreted material down to 10-20 m, whereas a small fraction penetrate as deep as 100-200 m, predicting decreased 176Hf excesses with depth of burial at the time of the irradiation event.
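For scale, the quoted decay constant implies that only a small fraction of the initial 176Lu inventory decays over the age of the solar system, which is why correlated 176Hf excesses in the oldest meteorites point to an episode of faster effective decay rather than ordinary in-growth. A one-function sketch using the abstract's value of λ176Lu:

```python
import math

def fraction_decayed(t_years, decay_constant=1.867e-11):
    """Fraction of an initial 176Lu inventory decayed to 176Hf after
    t years, using the decay constant quoted in the abstract (yr^-1)."""
    return 1.0 - math.exp(-decay_constant * t_years)

# Over ~4.567 Gyr only about 8% of the 176Lu decays, so the correlated
# 176Hf excesses require an accelerated-decay episode, not more time.
f = fraction_decayed(4.567e9)
```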

  9. [Mortality attributable to excess weight in Spain].

    PubMed

    Martín-Ramiro, José Javier; Álvarez-Martín, Elena; Gil-Prieto, Ruth

    2014-06-16

    To estimate the mortality attributable to a higher than optimal body mass index in the Spanish population in 2006. Excess body weight prevalence data were obtained from the 2006 National Health Survey, while data on associated mortality were extracted from the National Statistics Institute. Population attributable fractions were applied, and mortality attributable to a higher than optimal body mass index was calculated for people between 35 and 79 years of age. In 2006, among the Spanish population aged 35-79 years, 25,671 lives (16,405 men and 9,266 women) were lost due to a higher than optimal body mass index. Attributable mortality was 15.8% of total deaths in men and 14.8% in women; restricted to causes for which excess body weight is a risk factor, it was about 30% of mortality (31.6% in men and 28% in women). The most important individual cause was cardiovascular disease (58%), followed by cancer. The individual cause with the largest attributable contribution was type 2 diabetes: nearly 70% in men and 80% in women. Overweight accounted for 54.9% of attributable deaths in men and 48.6% in women. Excess body weight is a major public health problem with substantial associated mortality. Attributable deaths are a useful tool to assess the real situation and to monitor disease control interventions. Copyright © 2013 Elsevier España, S.L. All rights reserved.
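The population attributable fraction used in studies of this kind follows Levin's formula; a minimal sketch (the prevalences and relative risks below are hypothetical illustrations, not the study's values):

```python
def attributable_fraction(prevalences, relative_risks):
    """Levin's population attributable fraction for one or more exposure
    levels: PAF = sum_i p_i*(RR_i - 1) / (1 + sum_i p_i*(RR_i - 1))."""
    excess = sum(p * (rr - 1.0) for p, rr in zip(prevalences, relative_risks))
    return excess / (1.0 + excess)

# hypothetical two-level example: overweight and obesity prevalences with
# illustrative relative risks (NOT the study's numbers)
paf = attributable_fraction([0.37, 0.15], [1.3, 1.9])
attributable_deaths = paf * 25000  # PAF times total deaths from the cause
```

Applying cause-specific PAFs to cause-specific death counts and summing is what yields aggregate figures like the 25,671 deaths reported above.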

  10. 24 CFR 236.60 - Excess Income.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Section 236 interest reduction payments may apply to retain Excess Income for project use unless the...) The proposed use of the requested Excess Income. (d) Retention of Excess Income for non-project use—(1... to retain Excess Income for non-project use unless the mortgagor owes prior Excess Income and is not...

  11. 24 CFR 236.60 - Excess Income.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Section 236 interest reduction payments may apply to retain Excess Income for project use unless the...) The proposed use of the requested Excess Income. (d) Retention of Excess Income for non-project use—(1... to retain Excess Income for non-project use unless the mortgagor owes prior Excess Income and is not...

  12. 24 CFR 236.60 - Excess income.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Section 236 interest reduction payments may apply to retain Excess Income for project use unless the...) The proposed use of the requested Excess Income. (d) Retention of Excess Income for non-project use—(1... to retain Excess Income for non-project use unless the mortgagor owes prior Excess Income and is not...

  13. 24 CFR 236.60 - Excess Income.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Section 236 interest reduction payments may apply to retain Excess Income for project use unless the...) The proposed use of the requested Excess Income. (d) Retention of Excess Income for non-project use—(1... to retain Excess Income for non-project use unless the mortgagor owes prior Excess Income and is not...

  14. Deuterium excess in precipitation of Alpine regions - moisture recycling.

    PubMed

    Froehlich, Klaus; Kralik, Martin; Papesch, Wolfgang; Rank, Dieter; Scheifinger, Helfried; Stichler, Willibald

    2008-03-01

    The paper evaluates long-term seasonal variations of the deuterium excess (d-excess = δ2H - 8·δ18O) in precipitation at stations located north and south of the main ridge of the Austrian Alps. It demonstrates that sub-cloud evaporation during precipitation and continental moisture recycling are, respectively, local and regional processes controlling these variations. In general, sub-cloud evaporation decreases and moisture recycling increases the d-excess. Therefore, evaluation of d-excess variations in terms of moisture recycling, the main aim of this paper, includes determination of the effect of sub-cloud evaporation. Since sub-cloud evaporation is governed by the saturation deficit and the distance between the cloud base and the ground, its effect on the d-excess is expected to be lower at mountain stations than at lowland/valley stations. To quantify this difference, we examined long-term seasonal d-excess variations measured at three selected mountain and adjoining valley stations. The altitude differences between mountain and valley stations ranged from 470 to 1665 m. Adapting the 'falling water drop' model by Stewart [J. Geophys. Res., 80(9), 1133-1146 (1975)], we estimated that the long-term average sub-cloud evaporation at the selected mountain stations (altitudes between about 1600 and 2250 m a.s.l.) is less than 1% of the precipitation and causes a decrease in the d-excess of less than 2 per thousand. For the selected valley stations, the corresponding evaporated fraction is at most 7%, and the difference in d-excess ranges up to 8 per thousand. The estimated d-excess differences have been used to correct the measured long-term d-excess values at the selected stations. Finally, the fraction of water vapour recycled by evaporation of surface water, including soil water from the ground, has been estimated. For the two mountain stations Patscherkofel and Feuerkogel, which are located north of the main ridge of the Alps, the
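The d-excess itself is a one-line calculation from the two measured isotope ratios (both expressed in per mil):

```python
def d_excess(delta2H, delta18O):
    """Deuterium excess in per mil: d = delta2H - 8 * delta18O."""
    return delta2H - 8.0 * delta18O

# A sample on the Global Meteoric Water Line (delta2H = 8*delta18O + 10)
# has a d-excess of exactly 10 per mil:
d = d_excess(-70.0, -10.0)  # -> 10.0
```

Departures below 10 per mil then flag sub-cloud evaporation, and departures above it flag recycled continental moisture, which is the diagnostic logic of the paper.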

  15. Syndromes that Mimic an Excess of Mineralocorticoids.

    PubMed

    Sabbadin, Chiara; Armanini, Decio

    2016-09-01

    Pseudohyperaldosteronism is characterized by a clinical picture of hyperaldosteronism with suppression of renin and aldosterone. It can be due to endogenous or exogenous substances that mimic the effector mechanisms of aldosterone, leading not only to alterations of electrolytes and hypertension, but also to an increased inflammatory reaction in several tissues. Enzymatic defects of adrenal steroidogenesis (deficiency of 17α-hydroxylase and 11β-hydroxylase), mutations of the mineralocorticoid receptor (MR), and alterations of the expression or saturation of 11-hydroxysteroid dehydrogenase type 2 (apparent mineralocorticoid excess syndrome, Cushing's syndrome, excessive intake of licorice, grapefruit or carbenoxolone) are the main causes of pseudohyperaldosteronism. In these cases treatment with dexamethasone and/or MR blockers is useful not only to normalize blood pressure and electrolytes, but also to prevent the deleterious effects of prolonged over-activation of MR in epithelial and non-epithelial tissues. Genetic alterations of the sodium channel (Liddle's syndrome) or of the sodium-chloride co-transporter (Gordon's syndrome) cause abnormal sodium and water reabsorption in the distal renal tubules and hypertension. Treatment with amiloride and thiazide diuretics, respectively, can reverse the clinical picture and normalize the renin-aldosterone system. Finally, many other more common situations can lead to an acquired pseudohyperaldosteronism, such as volume expansion due to exaggerated water and/or sodium intake, and the use of drugs such as contraceptives, corticosteroids, β-adrenergic agonists and NSAIDs. In conclusion, syndromes or situations that mimic aldosterone excess are not rare, and an accurate personal and pharmacological history is mandatory for a correct diagnosis, avoiding unnecessary tests and mistreatment.

  16. Excessive or unwanted hair in women

    MedlinePlus

    Hypertrichosis; Hirsutism; Hair - excessive (women); Excessive hair in women; Hair - women - excessive or unwanted ... Women normally produce low levels of male hormones (androgens). If your body makes too much of this ...

  17. Accurate spectral color measurements

    NASA Astrophysics Data System (ADS)

    Hiltunen, Jouni; Jaeaeskelaeinen, Timo; Parkkinen, Jussi P. S.

    1999-08-01

    Surface color measurement is of importance in a very wide range of industrial applications including paint, paper, printing, photography, textiles, plastics and so on. For demanding color measurements, a spectral approach is often needed. One can measure a color spectrum with a spectrophotometer using calibrated standard samples as a reference. Because it is impossible to define absolute color values of a sample, we always work with approximations. The human eye can perceive color differences as small as 0.5 CIELAB units and thus distinguish millions of colors. This 0.5-unit difference should be the goal for precise color measurements. This limit is not a problem if we only want to measure the color difference of two samples, but if we also want exact color coordinate values, accuracy problems arise. The values from two instruments can be astonishingly different. The accuracy of the instrument used in color measurement may depend on various errors, such as photometric non-linearity, wavelength error, integrating sphere dark level error, and integrating sphere error in both specular-included and specular-excluded modes. Thus correction formulas should be used to get more accurate results. Another question is how many channels, i.e. wavelengths, we use to measure a spectrum. It is obvious that the sampling interval should be short to get more precise results. Furthermore, the result we get is always a compromise of measuring time, conditions and cost. Sometimes we have to use a portable system, or the shape and size of the samples make it impossible to use sensitive equipment. In this study a small set of calibrated color tiles measured with the Perkin Elmer Lambda 18 and the Minolta CM-2002 spectrophotometers are compared. In the paper we explain the typical error sources of spectral color measurements and show which accuracy demands a good colorimeter should meet.
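The 0.5-unit perceptibility threshold cited above refers to the CIE76 color difference, which is simply the Euclidean distance between two points in CIELAB space. A minimal sketch (the sample triplets are illustrative):

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two
    (L*, a*, b*) triplets."""
    return math.dist(lab1, lab2)

# Two near-identical grays; a difference around 0.5 is near the
# perceptibility threshold the text cites.
de = delta_e_76((50.0, 0.0, 0.0), (50.3, 0.3, 0.2))
```

Later formulas (CIE94, CIEDE2000) weight the lightness, chroma, and hue axes differently, but CIE76 is the metric behind the "0.5 CIELAB units" rule of thumb.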

  18. Excessive masturbation after epilepsy surgery.

    PubMed

    Ozmen, Mine; Erdogan, Ayten; Duvenci, Sirin; Ozyurt, Emin; Ozkara, Cigdem

    2004-02-01

    Sexual behavior changes as well as depression, anxiety, and organic mood/personality disorders have been reported in temporal lobe epilepsy (TLE) patients before and after epilepsy surgery. The authors describe a 14-year-old girl with symptoms of excessive masturbation in inappropriate places, social withdrawal, irritability, aggressive behavior, and crying spells after selective amygdalohippocampectomy for medically intractable TLE with hippocampal sclerosis. Since the family members felt extremely embarrassed, they were upset and angry with the patient, which, in turn, increased her depressive symptoms. Both her excessive masturbation behavior and depressive symptoms remitted within 2 months of psychoeducative intervention and treatment with citalopram 20 mg/day. Excessive masturbation is proposed to be related to the psychosocial changes due to seizure-free status after surgery as well as other possible mechanisms such as Kluver-Bucy syndrome features and neurophysiologic changes associated with the cessation of epileptic discharges. This case demonstrates that psychiatric problems and sexual changes encountered after epilepsy surgery are possibly multifactorial and that in adolescence hypersexuality may be manifested as excessive masturbation behavior.

  19. Outflows in Sodium Excess Objects

    NASA Astrophysics Data System (ADS)

    Park, Jongwon; Jeong, Hyunjin; Yi, Sukyoung K.

    2015-08-01

    Van Dokkum and Conroy revisited the unexpectedly strong Na i lines at 8200 Å found in some giant elliptical galaxies and interpreted them as evidence for an unusually bottom-heavy initial mass function. Jeong et al. later found a large population of galaxies showing equally extraordinary Na D doublet absorption lines at 5900 Å (Na D excess objects: NEOs) and showed that their origins can be different for different types of galaxies. While a Na D excess seems to be related to the interstellar medium (ISM) in late-type galaxies, smooth-looking early-type NEOs show little or no dust extinction and hence no compelling signs of ISM contributions. To further test this finding, we measured the Doppler components in the Na D lines. We hypothesized that the ISM would have a better (albeit not definite) chance of showing a blueshift Doppler departure from the bulk of the stellar population due to outflow caused by either star formation or AGN activities. Many of the late-type NEOs clearly show blueshift in their Na D lines, which is consistent with the former interpretation that the Na D excess found in them is related to gas outflow caused by star formation. On the contrary, smooth-looking early-type NEOs do not show any notable Doppler components, which is also consistent with the interpretation of Jeong et al. that the Na D excess in early-type NEOs is likely not related to ISM activities but is purely stellar in origin.

  20. Light and Excess Manganese

    PubMed Central

    González, Alonso; Steffen, Kenneth L.; Lynch, Jonathan P.

    1998-01-01

    The effect of light intensity on antioxidants, antioxidant enzymes, and chlorophyll content was studied in common bean (Phaseolus vulgaris L.) exposed to excess Mn. Leaves of bean genotypes contrasting in Mn tolerance were exposed to two different light intensities and to excess Mn; light was controlled by shading a leaflet with filter paper. After 5 d of Mn treatment ascorbate was depleted by 45% in leaves of the Mn-sensitive genotype ZPV-292 and by 20% in the Mn-tolerant genotype CALIMA. Nonprotein sulfhydryl groups and glutathione reductase were not affected by Mn or light treatment. Ten days of Mn-toxicity stress increased leaf ascorbate peroxidase activity of cv ZPV-292 by 78% in low light and by 235% in high light, and superoxide dismutase activity followed a similar trend. Increases of ascorbate peroxidase and superoxide dismutase activity observed in cv CALIMA were lower than those observed in the susceptible cv ZPV-292. The cv CALIMA had less ascorbate oxidation under excess Mn-toxicity stress. Depletion of ascorbate occurred before the onset of chlorosis in Mn-stressed plants, especially in cv ZPV-292. Lipid peroxidation was not detected in floating leaf discs of mature leaves exposed to excess Mn. Our results suggest that Mn toxicity may be mediated by oxidative stress, and that the tolerant genotype may maintain higher ascorbate levels under stress than the sensitive genotype. PMID:9765534

  1. OUTFLOWS IN SODIUM EXCESS OBJECTS

    SciTech Connect

    Park, Jongwon; Yi, Sukyoung K.; Jeong, Hyunjin

    2015-08-10

    Van Dokkum and Conroy revisited the unexpectedly strong Na i lines at 8200 Å found in some giant elliptical galaxies and interpreted them as evidence for an unusually bottom-heavy initial mass function. Jeong et al. later found a large population of galaxies showing equally extraordinary Na D doublet absorption lines at 5900 Å (Na D excess objects: NEOs) and showed that their origins can be different for different types of galaxies. While a Na D excess seems to be related to the interstellar medium (ISM) in late-type galaxies, smooth-looking early-type NEOs show little or no dust extinction and hence no compelling signs of ISM contributions. To further test this finding, we measured the Doppler components in the Na D lines. We hypothesized that the ISM would have a better (albeit not definite) chance of showing a blueshift Doppler departure from the bulk of the stellar population due to outflow caused by either star formation or AGN activities. Many of the late-type NEOs clearly show blueshift in their Na D lines, which is consistent with the former interpretation that the Na D excess found in them is related to gas outflow caused by star formation. On the contrary, smooth-looking early-type NEOs do not show any notable Doppler components, which is also consistent with the interpretation of Jeong et al. that the Na D excess in early-type NEOs is likely not related to ISM activities but is purely stellar in origin.

  2. EVALUATING EXCESS DIETARY EXPOSURE OF YOUNG CHILDREN EATING IN CONTAMINATED ENVIRONMENTS

    EPA Science Inventory

    The United States' Food Quality Protection Act of 1996 requires more accurate assessment of children's aggregate exposures to environmental contaminants. Since children have unstructured eating behaviors, their excess exposures, caused by eating activities, becomes an importan...

  3. EVALUATING EXCESS DIETARY EXPOSURE OF YOUNG CHILDREN EATING IN CONTAMINATED ENVIRONMENTS

    EPA Science Inventory

    The United States' Food Quality Protection Act of 1996 requires more accurate assessment of children's aggregate exposures to environmental contaminants. Since children have unstructured eating behaviors, their excess exposures, caused by eating activities, becomes an importan...

  4. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    PubMed

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

    In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. Firstly, high performance size exclusion chromatography (HPSEC) was utilized to separate natural polysaccharides. Then the molecular masses of their fractions were determined by multi-angle laser light scattering (MALLS). Finally, quantification of the polysaccharides or their fractions was performed based on their response on a refractive index detector (RID) and their universal refractive index increment (dn/dc). Accuracy of the developed method was determined for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan; average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared with the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on a universal dn/dc for the quantification of polysaccharides and their fractions is much simpler, more rapid, and more accurate, requiring neither individual polysaccharide standards nor calibration curves. The developed method was also successfully utilized for quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the genus Panax: Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggested that the HPSEC-MALLS-RID method based on a universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources.
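The dn/dc-based RID quantification reduces to a proportionality between detector response and mass concentration; a schematic sketch (the function name, calibration constant, and numbers are hypothetical, not the paper's calibration):

```python
def polysaccharide_conc(ri_peak_area, dn_dc, k_instrument):
    """Mass concentration from an RI detector response.

    The RID signal is proportional to concentration times the refractive
    index increment: area = k_instrument * c * (dn/dc), hence
    c = area / (k_instrument * dn/dc). All names/values are illustrative.
    """
    return ri_peak_area / (k_instrument * dn_dc)
```

Using a single "universal" dn/dc for a sample class is what lets the method skip per-analyte standards and calibration curves, at the cost of a small bias for analytes whose true dn/dc deviates from that value.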

  5. Severe rhabdomyolysis after excessive bodybuilding.

    PubMed

    Finsterer, J; Zuntner, G; Fuchs, M; Weinberger, A

    2007-12-01

    A 46-year-old male subject performed excessive physical exertion, 4-6 h per day over 5 days, in a bodybuilding studio. He had not practiced sport before this training and denied the use of any aiding substances. Despite muscle aching after the first day, he continued the exercises. After the last day, he noticed tiredness and cessation of urine production. Two days after discontinuation of the training, a Herpes simplex infection occurred. Because of acute renal failure, he required hemodialysis. Tendon reflexes were absent, and creatine kinase (CK) values reached 208,274 U/L (normal: <170 U/L). After 2 weeks, CK had almost normalized and, after 4 weeks, hemodialysis was discontinued. Excessive muscle training may result in severe, hemodialysis-dependent rhabdomyolysis. Triggering factors may be a prior low fitness level, viral infection, or subclinical metabolic myopathy.

  6. The Cosmic Ray Electron Excess

    NASA Technical Reports Server (NTRS)

    Chang, J.; Adams, J. H.; Ahn, H. S.; Bashindzhagyan, G. L.; Christl, M.; Ganel, O.; Guzik, T. G.; Isbert, J.; Kim, K. C.; Kuznetsov, E. N.; hide

    2008-01-01

    This slide presentation reviews the possible sources of the apparent excess of cosmic ray electrons. The presentation reviews the Advanced Thin Ionization Calorimeter (ATIC) instrument and its components, explains how cosmic ray electrons are measured, and shows graphs summarizing the ATIC measurements. A review of cosmic ray electron models is given, along with the source candidates. Scenarios for the excess are reviewed: supernova remnants (SNR), pulsar wind nebulae, or microquasars. Each of these has some problem that weakens the argument. The last possibility discussed is dark matter. The Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) mission is to search for evidence of annihilations of dark matter particles, to search for anti-nuclei, to test cosmic-ray propagation models, and to measure electron and positron spectra. There are slides explaining the results of PAMELA and how to compare them with those of the ATIC experiment. Dark matter annihilation is then reviewed for two candidate particle types, neutralinos and Kaluza-Klein (KK) particles, which are explained next. Future astrophysical measurements, those from the GLAST LAT, the Alpha Magnetic Spectrometer (AMS), and HEPCAT, are reviewed in light of assisting in finding an explanation for the observed excess. The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) could also help by revealing whether there are extra dimensions.

  7. Excess carbon in silicon carbide

    NASA Astrophysics Data System (ADS)

    Shen, X.; Oxley, M. P.; Puzyrev, Y.; Tuttle, B. R.; Duscher, G.; Pantelides, S. T.

    2010-12-01

    The application of SiC in electronic devices is currently hindered by low carrier mobility at SiC/SiO2 interfaces. Recently, it was reported that 4H-SiC/SiO2 interfaces might have a transition layer on the SiC substrate side with a C/Si ratio as high as 1.2, suggesting that carbon is injected into the SiC substrate during oxidation or other processing steps. We report finite-temperature quantum molecular dynamics simulations that explore the behavior of excess carbon in SiC. For SiC with 20% excess carbon, we find that, over short times (~24 ps), carbon atoms bond to each other and form various complexes, while the silicon lattice is largely unperturbed. These results, however, suggest that on macroscopic time scales C segregation is likely to occur; therefore a transition layer with 20% extra carbon would not be stable. For a dilute distribution of excess carbon, we explore the pairing of carbon interstitials and show that the formation of dicarbon interstitial clusters is kinetically very favorable, which suggests that isolated carbon clusters may exist inside the SiC substrate.

  8. Verification of excess defense material

    SciTech Connect

    Fearey, B.L.; Pilat, J.F.; Eccleston, G.W.; Nicholas, N.J.; Tape, J.W.

    1997-12-01

    The international community in the post-Cold War period has expressed an interest in the International Atomic Energy Agency (IAEA) using its expertise in support of the arms control and disarmament process in unprecedented ways. The pledges of the US and Russian presidents to place excess defense materials under some type of international inspection raise the prospect of using IAEA safeguards approaches for monitoring excess materials, which include both classified and unclassified materials. Although the IAEA has suggested the need to address inspections of both types of materials, the most troublesome and potentially difficult problems involve approaches to the inspection of classified materials. The key issue for placing classified nuclear components and materials under IAEA safeguards is the conflict between traditional IAEA materials accounting procedures and the US classification laws and nonproliferation policy designed to prevent the disclosure of critical weapon-design information. Possible verification approaches to classified excess defense materials could be based on item accountancy, attributes measurements, and containment and surveillance. Such approaches are not wholly new; in fact, they are quite well established for certain unclassified materials. Such concepts may be applicable to classified items, but the precise approaches have yet to be identified, fully tested, or evaluated for technical and political feasibility, or for their possible acceptability in an international inspection regime. Substantial work remains in these areas. This paper examines many of the challenges presented by international inspections of classified materials.

  9. Diphoton excess through dark mediators

    NASA Astrophysics Data System (ADS)

    Chen, Chien-Yi; Lefebvre, Michel; Pospelov, Maxim; Zhong, Yi-Ming

    2016-07-01

    Preliminary ATLAS and CMS results from the first 13 TeV LHC run have encountered an intriguing excess of events in the diphoton channel around an invariant mass of 750 GeV. We investigate the possibility that the current excess is due to a heavy resonance decaying to light metastable states, which in turn give displaced decays to very highly collimated e+e- pairs. Such decays may pass the photon selection criteria, and successfully mimic the diphoton events, especially at low counts. We investigate two classes of such models, characterized by the following underlying production and decay chains: gg → S → A'A' → (e+e-)(e+e-) and qq̄ → Z' → sa → (e+e-)(e+e-), where at the first step a heavy scalar, S, or vector, Z', resonance is produced that decays to light metastable vectors, A', or (pseudo-)scalars, s and a. Setting the parameters of the models to explain the existing excess, and taking the ATLAS detector geometry into account, we marginalize over the properties of heavy resonances in order to derive the expected lifetimes and couplings of the metastable light resonances. We observe that in the case of A', the suggested range of masses and mixing angles ɛ is within reach of several new-generation intensity frontier experiments.

  10. Diphoton excess through dark mediators

    DOE PAGES

    Chen, Chien-Yi; Lefebvre, Michel; Pospelov, Maxim; ...

    2016-07-12

    Preliminary ATLAS and CMS results from the first 13 TeV LHC run have encountered an intriguing excess of events in the diphoton channel around an invariant mass of 750 GeV. We investigate the possibility that the current excess is due to a heavy resonance decaying to light metastable states, which in turn give displaced decays to very highly collimated e+e– pairs. Such decays may pass the photon selection criteria, and successfully mimic the diphoton events, especially at low counts. We investigate two classes of such models, characterized by the following underlying production and decay chains: gg → S → A'A' → (e+e–)(e+e–) and qq¯→ Z' → sa → (e+e–)(e+e–), where at the first step a heavy scalar, S, or vector, Z', resonance is produced that decays to light metastable vectors, A', or (pseudo-)scalars, s and a. Setting the parameters of the models to explain the existing excess, and taking the ATLAS detector geometry into account, we marginalize over the properties of heavy resonances in order to derive the expected lifetimes and couplings of the metastable light resonances. In conclusion, we observe that in the case of A', the suggested range of masses and mixing angles ϵ is within reach of several new-generation intensity frontier experiments.

  11. Diphoton excess through dark mediators

    SciTech Connect

    Chen, Chien-Yi; Lefebvre, Michel; Pospelov, Maxim; Zhong, Yi-Ming

    2016-07-12

    Preliminary ATLAS and CMS results from the first 13 TeV LHC run have encountered an intriguing excess of events in the diphoton channel around an invariant mass of 750 GeV. We investigate the possibility that the current excess is due to a heavy resonance decaying to light metastable states, which in turn give displaced decays to very highly collimated e+e- pairs. Such decays may pass the photon selection criteria, and successfully mimic the diphoton events, especially at low counts. We investigate two classes of such models, characterized by the following underlying production and decay chains: gg → S → A'A' → (e+e-)(e+e-) and qq¯→ Z' → sa → (e+e-)(e+e-), where at the first step a heavy scalar, S, or vector, Z', resonance is produced that decays to light metastable vectors, A', or (pseudo-)scalars, s and a. Setting the parameters of the models to explain the existing excess, and taking the ATLAS detector geometry into account, we marginalize over the properties of heavy resonances in order to derive the expected lifetimes and couplings of the metastable light resonances. In conclusion, we observe that in the case of A', the suggested range of masses and mixing angles ϵ is within reach of several new-generation intensity frontier experiments.

  12. Outflows in Sodium Excess Objects

    NASA Astrophysics Data System (ADS)

    Park, Jongwon; Jeong, Hyunjin; Yi, Sukyoung

    2016-01-01

    van Dokkum and Conroy reported that some giant elliptical galaxies show extraordinarily strong Na I absorption lines and suggested that this is evidence of an unusually bottom-heavy initial mass function. Jeong et al. later studied galaxies with unexpectedly strong Na D absorption lines (Na D excess objects: NEOs) and showed that the origins of NEOs differ between galaxy types. According to their study, the origin of the Na D excess seems to be related to the interstellar medium (ISM) in late-type galaxies, but there seems to be no contribution from the ISM in smooth-looking early-type galaxies. In order to test this finding, we measured the Doppler components of the Na D lines of NEOs. We hypothesized that if the Na D absorption line is related to the ISM, the line is more likely to be blueshifted in the spectrum by the motion of the ISM driven by outflows. Many late-type NEOs show blueshifted Na D absorption lines, so their origin seems related to the ISM. On the other hand, smooth-looking early-type NEOs do not show a Doppler departure, and the Na D excess in early-type NEOs is likely not related to the ISM, which is consistent with the finding of Jeong et al.
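The blueshift test described above can be made concrete with the standard Doppler relation v = c Δλ/λ. A minimal sketch, where the rest wavelength is the standard Na D2 line value and the observed wavelength is a hypothetical measurement, not one from the paper:

```python
C_KM_S = 299792.458      # speed of light, km/s
NA_D2_REST = 5889.95     # Na D2 rest wavelength, Angstroms (standard value)

def los_velocity(observed_angstrom, rest_angstrom=NA_D2_REST):
    """Line-of-sight velocity from the Doppler shift of an absorption line.
    Negative values mean a blueshift, i.e. ISM moving toward the observer,
    the outflow signature discussed in the abstract."""
    return C_KM_S * (observed_angstrom - rest_angstrom) / rest_angstrom

# Hypothetical observed line center, blueshifted by ~2 Angstroms:
v = los_velocity(5888.0)
print(v)  # negative value -> blueshift, consistent with an outflow
```

A line centered redward of the rest wavelength would instead give a positive velocity, which is why the absence of a Doppler departure in early-type NEOs argues against an ISM origin.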

  13. The Cosmic Ray Electron Excess

    NASA Technical Reports Server (NTRS)

    Chang, J.; Adams, J. H.; Ahn, H. S.; Bashindzhagyan, G. L.; Christl, M.; Ganel, O.; Guzik, T. G.; Isbert, J.; Kim, K. C.; Kuznetsov, E. N.; Panasyuk, M. I.; Panov, A. D.; Schmidt, W. K. H.; Seo, E. S.; Sokolskaya, N. V.; Watts, J. W.; Wefel, J. P.; Wu, J.; Zatsepin, V. I.

    2008-01-01

    This slide presentation reviews possible sources of the apparent excess of cosmic ray electrons. The presentation reviews the Advanced Thin Ionization Calorimeter (ATIC) instrument and its various parts, explains how cosmic ray electrons are measured, and shows graphs reviewing the results of the ATIC measurements. A review of cosmic ray electron models is presented, along with the source candidates. Scenarios for the excess are reviewed: supernova remnants (SNRs), pulsar wind nebulae, or microquasars. Each of these has some problem that mitigates the argument. The last possibility discussed is dark matter. The Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) mission is to search for evidence of annihilations of dark matter particles, to search for anti-nuclei, to test cosmic-ray propagation models, and to measure electron and positron spectra. There are slides explaining the results of PAMELA and how to compare these with those of the ATIC experiment. Dark matter annihilation is then reviewed for two candidate dark matter particles, neutralinos and Kaluza-Klein (KK) particles, which are next explained. Future astrophysical measurements, those from the GLAST LAT, the Alpha Magnetic Spectrometer (AMS), and HEPCAT, are reviewed in light of assisting in finding an explanation for the observed excess. The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) could also help by revealing whether there are extra dimensions.

  14. Search for 41K Excess in Efremovka CAIs

    NASA Astrophysics Data System (ADS)

    Srinivasan, G.; Ulyanov, A. A.; Goswami, J. N.

    1993-07-01

    We have used the ion microprobe to measure the K isotopic composition of refractory phases in Efremovka CAIs to look for the possible presence of a 41K excess from the decay of the extinct radionuclide 41Ca (half-life = 0.13 Ma). The presence of 41Ca at the time of CAI formation, if established, will allow us to place a lower limit on the time interval between the last injection of freshly synthesized matter into the solar nebula and the formation of some of the first solid objects (CAIs) in the solar system. Several attempts have been made earlier to detect a 41K excess in Allende CAIs [1-4]. We have further investigated this problem by analyzing the Efremovka CAIs for two reasons. First, both the petrographic and magnesium isotopic systematics suggest the Efremovka CAIs to be less altered than the Allende CAIs, making them an ideal and perhaps better sample for this study. Second, the presence of large perovskite grains (~10 micrometers) allowed us to analyse this phase, which was not included in earlier studies. The major difficulty in accurately measuring 41K, identified in earlier studies, is the unresolvable (40Ca42Ca)++ interference, which was found to be matrix dependent [4]. In addition, one can also have interference from the (40CaH)+ peak. In our operating condition the interference from the hydride peak can be neglected (Fig. 1, which appears in the hard copy). We have analyzed terrestrial perovskite (K <= 20 ppm) to determine the (40Ca42Ca)++ correction term, and its equivalence with the (40Ca43Ca)++ ion signal at mass 41.5 [4]. In perovskite, the (40Ca42Ca)++ signal constitutes ~80% of the signal at mass 41, and we could estimate this interference with confidence. A value of (2.7 +- 0.1) x 10^-5 was obtained for the ratio [(40Ca42Ca)++/42Ca+], which is similar to the measured [(40Ca43Ca)++/43Ca+] ratio of (2.4 +- 0.2) x 10^-5. We have therefore used the measured value for the latter ratio in the analyzed phases to correct for the doubly charged interference at mass 41.
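The interference subtraction described above is a simple linear correction. A minimal sketch, using the 2.4e-5 doubly-charged ratio quoted in the abstract as the correction factor; the count rates in the example are made up for illustration, not measured values:

```python
def corrected_41k(signal_41, ca42_signal, dd_ratio=2.4e-5):
    """Subtract the (40Ca42Ca)++ doubly charged contribution at mass 41.

    signal_41   -- total ion counts at mass 41 (41K+ plus interference)
    ca42_signal -- ion counts of 42Ca+
    dd_ratio    -- [(doubly charged dimer)++ / 42Ca+] ratio, ~2.4e-5 (abstract)
    """
    interference = dd_ratio * ca42_signal
    return signal_41 - interference

# Hypothetical counts: if ~80% of a 1000-count signal at mass 41 were
# interference, the 42Ca+ signal would be about 800 / 2.4e-5 counts.
ca42 = 800 / 2.4e-5
print(round(corrected_41k(1000.0, ca42)))  # -> 200
```

The corrected 41K signal would then be compared with 39K to look for an excess over the normal isotopic ratio.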

  15. 10 CFR 904.9 - Excess capacity.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Excess capacity. 904.9 Section 904.9 Energy DEPARTMENT OF... Marketing § 904.9 Excess capacity. (a) If the Uprating Program results in Excess Capacity, Western shall be entitled to such Excess Capacity to integrate the operation of the Boulder City Area Projects and...

  16. 12 CFR 925.23 - Excess stock.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Excess stock. 925.23 Section 925.23 Banks and... BANKS Stock Requirements § 925.23 Excess stock. (a) Sale of excess stock. Subject to the restriction in paragraph (b) of this section, a member may purchase excess stock as long as the purchase is approved by...

  17. 34 CFR 300.16 - Excess costs.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 2 2011-07-01 2010-07-01 true Excess costs. 300.16 Section 300.16 Education... DISABILITIES General Definitions Used in This Part § 300.16 Excess costs. Excess costs means those costs that... for an example of how excess costs must be calculated.) (Authority: 20 U.S.C. 1401(8))...

  18. 34 CFR 300.16 - Excess costs.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 2 2010-07-01 2010-07-01 false Excess costs. 300.16 Section 300.16 Education... DISABILITIES General Definitions Used in This Part § 300.16 Excess costs. Excess costs means those costs that... for an example of how excess costs must be calculated.) (Authority: 20 U.S.C. 1401(8))...

  19. Estimating potential evapotranspiration with improved radiation estimation

    USDA-ARS?s Scientific Manuscript database

    Potential evapotranspiration (PET) is of great importance to estimation of surface energy budget and water balance calculation. The accurate estimation of PET will facilitate efficient irrigation scheduling, drainage design, and other agricultural and meteorological applications. However, accuracy o...

  20. [Disability attributable to excess weight in Spain].

    PubMed

    Martín-Ramiro, José Javier; Alvarez-Martín, Elena; Gil-Prieto, Ruth

    2014-08-19

    To estimate the disability attributable to a higher than optimal body mass index in the Spanish population in 2006. Excess body weight prevalence data were obtained from the 2006 National Health Survey (NHS), while the prevalence of associated morbidities was extracted from the 2006 NHS and from a national hospital database. Population attributable fractions were applied, and the attributable disability was expressed as years lived with disability (YLD). In 2006, in the Spanish population aged 35-79 years, 791,650 YLD were lost due to a higher than optimal body mass index (46.7% in males and 53.3% in females). Overweight (body mass index 25-29.9) accounted for 45.7% of total YLD. Male YLD were higher than female YLD under age 60. The 35-39 quinquennial group showed a difference of 16.6% in favor of males, while in the 74-79 group the difference was 23.8% in favor of women. Osteoarthritis and chronic back pain accounted for 60% of YLD, while hypertensive disease and type 2 diabetes mellitus were responsible for 37%. Excess body weight is a health risk related to the development of various diseases, with an important associated disability burden and social and economic cost. YLD analysis is a useful monitoring tool for disease control interventions. Copyright © 2013 Elsevier España, S.L. All rights reserved.
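The attributable-burden calculation used in studies like this one typically rests on Levin's population attributable fraction. A minimal sketch with made-up numbers (the prevalence, relative risk, and YLD total below are illustrative, not figures from the paper):

```python
def levin_paf(prevalence, rr):
    """Levin's population attributable fraction for a single exposure level:
    PAF = p(RR - 1) / (p(RR - 1) + 1)."""
    return prevalence * (rr - 1.0) / (prevalence * (rr - 1.0) + 1.0)

def attributable_yld(total_yld, prevalence, rr):
    """YLD attributable to the exposure = PAF x total YLD for the condition."""
    return levin_paf(prevalence, rr) * total_yld

# Hypothetical: 30% prevalence of excess weight, RR = 2 for osteoarthritis,
# 100,000 YLD from osteoarthritis overall -> PAF = 0.3/1.3 ~ 0.231
print(round(attributable_yld(100000, 0.30, 2.0)))  # -> 23077
```

Summing such condition-specific attributable YLD over all associated morbidities gives the total burden figure reported in the abstract.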

  1. Arm Span and Ulnar Length Are Reliable and Accurate Estimates of Recumbent Length and Height in a Multiethnic Population of Infants and Children under 6 Years of Age

    PubMed Central

    Forman, Michele R.; Zhu, Yeyi; Hernandez, Ladia M.; Himes, John H.; Dong, Yongquan; Danish, Robert K.; James, Kyla E.; Caulfield, Laura E.; Kerver, Jean M.; Arab, Lenore; Voss, Paula; Hale, Daniel E.; Kanafani, Nadim; Hirschfeld, Steven

    2014-01-01

    Surrogate measures are needed when recumbent length or height is unobtainable or unreliable. Arm span has been used as a surrogate but is not feasible in children with shoulder or arm contractures. Ulnar length is not usually impaired by joint deformities, yet its utility as a surrogate has not been adequately studied. In this cross-sectional study, we aimed to examine the accuracy and reliability of ulnar length measured by different tools as a surrogate measure of recumbent length and height. Anthropometrics [recumbent length, height, arm span, and ulnar length by caliper (ULC), ruler (ULR), and grid (ULG)] were measured in 1479 healthy infants and children aged <6 y across 8 study centers in the United States. Multivariate mixed-effects linear regression models for recumbent length and height were developed by using ulnar length and arm span as surrogate measures. The agreement between the measured length or height and the values predicted by ULC, ULR, ULG, and arm span was examined by Bland-Altman plots. All 3 measures of ulnar length and arm span were highly correlated with length and height. The degree of precision of prediction equations for length by ULC, ULR, and ULG (R2 = 0.95, 0.95, and 0.92, respectively) was comparable with that by arm span (R2 = 0.97) using age, sex, and ethnicity as covariates; however, height prediction by ULC (R2 = 0.87), ULR (R2 = 0.85), and ULG (R2 = 0.88) was less comparable with arm span (R2 = 0.94). Our study demonstrates that arm span and ULC, ULR, or ULG can serve as accurate and reliable surrogate measures of recumbent length and height in healthy children; however, ULC, ULR, and ULG tend to slightly overestimate length and height in young infants and children. Further testing of ulnar length as a surrogate is warranted in physically impaired or nonambulatory children. PMID:25031329
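The Bland-Altman agreement analysis mentioned above reduces to computing the bias (mean difference) and 95% limits of agreement between the two methods. A minimal sketch; the height values below are invented for illustration:

```python
import numpy as np

def bland_altman(measured, predicted):
    """Bias and 95% limits of agreement between two measurement methods,
    as used in a Bland-Altman plot (differences vs. means)."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    diff = predicted - measured
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical heights (cm) vs. ulnar-length-based predictions:
m = [92.1, 98.4, 101.2, 95.0, 99.7]
p = [92.8, 98.1, 102.0, 95.9, 100.3]
bias, (lo, hi) = bland_altman(m, p)
print(bias, lo, hi)
```

A positive bias, as in this toy example, corresponds to the slight overestimation of length and height that the study reports for the ulnar-length tools.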

  2. The importance of accurate glacier albedo for estimates of surface mass balance on Vatnajökull: Evaluating the surface energy budget in a Regional Climate Model with automatic weather station observations

    NASA Astrophysics Data System (ADS)

    Steffensen Schmidt, Louise; Aðalgeirsdóttir, Guðfinna; Guðmundsson, Sverrir; Langen, Peter L.; Pálsson, Finnur; Mottram, Ruth; Gascoin, Simon; Björnsson, Helgi

    2017-04-01

    The evolution of the surface mass balance of Vatnajökull ice cap, Iceland, from 1981 to the present day is estimated by using the Regional Climate Model HIRHAM5 to simulate the surface climate. A new albedo parametrization is used for the simulation, which describes the albedo with an exponential decay with time. In addition, it utilizes a new background map of the ice albedo created from MODIS data. The simulation is validated against observed daily values of weather parameters from five Automatic Weather Stations (AWSs) from 2001-2014, as well as mass balance measurements from 1995-2014. The modelled albedo is overestimated at the AWS sites in the ablation zone, which we attribute to an overestimation of the thickness of the snow layer and the model not accounting for dust and ash deposition during dust storms and volcanic eruptions. A comparison with the specific summer, winter, and annual mass balance for all of Vatnajökull from 1995-2014 shows a good overall fit during the summer, with the model underestimating the balance by only 0.04 m w. eq. on average. The winter balance, on the other hand, is overestimated by 0.5 m w. eq. on average, mostly due to an overestimation of the precipitation at the highest areas of the ice cap. A simple correction of the accumulation at these points reduced the error to 0.15 m w. eq. The model captures the evolution of the specific mass balance well; for example, it captures an observed shift in the balance in the mid-1990s, which gives us confidence in the results for the entire model run. Our results show the importance of bare ice albedo for modelled mass balance and that processes not currently accounted for in RCMs, such as dust storms, are an important source of uncertainty in estimates of the snow melt rate.

  3. Excess deferred taxes: an update

    SciTech Connect

    Howe, S.

    1985-04-04

    The states originally split on whether to accelerate refunds to customers for overpaid taxes resulting from the decrease in corporate income taxes, but recent regulatory decisions favor a quick payback of excess deferred taxes. The Internal Revenue Service (IRS) indicates that this may violate normalization rules for accounting and threaten the utility's eligibility for accelerated depreciation deductions. After reviewing the positions of the IRS, state commissions, and the courts, the author concludes that the debate will continue until the Treasury Department issues definitive regulations. 1 table.

  4. The importance of accurate glacier albedo for estimates of surface mass balance on Vatnajökull: evaluating the surface energy budget in a regional climate model with automatic weather station observations

    NASA Astrophysics Data System (ADS)

    Steffensen Schmidt, Louise; Aðalgeirsdóttir, Guðfinna; Guðmundsson, Sverrir; Langen, Peter L.; Pálsson, Finnur; Mottram, Ruth; Gascoin, Simon; Björnsson, Helgi

    2017-07-01

    A simulation of the surface climate of Vatnajökull ice cap, Iceland, carried out with the regional climate model HIRHAM5 for the period 1980-2014, is used to estimate the evolution of the glacier surface mass balance (SMB). This simulation uses a new snow albedo parameterization that allows albedo to exponentially decay with time and is surface temperature dependent. The albedo scheme utilizes a new background map of the ice albedo created from observed MODIS data. The simulation is evaluated against observed daily values of weather parameters from five automatic weather stations (AWSs) from the period 2001-2014, as well as in situ SMB measurements from the period 1995-2014. The model agrees well with observations at the AWS sites, albeit with a general underestimation of the net radiation. This is due to an underestimation of the incoming radiation and a general overestimation of the albedo. The average modelled albedo is overestimated in the ablation zone, which we attribute to an overestimation of the thickness of the snow layer and not taking the surface darkening from dirt and volcanic ash deposition during dust storms and volcanic eruptions into account. A comparison with the specific summer, winter, and net mass balance for the whole of Vatnajökull (1995-2014) shows a good overall fit during the summer, with a small mass balance underestimation of 0.04 m w.e. on average, whereas the winter mass balance is overestimated by on average 0.5 m w.e. due to overestimated precipitation at the highest areas of the ice cap. A simple correction of the accumulation at the highest points of the glacier reduces this to 0.15 m w.e. Here, we use HIRHAM5 to simulate the evolution of the SMB of Vatnajökull for the period 1981-2014 and show that the model provides a reasonable representation of the SMB for this period. However, a major source of uncertainty in the representation of the SMB is the representation of the albedo, and processes currently not accounted for in RCMs
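The time-decaying snow albedo scheme described in both Vatnajökull abstracts can be sketched generically: albedo relaxes exponentially from a fresh-snow value toward a background (bare-ice) value, which in the study comes from a MODIS-derived map. The parameter values below are illustrative placeholders, not HIRHAM5's actual constants:

```python
import math

def snow_albedo(days_since_snowfall, a_fresh=0.85, a_bg=0.30, tau=22.0):
    """Generic exponential snow-albedo decay toward a background value.

    a_fresh -- fresh-snow albedo (illustrative)
    a_bg    -- background albedo, e.g. bare ice from a MODIS map (illustrative)
    tau     -- e-folding decay time in days (illustrative)
    """
    return a_bg + (a_fresh - a_bg) * math.exp(-days_since_snowfall / tau)
```

In a scheme of this type, each snowfall event resets the decay clock, and a too-thick modelled snow layer keeps the albedo near the fresh-snow value too long, which is one mechanism behind the albedo overestimation the abstracts report.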

  5. Excess costs of social anxiety disorder in Germany.

    PubMed

    Dams, Judith; König, Hans-Helmut; Bleibler, Florian; Hoyer, Jürgen; Wiltink, Jörg; Beutel, Manfred E; Salzer, Simone; Herpertz, Stephan; Willutzki, Ulrike; Strauß, Bernhard; Leibing, Eric; Leichsenring, Falk; Konnopka, Alexander

    2017-04-15

    Social anxiety disorder is one of the most frequent mental disorders. It is often associated with mental comorbidities and causes a high economic burden. The aim of our analysis was to estimate the excess costs of patients with social anxiety disorder compared to persons without anxiety disorder in Germany. Excess costs of social anxiety disorder were determined by comparing two data sets. Patient data came from the SOPHO-NET study A1 (n=495), whereas data of persons without anxiety disorder originated from a representative phone survey (n=3213) of the general German population. Missing data were handled by "Multiple Imputation by Chained Equations". Both data sets were matched using "Entropy Balancing". Excess costs were calculated from a societal perspective for the year 2014 using general linear regression with a gamma distribution and log-link function. Analyses considered direct costs (in- and outpatient treatment, rehabilitation, and professional and informal care) and indirect costs due to absenteeism from work. Total six-month excess costs amounted to 451€ (95% CI: 199€-703€). Excess costs were mainly caused by indirect excess costs due to absenteeism from work of 317€ (95% CI: 172€-461€), whereas direct excess costs amounted to 134€ (95% CI: 110€-159€). Costs for medication, unemployment, and disability pensions were not evaluated. Social anxiety disorder was associated with statistically significant excess costs, in particular due to indirect costs. As patients in general are often unaware of their disorder or its severity, awareness should be strengthened. Prevention and early treatment might reduce long-term indirect costs. Copyright © 2017 Elsevier B.V. All rights reserved.
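The core of an excess-cost analysis is the difference in mean costs between patients and a matched comparison group. A bare-bones stand-in for the gamma-GLM / entropy-balancing analysis described in the abstract, using a percentile bootstrap for the confidence interval; all cost values are invented:

```python
import random

def excess_costs(patient_costs, control_costs, n_boot=2000, seed=1):
    """Point estimate and 95% percentile-bootstrap CI for excess costs,
    i.e. mean(patient costs) - mean(control costs)."""
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    point = mean(patient_costs) - mean(control_costs)
    boots = sorted(
        mean(rng.choices(patient_costs, k=len(patient_costs)))
        - mean(rng.choices(control_costs, k=len(control_costs)))
        for _ in range(n_boot)
    )
    return point, (boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)])

# Hypothetical six-month costs in euros:
point, (lo, hi) = excess_costs([100.0, 200.0, 300.0], [50.0, 150.0])
print(point, lo, hi)
```

The published analysis goes further: entropy balancing reweights the control group to match patient covariates, and the gamma GLM with log link accommodates the right-skewed, strictly positive cost distribution that a plain difference of means handles poorly.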

  6. The effects of excessive humidity.

    PubMed

    Williams, R B

    1998-06-01

    Humidification devices and techniques can expose the airway mucosa to a wide range of gas temperatures and humidities, some of which are excessive and may cause injury. Humidified gas is a carrier of both water and energy. The volume of water in the gas stream depends on whether the water is in a molecular form (vapor), particulate form (aerosol), or bulk form (liquid). The energy content of the gas stream is the sum of the sensible heat (temperature) of the air and any water droplets in it and the heat of vaporization (latent energy) of any water vapor present. Latent heat energy is much larger than sensible heat energy, so saturated air contains much more energy than dry air. Thus every breath contains a water volume and energy (thermal) challenge to the airway mucosa. When the challenge exceeds the homeostatic mechanisms, airway dysfunction begins, starting at the cellular and secretion level and progressing to whole airway function. A large challenge will result in quick progression of dysfunction. Early dysfunction is generally reversible, however, so large challenges with short exposure times may not cause irreversible injury. The mechanisms of airway injury owing to excess water are not well studied. The observation of its effects lends itself to some general conclusions, however. Alterations in the ventilation-perfusion ratio, decreases in vital capacity and compliance, and atelectasis are suggestive of partial or full occlusion of small airways. Changes in surface tension and alveolar-arterial oxygen gradient are consistent with flooding of alveoli. There also may be osmotic challenges to mucosal cell function as evidenced by the different reaction rates with hyper- and hypotonic saline. The reaction to nonisotonic saline also may partly explain increases in specific airway resistance. Aerosolized water and instilled water may be hazardous because of their demonstrated potential for delivering excessive water to the airway.
Their use for airway humidification or
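The claim above that latent heat dominates sensible heat can be made concrete with the standard psychrometric approximation for moist-air enthalpy; this formula is textbook psychrometrics, not taken from the paper, and the example values are illustrative:

```python
def moist_air_enthalpy(t_c, w):
    """Specific enthalpy of moist air in kJ per kg of dry air, using the
    standard psychrometric approximation h = 1.006*t + w*(2501 + 1.86*t):
    sensible heat of the dry air plus the latent and sensible heat of vapor.

    t_c -- dry-bulb temperature in deg C
    w   -- humidity ratio, kg water vapor per kg dry air
    """
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

# Dry air at body temperature vs. air at 37 C carrying 0.04 kg/kg of vapor:
print(moist_air_enthalpy(37.0, 0.00))  # ~37 kJ per kg dry air
print(moist_air_enthalpy(37.0, 0.04))  # ~140 kJ per kg dry air
```

The vapor term contributes roughly three quarters of the total in the second case, which is why saturated gas delivers a far larger thermal challenge to the mucosa than dry gas at the same temperature.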

  7. 10 CFR 904.10 - Excess energy.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Excess energy. 904.10 Section 904.10 Energy DEPARTMENT OF ENERGY GENERAL REGULATIONS FOR THE CHARGES FOR THE SALE OF POWER FROM THE BOULDER CANYON PROJECT Power Marketing § 904.10 Excess energy. (a) If excess Energy is determined by the United States to be...

  8. 10 CFR 904.10 - Excess energy.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Excess energy. 904.10 Section 904.10 Energy DEPARTMENT OF ENERGY GENERAL REGULATIONS FOR THE CHARGES FOR THE SALE OF POWER FROM THE BOULDER CANYON PROJECT Power Marketing § 904.10 Excess energy. (a) If excess Energy is determined by the United States to be...

  9. 10 CFR 904.10 - Excess energy.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Excess energy. 904.10 Section 904.10 Energy DEPARTMENT OF ENERGY GENERAL REGULATIONS FOR THE CHARGES FOR THE SALE OF POWER FROM THE BOULDER CANYON PROJECT Power Marketing § 904.10 Excess energy. (a) If excess Energy is determined by the United States to be...

  10. 10 CFR 904.10 - Excess energy.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Excess energy. 904.10 Section 904.10 Energy DEPARTMENT OF ENERGY GENERAL REGULATIONS FOR THE CHARGES FOR THE SALE OF POWER FROM THE BOULDER CANYON PROJECT Power Marketing § 904.10 Excess energy. (a) If excess Energy is determined by the United States to be...

  11. 10 CFR 904.10 - Excess energy.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Excess energy. 904.10 Section 904.10 Energy DEPARTMENT OF ENERGY GENERAL REGULATIONS FOR THE CHARGES FOR THE SALE OF POWER FROM THE BOULDER CANYON PROJECT Power Marketing § 904.10 Excess energy. (a) If excess Energy is determined by the United States to be...

  12. 7 CFR 985.56 - Excess oil.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Excess oil. 985.56 Section 985.56 Agriculture... HANDLING OF SPEARMINT OIL PRODUCED IN THE FAR WEST Order Regulating Handling Volume Limitations § 985.56 Excess oil. Oil of any class in excess of a producer's applicable annual allotment shall be identified as...

  13. 7 CFR 985.56 - Excess oil.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 8 2011-01-01 2011-01-01 false Excess oil. 985.56 Section 985.56 Agriculture... HANDLING OF SPEARMINT OIL PRODUCED IN THE FAR WEST Order Regulating Handling Volume Limitations § 985.56 Excess oil. Oil of any class in excess of a producer's applicable annual allotment shall be identified as...

  14. 7 CFR 985.56 - Excess oil.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 8 2012-01-01 2012-01-01 false Excess oil. 985.56 Section 985.56 Agriculture... HANDLING OF SPEARMINT OIL PRODUCED IN THE FAR WEST Order Regulating Handling Volume Limitations § 985.56 Excess oil. Oil of any class in excess of a producer's applicable annual allotment shall be identified as...

  15. 7 CFR 985.56 - Excess oil.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 8 2013-01-01 2013-01-01 false Excess oil. 985.56 Section 985.56 Agriculture... HANDLING OF SPEARMINT OIL PRODUCED IN THE FAR WEST Order Regulating Handling Volume Limitations § 985.56 Excess oil. Oil of any class in excess of a producer's applicable annual allotment shall be identified as...

  16. 7 CFR 985.56 - Excess oil.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 8 2014-01-01 2014-01-01 false Excess oil. 985.56 Section 985.56 Agriculture... HANDLING OF SPEARMINT OIL PRODUCED IN THE FAR WEST Order Regulating Handling Volume Limitations § 985.56 Excess oil. Oil of any class in excess of a producer's applicable annual allotment shall be identified as...

  17. 43 CFR 426.12 - Excess land.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 1 2012-10-01 2011-10-01 true Excess land. 426.12 Section 426.12 Public Lands: Interior Regulations Relating to Public Lands BUREAU OF RECLAMATION, DEPARTMENT OF THE INTERIOR ACREAGE LIMITATION RULES AND REGULATIONS § 426.12 Excess land. (a) The process of designating excess and...

  18. 43 CFR 426.12 - Excess land.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 1 2013-10-01 2013-10-01 false Excess land. 426.12 Section 426.12 Public Lands: Interior Regulations Relating to Public Lands BUREAU OF RECLAMATION, DEPARTMENT OF THE INTERIOR ACREAGE LIMITATION RULES AND REGULATIONS § 426.12 Excess land. (a) The process of designating excess and...

  19. 43 CFR 426.12 - Excess land.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false Excess land. 426.12 Section 426.12 Public Lands: Interior Regulations Relating to Public Lands BUREAU OF RECLAMATION, DEPARTMENT OF THE INTERIOR ACREAGE LIMITATION RULES AND REGULATIONS § 426.12 Excess land. (a) The process of designating excess and...

  20. 43 CFR 426.12 - Excess land.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 43 Public Lands: Interior 1 2014-10-01 2014-10-01 false Excess land. 426.12 Section 426.12 Public Lands: Interior Regulations Relating to Public Lands BUREAU OF RECLAMATION, DEPARTMENT OF THE INTERIOR ACREAGE LIMITATION RULES AND REGULATIONS § 426.12 Excess land. (a) The process of designating excess and...

  1. 43 CFR 426.12 - Excess land.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 43 Public Lands: Interior 1 2011-10-01 2011-10-01 false Excess land. 426.12 Section 426.12 Public Lands: Interior Regulations Relating to Public Lands BUREAU OF RECLAMATION, DEPARTMENT OF THE INTERIOR ACREAGE LIMITATION RULES AND REGULATIONS § 426.12 Excess land. (a) The process of designating excess and...

  2. 10 CFR 904.9 - Excess capacity.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Excess capacity. 904.9 Section 904.9 Energy DEPARTMENT OF ENERGY GENERAL REGULATIONS FOR THE CHARGES FOR THE SALE OF POWER FROM THE BOULDER CANYON PROJECT Power Marketing § 904.9 Excess capacity. (a) If the Uprating Program results in Excess Capacity, Western shall...

  3. 10 CFR 904.9 - Excess capacity.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Excess capacity. 904.9 Section 904.9 Energy DEPARTMENT OF ENERGY GENERAL REGULATIONS FOR THE CHARGES FOR THE SALE OF POWER FROM THE BOULDER CANYON PROJECT Power Marketing § 904.9 Excess capacity. (a) If the Uprating Program results in Excess Capacity, Western shall...

  4. 10 CFR 904.9 - Excess capacity.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Excess capacity. 904.9 Section 904.9 Energy DEPARTMENT OF ENERGY GENERAL REGULATIONS FOR THE CHARGES FOR THE SALE OF POWER FROM THE BOULDER CANYON PROJECT Power Marketing § 904.9 Excess capacity. (a) If the Uprating Program results in Excess Capacity, Western shall...

  5. 12 CFR 1263.23 - Excess stock.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 9 2013-01-01 2013-01-01 false Excess stock. 1263.23 Section 1263.23 Banks and Banking FEDERAL HOUSING FINANCE AGENCY FEDERAL HOME LOAN BANKS MEMBERS OF THE BANKS Stock Requirements § 1263.23 Excess stock. (a) Sale of excess stock. Subject to the restriction in paragraph (b) of...

  6. 12 CFR 1263.23 - Excess stock.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 7 2011-01-01 2011-01-01 false Excess stock. 1263.23 Section 1263.23 Banks and Banking FEDERAL HOUSING FINANCE AGENCY FEDERAL HOME LOAN BANKS MEMBERS OF THE BANKS Stock Requirements § 1263.23 Excess stock. (a) Sale of excess stock. Subject to the restriction in paragraph (b) of...

  7. 12 CFR 1263.23 - Excess stock.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 10 2014-01-01 2014-01-01 false Excess stock. 1263.23 Section 1263.23 Banks and Banking FEDERAL HOUSING FINANCE AGENCY FEDERAL HOME LOAN BANKS MEMBERS OF THE BANKS Stock Requirements § 1263.23 Excess stock. (a) Sale of excess stock. Subject to the restriction in paragraph (b)...

  8. 12 CFR 1263.23 - Excess stock.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 9 2012-01-01 2012-01-01 false Excess stock. 1263.23 Section 1263.23 Banks and Banking FEDERAL HOUSING FINANCE AGENCY FEDERAL HOME LOAN BANKS MEMBERS OF THE BANKS Stock Requirements § 1263.23 Excess stock. (a) Sale of excess stock. Subject to the restriction in paragraph (b) of...

  9. Accurate calculation of diffraction-limited encircled and ensquared energy.

    PubMed

    Andersen, Torben B

    2015-09-01

Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that falls outside a large square or rectangular detector. Tables with accurate values of the encircled and ensquared energy functions are also presented.
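The classical closed-form encircled-energy result for the diffraction-limited (Airy) PSF, EE(v) = 1 - J0(v)^2 - J1(v)^2 with v the reduced radius, can be checked numerically. The sketch below is an illustrative implementation, not the paper's code; the quadrature point count m is a choice of this sketch.

```python
import numpy as np

def bessel_j(n, v, m=20000):
    # J_n(v) via its integral representation, (1/pi) * Int_0^pi cos(n*tau - v*sin(tau)) dtau,
    # evaluated with a midpoint rule (accurate for these smooth integrands)
    tau = (np.arange(m) + 0.5) * (np.pi / m)
    return np.mean(np.cos(n * tau - v * np.sin(tau)))

def encircled_energy(v):
    # Rayleigh's result for the Airy pattern: EE(v) = 1 - J0(v)^2 - J1(v)^2
    return 1.0 - bessel_j(0, v) ** 2 - bessel_j(1, v) ** 2
```

At the first dark ring (v = 3.8317, the first zero of J1) this gives the familiar ~83.8% of the total energy, and EE approaches 1 for large v.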

  10. Changing guards: time to move beyond body mass index for population monitoring of excess adiposity.

    PubMed

    Tanamas, S K; Lean, M E J; Combet, E; Vlassopoulos, A; Zimmet, P Z; Peeters, A

    2016-07-01

With the obesity epidemic, and the effects of aging populations, human phenotypes have changed over two generations, possibly more dramatically than in other species previously. As obesity is an important and growing hazard for population health, we recommend a systematic evaluation of the optimal measure(s) for population-level excess body fat. Ideal measure(s) for monitoring body composition and obesity should be simple, as accurate and sensitive as possible, and provide good categorization of related health risks. Combinations of anthropometric markers or predictive equations may facilitate better use of anthropometric data than single measures to estimate body composition for populations. Here, we provide new evidence that increasing proportions of aging populations are at high health-risk according to waist circumference, but not body mass index (BMI), so continued use of BMI as the principal population-level measure substantially underestimates the health-burden from excess adiposity.
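The paper's point, that BMI and waist-based measures can classify the same person differently, can be illustrated with a toy calculation. All values below are hypothetical, and the BMI ≥ 30 obesity cut-off and 0.5 waist-to-height cut-off are commonly used conventions, not thresholds proposed by this study.

```python
def bmi(weight_kg, height_m):
    # Body mass index: weight divided by height squared (kg/m^2)
    return weight_kg / height_m ** 2

def waist_to_height_ratio(waist_cm, height_cm):
    # A simple central-adiposity index; > 0.5 is a widely used cut-off
    return waist_cm / height_cm

# hypothetical older adult: not obese by BMI, yet high-risk by waist
person_bmi = bmi(80.0, 1.75)                      # ~26.1, below the 30 cut-off
person_whtr = waist_to_height_ratio(95.0, 175.0)  # ~0.54, above 0.5
```

Such a person is flagged by waist circumference but missed by BMI, which is exactly the underestimation pattern the abstract describes.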

  11. Thermoluminescence and excess 226Ra decay dating of late Quaternary fluvial sands, East Alligator River, Australia

    NASA Astrophysics Data System (ADS)

    Murray, Andrew; Wohl, Ellen; East, Jon

    1992-01-01

    Thermoluminescence (TL) dating was applied to seven samples of siliceous fluvial sands from the East Alligator River of Northern Australia, giving ages ranging from modern to 6000 yr B.P. Two methods of estimating the equivalent dose (ED), total bleach and regenerative, were applied to the 90- to 125-μm quartz fraction of the samples in order to determine the reliability and internal consistency of the technique. High-resolution γ and α spectroscopy were used to measure radionuclide contents; these measurements revealed an excess 226Ra activity compared with 230Th. This excess decreased with depth, and was used directly to derive mean sedimentation rates, and thus sediment ages. Both this method and one 14C date confirmed the validity of the TL values, which increased systematically with depth and were consistent with site stratigraphy. TL was of limited use in the dating of these late Holocene deposits because of age uncertainties of 500 to 1600 yr, resulting from a significant residual ED. This residual probably resulted from incomplete bleaching during reworking upstream of the sampling site. For Pleistocene deposits, the residual ED will be less significant because of higher total EDs, and TL dates will be correspondingly more accurate.
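The excess-226Ra method described above can be sketched numerically: under a constant sedimentation rate s, excess activity decays with depth as A(z) = A0·exp(-λz/s), so a log-linear fit of activity against depth recovers s, and ages follow as t = z/s. The code below is an illustrative reconstruction with invented numbers (the 2 mm/yr rate, A0, and the depth grid are hypothetical), not the authors' analysis.

```python
import numpy as np

HALF_LIFE_RA226 = 1600.0                 # years
LAM = np.log(2.0) / HALF_LIFE_RA226      # 226Ra decay constant (1/yr)

def sedimentation_rate(depth_m, excess_activity):
    # Constant-rate model: A(z) = A0 * exp(-LAM * z / s), so a log-linear
    # fit of excess activity against depth has slope = -LAM / s
    slope, _ = np.polyfit(depth_m, np.log(excess_activity), 1)
    return -LAM / slope                  # sedimentation rate s, in m/yr

# hypothetical noiseless profile: s = 0.002 m/yr (2 mm/yr), A0 = 50 Bq/kg
z = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
activity = 50.0 * np.exp(-LAM * z / 0.002)
s = sedimentation_rate(z, activity)      # recovers ~0.002 m/yr
age_at_2m = 2.0 / s                      # ~1000 yr for the deepest sample
```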

  12. Average Potential Temperature of the Upper Mantle and Excess Temperatures Beneath Regions of Active Upwelling

    NASA Astrophysics Data System (ADS)

    Putirka, K. D.

    2006-05-01

The question as to whether any particular oceanic island is the result of a thermal mantle plume is a question of whether volcanism is the result of passive upwelling, as at mid-ocean ridges, or active upwelling, driven by thermally buoyant material. When upwelling is passive, mantle temperatures reflect average or ambient upper mantle values. In contrast, sites of thermally driven active upwellings will have elevated (or excess) mantle temperatures, driven by some source of excess heat. Skeptics of the plume hypothesis suggest that the maximum temperatures at ocean islands are similar to maximum temperatures at mid-ocean ridges (Anderson, 2000; Green et al., 2001). Olivine-liquid thermometry, when applied to Hawaii, Iceland, and global MORB, belies this hypothesis. Olivine-liquid equilibria provide the most accurate means of estimating mantle temperatures, which are highly sensitive to the forsterite (Fo) contents of olivines, and the FeO content of coexisting liquids. Their application shows that mantle temperatures in the MORB source region are less than temperatures at both Hawaii and Iceland. The Siqueiros Transform may provide the most precise estimate of TpMORB because high MgO glass compositions there have been affected only by olivine fractionation, so primitive FeOliq is known; olivine thermometry yields TpSiqueiros = 1430 ± 59°C. A global database of 22,000 MORB shows that most MORB have slightly higher FeOliq than at Siqueiros, which translates to higher calculated mantle potential temperatures. If the values for Fomax (= 91.5) and KD (Fe-Mg)ol-liq (= 0.29) at Siqueiros apply globally, then upper mantle Tp is closer to 1485 ± 59°C. Averaging this global estimate with that recovered at Siqueiros yields TpMORB = 1458 ± 78°C, which is used to calculate plume excess temperatures, Te. The estimate for TpMORB defines the convective mantle geotherm, and is consistent with estimates from sea floor bathymetry and heat flow (Stein and Stein, 1992), and

  13. Accurate Stellar Parameters for Exoplanet Host Stars

    NASA Astrophysics Data System (ADS)

    Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.

    2015-01-01

A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between planet and stellar host is relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High resolution spectral analysis has traditionally yielded precise values for some stellar parameters but stars in common between catalogs from different authors or analyzed using different techniques often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list that has carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.
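The comparison quoted above reduces to a mean offset and an rms scatter of paired surface-gravity differences. A minimal sketch follows; the five log g pairs are hypothetical, chosen only so that the mean offset reproduces the quoted 0.048 dex.

```python
import numpy as np

# hypothetical paired surface gravities (dex), spectroscopic vs asteroseismic;
# values invented so the mean offset reproduces the quoted 0.048 dex
logg_spec = np.array([4.40, 4.05, 3.82, 4.51, 3.95])
logg_seis = np.array([4.35, 4.01, 3.78, 4.45, 3.90])

diff = logg_spec - logg_seis
offset = diff.mean()                          # systematic offset (dex)
rms = np.sqrt(np.mean((diff - offset) ** 2))  # scatter about the offset
```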

  14. Excess properties for 1-butanethiol + heptane, + cyclohexane, + benzene, and + toluene. 2. Excess molar enthalpies at 283.15, 298.15, and 333.15 K

    SciTech Connect

    Allred, G.C.; Beets, J.W.; Parrish, W.R.

    1995-09-01

Thiols (mercaptans) are industrially important because of their occurrence in petroleum, their use as chemical intermediates, and their involvement in environmental problems. Excess molar enthalpies of binary mixtures of 1-butanethiol + heptane, + cyclohexane, + benzene, or + toluene have been determined at 283.15, 298.15, and 333.15 K with a flow mixing calorimeter, and at 283.15 and 298.15 K with a titration calorimeter. Partial molar enthalpies have been derived from the titration calorimetric results. Where results were obtained by both methods, they were combined to obtain the best estimate of excess enthalpy for all compositions. Equimolar excess enthalpies for 1-butanethiol + heptane or + cyclohexane are endothermic and are comparable to the equimolar excess enthalpies for 1-butanol + heptane or + cyclohexane. Excess enthalpies of 1-butanethiol + aromatic systems are lower than those of 1-butanethiol + alkane systems, which is contrary to the trend observed in 1-butanol + aromatic systems compared to 1-butanol + alkane systems. The excess enthalpy of 1-butanethiol + toluene is weakly exothermic.
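Combining calorimetric results into a "best estimate of excess enthalpy for all compositions" is typically done by fitting a composition-dependent correlation such as a Redlich-Kister expansion, HE = x1(1-x1) Σk Ak(2x1-1)^k. The sketch below fits such an expansion by linear least squares; the coefficients and composition grid are invented for illustration, and the paper does not state which correlating equation it used.

```python
import numpy as np

def fit_redlich_kister(x1, he, order=2):
    # Least-squares Redlich-Kister coefficients A_k for
    # HE = x1*(1-x1) * sum_k A_k * (2*x1 - 1)**k
    basis = np.column_stack([x1 * (1.0 - x1) * (2.0 * x1 - 1.0) ** k
                             for k in range(order + 1)])
    coeffs, *_ = np.linalg.lstsq(basis, he, rcond=None)
    return coeffs

def he_model(x1, coeffs):
    # Evaluate the expansion at mole fraction(s) x1
    return x1 * (1.0 - x1) * sum(a_k * (2.0 * x1 - 1.0) ** k
                                 for k, a_k in enumerate(coeffs))

# hypothetical endothermic HE data (J/mol) on a composition grid
x = np.linspace(0.05, 0.95, 10)
coeffs_true = [1200.0, -150.0, 80.0]
he = he_model(x, coeffs_true)
coeffs_fit = fit_redlich_kister(x, he)
equimolar_he = he_model(0.5, coeffs_fit)   # equals 0.25 * A_0 here
```

At x1 = 0.5 all odd terms vanish and even terms beyond k = 0 are zero, so the equimolar value is simply A0/4, which is why equimolar excess enthalpies are a convenient single-number comparison.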

  15. Excess entropy and crystallization in Stillinger-Weber and Lennard-Jones fluids

    SciTech Connect

    Dhabal, Debdas; Chakravarty, Charusita; Nguyen, Andrew Huy; Molinero, Valeria; Singh, Murari; Khatua, Prabir; Bandyopadhyay, Sanjoy

    2015-10-28

Molecular dynamics simulations are used to contrast the supercooling and crystallization behaviour of monatomic liquids that exemplify the transition from simple to anomalous, tetrahedral liquids. As examples of simple fluids, we use the Lennard-Jones (LJ) liquid and a pair-dominated Stillinger-Weber liquid (SW16). As examples of tetrahedral, water-like fluids, we use the Stillinger-Weber model with variable tetrahedrality parameterized for germanium (SW20), silicon (SW21), and water (SW23.15 or mW model). The thermodynamic response functions show clear qualitative differences between simple and water-like liquids. For simple liquids, the compressibility and the heat capacity remain small on isobaric cooling. The tetrahedral liquids in contrast show a very sharp rise in these two response functions as the lower limit of liquid-phase stability is reached. While the thermal expansivity decreases with temperature but never crosses zero in simple liquids, in all three tetrahedral liquids at the studied pressure, there is a temperature of maximum density below which thermal expansivity is negative. In contrast to the thermodynamic response functions, the excess entropy on isobaric cooling does not show qualitatively different features for simple and water-like liquids; however, the slope and curvature of the entropy-temperature plots reflect the heat capacity trends. Two trajectory-based computational estimation methods for the entropy and the heat capacity are compared for possible structural insights into supercooling, with the entropy obtained from thermodynamic integration. The two-phase thermodynamic estimator for the excess entropy proves to be fairly accurate in comparison to the excess entropy values obtained by thermodynamic integration, for all five Lennard-Jones and Stillinger-Weber liquids. The entropy estimator based on the multiparticle correlation expansion that accounts for both pair and triplet correlations, denoted by Strip

  16. Excess entropy and crystallization in Stillinger-Weber and Lennard-Jones fluids

    NASA Astrophysics Data System (ADS)

    Dhabal, Debdas; Nguyen, Andrew Huy; Singh, Murari; Khatua, Prabir; Molinero, Valeria; Bandyopadhyay, Sanjoy; Chakravarty, Charusita

    2015-10-01

    Molecular dynamics simulations are used to contrast the supercooling and crystallization behaviour of monatomic liquids that exemplify the transition from simple to anomalous, tetrahedral liquids. As examples of simple fluids, we use the Lennard-Jones (LJ) liquid and a pair-dominated Stillinger-Weber liquid (SW16). As examples of tetrahedral, water-like fluids, we use the Stillinger-Weber model with variable tetrahedrality parameterized for germanium (SW20), silicon (SW21), and water (SW23.15 or mW model). The thermodynamic response functions show clear qualitative differences between simple and water-like liquids. For simple liquids, the compressibility and the heat capacity remain small on isobaric cooling. The tetrahedral liquids in contrast show a very sharp rise in these two response functions as the lower limit of liquid-phase stability is reached. While the thermal expansivity decreases with temperature but never crosses zero in simple liquids, in all three tetrahedral liquids at the studied pressure, there is a temperature of maximum density below which thermal expansivity is negative. In contrast to the thermodynamic response functions, the excess entropy on isobaric cooling does not show qualitatively different features for simple and water-like liquids; however, the slope and curvature of the entropy-temperature plots reflect the heat capacity trends. Two trajectory-based computational estimation methods for the entropy and the heat capacity are compared for possible structural insights into supercooling, with the entropy obtained from thermodynamic integration. The two-phase thermodynamic estimator for the excess entropy proves to be fairly accurate in comparison to the excess entropy values obtained by thermodynamic integration, for all five Lennard-Jones and Stillinger-Weber liquids. The entropy estimator based on the multiparticle correlation expansion that accounts for both pair and triplet correlations, denoted by Strip, is also studied. 
Strip is a

  17. Excess entropy and crystallization in Stillinger-Weber and Lennard-Jones fluids.

    PubMed

    Dhabal, Debdas; Nguyen, Andrew Huy; Singh, Murari; Khatua, Prabir; Molinero, Valeria; Bandyopadhyay, Sanjoy; Chakravarty, Charusita

    2015-10-28

    Molecular dynamics simulations are used to contrast the supercooling and crystallization behaviour of monatomic liquids that exemplify the transition from simple to anomalous, tetrahedral liquids. As examples of simple fluids, we use the Lennard-Jones (LJ) liquid and a pair-dominated Stillinger-Weber liquid (SW16). As examples of tetrahedral, water-like fluids, we use the Stillinger-Weber model with variable tetrahedrality parameterized for germanium (SW20), silicon (SW21), and water (SW(23.15) or mW model). The thermodynamic response functions show clear qualitative differences between simple and water-like liquids. For simple liquids, the compressibility and the heat capacity remain small on isobaric cooling. The tetrahedral liquids in contrast show a very sharp rise in these two response functions as the lower limit of liquid-phase stability is reached. While the thermal expansivity decreases with temperature but never crosses zero in simple liquids, in all three tetrahedral liquids at the studied pressure, there is a temperature of maximum density below which thermal expansivity is negative. In contrast to the thermodynamic response functions, the excess entropy on isobaric cooling does not show qualitatively different features for simple and water-like liquids; however, the slope and curvature of the entropy-temperature plots reflect the heat capacity trends. Two trajectory-based computational estimation methods for the entropy and the heat capacity are compared for possible structural insights into supercooling, with the entropy obtained from thermodynamic integration. The two-phase thermodynamic estimator for the excess entropy proves to be fairly accurate in comparison to the excess entropy values obtained by thermodynamic integration, for all five Lennard-Jones and Stillinger-Weber liquids. The entropy estimator based on the multiparticle correlation expansion that accounts for both pair and triplet correlations, denoted by S(trip), is also studied. S
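The multiparticle correlation expansion mentioned in these abstracts has, at pair level, the standard two-body excess-entropy integral s2/kB = -2πρ ∫ [g ln g - g + 1] r² dr. The minimal numerical sketch below is illustrative only: the step-function g(r) is an idealization (zero inside a hard core, unity outside) chosen because the integral is then exactly 1/3, not a simulated liquid structure.

```python
import numpy as np

def pair_entropy(r, g, rho):
    # Two-body contribution to the excess entropy per particle (units of k_B):
    # s2 = -2*pi*rho * Int [g*ln(g) - g + 1] * r^2 dr
    term = np.zeros_like(g)
    mask = g > 0
    term[mask] = g[mask] * np.log(g[mask])   # g*ln(g), with 0*ln(0) -> 0
    term += 1.0 - g
    y = term * r ** 2
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r))  # trapezoid rule
    return -2.0 * np.pi * rho * integral

# idealized step g(r): zero inside a core of diameter 1, unity outside,
# for which the integral is exactly 1/3
r = np.linspace(0.0, 3.0, 3001)
g = (r >= 1.0).astype(float)
s2 = pair_entropy(r, g, rho=0.8)   # ~ -2*pi*0.8/3
```

In practice g(r) would come from the simulation trajectories, and a triplet-correlation term would be added to obtain the Strip estimator discussed above.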

  18. Androgen excess in cystic acne.

    PubMed

    Marynick, S P; Chakmakjian, Z H; McCaffree, D L; Herndon, J H

    1983-04-28

    We measured hormone levels in 59 women and 32 men with longstanding cystic acne resistant to conventional therapy. Affected women had higher serum levels of dehydroepiandrosterone sulfate, testosterone, and luteinizing hormone and lower levels of sex-hormone-binding globulin than controls. Affected men had higher levels of serum dehydroepiandrosterone sulfate and 17-hydroxyprogesterone and lower levels of sex-hormone-binding globulin than controls. To lower dehydroepiandrosterone sulfate, dexamethasone was given to men, and dexamethasone or an oral contraceptive pill, Demulen (or both), was given to women. Of the patients treated for six months, 97 per cent of the women and 81 per cent of the men had resolution or marked improvement in their acne. The dose of dexamethasone required to reduce dehydroepiandrosterone sulfate levels was low, rarely exceeding the equivalent of 20 mg of hydrocortisone per day. We conclude that most patients with therapeutically resistant cystic acne have androgen excess and that lowering elevated dehydroepiandrosterone sulfate results in improvement or remission of acne in most instances.

  19. 41 CFR 102-36.305 - May we abandon or destroy excess personal property without reporting it to GSA?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... written determination that the property has no commercial value or the estimated cost of its continued... destroy excess personal property without reporting it to GSA? 102-36.305 Section 102-36.305 Public... MANAGEMENT REGULATION PERSONAL PROPERTY 36-DISPOSITION OF EXCESS PERSONAL PROPERTY Disposition of Excess...

  20. Patterns of Excess Cancer Risk among the Atomic Bomb Survivors

    NASA Astrophysics Data System (ADS)

    Pierce, Donald A.

    1996-05-01

I will indicate the major epidemiological findings regarding excess cancer among the atomic-bomb survivors, with some special attention to what can be said about low-dose risks. This will be based on 1950--90 mortality follow-up of about 87,000 survivors having individual radiation dose estimates. Of these about 50,000 had doses greater than 0.005 Sv, and the remainder serve largely as a comparison group. It is estimated that for this cohort there have been about 400 excess cancer deaths among a total of about 7800. Since there are about 37,000 subjects in the dose range .005--.20 Sv, there is substantial low-dose information in this study. The person-year-sievert for the dose range under .20 Sv is greater than for any one of the 6 study cohorts of U.S., Canadian, and U.K. nuclear workers; and is equal to about 60% of the total for the combined cohorts. It is estimated, without linear extrapolation from higher doses, that for the RERF cohort there have been about 100 excess cancer deaths in the dose range under .20 Sv. Both the dose-response and age-time patterns of excess risk are very different for solid cancers and leukemia. One of the most important findings has been that the solid cancer (absolute) excess risk has steadily increased over the entire follow-up to date, similarly to the age-increase of the background risk. About 25% of the excess solid cancer deaths occurred in the last 5 years of the 1950--90 follow-up. On the contrary, most of the excess leukemia risk occurred in the first few years following exposure. The observed dose response for solid cancers is very linear up to about 3 Sv, whereas for leukemia there is statistically significant upward curvature on that range. Very little has been proposed to explain this distinction.
Although there is no hint of upward curvature or a threshold for solid cancers, the inherent difficulty of precisely estimating very small risks along with radiobiological observations that many radiation effects are nonlinear
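The contrast drawn above, a linear dose response for solid cancers versus significant upward curvature for leukemia, is conventionally examined by fitting an excess-relative-risk model ERR(d) = βd + γd² and inspecting the curvature term γ. The sketch below uses invented illustrative values, not RERF data.

```python
import numpy as np

def fit_linear_quadratic(dose, err):
    # Least-squares fit of ERR(d) = beta*d + gamma*d**2 (no intercept);
    # gamma measures upward curvature of the dose response
    X = np.column_stack([dose, dose ** 2])
    (beta, gamma), *_ = np.linalg.lstsq(X, err, rcond=None)
    return beta, gamma

# invented excess-relative-risk values per dose point (illustrative only)
dose = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0])   # Sv
err_solid = 0.5 * dose                              # linear response
err_leukemia = 0.2 * dose + 0.6 * dose ** 2         # upward curvature

b_solid, g_solid = fit_linear_quadratic(dose, err_solid)    # gamma ~ 0
b_leuk, g_leuk = fit_linear_quadratic(dose, err_leukemia)   # gamma > 0
```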

  1. Accurate estimation of the elastic properties of porous fibers

    SciTech Connect

    Thissell, W.R.; Zurek, A.K.; Addessio, F.

    1997-05-01

A procedure is described to calculate polycrystalline anisotropic fiber elastic properties with cylindrical symmetry and porosity. It uses a preferred orientation model (Tome ellipsoidal self-consistent model) for the determination of anisotropic elastic properties for the case of highly oriented carbon fibers. The model predictions, corrected for porosity, are compared to back-calculated fiber elastic properties of an IM6/3501-6 unidirectional composite whose elastic properties have been determined via resonant ultrasound spectroscopy. The Halpin-Tsai equations used to back-calculate fiber elastic properties are found to be inappropriate for anisotropic composite constituents. Modifications are proposed to the Halpin-Tsai equations to expand their applicability to anisotropic reinforcement materials.

  2. Androgen excess: Investigations and management.

    PubMed

    Lizneva, Daria; Gavrilova-Jordan, Larisa; Walker, Walidah; Azziz, Ricardo

    2016-11-01

    Androgen excess (AE) is a key feature of polycystic ovary syndrome (PCOS) and results in, or contributes to, the clinical phenotype of these patients. Although AE will contribute to the ovulatory and menstrual dysfunction of these patients, the most recognizable sign of AE includes hirsutism, acne, and androgenic alopecia or female pattern hair loss (FPHL). Evaluation includes not only scoring facial and body terminal hair growth using the modified Ferriman-Gallwey method but also recording and possibly scoring acne and alopecia. Moreover, assessment of biochemical hyperandrogenism is necessary, particularly in patients with unclear or absent hirsutism, and will include assessing total and free testosterone (T), and possibly dehydroepiandrosterone sulfate (DHEAS) and androstenedione, although these latter contribute limitedly to the diagnosis. Assessment of T requires use of the highest quality assays available, generally radioimmunoassays with extraction and chromatography or mass spectrometry preceded by liquid or gas chromatography. Management of clinical hyperandrogenism involves primarily either androgen suppression, with a hormonal combination contraceptive, or androgen blockade, as with an androgen receptor blocker or a 5α-reductase inhibitor, or a combination of the two. Medical treatment should be combined with cosmetic treatment including topical eflornithine hydrochloride and short-term (shaving, chemical depilation, plucking, threading, waxing, and bleaching) and long-term (electrolysis, laser therapy, and intense pulse light therapy) cosmetic treatments. Generally, acne responds to therapy relatively rapidly, whereas hirsutism is slower to respond, with improvements observed as early as 3 months, but routinely only after 6 or 8 months of therapy. Finally, FPHL is the slowest to respond to therapy, if it will at all, and it may take 12 to 18 months of therapy for an observable response.

  3. 32 CFR 644.475 - Excessing Army military and Air Force property.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the Related Land) § 644.475 Excessing Army military and Air Force property. The procedures for placing buildings and improvements in excess status are set forth in AR 405-90 and AFR 87-4. In instances of land... Command to furnish an estimate of the value of buildings and improvements for the purpose of...

  4. Computationally efficient variable resolution depth estimation

    NASA Astrophysics Data System (ADS)

    Calder, B. R.; Rice, G.

    2017-09-01

A new algorithm for data-adaptive, large-scale, computationally efficient estimation of bathymetry is proposed. The algorithm uses a first pass over the observations to construct a spatially varying estimate of data density, which is then used to predict achievable estimate sample spacing for robust depth estimation across the area of interest. A low-resolution estimate of depth is also constructed during the first pass as a guide for further work. A piecewise-regular grid is then constructed following the sample spacing estimates, and accurate depth is finally estimated using the composite refined grid and an extended and re-implemented version of the CUBE algorithm. Resource-efficient data structures allow for the algorithm to operate over large areas and large datasets without excessive compute resources; modular design allows for more complex spatial representations to be included if required. The proposed system is demonstrated on a pair of hydrographic datasets, illustrating the adaptation of the algorithm to different depth- and sensor-driven data densities. Although the algorithm was designed for bathymetric estimation, it could be readily used on other two-dimensional scalar fields where variable data density is a driver.
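The first pass described above (spatially varying density converted into achievable node spacing) can be sketched as a coarse binning step. This is not the authors' implementation; the cell size, the per-node sounding target n_min, and the spacing bounds are invented parameters of this sketch.

```python
import numpy as np

def spacing_map(x, y, cell=32.0, n_min=10.0, s_min=1.0, s_max=64.0):
    # First-pass sketch: bin soundings into coarse cells, then turn the
    # per-cell data density into an achievable estimation-node spacing.
    # Spacing shrinks where data are dense; it is capped to [s_min, s_max].
    xe = np.arange(x.min(), x.max() + cell, cell)
    ye = np.arange(y.min(), y.max() + cell, cell)
    counts, _, _ = np.histogram2d(x, y, bins=(xe, ye))
    density = counts / cell ** 2              # soundings per unit area
    spacing = np.sqrt(n_min / np.maximum(density, 1e-12))
    return np.clip(spacing, s_min, s_max)

# synthetic survey: a dense patch and a sparse patch of soundings
rng = np.random.default_rng(0)
x = np.concatenate([rng.uniform(0, 32, 2000), rng.uniform(32, 64, 20)])
y = np.concatenate([rng.uniform(0, 32, 2000), rng.uniform(32, 64, 20)])
spacing = spacing_map(x, y)   # finer spacing over the dense patch
```

A second pass would then place estimation nodes at the predicted spacing and run the depth estimator (CUBE, in the paper) on the refined grid.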

  5. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis when more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing methods (ABC). Nurses must participate in this costing process to assure that capitation bids are based upon accurate costs rather than simple averages.
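The contrast between aggregate and activity-based costing can be made concrete with a toy example (all figures hypothetical): aggregate costing assigns every treatment the average cost, while ABC allocates overhead by the nurse-hours each treatment actually consumes.

```python
# hypothetical unit: two treatment types, ten of each delivered,
# sharing $1000 of general overhead (all figures invented)
treatments = {
    "A": {"nurse_hours": 0.5, "supplies": 20.0},
    "B": {"nurse_hours": 3.0, "supplies": 80.0},
}
NURSE_RATE = 40.0      # $/hour
OVERHEAD = 1000.0      # $ per period
N_PER_TYPE = 10

# direct cost per treatment: nursing time plus supplies
direct = {k: t["nurse_hours"] * NURSE_RATE + t["supplies"]
          for k, t in treatments.items()}

# aggregate costing: total cost spread evenly over all treatments
total_cost = OVERHEAD + N_PER_TYPE * sum(direct.values())
aggregate_cost = total_cost / (N_PER_TYPE * len(treatments))   # $170 each

# activity-based costing: overhead allocated by nurse-hours consumed
total_hours = N_PER_TYPE * sum(t["nurse_hours"] for t in treatments.values())
abc_cost = {k: direct[k] + OVERHEAD * t["nurse_hours"] / total_hours
            for k, t in treatments.items()}
# ABC shows A (~$54) is far cheaper, and B (~$286) far costlier, than the average
```

A capitation bid priced off the $170 average would overcharge for treatment A and seriously underprice treatment B, which is the article's core argument for ABC.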

  6. Determining site index accurately in even-aged stands

    Treesearch

    Gayne G. Erdmann; Ralph M., Jr. Peterson

    1992-01-01

    Good site index estimates are necessary for intensive forest management. To get tree age used in determining site index, increment cores are commonly used. The diffuse-porous rings of northern hardwoods, though, are difficult to count in cores, so many site index estimates are imprecise. Also, measuring the height of standing trees is more difficult and less accurate...

  7. Excess noise in tunable diode lasers

    NASA Technical Reports Server (NTRS)

    Rowland, C. W.

    1981-01-01

    The method and the apparatus for identifying excess-noise regions in tunable diode lasers are described. These diode lasers exhibit regions of excess noise as their wavelength is tuned. If a tunable diode laser is to be used as a local oscillator in a superheterodyne optical receiver, these excess-noise regions severely degrade the performance of the receiver. Measurement results for several tunable diode lasers are given. These results indicate that excess noise is not necessarily associated with a particular wavelength, and that it is possible to select temperature and injection current such that the most ideal performance is achieved.

  8. Initial report on characterization of excess highly enriched uranium

    SciTech Connect

    1996-07-01

DOE's Office of Fissile Materials Disposition assigned to this Y-12 division the task of preparing a report on the 174.4 metric tons of excess highly enriched U. Characterization included identification by category, gathering existing data (assay), defining the likely needed processing steps for prepping for transfer to a blending site, and developing a range of preliminary cost estimates for those steps. Focus is on making commercial reactor fuel as a final disposition path.

  9. [Excess mortality associated with influenza in Spain in winter 2012].

    PubMed

    León-Gómez, Inmaculada; Delgado-Sanz, Concepción; Jiménez-Jorge, Silvia; Flores, Víctor; Simón, Fernando; Gómez-Barroso, Diana; Larrauri, Amparo; de Mateo Ontañón, Salvador

    2015-01-01

An excess of mortality was detected in Spain in February and March 2012 by the Spanish daily mortality surveillance system and the «European monitoring of excess mortality for public health action» program. The objective of this article was to determine whether this excess could be attributed to influenza in this period. Excess mortality from all causes from 2006 to 2012 was studied using time series in the Spanish daily mortality surveillance system, and Poisson regression in the European mortality surveillance system, as well as the FluMOMO model, which estimates the mortality attributable to influenza. Excess mortality due to influenza and pneumonia attributable to influenza was studied by a modification of the Serfling model. To detect the periods of excess, we compared observed and expected mortality. In February and March 2012, both the Spanish daily mortality surveillance system and the European mortality surveillance system detected a mortality excess of 8,110 and 10,872 deaths (mortality ratio (MR): 1.22 (95% CI: 1.21-1.23) and 1.32 (95% CI: 1.29-1.31), respectively). In the 2011-12 season, the FluMOMO model identified the maximum percentage (97%) of deaths attributable to influenza in people older than 64 years with respect to the mortality total associated with influenza (13,822 deaths). The rate of excess mortality due to influenza and pneumonia and respiratory causes in people older than 64 years, obtained by the Serfling model, also reached a peak in the 2011-2012 season: 18.07 and 77.20 deaths per 100,000 inhabitants, respectively. A significant increase in mortality in elderly people in Spain was detected by the Spanish daily mortality surveillance system and by the European mortality surveillance system in the winter of 2012, coinciding with a late influenza season, with a predominance of the A(H3N2) virus, and a cold wave in Spain. This study suggests that influenza could have been one of the main factors contributing to the mortality excess.
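A Serfling-type baseline of the kind used above is conventionally a least-squares fit of a linear trend plus annual harmonics to non-epidemic weeks, with excess mortality taken as observed minus expected over the epidemic period. The sketch below runs on synthetic data; the weekly series and the 300-deaths/week epidemic bump are invented.

```python
import numpy as np

def serfling_baseline(week, deaths, exclude=None):
    # Serfling-style model: linear trend plus one annual harmonic,
    # fit by least squares with epidemic weeks excluded from the fit
    t = week / 52.0
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    keep = np.ones(len(week), dtype=bool)
    if exclude is not None:
        keep[exclude] = False
    beta, *_ = np.linalg.lstsq(X[keep], deaths[keep], rcond=None)
    return X @ beta        # expected (baseline) deaths for every week

week = np.arange(260)                             # five years of weekly data
baseline = 1000 + 0.2 * week + 100 * np.cos(2 * np.pi * week / 52.0)
deaths = baseline.copy()
deaths[100:108] += 300.0                          # hypothetical epidemic weeks
expected = serfling_baseline(week, deaths, exclude=slice(100, 108))
excess = float((deaths - expected)[100:108].sum())   # ~ 8 weeks * 300 deaths
```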

  10. Damages and Expected Deaths Due to Excess NOx Emissions from 2009 to 2015 Volkswagen Diesel Vehicles.

    PubMed

    Holland, Stephen P; Mansur, Erin T; Muller, Nicholas Z; Yates, Andrew J

    2016-02-02

    We estimate the damages and expected deaths in the United States due to excess emissions of NOx from 2009 to 2015 Volkswagen diesel vehicles. Using data on vehicle registrations and a model of pollution transport and valuation, we estimate excess damages of $430 million and 46 excess expected deaths. Accounting for uncertainty about emissions gives a range for damages from $350 million to $500 million, and a range for excess expected deaths from 40 to 52. Our estimates incorporate significant local heterogeneity: for example, Minneapolis has the highest damages despite having fewer noncompliant vehicles than 13 other cities. Our estimated damages greatly exceed possible benefits from reduced CO2 emissions due to increased fuel economy.

  11. 7 CFR 29.1016 - Excessively scorched.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Excessively scorched. 29.1016 Section 29.1016..., 13, 14 and Foreign Type 92) § 29.1016 Excessively scorched. As applied to flue-cured tobacco, the... percent of unripe tobacco. [51 FR 25027, July 10, 1986] ...

  12. 34 CFR 668.166 - Excess cash.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cash Management § 668.166 Excess cash. (a... than Federal Perkins Loan Program funds, that an institution does not disburse to students or parents... funds that an institution receives from the Secretary under the just-in-time payment method. (b) Excess...

  13. 34 CFR 668.166 - Excess cash.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cash Management § 668.166 Excess cash. (a... than Federal Perkins Loan Program funds, that an institution does not disburse to students or parents... funds that an institution receives from the Secretary under the just-in-time payment method. (b) Excess...

  14. Bladder calculus presenting as excessive masturbation.

    PubMed

    De Alwis, A C D; Senaratne, A M R D; De Silva, S M P D; Rodrigo, V S D

    2006-09-01

    Masturbation in childhood is a normal behaviour which most commonly begins at 2 months of age, and peaks at 4 years and in adolescence. However, excessive masturbation causes anxiety in parents. We describe a boy with a bladder calculus presenting as excessive masturbation.

  15. 24 CFR 236.60 - Excess Income.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... mortgagor owes prior Excess Income and is not current in payments under a HUD-approved Workout or Repayment... current in payments under a HUD-approved Workout or Repayment Agreement or the mortgagor falls within any... of Excess Income that was: (i) Repaid in accordance with a Workout or Repayment Agreement with...

  16. Part B Excess Cost Quick Reference Document

    ERIC Educational Resources Information Center

    Ball, Wayne; Beridon, Virginia; Hamre, Kent; Morse, Amanda

    2011-01-01

    This Quick Reference Document has been prepared by the Regional Resource Center Program ARRA/Fiscal Priority Team to aid RRCP State Liaisons and other (Technical Assistance) TA providers in understanding the general context of state questions surrounding excess cost. As a "first-stop" for TA providers in investigating excess cost…

  17. 30 CFR 57.6902 - Excessive temperatures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Excessive temperatures. 57.6902 Section 57.6902... Requirements-Surface and Underground § 57.6902 Excessive temperatures. (a) Where heat could cause premature... shall— (1) Measure an appropriate number of blasthole temperatures in order to assess the specific mine...

  18. 30 CFR 56.6902 - Excessive temperatures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Excessive temperatures. 56.6902 Section 56.6902... Requirements § 56.6902 Excessive temperatures. (a) Where heat could cause premature detonation, explosive... an appropriate number of blasthole temperatures in order to assess the specific mine conditions prior...

  19. 7 CFR 929.59 - Excess cranberries.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 8 2011-01-01 2011-01-01 false Excess cranberries. 929.59 Section 929.59 Agriculture... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE CRANBERRIES GROWN IN STATES OF... LONG ISLAND IN THE STATE OF NEW YORK Order Regulating Handling Regulations § 929.59 Excess cranberries...

  20. 7 CFR 929.59 - Excess cranberries.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Excess cranberries. 929.59 Section 929.59 Agriculture... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE CRANBERRIES GROWN IN STATES OF... LONG ISLAND IN THE STATE OF NEW YORK Order Regulating Handling Regulations § 929.59 Excess cranberries...

  1. 7 CFR 929.59 - Excess cranberries.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 8 2014-01-01 2014-01-01 false Excess cranberries. 929.59 Section 929.59 Agriculture... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE CRANBERRIES GROWN IN STATES OF... LONG ISLAND IN THE STATE OF NEW YORK Order Regulating Handling Regulations § 929.59 Excess cranberries...

  2. 7 CFR 929.59 - Excess cranberries.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 8 2012-01-01 2012-01-01 false Excess cranberries. 929.59 Section 929.59 Agriculture... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE CRANBERRIES GROWN IN STATES OF... LONG ISLAND IN THE STATE OF NEW YORK Order Regulating Handling Regulations § 929.59 Excess cranberries...

  3. 7 CFR 929.59 - Excess cranberries.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 8 2013-01-01 2013-01-01 false Excess cranberries. 929.59 Section 929.59 Agriculture... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE CRANBERRIES GROWN IN STATES OF... LONG ISLAND IN THE STATE OF NEW YORK Order Regulating Handling Regulations § 929.59 Excess cranberries...

  4. 10 CFR 904.9 - Excess capacity.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... ENERGY GENERAL REGULATIONS FOR THE CHARGES FOR THE SALE OF POWER FROM THE BOULDER CANYON PROJECT Power... entitled to such Excess Capacity to integrate the operation of the Boulder City Area Projects and other Federal Projects on the Colorado River. Specific criteria for the use of Excess Capacity by Western will...

  5. Advances in Glomerular Filtration Rate Estimating Equations

    PubMed Central

    Stevens, Lesley A; Padala, Smita; Levey, Andrew S

    2011-01-01

    Purpose of review Estimated GFR is now commonly reported by clinical laboratories. Here we review the performance of current creatinine- and cystatin C-based estimating equations as well as demonstrations of their utility in public health and clinical practice. Recent findings Lower levels of GFR are associated with multiple adverse outcomes, including acute kidney injury and medical errors. The new CKD-EPI equation improves performance and risk prediction compared to the MDRD Study equation. Current cystatin C-based equations are not accurate in all populations, even in those with reduced muscle mass or chronic illness, where cystatin C would be expected to outperform creatinine. eGFR reporting has led to a greater number of referrals to nephrologists, but the increased numbers do not appear to be excessive or burdensome. The MDRD Study equation appears to be able to provide drug dosage adjustments similar to the Cockcroft-Gault equation. Summary Estimated GFR and its reporting can improve and facilitate clinical practice for chronic kidney disease. Understanding their strengths and limitations facilitates their optimal use. Endogenous filtration markers, alone or in combination, that are less dependent on non-GFR determinants of the filtration markers are necessary to lead to more accurate GFR estimates. PMID:20393287
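    As an illustration of such creatinine-based estimating equations, here is a sketch of the 2009 CKD-EPI creatinine formula with its published coefficients. This is transcribed for illustration only and should be verified against the original publication before any real use.

```python
def ckd_epi_2009(scr_mg_dl, age, female, black=False):
    """Estimated GFR (mL/min/1.73 m^2), 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9      # sex-specific creatinine knot
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Example: 60-year-old woman with serum creatinine 1.1 mg/dL
print(round(ckd_epi_2009(1.1, 60, female=True)))
```

    The piecewise `min`/`max` structure is what distinguishes CKD-EPI from the single power law of the MDRD Study equation and improves accuracy at higher GFR.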

  6. 75 FR 27572 - Monthly Report of Excess Income and Annual Report of Uses of Excess Income

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-17

    ... URBAN DEVELOPMENT Monthly Report of Excess Income and Annual Report of Uses of Excess Income AGENCY... and Annual Report of Uses of Excess Income. OMB Approval Number: 2502-0086. Form Numbers: None--form... INFORMATION CONTACT: Leroy McKinney Jr., Reports Management Officer, QDAM, Department of Housing and...

  7. Cardiovascular investigations of airline pilots with excessive cardiovascular risk.

    PubMed

    Wirawan, I Made Ady; Aldington, Sarah; Griffiths, Robin F; Ellis, Chris J; Larsen, Peter D

    2013-06-01

    This study examined the prevalence of airline pilots who have an excessive cardiovascular disease (CVD) risk score according to the New Zealand Guideline Group (NZGG) Framingham-based Risk Chart and describes their cardiovascular risk assessment and investigations. A cross-sectional study was performed among 856 pilots employed in an Oceania-based airline. Pilots with elevated CVD risk that had been previously evaluated at various times over the previous 19 yr were reviewed retrospectively from the airline's medical records, and the subsequent cardiovascular investigations were then described. There were 30 (3.5%) pilots who were found to have a 5-yr CVD risk score of 10-15% or higher. Of the 29 pilots who had complete cardiac investigation data, 26 pilots underwent exercise electrocardiography (ECG), 2 pilots progressed directly to coronary angiograms and 1 pilot with an abnormal echocardiogram was not examined further. Of the 26 pilots, 7 had positive or borderline exercise tests, all of whom subsequently had angiograms. One patient with a negative exercise test also had a coronary angiogram. Of the 9 patients who had coronary angiograms as a consequence of screening, 5 had significant disease that required treatment and 4 had either trivial disease or normal coronary arteries. The current approach to investigate excessive cardiovascular risk in pilots relies heavily on exercise electrocardiograms as a diagnostic test, and may not be optimal either to detect disease or to protect pilots from unnecessary invasive procedures. A more comprehensive and accurate cardiac investigation algorithm to assess excessive CVD risk in pilots is required.

  8. Millisecond Pulsars and the Galactic Center Excess

    NASA Astrophysics Data System (ADS)

    Gonthier, Peter L.; Koh, Yew-Meng; Kust Harding, Alice; Ferrara, Elizabeth C.

    2017-08-01

    Various groups including the Fermi team have confirmed the spectrum of the gamma-ray excess in the Galactic Center (GCE). While some authors interpret the GCE as evidence for the annihilation of dark matter (DM), others have pointed out that the GCE spectrum is nearly identical to the average spectrum of Fermi millisecond pulsars (MSP). Assuming the Galactic Center (GC) is populated by a yet unobserved source of MSPs that has similar properties to that of MSPs in the Galactic Disk (GD), we present results of a population synthesis of MSPs from the GC. We establish parameters of various models implemented in the simulation code by matching characteristics of 54 detected Fermi MSPs in the first point source catalog and 92 detected radio MSPs in a select group of thirteen radio surveys and targeting a birth rate of 45 MSPs per mega-year. As a check of our simulation, we find excellent agreement with the estimated numbers of MSPs in eight globular clusters. In order to reproduce the gamma-ray spectrum of the GCE, we need to populate the GC with 10,000 MSPs having a Navarro-Frenk-White distribution suggested by the halo density of DM. It may be possible for Fermi to detect some of these MSPs in the near future; the simulation also predicts that many GC MSPs have radio fluxes S1400 above 10 μJy observable by future pointed radio observations. We express our gratitude for the generous support of the National Science Foundation (RUI: AST-1009731), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program (NNX09AQ71G).

  9. Excessive crying in infants with regulatory disorders.

    PubMed

    Maldonado-Duran, M; Sauceda-Garcia, J M

    1996-01-01

    The authors point out a correlation between regulatory disorders in infants and the problem of excessive crying. The literature describes other behavioral problems involving excessive crying in very young children, but with little emphasis on this association. The recognition and diagnosis of regulatory disorders in infants who cry excessively can help practitioners design appropriate treatment interventions. Understanding these conditions can also help parents tailor their caretaking style, so that they provide appropriate soothing and stimulation to their child. In so doing, they will be better able to develop and preserve a satisfactory parent-child relationship, as well as to maintain their own sense of competence and self-esteem as parents.

  10. Genetics Home Reference: aromatase excess syndrome

    MedlinePlus

    ... Sources for This Page Fukami M, Shozu M, Ogata T. Molecular bases and phenotypic determinants of aromatase ... T, Nishigaki T, Yokoya S, Binder G, Horikawa R, Ogata T. Aromatase excess syndrome: identification of cryptic duplications ...

  11. 34 CFR 300.16 - Excess costs.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... are in excess of the average annual per-student expenditure in an LEA during the preceding school year for an elementary school or secondary school student, as may be appropriate, and that must be...

  12. Phospholipids as Biomarkers for Excessive Alcohol Use

    DTIC Science & Technology

    2014-10-01

    excessive alcohol use (EAU); a rising epidemic reported to be as high as 40% among returning veterans. Drinking becomes excessive when it causes or...contributor to the onset and exacerbation of EAU. The prevalence of EAU is alarming, and the vigilance and action to identify veterans with EAU is...of importance. The consequences of under-detection of EAU, thus delayed intervention, are serious because relative risk of alcohol-related health

  13. EFFECTS OF CHRONIC EXCESS SALT FEEDING

    PubMed Central

    Dahl, Lewis K.; Heine, Martha

    1961-01-01

    Female rats were fed diets containing either excess sea salt or excess sodium chloride for periods up to 14 months. The hypertension produced by sea salt was more pronounced than that caused by sodium chloride alone, although the average amount of sodium chloride contained in the sea salt feeding was slightly less. The ions involved in this incremental effect of sea salt were not identified. PMID:13719314

  14. An Accurate D0 value for SiF

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Arnold, James O. (Technical Monitor)

    1997-01-01

    A highly accurate D0 value is determined for SiF using the CCSD(T) approach in conjunction with basis set extrapolation. The result includes the effect of spin-orbit coupling and core-valence correlation. Our best estimate for D0 is 141.3 kcal/mol, which we estimate to have an uncertainty of 0.5 kcal/mol and must be accurate to 1.0 kcal/mol. This value is significantly larger than experiment and slightly larger than previous calculations.

  15. Price Estimation Guidelines

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.

    1985-01-01

    The Improved Price Estimation Guidelines (IPEG4) program provides a comparatively simple, yet relatively accurate, estimate of the price of a manufactured product. IPEG4 processes user-supplied input data to determine an estimate of price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on an industry-wide or process-wide basis.
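    The per-unit price computation described here can be caricatured as annualized capital charges plus direct operating costs, divided by production volume. The capital-recovery and space-rate factors below are invented placeholders, not IPEG4's actual coefficients.

```python
# Hypothetical cost model in the spirit of IPEG: annualize equipment and
# floor-space costs, add direct costs, divide by annual production volume.
def unit_price(equipment, space, labor, materials, utilities, volume,
               cap_recovery=0.2, space_rate=0.1):
    """Rough price per unit under assumed capital-recovery factors."""
    annual_cost = (equipment * cap_recovery + space * space_rate
                   + labor + materials + utilities)
    return annual_cost / volume

print(round(unit_price(equipment=500000, space=200000, labor=300000,
                       materials=150000, utilities=50000,
                       volume=100000), 2))  # → 6.2
```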

  17. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized.

  18. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  19. On numerically accurate finite element

    NASA Technical Reports Server (NTRS)

    Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.

    1974-01-01

    A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double edge cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are discussed.

  20. 3D Volumetry and its Correlation Between Postoperative Gastric Volume and Excess Weight Loss After Sleeve Gastrectomy.

    PubMed

    Hanssen, Andrés; Plotnikov, Sergio; Acosta, Geylor; Nuñez, José Tomas; Haddad, José; Rodriguez, Carmen; Petrucci, Claudia; Hanssen, Diego; Hanssen, Rafael

    2017-09-15

    The volume of the postoperative gastric remnant is a key factor in excess weight loss (EWL) after sleeve gastrectomy (SG). Traditional methods to estimate gastric volume (GV) after bariatric procedures are often inaccurate; usually conventional biplanar contrast studies are used. Thirty patients who underwent SG were followed prospectively and evaluated at 6 months after the surgical procedure, performing 3D CT reconstruction and gastric volumetry, to establish its relationship with EWL. The gastric remnant was distended with effervescent sodium bicarbonate given orally. Helical CT images were acquired and reconstructed; GV was estimated with the software of the CT device. The relationship between GV and EWL was analyzed. The study allowed estimating the GV in all patients. A dispersion diagram showed an inverse relationship between GV and %EWL. 55.5% of patients with GV ≤ 100 ml had a %EWL of 25-75% and 38.8% had a %EWL above 75%, while patients with GV ≥ 100 ml had a %EWL under 25% (50% of patients) or between 25 and 75% (50% of this group). The Pearson's correlation coefficient was R = 6.62, with two-tailed significance (p ≤ .01). The Chi-square result correlating GV and EWL showed a significance of .005 (p ≤ .01). The 3D reconstructions accurately showed the shape and anatomic details of the gastric remnant. 3D volumetry CT scans accurately estimate GV after SG. A significant relationship between GV and EWL 6 months after SG was established, suggesting that a GV ≥ 100 ml at 6 months after SG is associated with poor EWL.

  1. Excess body weight increases the burden of age-associated chronic diseases and their associated health care expenditures

    PubMed Central

    Atella, Vincenzo; Kopinska, Joanna; Medea, Gerardo; Belotti, Federico; Tosti, Valeria; Mortari, Andrea Piano; Cricelli, Claudio; Fontana, Luigi

    2015-01-01

    Aging and excessive adiposity are both associated with an increased risk of developing multiple chronic diseases, which drive ever increasing health costs. The main aim of this study was to determine the net (non‐estimated) health costs of excessive adiposity and associated age‐related chronic diseases. We used a prevalence‐based approach that combines accurate data from the Health Search CSD‐LPD, an observational dataset with patient records collected by Italian general practitioners and up‐to‐date health care expenditures data from the SiSSI Project. In this very large study, 557,145 men and women older than 18 years were observed at different points in time between 2004 and 2010. The proportion of younger and older adults reporting no chronic disease decreased with increasing BMI. After adjustment for age, sex, geographic residence, and GPs heterogeneity, a strong J‐shaped association was found between BMI and total health care costs, more pronounced in middle‐aged and older adults. Relative to normal weight, in the 45‐64 age group, the per‐capita total cost was 10% higher in overweight individuals, and 27 to 68% greater in patients with obesity and very severe obesity, respectively. The association between BMI and diabetes, hypertension and cardiovascular disease largely explained these elevated costs. PMID:26540605

  2. Excess body weight increases the burden of age-associated chronic diseases and their associated health care expenditures.

    PubMed

    Atella, Vincenzo; Kopinska, Joanna; Medea, Gerardo; Belotti, Federico; Tosti, Valeria; Mortari, Andrea Piano; Cricelli, Claudio; Fontana, Luigi

    2015-10-01

    Aging and excessive adiposity are both associated with an increased risk of developing multiple chronic diseases, which drive ever increasing health costs. The main aim of this study was to determine the net (non-estimated) health costs of excessive adiposity and associated age-related chronic diseases. We used a prevalence-based approach that combines accurate data from the Health Search CSD-LPD, an observational dataset with patient records collected by Italian general practitioners and up-to-date health care expenditures data from the SiSSI Project. In this very large study, 557,145 men and women older than 18 years were observed at different points in time between 2004 and 2010. The proportion of younger and older adults reporting no chronic disease decreased with increasing BMI. After adjustment for age, sex, geographic residence, and GPs heterogeneity, a strong J-shaped association was found between BMI and total health care costs, more pronounced in middle-aged and older adults. Relative to normal weight, in the 45-64 age group, the per-capita total cost was 10% higher in overweight individuals, and 27 to 68% greater in patients with obesity and very severe obesity, respectively. The association between BMI and diabetes, hypertension and cardiovascular disease largely explained these elevated costs.

  3. The effect of external dynamic loads on the lifetime of rolling element bearings: accurate measurement of the bearing behaviour

    NASA Astrophysics Data System (ADS)

    Jacobs, W.; Boonen, R.; Sas, P.; Moens, D.

    2012-05-01

    Accurate prediction of the lifetime of rolling element bearings is a crucial step towards a reliable design of many rotating machines. Recent research emphasizes an important influence of external dynamic loads on the lifetime of bearings. However, most lifetime calculations of bearings are based on the classical ISO 281 standard, neglecting this influence. For bearings subjected to highly varying loads, this leads to inaccurate estimations of the lifetime, and therefore excessive safety factors during the design and unexpected failures during operation. This paper presents a novel test rig, developed to analyse the behaviour of rolling element bearings subjected to highly varying loads. Since bearings are very precise machine components, their motion can only be measured in an accurately controlled environment. Otherwise, noise from other components and external influences such as temperature variations will dominate the measurements. The test rig is optimised to perform accurate measurements of the bearing behaviour. Also, the test bearing is fitted in a modular structure, which guarantees precise mounting and allows testing different types and sizes of bearings. Finally, a fully controlled multi-axial static and dynamic load is imposed on the bearing, while its behaviour is monitored with capacitive proximity probes.

  4. Accurate method for computing correlated color temperature.

    PubMed

    Li, Changjun; Cui, Guihua; Melgosa, Manuel; Ruan, Xiukai; Zhang, Yaoju; Ma, Long; Xiao, Kaida; Luo, M Ronnier

    2016-06-27

    For the correlated color temperature (CCT) of a light source to be estimated, a nonlinear optimization problem must be solved. In all previous methods available to compute CCT, the objective function has only been approximated, and their predictions have achieved limited accuracy. For example, different unacceptable CCT values have been predicted for light sources located on the same isotemperature line. In this paper, we propose to compute CCT using the Newton method, which requires the first and second derivatives of the objective function. Following the current recommendation by the International Commission on Illumination (CIE) for the computation of tristimulus values (summations at 1 nm steps from 360 nm to 830 nm), the objective function and its first and second derivatives are explicitly given and used in our computations. Comprehensive tests demonstrate that the proposed method, together with an initial estimation of CCT using Robertson's method [J. Opt. Soc. Am. 58, 1528-1535 (1968)], gives prediction errors below 0.0012 K for light sources with CCTs ranging from 500 K to 10⁶ K.
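    The Newton iteration at the heart of this approach can be sketched generically: solve f'(T) = 0 via T ← T - f'(T)/f''(T). The toy quadratic objective below merely stands in for the paper's squared chromaticity distance to the Planckian locus, which requires CIE observer data to evaluate.

```python
def newton_minimize(fp, fpp, x0, tol=1e-9, max_iter=50):
    """Newton's method on f'(x) = 0: x <- x - f'(x)/f''(x)."""
    x = x0
    for _ in range(max_iter):
        step = fp(x) / fpp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Toy objective f(T) = (T - 6504)^2, a stand-in for the squared distance
# between a source's chromaticity and the Planckian locus at temperature T.
fp = lambda t: 2.0 * (t - 6504.0)   # first derivative
fpp = lambda t: 2.0                 # second derivative
print(round(newton_minimize(fp, fpp, x0=5000.0), 1))  # → 6504.0
```

    For a quadratic, Newton converges in one step; on the real CCT objective the paper supplies exact first and second derivatives so the iteration stays fast and avoids the isotemperature-line ambiguity of approximate methods.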

  5. Phenomenology and psychopathology of excessive indoor tanning.

    PubMed

    Petit, Aymeric; Karila, Laurent; Chalmin, Florence; Lejoyeux, Michel

    2014-06-01

    Excessive indoor tanning, defined by the presence of an impulse towards and repetition of tanning that leads to personal distress, has only recently been recognized as a psychiatric disorder. This finding is based on the observations of many dermatologists who report addictive relationships with tanning salons among their patients, even after these patients have received diagnoses of malignant melanoma. This article synthesizes the existing literature on excessive indoor tanning and addiction to investigate possible associations. This review focuses on the prevalence, clinical features, etiology, and treatment of this disorder. A literature review was conducted, using PubMed, Google Scholar, EMBASE and PsycINFO, to identify articles published in English from 1974 to 2013. Excessive indoor tanning may be related to addiction, obsessive-compulsive disorder, impulse control disorder, seasonal affective disorder, anorexia, body dysmorphic disorder, or depression. Excessive indoor tanning can be included in the spectrum of addictive behavior because it has clinical characteristics in common with those of classic addictive disorders. It is frequently associated with anxiety, eating disorders, and tobacco dependence. Further controlled studies are required, especially in clinical psychopathology and neurobiology, to improve our understanding of excessive indoor tanning.

  6. Accurate ab Initio Spin Densities.

    PubMed

    Boguslawski, Katharina; Marti, Konrad H; Legeza, Ors; Reiher, Markus

    2012-06-12

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys.2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput.2011, 7, 2740].

  7. Accurate ab Initio Spin Densities

    PubMed Central

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys.2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput.2011, 7, 2740]. PMID:22707921

  8. Constraints on cosmic-ray positron excess and average pulsar parameters

    NASA Astrophysics Data System (ADS)

    Grimani, C.

    2007-11-01

Recent, accurate e^+/(e^++e^-) ratio measurements in cosmic rays allow us to distinguish among different estimates of secondary positron production in the interstellar medium (ISM), provided the effect of solar modulation and solar polarity are properly taken into account. Data above a few GeV indicate that a possible extra component of positrons could be required in addition to the secondaries. This positron excess is compatible with the hypothesis of pair production at the polar cap of mature pulsars. Assuming only pulsar contributions without any exotic contributions such as dark-matter annihilation, the average parameters of Galactic pulsars contributing to positron and electron interstellar fluxes were obtained. These parameter values are found near the peak of the distributions of the observed characteristics of radio pulsars. The studied gamma-ray pulsar sample is too small to draw any conclusions. The expected e^+/(e^++e^-) ratio from the PAMELA experiment currently in orbit is reported in this paper. The GLAST mission will allow us to double-check our findings about the role of pair production at the pulsar polar cap and outer gap.

  9. Same-sign dilepton excesses and vector-like quarks

    SciTech Connect

    Chen, Chuan-Ren; Cheng, Hsin-Chia; Low, Ian

    2016-03-15

Multiple analyses from ATLAS and CMS collaborations, including searches for ttH production, supersymmetric particles and vector-like quarks, observed excesses in the same-sign dilepton channel containing b-jets and missing transverse energy in the LHC Run 1 data. In the context of little Higgs theories with T parity, we explain these excesses using vector-like T-odd quarks decaying into a top quark, a W boson and the lightest T-odd particle (LTP). For heavy vector-like quarks, decay topologies containing the LTP have not been searched for at the LHC. The bounds on the masses of the T-odd quarks can be estimated in a simplified model approach by adapting the search limits for top/bottom squarks in supersymmetry. Assuming a realistic decay branching fraction, a benchmark with a 750 GeV T-odd b' quark is proposed. Finally, we comment on the possibility of fitting the excesses seen in different analyses within a common framework.

  10. Same-sign dilepton excesses and vector-like quarks

    DOE PAGES

    Chen, Chuan-Ren; Cheng, Hsin-Chia; Low, Ian

    2016-03-15

Multiple analyses from ATLAS and CMS collaborations, including searches for ttH production, supersymmetric particles and vector-like quarks, observed excesses in the same-sign dilepton channel containing b-jets and missing transverse energy in the LHC Run 1 data. In the context of little Higgs theories with T parity, we explain these excesses using vector-like T-odd quarks decaying into a top quark, a W boson and the lightest T-odd particle (LTP). For heavy vector-like quarks, decay topologies containing the LTP have not been searched for at the LHC. The bounds on the masses of the T-odd quarks can be estimated in a simplified model approach by adapting the search limits for top/bottom squarks in supersymmetry. Assuming a realistic decay branching fraction, a benchmark with a 750 GeV T-odd b' quark is proposed. Finally, we comment on the possibility of fitting the excesses seen in different analyses within a common framework.

  11. ORIGIN OF EXCESS {sup 176}Hf IN METEORITES

    SciTech Connect

    Thrane, Kristine; Connelly, James N.; Bizzarro, Martin; Meyer, Bradley S.; The, Lih-Sin

    2010-07-10

After considerable controversy regarding the {sup 176}Lu decay constant ({lambda}{sup 176}Lu), there is now widespread agreement that (1.867 {+-} 0.008) x 10{sup -11} yr{sup -1} as confirmed by various terrestrial objects and a 4557 Myr meteorite is correct. This leaves the {sup 176}Hf excesses that are correlated with Lu/Hf elemental ratios in meteorites older than {approx}4.56 Ga unresolved. We attribute {sup 176}Hf excess in older meteorites to an accelerated decay of {sup 176}Lu caused by excitation of the long-lived {sup 176}Lu ground state to a short-lived {sup 176m}Lu isomer. The energy needed to cause this transition is ascribed to a post-crystallization spray of cosmic rays accelerated by nearby supernova(e) that occurred after 4564.5 Ma. The majority of these cosmic rays are estimated to penetrate accreted material down to 10-20 m, whereas a small fraction penetrate as deep as 100-200 m, predicting decreased excesses of {sup 176}Hf with depth of burial at the time of the irradiation event.

  12. Excess deaths during the 2004 heatwave in Brisbane, Australia.

    PubMed

    Tong, Shilu; Ren, Cizao; Becker, Niels

    2010-07-01

The paper examines whether there was an excess of deaths and the relative role of temperature and ozone in a heatwave during 7-26 February 2004 in Brisbane, Australia, a subtropical city accustomed to warm weather. The data on daily counts of deaths from cardiovascular disease and non-external causes, meteorological conditions, and air pollution in Brisbane from 1 January 2001 to 31 October 2004 were supplied by the Australian Bureau of Statistics, Australian Bureau of Meteorology, and Queensland Environmental Protection Agency, respectively. The relationship between temperature and mortality was analysed using a Poisson time series regression model with smoothing splines to control for nonlinear effects of confounding factors. The highest temperature recorded in the 2004 heatwave was 42 degrees C compared with the highest recorded temperature of 34 degrees C during the same periods of 2001-2003. There was a significant relationship between exposure to heat and excess deaths in the 2004 heatwave (estimated increase in non-external deaths: 75 [95% confidence interval, CI: 11-138]; cardiovascular deaths: 41 [95% CI: -2 to 84]). There was no apparent evidence of substantial short-term mortality displacement. The excess deaths were mainly attributed to temperature but exposure to ozone also contributed to these deaths.
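The abstract above describes a Poisson time-series regression of daily death counts on temperature. The following is a hedged sketch of that model class, not the study's actual code: it uses synthetic data and a single linear temperature term fit by iteratively reweighted least squares (IRLS), whereas the study used smoothing splines and real Brisbane mortality records.

```python
import numpy as np

# Hedged sketch (not the study's actual analysis): a log-linear Poisson
# regression of daily death counts on temperature, fit by iteratively
# reweighted least squares (IRLS). All data below are synthetic.

rng = np.random.default_rng(0)
n = 1000
temp = rng.uniform(15.0, 42.0, n)          # daily maximum temperature (deg C)
X = np.column_stack([np.ones(n), temp])    # intercept + temperature
beta_true = np.array([2.0, 0.03])          # log-rate coefficients
y = rng.poisson(np.exp(X @ beta_true))     # simulated daily death counts

# Initialize from an ordinary least-squares fit to log(y + 0.5),
# then take Newton (IRLS) steps for the Poisson log-likelihood.
beta = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)[0]
for _ in range(25):
    mu = np.exp(X @ beta)                  # fitted mean deaths per day
    z = X @ beta + (y - mu) / mu           # working response
    W = mu                                 # Poisson working weights
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(beta)  # estimates should land near beta_true
```

A positive fitted temperature coefficient corresponds to the kind of heat-mortality relationship the study reports; the real analysis additionally controlled for confounders with smoothing splines.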

  13. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria for assessing Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  14. Effective and Accurate Colormap Selection

    NASA Astrophysics Data System (ADS)

    Thyng, K. M.; Greene, C. A.; Hetland, R. D.; Zimmerle, H.; DiMarco, S. F.

    2016-12-01

Science is often communicated through plots, and design choices can elucidate or obscure the presented data. The colormap used can honestly and clearly display data in a visually-appealing way, or can falsely exaggerate data gradients and confuse viewers. Fortunately, there is a large resource of literature in color science on how color is perceived which we can use to inform our own choices. Following this literature, colormaps can be designed to be perceptually uniform; that is, so an equally-sized jump in the colormap at any location is perceived by the viewer as the same size. This ensures that gradients in the data are accurately perceived. The same colormap is often used to represent many different fields in the same paper or presentation. However, this can cause difficulty in quick interpretation of multiple plots. For example, in one plot the viewer may have trained their eye to recognize that red represents high salinity, and therefore higher density, while in the subsequent temperature plot they need to adjust their interpretation so that red represents high temperature and therefore lower density. In the same way that a single Greek letter is typically chosen to represent a field for a paper, we propose to choose a single colormap to represent a field in a paper, and use multiple colormaps for multiple fields. We have created a set of colormaps that are perceptually uniform, and follow several other design guidelines. There are 18 colormaps to give options to choose from for intuitive representation. For example, a colormap of greens may be used to represent chlorophyll concentration, or browns for turbidity. With careful consideration of human perception and design principles, colormaps may be chosen which faithfully represent the data while also engaging viewers.
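One common way to probe the perceptual uniformity the abstract describes is to examine a colormap's CIE L* (lightness) profile: a perceptually uniform sequential colormap steps through L* in approximately equal increments. The sketch below is illustrative only (it is not the authors' colormaps or code); the constants are the standard sRGB and CIELAB ones, and it shows that a naive gray ramp with equal sRGB steps is not uniform in L*.

```python
import numpy as np

# Illustrative sketch: checking a colormap's lightness profile.
# Constants are the standard sRGB transfer function and CIELAB L*.

def srgb_to_lightness(rgb):
    """CIE L* of sRGB values in [0, 1] (D65 luminance weights)."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)   # undo sRGB gamma
    Y = lin @ np.array([0.2126, 0.7152, 0.0722])     # relative luminance
    f = np.where(Y > (6 / 29) ** 3, np.cbrt(Y),
                 Y / (3 * (6 / 29) ** 2) + 4 / 29)   # CIELAB f(Y)
    return 116.0 * f - 16.0

# A gray ramp with equal steps in sRGB is not perceptually uniform:
# its L* step sizes vary noticeably along the ramp.
gray = np.linspace(0.0, 1.0, 65)
L = srgb_to_lightness(np.stack([gray, gray, gray], axis=1))
steps = np.diff(L)
print(f"L* step spread: {steps.min():.2f} .. {steps.max():.2f}")
```

A perceptually uniform design would instead space the colors so that `steps` is (near) constant, which is the property the 18 colormaps described above are built to satisfy.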

  15. Excess Electron Localization in Solvated DNA Bases

    SciTech Connect

    Smyth, Maeve; Kohanoff, Jorge

    2011-06-10

    We present a first-principles molecular dynamics study of an excess electron in condensed phase models of solvated DNA bases. Calculations on increasingly large microsolvated clusters taken from liquid phase simulations show that adiabatic electron affinities increase systematically upon solvation, as for optimized gas-phase geometries. Dynamical simulations after vertical attachment indicate that the excess electron, which is initially found delocalized, localizes around the nucleobases within a 15 fs time scale. This transition requires small rearrangements in the geometry of the bases.

  16. Photoreceptor damage following exposure to excess riboflavin.

    PubMed

    Eckhert, C D; Hsu, M H; Pang, N

    1993-12-15

    Flavins generate oxidants during metabolism and when exposed to light. Here we report that the photoreceptor layer of retinas from black-eyed rats is reduced in size by a dietary regime containing excess riboflavin. The effect of excess riboflavin was dose-dependent and was manifested by a decrease in photoreceptor length. This decrease was due in part to a reduction in the thickness of the outer nuclear layer, a structure formed from stacked photoreceptor nuclei. These changes were accompanied by an increase in photoreceptor outer segment autofluorescence following illumination at 328 nm, a wavelength that corresponds to the excitation maxima of oxidized lipopigments of the retinal pigment epithelium.

  17. Excess Sodium Tetraphenylborate and Intermediates Decomposition Studies

    SciTech Connect

    Barnes, M.J.

    1998-12-07

The stability of excess amounts of sodium tetraphenylborate (NaTPB) in the In-Tank Precipitation (ITP) facility depends on a number of variables. Concentrations of palladium, initial benzene, and sodium ion, as well as temperature, provide the best opportunities for controlling the decomposition rate. This study examined the influence of these four variables on the reactivity of palladium-catalyzed sodium tetraphenylborate decomposition. Also, single effects tests investigated the reactivity of simulants with continuous stirring and nitrogen ventilation, with very high benzene concentrations, under washed sodium concentrations, with very high palladium concentrations, and with minimal quantities of excess NaTPB.

  18. Estimating cull in northern hardwoods

    Treesearch

    W.M. Zillgitt; S.R. Gevorkiantz

    1946-01-01

    Cull in northern hardwood stands is often very heavy and is difficult to estimate. To help clarify this situation and aid the average cruiser to become more accurate in his estimates, the study reported here should prove very helpful.

  19. The effects and underlying mechanism of excessive iodide on excessive fluoride-induced thyroid cytotoxicity.

    PubMed

    Liu, Hongliang; Zeng, Qiang; Cui, Yushan; Yu, Linyu; Zhao, Liang; Hou, Changchun; Zhang, Shun; Zhang, Lei; Fu, Gang; Liu, Yeming; Jiang, Chunyang; Chen, Xuemin; Wang, Aiguo

    2014-07-01

    In many regions, excessive fluoride and excessive iodide coexist in groundwater, which may lead to biphasic hazards to human thyroid. To explore fluoride-induced thyroid cytotoxicity and the mechanism underlying the effects of excessive iodide on fluoride-induced cytotoxicity, a thyroid cell line (Nthy-ori 3-1) was exposed to excessive fluoride and/or excessive iodide. Cell viability, lactate dehydrogenase (LDH) leakage, reactive oxygen species (ROS) formation, apoptosis, and the expression levels of inositol-requiring enzyme 1 (IRE1) pathway-related molecules were detected. Fluoride and/or iodide decreased cell viability and increased LDH leakage and apoptosis. ROS, the expression levels of glucose-regulated protein 78 (GRP78), IRE1, C/EBP homologous protein (CHOP), and spliced X-box-binding protein-1 (sXBP-1) were enhanced by fluoride or the combination of the two elements. Collectively, excessive fluoride and excessive iodide have detrimental influences on human thyroid cells. Furthermore, an antagonistic interaction between fluoride and excessive iodide exists, and cytotoxicity may be related to IRE1 pathway-induced apoptosis.

  20. Infrared excesses in early-type stars - Gamma Cassiopeiae

    NASA Technical Reports Server (NTRS)

    Scargle, J. D.; Erickson, E. F.; Witteborn, F. C.; Strecker, D. W.

    1978-01-01

Spectrophotometry of the classical Be star Gamma Cas (1-4 microns, with about 2% spectral resolution) is presented. These data, together with existing broad-band observations, are accurately described by simple isothermal LTE models for the IR excess which differ from most previously published work in three ways: (1) hydrogenic bound-free emission is included; (2) the attenuation of the star by the shell is included; and (3) no assumption is made that the shell contribution is negligible in some bandpass. It is demonstrated that the bulk of the IR excess consists of hydrogenic bound-free and free-free emission from a shell of hot ionized hydrogen gas, although a small thermal component cannot be ruled out. The bound-free emission is strong, and the Balmer, Paschen, and Brackett discontinuities are correctly represented by the shell model with physical parameters as follows: a shell temperature of approximately 18,000 K, an optical depth (at 1 micron) of about 0.5, an electron density of approximately 10(12) per cu cm, and a size of about 2 x 10(12) cm. Phantom shells (i.e., ones which do not alter the observed spectrum of the underlying star) are discussed.

  1. Excess capacity: markets, regulation, and values.

    PubMed Central

    Madden, C W

    1999-01-01

    OBJECTIVE: To examine the conceptual bases for the conflicting views of excess capacity in healthcare markets and their application in the context of today's turbulent environment. STUDY SETTING: The policy and research literature of the past three decades. STUDY DESIGN: The theoretical perspectives of alternative economic schools of thought are used to support different policy positions with regard to excess capacity. Changes in these policy positions over time are linked to changes in the economic and political environment of the period. The social values implied by this history are articulated. DATA COLLECTION: Standard library search procedures are used to identify relevant literature. PRINCIPAL FINDINGS: Alternative policy views of excess capacity in healthcare markets rely on differing theoretical foundations. Changes in the context in which policy decisions are made over time affect the dominant theoretical framework and, therefore, the dominant policy view of excess capacity. CONCLUSIONS: In the 1990s, multiple perspectives of optimal capacity still exist. However, our evolving history suggests a set of persistent values that should guide future policy in this area. PMID:10029502

  2. Low excess air operations of oil boilers

    SciTech Connect

    Butcher, T.A.; Celebi, Y.; Litzke, Wai Lin

    1997-09-01

To quantify the benefits that operation at very low excess air may have on heat exchanger fouling, BNL has recently started a test project. The test allows simultaneous measurement of fouling rate, flue gas filterable soot, flue gas sulfuric acid content, and flue gas sulfur dioxide.

  3. [Conservative and surgical treatment of convergence excess].

    PubMed

    Ehrt, O

    2016-07-01

    Convergence excess is a common finding especially in pediatric strabismus. A detailed diagnostic approach has to start after full correction of any hyperopia measured in cycloplegia. It includes measurements of manifest and latent deviation at near and distance fixation, near deviation after relaxation of accommodation with addition of +3 dpt, assessment of binocular function with and without +3 dpt as well as the accommodation range. This diagnostic approach is important for the classification into three types of convergence excess, which require different therapeutic approaches: 1) hypo-accommodative convergence excess is treated with permanent bifocal glasses, 2) norm-accommodative patients should be treated with bifocals which can be weaned over years, especially in patients with good stereopsis and 3) non-accommodative convergence excess and patients with large distance deviations need a surgical approach. The most effective operations include those which reduce the muscle torque, e. g. bimedial Faden operations or Y‑splitting of the medial rectus muscles.

  4. [Children's Television Advertising Excesses and Abuses.

    ERIC Educational Resources Information Center

    Choate, Robert B.

This testimony presents evidence of children's television advertising excesses and abuses. The testimony points out that the average TV-watching child sees more than 22,000 commercials a year, and that on the programs most popular with children, large numbers of over-the-counter drugs and hazardous products are advertised. The history of private…

  5. Excessive Positivism in Person-Centered Planning

    ERIC Educational Resources Information Center

    Holburn, Steve; Cea, Christine D.

    2007-01-01

This paper illustrates the positivistic nature of person-centered planning (PCP) that is evident in the planning methods employed, the way that individuals with disabilities are described, and in the portrayal of the outcomes of PCP. However, a confluence of factors can lead to manifestation of excessive positivism that does not serve PCP…

  6. Central Greenland Holocene Deuterium Excess Variability

    NASA Astrophysics Data System (ADS)

    Masson-Delmotte, V.; Jouzel, J.; Falourd, S.; Cattani, O.; Dahl-Jensen, D.; Johnsen, S.; Sveinbjornsdottir, A. E.; White, J. W. C.

Water stable isotopes (oxygen 18 and deuterium) have been measured along the Holocene part of two deep ice cores from central Greenland, GRIP and North GRIP. Theoretical studies have shown that the second-order isotopic parameter, the deuterium excess (d = dD - 8d18O), is an indicator of climatic changes at the oceanic moisture source, reflecting at least partly changes in sea surface temperature. The two deuterium excess records from GRIP and North GRIP show a long-term increasing trend already observed in Antarctic deep ice cores and related to changes in the Earth's obliquity during the Holocene: a decreased obliquity is associated with a larger low- to high-latitude annual mean insolation gradient, warmer tropics, colder poles, and a more intense atmospheric transport from the tropics to the poles, resulting in a higher moisture source temperature and higher deuterium excess values. Superimposed onto this long-term trend, central Greenland deuterium excess records also exhibit small abrupt events (8.2 ka BP and 4.5 ka BP) and high-frequency variability.
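The deuterium excess defined in the abstract, d = dD - 8d18O, is a simple arithmetic combination of the two measured isotope ratios. A minimal sketch, with made-up round numbers chosen only to show the arithmetic (not measurements from GRIP or North GRIP):

```python
# Deuterium excess as defined in the abstract: d = dD - 8 * d18O,
# with both delta values in per mil. The example values below are
# hypothetical, not GRIP/North GRIP data.

def deuterium_excess(delta_D, delta_18O):
    """Deuterium excess d (per mil) from dD and d18O (per mil)."""
    return delta_D - 8.0 * delta_18O

d = deuterium_excess(-250.0, -33.0)
print(d)  # -250 - 8 * (-33) = 14.0 per mil
```

Because d subtracts eight times d18O from dD, it cancels the first-order equilibrium fractionation shared by both isotopes, leaving the kinetic (moisture-source) signal the abstract discusses.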

  7. 7 CFR 955.44 - Excess funds.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 8 2012-01-01 2012-01-01 false Excess funds. 955.44 Section 955.44 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE VIDALIA ONIONS GROWN IN...

  8. 7 CFR 955.44 - Excess funds.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Excess funds. 955.44 Section 955.44 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE VIDALIA ONIONS GROWN IN...

  9. 7 CFR 956.44 - Excess funds.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 8 2011-01-01 2011-01-01 false Excess funds. 956.44 Section 956.44 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE SWEET ONIONS GROWN IN THE WALLA...

  10. 7 CFR 955.44 - Excess funds.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 8 2014-01-01 2014-01-01 false Excess funds. 955.44 Section 955.44 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE VIDALIA ONIONS GROWN IN...

  11. 7 CFR 955.44 - Excess funds.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 8 2011-01-01 2011-01-01 false Excess funds. 955.44 Section 955.44 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE VIDALIA ONIONS GROWN IN...

  12. 7 CFR 956.44 - Excess funds.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 8 2013-01-01 2013-01-01 false Excess funds. 956.44 Section 956.44 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE SWEET ONIONS GROWN IN THE WALLA...

  13. 7 CFR 955.44 - Excess funds.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 8 2013-01-01 2013-01-01 false Excess funds. 955.44 Section 955.44 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE VIDALIA ONIONS GROWN IN...

  14. 7 CFR 956.44 - Excess funds.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 8 2012-01-01 2012-01-01 false Excess funds. 956.44 Section 956.44 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE SWEET ONIONS GROWN IN THE WALLA...

  15. 7 CFR 956.44 - Excess funds.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Excess funds. 956.44 Section 956.44 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE SWEET ONIONS GROWN IN THE WALLA...

  16. 7 CFR 956.44 - Excess funds.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 8 2014-01-01 2014-01-01 false Excess funds. 956.44 Section 956.44 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE SWEET ONIONS GROWN IN THE WALLA...

  17. Search for excess showers from Crab Nebula

    NASA Technical Reports Server (NTRS)

    Kirov, I. N.; Stamenov, J. N.; Ushev, S. Z.; Janminchev, V. D.; Aseikin, V. S.; Nikolsky, S. I.; Nikolskaja, N. M.; Yakovlev, V. I.; Morozov, A. E.

    1985-01-01

The arrival directions of muon-poor showers registered in the Tien Shan experiment during an effective running time of about 1.8 x 10(4) h were analyzed. It is shown that there is a significant excess of these showers coming from the direction of the Crab Nebula.

  18. 34 CFR 300.16 - Excess costs.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 2 2014-07-01 2013-07-01 true Excess costs. 300.16 Section 300.16 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF SPECIAL EDUCATION AND REHABILITATIVE SERVICES, DEPARTMENT OF EDUCATION ASSISTANCE TO STATES FOR THE EDUCATION OF CHILDREN...

  19. 34 CFR 300.16 - Excess costs.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 34 Education 2 2013-07-01 2013-07-01 false Excess costs. 300.16 Section 300.16 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF SPECIAL EDUCATION AND REHABILITATIVE SERVICES, DEPARTMENT OF EDUCATION ASSISTANCE TO STATES FOR THE EDUCATION OF CHILDREN...

  20. Excessive Interviews: Listening to Maternal Subjectivity

    ERIC Educational Resources Information Center

    Willink, Kate

    2010-01-01

    In this article, the author revisits an interview with Ava Montalvo--a mother of two living in Albuquerque, New Mexico--which initially confounded her interpretive resources. This reflexive, performative article examines the role of excess as an analytical lens through which to understand maternal subjectivity and elaborates the methodological…