Science.gov

Sample records for accurate quantitative estimates

  1. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator and demonstrated quantitative birefringence imaging [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes the stochastic property of the SNR into account. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
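The structure of such an estimator can be sketched with a deliberately simplified noise model: plain additive Gaussian noise whose width depends on SNR, standing in for the actual JM-OCT statistics. A conditional PDF p(measured | true) is pre-computed on a grid by Monte Carlo, then a MAP estimate is read off for each observation. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, simplified measurement model: observed retardation equals the
# true value plus SNR-dependent Gaussian noise (the real JM-OCT model differs).
def measure(true_ret, snr, n):
    sigma = 1.0 / np.sqrt(snr)
    return true_ret + sigma * rng.standard_normal(n)

# Pre-compute p(measured | true) on a grid by Monte Carlo, as the abstract describes.
true_grid = np.linspace(0.0, 1.0, 101)
bins = np.linspace(-0.5, 1.5, 201)
snr = 25.0
pdf = np.empty((true_grid.size, bins.size - 1))
for i, t in enumerate(true_grid):
    samples = measure(t, snr, 50000)
    pdf[i], _ = np.histogram(samples, bins=bins, density=True)

def map_estimate(measured):
    """MAP estimate of the true retardation from one measurement (flat prior)."""
    j = np.clip(np.searchsorted(bins, measured) - 1, 0, pdf.shape[1] - 1)
    return true_grid[np.argmax(pdf[:, j])]

est = map_estimate(0.4)
```

Because the toy noise is symmetric the MAP estimate sits near the measurement itself; the point of the real estimator is that for the actual (asymmetric, SNR-stochastic) retardation statistics the MAP and mean estimates differ, which is exactly the bias being removed.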

  2. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  3. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far-field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  4. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
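The geometric core of the correction can be sketched as follows. The radar brightness beta0 is converted to sigma0 using the local incidence angle from the DEM rather than the flat-terrain angle; the conversion and the angles used here are illustrative and omit the paper's antenna-pattern and pixel-area corrections.

```python
import math

# Convert radar brightness beta0 (dB) to sigma0 (dB) using the local
# incidence angle derived from a DEM. Names and values are illustrative.
def sigma0_db(beta0_db, local_incidence_deg):
    beta0 = 10.0 ** (beta0_db / 10.0)
    sigma0 = beta0 * math.sin(math.radians(local_incidence_deg))
    return 10.0 * math.log10(sigma0)

# On a fore-slope the local incidence angle shrinks, so the same beta0 maps
# to a smaller sigma0 than the flat-terrain assumption would give.
flat = sigma0_db(-5.0, 35.0)
slope = sigma0_db(-5.0, 20.0)
```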

  5. Groundtruth approach to accurate quantitation of fluorescence microarrays

    SciTech Connect

    Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J

    2000-12-01

    To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples comprised of known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known ground-truth for these samples.

  6. 31 CFR 205.24 - How are accurate estimates maintained?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How are accurate estimates maintained... Treasury-State Agreement § 205.24 How are accurate estimates maintained? (a) If a State has knowledge that an estimate does not reasonably correspond to the State's cash needs for a Federal assistance...

  7. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay

    2005-01-01

The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.

  8. Micromagnetometer calibration for accurate orientation estimation.

    PubMed

    Zhang, Zhi-Qiang; Yang, Guang-Zhong

    2015-02-01

Micromagnetometers, together with inertial sensors, are widely used for attitude estimation for a wide variety of applications. However, appropriate sensor calibration, which is essential to the accuracy of attitude reconstruction, must be performed in advance. Thus far, many different magnetometer calibration methods have been proposed to compensate for errors such as scale, offset, and nonorthogonality. They have also been used to obviate magnetic errors due to soft and hard iron. However, in order to combine the magnetometer with an inertial sensor for attitude reconstruction, the alignment difference between the magnetometer and the axes of the inertial sensor must be determined as well. This paper proposes a practical means of sensor error correction by simultaneous consideration of sensor errors, magnetic errors, and alignment difference. We take the sum of the offset and hard iron error as the combined bias and then amalgamate the alignment difference and all the other errors as a transformation matrix. A two-step approach is presented to determine the combined bias and transformation matrix separately. In the first step, the combined bias is determined by finding an optimal ellipsoid that can best fit the sensor readings. In the second step, the intrinsic relationships of the raw sensor readings are explored to estimate the transformation matrix as a homogeneous linear least-squares problem. Singular value decomposition is then applied to estimate both the transformation matrix and the magnetic vector. The proposed method is then applied to calibrate our sensor node. Although there is no ground truth for the combined bias and transformation matrix for our node, the consistency of calibration results among different trials and less than 3° root-mean-square error for orientation estimation have been achieved, which illustrates the effectiveness of the proposed sensor calibration method for practical applications. PMID:25265625
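The first step of the two-step scheme can be sketched in simplified form: estimate the combined bias (offset plus hard iron) by fitting a quadric to raw readings. A sphere fit is shown here, which is linear in its parameters; the paper fits a general ellipsoid and then recovers the transformation matrix via least squares and SVD. The synthetic data and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sphere fit for the combined bias b: ||m - b||^2 = r^2 rearranges to the
# linear system  m.m = 2 b.m + (r^2 - ||b||^2), solvable by least squares.
def fit_bias(readings):
    A = np.hstack([2.0 * readings, np.ones((len(readings), 1))])
    y = np.sum(readings ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    return sol[:3]  # estimated bias vector

# Synthetic readings: a unit field rotated over the sphere, shifted by a
# known bias, with a little measurement noise.
true_bias = np.array([0.3, -0.1, 0.2])
dirs = rng.standard_normal((500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
readings = dirs + true_bias + 0.005 * rng.standard_normal((500, 3))
bias_hat = fit_bias(readings)
```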

  9. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    PubMed

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. First, high performance size exclusion chromatography (HPSEC) was utilized to separate natural polysaccharides; the molecular masses of their fractions were then determined by multi-angle laser light scattering (MALLS). Finally, quantification of polysaccharides or their fractions was performed based on their response to a refractive index detector (RID) and their universal refractive index increment (dn/dc). The accuracy of the developed method was determined for individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan; average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared to the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on the universal dn/dc is simpler, more rapid, and more accurate, requiring neither individual polysaccharide standards nor calibration curves. The developed method was also successfully utilized for quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus: Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggested that the HPSEC-MALLS-RID method based on the universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources. PMID:25990349
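The quantification principle can be sketched in one line: an RID measures the excess refractive index delta_n, which is proportional to concentration through the refractive index increment, delta_n = (dn/dc) * c. Because dn/dc is nearly universal (about 0.146 mL/g) for polysaccharides in aqueous solvents, one conversion serves all fractions. The constant and the signal value below are illustrative, not taken from the paper.

```python
# Concentration from the RID signal via delta_n = (dn/dc) * c.
def concentration_mg_per_mL(delta_n, dn_dc_mL_per_g=0.146):
    return delta_n / dn_dc_mL_per_g * 1000.0   # g/mL -> mg/mL

c = concentration_mg_per_mL(1.46e-4)
```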

  10. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  11. Fast and Accurate Detection of Multiple Quantitative Trait Loci

    PubMed Central

    Nettelblad, Carl; Holmgren, Sverker

    2013-01-01

We present a new computational scheme that enables efficient and reliable quantitative trait loci (QTL) scans for experimental populations. Using a standard brute-force exhaustive search effectively prohibits accurate QTL scans involving more than two loci from being performed in practice, at least if permutation testing is used to determine significance. More elaborate global optimization approaches, for example DIRECT, have previously been applied to QTL search problems, and dramatic speedups have been reported for high-dimensional scans. However, since a heuristic termination criterion must be used in these types of algorithms, the accuracy of the optimization process cannot be guaranteed. Indeed, earlier results show that a small bias in the significance thresholds is sometimes introduced. Our new optimization scheme, PruneDIRECT, is based on an analysis leading to a computable (Lipschitz) bound on the slope of a transformed objective function. The bound is derived for both infinite- and finite-size populations. Introducing a Lipschitz bound in DIRECT leads to an algorithm related to classical Lipschitz optimization. Regions in the search space can be permanently excluded (pruned) during the optimization process, so heuristic termination criteria can be avoided. Hence, PruneDIRECT has a well-defined error bound and can in practice be guaranteed to be equivalent to a corresponding exhaustive search. We present simulation results showing that for simultaneous mapping of three QTL using permutation testing, PruneDIRECT is typically more than 50 times faster than exhaustive search. The speedup is higher for stronger QTL. This could be used to quickly detect strong candidate eQTL networks. PMID:23919387
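The pruning idea can be illustrated on a toy one-dimensional problem: if a Lipschitz bound L on the objective's slope is known, any interval whose most optimistic value f(mid) + L * half_width falls below the best value found so far can be permanently excluded, so no heuristic stopping rule is needed. The objective and tolerances are illustrative, not the transformed QTL objective of the paper.

```python
# Toy Lipschitz-pruned bisection search (maximization).
def lipschitz_maximize(f, lo, hi, L, tol=1e-4):
    best_x, best_f = (lo + hi) / 2.0, f((lo + hi) / 2.0)
    intervals = [(lo, hi)]
    while intervals:
        a, b = intervals.pop()
        mid, half = (a + b) / 2.0, (b - a) / 2.0
        fm = f(mid)
        if fm > best_f:
            best_x, best_f = mid, fm
        # Prune: even the most optimistic value on [a, b] cannot beat best_f.
        if fm + L * half <= best_f or half < tol:
            continue
        intervals += [(a, mid), (mid, b)]
    return best_x, best_f

x, val = lipschitz_maximize(lambda t: -(t - 0.7) ** 2, 0.0, 1.0, L=2.0)
```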

  12. Accurate parameter estimation for unbalanced three-phase system.

    PubMed

    Chen, Yuan; So, Hing Cheung

    2014-01-01

Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, a nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation than the complex least mean square (CLMS) and augmented CLMS algorithms. PMID:25162056
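The first step above, the αβ (amplitude-invariant Clarke) transformation, can be sketched directly: it maps the three phase voltages into one orthogonal signal pair. For a balanced system the pair traces a circle of radius equal to the phase amplitude, which is what makes the subsequent NLS fit well-posed. The frequency and amplitude below are illustrative.

```python
import math

# Amplitude-invariant Clarke transform of three phase voltages.
def clarke(va, vb, vc):
    v_alpha = (2.0 * va - vb - vc) / 3.0
    v_beta = (vb - vc) / math.sqrt(3.0)
    return v_alpha, v_beta

# For a balanced 50 Hz system, alpha^2 + beta^2 stays constant at A^2.
f, A = 50.0, 1.0
radii = []
for k in range(10):
    theta = 2.0 * math.pi * f * (k / 1000.0)
    a, b = clarke(A * math.cos(theta),
                  A * math.cos(theta - 2.0 * math.pi / 3.0),
                  A * math.cos(theta + 2.0 * math.pi / 3.0))
    radii.append(a * a + b * b)
```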

  13. Accurate pose estimation using single marker single camera calibration system

    NASA Astrophysics Data System (ADS)

    Pati, Sarthak; Erat, Okan; Wang, Lejing; Weidert, Simon; Euler, Ekkehard; Navab, Nassir; Fallavollita, Pascal

    2013-03-01

Visual marker based tracking is one of the most widely used tracking techniques in Augmented Reality (AR) applications. Generally, multiple square markers are needed to perform robust and accurate tracking. Various marker based methods for calibrating relative marker poses have already been proposed. However, the calibration accuracy of these methods relies on the order of the image sequence and pre-evaluation of pose-estimation errors, making the method offline. Several studies have shown that the accuracy of pose estimation for an individual square marker depends on camera distance and viewing angle. We propose a method to accurately model the error in the estimated pose and translation of a camera using a single marker via an online method based on the Scaled Unscented Transform (SUT). Thus, the pose of each marker can be estimated with highly accurate calibration results, independent of the order of the image sequence, compared to cases when this knowledge is not used. This removes the need for multiple markers and an offline estimation system to calculate camera pose in an AR application.
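The Scaled Unscented Transform underlying the method can be sketched generically: sigma points of a Gaussian are pushed through a nonlinearity and the transformed mean and covariance are recovered from weighted sums. For a linear map the transform is exact, which the example checks; the parameter values are the common defaults, not necessarily those used in the paper.

```python
import numpy as np

# Scaled Unscented Transform of a Gaussian (mean, cov) through func.
def sut(mean, cov, func, alpha=0.1, beta=2.0, kappa=0.0):
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    pts = np.vstack([mean, mean + S.T, mean - S.T])        # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)
    ys = np.array([func(p) for p in pts])
    y_mean = wm @ ys
    y_cov = (wc[:, None] * (ys - y_mean)).T @ (ys - y_mean)
    return y_mean, y_cov

# For the linear map x -> 2x the SUT is exact: mean doubles, covariance quadruples.
m, C = np.array([1.0, -2.0]), np.diag([0.5, 0.2])
ym, yC = sut(m, C, lambda x: 2.0 * x)
```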

  14. Radar Based Quantitative Precipitation Estimation in Taiwan

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Zhang, J.; Chang, P.

    2012-12-01

Accurate high-resolution radar quantitative precipitation estimation (QPE) has shown increasing value for hydrological predictions in the last decade. Such QPEs are especially valuable in complex terrain, where the rain gauge network is sparse and hard to maintain while flash floods and mudslides are common hazards. The Taiwan Central Weather Bureau has deployed four S-band radars to support its flood warning operations in recent years, and a real-time multi-radar QPE system was developed. Evaluations of the real-time system over one year revealed some underestimation issues in the radar QPE. The current work investigates these issues and develops a series of refinements to the system. The refinements include replacing the general R-Z relationships used in the old system with local ones, mitigating non-standard beam blockage artifacts based on long-term accumulations, and applying vertical profile of reflectivity (VPR) corrections. The local R-Z relationships were derived from 2D video disdrometer observations of winter stratiform precipitation, meiyu fronts, local convective storms, and typhoons. The VPR correction was applied to reduce radar QPE errors in the severely blocked area near the Central Mountain Range (CMR). The new radar QPE system was tested on different precipitation events and showed significant improvements over the old system, especially along the CMR.
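The R-Z relationships mentioned above are power laws Z = a * R^b between radar reflectivity Z (mm^6 m^-3) and rain rate R (mm/h); QPE inverts one of them for each precipitation regime. The classic Marshall-Palmer coefficients (a=200, b=1.6) are used below purely for illustration; the paper derives local coefficients from disdrometer data.

```python
# Invert a Z = a * R^b power law to get rain rate from reflectivity in dBZ.
def rain_rate(dbz, a=200.0, b=1.6):
    z = 10.0 ** (dbz / 10.0)        # dBZ -> linear reflectivity
    return (z / a) ** (1.0 / b)     # mm/h

r40 = rain_rate(40.0)   # a 40 dBZ echo, roughly moderate-to-heavy rain
r20 = rain_rate(20.0)
```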

  15. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
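A basic link-correlation statistic can be sketched from synchronized packet traces of two links (1 = received): the conditional reception probability P(B | A received), compared with the marginal P(B), indicates how correlated the links are. LACE itself combines long- and short-term behavior; only this raw metric is shown, and the traces are illustrative.

```python
# Conditional packet reception ratio from two synchronized 0/1 traces.
def conditional_prr(trace_a, trace_b):
    both = sum(1 for a, b in zip(trace_a, trace_b) if a and b)
    recv_a = sum(trace_a)
    return both / recv_a if recv_a else 0.0

a = [1, 1, 0, 1, 0, 1, 1, 0]
b = [1, 1, 0, 1, 0, 0, 1, 0]
p_b_given_a = conditional_prr(a, b)
p_b = sum(b) / len(b)      # marginal reception ratio of link B
```

Here P(B | A) exceeds P(B), i.e. the two links tend to succeed and fail together, which is exactly the structure correlation-aware protocols exploit.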

  16. An accurate link correlation estimator for improving wireless protocol performance.

    PubMed

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  17. Fast and accurate estimation for astrophysical problems in large databases

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.

    2010-10-01

    A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems
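The connectivity-preserving parametrization described above can be sketched in the spirit of diffusion maps: pairwise Gaussian affinities are row-normalized into a Markov transition matrix, and its leading nontrivial eigenvectors supply low-dimensional coordinates. This is a generic reading of the "Markov random walk" construction, with illustrative parameters, not the thesis's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Low-dimensional coordinates from a random walk over the data set.
def diffusion_coords(X, eps=0.5, n_coords=1):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.exp(-d2 / eps)
    P /= P.sum(axis=1, keepdims=True)          # Markov transition matrix
    w, V = np.linalg.eig(P)
    order = np.argsort(-w.real)
    return V.real[:, order[1:1 + n_coords]]    # drop the trivial constant mode

# Two well-separated clusters: the first coordinate separates them.
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)), rng.normal(3.0, 0.1, (10, 2))])
coords = diffusion_coords(X)
```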

  18. Accurate and robust estimation of camera parameters using RANSAC

    NASA Astrophysics Data System (ADS)

    Zhou, Fuqiang; Cui, Yi; Wang, Yexin; Liu, Liu; Gao, He

    2013-03-01

Camera calibration plays an important role in the field of machine vision applications. The popularly used calibration approach based on a 2D planar target sometimes fails to give reliable and accurate results due to inaccurate or incorrect localization of feature points. To solve this problem, an accurate and robust estimation method for camera parameters based on the RANSAC algorithm is proposed to detect unreliable feature points and provide the corresponding solutions. Through this method, most of the outliers are removed, and the calibration errors that are the main factors influencing measurement accuracy are reduced. Both simulative and real experiments have been carried out to evaluate the performance of the proposed method, and the results show that it is robust under large noise conditions and effective in improving calibration accuracy compared with the original approach.
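The RANSAC principle applied above to calibration feature points can be shown on the classic toy problem: fit a model to random minimal samples and keep the one with the most inliers, so that gross outliers (here, mislocalized points) are rejected. The data and thresholds are illustrative.

```python
import random

random.seed(0)

# RANSAC line fit: repeatedly fit to 2 random points, keep the consensus set.
def ransac_line(points, iters=200, thresh=0.1):
    best_model, best_inliers = (0.0, 0.0), []
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        if x1 == x2:
            continue                      # degenerate (vertical) sample
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + c)) < thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers

pts = [(x / 10.0, 2.0 * (x / 10.0) + 1.0) for x in range(20)]  # on y = 2x + 1
pts += [(0.5, 9.0), (1.2, -4.0), (0.3, 7.0)]                   # gross outliers
(m, c), inliers = ransac_line(pts)
```

A final least-squares refit on the inlier set (omitted here) is the usual last step, and corresponds to re-running the calibration on the retained feature points.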

  19. Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.

    2008-01-01

Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present day O3 radiative forcing produced by models.

  20. Robust ODF smoothing for accurate estimation of fiber orientation.

    PubMed

    Beladi, Somaieh; Pathirana, Pubudu N; Brotchie, Peter

    2010-01-01

Q-ball imaging was presented as a model-free, linear and multimodal diffusion-sensitive approach to reconstruct the diffusion orientation distribution function (ODF) using diffusion-weighted MRI data. ODFs are widely used to estimate fiber orientations. A smoothness constraint was proposed to achieve a balance between angular resolution and noise stability for ODF constructs, and different regularization methods were proposed for this purpose. However, these methods are not robust and are quite sensitive to the global regularization parameter. Although numerical methods such as the L-curve test are used to define a globally appropriate regularization parameter, no single value is suitable for all regions of interest. This may result in over-smoothing and potentially in neglecting an existing fiber population. In this paper, we propose to include an interpolation step prior to the spherical harmonic decomposition. This interpolation-based approach, built on Delaunay triangulation, provides reliable, robust and accurate smoothing. The method is easy to implement and does not require other numerical methods to define the required parameters. The fiber orientations estimated using this approach are also more accurate compared to other common approaches. PMID:21096202

  1. Accurate estimators of correlation functions in Fourier space

    NASA Astrophysics Data System (ADS)

    Sefusatti, E.; Crocce, M.; Scoccimarro, R.; Couchman, H. M. P.

    2016-08-01

Efficient estimators of Fourier-space statistics for large numbers of objects rely on fast Fourier transforms (FFTs), which are affected by aliasing from unresolved small-scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy of the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates for, e.g., the power spectrum when the desired systematic biases are well under the per cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher-order interpolation kernels than the standard Cloud-In-Cell algorithm results in a significant reduction of the remaining images. We show that combining fourth-order interpolation with interlacing gives very accurate Fourier amplitudes and phases of density perturbations. This results in power spectrum and bispectrum estimates that have systematic biases below 0.01 per cent all the way to the Nyquist frequency of the grid, thus maximizing the use of unbiased Fourier coefficients for a given grid size and greatly reducing systematics for applications to large cosmological data sets.
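The Cloud-In-Cell assignment the text compares against can be sketched in one dimension: each particle's mass is split linearly between its two nearest grid points. Interlacing would repeat this on a grid shifted by half a cell and average the two Fourier transforms to cancel the odd images; only the assignment step is shown, and the particle positions are illustrative.

```python
import numpy as np

# 1-D Cloud-In-Cell mass assignment onto a periodic grid.
def cic_1d(positions, n_grid, box=1.0):
    rho = np.zeros(n_grid)
    x = np.asarray(positions) / box * n_grid
    i = np.floor(x).astype(int)
    frac = x - i
    np.add.at(rho, i % n_grid, 1.0 - frac)        # left grid point
    np.add.at(rho, (i + 1) % n_grid, frac)        # right grid point
    return rho

rho = cic_1d([0.1, 0.52, 0.9999], n_grid=16)
```

Mass is conserved exactly by construction, since the two weights for each particle sum to one; higher-order kernels spread each particle over more cells with smoother weights.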

  2. Accurate Orientation Estimation Using AHRS under Conditions of Magnetic Distortion

    PubMed Central

    Yadav, Nagesh; Bleakley, Chris

    2014-01-01

    Low cost, compact attitude heading reference systems (AHRS) are now being used to track human body movements in indoor environments by estimation of the 3D orientation of body segments. In many of these systems, heading estimation is achieved by monitoring the strength of the Earth's magnetic field. However, the Earth's magnetic field can be locally distorted due to the proximity of ferrous and/or magnetic objects. Herein, we propose a novel method for accurate 3D orientation estimation using an AHRS, comprised of an accelerometer, gyroscope and magnetometer, under conditions of magnetic field distortion. The system performs online detection and compensation for magnetic disturbances, due to, for example, the presence of ferrous objects. The magnetic distortions are detected by exploiting variations in magnetic dip angle, relative to the gravity vector, and in magnetic strength. We investigate and show the advantages of using both magnetic strength and magnetic dip angle for detecting the presence of magnetic distortions. The correction method is based on a particle filter, which performs the correction using an adaptive cost function and by adapting the variance during particle resampling, so as to place more emphasis on the results of dead reckoning of the gyroscope measurements and less on the magnetometer readings. The proposed method was tested in an indoor environment in the presence of various magnetic distortions and under various accelerations (up to 3 g). In the experiments, the proposed algorithm achieves <2° static peak-to-peak error and <5° dynamic peak-to-peak error, significantly outperforming previous methods. PMID:25347584
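The two distortion cues used above can be sketched directly: both the field magnitude and the dip angle between the magnetic vector and gravity are invariant to sensor orientation, so a deviation from their reference values signals a local distortion. The vectors and thresholds below are illustrative.

```python
import math

# Dip angle between the measured gravity and magnetic vectors, in degrees.
def dip_angle_deg(accel, mag):
    dot = sum(a * m for a, m in zip(accel, mag))
    na = math.sqrt(sum(a * a for a in accel))
    nm = math.sqrt(sum(m * m for m in mag))
    return math.degrees(math.acos(dot / (na * nm)))

# Flag a distortion when either the field norm or the dip angle deviates.
def distorted(accel, mag, ref_norm, ref_dip_deg, norm_tol=0.1, dip_tol_deg=5.0):
    nm = math.sqrt(sum(m * m for m in mag))
    return (abs(nm - ref_norm) > norm_tol * ref_norm
            or abs(dip_angle_deg(accel, mag) - ref_dip_deg) > dip_tol_deg)

g = (0.0, 0.0, 9.81)
clean = (0.2, 0.0, 0.4)          # reference field measurement
ref = (math.sqrt(0.2 ** 2 + 0.4 ** 2), dip_angle_deg(g, clean))
near_iron = (0.5, 0.1, 0.4)      # inflated magnitude, as near a ferrous object
flag_clean = distorted(g, clean, *ref)
flag_iron = distorted(g, near_iron, *ref)
```

When a distortion is flagged, the paper's particle filter then down-weights the magnetometer and leans on gyroscope dead reckoning instead.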

  3. Rapid Quantitative Pharmacodynamic Imaging with Bayesian Estimation

    PubMed Central

    Koller, Jonathan M.; Vachon, M. Jonathan; Bretthorst, G. Larry; Black, Kevin J.

    2016-01-01

    We recently described rapid quantitative pharmacodynamic imaging, a novel method for estimating sensitivity of a biological system to a drug. We tested its accuracy in simulated biological signals with varying receptor sensitivity and varying levels of random noise, and presented initial proof-of-concept data from functional MRI (fMRI) studies in primate brain. However, the initial simulation testing used a simple iterative approach to estimate pharmacokinetic-pharmacodynamic (PKPD) parameters, an approach that was computationally efficient but returned parameters only from a small, discrete set of values chosen a priori. Here we revisit the simulation testing using a Bayesian method to estimate the PKPD parameters. This improved accuracy compared to our previous method, and noise without intentional signal was never interpreted as signal. We also reanalyze the fMRI proof-of-concept data. The success with the simulated data, and with the limited fMRI data, is a necessary first step toward further testing of rapid quantitative pharmacodynamic imaging. PMID:27092045

  4. Quantitative Estimation of Tissue Blood Flow Rate.

    PubMed

    Tozer, Gillian M; Prise, Vivien E; Cunningham, Vincent J

    2016-01-01

    The rate of blood flow through a tissue (F) is a critical parameter for assessing the functional efficiency of a blood vessel network following angiogenesis. This chapter aims to provide the principles behind the estimation of F, how F relates to other commonly used measures of tissue perfusion, and a practical approach for estimating F in laboratory animals, using small, readily diffusible, metabolically inert radio-tracers. The methods described require relatively nonspecialized equipment; however, the analytical descriptions apply equally to complementary techniques involving more sophisticated noninvasive imaging. Two techniques are described for the quantitative estimation of F based on measuring the rate of tissue uptake following intravenous administration of radioactive iodo-antipyrine (or another suitable tracer). The Tissue Equilibration Technique is the classical approach, and the Indicator Fractionation Technique, which is simpler to perform, is a practical alternative in many cases. The experimental procedures and analytical methods for both techniques are given, as well as guidelines for choosing the most appropriate method. PMID:27172960
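
    Analyses of diffusible-tracer uptake of this kind are commonly built on the Kety one-compartment model, dCt/dt = F(Ca - Ct/lam), where lam is the tissue:blood partition coefficient. A minimal forward simulation, with illustrative values rather than the chapter's protocol:

```python
import numpy as np

def tissue_curve(F, lam, Ca, dt):
    """Forward-Euler solution of the Kety one-compartment model
    dCt/dt = F * (Ca(t) - Ct(t)/lam), the basis of diffusible-tracer
    blood flow estimation. F: flow per unit tissue mass, lam: tissue:blood
    partition coefficient, Ca: sampled arterial concentration curve."""
    Ct = np.zeros_like(Ca)
    for i in range(1, len(Ca)):
        Ct[i] = Ct[i-1] + dt * F * (Ca[i-1] - Ct[i-1] / lam)
    return Ct
```

    With a constant arterial input the tissue curve rises toward lam*Ca, and the initial slope is governed by F, which is what uptake-based techniques exploit.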

  5. A NOVEL TECHNIQUE FOR QUANTITATIVE ESTIMATION OF UPTAKE OF DIESEL EXHAUST PARTICLES BY LUNG CELLS

    EPA Science Inventory

    While airborne particulates like diesel exhaust particulates (DEP) exert significant toxicological effects on lungs, quantitative estimation of accumulation of DEP inside lung cells has not been reported due to a lack of an accurate and quantitative technique for this purpose. I...

  6. Fast and Accurate Learning When Making Discrete Numerical Estimates.

    PubMed

    Sanborn, Adam N; Beierholm, Ulrik R

    2016-04-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
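
    The two decision functions compared above can be written down directly for a discrete posterior; the example distribution in the test is made up for illustration:

```python
import numpy as np

def respond_max(values, posterior):
    """MAP response: return the value with the highest posterior probability."""
    return values[int(np.argmax(posterior))]

def respond_sample(values, posterior, rng):
    """Probability-matching response: draw a value from the posterior."""
    return rng.choice(values, p=posterior)
```

    Over many trials, sampling reproduces the posterior's spread while the maximum always returns its mode, which is how the two strategies are distinguished behaviorally.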

  7. Fast and Accurate Learning When Making Discrete Numerical Estimates

    PubMed Central

    Sanborn, Adam N.; Beierholm, Ulrik R.

    2016-01-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155

  8. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...

  9. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    EPA Science Inventory

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  10. Designer cantilevers for even more accurate quantitative measurements of biological systems with multifrequency AFM

    NASA Astrophysics Data System (ADS)

    Contera, S.

    2016-04-01

    Multifrequency excitation/monitoring of cantilevers has made it possible both to achieve fast, relatively simple, nanometre-resolution quantitative mapping of the mechanical properties of biological systems in solution using atomic force microscopy (AFM), and single-molecule-resolution detection by nanomechanical biosensors. A recent paper by Penedo et al [2015 Nanotechnology 26 485706] has made a significant contribution by developing simple methods to improve the signal-to-noise ratio in liquid environments, by selectively enhancing cantilever modes, which will lead to even more accurate quantitative measurements.

  11. How Accurately Do Spectral Methods Estimate Effective Elastic Thickness?

    NASA Astrophysics Data System (ADS)

    Perez-Gussinye, M.; Lowry, A. R.; Watts, A. B.; Velicogna, I.

    2002-12-01

    The effective elastic thickness, Te, is an important parameter that has the potential to provide information on the long-term thermal and mechanical properties of the lithosphere. Previous studies have estimated Te using both forward and inverse (spectral) methods. While there is generally good agreement between the results obtained using these methods, spectral methods are limited because they depend on the spectral estimator and the window size chosen for analysis. In order to address this problem, we have used a multitaper technique which yields optimal estimates of the bias and variance of the Bouguer coherence function relating topography and gravity anomaly data. The technique has been tested using realistic synthetic topography and gravity. Synthetic data were generated assuming surface and sub-surface (buried) loading of an elastic plate with fractal statistics consistent with real data sets. The cases of uniform and spatially varying Te are examined. The topography and gravity anomaly data consist of 2000x2000 km grids sampled at an 8 km interval. The bias in the Te estimate is assessed from the difference between the true Te value and the mean from analyzing 100 overlapping windows within the 2000x2000 km data grids. For the case in which Te is uniform, the bias and variance decrease with window size and increase with increasing true Te value. In the case of a spatially varying Te, however, there is a trade-off between spatial resolution and variance. With increasing window size the variance of the Te estimate decreases, but the spatial changes in Te are smeared out. We find that for a Te distribution consisting of a strong central circular region of Te=50 km (radius 600 km) and progressively smaller Te towards its edges, the 800x800 and 1000x1000 km windows gave the best compromise between spatial resolution and variance.
Our studies demonstrate that assumed stationarity of the relationship between gravity and topography data yields good results even in
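
    The spectral estimation at the heart of this comparison can be sketched with a plain segment-averaged magnitude-squared coherence on synthetic data (numpy-only; a multitaper estimator, as used in the study, would replace the plain periodogram averaging):

```python
import numpy as np

def msc(x, y, nseg=256):
    """Magnitude-squared coherence averaged over non-overlapping segments:
    a plain-periodogram stand-in for the windowed spectral estimators
    (e.g. multitaper) discussed in the abstract."""
    nwin = len(x) // nseg
    Sxx = Syy = Sxy = 0
    for k in range(nwin):
        X = np.fft.rfft(x[k*nseg:(k+1)*nseg])
        Y = np.fft.rfft(y[k*nseg:(k+1)*nseg])
        Sxx = Sxx + np.abs(X)**2
        Syy = Syy + np.abs(Y)**2
        Sxy = Sxy + X * np.conj(Y)
    return np.abs(Sxy)**2 / (Sxx * Syy)

rng = np.random.default_rng(1)
topo = rng.standard_normal(8192)                     # synthetic "topography"
grav = 0.8 * topo + 0.6 * rng.standard_normal(8192)  # correlated "gravity"
C = msc(topo, grav)
```

    For grav = 0.8*topo + 0.6*noise the true coherence is 0.64 at every frequency, so the averaged estimate should sit near that value; fewer segments (a smaller window) raises the variance and the bias, which is the trade-off the study quantifies.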

  12. A correlative imaging based methodology for accurate quantitative assessment of bone formation in additive manufactured implants.

    PubMed

    Geng, Hua; Todd, Naomi M; Devlin-Mullin, Aine; Poologasundarampillai, Gowsihan; Kim, Taek Bo; Madi, Kamel; Cartmell, Sarah; Mitchell, Christopher A; Jones, Julian R; Lee, Peter D

    2016-06-01

    A correlative imaging methodology was developed to accurately quantify bone formation in the complex lattice structure of additive manufactured implants. Micro computed tomography (μCT) and histomorphometry were combined, integrating the best features from both while demonstrating the limitations of each imaging modality. This semi-automatic methodology registered each modality using a coarse graining technique to speed the registration of 2D histology sections to high resolution 3D μCT datasets. Once registered, histomorphometric qualitative and quantitative bone descriptors were directly correlated to 3D quantitative bone descriptors, such as bone ingrowth and bone contact. The correlative imaging allowed the significant volumetric shrinkage of histology sections to be quantified for the first time (~15%). The technique also demonstrated the importance of the location of the histological section, showing that an offset of up to 30% can be introduced. The results were used to quantitatively demonstrate the effectiveness of 3D printed titanium lattice implants. PMID:27153828

  13. Accurate feature detection and estimation using nonlinear and multiresolution analysis

    NASA Astrophysics Data System (ADS)

    Rudin, Leonid; Osher, Stanley

    1994-11-01

    A program for feature detection and estimation using nonlinear and multiscale analysis was completed. State-of-the-art edge detection was combined with multiscale restoration (as suggested by the first author), and robust results were obtained in the presence of noise. Successful applications were made to numerous images of interest to DOD. In addition, a new market in the criminal justice field was developed, based in part on this work.

  14. Accurate tempo estimation based on harmonic + noise decomposition

    NASA Astrophysics Data System (ADS)

    Alonso, Miguel; Richard, Gael; David, Bertrand

    2006-12-01

    We present an innovative tempo estimation system that processes acoustic audio signals and does not use any high-level musical knowledge. Our proposal relies on a harmonic + noise decomposition of the audio signal by means of a subspace analysis method. Then, a technique to measure the degree of musical accentuation as a function of time is developed and separately applied to the harmonic and noise parts of the input signal. This is followed by a periodicity estimation block that calculates the salience of musical accents for a large number of potential periods. Next, a multipath dynamic programming stage searches among all the potential periodicities for the most consistent prospects through time, and finally the most energetic candidate is selected as the tempo. Our proposal is validated using a manually annotated test database containing 961 music signals from various musical genres. In addition, the performance of the algorithm under different configurations is compared. The robustness of the algorithm when processing signals of degraded quality is also measured.
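
    The periodicity-salience step can be illustrated with a plain autocorrelation of an onset/accentuation envelope; the impulse-train input and lag range below are illustrative stand-ins for the paper's accentuation signal:

```python
import numpy as np

def best_period(envelope, min_lag, max_lag):
    """Return the lag (in samples) with the highest autocorrelation of the
    accentuation envelope: a minimal stand-in for the salience + dynamic
    programming stages described in the abstract."""
    env = envelope - envelope.mean()
    ac = np.correlate(env, env, mode="full")[len(env)-1:]  # lags >= 0
    lags = np.arange(min_lag, max_lag + 1)
    return int(lags[np.argmax(ac[min_lag:max_lag + 1])])

# Synthetic accentuation envelope: an accent every 40 samples.
env = np.zeros(2000)
env[::40] = 1.0
```

    A real system would convert the winning period to beats per minute using the envelope's sampling rate and track it through time, as the multipath search does.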

  15. Thermal diffusivity estimation with quantitative pulsed phase thermography

    NASA Astrophysics Data System (ADS)

    Ospina-Borras, J. E.; Florez-Ospina, Juan F.; Benitez-Restrepo, H. D.; Maldague, X.

    2015-05-01

    Quantitative pulsed phase thermography (PPT) has so far been used only to estimate defect parameters such as depth and thermal resistance. Here, we propose a thermal-quadrupole-based method that extends quantitative PPT to estimate thermal diffusivity by solving an inversion problem based on non-linear least-squares estimation. The approach is tested with pulsed thermography data acquired from a composite sample, and the results are compared with those of an established time-domain technique. The proposed quantitative analysis with PPT provides estimates of thermal diffusivity close to those obtained with the time-domain approach, and requires a priori knowledge only of the sample thickness.
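
    The inversion named here, non-linear least squares, can be illustrated with a one-parameter Gauss-Newton fit. The exponential stand-in model below is an assumption for illustration only; the paper's actual forward model is the thermal-quadrupole phase expression:

```python
import numpy as np

def fit_alpha(t, y, alpha0=1.0, iters=20):
    """Gauss-Newton fit of alpha in the stand-in model y = exp(-t/alpha),
    illustrating inversion by non-linear least squares. The thermal
    quadrupole phase model from the abstract would replace `model`."""
    a = alpha0
    for _ in range(iters):
        model = np.exp(-t / a)
        r = y - model                  # residuals
        J = model * t / a**2           # d(model)/d(alpha)
        a += np.sum(J * r) / np.sum(J * J)
    return a
```

    On noise-free data the iteration converges to the generating parameter; with measured phase data the same loop minimizes the sum of squared residuals of the quadrupole model.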

  16. Quantitative estimation in Health Impact Assessment: Opportunities and challenges

    SciTech Connect

    Bhatia, Rajiv; Seto, Edmund

    2011-04-15

    Health Impact Assessment (HIA) considers multiple effects on health of policies, programs, plans and projects and thus requires the use of diverse analytic tools and sources of evidence. Quantitative estimation has desirable properties for the purpose of HIA, but adequate tools for quantification currently exist for only a limited number of health impacts and decision settings; furthermore, quantitative estimation generates thorny questions about the precision of estimates and the validity of methodological assumptions. In the United States, HIA has only recently emerged as an independent practice apart from integrated EIA, and this article aims to synthesize the experience with quantitative health effects estimation within that practice. We use examples, identified through a scan of the U.S. practice experience, to illustrate quantitative estimation methods applied in different policy settings along with their strengths and limitations. We then discuss opportunity areas and practical considerations for the use of quantitative estimation in HIA.

  17. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    USGS Publications Warehouse

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.
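
    The evaluation criteria used above, the slope and coefficient of determination of bioaccessibility regressed on bioavailability, amount to a simple linear regression. A sketch with made-up paired values, not the study's data:

```python
import numpy as np

# Hypothetical paired measurements for five soils (illustrative only):
bioavailability = np.array([33.0, 41.0, 48.0, 55.0, 63.0])   # % in vivo
bioaccessibility = np.array([30.0, 44.0, 50.0, 52.0, 66.0])  # % in vitro

# Regress in vitro bioaccessibility on in vivo bioavailability.
slope, intercept = np.polyfit(bioavailability, bioaccessibility, 1)
r = np.corrcoef(bioavailability, bioaccessibility)[0, 1]
r_squared = r**2
```

    A test with a slope near 1 and a high coefficient of determination predicts bioavailability well, which is the criterion under which the RBALP pH 2.5 and OSU IVG tests performed best.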

  18. Bioaccessibility tests accurately estimate bioavailability of lead to quail.

    PubMed

    Beyer, W Nelson; Basta, Nicholas T; Chaney, Rufus L; Henry, Paula F P; Mosby, David E; Rattner, Barnett A; Scheckel, Kirk G; Sprague, Daniel T; Weber, John S

    2016-09-01

    Hazards of soil-borne lead (Pb) to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, the authors measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from 5 Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of approximately 50%. Treatment of 2 of the soils with phosphorus (P) significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in 6 in vitro tests and regressed on bioavailability: the relative bioavailability leaching procedure at pH 1.5, the same test conducted at pH 2.5, the Ohio State University in vitro gastrointestinal method, the urban soil bioaccessible lead test, the modified physiologically based extraction test, and the waterfowl physiologically based extraction test. All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the relative bioavailability leaching procedure at pH 2.5 and Ohio State University in vitro gastrointestinal tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite, and tertiary Pb phosphate) and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb, and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb. Environ Toxicol Chem 2016;35:2311-2319. Published 2016 Wiley Periodicals Inc. on behalf of

  19. The Mapping Model: A Cognitive Theory of Quantitative Estimation

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2008-01-01

    How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…

  20. Accurate and molecular-size-tolerant NMR quantitation of diverse components in solution

    PubMed Central

    Okamura, Hideyasu; Nishimura, Hiroshi; Nagata, Takashi; Kigawa, Takanori; Watanabe, Takashi; Katahira, Masato

    2016-01-01

    Determining the amount of each component of interest in a mixture is a fundamental first step in characterizing the nature of the solution and to develop possible means of utilization of its components. Similarly, determining the composition of units in complex polymers, or polymer mixtures, is crucial. Although NMR is recognized as one of the most powerful methods to achieve this and is widely used in many fields, variation in the molecular sizes or the relative mobilities of components skews quantitation due to the size-dependent decay of magnetization. Here, a method to accurately determine the amount of each component by NMR was developed. This method was validated using a solution that contains biomass-related components in which the molecular sizes greatly differ. The method is also tolerant of other factors that skew quantitation such as variation in the one-bond C–H coupling constant. The developed method is the first and only way to reliably overcome the skewed quantitation caused by several different factors to provide basic information on the correct amount of each component in a solution. PMID:26883279

  1. Quantitation and accurate mass analysis of pesticides in vegetables by LC/TOF-MS.

    PubMed

    Ferrer, Imma; Thurman, E Michael; Fernández-Alba, Amadeo R

    2005-05-01

    A quantitative method consisting of solvent extraction followed by liquid chromatography/time-of-flight mass spectrometry (LC/TOF-MS) analysis was developed for the identification and quantitation of three chloronicotinyl pesticides (imidacloprid, acetamiprid, thiacloprid) commonly used on salad vegetables. Accurate mass measurements within 3 ppm error were obtained for all the pesticides studied in various vegetable matrixes (cucumber, tomato, lettuce, pepper), which allowed an unequivocal identification of the target pesticides. Calibration curves covering 2 orders of magnitude were linear over the concentration range studied, thus showing the quantitative ability of TOF-MS as a monitoring tool for pesticides in vegetables. Matrix effects were also evaluated using matrix-matched standards showing no significant interferences between matrixes and clean extracts. Intraday reproducibility was 2-3% relative standard deviation (RSD) and interday values were 5% RSD. The precision (standard deviation) of the mass measurements was evaluated and it was less than 0.23 mDa between days. Detection limits of the chloronicotinyl insecticides in salad vegetables ranged from 0.002 to 0.01 mg/kg. These concentrations are equal to or better than the EU directives for controlled pesticides in vegetables showing that LC/TOF-MS analysis is a powerful tool for identification of pesticides in vegetables. Robustness and applicability of the method was validated for the analysis of market vegetable samples. Concentrations found in these samples were in the range of 0.02-0.17 mg/kg of vegetable. PMID:15859598
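
    The "3 ppm" criterion above is the standard accurate-mass figure of merit, computed as the relative deviation of the measured m/z from the theoretical value. A quick helper; the example m/z values are illustrative:

```python
def mass_error_ppm(measured_mz, theoretical_mz):
    """Mass accuracy in parts per million, the figure of merit used to
    confirm compound identity in accurate-mass LC/TOF-MS work."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# Illustrative protonated-molecule example near m/z 256:
err = mass_error_ppm(256.0601, 256.0596)
```

    An observed deviation of 0.0005 Da at m/z 256 corresponds to about 2 ppm, within the 3 ppm window reported in the study.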

  2. Method for accurate quantitation of background tissue optical properties in the presence of emission from a strong fluorescence marker

    NASA Astrophysics Data System (ADS)

    Bravo, Jaime; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.

    2015-03-01

    Quantification of targeted fluorescence markers during neurosurgery has the potential to improve and standardize surgical distinction between normal and cancerous tissues. However, quantitative analysis of marker fluorescence is complicated by tissue background absorption and scattering properties. Correction algorithms that transform raw fluorescence intensity into quantitative units, independent of absorption and scattering, require a paired measurement of localized white light reflectance to provide estimates of the optical properties. This study focuses on the unique problem of developing a spectral analysis algorithm to extract tissue absorption and scattering properties from white light spectra that contain contributions from both elastically scattered photons and fluorescence emission from a strong fluorophore (i.e. fluorescein). A fiber-optic reflectance device was used to perform measurements in a small set of optical phantoms, constructed with Intralipid (1% lipid), whole blood (1% volume fraction) and fluorescein (0.16-10 μg/mL). Results show that the novel spectral analysis algorithm yields accurate estimates of tissue parameters independent of fluorescein concentration, with relative errors of blood volume fraction, blood oxygenation fraction (BOF), and the reduced scattering coefficient (at 521 nm) of <7%, <1%, and <22%, respectively. These data represent a first step towards quantification of fluorescein in tissue in vivo.

  3. Radiologists’ ability to accurately estimate and compare their own interpretative mammography performance to their peers

    PubMed Central

    Cook, Andrea J.; Elmore, Joann G.; Zhu, Weiwei; Jackson, Sara L.; Carney, Patricia A.; Flowers, Chris; Onega, Tracy; Geller, Berta; Rosenberg, Robert D.; Miglioretti, Diana L.

    2013-01-01

    Objective To determine if U.S. radiologists accurately estimate their own interpretive performance of screening mammography and how they compare their performance to their peers’. Materials and Methods 174 radiologists from six Breast Cancer Surveillance Consortium (BCSC) registries completed a mailed survey between 2005 and 2006. Radiologists’ estimated and actual recall, false positive, and cancer detection rates and positive predictive value of biopsy recommendation (PPV2) for screening mammography were compared. Radiologists’ ratings of their performance as lower, similar, or higher than their peers were compared to their actual performance. Associations with radiologist characteristics were estimated using weighted generalized linear models. The study was approved by the institutional review boards of the participating sites, informed consent was obtained from radiologists, and procedures were HIPAA compliant. Results While most radiologists accurately estimated their cancer detection and recall rates (74% and 78% of radiologists), fewer accurately estimated their false positive rate and PPV2 (19% and 26%). Radiologists reported having similar (43%) or lower (31%) recall rates and similar (52%) or lower (33%) false positive rates compared to their peers, and similar (72%) or higher (23%) cancer detection rates and similar (72%) or higher (38%) PPV2. Estimation accuracy did not differ by radiologists’ characteristics except radiologists who interpret ≤1,000 mammograms annually were less accurate at estimating their recall rates. Conclusion Radiologists perceive their performance to be better than it actually is and at least as good as their peers. Radiologists have particular difficulty estimating their false positive rates and PPV2. PMID:22915414

  4. Estimation of Variance Components of Quantitative Traits in Inbred Populations

    PubMed Central

    Abney, Mark; McPeek, Mary Sara; Ober, Carole

    2000-01-01

    Summary Use of variance-component estimation for mapping of quantitative-trait loci in humans is a subject of great current interest. When only trait values, not genotypic information, are considered, variance-component estimation can also be used to estimate heritability of a quantitative trait. Inbred pedigrees present special challenges for variance-component estimation. First, there are more variance components to be estimated in the inbred case, even for a relatively simple model including additive, dominance, and environmental effects. Second, more identity coefficients need to be calculated from an inbred pedigree in order to perform the estimation, and these are computationally more difficult to obtain in the inbred than in the outbred case. As a result, inbreeding effects have generally been ignored in practice. We describe here the calculation of identity coefficients and estimation of variance components of quantitative traits in large inbred pedigrees, using the example of HDL in the Hutterites. We use a multivariate normal model for the genetic effects, extending the central-limit theorem of Lange to allow for both inbreeding and dominance under the assumptions of our variance-component model. We use simulated examples to give an indication of under what conditions one has the power to detect the additional variance components and to examine their impact on variance-component estimation. We discuss the implications for mapping and heritability estimation by use of variance components in inbred populations. PMID:10677322

  5. Bright-field quantitative phase microscopy (BFQPM) for accurate phase imaging using conventional microscopy hardware

    NASA Astrophysics Data System (ADS)

    Jenkins, Micah; Gaylord, Thomas K.

    2015-03-01

    Most quantitative phase microscopy methods require the use of custom-built or modified microscopic configurations which are not typically available to most bio/pathologists. There are, however, phase retrieval algorithms which utilize defocused bright-field images as input data and are therefore implementable in existing laboratory environments. Among these, deterministic methods such as those based on inverting the transport-of-intensity equation (TIE) or a phase contrast transfer function (PCTF) are particularly attractive due to their compatibility with Köhler illuminated systems and numerical simplicity. Recently, a new method has been proposed, called multi-filter phase imaging with partially coherent light (MFPI-PC), which alleviates the inherent noise/resolution trade-off in solving the TIE by utilizing a large number of defocused bright-field images spaced equally about the focal plane. Despite greatly improving the state-of-the-art, the method has many shortcomings including the impracticality of high-speed acquisition, inefficient sampling, and attenuated response at high frequencies due to aperture effects. In this report, we present a new method, called bright-field quantitative phase microscopy (BFQPM), which efficiently utilizes a small number of defocused bright-field images and recovers frequencies out to the partially coherent diffraction limit. The method is based on a noise-minimized inversion of a PCTF derived for each finite defocus distance. We present simulation results which indicate nanoscale optical path length sensitivity and improved performance over MFPI-PC. We also provide experimental results imaging live bovine mesenchymal stem cells at sub-second temporal resolution. In all, BFQPM enables fast and accurate phase imaging with unprecedented spatial resolution using widely available bright-field microscopy hardware.

  6. Accurate Non-parametric Estimation of Recent Effective Population Size from Segments of Identity by Descent

    PubMed Central

    Browning, Sharon R.; Browning, Brian L.

    2015-01-01

    Existing methods for estimating historical effective population size from genetic data have been unable to accurately estimate effective population size during the most recent past. We present a non-parametric method for accurately estimating recent effective population size by using inferred long segments of identity by descent (IBD). We found that inferred segments of IBD contain information about effective population size from around 4 generations to around 50 generations ago for SNP array data and to over 200 generations ago for sequence data. In human populations that we examined, the estimates of effective size were approximately one-third of the census size. We estimate the effective population size of European-ancestry individuals in the UK four generations ago to be eight million and the effective population size of Finland four generations ago to be 0.7 million. Our method is implemented in the open-source IBDNe software package. PMID:26299365

  7. Estimation of Methanogen Biomass by Quantitation of Coenzyme M

    PubMed Central

    Elias, Dwayne A.; Krumholz, Lee R.; Tanner, Ralph S.; Suflita, Joseph M.

    1999-01-01

    Determination of the role of methanogenic bacteria in an anaerobic ecosystem often requires quantitation of the organisms. Because of the extreme oxygen sensitivity of these organisms and the inherent limitations of cultural techniques, an accurate biomass value is very difficult to obtain. We standardized a simple method for estimating methanogen biomass in a variety of environmental matrices. In this procedure we used the thiol biomarker coenzyme M (CoM) (2-mercaptoethanesulfonic acid), which is known to be present in all methanogenic bacteria. A high-performance liquid chromatography-based method for detecting thiols in pore water (A. Vairavamurthy and M. Mopper, Anal. Chim. Acta 78:363–370, 1990) was modified in order to quantify CoM in pure cultures, sediments, and sewage water samples. The identity of the CoM derivative was verified by using liquid chromatography-mass spectroscopy. The assay was linear for CoM amounts ranging from 2 to 2,000 pmol, and the detection limit was 2 pmol of CoM/ml of sample. CoM was not adsorbed to sediments. The methanogens tested contained an average of 19.5 nmol of CoM/mg of protein and 0.39 ± 0.07 fmol of CoM/cell. Environmental samples contained an average of 0.41 ± 0.17 fmol/cell based on most-probable-number estimates. CoM was extracted by using 1% tri-(N)-butylphosphine in isopropanol. More than 90% of the CoM was recovered from pure cultures and environmental samples. We observed no interference from sediments in the CoM recovery process, and the method could be completed aerobically within 3 h. Freezing sediment samples resulted in 46 to 83% decreases in the amounts of detectable CoM, whereas freezing had no effect on the amounts of CoM determined in pure cultures. The method described here provides a quick and relatively simple way to estimate methanogenic biomass. PMID:10584015
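
    The biomass conversion implied by the abstract's numbers is simple: divide the measured coenzyme M by the mean per-cell content (0.39 fmol/cell for the methanogens tested):

```python
def cells_from_com(com_fmol, com_per_cell_fmol=0.39):
    """Estimate methanogen cell numbers from a coenzyme M measurement,
    using the mean cellular CoM content reported in the abstract
    (0.39 fmol CoM per cell)."""
    return com_fmol / com_per_cell_fmol

# e.g. 39 pmol (39e3 fmol) of CoM in a sediment extract -> ~1e5 cells
```

    The ±0.07 fmol/cell spread reported for pure cultures propagates directly into the cell estimate, so the result is an order-of-magnitude biomass figure rather than a precise count.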

  8. Quantitative Estimation of Trace Chemicals in Industrial Effluents with the Sticklet Transform Method

    SciTech Connect

    Mehta, N C; Scharlemann, E T; Stevens, C G

    2001-04-02

    Application of a novel transform operator, the sticklet transform, to the quantitative estimation of trace chemicals in industrial effluent plumes is reported. The sticklet transform is a superset of the well-known derivative operator and the Haar wavelet, and is characterized by independently adjustable lobe width and separation. Computer simulations demonstrate that we can make accurate and robust concentration estimates of multiple chemical species in industrial effluent plumes in the presence of strong clutter background, interferent chemicals and random noise. In this paper we address the application of the sticklet transform to estimating chemical concentrations in effluent plumes in the presence of atmospheric transmission effects. We show that this transform retains the ability to yield accurate estimates using on-plume/off-plume measurements that represent atmospheric differentials up to 10% of the full atmospheric attenuation.
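    The abstract does not give the sticklet's exact functional form, but its description (a generalization of the derivative operator and the Haar wavelet with independently adjustable lobe width and separation) suggests a two-lobed kernel. The sketch below assumes that form; the lobe shape and normalization are this sketch's assumptions, not the authors' definition.

```python
import numpy as np

def sticklet_kernel(w: int, s: int) -> np.ndarray:
    """Assumed sticklet form: a +1 lobe of width w, a gap of s zeros, and
    a -1 lobe of width w. With s = 0 this is Haar-like; with w = 1, s = 0
    it reduces to a two-point finite-difference derivative."""
    return np.concatenate([np.ones(w), np.zeros(s), -np.ones(w)])

def sticklet_transform(signal: np.ndarray, w: int, s: int) -> np.ndarray:
    """Apply the (assumed) sticklet kernel by convolution."""
    return np.convolve(signal, sticklet_kernel(w, s), mode="same")

# Because the kernel sums to zero, a flat clutter baseline gives zero
# response away from the edges -- the background-rejection property that
# makes derivative-like operators attractive for plume spectroscopy.
baseline = np.full(64, 3.0)
resp = sticklet_transform(baseline, w=4, s=3)
```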

  9. Simple, fast and accurate eight points amplitude estimation method of sinusoidal signals for DSP based instrumentation

    NASA Astrophysics Data System (ADS)

    Vizireanu, D. N.; Halunga, S. V.

    2012-04-01

    A simple, fast and accurate amplitude estimation algorithm for sinusoidal signals in DSP-based instrumentation is proposed. It is shown that eight samples, used in two steps, are sufficient. A practical analytical formula for amplitude estimation is obtained. Numerical results are presented. Simulations have been performed when the sampled signal is affected by white Gaussian noise and when the samples are quantized on a given number of bits.
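    The paper's closed-form eight-point formula is not reproduced in the abstract. As a stand-in, the sketch below estimates the amplitude of a sinusoid of known frequency from eight samples by least squares, which illustrates the same task (the authors' two-step analytical method is different and presumably cheaper).

```python
import numpy as np

def amplitude_from_8_samples(samples, freq, fs):
    """Least-squares amplitude estimate of a sinusoid of known frequency
    from 8 samples -- a generic stand-in for the paper's closed-form
    eight-point method. Fits x[n] = a*sin(w t) + b*cos(w t); the
    amplitude is sqrt(a^2 + b^2)."""
    t = np.arange(len(samples)) / fs
    M = np.column_stack([np.sin(2 * np.pi * freq * t),
                         np.cos(2 * np.pi * freq * t)])
    (a, b), *_ = np.linalg.lstsq(M, np.asarray(samples, float), rcond=None)
    return float(np.hypot(a, b))

fs, f, A, phi = 8.0, 1.0, 1.5, 0.7
t = np.arange(8) / fs
x = A * np.sin(2 * np.pi * f * t + phi)
print(amplitude_from_8_samples(x, f, fs))  # ~ 1.5 (exact for noise-free data)
```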

  10. Multiobjective optimization in quantitative structure-activity relationships: deriving accurate and interpretable QSARs.

    PubMed

    Nicolotti, Orazio; Gillet, Valerie J; Fleming, Peter J; Green, Darren V S

    2002-11-01

    Deriving quantitative structure-activity relationship (QSAR) models that are accurate, reliable, and easily interpretable is a difficult task. In this study, two new methods have been developed that aim to find useful QSAR models that represent an appropriate balance between model accuracy and complexity. Both methods are based on genetic programming (GP). The first method, referred to as genetic QSAR (or GPQSAR), uses a penalty function to control model complexity. GPQSAR is designed to derive a single linear model that represents an appropriate balance between the variance and the number of descriptors selected for the model. The second method, referred to as multiobjective genetic QSAR (MoQSAR), is based on multiobjective GP and represents a new way of thinking of QSAR. Specifically, QSAR is considered as a multiobjective optimization problem that comprises a number of competitive objectives. Typical objectives include model fitting, the total number of terms, and the occurrence of nonlinear terms. MoQSAR results in a family of equivalent QSAR models where each QSAR represents a different tradeoff in the objectives. A practical consideration often overlooked in QSAR studies is the need for the model to promote an understanding of the biochemical response under investigation. To accomplish this, chemically intuitive descriptors are needed but do not always give rise to statistically robust models. This problem is addressed by the addition of a further objective, called chemical desirability, that aims to reward models that consist of descriptors that are easily interpretable by chemists. GPQSAR and MoQSAR have been tested on various data sets including the Selwood data set and two different solubility data sets. The study demonstrates that the MoQSAR method is able to find models that are at least as good as models derived using standard statistical approaches and also yields models that allow a medicinal chemist to trade statistical robustness for chemical

  11. Quantitative calcium resistivity based method for accurate and scalable water vapor transmission rate measurement.

    PubMed

    Reese, Matthew O; Dameron, Arrelaine A; Kempe, Michael D

    2011-08-01

    The development of flexible organic light emitting diode displays and flexible thin film photovoltaic devices is dependent on the use of flexible, low-cost, optically transparent and durable barriers to moisture and/or oxygen. It is estimated that this will require high moisture barriers with water vapor transmission rates (WVTR) between 10⁻⁴ and 10⁻⁶ g/m²/day. Thus there is a need to develop a relatively fast, low-cost, and quantitative method to evaluate such low permeation rates. Here, we demonstrate a method where the resistance changes of patterned Ca films, upon reaction with moisture, enable one to calculate a WVTR between 10 and 10⁻⁶ g/m²/day or better. Samples are configured with variable aperture size such that the sensitivity and/or measurement time of the experiment can be controlled. The samples are connected to a data acquisition system by means of individual signal cables permitting samples to be tested under a variety of conditions in multiple environmental chambers. An edge card connector is used to connect samples to the measurement wires enabling easy switching of samples in and out of test. This measurement method can be conducted with as little as 1 h of labor time per sample. Furthermore, multiple samples can be measured in parallel, making this an inexpensive and high volume method for measuring high moisture barriers. PMID:21895269
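    The calcium-test principle behind this measurement can be sketched from standard physics: the strip's resistance gives its remaining thickness through the Ca resistivity, and the stoichiometry Ca + 2 H₂O → Ca(OH)₂ + H₂ converts calcium loss to water uptake. This is a generic sketch of that principle, not the authors' calibrated procedure; the resistivity and geometry values in the example are illustrative.

```python
def wvtr_from_ca(R0, R1, dt_days, length, width,
                 rho_el=3.4e-8, rho_ca=1550.0, area_ratio=1.0):
    """Simplified calcium-corrosion WVTR estimate (g/m^2/day).

    R0, R1        -- strip resistance (ohm) at start/end of the interval
    dt_days       -- elapsed time (days)
    length, width -- strip dimensions (m); thickness h = rho_el*L/(W*R)
    rho_el        -- Ca electrical resistivity (ohm m), nominal value
    rho_ca        -- Ca density (kg/m^3)
    area_ratio    -- Ca strip area / permeation aperture area
    Assumes Ca + 2 H2O -> Ca(OH)2 + H2, i.e. 2 mol water per mol Ca.
    """
    h0 = rho_el * length / (width * R0)      # initial thickness (m)
    h1 = rho_el * length / (width * R1)      # final thickness (m)
    ca_loss = rho_ca * (h0 - h1) * 1000.0    # g of Ca consumed per m^2
    water = ca_loss * 2 * 18.02 / 40.08      # g of H2O per m^2
    return water * area_ratio / dt_days
```

    For example, a square strip whose resistance doubles from 1 to 2 ohm over ten days corresponds to a WVTR of roughly 2.4 × 10⁻³ g/m²/day under these assumptions.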

  12. On the accurate estimation of gap fraction during daytime with digital cover photography

    NASA Astrophysics Data System (ADS)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method for obtaining gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered the computation of accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during the daytime under various sky conditions with DCP. The novel method computes gap fraction using a single unsaturated DCP raw image, which is corrected for scattering effects by canopies, and a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived by the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REVs from 0 to -5. The novel method showed little variation in gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. A perforated-panel experiment, used to test the accuracy of the estimated gap fraction, confirmed that the novel method yielded accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during the daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
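    The quantity being estimated is simply the fraction of sky pixels in a canopy photograph. The sketch below shows that baseline threshold computation only; the paper's contribution, the raw-image scattering correction that removes the subjective threshold choice, is not reproduced here.

```python
import numpy as np

def gap_fraction(image: np.ndarray, threshold: float) -> float:
    """Basic cover-photography gap fraction: the share of pixels brighter
    than `threshold` (sky) in a canopy image. Illustrative only -- the
    paper's method avoids this manual threshold via raw-image scattering
    correction."""
    return float(np.mean(image > threshold))

# Synthetic 'canopy': 30% bright sky pixels, 70% dark foliage
img = np.zeros((100, 100))
img[:30, :] = 255.0
print(gap_fraction(img, threshold=128))  # 0.3
```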

  13. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    PubMed

    Wicke, Jason; Dumas, Geneviève A

    2014-06-01

    Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. PMID:24735506
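    The core geometric idea, a segment volume built from stacked elliptical slices whose axes are the widths seen in the two photographs, can be sketched as follows. Sectioned ellipses at segment junctions and the sex-specific density functions are omitted in this simplified sketch.

```python
import numpy as np

def segment_volume(frontal_widths, sagittal_widths, slice_height):
    """Volume of a limb segment modeled as a stack of elliptical slices.
    Each slice is an ellipse whose axes are the widths measured in the
    frontal and sagittal photographs: V = sum(pi/4 * w_f * w_s * dz).
    (Junction slices modeled as sectioned ellipses are not handled.)"""
    wf = np.asarray(frontal_widths, float)
    ws = np.asarray(sagittal_widths, float)
    return float(np.sum(np.pi / 4.0 * wf * ws * slice_height))

# Sanity check: a circular cylinder, 10 cm diameter, 40 slices of 1 cm,
# should reproduce pi * r^2 * h
v = segment_volume([0.10] * 40, [0.10] * 40, 0.01)
```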

  14. Accurate Estimation of the Entropy of Rotation-Translation Probability Distributions.

    PubMed

    Fogolari, Federico; Dongmo Foumthuim, Cedrix Jurgal; Fortuna, Sara; Soler, Miguel Angel; Corazza, Alessandra; Esposito, Gennaro

    2016-01-12

    The estimation of rotational and translational entropies in the context of ligand binding has been the subject of long-time investigations. The high dimensionality (six) of the problem and the limited amount of sampling often prevent the required resolution to provide accurate estimates by the histogram method. Recently, the nearest-neighbor distance method has been applied to the problem, but the solutions provided either address rotation and translation separately, therefore lacking correlations, or use a heuristic approach. Here we address rotational-translational entropy estimation in the context of nearest-neighbor-based entropy estimation, solve the problem numerically, and provide an exact and an approximate method to estimate the full rotational-translational entropy. PMID:26605696
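    The nearest-neighbor machinery the paper builds on can be illustrated with the plain translational Kozachenko-Leonenko estimator; the paper's actual contribution, the joint rotational-translational treatment on the six-dimensional rigid-body space, is substantially more involved and is not attempted here.

```python
import numpy as np
from math import gamma, log, pi

EULER_GAMMA = 0.5772156649015329

def kl_entropy(x: np.ndarray) -> float:
    """Kozachenko-Leonenko nearest-neighbor entropy estimate (nats) for
    samples x of shape (n, d) -- the translational building block of the
    NN-based estimators discussed above (no rotational part).

    H_hat = d*mean(ln r_i) + ln V_d + ln(n - 1) + gamma_Euler,
    where r_i is each point's nearest-neighbor distance and V_d is the
    volume of the d-dimensional unit ball."""
    n, d = x.shape
    # brute-force pairwise distances; fine for small n
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)
    r = np.sqrt(d2.min(axis=1))
    v_d = pi ** (d / 2) / gamma(d / 2 + 1)
    return float(d * np.mean(np.log(r)) + log(v_d) + log(n - 1) + EULER_GAMMA)

rng = np.random.default_rng(0)
h = kl_entropy(rng.uniform(size=(1000, 1)))  # true entropy of U(0,1) is 0
```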

  15. Polynomial Fitting of DT-MRI Fiber Tracts Allows Accurate Estimation of Muscle Architectural Parameters

    PubMed Central

    Damon, Bruce M.; Heemskerk, Anneriet M.; Ding, Zhaohua

    2012-01-01

    Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor MRI fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image datasets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8, and 15.3 m−1), signal-to-noise ratio (50, 75, 100, and 150), and voxel geometry (13.8 and 27.0 mm3 voxel volume with isotropic resolution; 13.5 mm3 volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to 2nd order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m−1), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation. PMID:22503094
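    The fitting-and-differentiation step can be sketched in two dimensions: fit each coordinate of a tract to a 2nd-order polynomial in a path parameter, then evaluate the standard curvature formula from the fitted derivatives. The study applies the same idea to 3-D DT-MRI tracts; this 2-D version is only illustrative.

```python
import numpy as np

def curvature_from_quadratic_fit(points: np.ndarray) -> float:
    """Fit each coordinate of a 2-D tract (shape (n, 2)) to a quadratic in
    a path parameter t and evaluate the curvature at the mid-point:
    kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)."""
    t = np.linspace(-1.0, 1.0, len(points))
    px = np.polyfit(t, points[:, 0], 2)
    py = np.polyfit(t, points[:, 1], 2)
    xp, xpp = np.polyval(np.polyder(px), 0.0), 2.0 * px[0]
    yp, ypp = np.polyval(np.polyder(py), 0.0), 2.0 * py[0]
    return float(abs(xp * ypp - yp * xpp) / (xp**2 + yp**2) ** 1.5)

# Noise-free arc of a circle of radius 10 m: true kappa = 0.1 m^-1
R, ang = 10.0, np.linspace(-0.2, 0.2, 21)
arc = np.column_stack([R * np.cos(ang), R * np.sin(ang)])
k = curvature_from_quadratic_fit(arc)
```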

  16. Uncertainty estimations for quantitative in vivo MRI T1 mapping

    NASA Astrophysics Data System (ADS)

    Polders, Daniel L.; Leemans, Alexander; Luijten, Peter R.; Hoogduin, Hans

    2012-11-01

    Mapping the longitudinal relaxation time (T1) of brain tissue is of great interest for both clinical research and MRI sequence development. For an unambiguous interpretation of in vivo variations in T1 images, it is important to understand the degree of variability that is associated with the quantitative T1 parameter. This paper presents a general framework for estimating the uncertainty in quantitative T1 mapping by combining a slice-shifted multi-slice inversion recovery EPI technique with the statistical wild-bootstrap approach. Both simulations and experimental analyses were performed to validate this novel approach and to evaluate the estimated T1 uncertainty in several brain regions across four healthy volunteers. By estimating the T1 uncertainty, it is shown that the variation in T1 within anatomic regions for similar tissue types is larger than the uncertainty in the measurement. This indicates that heterogeneity of the inspected tissue and/or partial volume effects can be the main determinants for the observed variability in the estimated T1 values. The proposed approach to estimate T1 and its uncertainty without the need for repeated measurements may also prove to be useful for calculating effect sizes that are deemed significant when comparing group differences.

  17. Uncertainty estimations for quantitative in vivo MRI T1 mapping.

    PubMed

    Polders, Daniel L; Leemans, Alexander; Luijten, Peter R; Hoogduin, Hans

    2012-11-01

    Mapping the longitudinal relaxation time (T1) of brain tissue is of great interest for both clinical research and MRI sequence development. For an unambiguous interpretation of in vivo variations in T1 images, it is important to understand the degree of variability that is associated with the quantitative T1 parameter. This paper presents a general framework for estimating the uncertainty in quantitative T1 mapping by combining a slice-shifted multi-slice inversion recovery EPI technique with the statistical wild-bootstrap approach. Both simulations and experimental analyses were performed to validate this novel approach and to evaluate the estimated T1 uncertainty in several brain regions across four healthy volunteers. By estimating the T1 uncertainty, it is shown that the variation in T1 within anatomic regions for similar tissue types is larger than the uncertainty in the measurement. This indicates that heterogeneity of the inspected tissue and/or partial volume effects can be the main determinants for the observed variability in the estimated T1 values. The proposed approach to estimate T1 and its uncertainty without the need for repeated measurements may also prove to be useful for calculating effect sizes that are deemed significant when comparing group differences. PMID:23041796

  18. Bayesian parameter estimation in spectral quantitative photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Pulkkinen, Aki; Cox, Ben T.; Arridge, Simon R.; Kaipio, Jari P.; Tarvainen, Tanja

    2016-03-01

    Photoacoustic tomography (PAT) is an imaging technique combining the strong contrast of optical imaging with the high spatial resolution of ultrasound imaging. These strengths are achieved via the photoacoustic effect, where the spatial absorption of a light pulse is converted into a measurable propagating ultrasound wave. The method is seen as a potential tool for small animal imaging, pre-clinical investigations, the study of blood vessels and vasculature, as well as for cancer imaging. The goal in PAT is to form an image of the absorbed optical energy density field via acoustic inverse problem approaches from the measured ultrasound data. Quantitative PAT (QPAT) proceeds from these images and forms quantitative estimates of the optical properties of the target. This optical inverse problem of QPAT is ill-posed. To alleviate the issue, spectral QPAT (SQPAT) utilizes PAT data formed at multiple optical wavelengths simultaneously with optical parameter models of tissue to form quantitative estimates of the parameters of interest. In this work, the inverse problem of SQPAT is investigated. Light propagation is modelled using the diffusion equation. Optical absorption is described by a chromophore-concentration-weighted sum of known chromophore absorption spectra. Scattering is described by Mie scattering theory with an exponential power law. In the inverse problem, the spatially varying unknown parameters of interest are the chromophore concentrations, the Mie scattering parameters (power-law factor and exponent), and the Grüneisen parameter. The inverse problem is approached with a Bayesian method. It is numerically demonstrated that estimation of all parameters of interest is possible with this approach.
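    The absorption model at the heart of SQPAT, mu_a(lambda) = sum_k c_k eps_k(lambda), reduces in the simplest noise-free case to linear unmixing. The sketch below illustrates only that linear core with hypothetical spectra; the paper's Bayesian treatment additionally models scattering, the Grüneisen parameter and measurement uncertainties.

```python
import numpy as np

# Hypothetical absorption spectra (per unit concentration) of two
# chromophores at four wavelengths -- illustrative stand-ins, not
# real tissue values.
E = np.array([[0.30, 0.05],
              [0.22, 0.10],
              [0.12, 0.20],
              [0.06, 0.31]])
c_true = np.array([0.8, 1.5])   # unknown chromophore concentrations
mu_a = E @ c_true               # mu_a(lambda) = sum_k c_k * eps_k(lambda)

# Given absorption recovered at several wavelengths, the concentrations
# follow by least squares in this idealized noise-free setting:
c_est, *_ = np.linalg.lstsq(E, mu_a, rcond=None)
```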

  19. Accurate estimation of forest carbon stocks by 3-D remote sensing of individual trees.

    PubMed

    Omasa, Kenji; Qiu, Guo Yu; Watanuki, Kenichi; Yoshimi, Kenji; Akiyama, Yukihide

    2003-03-15

    Forests are one of the most important carbon sinks on Earth. However, owing to the complex structure, variable geography, and large area of forests, accurate estimation of forest carbon stocks is still a challenge for both site surveying and remote sensing. For these reasons, the Kyoto Protocol requires the establishment of methodologies for estimating the carbon stocks of forests (Kyoto Protocol, Article 5). A possible solution to this challenge is to remotely measure the carbon stocks of every tree in an entire forest. Here, we present a methodology for estimating carbon stocks of a Japanese cedar forest by using a high-resolution, helicopter-borne 3-dimensional (3-D) scanning lidar system that measures the 3-D canopy structure of every tree in a forest. Results show that a digital image (10-cm mesh) of woody canopy can be acquired. The treetop can be detected automatically with a reasonable accuracy. The absolute error ranges for tree height measurements are within 42 cm. Allometric relationships of height to carbon stocks then permit estimation of total carbon storage by measurement of carbon stocks of every tree. Thus, we suggest that our methodology can be used to accurately estimate the carbon stocks of Japanese cedar forests at a stand scale. Periodic measurements will reveal changes in forest carbon stocks. PMID:12680675
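    The final step, summing per-tree allometric estimates over every tree detected in the canopy height model, can be sketched as below. The power-law coefficients here are hypothetical placeholders; the study uses species-specific allometry calibrated for Japanese cedar.

```python
def forest_carbon(tree_heights_m, a=0.5, b=2.3):
    """Stand-scale carbon stock as the sum of per-tree allometric
    estimates C_i = a * h_i**b. The coefficients a and b are hypothetical
    placeholders, not the study's calibrated cedar allometry."""
    return sum(a * h**b for h in tree_heights_m)

# Hypothetical stand of five trees detected from lidar-measured heights
print(forest_carbon([18.2, 20.5, 17.1, 22.0, 19.4]))
```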

  20. Quantitative volumetric breast density estimation using phase contrast mammography

    NASA Astrophysics Data System (ADS)

    Wang, Zhentian; Hauser, Nik; Kubik-Huch, Rahel A.; D'Isidoro, Fabio; Stampanoni, Marco

    2015-05-01

    Phase contrast mammography using a grating interferometer is an emerging technology for breast imaging. It provides complementary information to the conventional absorption-based methods. Additional diagnostic values could be further obtained by retrieving quantitative information from the three physical signals (absorption, differential phase and small-angle scattering) yielded simultaneously. We report a non-parametric quantitative volumetric breast density estimation method by exploiting the ratio (dubbed the R value) of the absorption signal to the small-angle scattering signal. The R value is used to determine breast composition and the volumetric breast density (VBD) of the whole breast is obtained analytically by deducing the relationship between the R value and the pixel-wise breast density. The proposed method is tested by a phantom study and a group of 27 mastectomy samples. In the clinical evaluation, the estimated VBD values from both cranio-caudal (CC) and anterior-posterior (AP) views are compared with the ACR scores given by radiologists to the pre-surgical mammograms. The results show that the estimated VBD results using the proposed method are consistent with the pre-surgical ACR scores, indicating the effectiveness of this method in breast density estimation. A positive correlation is found between the estimated VBD and the diagnostic ACR score for both the CC view (p = 0.033) and AP view (p = 0.001). A linear regression between the results of the CC view and AP view showed a correlation coefficient γ = 0.77, which indicates the robustness of the proposed method and the quantitative character of the additional information obtained with our approach.

  1. A method to accurately estimate the muscular torques of human wearing exoskeletons by torque sensors.

    PubMed

    Hwang, Beomsoo; Jeon, Doyoung

    2015-01-01

    In exoskeletal robots, the quantification of the user's muscular effort is important for recognizing the user's motion intentions and evaluating motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using joint torque sensors whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated with 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors can estimate the muscular torque accurately under both relaxed and activated muscle conditions. PMID:25860074
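    The underlying idea, subtracting the limb's passive dynamics from the joint-torque-sensor reading to isolate the active muscular contribution, can be sketched for a single hinge joint modeled as a rigid pendulum. This 1-DOF simplification is this sketch's assumption; the paper formulates the full limb dynamics and identifies user-specific parameters.

```python
import numpy as np

def muscular_torque(tau_measured, q, qd, qdd, m, l, c, g=9.81):
    """Active muscular torque for a single hinge joint modeled as a rigid
    pendulum -- a 1-DOF sketch of subtracting the limb's passive dynamics
    from a joint-torque-sensor reading.

    tau_measured -- sensor torque (N m)
    q, qd, qdd   -- joint angle (rad), velocity, acceleration
    m, l, c      -- limb mass (kg), COM distance (m), viscous coefficient
                    (Coriolis terms vanish for a single DOF)
    """
    inertia = m * l**2 * qdd            # inertial torque
    damping = c * qd                    # velocity-dependent torque
    gravity = m * g * l * np.sin(q)     # gravitational torque
    return tau_measured - inertia - damping - gravity
```

    For instance, with the limb held horizontal (q = pi/2) at rest, the muscular torque is simply the sensor reading minus the gravitational term m*g*l.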

  2. Computer Monte Carlo simulation in quantitative resource estimation

    USGS Publications Warehouse

    Root, D.H.; Menzie, W.D.; Scott, W.A.

    1992-01-01

    The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska and the second is for lode tin deposits in the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit. © 1992 Oxford University Press.
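    Step 3 of the outlined assessment can be sketched as a small Monte Carlo simulation: draw a number of deposits, draw a (negatively correlated) tonnage and grade for each, and accumulate contained metal. All distributions below are hypothetical illustrations, not the USGS program's calibrated inputs.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_contained_metal(n_trials=20000):
    """Monte Carlo combination of a number-of-deposits distribution with
    grade/tonnage distributions (step 3 of the assessment). Tonnage and
    grade are drawn jointly lognormal with negative correlation, so larger
    deposits tend to have lower grades -- one of the dependencies the
    program models explicitly. All numbers here are illustrative."""
    totals = np.empty(n_trials)
    for i in range(n_trials):
        n_dep = rng.choice([0, 1, 2, 3], p=[0.3, 0.4, 0.2, 0.1])
        z = rng.multivariate_normal([np.log(5e6), np.log(0.01)],
                                    [[1.0, -0.3], [-0.3, 0.25]],
                                    size=n_dep)
        tonnage, grade = np.exp(z[:, 0]), np.exp(z[:, 1])
        totals[i] = np.sum(tonnage * grade)   # tonnes of contained metal
    return np.percentile(totals, [10, 50, 90])

p10, p50, p90 = simulate_contained_metal()
```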

  3. Quantitative criteria for estimation of natural and artificial ecosystems functioning

    NASA Astrophysics Data System (ADS)

    Pechurkin, N. S.

    Quantitative criteria are needed to design efficient artificial ecosystems (AES). As a rule, these criteria are tied to the specific objectives of the AES being designed. For example, if an AES is intended for use in space, it must be efficient in its use of mass, power, volume (size) and human labor, and it must be reliable. Another task then arises: how to determine quantitative criteria for the functioning of natural ecosystems. To solve this problem, it is fruitful to use a hierarchical approach suitable both for individual links and for the ecosystem as a whole. Energy-flux criteria (principles) were developed to estimate the functional activities of biosystems at the population, community and ecosystem levels. The major feature of an ecosystem as a whole is the biotic turnover of matter, the rate of which is restricted by the scarcity of limiting substances. Obviously, the most generalized criterion must take into account the energy fluxes used by the biosystem and the quantity of limiting substance included in its turnover. The utilization of energy fluxes by an ecosystem can be easily measured as the net primary production of photosynthesis (NPP). Thus, the ratio of NPP to the total mass of the limiting substrate can serve as a main universal criterion (MUC). This MUC characterizes the specific cycling rate of limiting chemical elements in the system and the effectiveness of the functioning of every ecosystem, including the biosphere. Comparative analysis and elaboration of quantitative criteria for estimating the activities of natural and artificial ecosystems is of high importance both for theoretical considerations and for real applications.

  4. An accurate method of extracting fat droplets in liver images for quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2015-03-01

    Steatosis in pathological liver tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and of the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the presence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using feature values describing the colors, shapes and arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.

  5. Easy and accurate variance estimation of the nonparametric estimator of the partial area under the ROC curve and its application.

    PubMed

    Yu, Jihnhee; Yang, Luge; Vexler, Albert; Hutson, Alan D

    2016-06-15

    The receiver operating characteristic (ROC) curve is a popular technique with applications such as investigating the accuracy of a biomarker in delineating between disease and non-disease groups. A common measure of the accuracy of a given diagnostic marker is the area under the ROC curve (AUC). In contrast with the AUC, the partial area under the ROC curve (pAUC) looks only at the area for certain specificities (i.e., true negative rates), and it can often be clinically more relevant than examining the entire ROC curve. The pAUC is commonly estimated based on a U-statistic with a plug-in sample quantile, making the estimator a non-traditional U-statistic. In this article, we propose an accurate and easy method to obtain the variance of the nonparametric pAUC estimator. The proposed method is easy to implement for both a single-biomarker test and the comparison of two correlated biomarkers because it simply adapts the existing variance estimator of U-statistics. We show the accuracy and other advantages of the proposed variance estimation method by broadly comparing it with previously existing methods. Further, we develop an empirical likelihood inference method based on the proposed variance estimator through a simple implementation. In an application, we demonstrate that, depending on whether inference is based on the AUC or the pAUC, we can reach different decisions about the prognostic ability of the same set of biomarkers. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26790540
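    The estimator whose variance the paper studies, a U-statistic with a plug-in sample quantile of the controls, can be sketched as below; the variance formula itself is the paper's contribution and is not reproduced here.

```python
import numpy as np

def pauc(cases, controls, max_fpr=0.2):
    """Nonparametric partial AUC over false-positive rates [0, max_fpr],
    computed as a U-statistic with a plug-in sample quantile of the
    controls (ties counted 1/2). A sketch of the estimator class analyzed
    in the paper."""
    x = np.asarray(cases, dtype=float)
    y = np.asarray(controls, dtype=float)
    c = np.quantile(y, 1.0 - max_fpr)   # plug-in specificity threshold
    y_sel = y[y >= c]                   # controls inside the pAUC region
    gt = (x[:, None] > y_sel[None, :]).sum()
    ties = 0.5 * (x[:, None] == y_sel[None, :]).sum()
    return float((gt + ties) / (x.size * y.size))

# A perfectly separating marker attains the maximum pAUC, max_fpr:
print(pauc(cases=[5, 6, 7], controls=[0, 1, 2, 3, 4]))  # 0.2
```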

  6. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.

    PubMed

    Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range. PMID:27611785

  7. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

    In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real-world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight path.

  8. Effective Echo Detection and Accurate Orbit Estimation Algorithms for Space Debris Radar

    NASA Astrophysics Data System (ADS)

    Isoda, Kentaro; Sakamoto, Takuya; Sato, Toru

    Orbit estimation of space debris, objects of no inherent value orbiting the earth, is a task that is important for avoiding collisions with spacecraft. The Kamisaibara Spaceguard Center radar system was built in 2004 as the first radar facility in Japan devoted to the observation of space debris. In order to detect the smaller debris, coherent integration is effective in improving the SNR (signal-to-noise ratio). However, it is difficult to apply coherent integration to real data because the motions of the targets are unknown. An effective algorithm is proposed for echo detection and orbit estimation of the faint echoes from space debris. The algorithm exploits the characteristics of the evaluation function. Experiments show that the proposed algorithm improves the SNR by 8.32 dB and enables orbital parameters to be estimated accurately enough to allow re-tracking with a single radar.
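    The gain available from coherent integration follows directly from signal theory: summing n phase-aligned pulses grows the signal power as n² while the noise power grows only as n, for a net gain of 10·log10(n) dB, about 9 dB for n = 8, which is the regime of the 8.32 dB improvement reported above. A small empirical check:

```python
import numpy as np

rng = np.random.default_rng(7)

def coherent_gain_db(n_pulses, trials=20000):
    """Empirical SNR gain from coherently summing n phase-aligned pulses
    (unit signal, unit-variance complex Gaussian noise per pulse).
    Theory predicts 10*log10(n) dB."""
    noise = (rng.normal(size=(trials, n_pulses)) +
             1j * rng.normal(size=(trials, n_pulses))) / np.sqrt(2)
    snr_in = 1.0 / noise[:, 0].var()                  # single-pulse SNR
    snr_out = n_pulses**2 / noise.sum(axis=1).var()   # after coherent sum
    return float(10 * np.log10(snr_out / snr_in))

g = coherent_gain_db(8)   # close to 10*log10(8) = 9.03 dB
```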

  9. Parameter Estimation of Ion Current Formulations Requires Hybrid Optimization Approach to Be Both Accurate and Reliable

    PubMed Central

    Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar

    2016-01-01

    Computational models of cardiac electrophysiology have provided insights into arrhythmogenesis and paved the way toward tailored therapies in recent years. However, to fully leverage in silico models in future research, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today’s high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms improved the performance to some extent, but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground-truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal to noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly allowing to assess the often non
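    The per-iteration hybridization of a global particle swarm with a gradient-based local method can be sketched on a toy current-like model, y = a·exp(-b·t). The Gauss-Newton polish below stands in for the trust-region-reflective step, and the whole scheme is a simplified illustration of the coupling idea, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 40)
y_obs = 2.0 * np.exp(-0.5 * t)      # synthetic noise-free "current" trace

def sse(p):
    a, b = p
    return float(np.sum((a * np.exp(-b * t) - y_obs) ** 2))

def gauss_newton(p, steps=3):
    """Gradient-based local polish (stand-in for trust-region-reflective)."""
    a, b = p
    with np.errstate(over="ignore", invalid="ignore"):
        for _ in range(steps):
            e = np.exp(-b * t)
            r = a * e - y_obs                        # residuals
            J = np.column_stack([e, -a * t * e])     # d r / d(a, b)
            try:
                a, b = np.array([a, b]) - np.linalg.solve(J.T @ J, J.T @ r)
            except np.linalg.LinAlgError:
                break
    return np.array([a, b])

def hybrid_pso(n_part=20, iters=30, lo=(0.0, 0.0), hi=(5.0, 2.0)):
    """Each PSO iteration is followed by a Gauss-Newton polish of the
    swarm's best particle, which is re-injected if it improves the fit."""
    lo, hi = np.array(lo), np.array(hi)
    pos = rng.uniform(lo, hi, size=(n_part, 2))
    vel = np.zeros_like(pos)
    pbest, pcost = pos.copy(), np.array([sse(p) for p in pos])
    gbest = pbest[pcost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_part, 2))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        cost = np.array([sse(p) for p in pos])
        better = cost < pcost
        pbest[better], pcost[better] = pos[better], cost[better]
        cand = gauss_newton(pbest[pcost.argmin()])   # hybrid step
        c_cand = sse(cand)
        if np.isfinite(c_cand) and c_cand < pcost.min():
            gbest = cand
            worst = pcost.argmax()
            pbest[worst], pcost[worst] = cand, c_cand
        else:
            gbest = pbest[pcost.argmin()]
    return gbest

a_hat, b_hat = hybrid_pso()   # should recover a = 2.0, b = 0.5
```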

  10. Mass Spectrometry Provides Accurate and Sensitive Quantitation of A2E

    PubMed Central

    Gutierrez, Danielle B.; Blakeley, Lorie; Goletz, Patrice W.; Schey, Kevin L.; Hanneken, Anne; Koutalos, Yiannis; Crouch, Rosalie K.; Ablonczy, Zsolt

    2010-01-01

    Summary Orange autofluorescence from lipofuscin in the lysosomes of the retinal pigment epithelium (RPE) is a hallmark of aging in the eye. One of the major components of lipofuscin is A2E, the levels of which increase with age and in pathologic conditions, such as Stargardt disease or age-related macular degeneration. In vitro studies have suggested that A2E is highly phototoxic and, more specifically, that A2E and its oxidized derivatives contribute to RPE damage and subsequent photoreceptor cell death. To date, absorption spectroscopy has been the primary method to identify and quantitate A2E. Here, a new mass spectrometric method was developed for the specific detection of low levels of A2E and compared to a traditional method of analysis. The new mass spectrometry method allows the detection and quantitation of approximately 10,000-fold less A2E than absorption spectroscopy and the detection and quantitation of low levels of oxidized A2E, with localization of the oxidation sites. This study suggests that identification and quantitation of A2E from tissue extracts by chromatographic absorption spectroscopy overestimates the amount of A2E. This mass spectrometry approach makes it possible to detect low levels of A2E and its oxidized metabolites with greater accuracy than traditional methods, thereby facilitating a more exact analysis of bis-retinoids in animal models of inherited retinal degeneration as well as in normal and diseased human eyes. PMID:20931136

  11. Probabilistic Quantitative Precipitation Estimates with Ground-based Radar Networks

    NASA Astrophysics Data System (ADS)

    Kirstetter, Pierre-Emmanuel; Gourley, Jonathan; Hong, Yang; Zhang, Jian; Moazamigoodarzi, Saber; Langston, Carrie; Arthur, Ami

    2015-04-01

    The uncertainty structure of radar quantitative precipitation estimation (QPE) is largely unknown at fine spatiotemporal scales near the radar measurement scale (1-km/5-min). By using the WSR-88D radar network and rain gauge datasets across the conterminous US, an investigation of this subject has been carried out within the framework of the NOAA/NSSL ground radar-based Multi-Radar Multi-Sensor (MRMS) system. Probability distributions of precipitation rates are computed instead of deterministic values using a model quantifying the relation between radar reflectivity and the corresponding "true" precipitation. The probabilistic model considers multiple sources of error in radar QPE as well as the impacts of correction algorithms on the radar signal. Ensembles of reflectivity-to-rain-rate relationships accounting explicitly for rain typology were derived at a 5-min/1-km scale. This approach preserves the fine space/time sampling properties of the radar and conditions probabilistic QPE on the rain rate and precipitation type when computing probabilistic quantitative precipitation estimates (PQPE). The model components were estimated on the basis of a 1-year-long data sample. This PQPE model provides the basis for precipitation probability maps and the generation of radar precipitation ensembles. Maps of the precipitation exceedance probability for specific thresholds (e.g. precipitation return periods) are demonstrated. Precipitation probability maps accumulated to the hourly time scale compare favorably with the deterministic QPE. This approach to PQPE applies readily to other systems, including space-based passive and active sensor algorithms.
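The ensemble idea behind PQPE can be illustrated in a few lines. This is a minimal sketch, not the MRMS model: the Z-R coefficient spread, the reflectivity value, and the threshold are all illustrative assumptions.

```python
import numpy as np

# An ensemble of Z-R relations Z = a * R^b, inverted to R = (Z/a)^(1/b). The spread of
# (a, b) across members stands in for the multiple error sources in radar QPE; all
# coefficients here are illustrative, not the MRMS values.
rng = np.random.default_rng(4)
a = rng.normal(300.0, 40.0, 500)       # ensemble of Z-R prefactors
b = rng.normal(1.4, 0.1, 500)          # ensemble of Z-R exponents
z_dbz = 40.0                           # measured reflectivity, dBZ
z_lin = 10.0 ** (z_dbz / 10.0)         # linear reflectivity, mm^6 m^-3
rates = (z_lin / a) ** (1.0 / b)       # ensemble of rain rates, mm/h

# Probabilistic QPE: report an exceedance probability rather than one deterministic value
threshold_mm_per_h = 10.0
p_exceed = float(np.mean(rates > threshold_mm_per_h))
```

Repeating this at every radar pixel yields the exceedance-probability maps described above; conditioning (a, b) on precipitation type would mirror the rain-typology ensembles in the record.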

  12. Accurate reconstruction of viral quasispecies spectra through improved estimation of strain richness

    PubMed Central

    2015-01-01

    Background Estimating the number of different species (richness) in a mixed microbial population has been a main focus in metagenomic research. Existing methods of species richness estimation rest on the assumption that the reads in each assembled contig correspond to only one of the microbial genomes in the population. This assumption and the underlying probabilistic formulations of existing methods are not useful for quasispecies populations, where the strains are highly genetically related. The lack of knowledge of the number of different strains in a quasispecies population is observed to hinder the precision of existing Viral Quasispecies Spectrum Reconstruction (QSR) methods due to the uncontrolled reconstruction of a large number of in silico false positives. In this work, we formulated a novel probabilistic method for strain richness estimation specifically targeting viral quasispecies. Using this approach, we improved our recently proposed spectrum reconstruction pipeline ViQuaS to achieve higher precision in reconstructed quasispecies spectra without compromising the recall rates. We also discuss how another existing popular QSR method, ShoRAH, can be improved using this new approach. Results On benchmark data sets, our estimation method provided accurate richness estimates (< 0.2 median estimation error) and improved the precision of ViQuaS by 2%-13% and its F-score by 1%-9% without compromising the recall rates. We also demonstrate that our estimation method can be used to improve the precision and F-score of ShoRAH by 0%-7% and 0%-5%, respectively. Conclusions The proposed probabilistic estimation method can be used to estimate the richness of viral populations with quasispecies behavior and to improve the accuracy of the quasispecies spectra reconstructed by the existing methods ViQuaS and ShoRAH in the presence of a moderate level of technical sequencing errors. Availability http://sourceforge.net/projects/viquas/ PMID:26678073

  13. Quantitative criteria for estimation of natural and artificial ecosystems functioning

    NASA Astrophysics Data System (ADS)

    Pechurkin, N. S.

    In their biotic turnover of substances through trophic chains, natural and artificial ecosystems are similar in functioning but different in structure. Quantitative criteria are needed to evaluate the efficiency of artificial ecosystems (AES), and these criteria depend on the specific objectives for which the AES are designed. For example, if an AES is considered for use in space, important criteria are efficiency in the use of mass, power, volume (size), and human labor, as well as reliability. Another task involves determining quantitative criteria for the functioning of natural ecosystems. To solve this problem, it is fruitful to use a hierarchical approach suitable both for individual links and for the ecosystem as a whole. Energy-flux criteria (principles) were developed to estimate the functional activities of biosystems at the population, community, and ecosystem levels. A major feature of ecosystems as a whole is their biotic turnover of matter, the rate of which is restricted by the availability of limiting substances. The most generalized criterion therefore accounts for both the energy flux used by the biosystem and the quantity of limiting substance included in its turnover. The energy flux used by an ecosystem, E_used, is determined from the photoassimilation of CO2 by plants (per unit time) and can be approximated by the net primary production of photosynthesis (NPP). Thus, the ratio of the CO2 photoassimilation rate (sometimes measured as NPP) to the total mass of limiting substrate can serve as a main universal criterion (MUC). The MUC characterizes the specific cycling rate of limiting chemical elements in the system and the effectiveness of every ecosystem, including the global biosphere. Comparative analysis and elaboration of quantitative criteria for estimating the activities of natural and artificial ecosystems is of high importance both for theoretical considerations and for real applications.
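The MUC defined above is a simple ratio, which can be stated as a worked example. The figures below are purely hypothetical and serve only to illustrate the units of the criterion.

```python
# Main universal criterion (MUC): ratio of the CO2 photoassimilation rate (approximated
# by NPP) to the total mass of the limiting substrate. Illustrative numbers only.
def main_universal_criterion(npp, limiting_mass):
    """Specific cycling rate of the limiting element, in units of 1/time."""
    return npp / limiting_mass

# e.g. NPP in gC/(m^2*yr) over limiting-substrate mass in gC/m^2 gives 1/yr
muc = main_universal_criterion(npp=500.0, limiting_mass=10_000.0)
```

With these hypothetical numbers the limiting element cycles through the system at 0.05 per year, i.e. once every 20 years; a larger MUC indicates faster cycling and, by the criterion above, a more effective ecosystem.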

  14. Intraocular lens power estimation by accurate ray tracing for eyes underwent previous refractive surgeries

    NASA Astrophysics Data System (ADS)

    Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong

    2015-08-01

    For normal eyes without history of any ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Higis, SRK-II, et al., all were relativley accurate. However, for eyes underwent refractive surgeries, such as LASIK, or eyes diagnosed as keratoconus, these equations may cause significant postoperative refractive error, which may cause poor satisfaction after cataract surgery. Although some methods have been carried out to solve this problem, such as Hagis-L equation[1], or using preoperative data (data before LASIK) to estimate K value[2], no precise equations were available for these eyes. Here, we introduced a novel intraocular lens power estimation method by accurate ray tracing with optical design software ZEMAX. Instead of using traditional regression formula, we adopted the exact measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculation of intraocular lens power for a patient with keratoconus and another LASIK postoperative patient met very well with their visual capacity after cataract surgery.

  15. Quantitative estimation of poikilocytosis by the coherent optical method

    NASA Astrophysics Data System (ADS)

    Safonova, Larisa P.; Samorodov, Andrey V.; Spiridonov, Igor N.

    2000-05-01

    The investigation upon the necessity and the reliability required of the determination of the poikilocytosis in hematology has shown that existing techniques suffer from grave shortcomings. To determine a deviation of the erythrocytes' form from the normal (rounded) one in blood smears it is expedient to use an integrative estimate. The algorithm which is based on the correlation between erythrocyte morphological parameters with properties of the spatial-frequency spectrum of blood smear is suggested. During analytical and experimental research an integrative form parameter (IFP) which characterizes the increase of the relative concentration of cells with the changed form over 5% and the predominating type of poikilocytes was suggested. An algorithm of statistically reliable estimation of the IFP on the standard stained blood smears has been developed. To provide the quantitative characterization of the morphological features of cells a form vector has been proposed, and its validity for poikilocytes differentiation was shown.

  16. Regularization Based Iterative Point Match Weighting for Accurate Rigid Transformation Estimation.

    PubMed

    Liu, Yonghuai; De Dominicis, Luigi; Wei, Baogang; Chen, Liang; Martin, Ralph R

    2015-09-01

    Feature extraction and matching (FEM) for 3D shapes finds numerous applications in computer graphics and vision for object modeling, retrieval, morphing, and recognition. However, unavoidable incorrect matches lead to inaccurate estimation of the transformation relating different datasets. Inspired by AdaBoost, this paper proposes a novel iterative re-weighting method to tackle the challenging problem of evaluating point matches established by typical FEM methods. Weights are used to indicate the degree of belief that each point match is correct. Our method has three key steps: (i) estimation of the underlying transformation using weighted least squares, (ii) penalty parameter estimation via minimization of the weighted variance of the matching errors, and (iii) weight re-estimation taking into account both matching errors and information learnt in previous iterations. A comparative study, based on real shapes captured by two laser scanners, shows that the proposed method outperforms four other state-of-the-art methods in terms of evaluating point matches between overlapping shapes established by two typical FEM methods, resulting in more accurate estimates of the underlying transformation. This improved transformation can be used to better initialize the iterative closest point algorithm and its variants, making 3D shape registration more likely to succeed. PMID:26357287
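The estimate-then-reweight loop in this record can be sketched with numpy. This is a simplified illustration, not the paper's algorithm: the weighted transform uses the standard weighted Kabsch/Procrustes solution, and the Gaussian re-weighting is a stand-in for the paper's penalty-parameter scheme.

```python
import numpy as np

def weighted_rigid_transform(P, Q, w):
    """Weighted least-squares rigid transform (R, t) minimizing sum_i w_i ||R p_i + t - q_i||^2,
    via the weighted Kabsch/Procrustes solution."""
    w = w / w.sum()
    mu_p, mu_q = w @ P, w @ Q
    H = (P - mu_p).T @ (w[:, None] * (Q - mu_q))
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_q - R @ mu_p
    return R, t

def reweight(P, Q, R, t, sigma):
    """Down-weight matches with large residuals (a Gaussian stand-in for the
    paper's penalty-based weighting)."""
    e = np.linalg.norm(P @ R.T + t - Q, axis=1)
    return np.exp(-(e / sigma) ** 2)

# Synthetic demo: exact point matches plus one gross outlier match
rng = np.random.default_rng(0)
P = rng.normal(size=(30, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true
Q[0] += 5.0                          # one incorrect point match

w = np.ones(len(P))
for _ in range(3):                   # estimate -> re-weight, as in the iterative scheme
    R, t = weighted_rigid_transform(P, Q, w)
    w = reweight(P, Q, R, t, sigma=0.5)
```

After the first re-weighting the outlier match receives essentially zero weight, so subsequent fits recover the true transformation; this recovered transform is exactly the kind of initialization the record proposes for the iterative closest point algorithm.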

  17. The accurate estimation of physicochemical properties of ternary mixtures containing ionic liquids via artificial neural networks.

    PubMed

    Cancilla, John C; Díaz-Rodríguez, Pablo; Matute, Gemma; Torrecilla, José S

    2015-02-14

    The estimation of the density and refractive index of ternary mixtures comprising the ionic liquid (IL) 1-butyl-3-methylimidazolium tetrafluoroborate, 2-propanol, and water at a fixed temperature of 298.15 K has been attempted through artificial neural networks. The obtained results indicate that the selection of this mathematical approach was a well-suited option. The mean prediction errors obtained, after simulating with a dataset never involved in the training process of the model, were 0.050% and 0.227% for refractive index and density estimation, respectively. These accurate results, which have been attained only using the composition of the dissolutions (mass fractions), imply that, most likely, ternary mixtures similar to the one analyzed, can be easily evaluated utilizing this algorithmic tool. In addition, different chemical processes involving ILs can be monitored precisely, and furthermore, the purity of the compounds in the studied mixtures can be indirectly assessed thanks to the high accuracy of the model. PMID:25583241

  18. Toward an Accurate Estimate of the Exfoliation Energy of Black Phosphorus: A Periodic Quantum Chemical Approach.

    PubMed

    Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti

    2016-01-01

    The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems. PMID:26651397

  19. Lamb mode selection for accurate wall loss estimation via guided wave tomography

    SciTech Connect

    Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.

    2014-02-18

    Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1–2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimates of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S{sub 0} and A{sub 0}, which are of primary interest in guided wave tomography thickness estimates since the higher-order modes do not exist at all thicknesses, comparing their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A{sub 0} to thickness variations was shown to be superior to that of S{sub 0}; however, the attenuation of A{sub 0} when a liquid loading was present was much higher than that of S{sub 0}. A{sub 0} was also less sensitive than S{sub 0} to the presence of coatings on the surface of the pipe.

  20. Lamb mode selection for accurate wall loss estimation via guided wave tomography

    NASA Astrophysics Data System (ADS)

    Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.

    2014-02-01

    Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1-2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimates of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher-order modes do not exist at all thicknesses, comparing their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 when a liquid loading was present was much higher than that of S0. A0 was also less sensitive than S0 to the presence of coatings on the surface of the pipe.

  1. How Accurate and Robust Are the Phylogenetic Estimates of Austronesian Language Relationships?

    PubMed Central

    Greenhill, Simon J.; Drummond, Alexei J.; Gray, Russell D.

    2010-01-01

    We recently used computational phylogenetic methods on lexical data to test between two scenarios for the peopling of the Pacific. Our analyses of lexical data supported a pulse-pause scenario of Pacific settlement in which the Austronesian speakers originated in Taiwan around 5,200 years ago and rapidly spread through the Pacific in a series of expansion pulses and settlement pauses. We claimed that there was high congruence between traditional language subgroups and those observed in the language phylogenies, and that the estimated age of the Austronesian expansion at 5,200 years ago was consistent with the archaeological evidence. However, the congruence between the language phylogenies and the evidence from historical linguistics was not quantitatively assessed using tree comparison metrics. The robustness of the divergence time estimates to different calibration points was also not investigated exhaustively. Here we address these limitations by using a systematic tree comparison metric to calculate the similarity between the Bayesian phylogenetic trees and the subgroups proposed by historical linguistics, and by re-estimating the age of the Austronesian expansion using only the most robust calibrations. The results show that the Austronesian language phylogenies are highly congruent with the traditional subgroupings, and the date estimates are robust even when calculated using a restricted set of historical calibrations. PMID:20224774

  2. Quantitative precipitation estimation by merging multiple precipitation products using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Chiang, Y.; Tsai, M.; Chang, F.

    2010-12-01

    Simulation of extreme rainfall-runoff events is a key issue for flood mitigation. The accuracy of model-driven flood forecasting usually depends on whether the upstream precipitation information is sufficient. In the past, such information was provided by ground measurements; however, remote sensing data such as radar and satellite images have been widely applied to precipitation estimation in recent years. The development of remote sensing technology enables researchers to capture the spatial distribution of rainfall. As far as quantitative precipitation estimation is concerned, remote sensing data provide more useful information than ground measurements, and appropriately integrating ground observations with radar and satellite estimates has the potential to reduce flood risk. Therefore, in this study we first analyze the long-term variation and the correlation between observations and the different products by statistical methods. Secondly, the observational/estimation errors of the different precipitation sources are investigated, and the biases of each precipitation product are removed by artificial neural networks. Finally, an accurate quantitative precipitation estimate can be produced by integrating the different precipitation products.
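The bias-removal-then-merge workflow can be sketched in numpy. This is a simplified stand-in, not the study's method: linear least squares replaces the artificial neural networks, the merge uses inverse-error-variance weights, and all synthetic rainfall numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = rng.gamma(2.0, 2.0, size=200)              # gauge "ground truth" rainfall (illustrative)
radar = 0.7 * truth + rng.normal(0.0, 0.5, 200)    # radar product with multiplicative bias
sat   = 1.3 * truth + rng.normal(0.0, 1.0, 200)    # satellite product with a different bias

def remove_bias(est, obs):
    """Fit obs ~ a*est + b by least squares and return the bias-corrected estimate
    (a linear stand-in for the ANN bias removal)."""
    A = np.column_stack([est, np.ones_like(est)])
    (a, b), *_ = np.linalg.lstsq(A, obs, rcond=None)
    return a * est + b

radar_c = remove_bias(radar, truth)
sat_c = remove_bias(sat, truth)

# Merge the corrected products with inverse-error-variance weights
w_radar = 1.0 / np.var(radar_c - truth)
w_sat = 1.0 / np.var(sat_c - truth)
merged = (w_radar * radar_c + w_sat * sat_c) / (w_radar + w_sat)
```

Because the corrected products have (nearly) independent errors, the merged estimate has lower error variance than either product alone, which is the rationale for integrating multiple precipitation sources.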

  3. Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.

    PubMed

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet

    2016-05-01

    Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. 10-min seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 estimates from measured HR and from corrected HR (thermal component removed) were compared to the VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm) were observed, originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%), and 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments. PMID:26851474
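The correction itself is a simple subtraction followed by a linear HR-to-VO2 calibration. The sketch below uses entirely illustrative numbers (calibration slope, resting values, thermal component), not values from the study.

```python
def corrected_hr(hr_work, delta_hr_thermal):
    """Remove the thermal component from the working heart rate (Vogt et al.'s idea)."""
    return hr_work - delta_hr_thermal

def vo2_from_hr(hr, hr_rest, vo2_rest, slope):
    """Linear HR-to-VO2 calibration, e.g. from a morning submaximal step-test.
    slope is in mL/kg/min per bpm; all numbers used below are illustrative."""
    return vo2_rest + slope * (hr - hr_rest)

hr_raw = 120.0      # bpm measured during work
d_thermal = 20.0    # bpm thermal component estimated from the seated rest periods
vo2_raw = vo2_from_hr(hr_raw, hr_rest=70.0, vo2_rest=5.0, slope=0.3)
vo2_cor = vo2_from_hr(corrected_hr(hr_raw, d_thermal), hr_rest=70.0, vo2_rest=5.0, slope=0.3)
```

With these hypothetical numbers the raw-HR estimate exceeds the corrected one by over 40%, mirroring the kind of overestimation the study reports when the thermal component is ignored.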

  4. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets

    PubMed Central

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265
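The general idea of reading dimensionality off pairwise distances can be demonstrated with a simpler, classical relative: the correlation-dimension estimate. This sketch is not the authors' geodesic-distribution estimator; it uses a flat 2-D manifold, on which graph geodesics coincide with Euclidean distances, and radii chosen for illustration.

```python
import numpy as np

def correlation_dimension(X, r1, r2):
    """Estimate intrinsic dimension from the growth of the number of neighbor
    pairs with radius: C(r) ~ r^d, so d ~ log(C(r2)/C(r1)) / log(r2/r1).
    A simpler stand-in for the paper's distance-distribution fit."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d = D[np.triu_indices(len(X), k=1)]      # all pairwise distances
    c1, c2 = np.sum(d < r1), np.sum(d < r2)
    return np.log(c2 / c1) / np.log(r2 / r1)

# A flat 2-D manifold embedded in 3-D: the estimate should recover d ~ 2,
# not the embedding dimension 3
rng = np.random.default_rng(0)
plane = rng.random((1500, 2))
X = np.column_stack([plane, np.zeros(1500)])
dim = correlation_dimension(X, r1=0.05, r2=0.10)
```

For curved manifolds the Euclidean distances above would have to be replaced by graph geodesics (minimum-distance paths on a neighborhood graph), which is precisely the refinement the record describes.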

  5. Hybridization modeling of oligonucleotide SNP arrays for accurate DNA copy number estimation

    PubMed Central

    Wan, Lin; Sun, Kelian; Ding, Qi; Cui, Yuehua; Li, Ming; Wen, Yalu; Elston, Robert C.; Qian, Minping; Fu, Wenjiang J

    2009-01-01

    Affymetrix SNP arrays have been widely used for single-nucleotide polymorphism (SNP) genotype calling and DNA copy number variation inference. Although numerous methods have achieved high accuracy in these fields, most studies have paid little attention to the modeling of hybridization of probes to off-target allele sequences, which can affect the accuracy greatly. In this study, we address this issue and demonstrate that hybridization with mismatch nucleotides (HWMMN) occurs in all SNP probe-sets and has a critical effect on the estimation of allelic concentrations (ACs). We study sequence binding through binding free energy and then binding affinity, and develop a probe intensity composite representation (PICR) model. The PICR model allows the estimation of ACs at a given SNP through statistical regression. Furthermore, we demonstrate with cell-line data of known true copy numbers that the PICR model can achieve reasonable accuracy in copy number estimation at a single SNP locus, by using the ratio of the estimated AC of each sample to that of the reference sample, and can reveal subtle genotype structure of SNPs at abnormal loci. We also demonstrate with HapMap data that the PICR model yields accurate SNP genotype calls consistently across samples, laboratories and even across array platforms. PMID:19586935

  6. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets.

    PubMed

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant "collective" variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265

  7. Study on the performance evaluation of quantitative precipitation estimation and quantitative precipitation forecast

    NASA Astrophysics Data System (ADS)

    Yang, H.; Chang, K.; Suk, M.; cha, J.; Choi, Y.

    2011-12-01

    Rainfall estimation and short-term (several-hour) quantitative precipitation forecasting based on meteorological radar data are intensely studied topics. The Korean Peninsula has a horizontally narrow land area and complex, mountainous topography, so its rainfall systems change frequently. Quantitative precipitation estimation (QPE) and quantitative precipitation forecasts (QPF) provide crucial information for severe weather and water management. We have conducted a performance evaluation of the QPE/QPF of the Korea Meteorological Administration (KMA), which is the first step toward optimizing the QPE/QPF system in South Korea. The real-time adjusted RAR (Radar-AWS-Rainrate) system gives better agreement with the observed rain rate than the fixed Z-R relation, and an additional bias correction of RAR yields slightly better results. A correlation coefficient of R2 = 0.84 is obtained between the daily accumulated observed and RAR-estimated rainfall. The RAR will be available for hydrological applications such as the water budget. The VSRF (Very Short Range Forecast) shows better performance than MAPLE (McGill Algorithm for Precipitation Nowcasting by Lagrangian Extrapolation) within 40 minutes, but MAPLE performs better than the VSRF after 40 minutes; for hourly forecasts, MAPLE shows better performance than the VSRF. QPE and QPF are considered meaningful for nowcasting (1-2 hours), whereas model-based forecasts longer than 3 hours are especially meaningful for applications such as water management.

  8. Handling uncertainty in quantitative estimates in integrated resource planning

    SciTech Connect

    Tonn, B.E.; Wagner, C.G.

    1995-01-01

    This report addresses uncertainty in Integrated Resource Planning (IRP). IRP is a planning and decision-making process employed by utilities, usually at the behest of Public Utility Commissions (PUCs), to develop plans to ensure that utilities have resources necessary to meet consumer demand at reasonable cost. IRP has been used to assist utilities in developing plans that include not only traditional electricity supply options but also demand-side management (DSM) options. Uncertainty is a major issue for IRP. Future values for numerous important variables (e.g., future fuel prices, future electricity demand, stringency of future environmental regulations) cannot ever be known with certainty. Many economically significant decisions are so unique that statistically based probabilities cannot even be calculated. The entire utility strategic planning process, including IRP, encompasses different types of decisions that are made with different time horizons and at different points in time. Because of fundamental pressures for change in the industry, including competition in generation, gone is the time when utilities could easily predict increases in demand, enjoy long lead times to bring on new capacity, and bank on steady profits. The purpose of this report is to address in detail one aspect of uncertainty in IRP: dealing with uncertainty in quantitative estimates, such as the future demand for electricity or the cost to produce a megawatt (MW) of power. A theme which runs throughout the report is that every effort must be made to honestly represent what is known about a variable that can be used to estimate its value, what cannot be known, and what is not known due to operational constraints. Applying this philosophy to the representation of uncertainty in quantitative estimates, it is argued that imprecise probabilities are superior to classical probabilities for IRP.

  9. Methods for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, M.R.; Bland, R.

    2000-01-01

    Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. Methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one of data collected during a neap tide (monthly minimum tidal current). The relative magnitude of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. 
Using a comprehensive calibration method, net discharge estimates developed from the three
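The index-velocity workflow described above (rate the index velocity against concurrent ADCP mean channel velocity, compute instantaneous discharge, then low-pass filter out the tides) can be sketched as follows. The linear rating and the simple moving-average filter are illustrative assumptions, not the exact USGS procedure, and the function names are hypothetical.

```python
import numpy as np

def rate_index_velocity(index_v, mean_channel_v):
    """Fit a linear rating between index velocity and ADCP-derived mean
    channel velocity (the calibration step the abstract identifies as the
    dominant error source)."""
    slope, intercept = np.polyfit(index_v, mean_channel_v, 1)
    return slope, intercept

def net_discharge(index_v, area, slope, intercept, window):
    """Instantaneous discharge = rated mean velocity * channel area, then a
    moving-average low-pass filter (a stand-in for a proper tidal filter)
    to remove tidal oscillations."""
    q = (slope * index_v + intercept) * area
    kernel = np.ones(window) / window
    return np.convolve(q, kernel, mode="valid")
```

With a filter window matching the tidal period, the oscillatory component cancels and only the net (residual) discharge remains.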

  10. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    NASA Astrophysics Data System (ADS)

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-03-01

Automatic estimation of velocities from GPS coordinate time series is increasingly required to cope with the exponentially growing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
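A minimal single-pass sketch of the MIDAS idea: take the median of slopes from data pairs separated by one year, then trim outliers and recompute the median. The pairing tolerance and the 2-MAD trimming threshold here are simplifying assumptions; the published estimator adds forward/backward pair selection, gap handling, and a robust uncertainty estimate.

```python
import numpy as np

def midas_trend(t, x, tol=0.001):
    """Simplified MIDAS-style trend (units of x per unit of t, t in years).
    Pairs each sample with the sample closest to one year later, takes the
    median slope, trims slopes more than 2 scaled MADs from the median,
    and recomputes the median."""
    t = np.asarray(t, float)
    x = np.asarray(x, float)
    slopes = []
    for i, ti in enumerate(t):
        # partner ~1 year later (tolerance in years is an assumption here)
        j = int(np.argmin(np.abs(t - (ti + 1.0))))
        if j != i and abs(t[j] - ti - 1.0) <= tol:
            slopes.append((x[j] - x[i]) / (t[j] - t[i]))
    slopes = np.array(slopes)
    med = np.median(slopes)
    mad = 1.4826 * np.median(np.abs(slopes - med))  # scaled MAD
    if mad > 0:
        slopes = slopes[np.abs(slopes - med) <= 2 * mad]
    return float(np.median(slopes))
```

Because every pair spans exactly one year, a purely seasonal (annual) signal contributes nothing to any slope, which is the key robustness property.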

  11. High-Resolution Tsunami Inundation Simulations Based on Accurate Estimations of Coastal Waveforms

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.; Furumura, T.

    2015-12-01

We evaluate the accuracy of high-resolution tsunami inundation simulations in detail using the observational data of the 2011 Tohoku-Oki earthquake (Mw 9.0) and investigate methodologies to improve simulation accuracy. Due to the recent development of parallel computing technologies, high-resolution tsunami inundation simulations are conducted more commonly than before. To evaluate how accurately these simulations can reproduce inundation processes, we test several simulation configurations on a parallel computer, where we can utilize the observational data (e.g., offshore and coastal waveforms and inundation properties) recorded during the Tohoku-Oki earthquake. Before discussing the accuracy of inundation processes on land, the incident waves at coastal sites must be accurately estimated. However, for megathrust earthquakes, it is difficult to find a tsunami source that provides accurate estimates of tsunami waveforms at every coastal site because of the complex spatiotemporal distribution of the source and the limitations of observation. To overcome this issue, we employ a site-specific source inversion approach that increases the estimation accuracy within a specific coastal site by applying appropriate weighting to the observational data in the inversion process. We applied our source inversion technique to the Tohoku tsunami and conducted inundation simulations using 5-m resolution digital elevation model (DEM) data for the coastal areas around Miyako Bay and Sendai Bay. The estimated waveforms at the coastal wave gauges of these bays agree well with the observed waveforms. However, the simulations overestimate the inundation extent, indicating the need to improve the inundation model. We find that the value of Manning's roughness coefficient should be modified from the often-used value of n = 0.025 to n = 0.033 to obtain proper results at both cities. In this presentation, the simulation results with several
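The sensitivity of modeled flow to the roughness coefficient follows directly from Manning's formula; this is a generic illustration of why raising n from 0.025 to 0.033 reduces flow speed (and hence inundation extent), not the authors' inundation code.

```python
def manning_velocity(n, hydraulic_radius, slope):
    """Manning's equation (SI units): depth-averaged velocity from the
    roughness coefficient n, hydraulic radius R (m), and energy slope S.
    V = (1/n) * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5
```

Velocity scales as 1/n, so increasing n from 0.025 to 0.033 lowers the modeled velocity by the factor 0.025/0.033 ≈ 0.76 for the same depth and slope.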

  12. Quantitative Compactness Estimates for Hamilton-Jacobi Equations

    NASA Astrophysics Data System (ADS)

    Ancona, Fabio; Cannarsa, Piermarco; Nguyen, Khai T.

    2016-02-01

We study quantitative compactness estimates in $W^{1,1}_{loc}$ for the map $S_t$, $t > 0$, that associates with given initial data $u_0 \in \mathrm{Lip}(\mathbb{R}^N)$ the corresponding solution $S_t u_0$ of a Hamilton-Jacobi equation $u_t + H(\nabla_x u) = 0$, $t \ge 0$, $x \in \mathbb{R}^N$, with a uniformly convex Hamiltonian $H = H(p)$. We provide upper and lower estimates of order $1/\varepsilon^N$ on the Kolmogorov $\varepsilon$-entropy in $W^{1,1}$ of the image through the map $S_t$ of sets of bounded, compactly supported initial data. Estimates of this type are inspired by a question posed by Lax (Course on Hyperbolic Systems of Conservation Laws. XXVII Scuola Estiva di Fisica Matematica, Ravello, 2002) within the context of conservation laws, and could provide a measure of the order of "resolution" of a numerical method implemented for this equation.

  13. Quantitative spectroscopy of hot stars: accurate atomic data applied on a large scale as driver of recent breakthroughs

    NASA Astrophysics Data System (ADS)

    Przybilla, Norbert; Schaffenroth, Veronika; Nieva, Maria-Fernanda

    2015-08-01

OB-type stars present hotbeds for non-LTE physics because of their strong radiation fields that drive the atmospheric plasma out of local thermodynamic equilibrium. We report on recent breakthroughs in the quantitative analysis of the optical and UV spectra of OB-type stars that were facilitated by the application of accurate and precise atomic data on a large scale. An astrophysicist's dream has come true: observed and model spectra are brought into close match over wide parts of the observed wavelength ranges. This allows tight observational constraints to be derived from OB-type stars for wide applications in astrophysics. However, despite the progress made, many details of the modelling may be improved further. We discuss atomic data needs in terms of laboratory measurements and also ab-initio calculations. Particular emphasis is given to quantitative spectroscopy in the near-IR, which will be in focus in the era of the upcoming extremely large telescopes.

  14. There's plenty of gloom at the bottom: the many challenges of accurate quantitation in size-based oligomeric separations.

    PubMed

    Striegel, André M

    2013-11-01

    There is a variety of small-molecule species (e.g., tackifiers, plasticizers, oligosaccharides) the size-based characterization of which is of considerable scientific and industrial importance. Likewise, quantitation of the amount of oligomers in a polymer sample is crucial for the import and export of substances into the USA and European Union (EU). While the characterization of ultra-high molar mass macromolecules by size-based separation techniques is generally considered a challenge, it is this author's contention that a greater challenge is encountered when trying to perform, for quantitation purposes, separations in and of the oligomeric region. The latter thesis is expounded herein, by detailing the various obstacles encountered en route to accurate, quantitative oligomeric separations by entropically dominated techniques such as size-exclusion chromatography, hydrodynamic chromatography, and asymmetric flow field-flow fractionation, as well as by methods which are, principally, enthalpically driven such as liquid adsorption and temperature gradient interaction chromatography. These obstacles include, among others, the diminished sensitivity of static light scattering (SLS) detection at low molar masses, the non-constancy of the response of SLS and of commonly employed concentration-sensitive detectors across the oligomeric region, and the loss of oligomers through the accumulation wall membrane in asymmetric flow field-flow fractionation. The battle is not lost, however, because, with some care and given a sufficient supply of sample, the quantitation of both individual oligomeric species and of the total oligomeric region is often possible. PMID:23887277

  15. Restriction Site Tiling Analysis: accurate discovery and quantitative genotyping of genome-wide polymorphisms using nucleotide arrays

    PubMed Central

    2010-01-01

    High-throughput genotype data can be used to identify genes important for local adaptation in wild populations, phenotypes in lab stocks, or disease-related traits in human medicine. Here we advance microarray-based genotyping for population genomics with Restriction Site Tiling Analysis. The approach simultaneously discovers polymorphisms and provides quantitative genotype data at 10,000s of loci. It is highly accurate and free from ascertainment bias. We apply the approach to uncover genomic differentiation in the purple sea urchin. PMID:20403197

  16. Accurate estimation of sea surface temperatures using dissolution-corrected calibrations for Mg/Ca paleothermometry

    NASA Astrophysics Data System (ADS)

    Rosenthal, Yair; Lohmann, George P.

    2002-09-01

Paired δ18O and Mg/Ca measurements on the same foraminiferal shells offer the ability to independently estimate sea surface temperature (SST) changes and assess their temporal relationship to the growth and decay of continental ice sheets. The accuracy of this method is confounded, however, by the absence of a quantitative method to correct Mg/Ca records for alteration by dissolution. Here we describe dissolution-corrected calibrations for Mg/Ca paleothermometry in which the preexponential constant is a function of size-normalized shell weight (wt): (1) for G. ruber (212-300 μm), (Mg/Ca)ruber = (0.025 wt + 0.11) e^(0.095T) and (2) for G. sacculifer (355-425 μm), (Mg/Ca)sacc = (0.0032 wt + 0.181) e^(0.095T). The new calibrations improve the accuracy of SST estimates and are globally applicable. With this correction, eastern equatorial Atlantic SST during the Last Glacial Maximum is estimated to be 2.9° ± 0.4°C colder than today.
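The calibrations above can be inverted for temperature: T = ln(Mg/Ca / (a·wt + b)) / 0.095. A small sketch using the coefficients quoted in the abstract; the function name and species keywords are illustrative.

```python
import math

def sst_from_mgca(mg_ca, shell_wt, species="ruber"):
    """Invert the dissolution-corrected calibration
    Mg/Ca = (a*wt + b) * exp(0.095*T) for temperature T (deg C), where
    shell_wt is the size-normalized shell weight. Coefficients are those
    quoted in the abstract."""
    if species == "ruber":          # G. ruber, 212-300 um
        a, b = 0.025, 0.11
    elif species == "sacculifer":   # G. sacculifer, 355-425 um
        a, b = 0.0032, 0.181
    else:
        raise ValueError("unknown species: %s" % species)
    return math.log(mg_ca / (a * shell_wt + b)) / 0.095
```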

  17. Highly accurate thermal flow microsensor for continuous and quantitative measurement of cerebral blood flow.

    PubMed

    Li, Chunyan; Wu, Pei-ming; Wu, Zhizhen; Limnuson, Kanokwan; Mehan, Neal; Mozayan, Cameron; Golanov, Eugene V; Ahn, Chong H; Hartings, Jed A; Narayan, Raj K

    2015-10-01

Cerebral blood flow (CBF) plays a critical role in the exchange of nutrients and metabolites at the capillary level and is tightly regulated to meet the metabolic demands of the brain. After major brain injuries, CBF normally decreases, and supporting the injured brain with adequate CBF is a mainstay of therapy after traumatic brain injury. Quantitative and localized measurement of CBF is therefore critically important for evaluation of treatment efficacy and also for understanding of cerebral pathophysiology. We present here an improved thermal flow microsensor whose operation provides higher accuracy than existing devices. The flow microsensor consists of three components: two stacked-up thin-film resistive elements serving as a composite heater/temperature sensor, and one remote resistive element for environmental temperature compensation. It operates in constant-temperature mode (~2 °C above the medium temperature), providing 20 ms temporal resolution. Compared to previous thermal flow microsensors based on a self-heating and self-sensing design, the presented sensor provides at least a two-fold improvement in accuracy in the range from 0 to 200 ml/100 g/min. This is achieved mainly by the stacked-up structure, in which heating and sensing are separated to improve temperature measurement accuracy by minimizing errors introduced by self-heating. PMID:26256480

  18. Accurate estimation of the RMS emittance from single current amplifier data

    SciTech Connect

    Stockli, Martin P.; Welton, R.F.; Keller, R.; Letchford, A.P.; Thomae, R.W.; Thomason, J.W.G.

    2002-05-31

This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with a Lawrence Berkeley National Laboratory and an ISIS H⁻ ion source.
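One SCUBEEx-style step can be sketched as follows: for a fixed exclusion boundary, treat the mean density outside as uniform background, subtract it everywhere, and evaluate the rms emittance from the inside points. This is a hedged sketch of a single step only; the full method sweeps the boundary area and looks for plateaus in the background and emittance.

```python
import numpy as np

def rms_emittance(x, xp, w):
    """rms emittance of weighted phase-space samples:
    eps = sqrt(<x^2><x'^2> - <x x'>^2), centroid-subtracted.
    Negative weights (over-subtracted background) are clipped to zero."""
    w = np.clip(w, 0, None)
    W = w.sum()
    xm, xpm = (w * x).sum() / W, (w * xp).sum() / W
    dx, dxp = x - xm, xp - xpm
    s11 = (w * dx * dx).sum() / W
    s22 = (w * dxp * dxp).sum() / W
    s12 = (w * dx * dxp).sum() / W
    return np.sqrt(max(s11 * s22 - s12 * s12, 0.0))

def scubeex_step(x, xp, d, inside):
    """Subtract the mean current density outside the exclusion boundary
    (assumed uniform background) and evaluate the rms emittance from the
    inside samples only."""
    bg = d[~inside].mean()
    d_corr = d - bg
    return bg, rms_emittance(x[inside], xp[inside], d_corr[inside])
```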

  19. Accurate estimation of motion blur parameters in noisy remote sensing image

    NASA Astrophysics Data System (ADS)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

The relative motion between a remote sensing satellite sensor and objects on the ground is one of the most common causes of remote sensing image degradation. It seriously weakens image interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, and identifying the motion blur direction and length accurately is crucial for estimating the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters via the Radon transform. However, the serious noise present in actual remote sensing images often obscures the stripes, making the parameters difficult to calculate and the resulting errors relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.

  20. Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data.

    PubMed

    Schütt, Heiko H; Harmeling, Stefan; Macke, Jakob H; Wichmann, Felix A

    2016-05-01

The psychometric function describes how an experimental variable, such as stimulus strength, influences the behaviour of an observer. Estimation of psychometric functions from experimental data plays a central role in fields such as psychophysics, experimental psychology and the behavioural neurosciences. Experimental data may exhibit substantial overdispersion, which may result from non-stationarity in the behaviour of observers. Here we extend the standard binomial model, which is typically used for psychometric function estimation, to a beta-binomial model. We show that the beta-binomial model makes it possible to determine accurate credible intervals even in data which exhibit substantial overdispersion. This goes beyond classical measures for overdispersion, such as goodness-of-fit, which can detect overdispersion but provide no method for correct inference on overdispersed data. We use Bayesian inference methods for estimating the posterior distribution of the parameters of the psychometric function. Unlike previous Bayesian psychometric inference methods, our software implementation, psignifit 4, performs numerical integration of the posterior within automatically determined bounds. This avoids the use of Markov chain Monte Carlo (MCMC) methods, which typically require expert knowledge. Extensive numerical tests show the validity of the approach, and we discuss implications of overdispersion for experimental design. A comprehensive MATLAB toolbox implementing the method is freely available; a python implementation providing the basic capabilities is also available. PMID:27013261
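The beta-binomial likelihood underlying such a model can be sketched in a mean/overdispersion parameterization (this parameterization is an assumption for illustration; psignifit 4's internal parameterization may differ).

```python
import math

def betabinom_logpmf(k, n, mu, rho):
    """Log-pmf of the beta-binomial distribution: mu in (0,1) is the mean
    success probability, rho in (0,1) controls extra-binomial variability
    (rho -> 0 recovers the plain binomial). Uses the Beta(a, b) mixing
    density with a + b = (1 - rho)/rho and a/(a + b) = mu."""
    a = mu * (1 - rho) / rho
    b = (1 - mu) * (1 - rho) / rho
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))
```

Replacing the binomial likelihood with this one widens credible intervals when trial outcomes are more variable than a binomial model allows.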

  1. Accurate estimation of human body orientation from RGB-D sensors.

    PubMed

    Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao

    2013-10-01

Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the wide variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes, and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on RGB-D superpixels to reduce the noise of depth data. Since it is hard to discriminate all 360° of orientation using static cues or motion cues independently, we propose a dynamic Bayesian network system (DBNS) to effectively exploit the complementary nature of both static and motion cues. To verify the proposed method, we build an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method. PMID:23893759

  2. Allele-Specific Quantitative PCR for Accurate, Rapid, and Cost-Effective Genotyping.

    PubMed

    Lee, Han B; Schwab, Tanya L; Koleilat, Alaa; Ata, Hirotaka; Daby, Camden L; Cervera, Roberto Lopez; McNulty, Melissa S; Bostwick, Hannah S; Clark, Karl J

    2016-06-01

    Customizable endonucleases such as transcription activator-like effector nucleases (TALENs) and clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) enable rapid generation of mutant strains at genomic loci of interest in animal models and cell lines. With the accelerated pace of generating mutant alleles, genotyping has become a rate-limiting step to understanding the effects of genetic perturbation. Unless mutated alleles result in distinct morphological phenotypes, mutant strains need to be genotyped using standard methods in molecular biology. Classic restriction fragment length polymorphism (RFLP) or sequencing is labor-intensive and expensive. Although simpler than RFLP, current versions of allele-specific PCR may still require post-polymerase chain reaction (PCR) handling such as sequencing, or they are more expensive if allele-specific fluorescent probes are used. Commercial genotyping solutions can take weeks from assay design to result, and are often more expensive than assembling reactions in-house. Key components of commercial assay systems are often proprietary, which limits further customization. Therefore, we developed a one-step open-source genotyping method based on quantitative PCR. The allele-specific qPCR (ASQ) does not require post-PCR processing and can genotype germline mutants through either threshold cycle (Ct) or end-point fluorescence reading. ASQ utilizes allele-specific primers, a locus-specific reverse primer, universal fluorescent probes and quenchers, and hot start DNA polymerase. Individual laboratories can further optimize this open-source system as we completely disclose the sequences, reagents, and thermal cycling protocol. We have tested the ASQ protocol to genotype alleles in five different genes. ASQ showed a 98-100% concordance in genotype scoring with RFLP or Sanger sequencing outcomes. ASQ is time-saving because a single qPCR without post-PCR handling suffices to score

  3. Allele-Specific Quantitative PCR for Accurate, Rapid, and Cost-Effective Genotyping

    PubMed Central

    Lee, Han B.; Schwab, Tanya L.; Koleilat, Alaa; Ata, Hirotaka; Daby, Camden L.; Cervera, Roberto Lopez; McNulty, Melissa S.; Bostwick, Hannah S.; Clark, Karl J.

    2016-01-01

    Customizable endonucleases such as transcription activator-like effector nucleases (TALENs) and clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) enable rapid generation of mutant strains at genomic loci of interest in animal models and cell lines. With the accelerated pace of generating mutant alleles, genotyping has become a rate-limiting step to understanding the effects of genetic perturbation. Unless mutated alleles result in distinct morphological phenotypes, mutant strains need to be genotyped using standard methods in molecular biology. Classic restriction fragment length polymorphism (RFLP) or sequencing is labor-intensive and expensive. Although simpler than RFLP, current versions of allele-specific PCR may still require post-polymerase chain reaction (PCR) handling such as sequencing, or they are more expensive if allele-specific fluorescent probes are used. Commercial genotyping solutions can take weeks from assay design to result, and are often more expensive than assembling reactions in-house. Key components of commercial assay systems are often proprietary, which limits further customization. Therefore, we developed a one-step open-source genotyping method based on quantitative PCR. The allele-specific qPCR (ASQ) does not require post-PCR processing and can genotype germline mutants through either threshold cycle (Ct) or end-point fluorescence reading. ASQ utilizes allele-specific primers, a locus-specific reverse primer, universal fluorescent probes and quenchers, and hot start DNA polymerase. Individual laboratories can further optimize this open-source system as we completely disclose the sequences, reagents, and thermal cycling protocol. We have tested the ASQ protocol to genotype alleles in five different genes. ASQ showed a 98–100% concordance in genotype scoring with RFLP or Sanger sequencing outcomes. ASQ is time-saving because a single qPCR without post-PCR handling suffices to score

  4. Quick and accurate estimation of the elastic constants using the minimum image method

    NASA Astrophysics Data System (ADS)

    Tretiakov, Konstantin V.; Wojciechowski, Krzysztof W.

    2015-04-01

A method for determining the elastic properties using the minimum image method (MIM) is proposed and tested on a model system of particles interacting through the Lennard-Jones (LJ) potential. The elastic constants of the LJ system are determined in the thermodynamic limit, N → ∞, using the Monte Carlo (MC) method in the NVT and NPT ensembles. The simulation results show that the contribution of long-range interactions cannot be ignored when determining the elastic constants, because ignoring it leads to erroneous results. In addition, the simulations reveal that including the further interactions of each particle with all its minimum-image neighbors, even for small systems, yields results very close to the values of the elastic constants in the thermodynamic limit. This enables quick and accurate estimation of the elastic constants using very small samples.
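The minimum image convention at the core of this approach can be sketched as follows for a cubic periodic box, applied to Lennard-Jones pair energies. This is a generic illustration, not the authors' MC code.

```python
import numpy as np

def minimum_image_disp(r_ij, box):
    """Displacement vector under the minimum image convention for a cubic
    (or orthorhombic) periodic box: shift each component to the nearest
    periodic image."""
    return r_ij - box * np.round(r_ij / box)

def lj_energy(positions, box, eps=1.0, sigma=1.0):
    """Total Lennard-Jones energy using minimum-image pair distances, so
    each particle interacts with the nearest image of every other particle
    (no cutoff), in the spirit of the MIM approach above."""
    n = len(positions)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = minimum_image_disp(positions[i] - positions[j], box)
            r2 = float(d @ d)
            inv6 = (sigma * sigma / r2) ** 3
            e += 4.0 * eps * (inv6 * inv6 - inv6)
    return e
```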

  5. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    PubMed

    Saccà, Alessandro

    2016-01-01

Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
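For a spheroid viewed side-on, the sphere-cylinder relation gives V = (2/3)·A·w, with A the measured cross-sectional area and w the minor-axis width, which is exact for spheres and spheroids. A sketch of that core relation follows; treating the 'unellipticity' coefficient as a simple multiplicative correction is an assumption here (see the paper for its actual definition).

```python
import math

def biovolume_2d(area, minor_axis, unellipticity=1.0):
    """Biovolume estimate from a 2D cell image: by Archimedes' relation a
    spheroid occupies 2/3 of the circumscribing cylinder with the same
    cross-section, so V = (2/3) * A * w. The unellipticity factor
    (hypothetical form) corrects non-elliptic outlines."""
    return (2.0 / 3.0) * area * minor_axis * unellipticity
```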

  6. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms

    PubMed Central

    2016-01-01

Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes’ principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of ‘unellipticity’ introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667

  7. Accurate Estimation of the Fine Layering Effect on the Wave Propagation in the Carbonate Rocks

    NASA Astrophysics Data System (ADS)

    Bouchaala, F.; Ali, M. Y.

    2014-12-01

The attenuation experienced by a seismic wave during its propagation can be divided mainly into two parts: scattering and intrinsic attenuation. Scattering is an elastic redistribution of energy due to medium heterogeneities, whereas intrinsic attenuation is an inelastic phenomenon, mainly due to fluid-grain friction during the wave's passage. Intrinsic attenuation is directly related to the physical characteristics of the medium, so this parameter can be used for media characterization and fluid detection, which is beneficial for the oil and gas industry. Intrinsic attenuation is estimated by subtracting the scattering from the total attenuation; therefore its accuracy depends directly on the accuracy of the total attenuation and of the scattering. The total attenuation can be estimated from the recorded waves by using in-situ methods such as the spectral ratio and frequency shift methods. The scattering is estimated by treating the heterogeneities as a succession of stacked layers, each characterized by a single density and velocity. The accuracy of the scattering estimate depends strongly on the layer thicknesses, especially for media composed of carbonate rocks, which are known for their strong heterogeneity. Previous studies proposed assumptions for the choice of layer thickness, but these showed limitations, especially for carbonate rocks. In this study we established a relationship between the layer thicknesses and the propagation frequency, after some mathematical development of the generalized O'Doherty-Anstey formula. We validated this relationship through synthetic tests and real data from a VSP carried out over an onshore oilfield in the emirate of Abu Dhabi in the United Arab Emirates, primarily composed of carbonate rocks. The results showed the utility of our relationship for an accurate estimation of the scattering
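The spectral ratio method mentioned above estimates total attenuation from the slope of the log spectral ratio versus frequency, since ln(A2/A1) = -π·f·Δt/Q + const for travel-time difference Δt. A generic sketch, not the authors' implementation:

```python
import numpy as np

def q_from_spectral_ratio(freqs, amp1, amp2, dt):
    """Estimate the quality factor Q between two receivers from amplitude
    spectra amp1 (upstream) and amp2 (downstream) separated by travel time
    dt: fit ln(amp2/amp1) = -pi * f * dt / Q + const and invert the slope."""
    slope, _ = np.polyfit(freqs, np.log(amp2 / amp1), 1)
    return -np.pi * dt / slope
```

The intercept absorbs frequency-independent effects (geometrical spreading, coupling), which is why only the slope carries the attenuation information.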

  8. Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry

    NASA Astrophysics Data System (ADS)

    van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.

    2016-03-01

Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle (θ < 20°). These methods enable accurate depth estimation of surgical tools with respect to anatomical structures. However, they are computationally expensive and time consuming, rendering them unattractive for image-guided interventions. We propose an alternative approach for depth estimation of biopsy needles during image-guided interventions, in which we split the problem into two parts and solve them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of these two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for an angle larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.
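The triangulation step from point-to-point correspondences can be sketched with standard linear (DLT) triangulation from two projection matrices; this is a generic multi-view-geometry formulation, not the authors' exact pipeline.

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation: recover the 3D point X from its pixel
    coordinates u1, u2 in two views with 3x4 projection matrices P1, P2.
    Each view contributes two rows of the homogeneous system A X = 0; the
    solution is the right singular vector of the smallest singular value."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

In the needle application, u1 and u2 would be corresponding needle-tip detections in two projection images, and P1, P2 the calibrated C-arm geometries.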

  9. Probabilistic quantitative precipitation estimation using dual polarization radar measurements

    NASA Astrophysics Data System (ADS)

    Lim, S.; Noh, S.; Lee, D.

    2013-12-01

    Weather radars have become a popular tool for meteorological applications such as quantitative precipitation estimation (QPE) with high spatiotemporal resolution. In particular, QPE performance has improved over the last decade with the introduction of polarimetric technology. However, QPEs using dual polarization radar data are still subject to uncertainties resulting from rainfall conversion relationships, methods of combining different parameters, and sampling errors. Deterministic QPE, typically based on a decision-tree method, ignores such uncertainties, which degrades performance in hydrologic flood forecasting. Probabilistic precipitation models provide an alternative framework for QPE that captures temporal and spatial variations of uncertainty. In this study, we propose a probabilistic QPE method based on dual polarization radar measurements and data assimilation. The proposed method builds QPE ensembles from different parameters of a polarimetric radar, accounting for the uncertainty of conversion equations and rainfall parameters. Ground observations are assimilated with the QPE ensembles at each measurement time step. Rejection sampling based on Bayesian filtering is implemented to estimate the posterior distribution of QPE and to compare multiple models. The strength of the proposed method is that it can improve the accuracy of QPE compared to deterministic QPE, identify the uncertainty of QPE, and provide sound spatial precipitation fields including error structure, which is essential for hydrological data assimilation to improve flood forecasting. Real experiments demonstrate the applicability of this method using an S-band dual polarization radar located on Mt. Biseul, Korea. The discussion focuses on the analysis of multi-model selection results by Bayesian filtering and a comparison of accuracy between deterministic and probabilistic QPE methods.
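    The rejection-sampling step can be sketched as follows, under simplifying assumptions not stated in the abstract: a one-dimensional rainfall ensemble and a Gaussian observation likelihood. Function and parameter names are hypothetical.

```python
import math
import random

def rejection_sample_qpe(ensemble, gauge_obs, obs_sigma, n_samples=500, seed=0):
    """Rejection sampling of a rainfall ensemble against a gauge observation.

    Each ensemble member is accepted with probability proportional to its
    Gaussian likelihood given the ground observation; accepted members
    approximate the posterior rainfall distribution at that time step.
    """
    rng = random.Random(seed)
    lik = [math.exp(-0.5 * ((m - gauge_obs) / obs_sigma) ** 2) for m in ensemble]
    lik_max = max(lik)
    posterior = []
    while len(posterior) < n_samples:
        i = rng.randrange(len(ensemble))
        if rng.random() < lik[i] / lik_max:   # accept/reject step
            posterior.append(ensemble[i])
    return posterior
```

    Ensemble members whose rainfall conversion best matches the gauge dominate the posterior, which is how the method both weights competing conversion relationships and quantifies the remaining spread.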

  10. Development and Validation of a Highly Accurate Quantitative Real-Time PCR Assay for Diagnosis of Bacterial Vaginosis.

    PubMed

    Hilbert, David W; Smith, William L; Chadwick, Sean G; Toner, Geoffrey; Mordechai, Eli; Adelson, Martin E; Aguin, Tina J; Sobel, Jack D; Gygax, Scott E

    2016-04-01

    Bacterial vaginosis (BV) is the most common gynecological infection in the United States. Diagnosis based on Amsel's criteria can be challenging and can be aided by laboratory-based testing. A standard method for diagnosis in research studies is enumeration of bacterial morphotypes of a Gram-stained vaginal smear (i.e., Nugent scoring). However, this technique is subjective, requires specialized training, and is not widely available. Therefore, a highly accurate molecular assay for the diagnosis of BV would be of great utility. We analyzed 385 vaginal specimens collected prospectively from subjects who were evaluated for BV by clinical signs and Nugent scoring. We analyzed quantitative real-time PCR (qPCR) assays on DNA extracted from these specimens to quantify nine organisms associated with vaginal health or disease: Gardnerella vaginalis, Atopobium vaginae, BV-associated bacteria 2 (BVAB2, an uncultured member of the order Clostridiales), Megasphaera phylotype 1 or 2, Lactobacillus iners, Lactobacillus crispatus, Lactobacillus gasseri, and Lactobacillus jensenii. We generated a logistic regression model that identified G. vaginalis, A. vaginae, and Megasphaera phylotypes 1 and 2 as the organisms for which quantification provided the most accurate diagnosis of symptomatic BV, as defined by Amsel's criteria and Nugent scoring, with 92% sensitivity, 95% specificity, 94% positive predictive value, and 94% negative predictive value. The inclusion of Lactobacillus spp. did not contribute sufficiently to the quantitative model for symptomatic BV detection. This molecular assay is a highly accurate laboratory tool to assist in the diagnosis of symptomatic BV. PMID:26818677
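    The reported 92% sensitivity, 95% specificity, and 94% positive/negative predictive values all derive from a 2x2 confusion matrix against the reference standard; a minimal sketch of those definitions (the counts below are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic accuracy metrics from confusion-matrix counts.

    tp/fp/tn/fn: true/false positives and negatives against the
    reference standard (here, Amsel's criteria plus Nugent scoring).
    """
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```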

  11. Can student health professionals accurately estimate alcohol content in commonly occurring drinks?

    PubMed Central

    Sinclair, Julia; Searle, Emma

    2016-01-01

    Objectives: Correct identification of alcohol as a contributor to, or comorbidity of, many psychiatric diseases requires health professionals to be competent and confident in taking an accurate alcohol history. Being able to estimate (or calculate) the alcohol content of commonly consumed drinks is a prerequisite for quantifying levels of alcohol consumption. The aim of this study was to assess this ability in medical and nursing students. Methods: A cross-sectional survey of 891 medical and nursing students across different years of training was conducted. Students were asked the alcohol content of 10 different alcoholic drinks after seeing a slide of each drink (with picture, volume and percentage of alcohol by volume) for 30 s. Results: Overall, the mean number of correctly estimated drinks (out of the 10 tested) was 2.4, increasing to just over 3 if a 10% margin of error was allowed. Wine and premium-strength beers were underestimated by over 50% of students. Those who drank alcohol themselves, or who were further on in their clinical training, did better on the task, but overall the levels remained low. Conclusions: Knowledge of, or the ability to work out, the alcohol content of commonly consumed drinks is poor, and further research is needed to understand the reasons for this and the impact this may have on the likelihood of undertaking screening or initiating treatment. PMID:27536344
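    The underlying arithmetic the students were asked to perform is simple. A minimal sketch using UK alcohol units (one unit = 10 ml of pure ethanol); the unit system is an assumption here, since the abstract does not name one:

```python
def alcohol_units(volume_ml, abv_percent):
    """UK alcohol units in a drink: one unit is 10 ml (8 g) of pure ethanol.

    units = volume (ml) x ABV (%) / 1000
    """
    return volume_ml * abv_percent / 1000.0
```

    For example, a 250 ml glass of 13% ABV wine contains 3.25 units, which is the kind of value the surveyed students frequently underestimated.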

  12. Ultrasound Fetal Weight Estimation: How Accurate Are We Now Under Emergency Conditions?

    PubMed

    Dimassi, Kaouther; Douik, Fatma; Ajroudi, Mariem; Triki, Amel; Gara, Mohamed Faouzi

    2015-10-01

    The primary aim of this study was to evaluate the accuracy of sonographic estimation of fetal weight when performed at due date by first-line sonographers. This was a prospective study including 500 singleton pregnancies. Ultrasound examinations were performed by residents on delivery day. Estimated fetal weights (EFWs) were calculated and compared with the corresponding birth weights. The median absolute difference between EFW and birth weight was 200 g (100-330). This difference was within ±10% in 75.2% of the cases. The median absolute percentage error was 5.53% (2.70%-10.03%). Linear regression analysis revealed a good correlation between EFW and birth weight (r = 0.79, p < 0.0001). According to Bland-Altman analysis, bias was -85.06 g (95% limits of agreement: -663.33 to 494.21). In conclusion, EFWs calculated by residents were as accurate as those calculated by experienced sonographers. Nevertheless, predictive performance remains limited, with a low sensitivity in the diagnosis of macrosomia. PMID:26164286

  13. Ocean Lidar Measurements of Beam Attenuation and a Roadmap to Accurate Phytoplankton Biomass Estimates

    NASA Astrophysics Data System (ADS)

    Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray

    2016-06-01

    Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of the diffuse attenuation Kd and the beam attenuation c (i.e., a multiple scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter are measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.

  14. Discrete state model and accurate estimation of loop entropy of RNA secondary structures.

    PubMed

    Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie

    2008-03-28

    Conformational entropy makes an important contribution to the stability and folding of RNA molecules, but it is challenging to either measure or compute the conformational entropy associated with long loops. We develop optimized discrete k-state models of the RNA backbone, based on known RNA structures, for computing the entropy of loops, which are modeled as self-avoiding walks. To estimate the entropy of hairpin, bulge, internal, and multibranch loops of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method accounts for the excluded volume effect. It is general and can be applied to calculating the entropy of loops of longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and experimental measurements. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on the estimated entropy, we have developed empirical formulae for accurate calculation of the entropy of long loops in different secondary structures. Our study of the effect of asymmetric loop sizes suggests that the entropy of internal loops is largely determined by the total loop length and is only marginally affected by the asymmetry of the two loops. This finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html. PMID:18376982
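    The Jacobson-Stockmayer extrapolation mentioned above has a standard logarithmic form; a minimal sketch, assuming the commonly tabulated coefficient of ~1.75 and a free-energy (rather than entropy) parameterization, which may differ from the paper's exact convention:

```python
import math

R_KCAL = 1.98717e-3  # gas constant in kcal/(mol*K)

def loop_penalty(n, n_ref, dg_ref, temp_k=310.15, coeff=1.75):
    """Jacobson-Stockmayer extrapolation of a loop free-energy penalty.

    dG(n) = dG(n_ref) + coeff * R * T * ln(n / n_ref)
    extrapolates from a tabulated reference loop of length n_ref to
    longer (or shorter) loops of length n.
    """
    return dg_ref + coeff * R_KCAL * temp_k * math.log(n / n_ref)
```

    The paper's finding is that this purely logarithmic growth holds well for hairpins but breaks down for bulge, internal, and multibranch loops, motivating their empirical formulae.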

  15. Quantitative estimation of source complexity in tsunami-source inversion

    NASA Astrophysics Data System (ADS)

    Dettmer, Jan; Cummins, Phil R.; Hawkins, Rhys; Jakir Hossen, M.

    2016-04-01

    This work analyses tsunami waveforms to infer the spatiotemporal evolution of sea-surface displacement (the tsunami source) caused by earthquakes or other sources. Since the method considers sea-surface displacement directly, no assumptions about the fault or seafloor deformation are required. While this approach has no ability to study seismic aspects of rupture, it greatly simplifies the tsunami source estimation, making it much less dependent on subjective fault and deformation assumptions. This results in a more accurate sea-surface displacement evolution in the source region. The spatial discretization is by wavelet decomposition represented by a trans-D Bayesian tree structure. Wavelet coefficients are sampled by a reversible jump algorithm and additional coefficients are only included when required by the data. Therefore, source complexity is consistent with data information (parsimonious) and the method can adapt locally in both time and space. Since the source complexity is unknown and locally adapts, no regularization is required, resulting in more meaningful displacement magnitudes. By estimating displacement uncertainties in a Bayesian framework we can study the effect of parametrization choice on the source estimate. Uncertainty arises from observation errors and limitations in the parametrization to fully explain the observations. As a result, parametrization choice is closely related to uncertainty estimation and profoundly affects inversion results. Therefore, parametrization selection should be included in the inference process. Our inversion method is based on Bayesian model selection, a process which includes the choice of parametrization in the inference process and makes it data driven. A trans-dimensional (trans-D) model for the spatio-temporal discretization is applied here to include model selection naturally and efficiently in the inference by sampling probabilistically over parameterizations. The trans-D process results in better

  16. Wavelet prism decomposition analysis applied to CARS spectroscopy: a tool for accurate and quantitative extraction of resonant vibrational responses.

    PubMed

    Kan, Yelena; Lensu, Lasse; Hehl, Gregor; Volkmer, Andreas; Vartiainen, Erik M

    2016-05-30

    We propose an approach, based on wavelet prism decomposition analysis, for correcting experimental artefacts in a coherent anti-Stokes Raman scattering (CARS) spectrum. This method allows estimating and eliminating a slowly varying modulation error function in the measured normalized CARS spectrum and yields a corrected CARS line-shape. The main advantage of the approach is that the spectral phase and amplitude corrections are avoided in the retrieved Raman line-shape spectrum, thus significantly simplifying the quantitative reconstruction of the sample's Raman response from a normalized CARS spectrum in the presence of experimental artefacts. Moreover, the approach obviates the need for assumptions about the modulation error distribution and the chemical composition of the specimens under study. The method is quantitatively validated on normalized CARS spectra recorded for equimolar aqueous solutions of D-fructose, D-glucose, and their disaccharide combination sucrose. PMID:27410113

  17. Simple, fast, and accurate methodology for quantitative analysis using Fourier transform infrared spectroscopy, with bio-hybrid fuel cell examples

    PubMed Central

    Mackie, David M.; Jahnke, Justin P.; Benyamin, Marcus S.; Sumner, James J.

    2016-01-01

    The standard methodologies for quantitative analysis (QA) of mixtures using Fourier transform infrared (FTIR) instruments have evolved until they are now more complicated than necessary for many users’ purposes. We present a simpler methodology, suitable for widespread adoption of FTIR QA as a standard laboratory technique across disciplines by occasional users.
    • The algorithm is straightforward and intuitive, yet fast, accurate, and robust.
    • It relies on component spectra, minimization of errors, and local adaptive mesh refinement.
    • It was tested successfully on real mixtures of up to nine components.
    We show that our methodology is robust to challenging experimental conditions such as similar substances, component percentages differing by three orders of magnitude, and imperfect (noisy) spectra. As examples, we analyze biological, chemical, and physical aspects of bio-hybrid fuel cells. PMID:26977411

  18. Accurate Visual Heading Estimation at High Rotation Rate Without Oculomotor or Static-Depth Cues

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    It has been claimed that either oculomotor or static depth cues provide the signals about self-rotation necessary for accurate heading estimation at rotation rates above approximately 1 deg/s. We tested this hypothesis by simulating self-motion along a curved path with the eyes fixed in the head (plus or minus 16 deg/s of rotation). Curvilinear motion offers two advantages: 1) heading remains constant in retinotopic coordinates, and 2) there is no visual-oculomotor conflict (both actual and simulated eye position remain stationary). We simulated 400 ms of rotation combined with 16 m/s of translation at fixed angles with respect to gaze towards two vertical planes of random dots initially 12 and 24 m away, with a field of view of 45 degrees. Four subjects were asked to fixate a central cross and to respond whether they were translating to the left or right of straight-ahead gaze. From the psychometric curves, heading bias (mean) and precision (semi-interquartile range) were derived. The mean bias over 2-5 runs was 3.0, 4.0, -2.0, and -0.4 deg for the first author and three naive subjects, respectively (positive indicating towards the rotation direction). The mean precision was 2.0, 1.9, 3.1, and 1.6 deg, respectively. The ability of observers to make relatively accurate and precise heading judgments, despite the large rotational flow component, refutes the view that extra-flow-field information is necessary for human visual heading estimation at high rotation rates. Our results support models that process combined translational/rotational flow to estimate heading, but should not be construed to suggest that other cues do not play an important role when they are available to the observer.

  19. Comparative Application of PLS and PCR Methods to Simultaneous Quantitative Estimation and Simultaneous Dissolution Test of Zidovudine - Lamivudine Tablets.

    PubMed

    Üstündağ, Özgür; Dinç, Erdal; Özdemir, Nurten; Tilkan, M Günseli

    2015-01-01

    In the development of new drug products and generic drug products, the simultaneous in-vitro dissolution behavior of oral dosage formulations is the most important indicator for the quantitative estimation of the efficiency and biopharmaceutical characteristics of drug substances. This compels scientists in the field to develop more powerful analytical methods to obtain more reliable, precise and accurate results in the quantitative analysis and dissolution testing of drug formulations. In this context, two chemometric tools, partial least squares (PLS) and principal component regression (PCR), were developed for the simultaneous quantitative estimation and dissolution testing of zidovudine (ZID) and lamivudine (LAM) in a tablet dosage form. The results obtained in this study strongly encourage us to use these methods for the quality control, routine analysis and dissolution testing of marketed tablets containing the drugs ZID and LAM. PMID:26085428

  20. Effect of Volume-of-Interest Misregistration on Quantitative Planar Activity and Dose Estimation

    PubMed Central

    Song, N.; He, B.; Frey, E. C.

    2010-01-01

    In targeted radionuclide therapy (TRT), dose estimation is essential for treatment planning and tumor dose-response studies. Dose estimates are typically based on a time series of whole-body conjugate-view planar or SPECT scans of the patient acquired after administration of a planning dose. Quantifying the activity in the organs from these studies is an essential part of dose estimation. The Quantitative Planar (QPlanar) processing method involves accurate compensation for image-degrading factors and correction for organ and background overlap via the combination of computational models of the image formation process and 3D volumes of interest (VOIs) defining the organs to be quantified. When the organ VOIs are accurately defined, the method intrinsically compensates for attenuation, scatter, and partial volume effects, as well as overlap with other organs and the background. However, alignment between the 3D organ VOIs used in QPlanar processing and the true organ projections in the planar images is required. The goal of this research was to study the effects of VOI misregistration on the accuracy and precision of organ activity estimates obtained using the QPlanar method. In this work, we modeled the degree of residual misregistration that would be expected after an automated registration procedure by randomly misaligning 3D SPECT/CT images, from which the VOI information was derived, and planar images. Mutual information based image registration was used to align the realistic simulated 3D SPECT images with the 2D planar images. The residual image misregistration was used to simulate realistic levels of misregistration and allow investigation of the effects of misregistration on the accuracy and precision of the QPlanar method. We observed that accurate registration is especially important for small organs or ones with low activity concentrations compared to neighboring organs. In addition, residual misregistration gave rise to a loss of precision

  1. A method for quantitatively estimating diffuse and discrete hydrothermal discharge

    NASA Astrophysics Data System (ADS)

    Baker, Edward T.; Massoth, Gary J.; Walker, Sharon L.; Embley, Robert W.

    1993-07-01

    Submarine hydrothermal fluids discharge as undiluted, high-temperature jets and as diffuse, highly diluted, low-temperature percolation. Estimates of the relative contribution of each discharge type, which are important for the accurate determination of local and global hydrothermal budgets, are difficult to obtain directly. In this paper we describe a new method of using measurements of hydrothermal tracers such as Fe/Mn, Fe/heat, and Mn/heat in high-temperature fluids, low-temperature fluids, and the neutrally buoyant plume to deduce the relative contribution of each discharge type. We sampled vent fluids from the north Cleft vent field on the Juan de Fuca Ridge in 1988, 1989 and 1991, and plume samples every year from 1986 to 1991. The tracers were, on average, 3 to 90 times greater in high-temperature than in low-temperature fluids, with plume values intermediate. A mixing model calculates that high-temperature fluids contribute only ˜ 3% of the fluid mass flux but > 90% of the hydrothermal Fe and > 60% of the hydrothermal Mn to the overlying plume. Three years of extensive camera-CTD sled tows through the vent field show that diffuse venting is restricted to a narrow fissure zone extending for 18 km along the axial strike. Linear plume theory applied to the temperature plumes detected when the sled crossed this zone yields a maximum likelihood estimate for the diffuse heat flux of 8.9 × 10^4 W/m, for a total flux of 534 MW, considering that diffuse venting is active along only one-third of the fissure system. For mean low- and high-temperature discharge of 25°C and 319°C, respectively, the discrete heat flux must be 266 MW to satisfy the mass flux partitioning. If the north Cleft vent field is globally representative, the assumption that high-temperature discharge dominates the mass flux in axial vent fields leads to an overestimation of the flux of many non-conservative hydrothermal species by about an order of magnitude.
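    The two-endmember mixing calculation behind the ~3% mass-flux figure can be sketched as follows; the function name and tracer values are illustrative, not the paper's data:

```python
def high_temp_fraction(c_plume, c_high, c_low):
    """Mass fraction of high-temperature fluid in a two-endmember mixture.

    Solves c_plume = f * c_high + (1 - f) * c_low for f, where c is any
    conservative tracer ratio (e.g. Fe/heat) in each fluid type.
    """
    f = (c_plume - c_low) / (c_high - c_low)
    if not 0.0 <= f <= 1.0:
        raise ValueError("plume tracer value lies outside the endmember range")
    return f
```

    Because the high-temperature endmember tracer values are so much larger than the low-temperature ones, even a small mass fraction of discrete discharge can dominate the plume's Fe and Mn budgets.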

  2. Skin Temperature Over the Carotid Artery, an Accurate Non-invasive Estimation of Near Core Temperature

    PubMed Central

    Imani, Farsad; Karimi Rouzbahani, Hamid Reza; Goudarzi, Mehrdad; Tarrahi, Mohammad Javad; Ebrahim Soltani, Alireza

    2016-01-01

    Background: During anesthesia, continuous body temperature monitoring is essential, especially in children. Anesthesia can increase the risk of loss of body temperature by three to four times. Hypothermia in children results in increased morbidity and mortality. Since the measurement points of the core body temperature are not easily accessible, near-core sites, like the rectum, are used. Objectives: The purpose of this study was to measure the skin temperature over the carotid artery and compare it with the rectal temperature, in order to propose a model for accurate estimation of near-core body temperature. Patients and Methods: In total, 124 patients within the age range of 2 - 6 years, undergoing elective surgery, were selected. The temperature of the rectum and of the skin over the carotid artery was measured. The patients were then randomly divided into two groups (each including 62 subjects), namely the modeling group (MG) and the validation group (VG). First, in the modeling group, the average temperatures of the rectum and of the skin over the carotid artery were measured separately. The appropriate model was determined according to the significance of the model’s coefficients. The obtained model was used to predict the rectal temperature in the second group (VG). The correlation of the predicted values with the real values (the measured rectal temperature) in the second group was investigated, and the difference in the average values of the two groups was examined for significance. Results: In the modeling group, the average rectal and carotid temperatures were 36.47 ± 0.54°C and 35.45 ± 0.62°C, respectively. The final model was: Rectum temperature = 0.561 × carotid temperature + 16.583. The predicted value was calculated based on the regression model and then compared with the measured rectal value, which showed no significant difference (P = 0.361). Conclusions: The present study was the first in which rectal temperature was compared with that
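    The reported linear model can be applied directly; a minimal sketch using the coefficients quoted in the abstract (the function name is ours):

```python
def predict_rectal_temp(carotid_skin_temp_c):
    """Rectal temperature (deg C) predicted from the skin temperature over
    the carotid artery, using the regression reported in the abstract:
        rectal = 0.561 * carotid + 16.583
    """
    return 0.561 * carotid_skin_temp_c + 16.583
```

    At the modeling group's mean carotid temperature of 35.45°C this predicts roughly 36.47°C, consistent with the reported mean rectal temperature.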

  3. Comparison of the scanning linear estimator (SLE) and ROI methods for quantitative SPECT imaging

    NASA Astrophysics Data System (ADS)

    Könik, Arda; Kupinski, Meredith; Hendrik Pretorius, P.; King, Michael A.; Barrett, Harrison H.

    2015-08-01

    In quantitative emission tomography, tumor activity is typically estimated from calculations on a region of interest (ROI) identified in the reconstructed slices. In these calculations, unpredictable bias arising from the null functions of the imaging system affects ROI estimates. The magnitude of this bias depends upon the tumor size and location. In prior work it has been shown that the scanning linear estimator (SLE), which operates on the raw projection data, is an unbiased estimator of activity when the size and location of the tumor are known. In this work, we performed analytic simulation of SPECT imaging with a parallel-hole medium-energy collimator. Distance-dependent system spatial resolution and non-uniform attenuation were included in the imaging simulation. We compared the task of activity estimation by the ROI and SLE methods for a range of tumor sizes (diameter: 1-3 cm) and activities (contrast ratio: 1-10) added to uniform and non-uniform liver backgrounds. Using the correct value for the tumor shape and location is an idealized approximation to how task estimation would occur clinically. Thus we determined how perturbing this idealized prior knowledge impacted the performance of both techniques. To implement the SLE for the non-uniform background, we used a novel iterative algorithm for pre-whitening stationary noise within a compact region. Estimation task performance was compared using the ensemble mean-squared error (EMSE) as the criterion. The SLE method performed substantially better than the ROI method (i.e. EMSE(SLE) was 23-174 times lower) when the background is uniform and tumor location and size are known accurately. The variance of the SLE increased when a non-uniform liver texture was introduced but the EMSE(SLE) continued to be 5-20 times lower than the ROI method. In summary, SLE outperformed ROI under almost all conditions that we tested.

  4. Comparison of the scanning linear estimator (SLE) and ROI methods for quantitative SPECT imaging.

    PubMed

    Könik, Arda; Kupinski, Meredith; Pretorius, P Hendrik; King, Michael A; Barrett, Harrison H

    2015-08-21

    In quantitative emission tomography, tumor activity is typically estimated from calculations on a region of interest (ROI) identified in the reconstructed slices. In these calculations, unpredictable bias arising from the null functions of the imaging system affects ROI estimates. The magnitude of this bias depends upon the tumor size and location. In prior work it has been shown that the scanning linear estimator (SLE), which operates on the raw projection data, is an unbiased estimator of activity when the size and location of the tumor are known. In this work, we performed analytic simulation of SPECT imaging with a parallel-hole medium-energy collimator. Distance-dependent system spatial resolution and non-uniform attenuation were included in the imaging simulation. We compared the task of activity estimation by the ROI and SLE methods for a range of tumor sizes (diameter: 1-3 cm) and activities (contrast ratio: 1-10) added to uniform and non-uniform liver backgrounds. Using the correct value for the tumor shape and location is an idealized approximation to how task estimation would occur clinically. Thus we determined how perturbing this idealized prior knowledge impacted the performance of both techniques. To implement the SLE for the non-uniform background, we used a novel iterative algorithm for pre-whitening stationary noise within a compact region. Estimation task performance was compared using the ensemble mean-squared error (EMSE) as the criterion. The SLE method performed substantially better than the ROI method (i.e. EMSE(SLE) was 23-174 times lower) when the background is uniform and tumor location and size are known accurately. The variance of the SLE increased when a non-uniform liver texture was introduced but the EMSE(SLE) continued to be 5-20 times lower than the ROI method. In summary, SLE outperformed ROI under almost all conditions that we tested. PMID:26247228

  5. Crop area estimation based on remotely-sensed data with an accurate but costly subsample

    NASA Technical Reports Server (NTRS)

    Gunst, R. F.

    1983-01-01

    Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator which is viewed as especially promising is the errors-in-variable regression estimator. Investigations established the need for caution with this estimator when the ratio of two error variances is not precisely known.

  6. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    NASA Astrophysics Data System (ADS)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less
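    A cumulative DVH of the kind being optimized here is straightforward to compute once a voxelized dose distribution is available; a minimal illustrative sketch, not the authors' QSPECT pipeline:

```python
def cumulative_dvh(voxel_doses, thresholds):
    """Cumulative dose-volume histogram.

    For each dose threshold, returns the fraction of voxels receiving at
    least that dose (the quantity summarized by a cumulative DVH curve).
    """
    n = len(voxel_doses)
    return [sum(1 for d in voxel_doses if d >= t) / n for t in thresholds]
```

    Noise and partial volume effects distort the voxel doses fed into this computation, which is why the reconstruction parameters (iterations, smoothing) directly shape the resulting histogram.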

  7. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters.

    PubMed

    Cheng, Lishui; Hobbs, Robert F; Segars, Paul W; Sgouros, George; Frey, Eric C

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less
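The cumulative DVH described above can be sketched in a few lines: for each dose level, it records the fraction of the organ volume receiving at least that dose. This is an illustrative sketch (NumPy assumed), not the paper's QSPECT pipeline; the dose array and mask below are toy values.

```python
import numpy as np

def cumulative_dvh(dose, mask, bins=100):
    """Cumulative dose-volume histogram: fraction of the masked
    volume receiving at least each dose level."""
    d = dose[mask]                               # doses inside the organ
    levels = np.linspace(0.0, d.max(), bins)
    frac = np.array([(d >= lv).mean() for lv in levels])
    return levels, frac

# Toy example: uniform 3D dose with a hotter half
dose = np.ones((10, 10, 10))
dose[:5] = 2.0
mask = np.ones_like(dose, dtype=bool)
levels, v = cumulative_dvh(dose, mask, bins=5)
# v starts at 1.0 (everything gets >= 0 dose) and steps down to 0.5
```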

  8. Semi-quantitative method to estimate levels of Campylobacter

    Technology Transfer Automated Retrieval System (TEKTRAN)

Introduction: Research projects utilizing live animals and/or systems often require reliable, accurate quantification of Campylobacter following treatments. Even with marker strains, conventional methods designed to quantify are labor- and material-intensive, requiring either serial dilutions or MPN ...

  9. A simple and accurate protocol for absolute polar metabolite quantification in cell cultures using quantitative nuclear magnetic resonance.

    PubMed

    Goldoni, Luca; Beringhelli, Tiziana; Rocchia, Walter; Realini, Natalia; Piomelli, Daniele

    2016-05-15

    Absolute analyte quantification by nuclear magnetic resonance (NMR) spectroscopy is rarely pursued in metabolomics, even though this would allow researchers to compare results obtained using different techniques. Here we report on a new protocol that permits, after pH-controlled serum protein removal, the sensitive quantification (limit of detection [LOD] = 5-25 μM) of hydrophilic nutrients and metabolites in the extracellular medium of cells in cultures. The method does not require the use of databases and uses PULCON (pulse length-based concentration determination) quantitative NMR to obtain results that are significantly more accurate and reproducible than those obtained by CPMG (Carr-Purcell-Meiboom-Gill) sequence or post-processing filtering approaches. Three practical applications of the method highlight its flexibility under different cell culture conditions. We identified and quantified (i) metabolic differences between genetically engineered human cell lines, (ii) alterations in cellular metabolism induced by differentiation of mouse myoblasts into myotubes, and (iii) metabolic changes caused by activation of neurotransmitter receptors in mouse myoblasts. Thus, the new protocol offers an easily implementable, efficient, and versatile tool for the investigation of cellular metabolism and signal transduction. PMID:26898303

  10. Development and evaluation of a liquid chromatography-mass spectrometry method for rapid, accurate quantitation of malondialdehyde in human plasma.

    PubMed

    Sobsey, Constance A; Han, Jun; Lin, Karen; Swardfager, Walter; Levitt, Anthony; Borchers, Christoph H

    2016-09-01

Malondialdehyde (MDA) is a commonly used marker of lipid peroxidation in oxidative stress. To provide a sensitive analytical method that is compatible with high throughput, we developed a multiple reaction monitoring-mass spectrometry (MRM-MS) approach using 3-nitrophenylhydrazine chemical derivatization, isotope labeling, and liquid chromatography (LC) with electrospray ionization (ESI)-tandem mass spectrometry to accurately quantify MDA in human plasma. A stable isotope-labeled internal standard was used to compensate for ESI matrix effects. The assay is linear (R(2)=0.9999) over a 20,000-fold concentration range with a lower limit of quantitation of 30 fmol (on-column). Intra- and inter-run coefficients of variation (CVs) were <2% and ∼10%, respectively. The derivative was stable for >36 h at 5 °C. Standards spiked into plasma had recoveries of 92-98%. When compared to a common LC-UV method, the LC-MS method found near-identical MDA concentrations. A pilot project to quantify MDA in patient plasma samples (n=26) in a study of major depressive disorder with winter-type seasonal pattern (MDD-s) confirmed known associations between MDA concentrations and obesity (p<0.02). The LC-MS method provides high sensitivity and high reproducibility for quantifying MDA in human plasma. The simple sample preparation and rapid analysis time (5× faster than LC-UV) offer high throughput for large-scale clinical applications. PMID:27437618

  11. Importance of housekeeping gene selection for accurate reverse transcription-quantitative polymerase chain reaction in a wound healing model.

    PubMed

    Turabelidze, Anna; Guo, Shujuan; DiPietro, Luisa A

    2010-01-01

    Studies in the field of wound healing have utilized a variety of different housekeeping genes for reverse transcription-quantitative polymerase chain reaction (RT-qPCR) analysis. However, nearly all of these studies assume that the selected normalization gene is stably expressed throughout the course of the repair process. The purpose of our current investigation was to identify the most stable housekeeping genes for studying gene expression in mouse wound healing using RT-qPCR. To identify which housekeeping genes are optimal for studying gene expression in wound healing, we examined all articles published in Wound Repair and Regeneration that cited RT-qPCR during the period of January/February 2008 until July/August 2009. We determined that ACTβ, GAPDH, 18S, and β2M were the most frequently used housekeeping genes in human, mouse, and pig studies. We also investigated nine commonly used housekeeping genes that are not generally used in wound healing models: GUS, TBP, RPLP2, ATP5B, SDHA, UBC, CANX, CYC1, and YWHAZ. We observed that wounded and unwounded tissues have contrasting housekeeping gene expression stability. The results demonstrate that commonly used housekeeping genes must be validated as accurate normalizing genes for each individual experimental condition. PMID:20731795

  12. Accurate, Fast and Cost-Effective Diagnostic Test for Monosomy 1p36 Using Real-Time Quantitative PCR

    PubMed Central

    Cunha, Pricila da Silva; Pena, Heloisa B.; D'Angelo, Carla Sustek; Koiffmann, Celia P.; Rosenfeld, Jill A.; Shaffer, Lisa G.; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho

    2014-01-01

Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5–0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescence in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs. PMID:24839341
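Although the paper's exact analysis is not reproduced here, deletion calling by relative qPCR is commonly illustrated with the standard 2^-ΔΔCt method: a hemizygous deletion (one target copy instead of two) yields a relative copy number near 0.5. The Ct values below are hypothetical.

```python
def relative_copy_number(ct_target_patient, ct_ref_patient,
                         ct_target_control, ct_ref_control):
    """Relative target copy number by the 2^-ΔΔCt method
    (assumes ~100% PCR efficiency for both amplicons)."""
    delta_patient = ct_target_patient - ct_ref_patient
    delta_control = ct_target_control - ct_ref_control
    return 2.0 ** -(delta_patient - delta_control)

# A hemizygous deletion shifts the target Ct up by ~1 cycle
# relative to a two-copy control sample (hypothetical Ct values):
ratio = relative_copy_number(26.0, 24.0, 25.0, 24.0)  # -> 0.5
```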

  13. Transient stochastic downscaling of quantitative precipitation estimates for hydrological applications

    NASA Astrophysics Data System (ADS)

    Nogueira, M.; Barros, A. P.

    2015-10-01

Rainfall fields are heavily thresholded and highly intermittent resulting in large areas of zero values. This deforms their stochastic spatial scale-invariant behavior, introducing scaling breaks and curvature in the spatial scale spectrum. To address this problem, spatial scaling analysis was performed inside continuous rainfall features (CRFs) delineated via cluster analysis. The results show that CRFs from single realizations of hourly rainfall display ubiquitous multifractal behavior that holds over a wide range of scales (from ≈1 km up to hundreds of km). The results further show that the aggregate scaling behavior of rainfall fields is intrinsically transient with the scaling parameters explicitly dependent on the atmospheric environment. These findings provide a framework for robust stochastic downscaling, bridging the gap between spatial scales of observed and simulated rainfall fields and the high-resolution requirements of hydrometeorological and hydrological studies. Here, a fractal downscaling algorithm adapted to CRFs is presented and applied to generate stochastically downscaled hourly rainfall products from radar derived Stage IV (∼4 km grid resolution) quantitative precipitation estimates (QPE) over the Integrated Precipitation and Hydrology Experiment (IPHEx) domain in the southeast USA. The methodology can produce large ensembles of statistically robust high-resolution fields without additional data or any calibration requirements, conserving the coarse resolution information and generating coherent small-scale variability and field statistics, hence adding value to the original fields. Moreover, it is computationally inexpensive enabling fast production of high-resolution rainfall realizations with latency adequate for forecasting applications. When the transient nature of the scaling behavior is considered, the results show a better ability to reproduce the statistical structure of observed rainfall compared to using fixed scaling parameters
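A minimal multiplicative-cascade downscaler conveys the general idea of stochastic fractal downscaling (this is a generic sketch, not the CRF-adapted algorithm of the paper): each level doubles the grid resolution and modulates the field with mean-one lognormal weights, so coarse-scale structure is preserved on average while small-scale variability is generated.

```python
import numpy as np

rng = np.random.default_rng(0)

def cascade_downscale(field, levels=2, sigma=0.3):
    """Generic multiplicative-cascade downscaling sketch: each level
    doubles resolution and multiplies by mean-one lognormal weights."""
    out = np.asarray(field, dtype=float)
    for _ in range(levels):
        out = np.kron(out, np.ones((2, 2)))   # refine the grid 2x
        # mean=-sigma^2/2 makes E[w] = 1, preserving the mean field
        w = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=out.shape)
        out = out * w
    return out

coarse = np.array([[1.0, 2.0], [3.0, 4.0]])   # toy 4 km "QPE" tile
fine = cascade_downscale(coarse, levels=2)    # 8x8 downscaled field
```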

  14. A quantitative method for estimation of volume changes in arachnoid foveae with age.

    PubMed

    Duray, Stephen M; Martel, Stacie S

    2006-03-01

    Age-related changes of arachnoid foveae have been described, but objective, quantitative analyses are lacking. A new quantitative method is presented for estimation of change in total volume of arachnoid foveae with age. The pilot sample consisted of nine skulls from the Palmer Anatomy Laboratory. Arachnoid foveae were filled with sand, which was extracted using a vacuum pump. Mass was determined with an analytical balance and converted to volume. A reliability analysis was performed using intraclass correlation coefficients. The method was found to be highly reliable (intraobserver ICC = 0.9935, interobserver ICC = 0.9878). The relationship between total volume and age was then examined in a sample of 63 males of accurately known age from the Hamann-Todd collection. Linear regression analysis revealed no statistically significant relationship between total volume and age, or foveae frequency and age (alpha = 0.05). Development of arachnoid foveae may be influenced by health factors, which could limit its usefulness in aging. PMID:16566755

  15. An accurate modeling, simulation, and analysis tool for predicting and estimating Raman LIDAR system performance

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Russo, Leonard P.; Barrett, John L.; Odhner, Jefferson E.; Egbert, Paul I.

    2007-09-01

BAE Systems presents the results of a program to model the performance of Raman LIDAR systems for the remote detection of atmospheric gases, air polluting hydrocarbons, chemical and biological weapons, and other molecular species of interest. Our model, which integrates remote Raman spectroscopy, 2D and 3D LADAR, and USAF atmospheric propagation codes, permits accurate determination of the performance of a Raman LIDAR system. The very high predictive accuracy of our model is due to the very accurate calculation of the differential scattering cross section for the species of interest at user-selected wavelengths. We show excellent correlation of our calculated cross section data, used in our model, with experimental data obtained from both laboratory measurements and the published literature. In addition, the use of standard USAF atmospheric models provides very accurate determination of the atmospheric extinction at both the excitation and Raman shifted wavelengths.

  16. Bi-fluorescence imaging for estimating accurately the nuclear condition of Rhizoctonia spp.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aims: To simplify the determination of the nuclear condition of the pathogenic Rhizoctonia, which currently needs to be performed either using two fluorescent dyes, thus is more costly and time-consuming, or using only one fluorescent dye, and thus less accurate. Methods and Results: A red primary ...

  17. Estimation method of point spread function based on Kalman filter for accurately evaluating real optical properties of photonic crystal fibers.

    PubMed

    Shen, Yan; Lou, Shuqin; Wang, Xin

    2014-03-20

    The evaluation accuracy of real optical properties of photonic crystal fibers (PCFs) is determined by the accurate extraction of air hole edges from microscope images of cross sections of practical PCFs. A novel estimation method of point spread function (PSF) based on Kalman filter is presented to rebuild the micrograph image of the PCF cross-section and thus evaluate real optical properties for practical PCFs. Through tests on both artificially degraded images and microscope images of cross sections of practical PCFs, we prove that the proposed method can achieve more accurate PSF estimation and lower PSF variance than the traditional Bayesian estimation method, and thus also reduce the defocus effect. With this method, we rebuild the microscope images of two kinds of commercial PCFs produced by Crystal Fiber and analyze the real optical properties of these PCFs. Numerical results are in accord with the product parameters. PMID:24663461
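The recursive estimation principle behind the Kalman-filter-based PSF method can be illustrated with a minimal scalar filter (a generic sketch; the paper's full PSF model is not reproduced here): each noisy measurement updates the running estimate by a gain-weighted innovation, so the estimate's variance shrinks as observations accumulate.

```python
def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter: recursively estimates a slowly
    varying quantity from noisy measurements.
    q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict: uncertainty grows slightly
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the innovation (z - x)
        p = (1.0 - k) * p          # posterior uncertainty shrinks
        estimates.append(x)
    return estimates

# Noisy readings of a constant value 1.0 converge toward 1.0
est = kalman_1d([1.2, 0.8, 1.1, 0.9, 1.05, 0.95])
```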

  18. Automated and quantitative headspace in-tube extraction for the accurate determination of highly volatile compounds from wines and beers.

    PubMed

    Zapata, Julián; Mateo-Vivaracho, Laura; Lopez, Ricardo; Ferreira, Vicente

    2012-03-23

An automatic headspace in-tube extraction (ITEX) method for the accurate determination of acetaldehyde, ethyl acetate, diacetyl and other volatile compounds from wine and beer has been developed and validated. Method accuracy is based on the nearly quantitative transference of volatile compounds from the sample to the ITEX trap. For achieving that goal most methodological aspects and parameters have been carefully examined. The vial and sample sizes and the trapping materials were found to be critical due to the pernicious saturation effects of ethanol. Small 2 mL vials containing very small amounts of sample (20 μL of 1:10 diluted sample) and a trap filled with 22 mg of Bond Elut ENV resins could guarantee a complete trapping of sample vapors. The complete extraction requires 100 × 0.5 mL pumping strokes at 60 °C and takes 24 min. Analytes are further desorbed at 240 °C into the GC injector under a 1:5 split ratio. The proportion of analytes finally transferred to the trap ranged from 85 to 99%. The validation of the method showed satisfactory figures of merit. Determination coefficients were better than 0.995 in all cases and good repeatability was also obtained (better than 7% in all cases). Reproducibility was better than 8.3% except for acetaldehyde (13.1%). Detection limits were below the odor detection thresholds of these target compounds in wine and beer and well below the normal ranges of occurrence. Recoveries were not significantly different from 100%, except in the case of acetaldehyde. In that case it could be determined that the method is not able to break some of the adducts that this compound forms with sulfites. However, this problem was avoided by incubating the sample with glyoxal. The method can constitute a general and reliable alternative for the analysis of very volatile compounds in other difficult matrices. PMID:22340891

  19. Technical note: tree truthing: how accurate are substrate estimates in primate field studies?

    PubMed

    Bezanson, Michelle; Watts, Sean M; Jobin, Matthew J

    2012-04-01

Field studies of primate positional behavior typically rely on ground-level estimates of substrate size, angle, and canopy location. These estimates potentially influence the identification of positional modes by the observer recording behaviors. In this study we aim to test ground-level estimates against direct measurements of support angles, diameters, and canopy heights in trees at La Suerte Biological Research Station in Costa Rica. After reviewing methods that have been used by past researchers, we provide data collected within trees that are compared to estimates obtained from the ground. We climbed five trees and measured 20 supports. Four observers collected measurements of each support from different locations on the ground. Diameter estimates varied from the direct tree measures by 0-28 cm (Mean: 5.44 ± 4.55). Substrate angles varied by 1-55° (Mean: 14.76 ± 14.02). Height in the tree was best estimated using a clinometer, as estimates with a two-meter reference placed by the tree varied by 3-11 meters (Mean: 5.31 ± 2.44). We determined that the best support size estimates were those generated relative to the size of the focal animal and divided into broader categories. Support angles were best estimated in 5° increments and then checked using a Haglöf clinometer in combination with a laser pointer. We conclude that three major factors should be addressed when estimating support features: observer error (e.g., experience and distance from the target), support deformity, and how support size and angle influence the positional mode selected by a primate individual. PMID:22371099

  20. Quantitative estimation of sampling uncertainties for mycotoxins in cereal shipments.

    PubMed

    Bourgeois, F S; Lyman, G J

    2012-01-01

Many countries receive shipments of bulk cereals from primary producers. There is ongoing work that seeks to arrive at appropriate standards for the quality of the shipments and the means to assess the shipments as they are out-loaded. Of concern are mycotoxin and heavy metal levels, pesticide and herbicide residue levels, and contamination by genetically modified organisms (GMOs). As the ability to quantify these contaminants improves through improved analytical techniques, the sampling methodologies applied to the shipments must also keep pace to ensure that the uncertainties attached to the sampling procedures do not overwhelm the analytical uncertainties. There is a need to understand and quantify sampling uncertainties under varying conditions of contamination. The analysis required is statistical and is challenging as the nature of the distribution of contaminants within a shipment is not well understood; very limited data exist. Limited work has been undertaken to quantify the variability of the contaminant concentrations in the flow of grain coming from a ship and the impact that this has on the variance of sampling. Relatively recent work by Paoletti et al. in 2006 [Paoletti C, Heissenberger A, Mazzara M, Larcher S, Grazioli E, Corbisier P, Hess N, Berben G, Lübeck PS, De Loose M, et al. 2006. Kernel lot distribution assessment (KeLDA): a study on the distribution of GMO in large soybean shipments. Eur Food Res Tech. 224:129-139] provides some insight into the variation in GMO concentrations in soybeans on cargo out-turn. Paoletti et al. analysed the data using correlogram analysis with the objective of quantifying the sampling uncertainty (variance) that attaches to the final cargo analysis, but this is only one possible means of quantifying sampling uncertainty. It is possible that in many cases the levels of contamination passing the sampler on out-loading are essentially random, negating the value of variographic quantitation of
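A correlogram of the kind used in variographic analysis is straightforward to compute from a 1-D series of contaminant concentrations sampled along the out-loading flow. This is a generic sketch with a toy series, not KeLDA data.

```python
import numpy as np

def correlogram(series, max_lag):
    """Empirical autocorrelation of a 1-D concentration series:
    correlation between observations separated by each lag h."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    denom = float(np.dot(x, x))
    acf = [1.0]                                  # lag 0 by definition
    for h in range(1, max_lag + 1):
        acf.append(float(np.dot(x[:-h], x[h:]) / denom))
    return np.array(acf)

# Toy increments standing in for successive concentration measurements
conc = [1.0, 2.0, 3.0, 4.0]
r = correlogram(conc, max_lag=2)
# A trending series shows positive short-lag correlation; a purely
# random flow would give near-zero values at all nonzero lags.
```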

  1. Selection of accurate reference genes in mouse trophoblast stem cells for reverse transcription-quantitative polymerase chain reaction.

    PubMed

    Motomura, Kaori; Inoue, Kimiko; Ogura, Atsuo

    2016-06-17

Mouse trophoblast stem cells (TSCs) form colonies of different sizes and morphologies, which might reflect their degrees of differentiation. Therefore, each colony type can have a characteristic gene expression profile; however, the expression levels of internal reference genes may also change, causing fluctuations in their estimated gene expression levels. In this study, we validated seven housekeeping genes by using a geometric averaging method and identified Gapdh as the most stable gene across different colony types. Indeed, when Gapdh was used as the reference, expression levels of Elf5, a TSC marker gene, stringently classified TSC colonies into two groups: a high-expression group consisting of type 1 and 2 colonies, and a lower-expression group consisting of type 3 and 4 colonies. This clustering was consistent with our putative classification of undifferentiated/differentiated colonies based on their time-dependent colony transitions. By contrast, use of an unstable reference gene (Rn18s) allowed no such clear classification. Cdx2, another TSC marker, did not show any significant colony type-specific expression pattern irrespective of the reference gene. Selection of stable reference genes for quantitative gene expression analysis might be critical, especially when cell lines consisting of heterogeneous cell populations are used. PMID:26853688
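The geometric-averaging stability screen referred to above can be sketched in the style of geNorm: each candidate gene's stability M is the mean standard deviation of its pairwise log2 expression ratios with all other candidates, and a lower M means a more stable reference. The expression matrix below is hypothetical, not the paper's data.

```python
import numpy as np

def stability_m(expr):
    """geNorm-style stability measure M for each candidate reference
    gene: mean SD of log2 expression ratios against every other gene.
    expr: samples x genes matrix of positive expression values."""
    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    m = np.zeros(n_genes)
    for j in range(n_genes):
        sds = [np.std(log_expr[:, j] - log_expr[:, k])
               for k in range(n_genes) if k != j]
        m[j] = np.mean(sds)
    return m  # lower M = more stable candidate

# Hypothetical values: genes 0 and 1 co-vary perfectly, gene 2 is erratic
expr = np.array([[1.0, 2.0, 1.0],
                 [2.0, 4.0, 8.0],
                 [4.0, 8.0, 2.0]])
m = stability_m(expr)   # gene 2 gets the worst (highest) M
```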

  2. Selection of accurate reference genes in mouse trophoblast stem cells for reverse transcription-quantitative polymerase chain reaction

    PubMed Central

    MOTOMURA, Kaori; INOUE, Kimiko; OGURA, Atsuo

    2016-01-01

Mouse trophoblast stem cells (TSCs) form colonies of different sizes and morphologies, which might reflect their degrees of differentiation. Therefore, each colony type can have a characteristic gene expression profile; however, the expression levels of internal reference genes may also change, causing fluctuations in their estimated gene expression levels. In this study, we validated seven housekeeping genes by using a geometric averaging method and identified Gapdh as the most stable gene across different colony types. Indeed, when Gapdh was used as the reference, expression levels of Elf5, a TSC marker gene, stringently classified TSC colonies into two groups: a high-expression group consisting of type 1 and 2 colonies, and a lower-expression group consisting of type 3 and 4 colonies. This clustering was consistent with our putative classification of undifferentiated/differentiated colonies based on their time-dependent colony transitions. By contrast, use of an unstable reference gene (Rn18s) allowed no such clear classification. Cdx2, another TSC marker, did not show any significant colony type-specific expression pattern irrespective of the reference gene. Selection of stable reference genes for quantitative gene expression analysis might be critical, especially when cell lines consisting of heterogeneous cell populations are used. PMID:26853688

  3. Improving satellite quantitative precipitation estimates by incorporating deep convective cloud optical depth

    NASA Astrophysics Data System (ADS)

    Stenz, Ronald D.

As Deep Convective Systems (DCSs) are responsible for most severe weather events, increased understanding of these systems along with more accurate satellite precipitation estimates will improve NWS (National Weather Service) warnings and monitoring of hazardous weather conditions. A DCS can be classified into convective core (CC) regions (heavy rain), stratiform (SR) regions (moderate-light rain), and anvil (AC) regions (no rain). These regions share similar infrared (IR) brightness temperatures (BT), which can create large errors for many existing rain detection algorithms. This study assesses the performance of the National Mosaic and Multi-sensor Quantitative Precipitation Estimation System (NMQ) Q2, and a simplified version of the GOES-R Rainfall Rate algorithm (also known as the Self-Calibrating Multivariate Precipitation Retrieval, or SCaMPR), over the state of Oklahoma (OK) using OK MESONET observations as ground truth. While the average annual Q2 precipitation estimates were about 35% higher than MESONET observations, there were very strong correlations between these two data sets for multiple temporal and spatial scales. Additionally, the Q2 estimated precipitation distributions over the CC, SR, and AC regions of DCSs strongly resembled the MESONET observed ones, indicating that Q2 can accurately capture the precipitation characteristics of DCSs although it has a wet bias. SCaMPR retrievals were typically three to four times higher than the collocated MESONET observations, with relatively weak correlations during a year of comparisons in 2012. Overestimates from SCaMPR retrievals that produced a high false alarm rate were primarily caused by precipitation retrievals from the anvil regions of DCSs when collocated MESONET stations recorded no precipitation. A modified SCaMPR retrieval algorithm, employing both cloud optical depth and IR temperature, has the potential to make significant improvements to reduce the SCaMPR false alarm rate of retrieved

  4. Robust quantitative parameter estimation by advanced CMP measurements for vadose zone hydrological studies

    NASA Astrophysics Data System (ADS)

    Koyama, C.; Wang, H.; Khuut, T.; Kawai, T.; Sato, M.

    2015-12-01

Soil moisture plays a crucial role in understanding processes in vadose zone hydrology. In the last two decades, ground penetrating radar (GPR) has been widely discussed as a nondestructive measurement technique for soil moisture data. In particular, the common mid-point (CMP) technique, which has been used in both seismic and GPR surveys to investigate vertical velocity profiles, has very high potential for quantitative observations from the root zone down to the groundwater aquifer. However, its use is still rather limited today, and algorithms for robust quantitative parameter estimation are lacking. In this study we develop an advanced processing scheme for operational soil moisture retrieval at various depths. Using improved signal processing together with a combined semblance and non-normalized cross-correlation sum stacking approach and the Dix formula, the interval velocities for multiple soil layers are obtained from the RMS velocities, allowing for more accurate estimation of the permittivity at the reflecting point. The presence of a water-saturated layer, such as a groundwater aquifer, can be easily identified by its RMS velocity due to the high contrast compared to the unsaturated zone. By using a new semi-automated measurement technique, the acquisition time for a full CMP gather with 1 cm intervals along a 10 m profile can be reduced significantly to under 2 minutes. The method is tested and validated under laboratory conditions in a sand pit as well as on agricultural fields and beach sand in the Sendai city area. Comparison between CMP estimates and TDR measurements yields very good agreement, with an RMSE of 1.5 Vol.-%. The accuracy of depth estimation is validated with errors smaller than 2%. Finally, we demonstrate application of the method in a test site in semi-arid Mongolia, namely the Orkhon River catchment in Bulgan, using commercial 100 MHz and 500 MHz RAMAC GPR antennas. The results demonstrate the suitability of the proposed method for
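The Dix step mentioned above converts RMS velocities to an interval velocity for the layer between two reflectors; the layer permittivity then follows from ε ≈ (c/v)². The numeric values below are hypothetical (velocities in m/ns, two-way travel times in ns), for illustration only.

```python
import math

C = 0.3  # speed of light in vacuum, m/ns

def dix_interval_velocity(v_rms1, t1, v_rms2, t2):
    """Dix formula: interval velocity of the layer bounded by
    two-way times t1 and t2, from the RMS velocities above each."""
    return math.sqrt((v_rms2**2 * t2 - v_rms1**2 * t1) / (t2 - t1))

# Hypothetical two-layer CMP result
v_int = dix_interval_velocity(0.10, 20.0, 0.12, 40.0)
eps = (C / v_int) ** 2   # relative permittivity of the interval
```

The permittivity can then be converted to volumetric soil moisture with a petrophysical relation such as Topp's equation (not shown here).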

  5. Quantitative Functional Imaging Using Dynamic Positron Computed Tomography and Rapid Parameter Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Koeppe, Robert Allen

were compared to those predicted from the expired air and venous blood samples. The glucose analog 18F-3-deoxy-3-fluoro-D-glucose (3-FDG) was used for quantitating the membrane transport rate of glucose. The measured data indicated that the phosphorylation rate of 3-FDG was low enough to allow accurate estimation of the transport rate using a two-compartment model.

  6. Accurate state estimation for a hydraulic actuator via a SDRE nonlinear filter

    NASA Astrophysics Data System (ADS)

    Strano, Salvatore; Terzo, Mario

    2016-06-01

The state estimation in hydraulic actuators is a fundamental tool for the detection of faults or a valid alternative to the installation of sensors. Due to the hard nonlinearities that characterize hydraulic actuators, the performance of linear/linearization-based techniques for state estimation is strongly limited. In order to overcome these limits, this paper focuses on an alternative nonlinear estimation method based on the State-Dependent-Riccati-Equation (SDRE). The technique is able to fully take into account the system nonlinearities and the measurement noise. A fifth order nonlinear model is derived and employed for the synthesis of the estimator. Simulations and experimental tests have been conducted and comparisons with the largely used Extended Kalman Filter (EKF) are illustrated. The results show the effectiveness of the SDRE based technique for applications characterized by non-negligible nonlinearities such as dead zones and friction.

  7. Quantitative abundance estimates from bidirectional reflectance measurements. [for planetary surfaces

    NASA Technical Reports Server (NTRS)

    Mustard, John F.; Pieters, Carle M.

    1987-01-01

A simplified approach for estimating mineral abundances in mineral mixtures from bidirectional reflectance measurements is presented. Fundamental to this approach is a priori information concerning reflectance spectra of the individual minerals and an estimate of the particle sizes of the components. Simplified equations for bidirectional reflectance are used to linearize the systematics of spectral mixing. The method was used to determine the relative proportions of olivine, magnetite, enstatite, and anorthite in a mixture; the mass fractions of mixture components were calculated on the basis of known particle diameters. The results indicate that for materials without strongly absorbing components, the accuracy of abundance determinations is better than 5 percent.
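Once the mixing systematics have been linearized, abundance estimation reduces to a least-squares unmixing problem. The sketch below uses made-up endmember "spectra" and skips the reflectance linearization step the paper describes; it only illustrates the linear inversion.

```python
import numpy as np

# Hypothetical linearized endmember spectra:
# rows = wavelength bands, columns = mineral endmembers
endmembers = np.array([[0.8, 0.2],
                       [0.6, 0.4],
                       [0.3, 0.7]])

true_abundance = np.array([0.25, 0.75])
mixture = endmembers @ true_abundance     # linear mixing model

# Recover abundances by linear least squares
est, *_ = np.linalg.lstsq(endmembers, mixture, rcond=None)
# est recovers the true fractions for this noise-free toy case
```

In practice the inversion would be run on measured spectra with noise, and constraints (non-negativity, sum-to-one) are often added.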

  8. The Centiloid Project: Standardizing Quantitative Amyloid Plaque Estimation by PET

    PubMed Central

    Klunk, William E.; Koeppe, Robert A.; Price, Julie C.; Benzinger, Tammie; Devous, Michael D.; Jagust, William; Johnson, Keith; Mathis, Chester A.; Minhas, Davneet; Pontecorvo, Michael J.; Rowe, Christopher C.; Skovronsky, Daniel; Mintun, Mark

    2014-01-01

    Although amyloid imaging with PiB-PET, and now with F-18-labelled tracers, has produced remarkably consistent qualitative findings across a large number of centers, there has been considerable variability in the exact numbers reported as quantitative outcome measures of tracer retention. In some cases this is as trivial as the choice of units, in some cases it is scanner dependent, and of course, different tracers yield different numbers. Our working group was formed to standardize quantitative amyloid imaging measures by scaling the outcome of each particular analysis method or tracer to a 0 to 100 scale, anchored by young controls (≤45 years) and typical Alzheimer’s disease patients. The units of this scale have been named “Centiloids.” Basically, we describe a “standard” method of analyzing PiB PET data and then a method for scaling any “non-standard” method of PiB PET analysis (or any other tracer) to the Centiloid scale. PMID:25443857
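
    The Centiloid transform itself is a two-point linear rescaling. A sketch (the anchor values below are invented for illustration; real anchors are derived from the standard PiB pipeline applied to the young-control and AD calibration groups):

```python
def to_centiloid(value, anchor_yc, anchor_ad):
    """Linearly map a tracer/pipeline-specific amyloid outcome measure
    onto the Centiloid scale: the young-control mean anchors 0 CL and
    the typical-AD mean anchors 100 CL."""
    return 100.0 * (value - anchor_yc) / (anchor_ad - anchor_yc)

# Illustrative (made-up) anchors for some analysis pipeline
print(to_centiloid(1.0, anchor_yc=1.0, anchor_ad=2.0))   # → 0.0
print(to_centiloid(2.0, anchor_yc=1.0, anchor_ad=2.0))   # → 100.0
print(to_centiloid(1.5, anchor_yc=1.0, anchor_ad=2.0))   # → 50.0
```

Because the map is affine, differences in units or scanner-dependent scale factors cancel, which is what makes values comparable across methods and tracers.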

  9. FAST TRACK COMMUNICATION Accurate estimate of α variation and isotope shift parameters in Na and Mg+

    NASA Astrophysics Data System (ADS)

    Sahoo, B. K.

    2010-12-01

    We present accurate calculations of fine-structure constant variation coefficients and isotope shifts in Na and Mg+ using the relativistic coupled-cluster method. In our approach, we are able to discover the roles of various correlation effects explicitly to all orders in these calculations. Most of the results, especially for the excited states, are reported for the first time. It is possible to ascertain suitable anchor and probe lines for the studies of possible variation in the fine-structure constant by using the above results in the considered systems.

  10. Accurate State Estimation and Tracking of a Non-Cooperative Target Vehicle

    NASA Technical Reports Server (NTRS)

    Thienel, Julie K.; Sanner, Robert M.

    2006-01-01

    Autonomous space rendezvous scenarios require knowledge of the target vehicle state in order to safely dock with the chaser vehicle. Ideally, the target vehicle state information is derived from telemetered data, or with the use of known tracking points on the target vehicle. However, if the target vehicle is non-cooperative and does not have the ability to maintain attitude control, or transmit attitude knowledge, the docking becomes more challenging. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a tracking control scheme. The approach is tested with the robotic servicing mission concept for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates, but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST.

  11. Precision Pointing Control to and Accurate Target Estimation of a Non-Cooperative Vehicle

    NASA Technical Reports Server (NTRS)

    VanEepoel, John; Thienel, Julie; Sanner, Robert M.

    2006-01-01

    In 2004, NASA began investigating a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates in order to achieve capture by the proposed Hubble Robotic Vehicle (HRV), but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST. To generalize the situation, HST is the target vehicle and HRV is the chaser. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a control scheme. Non-cooperative in this context relates to the target vehicle no longer having the ability to maintain attitude control or transmit attitude knowledge.

  12. A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system

    PubMed Central

    Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob

    2013-01-01

    Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541
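
    The underlying statistical task is a regression from community composition to elapsed time. A toy sketch on synthetic abundances (the study itself uses 16S/18S sequencing data and ensemble regressors; everything below, including the linear drift model and noise level, is an illustrative stand-in):

```python
import numpy as np

rng = np.random.default_rng(6)
n_samples, n_taxa = 60, 40
pmi_days = rng.uniform(0, 48, n_samples)       # known times since death

# Taxa whose relative abundances drift linearly with PMI, plus noise
drift = rng.standard_normal(n_taxa)
abundances = (np.outer(pmi_days / 48.0, drift)
              + 0.05 * rng.standard_normal((n_samples, n_taxa)))

# Ridge-regression "microbial clock": community profile -> PMI
X = np.column_stack([np.ones(n_samples), abundances])
beta = np.linalg.solve(X.T @ X + 1.0 * np.eye(X.shape[1]), X.T @ pmi_days)
mae = np.mean(np.abs(X @ beta - pmi_days))     # in-sample error, days
```

The paper's "within approximately 3 days over 48 days" claim corresponds to exactly this kind of error metric, evaluated with proper cross-validation rather than in-sample.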

  13. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear if the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches it may be useful in various other applications of density estimation in astrostatistics.
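
    The core idea of binning with hash tables can be sketched in a few lines: keys are integer bin coordinates, so only occupied cells consume memory (the paper's BASH tables are a tuned C++ implementation; this is a simplified Python illustration):

```python
import numpy as np
from collections import defaultdict

def bash_histogram(data, bin_width):
    """Sparse multidimensional histogram keyed by integer bin coordinates:
    memory scales with the number of *occupied* cells, not with the full
    d-dimensional grid."""
    counts = defaultdict(int)
    for point in data:
        counts[tuple(np.floor(point / bin_width).astype(int))] += 1
    return counts

def density_at(counts, point, bin_width, n_total, dim):
    """Histogram density estimate: bin count / (N * cell volume)."""
    key = tuple(np.floor(np.asarray(point) / bin_width).astype(int))
    return counts.get(key, 0) / (n_total * bin_width ** dim)

rng = np.random.default_rng(1)
data = rng.normal(size=(50000, 5))             # 5-dimensional sample
counts = bash_histogram(data, bin_width=0.5)

# A dense grid covering [-4, 4) per axis at this width would need
# 16**5 ≈ 1e6 cells; the hash table stores only those the data hit.
p_hat = density_at(counts, np.zeros(5), 0.5, len(data), 5)
```

The trade noted in the abstract is visible here: each lookup pays a hashing cost, but the table never allocates the exponentially many empty cells a dense array would.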

  14. Spectral estimation from laser scanner data for accurate color rendering of objects

    NASA Astrophysics Data System (ADS)

    Baribeau, Rejean

    2002-06-01

    Estimation methods are studied for the recovery of the spectral reflectance across the visible range from the sensing at just three discrete laser wavelengths. Methods based on principal component analysis and on spline interpolation are judged based on the CIE94 color differences for some reference data sets. These include the Macbeth color checker, the OSA-UCS color charts, some artist pigments, and a collection of miscellaneous surface colors. The optimal three sampling wavelengths are also investigated. It is found that color can be estimated with average accuracy ΔE94 = 2.3 when optimal wavelengths 455 nm, 540 nm, and 610 nm are used.
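
    The PCA-based variant amounts to expressing spectra in a low-dimensional basis learned from reference charts, then solving a tiny linear system from the three laser samples. A sketch on synthetic smooth spectra (the training set, its functional form, and all numbers are assumptions standing in for the real chart data):

```python
import numpy as np

rng = np.random.default_rng(2)
wavelengths = np.arange(400, 701, 10)              # 31 visible-range bands
t = (wavelengths - 400) / 300.0

# Synthetic smooth "training" spectra standing in for the reference
# charts (Macbeth, OSA-UCS, ...) used in the paper
n_train = 200
coef = rng.uniform(-0.5, 0.5, (n_train, 3))
train = (coef[:, :1] + coef[:, 1:2] * np.sin(np.pi * t)
         + coef[:, 2:3] * np.cos(2 * np.pi * t))

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:3]                                     # top 3 principal components

laser_idx = [np.argmin(abs(wavelengths - w)) for w in (455, 540, 610)]

def recover(samples_at_lasers):
    """Estimate the full spectrum from reflectances at the 3 laser lines:
    solve a 3x3 system for the PCA coefficients, then expand."""
    c = np.linalg.solve(basis[:, laser_idx].T,
                        samples_at_lasers - mean[laser_idx])
    return mean + c @ basis

# Sanity check on a spectrum lying in the model subspace: recovery is exact
target = mean + np.array([0.1, -0.05, 0.02]) @ basis
est = recover(target[laser_idx])
```

Real reflectances are not confined to a three-dimensional subspace, which is exactly why the residual shows up as the reported ΔE94 color error.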

  15. Crop area estimation based on remotely-sensed data with an accurate but costly subsample

    NASA Technical Reports Server (NTRS)

    Gunst, R. F.

    1985-01-01

    Research activities conducted under the auspices of National Aeronautics and Space Administration Cooperative Agreement NCC 9-9 are discussed. During this contract period, research efforts were concentrated in two primary areas. The first area is an investigation of the use of measurement error models as alternatives to least squares regression estimators of crop production or timber biomass. The second primary area of investigation is the estimation of the mixing proportion of two-component mixture models. This report lists publications, technical reports, submitted manuscripts, and oral presentations generated by these research efforts. Possible areas of future research are mentioned.

  16. Accurate radiocarbon age estimation using "early" measurements: a new approach to reconstructing the Paleolithic absolute chronology

    NASA Astrophysics Data System (ADS)

    Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru

    2014-05-01

    This paper presents new correction approaches for "early" radiocarbon ages to reconstruct the Paleolithic absolute chronology. Discussing the spatio-temporal distribution of the replacement of archaic humans, including Neanderthals in Europe, by modern humans requires a massive dataset covering a wide area. Today, several radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, every database contains unreliable results that make interpretation of radiocarbon dates difficult. Most of these unreliable ages were published in the early days of radiocarbon analysis. In recent years, new analytical methods to determine highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, as new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove imperceptible contaminants and derive reliably accurate ages. In order to evaluate the reliability of "early" data, we investigated the differences and variability of radiocarbon ages under different pretreatments, and attempted to develop correction functions for assessing their reliability. The corrected ages are expected to be more reliable and applicable to chronological research together with recently measured ages. Here, we introduce the methodological frameworks and archaeological applications.

  17. A Generalized Subspace Least Mean Square Method for High-resolution Accurate Estimation of Power System Oscillation Modes

    SciTech Connect

    Zhang, Peng; Zhou, Ning; Abdollahi, Ali

    2013-09-10

    A Generalized Subspace-Least Mean Square (GSLMS) method is presented for accurate and robust estimation of oscillation modes from exponentially damped power system signals. The method is based on the orthogonality of the signal and noise eigenvectors of the signal autocorrelation matrix. Performance of the proposed method is evaluated using Monte Carlo simulation and compared with the Prony method. Test results show that the GSLMS is highly resilient to noise and significantly outperforms the Prony method in tracking power system modes under noisy environments.

  18. Accurate motion parameter estimation for colonoscopy tracking using a regression method

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.

    2010-03-01

    Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical flow based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional Least Sum of Squares (LS) procedure, which can be unstable in the context of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method, for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors, while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method, in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 in the ascending colon, and 410 to 1316 in the transverse colon.
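
    The LMS estimator can be sketched with the classic random elemental-subset algorithm (Rousseeuw's construction; the paper's implementation details may differ). Minimizing the *median* squared residual instead of the sum lets up to about half the flow vectors be gross outliers without pulling the fit:

```python
import numpy as np

def lms_fit(X, y, n_trials=500, rng=None):
    """Least Median of Squares fit by random elemental subsets: fit the
    model to minimal point subsets and keep the candidate whose median
    squared residual over *all* points is smallest."""
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    best_beta, best_med = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=p, replace=False)
        try:
            beta = np.linalg.solve(X[idx], y[idx])
        except np.linalg.LinAlgError:
            continue                      # degenerate subset, skip
        med = np.median((y - X @ beta) ** 2)
        if med < best_med:
            best_med, best_beta = med, beta
    return best_beta

rng = np.random.default_rng(3)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y = X @ np.array([2.0, 1.5]) + 0.05 * rng.standard_normal(n)
y[:60] += 20.0                            # 30% gross outliers

beta_lms = lms_fit(X, y, rng=rng)
ols = np.linalg.lstsq(X, y, rcond=None)[0]   # dragged toward the outliers
```

With 30% of points corrupted, ordinary least squares is badly biased while the LMS solution stays close to the true line, which mirrors the longer tracked sequences reported above.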

  19. Accurate Angle Estimator for High-Frame-Rate 2-D Vector Flow Imaging.

    PubMed

    Villagomez Hoyos, Carlos Armando; Stuart, Matthias Bo; Hansen, Kristoffer Lindskov; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt

    2016-06-01

    This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using the experimental ultrasound scanner SARUS and a flow rig before being tested in vivo. An 8-MHz linear array transducer is used with defocused beam emissions. In the simulations of a spinning disk phantom, a 360° uniform behavior on the angle estimation is observed with a median angle bias of 1.01° and a median angle SD of 1.8°. Similar results are obtained on a straight vessel for both simulations and measurements, where the obtained angle biases are below 1.5° with SDs around 1°. Estimated velocity magnitudes are also kept under 10% bias and 5% relative SD in both simulations and measurements. An in vivo measurement is performed on a carotid bifurcation of a healthy individual. A 3-s acquisition during three heart cycles is captured. A consistent and repetitive vortex is observed in the carotid bulb during systoles. PMID:27093598

  20. Improved quantitative visualization of hypervelocity flow through wavefront estimation based on shadow casting of sinusoidal gratings.

    PubMed

    Medhi, Biswajit; Hegde, Gopalakrishna M; Gorthi, Sai Siva; Reddy, Kalidevapura Jagannath; Roy, Debasish; Vasu, Ram Mohan

    2016-08-01

    A simple noninterferometric optical probe is developed to estimate wavefront distortion suffered by a plane wave in its passage through density variations in a hypersonic flow obstructed by a test model in a typical shock tunnel. The probe has a plane light wave trans-illuminating the flow and casting a shadow of a continuous-tone sinusoidal grating. Through a geometrical-optics (eikonal) approximation, a bilinear approximation to the distorted wavefront is related to the location-dependent shift (distortion) suffered by the grating, which can be read out space-continuously from the projected grating image. The processing of the grating shadow is done through an efficient Fourier fringe analysis scheme, either with a windowed or global Fourier transform (WFT and FT). For comparison, wavefront slopes are also estimated from shadows of random-dot patterns, processed through cross correlation. The measured slopes are suitably unwrapped by using a discrete cosine transform (DCT)-based phase unwrapping procedure, and also through iterative procedures. The unwrapped phase information is used in an iterative scheme for a full quantitative recovery of the density distribution in the shock around the model, through refraction tomographic inversion. Hypersonic flow field parameters around a missile-shaped body at a free-stream Mach number of ∼8 measured using this technique are compared with the numerically estimated values. It is shown that, while processing a wavefront with small space-bandwidth product (SBP), the FT inversion gave accurate results with computational efficiency; computation-intensive WFT was needed for similar results when dealing with larger SBP wavefronts. PMID:27505389

  1. Accurate estimation of influenza epidemics using Google search data via ARGO.

    PubMed

    Yang, Shihao; Santillana, Mauricio; Kou, S C

    2015-11-24

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions. PMID:26553980

  2. Accurate estimation of influenza epidemics using Google search data via ARGO

    PubMed Central

    Yang, Shihao; Santillana, Mauricio; Kou, S. C.

    2015-01-01

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search–based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people’s online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions. PMID:26553980
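
    The modeling core of ARGO is an autoregression on recent flu-activity values augmented with exogenous search-volume regressors. A simplified sketch on synthetic data (the published model adds regularization choices and dynamic retraining on a sliding window; the plain ridge solution and all series below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
ili = np.zeros(n)                          # synthetic flu-activity series
for t in range(1, n):
    ili[t] = 0.8 * ili[t - 1] + rng.standard_normal()
# Two noisy "search query" series that co-move with flu activity
search = np.column_stack([ili + 0.3 * rng.standard_normal(n),
                          ili + 0.5 * rng.standard_normal(n)])

def build_design(ili, search, ar_lags=3):
    """Row t: intercept, the last `ar_lags` flu values, current queries."""
    rows, targets = [], []
    for t in range(ar_lags, len(ili)):
        rows.append(np.concatenate(([1.0], ili[t - ar_lags:t], search[t])))
        targets.append(ili[t])
    return np.array(rows), np.array(targets)

X, y = build_design(ili, search)
# Ridge-regularized least squares fit of the combined AR + exogenous model
beta = np.linalg.solve(X.T @ X + 1.0 * np.eye(X.shape[1]), X.T @ y)
pred = X @ beta
```

The autoregressive lags carry the epidemic's seasonality and persistence, while the search columns inject the real-time signal; refitting the weights over time is what lets the model absorb changes in search behavior.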

  3. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    NASA Astrophysics Data System (ADS)

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

    The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified in categories corresponding to the age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study, the age was predicted with a mean error of ~ 1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology predicts the sample age far more accurately than any previous report in the literature.

  4. Techniques for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, Michael R.; Bland, Roger

    1999-01-01

    An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. The relative magnitude of equipment errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second. Typical maximum flow rates during the data-collection period averaged 750 cubic meters per second.
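
    An index-velocity rating is, at its simplest, a regression from the meter's index velocity to concurrently measured discharge, which is then applied to the continuous velocity record. A sketch with invented calibration pairs (real ratings may also include stage-dependent area terms and the comprehensive calibration discussed above):

```python
import numpy as np

# Hypothetical calibration pairs: ultrasonic meter index velocity (m/s)
# versus concurrent ADCP-measured discharge (m^3/s); negative values
# correspond to flow reversal on the ebb/flood tide
index_v = np.array([-0.8, -0.4, -0.1, 0.2, 0.6, 1.0, 1.3])
adcp_q = np.array([-620., -305., -70., 160., 455., 760., 980.])

# Linear index-velocity rating: Q = a + b * v_index
A = np.column_stack([np.ones_like(index_v), index_v])
(a, b), *_ = np.linalg.lstsq(A, adcp_q, rcond=None)

def rated_discharge(v):
    return a + b * v

# Net (tidally averaged) discharge over a record of index velocities
record = np.array([1.1, 0.7, 0.1, -0.5, -0.9, -0.3, 0.4, 0.9])
net_q = rated_discharge(record).mean()
```

Because net discharge is a small difference between large ebb and flood flows, even modest errors in the rating coefficients dominate the error budget, which is the calibration-error finding reported above.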

  5. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Porter, Albert A.

    1991-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
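
    For the special case of a zero-failure test, a bound of this kind can be sketched as follows: compute the lower confidence bound on reliability as a function of the shape parameter and take its minimum over a plausible range, so no point estimate of Beta is ever needed. The bound formula below is the textbook chi-square result for zero failures; the paper's treatment is more general:

```python
import numpy as np

def weibull_rel_lower_bound(t, T, n, alpha=0.05, beta_range=(1.0, 5.0)):
    """(1 - alpha) lower confidence bound on reliability at time t after
    n units survive a test of length T with zero failures, minimized
    over a plausible range of the unknown Weibull shape parameter."""
    betas = np.linspace(*beta_range, 401)
    # Zero-failure bound for each candidate shape parameter:
    #   R_L(t; beta) = alpha ** ((t / T) ** beta / n)
    bounds = alpha ** ((t / T) ** betas / n)
    return bounds.min()

# 30 units survive 1000 hours; bound reliability at 500 hours without
# ever committing to a point estimate of beta
r_lo = weibull_rel_lower_bound(t=500.0, T=1000.0, n=30)
```

For t < T the bound is monotone increasing in beta, so the minimum sits at the lower end of the range; this is the kind of unique-extremum behavior the abstract proves in general.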

  6. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Porter, Albert A.

    1990-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.

  7. Are satellite based rainfall estimates accurate enough for crop modelling under Sahelian climate?

    NASA Astrophysics Data System (ADS)

    Ramarohetra, J.; Sultan, B.

    2012-04-01

    Agriculture is considered the most climate-dependent human activity. In West Africa, and especially in the Sudano-Sahelian zone, rain-fed agriculture - which represents 93% of cultivated areas and is the means of support of 70% of the active population - is highly vulnerable to precipitation variability. To better understand and anticipate climate impacts on agriculture, crop models - which estimate crop yield from climate information (e.g. rainfall, temperature, insolation, humidity) - have been developed. These crop models are useful (i) in ex ante analysis to quantify the impact of implementing different strategies - crop management (e.g. choice of varieties, sowing date), crop insurance or medium-range weather forecasts - on yields, (ii) for early warning systems and (iii) to assess future food security. Yet the successful application of these models depends on the accuracy of their climatic drivers. In the Sudano-Sahelian zone, the quality of precipitation estimates is therefore a key factor for understanding and anticipating climate impacts on agriculture via crop modelling and yield estimation. Different kinds of precipitation estimates can be used. Ground measurements have long time series but an insufficient network density, a large proportion of missing values, delays in reporting time, and limited availability. An answer to these shortcomings may lie in the field of remote sensing, which provides satellite-based precipitation estimates. However, satellite-based rainfall estimates (SRFE) are not a direct measurement but rather an estimation of precipitation. Used as input for crop models, they determine the performance of the simulated yield, hence SRFE require validation. The SARRAH crop model is used to model three different varieties of pearl millet (HKP, MTDO, Souna3) in a square degree centred on 13.5°N and 2.5°E, in Niger.
Eight satellite-based rainfall daily products (PERSIANN, CMORPH, TRMM 3b42-RT, GSMAP MKV+, GPCP, TRMM 3b42v6, RFEv2 and

  8. Plant DNA Barcodes Can Accurately Estimate Species Richness in Poorly Known Floras

    PubMed Central

    Costion, Craig; Ford, Andrew; Cross, Hugh; Crayn, Darren; Harrington, Mark; Lowe, Andrew

    2011-01-01

    Background Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. Methodology/Principal Findings Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies but species richness estimation accuracy proved higher, up to 89%. All combinations which included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation observed in some angiosperm families, occurring as an inversion that obscures the monophyly of species. Conclusions/Significance We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways. PMID:22096501

  9. Accurate estimation of retinal vessel width using bagged decision trees and an extended multiresolution Hermite model.

    PubMed

    Lupaşcu, Carmen Alina; Tegolo, Domenico; Trucco, Emanuele

    2013-12-01

    We present an algorithm estimating the width of retinal vessels in fundus camera images. The algorithm uses a novel parametric surface model of the cross-sectional intensities of vessels, and ensembles of bagged decision trees to estimate the local width from the parameters of the best-fit surface. We report comparative tests with REVIEW, currently the public database of reference for retinal width estimation, containing 16 images with 193 annotated vessel segments and 5066 profile points annotated manually by three independent experts. Comparative tests are reported also with our own set of 378 vessel widths selected sparsely in 38 images from the Tayside Scotland diabetic retinopathy screening programme and annotated manually by two clinicians. We obtain considerably better accuracies compared to leading methods in REVIEW tests and in Tayside tests. An important advantage of our method is its stability (success rate, i.e., meaningful measurement returned, of 100% on all REVIEW data sets and on the Tayside data set) compared to a variety of methods from the literature. We also find that results depend crucially on testing data and conditions, and discuss criteria for selecting a training set yielding optimal accuracy. PMID:24001930

  10. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions.

    PubMed

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985
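
    Discrete-time Laguerre functions are generated by a low-pass stage followed by a cascade of identical all-pass sections, and a smooth kernel is captured by just a few of their coefficients, which is the compactness the abstract refers to. A sketch of the basis construction (the pole value and the example kernel are illustrative assumptions):

```python
import numpy as np
from scipy.signal import lfilter

def laguerre_basis(n_funcs, n_samples, a):
    """Discrete-time Laguerre functions: a low-pass stage followed by a
    cascade of identical all-pass sections with pole `a`. Rows are
    orthonormal over a sufficiently long window."""
    impulse = np.zeros(n_samples)
    impulse[0] = 1.0
    funcs = [lfilter([np.sqrt(1 - a ** 2)], [1.0, -a], impulse)]
    for _ in range(1, n_funcs):
        funcs.append(lfilter([-a, 1.0], [1.0, -a], funcs[-1]))
    return np.array(funcs)

n, a = 200, np.exp(-0.3)
B = laguerre_basis(5, n, a)                  # (5, 200), orthonormal rows

# An exponential-times-ramp kernel lies in the span of the first two
# Laguerre functions when the pole matches its decay, so a handful of
# coefficients reproduces it essentially exactly
k = np.arange(n)
h = 0.3 * k * a ** k
coeffs = B @ h                               # projections onto the basis
h_hat = coeffs @ B                           # compact reconstruction
```

Choosing the pole near the system's dominant decay rate is what keeps the expansion short; the ARMA extension above reduces the required number of functions further by letting past outputs carry part of the dynamics.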

  11. [Quantitative estimation of evapotranspiration from Tahe forest ecosystem, Northeast China].

    PubMed

    Qu, Di; Fan, Wen-Yi; Yang, Jin-Ming; Wang, Xu-Peng

    2014-06-01

    Evapotranspiration (ET) is an important parameter in agriculture, meteorology and hydrology research, and also an important part of the global hydrological cycle. This paper applied the improved DHSVM distributed hydrological model to estimate the daily ET of the Tahe area in 2007, using leaf area index and other surface data extracted from TM remote sensing data, and slope, aspect and other topographic indices obtained from the digital elevation model. The relationship between daily ET and daily watershed outlet flow was built by a BP neural network, and a water balance equation was established for the studied watershed; together these were used to test the accuracy of the estimation. The results showed that the model could be applied in the study area. The annual total ET of the Tahe watershed was 234.01 mm. ET had a significant seasonal variation. ET was highest in summer, with an average daily value of 1.56 mm. The average daily ET in autumn and spring were 0.30 and 0.29 mm, respectively, and winter had the lowest ET value. Land cover type had a great effect on ET value: the broadleaf forest had a higher ET ability than the mixed forest, followed by the needle-leaf forest. PMID:25223020

  12. Higher Accurate Estimation of Axial and Bending Stiffnesses of Plates Clamped by Bolts

    NASA Astrophysics Data System (ADS)

    Naruse, Tomohiro; Shibutani, Yoji

    Equivalent stiffnesses of clamped plates should be prescribed not only to evaluate the strength of bolted joints by the “joint diagram” scheme but also to perform structural analyses of practical structures with many bolted joints. We estimated the axial and bending stiffnesses of clamped plates using Finite Element (FE) analyses that take into account the contact conditions on the bearing surfaces and between the plates. The FE models were constructed for bolted joints tightened with M8, 10, 12 and 16 bolts and plate thicknesses of 3.2, 4.5, 6.0 and 9.0 mm, and the axial and bending compliances were precisely evaluated. These compliances of clamped plates were compared with those from the VDI 2230 (2003) code, which assumes an equivalent conical compressive stress field in the plate. The code gives an axial stiffness about 11% larger and a bending stiffness about 22% larger, and it cannot be applied to clamped plates of differing thicknesses; it can therefore yield a lower bolt stress (an unsafe estimation). We modified the vertical angle tangent, tanφ, of the equivalent cone by adding a term in the logarithm of the thickness ratio t1/t2 and fitting to the analysis results. The modified tanφ estimates the axial compliance with an error from -1.5% to 6.8% and the bending compliance with an error from -6.5% to 10%. Furthermore, the modified tanφ can take the thickness difference into consideration.
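    The fitting idea, adding a log-thickness-ratio term to tanφ, can be sketched with synthetic data; the coefficients below are made up, not the paper's:

    ```python
    import numpy as np

    def fit_tan_phi(thickness_ratios, tan_phi_fea):
        """Fit tan(phi) = c0 + c1*ln(t1/t2) by least squares to FE-derived values."""
        A = np.column_stack([np.ones_like(thickness_ratios), np.log(thickness_ratios)])
        coef, *_ = np.linalg.lstsq(A, tan_phi_fea, rcond=None)
        return coef

    ratios = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
    synthetic = 0.6 + 0.1 * np.log(ratios)   # made-up "FE analysis results"
    c = fit_tan_phi(ratios, synthetic)
    ```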

  13. Accurate estimation of airborne ultrasonic time-of-flight for overlapping echoes.

    PubMed

    Sarabia, Esther G; Llata, Jose R; Robla, Sandra; Torre-Ferrero, Carlos; Oria, Juan P

    2013-01-01

    In this work, an analysis of the transmission of ultrasonic signals generated by piezoelectric sensors for air applications is presented. Based on this analysis, an ultrasonic response model is obtained for its application to the recognition of objects and structured environments for navigation by autonomous mobile robots. This model enables the analysis of the ultrasonic response that is generated using a pair of sensors in transmitter-receiver configuration using the pulse-echo technique. This is very interesting for recognizing surfaces that simultaneously generate a multiple echo response. This model takes into account the effect of the radiation pattern, the resonant frequency of the sensor, the number of cycles of the excitation pulse, the dynamics of the sensor and the attenuation with distance in the medium. This model has been developed, programmed and verified through a battery of experimental tests. Using this model a new procedure for obtaining accurate time of flight is proposed. This new method is compared with traditional ones, such as threshold or correlation, to highlight its advantages and drawbacks. Finally the advantages of this method are demonstrated for calculating multiple times of flight when the echo is formed by several overlapping echoes. PMID:24284774
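    One of the traditional time-of-flight estimators mentioned above, cross-correlation against the emitted burst, can be sketched with synthetic signals:

    ```python
    import numpy as np

    def tof_by_correlation(emitted, received, fs):
        """Estimate time of flight as the lag maximizing the cross-correlation."""
        corr = np.correlate(received, emitted, mode="full")
        lag = int(np.argmax(corr)) - (len(emitted) - 1)
        return lag / fs

    fs = 1_000_000.0                             # 1 MHz sampling rate
    t = np.arange(0, 200e-6, 1 / fs)             # 200 samples
    pulse = np.sin(2 * np.pi * 40e3 * t[:80])    # 40 kHz excitation burst
    emitted = np.zeros(t.size)
    emitted[: pulse.size] = pulse
    received = np.zeros(t.size)
    received[50 : 50 + pulse.size] = 0.3 * pulse  # attenuated echo, 50 samples later
    tof = tof_by_correlation(emitted, received, fs)
    ```

    For overlapping echoes this single-peak picker fails, which is the problem the paper's model-based procedure addresses.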

  14. Accurate Estimation of Airborne Ultrasonic Time-of-Flight for Overlapping Echoes

    PubMed Central

    Sarabia, Esther G.; Llata, Jose R.; Robla, Sandra; Torre-Ferrero, Carlos; Oria, Juan P.

    2013-01-01

    In this work, an analysis of the transmission of ultrasonic signals generated by piezoelectric sensors for air applications is presented. Based on this analysis, an ultrasonic response model is obtained for its application to the recognition of objects and structured environments for navigation by autonomous mobile robots. This model enables the analysis of the ultrasonic response that is generated using a pair of sensors in transmitter-receiver configuration using the pulse-echo technique. This is very interesting for recognizing surfaces that simultaneously generate a multiple echo response. This model takes into account the effect of the radiation pattern, the resonant frequency of the sensor, the number of cycles of the excitation pulse, the dynamics of the sensor and the attenuation with distance in the medium. This model has been developed, programmed and verified through a battery of experimental tests. Using this model a new procedure for obtaining accurate time of flight is proposed. This new method is compared with traditional ones, such as threshold or correlation, to highlight its advantages and drawbacks. Finally the advantages of this method are demonstrated for calculating multiple times of flight when the echo is formed by several overlapping echoes. PMID:24284774

  15. Quantitative and Rapid Estimation of H+ Fluxes in Membrane Vesicles 1

    PubMed Central

    Jennings, Ian R.; Rea, Philip A.; Leigh, Roger A.; Sanders, Dale

    1988-01-01

    Proton transport is often visualized in membrane vesicles by use of fluorescent monoamines which accumulate in acidic intravesicular compartments and undergo concentration-dependent fluorescence quenching. Software for an IBM microcomputer is described which permits logging and editing of changes in fluorescence monitored by a Perkin-Elmer LS-5 luminescence spectrometer. An accurate estimate of the instantaneous rate of fluorescence quenching or recovery is then facilitated by least squares fitting of fluorescence data to a nonlinear function. The software is tested with tonoplast vesicles from Beta vulgaris. Quenching of acridine orange fluorescence by ATP-driven (primary) transport and relaxation of quenching by Na+/H+ antiport can both be fitted with single exponential functions. Initial rates of ATP- and Na+-dependent fluorescence changes are derived and can be used for Km determinations. The method constitutes a simple and efficient alternative to manual analysis of analog fluorescence traces and results in a reliable quantitative measurement of the relative rate of proton transport in membrane vesicle preparations. PMID:16666064
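    Extracting an initial rate from a single-exponential quenching trace can be sketched as follows; this uses a log-linear least-squares shortcut on synthetic data rather than the paper's nonlinear fit:

    ```python
    import numpy as np

    def initial_rate(times, fluorescence):
        """Fit F(t) = F0*exp(-k*t) by log-linear least squares and return
        the instantaneous rate dF/dt at t = 0, which equals -k*F0."""
        slope, intercept = np.polyfit(times, np.log(fluorescence), 1)
        k, f0 = -slope, np.exp(intercept)
        return -k * f0

    t = np.linspace(0.0, 10.0, 50)
    f = 100.0 * np.exp(-0.2 * t)     # synthetic quenching trace (arbitrary units)
    rate0 = initial_rate(t, f)
    ```

    Initial rates obtained this way at several substrate concentrations are what feed a Km determination.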

  16. Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    NASA Astrophysics Data System (ADS)

    Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    The success of dental implant-supported prostheses is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. Hereto, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 tridimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67+/-34 μm and 108 μm, and angular misfits of 0.15+/-0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.

  17. An Energy-Efficient Strategy for Accurate Distance Estimation in Wireless Sensor Networks

    PubMed Central

    Tarrío, Paula; Bernardos, Ana M.; Casar, José R.

    2012-01-01

    In line with recent research efforts made to conceive energy saving protocols and algorithms and power sensitive network architectures, in this paper we propose a transmission strategy to minimize the energy consumption in a sensor network when using a localization technique based on the measurement of the strength (RSS) or the time of arrival (TOA) of the received signal. In particular, we find the transmission power and the packet transmission rate that jointly minimize the total consumed energy, while ensuring at the same time a desired accuracy in the RSS or TOA measurements. We also propose some corrections to these theoretical results to take into account the effects of shadowing and packet loss in the propagation channel. The proposed strategy is shown to be effective in realistic scenarios providing energy savings with respect to other transmission strategies, and also guaranteeing a given accuracy in the distance estimations, which will serve to guarantee a desired accuracy in the localization result. PMID:23202218
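    Distance estimation from RSS, the quantity whose accuracy the transmission strategy above trades against energy, conventionally inverts the log-distance path-loss model; the reference power, reference distance and path-loss exponent below are hypothetical:

    ```python
    def distance_from_rss(rss_dbm, p0_dbm=-40.0, d0=1.0, n=2.5):
        """Invert the log-distance path-loss model RSS(d) = P0 - 10*n*log10(d/d0),
        where P0 is the received power (dBm) at reference distance d0 (m) and
        n is the path-loss exponent of the environment."""
        return d0 * 10 ** ((p0_dbm - rss_dbm) / (10.0 * n))

    d = distance_from_rss(-65.0)   # a 25 dB drop below P0 with n = 2.5 -> 10 m
    ```

    Shadowing makes individual RSS samples noisy, which is why the paper jointly tunes transmission power and packet rate to hit a target estimation accuracy at minimum energy.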

  18. [Research on maize multispectral image accurate segmentation and chlorophyll index estimation].

    PubMed

    Wu, Qian; Sun, Hong; Li, Min-zan; Song, Yuan-yuan; Zhang, Yan-e

    2015-01-01

    In order to rapidly acquire maize growth information in the field, a non-destructive method of maize chlorophyll content index measurement was conducted based on multi-spectral imaging and image processing technology. The experiment was conducted at Yangling in Shaanxi province of China, and the crop was Zheng-dan 958 planted in an approximately 1 000 m × 600 m experimental field. Firstly, a 2-CCD multi-spectral image monitoring system was used to acquire the canopy images. The system was based on a dichroic prism, allowing precise separation of the visible (Blue (B), Green (G), Red (R): 400-700 nm) and near-infrared (NIR, 760-1 000 nm) bands. The multispectral images were output as RGB and NIR images via the system, which was fixed vertically to the ground at a distance of 2 m with an angular field of 50°. The SPAD index of each sample was measured synchronously to record the chlorophyll content index. Secondly, after image smoothing using an adaptive smoothing filter, the NIR image was selected to segment the maize leaves from the background, because the gray-level histograms of plant and soil background differ markedly. The NIR image segmentation algorithm proceeded in preliminary and accurate segmentation steps: (1) The OTSU image segmentation method and a variable threshold algorithm were compared, and the latter proved better for corn plant and weed segmentation. As a result, the variable threshold algorithm based on local statistics was selected for the preliminary image segmentation, and dilation and erosion were used to optimize the segmented image. (2) A region labeling algorithm was used to segment corn plants from the soil and weed background with an accuracy of 95.59%. The multi-spectral image of the maize canopy was then accurately segmented in the R, G and B bands separately. Thirdly, image parameters were abstracted based on the segmented visible and NIR images. The average gray
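    The OTSU method that served as the baseline for the variable-threshold comparison can be sketched as follows (standard Otsu between-class-variance maximization on a synthetic bimodal image, not the authors' code):

    ```python
    import numpy as np

    def otsu_threshold(gray):
        """Otsu's method: return the threshold maximizing the between-class
        variance of a 256-bin histogram of a uint8 image."""
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        total = gray.size
        best_t, best_var = 0, -1.0
        for t in range(1, 256):
            w0 = hist[:t].sum() / total          # background weight
            w1 = 1.0 - w0                        # foreground weight
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (np.arange(t) * hist[:t]).sum() / hist[:t].sum()
            mu1 = (np.arange(t, 256) * hist[t:]).sum() / hist[t:].sum()
            var = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
            if var > best_var:
                best_t, best_var = t, var
        return best_t

    # bimodal synthetic "NIR image": dark soil (~30) and bright plant (~200)
    img = np.concatenate([np.full(500, 30), np.full(500, 200)]).astype(np.uint8)
    thr = otsu_threshold(img)
    ```

    A locally adaptive variable threshold, as chosen in the paper, replaces this single global cut with per-neighborhood statistics.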

  19. The challenges of accurately estimating time of long bone injury in children.

    PubMed

    Pickett, Tracy A

    2015-07-01

    The ability to determine the time an injury occurred can be of crucial significance in forensic medicine and holds special relevance to the investigation of child abuse. However, dating paediatric long bone injury, including fractures, is nuanced by complexities specific to the paediatric population. These challenges include the ability to identify bone injury in a growing or only partially-calcified skeleton, different injury patterns seen within the spectrum of the paediatric population, the effects of bone growth on healing as a separate entity from injury, differential healing rates seen at different ages, and the relative scarcity of information regarding healing rates in children, especially the very young. The challenges posed by these factors are compounded by a lack of consistency in defining and categorizing healing parameters. This paper sets out the primary limitations of existing knowledge regarding estimating timing of paediatric bone injury. Consideration and understanding of the multitude of factors affecting bone injury and healing in children will assist those providing opinion in the medical-legal forum. PMID:26048508

  20. Accurate estimation of normal incidence absorption coefficients with confidence intervals using a scanning laser Doppler vibrometer

    NASA Astrophysics Data System (ADS)

    Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick

    2009-06-01

    When using optical measurements of the sound fields inside a glass tube, near the material under test, to estimate the reflection and absorption coefficients, not only these acoustical parameters but also confidence intervals can be determined. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results, obtained with this technique, is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible leading to the knowledge of confidence intervals. The use of a multi-sine, constructed on the resonance frequencies of the test tube, shows to be a very good alternative for the traditional periodic chirp. This signal offers the ability to obtain data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine compared to the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results of the testing of two building materials (an acoustic ceiling tile and linoleum) are presented and compared to supplier data.
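    Constructing a multi-sine on a set of resonance frequencies can be sketched as follows; the frequencies are hypothetical, not the test tube's actual resonances:

    ```python
    import numpy as np

    def multisine(freqs, fs, duration, rng=None):
        """Sum of unit-amplitude sines at the given frequencies with random
        phases, sampled at rate fs for the given duration."""
        rng = rng or np.random.default_rng(0)
        t = np.arange(0, duration, 1 / fs)
        phases = rng.uniform(0, 2 * np.pi, len(freqs))
        return t, sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))

    fs = 8000.0
    t, x = multisine([170.0, 340.0, 510.0], fs, duration=1.0)  # hypothetical resonances
    spectrum = np.abs(np.fft.rfft(x)) / len(x)                 # energy only at the chosen bins
    ```

    Concentrating all excitation energy on the resonance bins is what preserves the signal-to-noise ratio that a broadband periodic chirp spreads thin.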

  1. Accurate Estimation of Protein Folding and Unfolding Times: Beyond Markov State Models.

    PubMed

    Suárez, Ernesto; Adelman, Joshua L; Zuckerman, Daniel M

    2016-08-01

    Because standard molecular dynamics (MD) simulations are unable to access time scales of interest in complex biomolecular systems, it is common to "stitch together" information from multiple shorter trajectories using approximate Markov state model (MSM) analysis. However, MSMs may require significant tuning and can yield biased results. Here, by analyzing some of the longest protein MD data sets available (>100 μs per protein), we show that estimators constructed based on exact non-Markovian (NM) principles can yield significantly improved mean first-passage times (MFPTs) for protein folding and unfolding. In some cases, MSM bias of more than an order of magnitude can be corrected when identical trajectory data are reanalyzed by non-Markovian approaches. The NM analysis includes "history" information, higher order time correlations compared to MSMs, that is available in every MD trajectory. The NM strategy is insensitive to fine details of the states used and works well when a fine time-discretization (i.e., small "lag time") is used. PMID:27340835
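    Measuring an MFPT directly from trajectory data, with no Markov assumption, can be sketched on a toy discrete-state trajectory; the paper's non-Markovian estimators are considerably more elaborate:

    ```python
    import numpy as np

    def mfpt(traj, source, target, dt=1.0):
        """Mean first-passage time: average time from each entry into `source`
        until the next visit to `target`, read directly off the trajectory."""
        passages, start = [], None
        for i, s in enumerate(traj):
            if s == source and start is None:
                start = i
            elif s == target and start is not None:
                passages.append((i - start) * dt)
                start = None
        return float(np.mean(passages))

    traj = [0, 1, 1, 2, 0, 2, 0, 1, 2]       # toy 3-state trajectory
    t_fp = mfpt(traj, source=0, target=2)    # passages of length 3, 1 and 2 steps
    ```

    Because each passage is timed in full, history (how the system left the source) is retained, which a lag-time MSM discards.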

  2. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    NASA Astrophysics Data System (ADS)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, and the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Finally, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
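    A bare-bones Green-Kubo estimate, integrating the flux autocorrelation function over lag time, can be sketched on a synthetic correlated signal (prefactor and physical units omitted):

    ```python
    import numpy as np

    def green_kubo_kappa(flux, dt, cutoff, prefactor=1.0):
        """Green-Kubo sketch: kappa ~ prefactor * integral of the flux
        autocorrelation function, truncated at `cutoff` lags."""
        n = len(flux)
        acf = np.array([np.dot(flux[: n - k], flux[k:]) / n for k in range(cutoff)])
        return prefactor * np.trapz(acf, dx=dt)

    # synthetic exponentially correlated "heat flux" (AR(1) process)
    rng = np.random.default_rng(1)
    flux = np.zeros(5000)
    for i in range(1, flux.size):
        flux[i] = 0.9 * flux[i - 1] + rng.standard_normal()
    kappa = green_kubo_kappa(flux, dt=1.0, cutoff=200)
    ```

    The slow convergence of this integral with trajectory length is exactly the cost problem the abstract's effective-harmonic acceleration addresses.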

  3. Possibility of quantitative estimation of blood cell forms by the spatial-frequency spectrum analysis

    NASA Astrophysics Data System (ADS)

    Spiridonov, Igor N.; Safonova, Larisa P.; Samorodov, Andrey V.

    2000-05-01

    At present in hematology there are no quantitative estimates of parameters that are important for cell classification: cell form and nuclear form. Due to the absence of a correlation between morphological parameters and the parameters measured by hemoanalyzers, neither flow cytometers nor computer recognition systems provide a complete clinical blood analysis. Analysis of the spatial-frequency spectra (SFS) of blood samples (smears and liquid probes) permits estimating these forms quantitatively. Based on the results of theoretical and experimental research, an algorithm for quantitative form estimation by means of SFS parameters has been created, and criteria for the quality of these estimates have been proposed. A test bench based on coherent optical and digital processors was constructed. The results obtained could be applied to the automated classification of either normal or pathological blood cells in standard blood smears.

  4. ProViDE: A software tool for accurate estimation of viral diversity in metagenomic samples

    PubMed Central

    Ghosh, Tarini Shankar; Mohammed, Monzoorul Haque; Komanduri, Dinakar; Mande, Sharmila Shekhar

    2011-01-01

    Given the absence of universal marker genes in the viral kingdom, researchers typically use BLAST (with stringent E-values) for taxonomic classification of viral metagenomic sequences. Since the majority of metagenomic sequences originate from hitherto unknown viral groups, using stringent E-values results in most sequences remaining unclassified, while using less stringent E-values results in a high number of incorrect taxonomic assignments. The SOrt-ITEMS algorithm provides an approach to address the above issues. Based on alignment parameters, SOrt-ITEMS follows an elaborate work-flow for assigning reads originating from hitherto unknown archaeal/bacterial genomes. In SOrt-ITEMS, alignment parameter thresholds were generated by observing patterns of sequence divergence within and across various taxonomic groups belonging to the bacterial and archaeal kingdoms. However, many taxonomic groups within the viral kingdom lack a typical Linnean-like taxonomic hierarchy. In this paper, we present ProViDE (Program for Viral Diversity Estimation), an algorithm that uses a customized set of alignment parameter thresholds, specifically suited for viral metagenomic sequences. These thresholds capture the pattern of sequence divergence and the non-uniform taxonomic hierarchy observed within/across various taxonomic groups of the viral kingdom. Validation results indicate that the percentage of ‘correct’ assignments by ProViDE is around 1.7 to 3 times higher than that of the widely used similarity-based method MEGAN. The misclassification rate of ProViDE is around 3 to 19% (as compared to 5 to 42% by MEGAN), indicating significantly better assignment accuracy. The ProViDE software and a supplementary file (containing the supplementary figures and tables referred to in this article) are available for download from http://metagenomics.atc.tcs.com/binning/ProViDE/ PMID:21544173

  5. A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates

    NASA Astrophysics Data System (ADS)

    Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.

    2015-08-01

    We describe a new iteration method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates on continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the form of the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from the real digital object image. The method, which is flexible in adapting to any form of object image, has a high measurement accuracy along with a low computational complexity, due to the maximum-likelihood procedure that is implemented to obtain the best fit instead of a least-squares method and the Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of the automatic discovery of asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network for analysing observations as a tool to detect faint moving objects in frames.
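    As a simple stand-in for the paper's maximum-likelihood Gaussian fit, the idea of a subpixel position estimate can be illustrated with an intensity-weighted centroid on a synthetic star image:

    ```python
    import numpy as np

    def subpixel_centroid(image):
        """Intensity-weighted centroid: a moment-based subpixel position
        estimate (far cruder than a Gaussian maximum-likelihood fit)."""
        img = image.astype(float)
        ys, xs = np.mgrid[: img.shape[0], : img.shape[1]]
        total = img.sum()
        return (xs * img).sum() / total, (ys * img).sum() / total

    # symmetric synthetic "star" centered at (2.5, 2.5) on a 6x6 frame
    y, x = np.mgrid[:6, :6]
    star = np.exp(-(((x - 2.5) ** 2) + ((y - 2.5) ** 2)) / 2.0)
    cx, cy = subpixel_centroid(star)
    ```

    A centroid is unbiased only for symmetric, noise-free images; fitting the subpixel Gaussian model by maximum likelihood, as the paper does, is what keeps the accuracy high on real noisy frames.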

  6. Quantitative coronary angiography using image recovery techniques for background estimation in unsubtracted images

    SciTech Connect

    Wong, Jerry T.; Kamyar, Farzad; Molloi, Sabee

    2007-10-15

    in the differences between measured iodine mass in left anterior descending arteries using DSA and LA, MF, LI, or CDD were calculated. The standard deviations in the DSA-LA and DSA-MF differences (both {approx}21 mg) were approximately a factor of 3 greater than that of the DSA-LI and DSA-CDD differences (both {approx}7 mg). Local averaging and morphological filtering were considered inadequate for use in quantitative densitometry. Linear interpolation and curvature-driven diffusion image inpainting were found to be effective techniques for use with densitometry in quantifying iodine mass in vitro and in vivo. They can be used with unsubtracted images to estimate background anatomical signals and obtain accurate densitometry results. The high level of accuracy and precision in quantification associated with using LI and CDD suggests the potential of these techniques in applications where background mask images are difficult to obtain, such as lumen volume and blood flow quantification using coronary arteriography.
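    Linear-interpolation background estimation along a profile crossing a vessel, the LI technique found effective above, can be sketched on synthetic one-dimensional data:

    ```python
    import numpy as np

    def interpolate_background(profile, vessel_mask):
        """Estimate the background under a vessel by linearly interpolating
        the profile values across the masked (vessel) pixels."""
        x = np.arange(profile.size)
        bg = profile.astype(float).copy()
        bg[vessel_mask] = np.interp(x[vessel_mask], x[~vessel_mask], profile[~vessel_mask])
        return bg

    # flat background of 10 with iodine signal added where the vessel crosses
    profile = np.array([10.0, 10, 10, 25, 30, 25, 10, 10])
    mask = np.array([False, False, False, True, True, True, False, False])
    background = interpolate_background(profile, mask)
    iodine_signal = (profile - background)[mask].sum()
    ```

    Subtracting the interpolated background isolates the iodine contribution without needing a mask image, which is the appeal of LI (and CDD inpainting, its two-dimensional analogue) for densitometry.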

  7. Figure of merit of diamond power devices based on accurately estimated impact ionization processes

    NASA Astrophysics Data System (ADS)

    Hiraiwa, Atsushi; Kawarada, Hiroshi

    2013-07-01

    Although a high breakdown voltage or field is considered as a major advantage of diamond, there has been a large difference in breakdown voltages or fields of diamond devices in literature. Most of these apparently contradictory results did not correctly reflect material properties because of specific device designs, such as punch-through structure and insufficient edge termination. Once these data were removed, the remaining few results, including a record-high breakdown field of 20 MV/cm, were theoretically reproduced, exactly calculating ionization integrals based on the ionization coefficients that were obtained after compensating for possible errors involved in reported theoretical values. In this compensation, we newly developed a method for extracting an ionization coefficient from an arbitrary relationship between breakdown voltage and doping density in the Chynoweth's framework. The breakdown field of diamond was estimated to depend on the doping density more than other materials, and accordingly required to be compared at the same doping density. The figure of merit (FOM) of diamond devices, obtained using these breakdown data, was comparable to the FOMs of 4H-SiC and Wurtzite-GaN devices at room temperature, but was projected to be larger than the latter by more than one order of magnitude at higher temperatures about 300 °C. Considering the relatively undeveloped state of diamond technology, there is room for further enhancement of the diamond FOM, improving breakdown voltage and mobility. Through these investigations, junction breakdown was found to be initiated by electrons or holes in a p--type or n--type drift layer, respectively. The breakdown voltages in the two types of drift layers differed from each other in a strict sense but were practically the same. Hence, we do not need to care about the conduction type of drift layers, but should rather exactly calculate the ionization integral without approximating ionization coefficients by a power
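    The avalanche-breakdown criterion, an ionization integral of Chynoweth-form coefficients reaching unity, can be sketched numerically; the coefficients a and b below are hypothetical placeholders, not diamond's measured values:

    ```python
    import numpy as np

    def breakdown_reached(e_field, dx, a, b):
        """Avalanche criterion: the ionization integral of Chynoweth-form
        coefficients alpha(E) = a*exp(-b/E) over the drift layer reaches 1."""
        alpha = a * np.exp(-b / np.maximum(e_field, 1e-30))
        return np.trapz(alpha, dx=dx) >= 1.0

    # hypothetical coefficients; uniform field across a 1-um (1e-4 cm) layer
    a, b = 1.0e6, 1.0e7                # cm^-1 and V/cm, illustrative only
    high = breakdown_reached(np.full(1000, 2.0e7), 1e-7, a, b)  # 20 MV/cm
    low = breakdown_reached(np.full(1000, 2.0e6), 1e-7, a, b)   #  2 MV/cm
    ```

    In a real device analysis the field profile E(x) comes from the doping-dependent Poisson solution, which is why the abstract stresses evaluating the integral exactly rather than approximating alpha by a power law.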

  8. Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.

    NASA Astrophysics Data System (ADS)

    Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke

    2013-04-01

    temperature estimation using meteorological parameters. References: [1] Skoplaki, E. et al., 2008: A simple correlation for the operating temperature of photovoltaic modules of arbitrary mounting, Solar Energy Materials & Solar Cells 92, 1393-1402 [2] Skoplaki, E. et al., 2008: Operating temperature of photovoltaic modules: A survey of pertinent correlations, Renewable Energy 34, 23-29 [3] Koehl, M. et al., 2011: Modeling of the nominal operating cell temperature based on outdoor weathering, Solar Energy Materials & Solar Cells 95, 1638-1646 [4] Mattei, M. et al., 2005: Calculation of the polycrystalline PV module temperature using a simple method of energy balance, Renewable Energy 31, 553-567 [5] Kurtz, S. et al.: Evaluation of high-temperature exposure of rack-mounted photovoltaic modules
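    The simplest of the module-temperature correlations surveyed in the cited literature is the NOCT form, which notably ignores wind speed, the very effect under analysis here; a sketch:

    ```python
    def module_temperature(t_ambient, irradiance, noct=45.0):
        """NOCT-based PV cell temperature estimate:
        T_cell = T_amb + G/800 * (NOCT - 20), with irradiance G in W/m^2 and
        NOCT defined at 800 W/m^2 and 20 degC ambient (no wind dependence)."""
        return t_ambient + irradiance / 800.0 * (noct - 20.0)

    t_cell = module_temperature(t_ambient=25.0, irradiance=1000.0)
    ```

    Wind-aware correlations such as those of Skoplaki et al. and Mattei et al. referenced above add a convective term that lowers this estimate under forced cooling.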

  9. Quantitative Proteome Analysis of Human Plasma Following in vivo Lipopolysaccharide Administration using 16O/18O Labeling and the Accurate Mass and Time Tag Approach

    PubMed Central

    Qian, Wei-Jun; Monroe, Matthew E.; Liu, Tao; Jacobs, Jon M.; Anderson, Gordon A.; Shen, Yufeng; Moore, Ronald J.; Anderson, David J.; Zhang, Rui; Calvano, Steve E.; Lowry, Stephen F.; Xiao, Wenzhong; Moldawer, Lyle L.; Davis, Ronald W.; Tompkins, Ronald G.; Camp, David G.; Smith, Richard D.

    2007-01-01

    Identification of novel diagnostic or therapeutic biomarkers from human blood plasma would benefit significantly from quantitative measurements of the proteome constituents over a range of physiological conditions. Herein we describe an initial demonstration of proteome-wide quantitative analysis of human plasma. The approach utilizes post-digestion trypsin-catalyzed 16O/18O peptide labeling, two-dimensional liquid chromatography (LC)-Fourier transform ion cyclotron resonance (FTICR) mass spectrometry, and the accurate mass and time (AMT) tag strategy to identify and quantify peptides/proteins from complex samples. A peptide accurate mass and LC elution time AMT tag database was initially generated using tandem mass spectrometry (MS/MS) following extensive multidimensional LC separations to provide the basis for subsequent peptide identifications. The AMT tag database contains >8,000 putative identified peptides, providing 938 confident plasma protein identifications. The quantitative approach was applied, without depletion of highly abundant proteins, for comparative analyses of plasma samples from an individual prior to and 9 h after lipopolysaccharide (LPS) administration. Accurate quantification of changes in protein abundance was demonstrated by both 1:1 labeling of control plasma and the comparison between the plasma samples following LPS administration. A total of 429 distinct plasma proteins were quantified from the comparative analyses, and the protein abundances for 25 proteins, including several known inflammatory response mediators, were observed to change significantly following LPS administration. PMID:15753121

  10. Toward an Accurate and Inexpensive Estimation of CCSD(T)/CBS Binding Energies of Large Water Clusters.

    PubMed

    Sahu, Nityananda; Singh, Gurmeet; Nandi, Apurba; Gadre, Shridhar R

    2016-07-21

    Owing to the steep scaling behavior, highly accurate CCSD(T) calculations, the contemporary gold standard of quantum chemistry, are prohibitively difficult for moderate- and large-sized water clusters even with high-end hardware. The molecular tailoring approach (MTA), a fragmentation-based technique, is found to be useful for enabling such high-level ab initio calculations. The present work reports the CCSD(T) level binding energies of many low-lying isomers of large (H2O)n (n = 16, 17, and 25) clusters employing aug-cc-pVDZ and aug-cc-pVTZ basis sets within the MTA framework. Accurate estimation of the CCSD(T) level binding energies [within 0.3 kcal/mol of the respective full calculation (FC) results] is achieved after effecting the grafting procedure, a protocol for minimizing the errors in the MTA-derived energies arising due to the approximate nature of MTA. The CCSD(T) level grafting procedure presented here hinges upon the well-known fact that the MP2 method, which scales as O(N^5), can be a suitable starting point for approximating the highly accurate CCSD(T) energies, which scale as O(N^7). On account of the requirement of only an MP2-level FC on the entire cluster, the current methodology ultimately leads to a cost-effective solution for the CCSD(T) level accurate binding energies of large-sized water clusters even at the complete basis set limit utilizing off-the-shelf hardware. PMID:27351269
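    One natural reading of the grafting correction, shifting the fragment-based (MTA) CCSD(T) energy by the MP2-level fragmentation error, amounts to simple arithmetic; the energies below are made up for illustration:

    ```python
    def grafted_ccsdt(e_mta_ccsdt, e_mta_mp2, e_fc_mp2):
        """Grafting sketch (our reading of the abstract):
        E = E_MTA^CCSD(T) + (E_FC^MP2 - E_MTA^MP2),
        i.e. correct the MTA CCSD(T) energy by the error MTA makes at MP2,
        which requires only one MP2 full calculation on the whole cluster."""
        return e_mta_ccsdt + (e_fc_mp2 - e_mta_mp2)

    # hypothetical energies in hartree, not the paper's values
    e = grafted_ccsdt(e_mta_ccsdt=-1220.504, e_mta_mp2=-1218.402, e_fc_mp2=-1218.410)
    ```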

  11. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate

    PubMed Central

    Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul

    2015-01-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821
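    The two cheaper coverage estimators assessed above reduce to simple proportions; the counts below are hypothetical, not the study's data:

    ```python
    def transect_coverage(collared_seen, total_seen):
        """Mark-re-sight (transect) estimate: fraction of dogs sighted on
        transects that carry the vaccination collar (the 'mark')."""
        return collared_seen / total_seen

    def questionnaire_coverage(vaccinated_dogs, total_dogs):
        """Household-questionnaire estimate: vaccinated dogs (puppies
        included) over all dogs reported by the sampled households."""
        return vaccinated_dogs / total_dogs

    cov_t = transect_coverage(collared_seen=62, total_seen=100)
    cov_q = questionnaire_coverage(vaccinated_dogs=130, total_dogs=200)
    ```

    The study found the transect method tends to over-estimate and the questionnaire to under-estimate, which is one reason the more conservative questionnaire estimate is recommended.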

  12. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    PubMed

    Minyoo, Abel B; Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul; Lankester, Felix

    2015-12-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821
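
    The transect (mark-re-sight) coverage estimate described above reduces, in its simplest form, to the fraction of dogs observed along transects that carry a vaccination mark. A minimal sketch with hypothetical tallies (the counts below are illustrative, not the study's data):

```python
def mark_resight_coverage(marked_seen, total_seen):
    """Post-vaccination coverage as the fraction of marked (e.g. collared)
    dogs among all dogs sighted along village transects."""
    if total_seen == 0:
        raise ValueError("no dogs observed")
    return marked_seen / total_seen

# Hypothetical transect tallies, counting all dogs including puppies:
# 68 collared dogs out of 110 sighted.
coverage = mark_resight_coverage(68, 110)
```

    Counting puppies in both the numerator and the denominator matters here, consistent with the study's finding that the survey methods are accurate only when puppies are included.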

  13. Improving high-resolution quantitative precipitation estimation via fusion of multiple radar-based precipitation products

    NASA Astrophysics Data System (ADS)

    Rafieeinasab, Arezoo; Norouzi, Amir; Seo, Dong-Jun; Nelson, Brian

    2015-12-01

    For monitoring and prediction of water-related hazards in urban areas such as flash flooding, high-resolution hydrologic and hydraulic modeling is necessary. Because of large sensitivity and scale dependence of rainfall-runoff models to errors in quantitative precipitation estimates (QPE), it is very important that the accuracy of QPE be improved in high-resolution hydrologic modeling to the greatest extent possible. With the availability of multiple radar-based precipitation products in many areas, one may now consider fusing them to produce more accurate high-resolution QPE for a wide spectrum of applications. In this work, we formulate and comparatively evaluate four relatively simple procedures for such fusion based on Fisher estimation and its conditional bias-penalized variant: Direct Estimation (DE), Bias Correction (BC), Reduced-Dimension Bias Correction (RBC) and Simple Estimation (SE). They are applied to fuse the Multisensor Precipitation Estimator (MPE) and radar-only Next Generation QPE (Q2) products at the 15-min 1-km resolution (Experiment 1), and the MPE and Collaborative Adaptive Sensing of the Atmosphere (CASA) QPE products at the 15-min 500-m resolution (Experiment 2). The resulting fused estimates are evaluated using the 15-min rain gauge observations from the City of Grand Prairie in the Dallas-Fort Worth Metroplex (DFW) in north Texas. The main criterion used for evaluation is that the fused QPE improves over the ingredient QPEs at their native spatial resolutions, and that, at the higher resolution, the fused QPE improves not only over the ingredient higher-resolution QPE but also over the ingredient lower-resolution QPE trivially disaggregated using the ingredient high-resolution QPE. All four procedures assume that the ingredient QPEs are unbiased, which is not likely to hold true in reality even if real-time bias correction is in operation. To test robustness under more realistic conditions, the fusion procedures were evaluated with and
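
    Under the stated assumption that the ingredient QPEs are unbiased, the simplest fusion consistent with Fisher estimation is inverse-variance weighting of the two products at a common grid cell. A minimal sketch with hypothetical error variances (not the paper's DE/BC/RBC/SE formulations):

```python
def fuse_inverse_variance(estimates, variances):
    """Fuse unbiased estimates of the same rainfall amount by weighting
    each with the inverse of its error variance; the fused variance is
    the harmonic combination 1/sum(1/v)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * x for w, x in zip(weights, estimates)) / total
    return fused, 1.0 / total

# Hypothetical 15-min accumulations (mm) at one 1-km cell from two products.
fused, fused_var = fuse_inverse_variance([4.2, 5.0], [1.0, 0.25])
```

    The fused variance is never larger than the smallest ingredient variance, which is why fusion can only help when the unbiasedness assumption holds.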

  14. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT.

    PubMed

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R; La Riviere, Patrick J; Alessio, Adam M

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)(-1), cardiac output = 3, 5, 8 L min(-1)). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5% and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and range of techniques evaluated. This
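
    The qualitative slope-based method the authors compare against is essentially the maximum tissue upslope divided by peak arterial enhancement. A sketch under that interpretation (hypothetical time-attenuation values; units left as enhancement per second rather than ml (min g)(-1)):

```python
def slope_mbf_index(tissue_tac, aif, dt):
    """Slope-based perfusion index: maximum upslope of the tissue
    time-attenuation curve divided by the peak of the arterial input
    function. Proportional to MBF only under idealized assumptions,
    which is one source of the underestimation reported above."""
    upslopes = [(b - a) / dt for a, b in zip(tissue_tac, tissue_tac[1:])]
    return max(upslopes) / max(aif)

# Hypothetical curves sampled at 1 s intervals (enhancement in HU).
index = slope_mbf_index([0.0, 10.0, 30.0, 45.0, 50.0],
                        [0.0, 100.0, 200.0, 150.0, 100.0], dt=1.0)
```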

  15. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)-1, cardiac output = 3, 5, 8 L min-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5% and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and range of techniques evaluated. This suggests that

  16. Quantitative Proteome Analysis of Human Plasma Following in vivo Lipopolysaccharide Administration using O-16/O-18 Labeling and the Accurate Mass and Time Tag Approach

    SciTech Connect

    Qian, Weijun; Monroe, Matthew E.; Liu, Tao; Jacobs, Jon M.; Anderson, Gordon A.; Shen, Yufeng; Moore, Ronald J.; Anderson, David J.; Zhang, Rui; Calvano, Steven E.; Lowry, Stephen F.; Xiao, Wenzhong; Moldawer, Lyle L.; Davis, Ronald W.; Tompkins, Ronald G.; Camp, David G.; Smith, Richard D.

    2005-05-01

    Identification of novel diagnostic or therapeutic biomarkers from human blood plasma would benefit significantly from quantitative measurements of the proteome constituents over a range of physiological conditions. We describe here an initial demonstration of proteome-wide quantitative analysis of human plasma. The approach utilizes post-digestion trypsin-catalyzed 16O/18O labeling, two-dimensional liquid chromatography (LC)-Fourier transform ion cyclotron resonance (FTICR) mass spectrometry, and the accurate mass and time (AMT) tag strategy for identification and quantification of peptides/proteins from complex samples. A peptide mass and time tag database was initially generated using tandem mass spectrometry (MS/MS) following extensive multidimensional LC separations and the database serves as a ‘look-up’ table for peptide identification. The mass and time tag database contains >8,000 putative identified peptides, which yielded 938 confident plasma protein identifications. The quantitative approach was applied to the comparative analyses of plasma samples from an individual prior to and 9 hours after lipopolysaccharide (LPS) administration without depletion of highly abundant proteins. Accurate quantification of changes in protein abundance was demonstrated with both 1:1 labeling of control plasma and the comparison between the plasma samples following LPS administration. A total of 429 distinct plasma proteins were quantified from the comparative analyses and the protein abundances for 28 proteins were observed to be significantly changed following LPS administration, including several known inflammatory response mediators.
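
    In post-digestion 16O/18O labeling, relative abundance comes down to the intensity ratio of the light (16O) and heavy (18O) forms of each peptide. A toy calculation (hypothetical intensities; the real workflow also corrects for incomplete 18O incorporation and applies statistical tests):

```python
import math

def abundance_ratios(pairs):
    """Light/heavy intensity ratio and its log2 for each peptide;
    pairs is a list of (light_intensity, heavy_intensity)."""
    return [(l / h, math.log2(l / h)) for l, h in pairs]

def flag_changed(log2_ratio, cutoff=1.0):
    """Flag a 2-fold change (|log2 ratio| > 1); an illustrative cutoff,
    not the significance criterion used in the study."""
    return abs(log2_ratio) > cutoff

ratios = abundance_ratios([(2000.0, 1000.0), (500.0, 1000.0)])
```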

  17. Accurate state estimation from uncertain data and models: an application of data assimilation to mathematical models of human brain tumors

    PubMed Central

    2011-01-01

    Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck). PMID:22185645
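
    The Local Ensemble Transform Kalman Filter itself is beyond a short sketch, but the analysis step shared by ensemble Kalman methods can be shown for a scalar state: each ensemble member is pulled toward a (perturbed) observation by the gain K = P/(P+R). A stochastic-EnKF toy version with made-up numbers, not the LETKF used in the paper:

```python
import random
import statistics

def enkf_update(ensemble, obs, obs_var, rng):
    """One stochastic EnKF analysis step for a scalar state: estimate the
    prior variance P from the ensemble, form the Kalman gain K = P/(P+R),
    and nudge each member toward an independently perturbed observation."""
    prior_var = statistics.variance(ensemble)
    gain = prior_var / (prior_var + obs_var)
    return [x + gain * (obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(0)               # fixed seed for reproducibility
prior = [1.0, 2.0, 3.0, 2.5, 1.5]    # forecast ensemble (e.g. density at one voxel)
posterior = enkf_update(prior, obs=2.2, obs_var=0.01, rng=rng)
```

    After the update the ensemble spread contracts, reflecting the information gained from the observation; the LETKF achieves a similar contraction with a deterministic transform localized in space.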

  18. Self-aliquoting microarray plates for accurate quantitative matrix-assisted laser desorption/ionization mass spectrometry.

    PubMed

    Pabst, Martin; Fagerer, Stephan R; Köhling, Rudolf; Küster, Simon K; Steinhoff, Robert; Badertscher, Martin; Wahl, Fabian; Dittrich, Petra S; Jefimovs, Konstantins; Zenobi, Renato

    2013-10-15

    Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is a fast analysis tool employed for the detection of a broad range of analytes. However, MALDI-MS has a reputation of not being suitable for quantitative analysis. Inhomogeneous analyte/matrix co-crystallization, spot-to-spot inhomogeneity, as well as a typically low number of replicates are the main contributing factors. Here, we present a novel MALDI sample target for quantitative MALDI-MS applications, which addresses the limitations mentioned above. The platform is based on the recently developed microarray for mass spectrometry (MAMS) technology and contains parallel lanes of hydrophilic reservoirs. Samples are not pipetted manually but deposited by dragging one or several sample droplets with a metal sliding device along these lanes. Sample is rapidly and automatically aliquoted into the sample spots due to the interplay of hydrophilic/hydrophobic interactions. With a few microliters of sample, it is possible to aliquot up to 40 replicates within seconds, each aliquot containing just 10 nL. The analyte droplet dries immediately and homogeneously, and consumption of the whole spot during MALDI-MS analysis is typically accomplished within a few seconds. We evaluated these sample targets with respect to their suitability for use with different samples and matrices. Furthermore, we tested their application for generating calibration curves of standard peptides with α-cyano-4-hydroxycinnamic acid as a matrix. For angiotensin II and [Glu(1)]-fibrinopeptide B we achieved coefficients of determination (r(2)) greater than 0.99 without the use of internal standards. PMID:24003910
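
    A calibration curve like the angiotensin II one is an ordinary least-squares line judged by its coefficient of determination. A self-contained sketch (hypothetical intensity/concentration pairs, not the published data):

```python
def linfit(xs, ys):
    """Least-squares line y = slope*x + intercept plus r^2, as used to
    judge a MALDI-MS calibration (signal vs. analyte concentration)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - slope * x - intercept) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Perfectly linear toy data gives r^2 = 1; replicate aliquots from the
# MAMS plate would scatter around such a line.
slope, intercept, r2 = linfit([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```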

  19. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the exemplar…

  20. Quantitative Estimation of Thermal Contact Conductance for a Real Front-end Component

    NASA Astrophysics Data System (ADS)

    Sano, Mutsumi; Takahashi, Sunao; Mochizuki, Tetsuro; Watanabe, Atsuo; Oura, Masaki; Kitamura, Hideo

    2007-01-01

    The thermal contact conductance (TCC) of a real front-end component at SPring-8 was estimated quantitatively by comparing the results of the experiments and those of the finite element analyses. This study contributes to the ongoing program to inquire into the real thermal resistance including the life of all front-end high-heat-load components.
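
    As a reminder of the quantity being estimated: thermal contact conductance is the heat flux across the joint divided by the temperature drop at the interface, h_c = q'' / dT. The paper infers it by matching finite element results to experiment; the defining relation itself is one line (the numbers below are hypothetical, not SPring-8 values):

```python
def contact_conductance(heat_flux_w_m2, dT_interface_k):
    """Thermal contact conductance h_c = q'' / dT, in W m^-2 K^-1."""
    return heat_flux_w_m2 / dT_interface_k

# Hypothetical front-end absorber values: 5e5 W/m^2 through the joint,
# 25 K temperature jump across the contact.
h_c = contact_conductance(5e5, 25.0)
```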

  1. Quantitative Estimates of the Social Benefits of Learning, 1: Crime. Wider Benefits of Learning Research Report.

    ERIC Educational Resources Information Center

    Feinstein, Leon

    The cost benefits of lifelong learning in the United Kingdom were estimated, based on quantitative evidence. Between 1975-1996, 43 police force areas in England and Wales were studied to determine the effect of wages on crime. It was found that a 10 percent rise in the average pay of those on low pay reduces the overall area property crime rate by…

  2. A QUANTITATIVE APPROACH FOR ESTIMATING EXPOSURE TO PESTICIDES IN THE AGRICULTURAL HEALTH STUDY

    EPA Science Inventory

    We developed a quantitative method to estimate chemical-specific pesticide exposures in a large prospective cohort study of over 58,000 pesticide applicators in North Carolina and Iowa. An enrollment questionnaire was administered to applicators to collect basic time- and inten...

  3. Reservoir evaluation of thin-bedded turbidites and hydrocarbon pore thickness estimation for an accurate quantification of resource

    NASA Astrophysics Data System (ADS)

    Omoniyi, Bayonle; Stow, Dorrik

    2016-04-01

    One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin and medium-bedded turbidites (<10 cm and <30 cm, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact on reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
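
    Hydrocarbon pore thickness as used above is the sum over reservoir intervals of net thickness times porosity times hydrocarbon saturation. A sketch with an invented three-layer stack (values chosen only to fall inside the reported 21.53-39.90 ft range):

```python
def hydrocarbon_pore_thickness(layers):
    """HPT = sum of (net thickness * porosity * oil saturation); layers is
    a list of (h_ft, porosity_fraction, oil_saturation_fraction)."""
    return sum(h * phi * s_o for h, phi, s_o in layers)

# Hypothetical thick-bedded interval plus two thin-bedded intervals (ft).
hpt = hydrocarbon_pore_thickness([(120.0, 0.22, 0.80),
                                  (90.0, 0.18, 0.70),
                                  (50.0, 0.15, 0.60)])
```

    Dropping the last (thin-bedded) layer would understate HPT by about 4.5 ft in this toy stack, which is the kind of bypassed pay the authors quantify.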

  4. Quantitative optical coherence tomography by maximum a-posteriori estimation of signal intensity

    NASA Astrophysics Data System (ADS)

    Chan, Aaron C.; Kurokawa, Kazuhiro; Makita, Shuichi; Hong, Young-Joo; Miyazawa, Arata; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    A maximum a-posteriori (MAP) estimator for signal amplitude of optical coherence tomography (OCT) is presented. This estimator provides an accurate and low bias estimation of the correct OCT signal amplitude even at very low signal-to-noise ratios. As a result, contrast improvement of retinal OCT images is demonstrated. In addition, this estimation method allows for an estimation reliability to be calculated. By combining the MAP estimator with a previously demonstrated attenuation imaging algorithm, we present attenuation coefficient images of the retina. From the reliability derived from the MAP image one can also determine which regions of the attenuation images are unreliable. From Jones matrix OCT data of the optic nerve head (ONH), we also demonstrate that combining MAP with polarization diversity (PD) OCT images can generate intensity images with fewer birefringence artifacts, resulting in better attenuation images. Analysis of the MAP intensity images shows higher image SNR than averaging.
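
    A grid-search MAP estimator makes the idea concrete: maximize likelihood times prior over candidate amplitudes. The sketch below is generic (Gaussian likelihood, exponential prior, made-up numbers); the paper's estimator uses the actual OCT amplitude statistics (Rician), which are not reproduced here:

```python
import math

def map_estimate(grid, likelihood, prior):
    """Maximum a-posteriori estimate by exhaustive search: the grid value
    maximizing the (unnormalized) posterior likelihood(a) * prior(a)."""
    return max(grid, key=lambda a: likelihood(a) * prior(a))

# Hypothetical: noisy amplitude measurement of 3.0, prior favoring
# small amplitudes; the MAP is pulled slightly below the measurement.
grid = [i * 0.1 for i in range(101)]
est = map_estimate(grid,
                   likelihood=lambda a: math.exp(-(a - 3.0) ** 2 / 2.0),
                   prior=lambda a: math.exp(-0.5 * a))
```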

  5. Accurate quantitative 13C NMR spectroscopy: repeatability over time of site-specific 13C isotope ratio determination.

    PubMed

    Caytan, Elsa; Botosoa, Eliot P; Silvestre, Virginie; Robins, Richard J; Akoka, Serge; Remaud, Gérald S

    2007-11-01

    The stability over time (repeatability) for the determination of site-specific 13C/12C ratios at natural abundance by quantitative 13C NMR spectroscopy has been tested on three probes: enriched bilabeled [1,2-13C2]ethanol; ethanol at natural abundance; and vanillin at natural abundance. It is shown in all three cases that the standard deviation for a series of measurements taken every 2-3 months over periods between 9 and 13 months is equal to or smaller than the standard deviation calculated from 5-10 replicate measurements made on a single sample. The precision which can be achieved using the present analytical 13C NMR protocol is higher than the prerequisite value of 1-2 per thousand for the determination of site-specific 13C/12C ratios at natural abundance (13C-SNIF-NMR). Hence, this technique permits the discrimination of very small variations in 13C/12C ratios between carbon positions, as found in biogenic natural products. This observed stability over time in 13C NMR spectroscopy indicates that further improvements in precision will depend primarily on improved signal-to-noise ratio. PMID:17900175
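
    The repeatability claim can be phrased as a comparison of two standard deviations: measurements of one sample spread over months versus replicates run back-to-back. Illustrative delta-13C values (per mil, invented):

```python
import statistics

def stable_over_time(longterm, replicates):
    """True if the SD of measurements spread over months does not exceed
    the SD of immediate replicate measurements on a single sample."""
    return statistics.stdev(longterm) <= statistics.stdev(replicates)

months = [-27.1, -27.0, -27.2, -27.1, -27.0]      # one per 2-3 months
replicates = [-27.3, -26.9, -27.0, -27.2, -26.8]  # same day, same sample
ok = stable_over_time(months, replicates)
```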

  6. Application of an Effective Statistical Technique for an Accurate and Powerful Mining of Quantitative Trait Loci for Rice Aroma Trait

    PubMed Central

    Golestan Hashemi, Farahnaz Sadat; Rafii, Mohd Y.; Ismail, Mohd Razi; Mohamed, Mahmud Tengku Muda; Rahim, Harun A.; Latif, Mohammad Abdul; Aslani, Farzad

    2015-01-01

    When a phenotype of interest is associated with an external/internal covariate, covariate inclusion in quantitative trait loci (QTL) analyses can diminish residual variation and subsequently enhance the ability of QTL detection. In the in vitro synthesis of 2-acetyl-1-pyrroline (2AP), the main fragrance compound in rice, the thermal processing during the Maillard-type reaction between proline and carbohydrate reduction produces a roasted, popcorn-like aroma. Hence, for the first time, we included the proline amino acid, an important precursor of 2AP, as a covariate in our QTL mapping analyses to precisely explore the genetic factors affecting natural variation for rice scent. Consequently, two QTLs were traced on chromosomes 4 and 8. They explained from 20% to 49% of the total aroma phenotypic variance. Additionally, by saturating the interval harboring the major QTL using gene-based primers, a putative allele of fgr (major genetic determinant of fragrance) was mapped in the QTL on the 8th chromosome in the interval RM223-SCU015RM (1.63 cM). These loci supported previous studies of different accessions. Such QTLs can be widely used by breeders in crop improvement programs and for further fine mapping. Moreover, no previous studies and findings were found on simultaneous assessment of the relationship among 2AP, proline and fragrance QTLs. Therefore, our findings can help further our understanding of the metabolomic and genetic basis of 2AP biosynthesis in aromatic rice. PMID:26061689
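
    The covariate argument is statistical: regressing the phenotype on proline first removes variance that would otherwise sit in the residual, so QTL effects are tested against less noise. A sketch of that variance reduction with invented phenotype/covariate values:

```python
import statistics

def residual_variance(y, x):
    """Residual variance of y after ordinary least-squares adjustment for
    a single covariate x (here: aroma phenotype adjusted for proline)."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    resid = [yi - my - slope * (xi - mx) for xi, yi in zip(x, y)]
    return sum(r * r for r in resid) / (n - 2)

proline = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical covariate levels
aroma = [2.1, 3.9, 6.2, 8.0, 9.8]     # hypothetical phenotype scores
adjusted = residual_variance(aroma, proline)
raw = statistics.variance(aroma)
```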

  7. Application of an Effective Statistical Technique for an Accurate and Powerful Mining of Quantitative Trait Loci for Rice Aroma Trait.

    PubMed

    Golestan Hashemi, Farahnaz Sadat; Rafii, Mohd Y; Ismail, Mohd Razi; Mohamed, Mahmud Tengku Muda; Rahim, Harun A; Latif, Mohammad Abdul; Aslani, Farzad

    2015-01-01

    When a phenotype of interest is associated with an external/internal covariate, covariate inclusion in quantitative trait loci (QTL) analyses can diminish residual variation and subsequently enhance the ability of QTL detection. In the in vitro synthesis of 2-acetyl-1-pyrroline (2AP), the main fragrance compound in rice, the thermal processing during the Maillard-type reaction between proline and carbohydrate reduction produces a roasted, popcorn-like aroma. Hence, for the first time, we included the proline amino acid, an important precursor of 2AP, as a covariate in our QTL mapping analyses to precisely explore the genetic factors affecting natural variation for rice scent. Consequently, two QTLs were traced on chromosomes 4 and 8. They explained from 20% to 49% of the total aroma phenotypic variance. Additionally, by saturating the interval harboring the major QTL using gene-based primers, a putative allele of fgr (major genetic determinant of fragrance) was mapped in the QTL on the 8th chromosome in the interval RM223-SCU015RM (1.63 cM). These loci supported previous studies of different accessions. Such QTLs can be widely used by breeders in crop improvement programs and for further fine mapping. Moreover, no previous studies and findings were found on simultaneous assessment of the relationship among 2AP, proline and fragrance QTLs. Therefore, our findings can help further our understanding of the metabolomic and genetic basis of 2AP biosynthesis in aromatic rice. PMID:26061689

  8. Validation of Reference Genes for Accurate Normalization of Gene Expression in Lilium davidii var. unicolor for Real Time Quantitative PCR

    PubMed Central

    Zhang, Jing; Teixeira da Silva, Jaime A.; Wang, ChunXia; Sun, HongMei

    2015-01-01

    Lilium is an important commercial market flower bulb. qRT-PCR is an extremely important technique to track gene expression levels. The requirement of suitable reference genes for normalization has become increasingly significant and exigent. The expression of internal control genes in living organisms varies considerably under different experimental conditions. For economically important Lilium, only a limited number of reference genes applied in qRT-PCR have been reported to date. In this study, the expression stability of 12 candidate genes including α-TUB, β-TUB, ACT, eIF, GAPDH, UBQ, UBC, 18S, 60S, AP4, FP, and RH2, in a diverse set of 29 samples representing different developmental processes, three stress treatments (cold, heat, and salt) and different organs, has been evaluated. For different organs, the combination of ACT, GAPDH, and UBQ is appropriate whereas ACT together with AP4, or ACT along with GAPDH is suitable for normalization of leaves and scales at different developmental stages, respectively. In leaves, scales and roots under stress treatments, FP, ACT and AP4, respectively showed the most stable expression. This study provides a guide for the selection of a reference gene under different experimental conditions, and will benefit future research on more accurate gene expression studies in a wide variety of Lilium genotypes. PMID:26509446
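
    A crude version of such stability screening ranks candidates by the coefficient of variation of their Cq values across conditions; dedicated tools (geNorm, NormFinder, BestKeeper) use more refined pairwise measures. A sketch with invented Cq values:

```python
import statistics

def most_stable_gene(cq_by_gene):
    """Return the candidate reference gene whose Cq values have the
    smallest coefficient of variation across samples (a simple proxy
    for expression stability, not the geNorm M-value)."""
    def cv(values):
        return statistics.stdev(values) / statistics.mean(values)
    return min(cq_by_gene, key=lambda gene: cv(cq_by_gene[gene]))

# Hypothetical Cq values over four conditions for three candidates.
cq = {"ACT":   [20.1, 20.3, 20.2, 20.2],
      "GAPDH": [18.0, 19.5, 17.2, 20.1],
      "UBQ":   [22.0, 22.4, 21.8, 22.1]}
stable = most_stable_gene(cq)
```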

  9. Can endocranial volume be estimated accurately from external skull measurements in great-tailed grackles (Quiscalus mexicanus)?

    PubMed

    Logan, Corina J; Palmstrom, Christin R

    2015-01-01

    There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder for male and female great-tailed grackles. We found that while females had higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. We found no accuracy in the ability of external skull measures to predict CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex. PMID:26082858
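
    The validation question here is a correlation one: how tightly do external skull measurements track CT-derived volumes? A sketch of the Pearson correlation used for such checks (invented measurements, not the grackle data):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Hypothetical skull widths (mm) vs. CT endocranial volumes (mL).
r = pearson_r([18.2, 19.1, 20.3, 21.0, 22.4],
              [2.9, 3.1, 3.0, 3.3, 3.4])
```

    As the authors stress, even a respectable r is not enough on its own; the width of the prediction interval decides whether individual differences are detectable.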

  10. A plan for accurate estimation of daily area-mean rainfall during the CaPE experiment

    NASA Technical Reports Server (NTRS)

    Duchon, Claude E.

    1992-01-01

    The Convection and Precipitation/Electrification (CaPE) experiment took place in east central Florida from 8 July to 18 August, 1991. There were five research themes associated with CaPE. In broad terms they are: investigation of the evolution of the electric field in convective clouds, determination of meteorological and electrical conditions associated with lightning, development of mesoscale numerical forecasts (2-12 hr) and nowcasts (less than 2 hr) of convective initiation and remote estimation of rainfall. It is the last theme coupled with numerous raingage and streamgage measurements, satellite and aircraft remote sensing, radiosondes and other meteorological measurements in the atmospheric boundary layer that provide the basis for determining the hydrologic cycle for the CaPE experiment area. The largest component of the hydrologic cycle in this region is rainfall. An accurate determination of daily area-mean rainfall is important in correctly modeling its apportionment into runoff, infiltration and evapotranspiration. In order to achieve this goal a research plan was devised and initial analysis begun. The overall research plan is discussed with special emphasis placed on the adjustment of radar rainfall estimates to raingage rainfall.

  11. Estimating Cell Concentration in Three-Dimensional Engineered Tissues using High Frequency Quantitative Ultrasound

    PubMed Central

    Mercado, Karla P.; Helguera, María; Hocking, Denise C.; Dalecki, Diane

    2015-01-01

    Histology and biochemical assays are standard techniques for estimating cell concentration in engineered tissues. However, these techniques are destructive and cannot be used for longitudinal monitoring of engineered tissues during fabrication processes. The goal of this study was to develop high-frequency quantitative ultrasound techniques to nondestructively estimate cell concentration in three-dimensional (3-D) engineered tissue constructs. High-frequency ultrasound backscatter measurements were obtained from cell-embedded, 3-D agarose hydrogels. Two broadband single-element transducers (center frequencies of 30 and 38 MHz) were employed over the frequency range of 13 to 47 MHz. Agarose gels with cell concentrations ranging from 1×10^4 to 1×10^6 cells mL^-1 were investigated. The integrated backscatter coefficient (IBC), a quantitative ultrasound spectral parameter, was calculated and used to estimate cell concentration. Accuracy and precision of this technique were analyzed by calculating the percent error and coefficient of variation of cell concentration estimates. The IBC increased linearly with increasing cell concentration. Axial and lateral dimensions of regions of interest that resulted in errors of less than 20% were determined. Images of cell concentration estimates were employed to visualize quantitatively regional differences in cell concentrations. This ultrasound technique provides the capability to rapidly quantify cell concentration within 3-D tissue constructs noninvasively and nondestructively. PMID:24627179
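
    Because the IBC rises linearly with cell concentration, estimation is a matter of inverting a fitted calibration line. A sketch with a made-up calibration (the slope, intercept, and measured IBC below are illustrative, not values from the paper):

```python
def concentration_from_ibc(ibc, slope, intercept):
    """Invert a linear IBC-vs-concentration calibration to estimate
    cell concentration (cells/mL) from a measured integrated
    backscatter coefficient."""
    return (ibc - intercept) / slope

# Hypothetical calibration IBC = 2e-6 * C + 1e-3, measured IBC = 2.001.
cells_per_ml = concentration_from_ibc(2.001, slope=2e-6, intercept=1e-3)
```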

  12. Quantitative Risk reduction estimation Tool For Control Systems, Suggested Approach and Research Needs

    SciTech Connect

    Miles McQueen; Wayne Boyer; Mark Flynn; Sam Alessi

    2006-03-01

    For the past year we have applied a variety of risk assessment technologies to evaluate the risk to critical infrastructure from cyber attacks on control systems. More recently, we identified the need for a stand-alone control system risk reduction estimation tool to provide owners and operators of control systems with a more usable, reliable, and credible method for managing the risks from cyber attack. Risk is defined as the probability of a successful attack times the value of the resulting loss, typically measured in lives and dollars. Qualitative and ad hoc techniques for measuring risk do not provide sufficient support for cost-benefit analyses associated with cyber security mitigation actions. To address the need for better quantitative risk reduction models we surveyed previous quantitative risk assessment research; evaluated currently available tools; developed new quantitative techniques [17] [18]; implemented a prototype analysis tool to demonstrate how such a tool might be used; used the prototype to test a variety of underlying risk calculational engines (e.g. attack tree, attack graph); and identified technical and research needs. We concluded that significant gaps still exist and difficult research problems remain for quantitatively assessing the risk to control system components and networks, but that a useable quantitative risk reduction estimation tool is not beyond reach.
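
    The risk definition in the abstract translates directly into an expected-loss calculation, which is what makes cost-benefit comparisons of mitigations possible. A toy example with invented probabilities and losses (not outputs of the prototype tool):

```python
def risk(p_successful_attack, loss_dollars):
    """Risk = probability of a successful attack times the resulting loss."""
    return p_successful_attack * loss_dollars

def risk_reduction(p_before, p_after, loss_dollars):
    """Expected-loss reduction from a mitigation that lowers the attack
    success probability (hypothetical inputs)."""
    return risk(p_before, loss_dollars) - risk(p_after, loss_dollars)

# Hypothetical: a control-system mitigation cuts annual attack success
# probability from 10% to 2% against a $5M loss.
saved = risk_reduction(0.10, 0.02, 5_000_000)
```

    A mitigation costing less than this expected annual saving passes the cost-benefit test under these toy numbers.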

  13. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid–Base and Ligand Binding Equilibria of Aquacobalamin

    DOE PAGES

    Johnston, Ryne C.; Zhou, Jing; Smith, Jeremy C.; Parks, Jerry M.

    2016-07-08

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co ligand binding equilibrium constants (Kon/off), pKas, and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for Co(III), Co(II), and Co(I) species, respectively, and the second model features saturation of each vacant axial coordination site on Co(II) and Co(I) species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas, and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co axial ligand binding, leading to substantial errors in predicted

  14. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid-Base and Ligand Binding Equilibria of Aquacobalamin.

    PubMed

    Johnston, Ryne C; Zhou, Jing; Smith, Jeremy C; Parks, Jerry M

    2016-08-01

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co-ligand binding equilibrium constants (Kon/off), pKas, and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for Co(III), Co(II), and Co(I) species, respectively, and the second model features saturation of each vacant axial coordination site on Co(II) and Co(I) species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. 
Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co-axial ligand binding, leading to substantial errors in predicted pKas and

  15. Non-Invasive Radioiodine Imaging for Accurate Quantitation of NIS Reporter Gene Expression in Transplanted Hearts

    PubMed Central

    Ricci, Davide; Mennander, Ari A; Pham, Linh D; Rao, Vinay P; Miyagi, Naoto; Byrne, Guerard W; Russell, Stephen J; McGregor, Christopher GA

    2008-01-01

    Objectives We studied the concordance of transgene expression in the transplanted heart using bicistronic adenoviral vectors coding for a transgene of interest (human carcinoembryonic antigen: hCEA; beta human chorionic gonadotropin: βhCG) and for a marker imaging transgene (human sodium iodide symporter: hNIS). Methods Inbred Lewis rats were used for syngeneic heterotopic cardiac transplantation. Donor rat hearts were perfused ex vivo for 30 minutes prior to transplantation with University of Wisconsin (UW) solution (n=3), or with 10⁹ pfu/ml of adenovirus expressing hNIS (Ad-NIS; n=6), hNIS-hCEA (Ad-NIS-CEA; n=6), or hNIS-βhCG (Ad-NIS-CG; n=6). On post-operative days (POD) 5, 10, and 15, all animals underwent micro-SPECT/CT imaging of the donor hearts after tail vein injection of 1000 μCi ¹²³I, with blood sample collection for hCEA and βhCG quantification. Results Significantly higher image intensity was noted in the hearts perfused with Ad-NIS (1.1±0.2; 0.9±0.07), Ad-NIS-CEA (1.2±0.3; 0.9±0.1), and Ad-NIS-CG (1.1±0.1; 0.9±0.1) compared to the UW group (0.44±0.03; 0.47±0.06) on POD 5 and 10 (p<0.05). Serum levels of hCEA and βhCG increased in animals showing high cardiac ¹²³I uptake, but not in those with lower uptake. Above this threshold, image intensities correlated well with serum levels of hCEA and βhCG (R²=0.99 and R²=0.96, respectively). Conclusions These data demonstrate that hNIS is an excellent reporter gene for the transplanted heart. The expression level of hNIS can be accurately and non-invasively monitored by serial radioisotopic single photon emission computed tomography (SPECT) imaging. High concordance has been demonstrated between imaging and soluble marker peptides at the maximum transgene expression on POD 5. PMID:17980613

  16. Estimating base rates of impairment in neuropsychological test batteries: a comparison of quantitative models.

    PubMed

    Decker, Scott L; Schneider, W Joel; Hale, James B

    2012-01-01

    Neuropsychologists frequently rely on a battery of neuropsychological tests which are normally distributed to determine impaired functioning. The statistical likelihood of Type I error in clinical decision-making is in part determined by the base rate of normative individuals obtaining atypical performance on neuropsychological tests. Base rates are most accurately obtained by co-normed measures, but this is rarely accomplished in neuropsychological testing. Several statistical methods have been proposed to estimate base rates for tests that are not co-normed. This study compared two statistical approaches (binomial and Monte Carlo models) used to estimate the base rates for flexible test batteries. The two approaches were compared against empirically derived base rates for a multitest co-normed battery of cognitive measures. Estimates were compared across a variety of conditions including age and different α levels (N = 3,356). Monte Carlo R² estimates ranged from .980 to .997 across five different age groups, indicating a good fit. In contrast, the binomial model fit estimates ranged from 0.387 to 0.646. Results confirm that the binomial model is insufficient for estimating base rates because it does not take into account correlations among measures in a multitest battery. Although the Monte Carlo model produced more accurate results, minor biases occurred that are likely due to skewness and kurtosis of test variables. Implications for future research and applied practice are discussed. PMID:22172567
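
    The difference between the two models can be illustrated in a few lines. The sketch below assumes a hypothetical battery of 10 tests with a uniform inter-test correlation of 0.5 and an impairment cutoff at α = .05; none of these numbers comes from the study.

```python
import numpy as np

# Monte Carlo sketch of the base rate of "impairment" in a correlated battery:
# the proportion of normal individuals with at least one atypically low score.
rng = np.random.default_rng(0)
n_tests, rho, cutoff = 10, 0.5, -1.645  # hypothetical battery parameters

# Uniform-correlation covariance matrix across the battery.
cov = np.full((n_tests, n_tests), rho)
np.fill_diagonal(cov, 1.0)

scores = rng.multivariate_normal(np.zeros(n_tests), cov, size=100_000)
mc_base_rate = np.mean((scores < cutoff).any(axis=1))

# The binomial model ignores the correlations among tests and therefore
# overestimates the base rate for positively correlated batteries.
binom_base_rate = 1 - (1 - 0.05) ** n_tests
```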

  17. [Estimation of quantitative proteinuria using a new dipstick in random urine samples].

    PubMed

    Morishita, Yoshiyuki; Kusano, Eiji; Umino, Tetsuo; Nemoto, Jun; Tanba, Kaichirou; Ando, Yasuhiro; Muto, Shigeaki; Asano, Yasushi

    2004-02-01

    Proteinuria is quantified for diagnostic and prognostic purposes and to assess responses to therapy. Methods used to assess urinary protein include 24-hour urine collection (24-Up) and determination of the ratio of protein to creatinine concentration (Up/Ucr) in simple voided urine samples (Up/Ucr quantitative method). However, these methods are costly and time consuming. The Multistix PRO 11 (Bayer Medical Co., Ltd., Tokyo, Japan) is a new urine dipstick that allows rapid measurement of Up/Ucr. Results obtained with the Multistix PRO 11 coincided well with those obtained with the 24-Up method (kappa = 0.68) and the Up/Ucr quantitative method (kappa = 0.75). However, Multistix PRO 11 did not accurately measure moderate to severe proteinuria (≥ 500 mg/g Cr). Our findings suggest that Multistix PRO 11 is useful for the screening, assessment, and follow-up of mild proteinuria. PMID:15058105

  18. The Use of Multi-Sensor Quantitative Precipitation Estimates for Deriving Extreme Precipitation Frequencies with Application in Louisiana

    NASA Astrophysics Data System (ADS)

    El-Dardiry, Hisham Abd El-Kareem

    Radar-based Quantitative Precipitation Estimates (QPEs) are NEXRAD products available at high temporal and spatial resolution compared with gauges. Radar-based QPEs have been widely used in many hydrological and meteorological applications; however, few studies have focused on using radar QPE products to derive Precipitation Frequency Estimates (PFEs). Accurate and regionally specific information on PFEs is critically needed for various water resources engineering planning and design purposes. This study focused first on examining the data quality of two main radar products, the near-real-time Stage IV QPE product and the post-real-time RFC/MPE product. Assessment of the Stage IV product showed some alarming data artifacts that contaminate the identification of rainfall maxima. Based on the inter-comparison analysis of the two products, Stage IV and RFC/MPE, the latter was selected for the frequency analysis carried out throughout the study. The precipitation frequency analysis approach used in this study is based on fitting a Generalized Extreme Value (GEV) distribution as a statistical model for extreme rainfall, using Annual Maximum Series (AMS) extracted from 11 years (2002-2012) of data over a domain covering Louisiana. The parameters of the GEV model are estimated using the method of L-moments. Two different approaches are suggested for estimating the precipitation frequencies: a pixel-based approach, in which PFEs are estimated at each individual pixel, and a region-based approach, in which a synthetic sample is generated at each pixel by using observations from surrounding pixels. The region-based technique outperforms the pixel-based estimation when compared with results obtained by NOAA Atlas 14; however, the availability of only a short record of observations and the underestimation of radar QPE for some extremes cause considerable reductions in precipitation frequencies in pixel-based and region
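
    The pixel-based procedure described above amounts to fitting a GEV distribution to an annual-maximum series and reading off return-period quantiles. The sketch below uses a synthetic series, and scipy's fit uses maximum likelihood rather than the L-moments estimator used in the study.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual-maximum series (mm/h) standing in for one radar pixel's
# 11-year AMS; the GEV parameters used to generate it are arbitrary.
rng = np.random.default_rng(1)
ams = genextreme.rvs(c=-0.1, loc=50.0, scale=15.0, size=11, random_state=rng)

# Fit a GEV to the AMS (scipy fits by maximum likelihood, not L-moments).
shape, loc, scale = genextreme.fit(ams)

# Precipitation frequency estimate: the T-year event is the quantile with
# annual non-exceedance probability 1 - 1/T, e.g. T = 100.
pfe_100yr = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
```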

  19. Development of a new, robust and accurate, spectroscopic metric for scatterer size estimation in optical coherence tomography (OCT) images

    NASA Astrophysics Data System (ADS)

    Kassinopoulos, Michalis; Pitris, Costas

    2016-03-01

    The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD) was developed which is more robust and accurate, compared to previously reported techniques, in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width, offer significant advantages over other spectral analysis approaches especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.
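
    The idea behind the COD metric can be illustrated with toy spectra: faster spectral modulation (corresponding to a larger scatterer) pulls the first minimum of the autocorrelation of the spectral derivative to smaller lags. The cosine "spectra" below are illustrative stand-ins, not Mie calculations.

```python
import numpy as np

def cod_first_minimum(spectrum):
    """Lag of the first local minimum of the autocorrelation of the derivative."""
    d = np.diff(spectrum)
    d = d - d.mean()
    corr = np.correlate(d, d, mode="full")[d.size - 1:]  # one-sided autocorrelation
    for k in range(1, corr.size - 1):
        if corr[k] < corr[k - 1] and corr[k] <= corr[k + 1]:
            return k
    return corr.size - 1

wavenumber = np.linspace(0, 1, 512)
small = 1 + 0.3 * np.cos(2 * np.pi * 5 * wavenumber)   # slow modulation: small scatterer
large = 1 + 0.3 * np.cos(2 * np.pi * 20 * wavenumber)  # fast modulation: large scatterer

lag_small = cod_first_minimum(small)
lag_large = cod_first_minimum(large)
```

The first-minimum lag shrinks as the modulation frequency grows, which is the monotonic size-to-metric mapping the abstract exploits.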

  20. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation.

    PubMed

    Subramanian, Swetha; Mast, T Douglas

    2015-10-01

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. PMID:26352462

  1. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation

    NASA Astrophysics Data System (ADS)

    Subramanian, Swetha; Mast, T. Douglas

    2015-09-01

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.

  2. Microwave Quantitative NDE Technique for Dielectric Slab Thickness Estimation Using the Music Algorithm

    NASA Astrophysics Data System (ADS)

    Abou-Khousa, M. A.; Zoughi, R.

    2007-03-01

    Non-invasive monitoring of dielectric slab thickness is of great interest in various industrial applications. This paper focuses on estimating the thickness of dielectric slabs, and consequently monitoring their variations, utilizing wideband microwave signals and the MUltiple SIgnal Classification (MUSIC) algorithm. The performance of the proposed approach is assessed by validating simulation results with laboratory experiments. The results clearly indicate the utility of this overall approach for accurate dielectric slab thickness evaluation.

  3. The quantitative estimation of the vulnerability of brick and concrete building impacted by debris flow

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Guo, Z. X.; Wang, D.; Qian, H.

    2015-08-01

    Little historical data exist on the vulnerability of elements at risk in debris flow disasters in China, so it is difficult to estimate debris flow vulnerability quantitatively. This paper addresses the vulnerability of brick and concrete buildings, which are widespread in affected areas, to debris flow impact. Under two assumptions, several prototype brick and concrete walls were constructed to simulate structures damaged by debris flow, with iron spheres used as a substitute for the debris flow itself. A failure criterion for brick and concrete buildings was proposed with reference to structural standards (brick and concrete) and the damage patterns observed in debris flows. A quantitative estimate of the vulnerability of brick and concrete buildings was then established based on fuzzy mathematics and the proposed failure criterion. The results show that the maximum impact bending moment is the best choice of disaster-causing factor for the vulnerability curve and formula. The experiments reported here are preliminary research on the vulnerability of elements impacted by debris flow; the method and conclusions will be useful for quantitative vulnerability estimation in debris flows and can also serve as a reference for research on other types of vulnerable elements.

  4. Quantitative precipitation estimation for an X-band weather radar network

    NASA Astrophysics Data System (ADS)

    Chen, Haonan

    Currently, the Next Generation Weather Radar (NEXRAD) network, a joint effort of the U.S. Departments of Commerce (DOC), Defense (DOD), and Transportation (DOT), provides radar data with updates every five to six minutes across the United States. This network consists of about 160 S-band (2.7 to 3.0 GHz) radar sites. At the maximum NEXRAD range of 230 km, the 0.5 degree radar beam is about 5.4 km above ground level (AGL) because of the effect of earth curvature. Consequently, much of the lower atmosphere (1-3 km AGL) cannot be observed by NEXRAD. To overcome the fundamental coverage limitations of today's weather surveillance radars and improve spatial and temporal resolution, the National Science Foundation Engineering Research Center (NSF-ERC) for Collaborative Adaptive Sensing of the Atmosphere (CASA) was founded to revolutionize weather sensing in the lower atmosphere by deploying a dense network of shorter-range, low-power X-band dual-polarization radars. The distributed CASA radars operate collaboratively to adapt to changing atmospheric conditions. Accomplishments and breakthroughs after five years of operation have demonstrated the success of the CASA program. Accurate radar quantitative precipitation estimation (QPE) has been pursued since the beginning of weather radar. For certain disaster prevention applications, such as flash flood and landslide forecasting, the rain rate must be measured at a high spatial and temporal resolution. To this end, high-resolution radar QPE is one of the major research activities conducted by the CASA community. A specific differential propagation phase (Kdp)-based QPE methodology has been developed in CASA. Unlike rainfall estimation based on power terms such as radar reflectivity (Z) and differential reflectivity (Zdr), Kdp-based QPE is less sensitive to path attenuation, drop size distribution (DSD), and radar calibration errors. The CASA Kdp-based QPE system is also immune to the partial beam
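
    A Kdp-based estimator of the kind described reduces to a power law R = a·Kdp^b. A minimal sketch follows, with illustrative X-band coefficients that are not necessarily those used by the CASA system.

```python
import numpy as np

def rain_rate_from_kdp(kdp_deg_per_km, a=18.15, b=0.791):
    """Rain rate (mm/h) from specific differential phase Kdp (deg/km).

    The coefficients a and b are illustrative X-band values; operational
    systems tune them to local drop size distributions.
    """
    kdp = np.asarray(kdp_deg_per_km, dtype=float)
    # sign() keeps the mapping well-defined for noisy, slightly negative Kdp.
    return a * np.sign(kdp) * np.abs(kdp) ** b

rates = rain_rate_from_kdp([0.5, 1.0, 3.0])
```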

  5. Radar Based Probabilistic Quantitative Precipitation Estimation: First Results of Large Sample Data Analysis

    NASA Astrophysics Data System (ADS)

    Ciach, G. J.; Krajewski, W. F.; Villarini, G.

    2005-05-01

    Large uncertainties in the operational precipitation estimates produced by the U.S. national network of WSR-88D radars are well-acknowledged. However, quantitative information about these uncertainties is not operationally available. In an effort to fill this gap, the U.S. National Weather Service (NWS) is supporting the development of a probabilistic approach to radar precipitation estimation. The probabilistic quantitative precipitation estimation (PQPE) methodology that was selected for this development is based on the empirically-based modeling of the functional-statistical error structure in the operational WSR-88D precipitation products under different conditions. Our first goal is to deliver a realistic parameterization of the probabilistic error model describing its dependences on the radar-estimated precipitation value, distance from the radar, season, spatiotemporal averaging scale, and the setup of the precipitation processing system (PPS). In the long-term perspective, when large samples of relevant data are available, we will extend the model to include the dependences on different types of precipitation estimates (e.g. polarimetric and multi-sensor), geographic locations, and climatic regimes. At this stage of the PQPE project, we organized a 6-year-long sample of the Level II data from the Oklahoma City radar station (KTLX), and processed it with Build 4 of the PPS that is currently used in NWS operations. This first set of operational products was generated with the standard setup of the PPS parameters. The radar estimates are complemented with the corresponding raingauge data from the Oklahoma Mesonet, the ARS Little Washita Micronet, and the EVAC PicoNet, covering different spatial scales. The raingauge data are used as a ground reference (GR) to estimate the required uncertainty characteristics in the radar precipitation products.
In this presentation, we describe the first results of the large-sample uncertainty analysis of the products

  6. Estimation of undiscovered deposits in quantitative mineral resource assessments-examples from Venezuela and Puerto Rico

    USGS Publications Warehouse

    Cox, D.P.

    1993-01-01

    Quantitative mineral resource assessments used by the United States Geological Survey are based on deposit models. These assessments consist of three parts: (1) selecting appropriate deposit models and delineating on maps areas permissive for each type of deposit; (2) constructing a grade-tonnage model for each deposit model; and (3) estimating the number of undiscovered deposits of each type. In this article, I focus on the estimation of undiscovered deposits using two methods: the deposit density method and the target counting method. In the deposit density method, estimates are made by analogy with well-explored areas that are geologically similar to the study area and that contain a known density of deposits per unit area. The deposit density method is useful for regions where there is little or no data. This method was used to estimate undiscovered low-sulfide gold-quartz vein deposits in Venezuela. Estimates can also be made by counting targets such as mineral occurrences, geophysical or geochemical anomalies, or exploration "plays" and by assigning to each target a probability that it represents an undiscovered deposit that is a member of the grade-tonnage distribution. This method is useful in areas where detailed geological, geophysical, geochemical, and mineral occurrence data exist. Using this method, porphyry copper-gold deposits were estimated in Puerto Rico. © 1993 Oxford University Press.

  7. The new approach of polarimetric attenuation correction for improving radar quantitative precipitation estimation (QPE)

    NASA Astrophysics Data System (ADS)

    Gu, Ji-Young; Suk, Mi-Kyung; Nam, Kyung-Yeub; Ko, Jeong-Seok; Ryzhkov, Alexander

    2016-04-01

    To obtain high-quality radar quantitative precipitation estimation data, reliable radar calibration and efficient attenuation correction are very important. Because microwave radiation at shorter wavelengths experiences strong attenuation in precipitation, accounting for this attenuation is essential for shorter-wavelength radars. In this study, the performance of different attenuation/differential attenuation correction schemes at C band is tested for two strong rain events that occurred in central Oklahoma. In addition, a new attenuation correction scheme (a combination of the self-consistency and hot-spot methodologies) that separates the relative contributions of strong convective cells and the rest of the storm to the path-integrated total and differential attenuation is among the algorithms explored. Quantitative use of weather radar measurements, such as rainfall estimation, relies on reliable attenuation correction. We examined the impact of attenuation correction on rainfall estimates in heavy rain events by cross-checking with S-band radar measurements, which are much less affected by attenuation, and compared the storm rain totals obtained from the corrected Z and KDP with rain gages in these cases. This new approach can be utilized efficiently at shorter-wavelength radars. It is therefore very useful to the Weather Radar Center of the Korea Meteorological Administration, which is preparing an X-band research dual-polarization radar network.

  8. Observing Volcanic Thermal Anomalies from Space: How Accurate is the Estimation of the Hotspot's Size and Temperature?

    NASA Astrophysics Data System (ADS)

    Zaksek, K.; Pick, L.; Lombardo, V.; Hort, M. K.

    2015-12-01

    Measuring the heat emission from active volcanic features on the basis of infrared satellite images contributes to a volcano's hazard assessment. Because these thermal anomalies occupy only a small fraction (< 1%) of a typically resolved target pixel (e.g., from Landsat 7 or MODIS), the accurate determination of the hotspot's size and temperature is problematic. Conventionally this is overcome by comparing observations in at least two separate infrared spectral wavebands (the Dual-Band method). We investigate the resolution limits of this thermal un-mixing technique by means of a uniquely designed indoor analog experiment. Therein the volcanic feature is simulated by an electrical heating alloy of 0.5 mm diameter installed on a plywood panel of high emissivity. Two thermographic cameras (VarioCam high resolution and ImageIR 8300 by Infratec) record images of the artificial heat source in wavebands comparable to those available from satellite data. These range from the short-wave infrared (1.4-3 µm) over the mid-wave infrared (3-8 µm) to the thermal infrared (8-15 µm). In the conducted experiment the pixel fraction of the hotspot was successively reduced by increasing the camera-to-target distance from 3 m to 35 m. On the basis of an individual target pixel, the expected decrease of the hotspot pixel area with distance at a relatively constant wire temperature of around 600 °C was confirmed. The deviation of the hotspot's pixel fraction yielded by the Dual-Band method from the theoretically calculated one was found to be within 20% up to a target distance of 25 m. This means that a reliable estimation of the hotspot size is only possible if the hotspot is larger than about 3% of the pixel area, a resolution boundary below which most remotely sensed volcanic hotspots fall. Future efforts will focus on the investigation of a resolution limit for the hotspot's temperature by varying the alloy's amperage. Moreover, the un-mixing results for more realistic multi
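
    The Dual-Band method solves a two-band mixture model for the hotspot fraction p and temperature Th, given a known background temperature. A minimal sketch under that model, with illustrative band wavelengths and temperatures (not the experiment's values):

```python
import numpy as np
from scipy.optimize import brentq

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m) and temperature T (K)."""
    return 2 * H * C**2 / lam**5 / np.expm1(H * C / (lam * K * T))

lam1, lam2 = 4e-6, 11e-6   # mid-wave and thermal IR bands (illustrative)
Tb = 300.0                  # known background temperature (K)
p_true, Th_true = 0.01, 873.0

# Synthetic two-band pixel radiances: hot subpixel plus background.
L1 = p_true * planck(lam1, Th_true) + (1 - p_true) * planck(lam1, Tb)
L2 = p_true * planck(lam2, Th_true) + (1 - p_true) * planck(lam2, Tb)

def residual(Th):
    # Hot fraction implied by band 1, substituted into the band-2 equation.
    p = (L1 - planck(lam1, Tb)) / (planck(lam1, Th) - planck(lam1, Tb))
    return p * planck(lam2, Th) + (1 - p) * planck(lam2, Tb) - L2

Th_est = brentq(residual, 400.0, 1500.0)
p_est = (L1 - planck(lam1, Tb)) / (planck(lam1, Th_est) - planck(lam1, Tb))
```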

  9. Quantitative estimation of hemorrhage in chronic subdural hematoma using the ⁵¹Cr erythrocyte labeling method

    SciTech Connect

    Ito, H.; Yamamoto, S.; Saito, K.; Ikeda, K.; Hisada, K.

    1987-06-01

    Red cell survival studies using an infusion of chromium-51-labeled erythrocytes were performed to quantitatively estimate hemorrhage in the chronic subdural hematoma cavity of 50 patients. The amount of hemorrhage was determined during craniotomy. Between 6 and 24 hours after infusion of the labeled red cells, hemorrhage accounted for a mean of 6.7% of the hematoma content, indicating continuous or intermittent hemorrhage into the cavity. The clinical state of the patients and the density of the chronic subdural hematoma on computerized tomography scans were related to the amount of hemorrhage. Chronic subdural hematomas with a greater amount of hemorrhage frequently consisted of clots rather than fluid.

  10. Quantitative Cyber Risk Reduction Estimation Methodology for a Small SCADA Control System

    SciTech Connect

    Miles A. McQueen; Wayne F. Boyer; Mark A. Flynn; George A. Beitel

    2006-01-01

    We propose a new methodology for obtaining a quick quantitative measurement of the risk reduction achieved when a control system is modified with the intent to improve cyber security defense against external attackers. The proposed methodology employs a directed graph called a compromise graph, where the nodes represent stages of a potential attack and the edges represent the expected time-to-compromise for differing attacker skill levels. Time-to-compromise is modeled as a function of known vulnerabilities and attacker skill level. The methodology was used to calculate risk reduction estimates for a specific SCADA system and for a specific set of control system security remedial actions. Despite an 86% reduction in the total number of vulnerabilities, the estimated time-to-compromise was increased only by about 3 to 30% depending on target and attacker skill level.
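
    The compromise-graph calculation reduces to a shortest-path problem: with edge weights equal to expected time-to-compromise, the system-level estimate is the minimum total time from an entry node to the target. A sketch over an invented graph (the stages and times below are not from the study):

```python
import heapq

def shortest_time_to_compromise(graph, source, target):
    """Dijkstra over a dict-of-dicts graph {node: {neighbor: days}}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")

# Hypothetical attack stages with expected time-to-compromise in days.
graph = {
    "internet": {"dmz_web": 2.0, "vpn": 5.0},
    "dmz_web": {"historian": 3.0},
    "vpn": {"control_lan": 1.5},
    "historian": {"control_lan": 4.0},
    "control_lan": {"plc": 0.5},
}
baseline = shortest_time_to_compromise(graph, "internet", "plc")  # days
```

Re-running the same calculation after increasing edge weights to reflect a remedial action gives the kind of before/after comparison the abstract reports.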

  11. Quantitative agent-based firm dynamics simulation with parameters estimated by financial and transaction data analysis

    NASA Astrophysics Data System (ADS)

    Ikeda, Yuichi; Souma, Wataru; Aoyama, Hideaki; Iyetomi, Hiroshi; Fujiwara, Yoshi; Kaizoji, Taisei

    2007-03-01

    Firm dynamics on a transaction network is considered from the standpoint of econophysics, agent-based simulations, and game theory. In this model, interacting firms rationally invest in a production facility to maximize net present value. We estimate parameters used in the model through empirical analysis of financial and transaction data. We propose two different methods (an analytical method and a regression method) to obtain an interaction matrix of firms. On a subset of a real transaction network, we simulate firms' revenue, cost, and fixed assets, the last being the accumulated investment in the production facility. The simulation reproduces the quantitative behavior of past revenues and costs within a standard error when we use the interaction matrix estimated by the regression method, in which only transaction pairs are taken into account. Furthermore, the simulation qualitatively reproduces past data on fixed assets.

  12. Estimation of crack and damage progression in concrete by quantitative acoustic emission analysis

    SciTech Connect

    Ohtsu, Masayasu

    1999-05-01

    The kinematics of cracking can be represented by the moment tensor. To determine moment tensor components from acoustic emission waveforms, the SiGMA (simplified Green's functions for moment tensor analysis) procedure was developed. By applying the procedure to bending tests of notched beams, cracks in the fracture process zone of cementitious materials can be identified kinematically. In addition to crack identification, the damage level in structural concrete is estimated from the acoustic emission activity of a concrete sample under compression. Depending on the damage resulting from existing microcracks, acoustic emission generation behavior is quantitatively estimated by rate process analysis. Damage mechanics is introduced to quantify the degree of damage. Determining the current damage level using acoustic emission, without information on the undamaged concrete, is attempted by correlating the damage value with the rate process.

  13. On the use of radar-based quantitative precipitation estimates for precipitation frequency analysis

    NASA Astrophysics Data System (ADS)

    Eldardiry, Hisham; Habib, Emad; Zhang, Yu

    2015-12-01

    The high spatio-temporal resolution of radar-based multi-sensor Quantitative Precipitation Estimates (QPEs) makes them a potential complement to gauge records for engineering design purposes, such as precipitation frequency analysis. The current study investigates three fundamental issues that arise when radar-based QPE products are used in frequency analysis: (a) the effect of sample size due to the typically short records of radar products; (b) the effect of uncertainties present in radar-rainfall estimation algorithms; and (c) the effect of the frequency estimation approach adopted. The study uses a 13-year dataset of hourly, 4 × 4 km2 radar-based QPEs over a domain that covers Louisiana, USA. Data-based investigations, as well as synthetic simulations, are performed to quantify the uncertainties associated with the radar-based derived frequencies and to gain insight into the relative contributions of short record lengths and of conditional biases in the radar product. Three regional estimation procedures were tested, and the results indicate the sensitivity of the radar frequency estimates to the selection of the estimation approach and its impact on the uncertainties of the derived extreme quantiles. The simulation experiments revealed that the relatively short radar records explained the majority of the uncertainty associated with the radar-based quantiles; however, they did not account for any tangible contribution to the systematic underestimation observed between radar- and gauge-based frequency estimates. This underestimation was mostly attributable to the conditional bias inherent in the radar product. Addressing such key outstanding problems in radar-rainfall products is necessary before they can be fully and reliably used for frequency analysis applications.

  14. Tandem Mass Spectrometry Measurement of the Collision Products of Carbamate Anions Derived from CO2 Capture Sorbents: Paving the Way for Accurate Quantitation

    NASA Astrophysics Data System (ADS)

    Jackson, Phil; Fisher, Keith J.; Attalla, Moetaz Ibrahim

    2011-08-01

    The reaction between CO2 and aqueous amines to produce a charged carbamate product plays a crucial role in post-combustion capture chemistry when primary and secondary amines are used. In this paper, we report the low energy negative-ion CID results for several anionic carbamates derived from primary and secondary amines commonly used as post-combustion capture solvents. The study was performed using the modern equivalent of a triple quadrupole instrument equipped with a T-wave collision cell. Deuterium labeling of 2-aminoethanol (1,1,2,2-d4-2-aminoethanol) and computations at the M06-2X/6-311++G(d,p) level were used to confirm the identity of the fragmentation products for 2-hydroxyethylcarbamate (derived from 2-aminoethanol), in particular the ions CN-, NCO- and facile neutral losses of CO2 and water; there is precedent for the latter in condensed phase isocyanate chemistry. The fragmentations of 2-hydroxyethylcarbamate were generalized for carbamate anions derived from other capture amines, including ethylenediamine, diethanolamine, and piperazine. We also report unequivocal evidence for the existence of carbamate anions derived from sterically hindered amines (Tris(2-hydroxymethyl)aminomethane and 2-methyl-2-aminopropanol). For the suite of carbamates investigated, diagnostic losses include the decarboxylation product (-CO2, 44 mass units), loss of 46 mass units, and the fragments NCO- (m/z 42) and CN- (m/z 26). We also report low energy CID results for the dicarbamate dianion (-O2CNHC2H4NHCO2-) commonly encountered in CO2 capture solutions utilizing ethylenediamine. Finally, we demonstrate a promising ion chromatography-MS based procedure for the separation and quantitation of aqueous anionic carbamates, which is based on the reported CID findings. The availability of accurate quantitation methods for ionic CO2 capture products could lead to dynamic operational tuning of CO2 capture-plants and, thus, cost-savings via real-time manipulation of solvent

  15. Mitochondrial DNA as a non-invasive biomarker: Accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias

    SciTech Connect

    Malik, Afshan N.; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil

    2011-08-19

    Highlights: → Mitochondrial dysfunction is central to many diseases of oxidative stress. → 95% of the mitochondrial genome is duplicated in the nuclear genome. → Dilution of untreated genomic DNA leads to dilution bias. → Unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content. -- Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved, and we have identified that current methods have at least one of the following three problems: (1) as much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA, which are repetitive and/or highly variable, for qPCR of the nuclear genome leads to errors; and (3) the size difference between the mitochondrial and nuclear genomes causes a 'dilution bias' when template DNA is diluted. We describe a PCR-based method that uses unique regions in the human mitochondrial genome not duplicated in the nuclear genome, a unique single-copy region in the nuclear genome, and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
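As a minimal sketch of the relative quantification step, Mt/N ratios are commonly computed from qPCR threshold cycles with the standard ΔCt method. The formula and Ct values below illustrate that generic approach and are not taken from this paper:

```python
def mtdna_copy_number(ct_mito, ct_nuclear, efficiency=2.0):
    """Relative MtDNA content per diploid nuclear genome from qPCR
    threshold cycles (standard delta-Ct relative quantification),
    assuming equal amplification efficiency for both targets.
    The factor 2 accounts for the two nuclear copies of a single-copy
    reference region."""
    return 2.0 * efficiency ** (ct_nuclear - ct_mito)

# Illustrative Ct values: the mitochondrial target crosses threshold
# 8 cycles earlier than the nuclear reference.
ratio = mtdna_copy_number(ct_mito=18.0, ct_nuclear=26.0)  # 2 * 2**8 = 512.0
```

The dilution bias discussed in the abstract would appear here as Ct differences that change with template dilution; the paper's template pretreatment is aimed at keeping this ratio dilution-independent.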

  16. Accurate and easy-to-use assessment of contiguous DNA methylation sites based on proportion competitive quantitative-PCR and lateral flow nucleic acid biosensor.

    PubMed

    Xu, Wentao; Cheng, Nan; Huang, Kunlun; Lin, Yuehe; Wang, Chenguang; Xu, Yuancong; Zhu, Longjiao; Du, Dan; Luo, Yunbo

    2016-06-15

    Many types of diagnostic technologies have been reported for DNA methylation, but they require a standard curve for quantification or show only moderate accuracy. Moreover, most technologies have difficulty providing information on the level of methylation at specific contiguous multi-sites, let alone easy-to-use detection that eliminates labor-intensive procedures. We have addressed these limitations and report here a cascade strategy that combines proportion competitive quantitative PCR (PCQ-PCR) and a lateral flow nucleic acid biosensor (LFNAB), resulting in accurate and easy-to-use assessment. The P16 gene, a well-studied tumor suppressor gene with specific multi-methylated sites, was used as the target DNA sequence model. First, PCQ-PCR provided amplification products with an accurate proportion of multi-methylated sites following the principle of proportionality, and double-labeled duplex DNA was synthesized. Then, an LFNAB strategy was further employed for amplified signal detection via immune affinity recognition, and the exact level of site-specific methylation could be determined from the relative intensity of the test line and internal reference line. This combination resulted in all recoveries being greater than 94%, which is satisfactory for DNA methylation assessment. Moreover, the developed cascade is a simple, sensitive, and low-cost tool. Therefore, as a universal platform for the detection of contiguous multi-sites of DNA methylation without external standards and expensive instrumentation, this PCQ-PCR-LFNAB cascade method shows great promise for the point-of-care diagnosis of cancer risk and therapeutics. PMID:26914373

  17. Health Impacts of Increased Physical Activity from Changes in Transportation Infrastructure: Quantitative Estimates for Three Communities.

    PubMed

    Mansfield, Theodore J; MacDonald Gibson, Jacqueline

    2015-01-01

    Recently, two quantitative tools have emerged for predicting the health impacts of projects that change population physical activity: the Health Economic Assessment Tool (HEAT) and Dynamic Modeling for Health Impact Assessment (DYNAMO-HIA). HEAT has been used to support health impact assessments of transportation infrastructure projects, but DYNAMO-HIA has not been previously employed for this purpose nor have the two tools been compared. To demonstrate the use of DYNAMO-HIA for supporting health impact assessments of transportation infrastructure projects, we employed the model in three communities (urban, suburban, and rural) in North Carolina. We also compared DYNAMO-HIA and HEAT predictions in the urban community. Using DYNAMO-HIA, we estimated benefit-cost ratios of 20.2 (95% C.I.: 8.7-30.6), 0.6 (0.3-0.9), and 4.7 (2.1-7.1) for the urban, suburban, and rural projects, respectively. For a 40-year time period, the HEAT predictions of deaths avoided by the urban infrastructure project were three times as high as DYNAMO-HIA's predictions due to HEAT's inability to account for changing population health characteristics over time. Quantitative health impact assessment coupled with economic valuation is a powerful tool for integrating health considerations into transportation decision-making. However, to avoid overestimating benefits, such quantitative HIAs should use dynamic, rather than static, approaches. PMID:26504832

  18. Health Impacts of Increased Physical Activity from Changes in Transportation Infrastructure: Quantitative Estimates for Three Communities

    PubMed Central

    Mansfield, Theodore J.; MacDonald Gibson, Jacqueline

    2015-01-01

    Recently, two quantitative tools have emerged for predicting the health impacts of projects that change population physical activity: the Health Economic Assessment Tool (HEAT) and Dynamic Modeling for Health Impact Assessment (DYNAMO-HIA). HEAT has been used to support health impact assessments of transportation infrastructure projects, but DYNAMO-HIA has not been previously employed for this purpose nor have the two tools been compared. To demonstrate the use of DYNAMO-HIA for supporting health impact assessments of transportation infrastructure projects, we employed the model in three communities (urban, suburban, and rural) in North Carolina. We also compared DYNAMO-HIA and HEAT predictions in the urban community. Using DYNAMO-HIA, we estimated benefit-cost ratios of 20.2 (95% C.I.: 8.7–30.6), 0.6 (0.3–0.9), and 4.7 (2.1–7.1) for the urban, suburban, and rural projects, respectively. For a 40-year time period, the HEAT predictions of deaths avoided by the urban infrastructure project were three times as high as DYNAMO-HIA's predictions due to HEAT's inability to account for changing population health characteristics over time. Quantitative health impact assessment coupled with economic valuation is a powerful tool for integrating health considerations into transportation decision-making. However, to avoid overestimating benefits, such quantitative HIAs should use dynamic, rather than static, approaches. PMID:26504832

  19. Estimation of the number of fluorescent end-members for quantitative analysis of multispectral FLIM data

    PubMed Central

    Gutierrez-Navarro, Omar; Campos-Delgado, Daniel U.; Arce-Santana, Edgar R.; Maitland, Kristen C.; Cheng, Shuna; Jabbour, Joey; Malik, Bilal; Cuenca, Rodrigo; Jo, Javier A.

    2014-01-01

    Multispectral fluorescence lifetime imaging (m-FLIM) can potentially allow identifying the endogenous fluorophores present in biological tissue. Quantitative description of such data requires estimating the number of components in the sample, their characteristic fluorescent decays, and their relative contributions or abundances. Unfortunately, this inverse problem usually requires prior knowledge about the data, which is seldom available in biomedical applications. This work presents a new methodology to estimate the number of potential endogenous fluorophores present in biological tissue samples from time-domain m-FLIM data. Furthermore, a completely blind linear unmixing algorithm is proposed. The method was validated using both synthetic and experimental m-FLIM data. The experimental m-FLIM data include in-vivo measurements from healthy and cancerous hamster cheek-pouch epithelial tissue, and ex-vivo measurements from human coronary atherosclerotic plaques. The analysis of m-FLIM data from in-vivo hamster oral mucosa distinguished healthy tissue from precancerous lesions, based on the relative concentrations of their characteristic fluorophores. The algorithm also provided a better description of atherosclerotic plaques in terms of their endogenous fluorophores. These results demonstrate the potential of this methodology to provide a quantitative description of tissue biochemical composition. PMID:24921344

  20. The quantitative estimation of the vulnerability of brick and concrete wall impacted by an experimental boulder

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Guo, Z. X.; Wang, D.; Qian, H.

    2016-02-01

    There are few historical data on the vulnerability of elements damaged by debris flow events in China. It is therefore difficult to quantitatively estimate the vulnerability of elements exposed to debris flows. This paper is devoted to research on the vulnerability of brick and concrete walls impacted by debris flows. An experimental boulder (an iron sphere) was used as a substitute for debris flow, since it can produce an impulse load on elements similar in shape to that of a debris flow. Several walls made of brick and concrete were constructed at prototype dimensions to physically simulate structures damaged by debris flows. The maximum impact force was measured, and the damage conditions of the elements (including cracks and displacements) were collected, described, and compared. A failure criterion for brick and concrete walls was proposed with reference to the structural characteristics as well as the damage patterns caused by debris flows. A quantitative estimation of the vulnerability of brick and concrete walls was finally established, based on fuzzy mathematics and the proposed failure criterion. Momentum, maximum impact force, and maximum impact bending moment were compared as candidates for the disaster intensity index. The results show that the maximum impact bending moment is the most suitable disaster intensity index for establishing the vulnerability curve and formula.

  1. Uncertainty in Quantitative Precipitation Estimates and Forecasts in a Hydrologic Modeling Context (Invited)

    NASA Astrophysics Data System (ADS)

    Gourley, J. J.; Kirstetter, P.; Hong, Y.; Hardy, J.; Flamig, Z.

    2013-12-01

    This study presents a methodology to account for uncertainty in radar-based rainfall rate estimation using NOAA/NSSL's Multi-Radar Multisensor (MRMS) products. The focus of the study is on flood forecasting, including flash floods, in ungauged catchments throughout the conterminous US. An error model is used to derive probability distributions of rainfall rates that explicitly account for rain typology and uncertainty in the reflectivity-to-rainfall relationships. This approach preserves the fine space/time sampling properties (2 min/1 km) of the radar and conditions probabilistic quantitative precipitation estimates (PQPE) on the rain rate and rainfall type. Uncertainty in rainfall amplitude is the primary factor accounted for in the PQPE development. Additional uncertainties due to rainfall structures, locations, and timing must be considered when using quantitative precipitation forecast (QPF) products as forcing to a hydrologic model. A new method will be presented that shows how QPF ensembles are used in a hydrologic modeling context to derive probabilistic flood forecast products. This method considers the forecast rainfall intensity and morphology, superimposed on pre-existing hydrologic conditions, to identify the basin scales that are most at risk.

  2. Quantitative estimation of carbonation and chloride penetration in reinforced concrete by laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Eto, Shuzo; Matsuo, Toyofumi; Matsumura, Takuro; Fujii, Takashi; Tanaka, Masayoshi Y.

    2014-11-01

    The penetration profile of chlorine in a reinforced concrete (RC) specimen was determined by laser-induced breakdown spectroscopy (LIBS). The concrete core was prepared from RC beams with cracking damage induced by bending load and salt water spraying. LIBS was performed using a specimen that was obtained by splitting the concrete core, and the line scan of laser pulses gave the two-dimensional emission intensity profiles of 100 × 80 mm2 within one hour. The two-dimensional profile of the emission intensity suggests that the presence of the crack had less effect on the emission intensity when the measurement interval was larger than the crack width. The chlorine emission spectrum was measured without using the buffer gas, which is usually used for chlorine measurement, by collinear double-pulse LIBS. The apparent diffusion coefficient, which is one of the most important parameters for chloride penetration in concrete, was estimated using the depth profile of chlorine emission intensity and Fick's law. The carbonation depth was estimated on the basis of the relationship between carbon and calcium emission intensities. When the carbon emission intensity was statistically higher than the calcium emission intensity at the measurement point, we determined that the point was carbonated. The estimation results were consistent with the spraying test results using phenolphthalein solution. These results suggest that the quantitative estimation by LIBS of carbonation depth and chloride penetration can be performed simultaneously.
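The apparent-diffusion-coefficient step described above can be sketched as a least-squares fit of the erfc solution of Fick's second law (semi-infinite medium, constant surface concentration) to a depth profile. The depth profile and parameter grid below are synthetic; the actual paper fits LIBS chlorine emission intensities rather than normalized concentrations:

```python
import math

def erfc_profile(x_mm, d_mm2_per_yr, years, surface_conc=1.0):
    """Normalized chloride concentration at depth x after t years,
    from Fick's second law: C(x, t) = Cs * erfc(x / (2 * sqrt(D * t)))."""
    return surface_conc * math.erfc(x_mm / (2.0 * math.sqrt(d_mm2_per_yr * years)))

def fit_diffusion_coefficient(depths_mm, concs, years, d_grid):
    """Least-squares grid search for the apparent diffusion coefficient D."""
    def sse(d):
        return sum((erfc_profile(x, d, years) - c) ** 2
                   for x, c in zip(depths_mm, concs))
    return min(d_grid, key=sse)

# Synthetic depth profile generated with D = 20 mm^2/yr after 10 years.
depths = [0.0, 5.0, 10.0, 20.0, 30.0, 40.0]
concs = [erfc_profile(x, 20.0, 10.0) for x in depths]
d_hat = fit_diffusion_coefficient(depths, concs, 10.0,
                                  d_grid=[5.0 + 0.5 * i for i in range(80)])
```

In practice one would fit surface concentration and D jointly with a proper nonlinear least-squares routine; the grid search here only illustrates the inversion.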

  3. Estimating effects of a single gene and polygenes on quantitative traits from a diallel design.

    PubMed

    Lou, Xiang-Yang; Yang, Mark C K

    2006-01-01

    A genetic model is developed with additive and dominance effects of a single gene and polygenes, as well as general and specific reciprocal effects, for the progeny from a diallel mating design. The methods of ANOVA, minimum norm quadratic unbiased estimation (MINQUE), restricted maximum likelihood estimation (REML), and maximum likelihood estimation (ML) are suggested for estimating variance components, and the methods of generalized least squares (GLS) and ordinary least squares (OLS) for fixed effects, while best linear unbiased prediction, linear unbiased prediction (LUP), and adjusted unbiased prediction are suggested for analyzing random effects. Monte Carlo simulations were conducted to evaluate the unbiasedness and efficiency of the statistical methods, involving two diallel designs with commonly used sample sizes, 6 and 8 parents, with no and with missing crosses, respectively. Simulation results show that GLS and OLS are almost equally efficient for estimation of fixed effects, while MINQUE (1) and REML are better estimators of the variance components, and LUP is the most practical method for prediction of random effects. Data from a Drosophila melanogaster experiment (Gilbert 1985a, Theor Appl Genet 69:625-629) were used as a working example to demonstrate the statistical analysis. The new methodology is also applicable to screening candidate gene(s) and to other mating designs with multiple parents, such as nested (NC Design I) and factorial (NC Design II) designs. Moreover, this methodology can serve as a guide to develop new methods for detecting indiscernible major genes and mapping quantitative trait loci based on mixture distribution theory. The computer program for the methods suggested in this article is freely available from the authors. PMID:17028974

  4. A Novel Method of Quantitative Anterior Chamber Depth Estimation Using Temporal Perpendicular Digital Photography

    PubMed Central

    Zamir, Ehud; Kong, George Y.X.; Kowalski, Tanya; Coote, Michael; Ang, Ghee Soon

    2016-01-01

    Purpose: We hypothesize that: (1) Anterior chamber depth (ACD) is correlated with the relative anteroposterior position of the pupillary image, as viewed from the temporal side. (2) Such a correlation may be used as a simple quantitative tool for estimation of ACD. Methods: Two hundred sixty-six phakic eyes had lateral digital photographs taken from the temporal side, perpendicular to the visual axis, and underwent optical biometry (Nidek AL scanner). The relative anteroposterior position of the pupillary image was expressed using the ratio between: (1) the lateral photographic temporal limbus to pupil distance (“E”) and (2) the lateral photographic temporal limbus to cornea distance (“Z”). In the first chronological half of patients (Correlation Series), the E:Z ratio (EZR) was correlated with optical biometric ACD. The correlation equation was then used to predict ACD in the second half of patients (Prediction Series) and compared to their biometric ACD for agreement analysis. Results: A strong linear correlation was found between EZR and ACD, R = −0.91, R2 = 0.81. Bland-Altman analysis showed good agreement between the ACD predicted using this method and the optical biometric ACD. The mean error was −0.013 mm (range −0.377 to 0.336 mm), standard deviation 0.166 mm. The 95% limits of agreement were ±0.33 mm. Conclusions: Lateral digital photography and EZR calculation is a novel method to quantitatively estimate ACD, requiring minimal equipment and training. Translational Relevance: The EZ ratio may be employed in screening for angle closure glaucoma. It may also be helpful in outpatient medical clinic settings, where doctors need to judge the safety of topical or systemic pupil-dilating medications versus their risk of triggering acute angle closure glaucoma. Similarly, non-ophthalmologists may use it to estimate the likelihood of acute angle closure glaucoma in emergency presentations. PMID:27540496
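The correlate-then-predict workflow in this abstract amounts to an ordinary least squares fit of ACD against the E:Z ratio, then applying the fitted line to new eyes. The calibration pairs below are hypothetical and exactly linear for illustration (the paper reports R = −0.91 on real biometric data, and its actual coefficients are not given here):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical calibration pairs (EZR, biometric ACD in mm); the reported
# correlation is negative, so ACD decreases as EZR grows.
ezr = [0.10, 0.20, 0.30, 0.40, 0.50]
acd = [3.60, 3.35, 3.10, 2.85, 2.60]
slope, intercept = fit_line(ezr, acd)  # exactly -2.5 and 3.85 for these data

def predict_acd(e, z):
    """Predict ACD (mm) from the photographic distances E and Z."""
    return slope * (e / z) + intercept
```

Agreement of such predictions with biometry would then be checked as in the paper, e.g. with a Bland-Altman analysis on a held-out prediction series.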

  5. Quantitative ultrasound characterization of locally advanced breast cancer by estimation of its scatterer properties

    SciTech Connect

    Tadayyon, Hadi; Sadeghi-Naini, Ali; Czarnota, Gregory; Wirtzfeld, Lauren; Wright, Frances C.

    2014-01-15

    Purpose: Tumor grading is an important part of breast cancer diagnosis and currently requires biopsy as its standard. Here, the authors investigate quantitative ultrasound parameters in locally advanced breast cancers that can potentially separate tumors from normal breast tissue and differentiate tumor grades. Methods: Ultrasound images and radiofrequency data from 42 locally advanced breast cancer patients were acquired and analyzed. Parameters related to the linear regression of the power spectrum—midband fit, slope, and 0-MHz-intercept—were determined from breast tumors and normal breast tissues. Mean scatterer spacing was estimated from the spectral autocorrelation, and the effective scatterer diameter and effective acoustic concentration were estimated from the Gaussian form factor. Parametric maps of each quantitative ultrasound parameter were constructed from the gated radiofrequency segments in tumor and normal tissue regions of interest. In addition to the mean values of the parametric maps, higher order statistical features, computed from gray-level co-occurrence matrices were also determined and used for characterization. Finally, linear and quadratic discriminant analyses were performed using combinations of quantitative ultrasound parameters to classify breast tissues. Results: Quantitative ultrasound parameters were found to be statistically different between tumor and normal tissue (p < 0.05). The combination of effective acoustic concentration and mean scatterer spacing could separate tumor from normal tissue with 82% accuracy, while the addition of effective scatterer diameter to the combination did not provide significant improvement (83% accuracy). Furthermore, the two advanced parameters, including effective scatterer diameter and mean scatterer spacing, were found to be statistically differentiating among grade I, II, and III tumors (p = 0.014 for scatterer spacing, p = 0.035 for effective scatterer diameter). The separation of the tumor

  6. Quantitative estimation of density variation in high-speed flows through inversion of the measured wavefront distortion

    NASA Astrophysics Data System (ADS)

    Medhi, Biswajit; Hegde, Gopalkrishna Mahadeva; Reddy, Kalidevapura Polareddy Jagannath; Roy, Debasish; Vasu, Ram Mohan

    2014-12-01

    A simple method employing an optical probe is presented to measure density variations in a hypersonic flow obstructed by a test model in a typical shock tunnel. The probe has a plane light wave trans-illuminating the flow and casting a shadow of a random dot pattern. Local slopes of the distorted wavefront are obtained from shifts of the dots in the pattern. Local shifts in the dots are accurately measured by cross-correlating local shifted shadows with the corresponding unshifted originals. The measured slopes are suitably unwrapped by using a discrete cosine transform based phase unwrapping procedure and also through iterative procedures. The unwrapped phase information is used in an iterative scheme for a full quantitative recovery of density distribution in the shock around the model through refraction tomographic inversion. Hypersonic flow field parameters around a missile shaped body at a free-stream Mach number of 5.8 measured using this technique are compared with the numerically estimated values.

  7. Developing Daily Quantitative Damage Estimates From Geospatial Layers To Support Post Event Recovery

    NASA Astrophysics Data System (ADS)

    Woods, B. K.; Wei, L. H.; Connor, T. C.

    2014-12-01

    With the growth of natural hazard data available in near real-time, it is increasingly feasible to deliver estimates of the damage caused by natural disasters. These estimates can be used in disaster management settings or by commercial entities to optimize the deployment of resources and/or the routing of goods and materials. This work outlines an end-to-end, modular process to generate estimates of damage caused by severe weather. The processing stream consists of five generic components: 1) hazard modules that provide quantitative data layers for each peril; 2) standardized methods to map the hazard data to an exposure layer based on atomic geospatial blocks; 3) peril-specific damage functions that compute damage metrics at the atomic geospatial block level; 4) standardized data aggregators, which map damage to user-specific geometries; and 5) data dissemination modules, which provide the resulting damage estimates in a variety of output forms. This presentation provides a description of this generic tool set, and an illustrated example using HWRF-based hazard data for Hurricane Arthur (2014). In this example, the Python-based real-time processing ingests GRIB2 output from the HWRF numerical model and dynamically downscales it in conjunction with a land cover database using a multiprocessing pool and a just-in-time compiler (JIT). The resulting wind fields are contoured and ingested into a PostGIS database using OGR. Finally, the damage estimates are calculated at the atomic block level and aggregated to user-defined regions using PostgreSQL queries to construct application-specific tabular and graphics output.

  8. Uncertainties in Surface Runoff Forecasts Driven by Probabilistic Quantitative Precipitation Estimates

    NASA Astrophysics Data System (ADS)

    Ntelekos, A. A.; Ciach, G. J.; Georgakakos, K. P.; Krajewski, W. F.

    2004-05-01

    This work focuses on several aspects of ensemble flood forecasting with embedded input and model uncertainties. In most short-term forecasting, hydrologic models apply the input rainfall estimates assuming that they are error-free. For example, this is the case with the operational use of the Sacramento Soil Moisture Accounting (SAC-SMA) model at the US National Weather Service (NWS) River Forecast Centers (RFCs). We apply an analytical approximation of the upper soil zone equations in the SAC-SMA model to study the propagation of uncertainties in rainfall estimates into runoff generation. The ensembles of rainfall estimates are produced by randomizing the radar-rainfall arrays constructed from the WSR-88D data. These ensembles are specific outcomes of a general probabilistic quantitative precipitation estimation (PQPE) procedure currently being developed by the University of Iowa for the NWS. The parameters of the rainfall uncertainty generator describe the conditional distributions of the error process and its spatiotemporal dependences. This investigation is performed using two different uncertainty schemes. In the first scenario, only errors in the rainfall estimates are assumed. Here, the parameter values of the SAC-SMA model are fixed and based on data from a watershed located within the Illinois River basin in Oklahoma. In the second scenario, nominal uncertainties in the SAC-SMA model parameters are added. Our study aims to identify those characteristics of the radar-rainfall error process that are most responsible for the uncertainty in surface runoff production by operational hydrologic models.
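The first uncertainty scenario can be sketched as Monte Carlo propagation of multiplicative radar-rainfall errors through a runoff function. The lognormal error model, its parameters, and the bucket-style runoff rule below are illustrative assumptions only, not the SAC-SMA upper-zone equations or the actual PQPE error generator:

```python
import random
import statistics

def runoff(rain_mm, soil_deficit_mm=20.0):
    """Toy saturation-excess runoff: rain first fills the soil deficit,
    and only the excess becomes surface runoff."""
    return max(0.0, rain_mm - soil_deficit_mm)

def runoff_ensemble(rain_est_mm, n=5000, sigma=0.3, seed=1):
    """Propagate a multiplicative lognormal radar-rainfall error
    (an assumed error model) into an ensemble of runoff outcomes."""
    rng = random.Random(seed)
    return [runoff(rain_est_mm * rng.lognormvariate(0.0, sigma))
            for _ in range(n)]

ens = runoff_ensemble(40.0)
spread = statistics.pstdev(ens)  # runoff uncertainty from rainfall error alone
```

The second scenario in the abstract would additionally randomize the model parameter (here `soil_deficit_mm`) inside the loop, widening the resulting runoff distribution.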

  9. Using Extended Genealogy to Estimate Components of Heritability for 23 Quantitative and Dichotomous Traits

    PubMed Central

    Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L.

    2013-01-01

    Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays. PMID:23737753

  10. Spectral Feature Analysis for Quantitative Estimation of Cyanobacteria Chlorophyll-A

    NASA Astrophysics Data System (ADS)

    Lin, Yi; Ye, Zhanglin; Zhang, Yugan; Yu, Jie

    2016-06-01

    In recent years, lake eutrophication has caused large cyanobacteria blooms that not only bring serious ecological disaster but also restrict the sustainable development of the regional economy. Chlorophyll-a is a very important environmental factor for monitoring water quality, especially lake eutrophication. Remote sensing techniques have been widely utilized to estimate the concentration of chlorophyll-a through different kinds of vegetation indices and to monitor its distribution in lakes, rivers, or along coastlines. For each vegetation index, the accuracy of quantitative estimation may differ between satellites because of discrepancies in spectral resolution and channel centers. The purpose of this paper is to analyze the spectral features of chlorophyll-a with hyperspectral data (651 bands in total) and use the result to choose the optimal band combination for different satellites. The analysis method developed in this study could be useful for recognizing and monitoring cyanobacteria blooms automatically and accurately. In our experiment, the reflectance (from 350 nm to 1000 nm) of wild cyanobacteria at different concentrations (from 0 to 1362.11 ug/L) and the corresponding chlorophyll-a concentration were measured simultaneously. Two kinds of hyperspectral vegetation indices were applied in this study: the simple ratio (SR) and the narrow-band normalized difference vegetation index (NDVI), both of which consist of any two bands among the entire 651 narrow bands. Multivariate statistical analysis was then used to construct linear, power, and exponential models. After analyzing the correlation between chlorophyll-a and single-band reflectance, SR, and NDVI respectively, the optimal spectral index for quantitative estimation of cyanobacteria chlorophyll-a, as well as the corresponding central wavelength and band width, were extracted. 
Results show that: Under the condition of water disturbance, SR and NDVI are both suitable for quantitative
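    The band-pair search described above can be sketched with two illustrative indices. The band centers (700 nm and 675 nm), reflectance values, and chlorophyll-a concentrations below are invented for the sketch; only the SR and NDVI definitions come from the abstract.

```python
import numpy as np

def spectral_indices(r_num, r_den):
    """Simple ratio and narrow-band NDVI from two reflectance bands."""
    sr = r_num / r_den
    ndvi = (r_num - r_den) / (r_num + r_den)
    return sr, ndvi

# Illustrative data: reflectance at two hypothetical bands for 5 samples,
# with chlorophyll-a concentrations (ug/L). Values are made up for the sketch.
r700 = np.array([0.05, 0.09, 0.14, 0.20, 0.27])
r675 = np.array([0.04, 0.05, 0.06, 0.07, 0.08])
chla = np.array([50.0, 200.0, 450.0, 800.0, 1300.0])

sr, ndvi = spectral_indices(r700, r675)

# Fit a linear model chla = a * index + b and report r^2 for each index.
for name, x in (("SR", sr), ("NDVI", ndvi)):
    a, b = np.polyfit(x, chla, 1)
    pred = a * x + b
    r2 = 1 - np.sum((chla - pred) ** 2) / np.sum((chla - chla.mean()) ** 2)
    print(f"{name}: chla = {a:.1f} * index + {b:.1f}, r2 = {r2:.3f}")
```

    In the actual study, this fit would be repeated for every pair among the 651 bands to find the optimal combination.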

  11. Quantitative estimation of surface ocean productivity and bottom water oxygen concentration using benthic foraminifera

    NASA Astrophysics Data System (ADS)

    Loubere, Paul

    1994-10-01

    An electronic supplement of this material may be obtained on a diskette or by Anonymous FTP from KOSMOS.AGU.ORG. (Log in to AGU's FTP account using ANONYMOUS as the username and GUEST as the password. Go to the right directory by typing CD APEND. Type LS to see what files are available. Type GET and the name of the file to get it. Finally, type EXIT to leave the system.) (Paper 94PA01624, Quantitative estimation of surface ocean productivity and bottom water concentration using benthic foraminifera, by P. Loubere). Diskettes may be ordered from American Geophysical Union, 2000 Florida Avenue, N.W., Washington, DC 20009; $15.00. Payment must accompany order. Quantitative estimation of surface ocean productivity and bottom water oxygen concentration with benthic foraminifera was attempted using 70 samples from equatorial and North Pacific surface sediments. These samples come from a well-defined depth range in the ocean, between 2200 and 3200 m, so that depth-related factors do not interfere with the estimation. Samples were selected so that foraminifera were well preserved in the sediments and temperature and salinity were nearly uniform (T = 1.5°C; S = 34.6‰). The sample set was also assembled so as to minimize the correlation often seen between surface ocean productivity and bottom water oxygen values (r² = 0.23 for prediction purposes in this case). This procedure reduced the chances of spurious results due to correlations between the environmental variables. The samples encompass a range of productivities from about 25 to >300 gC m-2 yr-1 and a bottom water oxygen range from 1.8 to 3.5 ml/L. Benthic foraminiferal assemblages were quantified using the >62 µm fraction of the sediments and 46 taxon categories. MANOVA multivariate regression was used to project the faunal matrix onto the two environmental dimensions, using published values for productivity and bottom water oxygen to calibrate this operation. The success of this regression was measured with the multivariate r

  12. Evaluation of a rapid method for the quantitative estimation of coliforms in meat by impedimetric procedures.

    PubMed Central

    Martins, S B; Selby, M J

    1980-01-01

    A 24-h instrumental procedure is described for the quantitative estimation of coliforms in ground meat. The method is simple and rapid, and it requires only a single sample dilution and four replicates. The data are recorded automatically and can be used to estimate coliforms in the range of 100 to 10,000 organisms per g. The procedure is an impedance detection time (IDT) method using a new medium, tested against 131 stock cultures, that markedly enhances the impedance response of gram-negative organisms and is selective for coliforms. Seventy samples of ground beef were analyzed for coliforms by the IDT method and by the conventional three-dilution, two-step most-probable-number test tube procedure. Seventy-nine percent of the impedimetric estimates fell within the 95% confidence limits of the most-probable-number values. This corresponds to the criteria used to evaluate other coliform tests, with the added advantages of a single dilution and more rapid results. PMID:6992712
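    The IDT principle lends itself to a simple calibration sketch: higher inocula cross the impedance detection threshold sooner, so log count regresses roughly linearly on detection time. All numbers below are hypothetical; the paper's medium, thresholds, and calibration data are not reproduced here.

```python
import numpy as np

# Hypothetical calibration pairs: log10 coliform count per g vs. impedance
# detection time (h). Higher inocula cross the detection threshold sooner.
log_count = np.array([2.0, 2.5, 3.0, 3.5, 4.0])
idt_hours = np.array([12.1, 10.6, 9.2, 7.7, 6.3])

# Least-squares calibration line: log10(count) = m * IDT + c
m, c = np.polyfit(idt_hours, log_count, 1)

def estimate_count(idt):
    """Estimate coliforms per g from an observed detection time (h)."""
    return 10 ** (m * idt + c)

print(f"slope = {m:.3f} log10 units per hour")
print(f"estimate at IDT = 9.0 h: {estimate_count(9.0):.0f} per g")
```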

  13. Reef-associated crustacean fauna: biodiversity estimates using semi-quantitative sampling and DNA barcoding

    NASA Astrophysics Data System (ADS)

    Plaisance, L.; Knowlton, N.; Paulay, G.; Meyer, C.

    2009-12-01

    The cryptofauna associated with coral reefs accounts for a major part of the biodiversity in these ecosystems but has been largely overlooked in biodiversity estimates because the organisms are hard to collect and identify. We combine a semi-quantitative sampling design and a DNA barcoding approach to provide metrics for the diversity of reef-associated crustaceans. Twenty-two similar-sized dead heads of Pocillopora were sampled at 10 m depth from five central Pacific Ocean localities (four atolls in the Northern Line Islands, and Moorea, French Polynesia). All crustaceans were removed, and partial cytochrome oxidase subunit I was sequenced from 403 individuals, yielding 135 distinct taxa using a species-level criterion of 5% similarity. Most crustacean species were rare; 44% of the OTUs were represented by a single individual, and an additional 33% were represented by several specimens found in only one of the five localities. The Northern Line Islands and Moorea shared only 11 OTUs. Total numbers estimated by species richness statistics (Chao1 and ACE) suggest at least 90 species of crustaceans in Moorea and 150 in the Northern Line Islands for this habitat type. However, rarefaction curves for each region failed to approach an asymptote, and the Chao1 and ACE estimators did not stabilize after sampling eight heads in Moorea, so even these diversity figures are underestimates. Nevertheless, even this modest sampling effort in a very limited habitat type resulted in surprisingly high species numbers.
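    The Chao1 statistic mentioned above has a closed form driven by singletons and doubletons. A minimal sketch follows; the OTU counts are invented, and only the estimator itself is the standard one.

```python
def chao1(abundances):
    """Classic Chao1 richness estimate: S_obs + F1^2 / (2*F2),
    falling back to the bias-corrected form when no doubletons exist."""
    counts = [a for a in abundances if a > 0]
    s_obs = len(counts)
    f1 = sum(1 for a in counts if a == 1)   # singletons
    f2 = sum(1 for a in counts if a == 2)   # doubletons
    if f2 > 0:
        return s_obs + f1 * f1 / (2 * f2)
    return s_obs + f1 * (f1 - 1) / 2

# Illustrative OTU table: many singletons inflate the estimate, mirroring
# the rarity pattern described in the abstract (values are invented).
otu_counts = [1] * 8 + [2] * 3 + [5, 7, 12]
print(round(chao1(otu_counts), 1))  # 14 observed OTUs -> estimate 24.7
```

    With 44% of OTUs observed once, a large gap between observed and estimated richness is exactly what Chao1 reports.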

  14. A method for estimating the effective number of loci affecting a quantitative character.

    PubMed

    Slatkin, Montgomery

    2013-11-01

    A likelihood method is introduced that jointly estimates the number of loci and the additive effect of alleles that account for the genetic variance of a normally distributed quantitative character in a randomly mating population. The method assumes that measurements of the character are available from one or both parents and an arbitrary number of full siblings. The method uses the fact, first recognized by Karl Pearson in 1904, that the variance of a character among offspring depends on both the parental phenotypes and on the number of loci. Simulations show that the method performs well provided that data from a sufficient number of families (on the order of thousands) are available. This method assumes that the loci are in Hardy-Weinberg and linkage equilibrium but does not assume anything about the linkage relationships. It performs equally well if all loci are on the same non-recombining chromosome provided they are in linkage equilibrium. The method can be adapted to take account of loci already identified as being associated with the character of interest. In that case, the method estimates the number of loci not already known to affect the character. The method applied to measurements of crown-rump length in 281 family trios in a captive colony of African green monkeys (Chlorocebus aethiops sabaeus) estimates the number of loci to be 112 and the additive effect to be 0.26 cm. A parametric bootstrap analysis shows that a rough confidence interval has a lower bound of 14 loci. PMID:23973416

  15. The overall impact of testing on medical student learning: quantitative estimation of consequential validity.

    PubMed

    Kreiter, Clarence D; Green, Joseph; Lenoch, Susan; Saiki, Takuya

    2013-10-01

    Given medical education's longstanding emphasis on assessment, it seems prudent to evaluate whether our current research and development focus on testing makes sense. Since any intervention within medical education must ultimately be evaluated based upon its impact on student learning, this report seeks to provide a quantitative accounting of the learning gains attained through educational assessments. To approach this question, we estimate achieved learning within a medical school environment that optimally utilizes educational assessments, and compare this estimate to the learning that might be expected in a medical school that employs no educational assessments. Effect sizes are used to estimate testing's total impact on learning by summarizing three effects: the direct effect, the indirect effect, and the selection effect. The literature is far from complete, but the available evidence strongly suggests that each of these effects is large and that the net cumulative impact on learning in medical education is over two standard deviations. While additional evidence is required, the current literature shows that testing within medical education makes a strong positive contribution to learning. PMID:22886140

  16. Estimating Accurate Relative Spacecraft Angular Position from DSN VLBI Phases Using X-Band Telemetry or DOR Tones

    NASA Technical Reports Server (NTRS)

    Bagri, Durgadas S.; Majid, Walid

    2009-01-01

    At present, the angular position of a spacecraft is determined with the Deep Space Network (DSN) using group delay estimates from very long baseline interferometry (VLBI) phase measurements employing differential one-way ranging (DOR) tones. As an alternative to this approach, we propose estimating the position of a spacecraft to half-a-fringe-cycle accuracy from the time variations between measured and calculated phases, as the Earth rotates, on DSN VLBI baseline(s). Combining the fringe location of the target with the phase allows a high-accuracy estimate of spacecraft angular position. This can be achieved using telemetry signals with data rates of at least 4-8 MSamples/s, or DOR tones.
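    A back-of-envelope calculation illustrates the "half a fringe cycle" angular scale. The carrier frequency (8.4 GHz X-band) and baseline length (8000 km) below are assumed round numbers for the sketch, not values from the paper.

```python
# Back-of-envelope fringe spacing for a DSN VLBI baseline. The carrier
# frequency and baseline length are assumed round numbers.
C = 299_792_458.0          # speed of light, m/s
f_xband = 8.4e9            # assumed X-band carrier, Hz
baseline = 8.0e6           # assumed intercontinental baseline, m

wavelength = C / f_xband                  # ~3.6 cm
fringe_spacing_rad = wavelength / baseline
fringe_spacing_nrad = fringe_spacing_rad * 1e9

print(f"fringe spacing ~ {fringe_spacing_nrad:.1f} nrad")
# Locating the spacecraft to half a fringe pins the angle to ~half this value.
```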

  17. Quaternion-Based Unscented Kalman Filter for Accurate Indoor Heading Estimation Using Wearable Multi-Sensor System

    PubMed Central

    Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng

    2015-01-01

    Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, its accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation using a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the system fixed on the pedestrian's waist and on the quadrotor UAV, respectively, compared to the reference path. PMID:25961384
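    A full quaternion UKF is beyond a short sketch, but the magnetometer heading that such a filter fuses can be illustrated with the classic tilt-compensation formula. This is a simpler baseline than the paper's method, and the axis and sign conventions below are one common choice, not necessarily the authors'.

```python
import math

def tilt_compensated_heading(acc, mag):
    """Heading (rad, clockwise from magnetic north) from accelerometer and
    magnetometer readings. A much simpler baseline than a quaternion UKF:
    roll and pitch are taken from gravity, then the magnetic field vector
    is rotated into the horizontal plane before taking atan2.
    Axis/sign conventions here are one common choice, not universal."""
    ax, ay, az = acc
    mx, my, mz = mag
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Rotate the magnetometer vector into the horizontal plane.
    mxh = mx * math.cos(pitch) + mz * math.sin(pitch)
    myh = (mx * math.sin(roll) * math.sin(pitch)
           + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    return math.atan2(-myh, mxh) % (2 * math.pi)

# Level sensor with the horizontal field along +x (facing north): heading ~ 0.
print(math.degrees(tilt_compensated_heading((0, 0, 9.81), (0.2, 0.0, -0.4))))
```

    The UKF improves on this baseline by fusing gyroscope rates, so short magnetic disturbances do not corrupt the heading directly.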

  18. Quaternion-based unscented Kalman filter for accurate indoor heading estimation using wearable multi-sensor system.

    PubMed

    Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng

    2015-01-01

    Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, its accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation using a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the system fixed on the pedestrian's waist and on the quadrotor UAV, respectively, compared to the reference path. PMID:25961384

  19. Effects of shortened acquisition time on accuracy and precision of quantitative estimates of organ activity1

    PubMed Central

    He, Bin; Frey, Eric C.

    2010-01-01

    Purpose: Quantitative estimation of in vivo organ uptake is an essential part of treatment planning for targeted radionuclide therapy. This usually involves the use of planar or SPECT scans with acquisition times chosen based more on image quality considerations than on the minimum needed for precise quantification. In previous simulation studies at clinical count levels (185 MBq 111In), the authors observed larger variations in accuracy of organ activity estimates resulting from anatomical and uptake differences than from statistical noise. This suggests that it is possible to reduce the acquisition time without substantially increasing the variation in accuracy. Methods: To test this hypothesis, the authors compared the accuracy and variation in accuracy of organ activity estimates obtained from planar and SPECT scans at various count levels. A simulated phantom population with realistic variations in anatomy and biodistribution was used to model variability in a patient population. Planar and SPECT projections were simulated using previously validated Monte Carlo simulation tools. The authors simulated the projections at count levels approximately corresponding to 1.5–30 min of total acquisition time. The projections were processed using previously described quantitative SPECT (QSPECT) and planar (QPlanar) methods. The QSPECT method was based on the OS-EM algorithm with compensations for attenuation, scatter, and collimator-detector response. The QPlanar method is based on the ML-EM algorithm using the same model-based compensation for all the image-degrading effects as the QSPECT method. The volumes of interest (VOIs) were defined based on the true organ configuration in the phantoms. The errors in organ activity estimates from different count levels and processing methods were compared in terms of mean and standard deviation over the simulated phantom population. Results: There was little degradation in quantitative reliability when the acquisition time was
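    The ML-EM update underlying both the QSPECT and QPlanar methods can be sketched on a toy problem. The system matrix below is random and omits the attenuation, scatter, and collimator-detector response models the paper compensates for; it only shows the multiplicative EM iteration itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy emission problem: 4 voxels, 6 detector bins, known system matrix A
# (probability that a decay in voxel j is recorded in bin i).
A = rng.uniform(0.1, 1.0, size=(6, 4))
A /= A.sum(axis=0)                    # each column sums to 1 (no lost counts)
x_true = np.array([100.0, 40.0, 250.0, 10.0])
y = A @ x_true                        # noise-free projection data

x = np.ones(4)                        # flat initial activity estimate
sens = A.sum(axis=0)                  # sensitivity image
for _ in range(500):                  # ML-EM multiplicative update
    ratio = y / (A @ x)
    x *= (A.T @ ratio) / sens

print(np.round(x, 1))                 # converges toward x_true
```

    A useful property visible here: after the first iteration, EM conserves total counts, so quantitative accuracy hinges on the fidelity of A rather than on the iteration scheme.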

  20. Quantitative estimate of the effect of cellulase components during degradation of cotton fibers.

    PubMed

    Wang, Lu-Shan; Zhang, Yu-Zhong; Yang, Hong; Gao, Pei-Ji

    2004-03-15

    A comprehensive mechanistic kinetic model for enzymatic degradation of cotton fibers has been established based on a complete factorial experiment in combination with multivariate stepwise regression analysis. Analysis of the statistical parameter values in the model suggests that the enzymatic degradation of cotton fiber is a progressive and heterogeneous process that includes at least two courses that occur sequentially and then progress in parallel. Cellulose fibers were first depolymerized or solubilized by the synergism between cellobiohydrolase I (CBHI) and endoglucanase I (EGI), and then the oligomers obtained were randomly hydrolyzed into glucose by EGI and beta-glucosidase. The proposed model can be applied to the quantitative estimation of the effects of the three cellulase components, CBHI, EGI, and beta-glucosidase, separately or in combination, during the entire process of cellulose degradation. The validity of the proposed model has been verified by a filter paper activity assay. Its broader applicability is also discussed. PMID:14980825

  1. Estimation of Low Quantity Genes: A Hierarchical Model for Analyzing Censored Quantitative Real-Time PCR Data

    PubMed Central

    Boyer, Tim C.; Hanson, Tim; Singer, Randall S.

    2013-01-01

    Analysis of gene quantities measured by quantitative real-time PCR (qPCR) can be complicated by observations that are below the limit of quantification (LOQ) of the assay. A hierarchical model estimated using MCMC methods was developed to analyze qPCR data of genes with observations that fall below the LOQ (censored observations). Simulated datasets with moderate to very high levels of censoring were used to assess the performance of the model; model results were compared to approaches that replace censored observations with a value on the log scale approximating zero or with values ranging from one to the LOQ of ten gene copies. The model was also compared to a Tobit regression model. Finally, all approaches for handling censored observations were evaluated with DNA extracted from samples that were spiked with known quantities of the antibiotic resistance gene tetL. For the simulated datasets, the model outperformed substitution of all values from 1–10 under all censoring scenarios in terms of bias, mean square error, and coverage of 95% confidence intervals for regression parameters. The model performed as well or better than substitution of a value approximating zero under two censoring scenarios (approximately 57% and 79% censored values). The model also performed as well or better than Tobit regression in two of three censoring scenarios (approximately 79% and 93% censored values). Under the levels of censoring present in the three scenarios of this study, substitution of any values greater than 0 produced the least accurate results. When applied to data produced from spiked samples, the model produced the lowest mean square error of the three approaches. This model provides a good alternative for analyzing large amounts of left-censored qPCR data when the goal is estimation of population parameters. The flexibility of this approach can accommodate complex study designs such as longitudinal studies. PMID:23741414
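    The paper fits a hierarchical Bayesian model by MCMC; a simpler maximum-likelihood (Tobit-style) version of the same censoring idea can be sketched as follows, with simulated log10 gene quantities left-censored at an LOQ of 10 copies. The parameter values are invented for the sketch.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)

# Simulated log10 gene quantities with left-censoring at the LOQ.
mu_true, sigma_true, loq = 1.5, 0.8, 1.0     # log10 copies; LOQ = 10 copies
z = rng.normal(mu_true, sigma_true, 500)
observed = np.maximum(z, loq)                # censored values reported at LOQ
is_cens = z < loq

def neg_loglik(params):
    """Tobit-style likelihood: density for observed values,
    cumulative probability below the LOQ for censored ones."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    ll_obs = stats.norm.logpdf(observed[~is_cens], mu, sigma).sum()
    ll_cen = is_cens.sum() * stats.norm.logcdf(loq, mu, sigma)
    return -(ll_obs + ll_cen)

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0])
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"mu ~ {mu_hat:.2f}, sigma ~ {sigma_hat:.2f}")
```

    Substituting the LOQ (or zero) for censored values would bias both parameters, which is the failure mode the paper quantifies.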

  2. Developing accurate survey methods for estimating population sizes and trends of the critically endangered Nihoa Millerbird and Nihoa Finch.

    USGS Publications Warehouse

    Gorresen, P. Marcos; Camp, Richard J.; Brinck, Kevin W.; Farmer, Chris

    2012-01-01

    Point-transect surveys indicated that millerbirds were more abundant than shown by the strip-transect method, and were estimated at 802 birds in 2010 (95% CI = 652–964) and 704 birds in 2011 (95% CI = 579–837). Point-transect surveys yielded population estimates with improved precision, which will permit trends to be detected in shorter time periods and with greater statistical power than is available from strip-transect survey methods. Mean finch population estimates and associated uncertainty were not markedly different among the three survey methods, but the performance of the models used to estimate density and population size is expected to improve as data from additional surveys are incorporated. Using the point-transect survey, the mean finch population size was estimated at 2,917 birds in 2010 (95% CI = 2,037–3,965) and 2,461 birds in 2011 (95% CI = 1,682–3,348). Preliminary testing of the line-transect method in 2011 showed that it would not generate sufficient detections to effectively model bird density and, consequently, relatively precise population size estimates. Both species were fairly evenly distributed across Nihoa and appear to occur in all or nearly all available habitat. The time expended and area traversed by observers were similar among survey methods; however, point-transect surveys do not require that observers walk a straight transect line, thereby allowing them to avoid culturally or biologically sensitive areas and minimize the adverse effects of recurrent travel to any particular area. In general, point-transect surveys detect more birds than strip-survey methods, thereby improving the precision of the resulting population size and trend estimates. The method is also better suited for the steep and uneven terrain of Nihoa

  3. Estimation of multipath transmission parameters for quantitative ultrasound measurements of bone.

    PubMed

    Dencks, Stefanie; Schmitz, Georg

    2013-09-01

    When applying quantitative ultrasound (QUS) measurements to bone to predict osteoporotic fracture risk, multipath transmission of sound waves frequently occurs. Over the last 10 years, interest in separating multipath QUS signals for analysis has grown, leading to the introduction of several approaches. Here, we compare the performance of the two fastest algorithms proposed for QUS measurements of bone: the modified least-squares Prony method (MLSP), and the space alternating generalized expectation maximization algorithm (SAGE) applied in the frequency domain. In both approaches, the parameters of the transfer functions of the sound propagation paths are estimated. To provide an objective measure, we also analytically derive the Cramér-Rao lower bound of variances for any estimator and arbitrary transmit signals. In comparison with results of Monte Carlo simulations, this measure is used to evaluate both approaches regarding their accuracy and precision. Additionally, with simulations using typical QUS measurement settings, we illustrate the limitations of separating two superimposed waves for varying parameters, with a focus on their temporal separation. It is shown that for good SNRs around 100 dB, MLSP yields better results when two waves are very close; additionally, the parameters of the smaller wave are more reliably estimated. If the SNR decreases, the parameter estimation with MLSP becomes biased and inefficient, and the robustness to noise of SAGE clearly prevails. Because a clear influence of the interrelation between the wavelength of the ultrasound signals and their temporal separation is observable in the results, these findings can be transferred to QUS measurements at other sites. The choice of the suitable algorithm thus depends on the measurement conditions. PMID:24658719
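    The MLSP in the abstract is a modified, frequency-domain variant of the least-squares Prony method. A plain time-domain Prony fit shows the core idea of resolving two superimposed decaying waves; the signal below is synthetic and noise-free, and the two-step structure (linear prediction, then a Vandermonde solve) is the textbook formulation rather than the paper's modified version.

```python
import numpy as np

def prony(x, p):
    """Least-squares Prony fit of x[n] ~ sum_k c_k * z_k**n with p modes."""
    N = len(x)
    # Step 1: linear-prediction coefficients of the annihilating filter.
    M = np.column_stack([x[p - 1 - j : N - 1 - j] for j in range(p)])
    a, *_ = np.linalg.lstsq(M, -x[p:], rcond=None)
    z = np.roots(np.r_[1.0, a])                  # mode poles
    # Step 2: complex amplitudes from a Vandermonde least-squares solve.
    V = np.vander(z, N, increasing=True).T       # V[n, k] = z_k**n
    c, *_ = np.linalg.lstsq(V, x, rcond=None)
    return z, c

# Two superimposed decaying oscillations (synthetic, noise-free).
n = np.arange(60)
x = 1.0 * (0.95 * np.exp(1j * 0.3)) ** n + 0.4 * (0.90 * np.exp(1j * 0.8)) ** n
z, c = prony(x, 2)
print(np.sort(np.abs(z)))   # pole magnitudes, close to [0.9, 0.95]
```

    With noise this direct solve becomes ill-conditioned, which is why the low-SNR regime in the abstract favors the iterative SAGE approach.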

  4. GGOS and the EOP - the key role of SLR for a stable estimation of highly accurate Earth orientation parameters

    NASA Astrophysics Data System (ADS)

    Bloßfeld, Mathis; Panzetta, Francesca; Müller, Horst; Gerstl, Michael

    2016-04-01

    The GGOS vision is to integrate geometric and gravimetric observation techniques to estimate consistent geodetic-geophysical parameters. In order to reach this goal, the common estimation of station coordinates, Stokes coefficients and Earth Orientation Parameters (EOP) is necessary. Satellite Laser Ranging (SLR) provides the ability to study correlations between the different parameter groups, since the observed satellite orbit dynamics are sensitive to the above-mentioned geodetic parameters. To decrease the correlations, SLR observations to multiple satellites have to be combined. In this paper, we compare the estimated EOP of (i) single-satellite SLR solutions and (ii) multi-satellite SLR solutions. To this end, we jointly estimate station coordinates, EOP, Stokes coefficients and orbit parameters using different satellite constellations. A special focus of this investigation is the de-correlation of different geodetic parameter groups through the combination of SLR observations. Besides SLR observations to spherical satellites (commonly used), we discuss the impact of SLR observations to non-spherical satellites such as, e.g., the JASON-2 satellite. The goal of this study is to discuss the existing parameter interactions and to present a strategy for obtaining reliable estimates of station coordinates, EOP, orbit parameters and Stokes coefficients in one common adjustment. Thereby, the benefits of a multi-satellite SLR solution are evaluated.

  5. Assignment of Calibration Information to Deeper Phylogenetic Nodes is More Effective in Obtaining Precise and Accurate Divergence Time Estimates.

    PubMed

    Mello, Beatriz; Schrago, Carlos G

    2014-01-01

    Divergence time estimation has become an essential tool for understanding macroevolutionary events. Molecular dating aims to obtain reliable inferences, which, within a statistical framework, means jointly increasing the accuracy and precision of estimates. Bayesian dating methods exhibit the property of a linear relationship between uncertainty and estimated divergence dates. This relationship occurs even if the number of sites approaches infinity and places a limit on the maximum precision of node ages. However, how the placement of calibration information affects the precision of divergence time estimates remains an open question. In this study, relying on simulated and empirical data, we investigated how the location of calibration within a phylogeny affects the accuracy and precision of time estimates. We found that calibration priors set at median and deep phylogenetic nodes were associated with higher precision values compared to analyses involving calibration at the shallowest node. The results were independent of tree symmetry. An empirical mammalian dataset produced results that were consistent with those generated by the simulated sequences. Assigning time information to the deeper nodes of a tree is crucial to guarantee the accuracy and precision of divergence times. This finding highlights the importance of the appropriate choice of outgroups in molecular dating. PMID:24855333

  6. Identification and evaluation of new reference genes in Gossypium hirsutum for accurate normalization of real-time quantitative RT-PCR data

    PubMed Central

    2010-01-01

    Background Normalization to reference genes, or housekeeping genes, can produce more accurate and reliable results from reverse transcription real-time quantitative polymerase chain reaction (qPCR) experiments. Recent studies have shown that no single housekeeping gene is universal for all experiments; thus, selecting suitable reference genes should be the first step of any qPCR analysis. Only a few studies on the identification of housekeeping genes have been carried out in plants, and qPCR studies on important crops such as cotton have therefore been hampered by the lack of suitable reference genes. Results Using two distinct algorithms, implemented in geNorm and NormFinder, we assessed the gene expression of nine candidate reference genes in cotton: GhACT4, GhEF1α5, GhFBX6, GhPP2A1, GhMZA, GhPTB, GhGAPC2, GhβTUB3 and GhUBQ14. The candidate reference genes were evaluated in 23 experimental samples consisting of six distinct plant organs, eight stages of flower development, four stages of fruit development, and the floral verticils. The expression of the GhPP2A1 and GhUBQ14 genes was the most stable across all samples and also when distinct plant organs were examined. GhACT4 and GhUBQ14 presented the most stable expression during flower development, GhACT4 and GhFBX6 in the floral verticils, and GhMZA and GhPTB during fruit development. Our analysis provided the most suitable combination of reference genes for each experimental set tested as internal controls for reliable qPCR data normalization. In addition, to illustrate the use of cotton reference genes, we checked the expression of two cotton MADS-box genes in distinct plant and floral organs and also during flower development. Conclusion We have tested the expression stabilities of nine candidate genes in a set of 23 tissue samples from cotton plants divided into five different experimental sets. As a result of this evaluation, we recommend the use of GhUBQ14 and GhPP2A1 housekeeping genes as superior references for normalization of gene
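    The geNorm algorithm mentioned above ranks genes by a stability measure M: the average standard deviation of pairwise log expression ratios across samples. A minimal sketch with simulated data follows; the expression values are invented, and only the M definition follows geNorm.

```python
import numpy as np

def genorm_m(expr):
    """geNorm stability measure M for each gene: the average standard
    deviation of log2 expression ratios against every other gene.
    Lower M means more stable. `expr` is samples x genes (linear scale)."""
    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    m = np.empty(n_genes)
    for j in range(n_genes):
        sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        m[j] = np.mean(sds)
    return m

rng = np.random.default_rng(3)
base = rng.lognormal(5, 1, size=(23, 1))         # per-sample loading
stable = base * rng.lognormal(0, 0.05, (23, 2))  # two stable genes
noisy = base * rng.lognormal(0, 0.8, (23, 1))    # one unstable gene
m = genorm_m(np.hstack([stable, noisy]))
print(np.round(m, 2))   # the last gene gets the largest (worst) M
```

    geNorm then iteratively discards the gene with the highest M, which is how a pair such as GhUBQ14/GhPP2A1 emerges as the recommended reference set.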

  7. Quantitatively estimating defects in graphene devices using discharge current analysis method

    PubMed Central

    Jung, Ukjin; Lee, Young Gon; Kang, Chang Goo; Lee, Sangchul; Kim, Jin Ju; Hwang, Hyeon June; Lim, Sung Kwan; Ham, Moon-Ho; Lee, Byoung Hun

    2014-01-01

    Defects in graphene are the most important concern for successful applications of graphene, since they affect device performance significantly. However, once the graphene is integrated in a device structure, the quality of the graphene and its surrounding environment can only be assessed using indirect information such as hysteresis, mobility, and drive current. Here we develop a discharge current analysis method to measure the quality of graphene integrated in a field effect transistor structure by analyzing the discharge current, and examine its validity using various device structures. The density of charging sites affecting the performance of the graphene field effect transistor, obtained using the discharge current analysis method, was on the order of 10^14/cm^2, which closely correlates with the intensity ratio of the D to G bands in Raman spectroscopy. The graphene FETs fabricated on poly(ethylene naphthalate) (PEN) were found to have a lower density of charging sites than those on a SiO2/Si substrate, mainly due to reduced interfacial interaction between the graphene and the PEN. This method can be an indispensable means to improve the stability of devices using graphene, as it provides an accurate and quantitative way to define the quality of graphene after device fabrication. PMID:24811431

  8. Using Modified Contour Deformable Model to Quantitatively Estimate Ultrasound Parameters for Osteoporosis Assessment

    NASA Astrophysics Data System (ADS)

    Chen, Yung-Fu; Du, Yi-Chun; Tsai, Yi-Ting; Chen, Tainsong

    Osteoporosis is a systemic skeletal disease characterized by low bone mass and micro-architectural deterioration of bone tissue, leading to bone fragility. Finding an effective method for prevention and early diagnosis of the disease is very important. Several parameters, including broadband ultrasound attenuation (BUA), speed of sound (SOS), and stiffness index (STI), have been used to measure the characteristics of bone tissues. In this paper, we propose a method, namely the modified contour deformable model (MCDM), based on the active contour model (ACM) and active shape model (ASM), for automatically detecting the calcaneus contour in quantitative ultrasound (QUS) parametric images. The results show that the difference between the contours detected by the MCDM and the true boundary for the phantom is less than one pixel. By comparing the phantom ROIs, a significant relationship was found between the contour mean and bone mineral density (BMD), with R=0.99. The influence of selecting different ROI diameters (12, 14, 16 and 18 mm) and different region-selecting methods, including fixed region (ROI_fix), automatic circular region (ROI_cir) and calcaneal contour region (ROI_anat), was evaluated in tests on human subjects. Measurements with large ROI diameters, especially using the fixed region, result in high position errors (10-45%). The precision errors of the measured ultrasonic parameters for ROI_anat are smaller than for ROI_fix and ROI_cir. In conclusion, ROI_anat provides more accurate measurement of ultrasonic parameters for the evaluation of osteoporosis and is useful for clinical application.

  9. How Accurate Are German Work-Time Data? A Comparison of Time-Diary Reports and Stylized Estimates

    ERIC Educational Resources Information Center

    Otterbach, Steffen; Sousa-Poza, Alfonso

    2010-01-01

    This study compares work time data collected by the German Time Use Survey (GTUS) using the diary method with stylized work time estimates from the GTUS, the German Socio-Economic Panel, and the German Microcensus. Although on average the differences between the time-diary data and the interview data are not large, our results show that significant…

  10. Acute toxicity estimation by calculation--Tubifex assay and quantitative structure-activity relationships.

    PubMed

    Tichý, Milon; Rucki, Marian; Hanzlíková, Iveta; Roth, Zdenek

    2008-11-01

    A quantitative structure-activity relationship (QSAR) model dependent on log P(n-octanol/water), or log P(OW), was developed for the acute toxicity index EC50, the median effective concentration measured as inhibition of movement of the oligochaete Tubifex tubifex after 3 min exposure, EC50(Tt) (mol/L): log EC50(Tt) = -0.809 (+/-0.035) log P(OW) - 0.495 (+/-0.060), n=82, r=0.931, r^2=0.867, residual standard deviation of the estimate 0.315. The learning series for the QSAR model with the oligochaete contained alkanols, alkenols, and alkynols; saturated and unsaturated aldehydes; aniline and chlorinated anilines; phenol and chlorinated phenols; and esters. Three cross-validation procedures proved the robustness and stability of the QSAR model with respect to the chemical structure of compounds within the series used in the learning set. Predictive ability was described by q^2 = 0.801 (cross-validated r^2; predicted variation estimated with cross-validation) in LSO (leave-a-structural-series-out) cross-validation. PMID:18522479
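    The fitted QSAR line can be applied directly to predict toxicity from lipophilicity; a minimal sketch using the coefficients reported in the abstract (the example log P value is hypothetical):

```python
def log_ec50_tt(log_pow: float) -> float:
    """Reported QSAR: log EC50(Tt) = -0.809 * log P(OW) - 0.495 (mol/L)."""
    return -0.809 * log_pow - 0.495

# Hypothetical compound with log P(OW) = 2.0:
ec50_mol_per_l = 10.0 ** log_ec50_tt(2.0)
```

    More lipophilic compounds (higher log P) give lower predicted EC50, i.e. higher acute toxicity, which is the usual narcosis-type trend.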

  11. Improved radar data processing algorithms for quantitative rainfall estimation in real time.

    PubMed

    Krämer, S; Verworn, H R

    2009-01-01

    This paper describes a new methodology for processing C-band radar data for direct use as rainfall input to hydrologic and hydrodynamic models and in real-time control of urban drainage systems. In contrast to the adjustment of radar data with the help of rain gauges, the new approach accounts for the microphysical properties of the current rainfall. In a first step, radar data are corrected for attenuation. This phenomenon has been identified as the main cause of the general underestimation of radar rainfall. Systematic variation of the attenuation coefficients within predefined bounds allows robust reflectivity profiling. Secondly, event-specific R-Z relations are applied to the corrected radar reflectivity data in order to generate quantitatively reliable radar rainfall estimates. The results of the methodology are validated against a network of 37 rain gauges located in the Emscher and Lippe river basins. Finally, the relevance of the correction methodology for radar rainfall forecasts is demonstrated. The results clearly show that the new methodology significantly improves radar rainfall estimation and rainfall forecasts. The algorithms are applicable in real time. PMID:19587415
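    The reflectivity-to-rain-rate step can be sketched by inverting a Z-R power law. The Marshall-Palmer coefficients below are generic textbook defaults standing in for the event-specific relations the paper derives from corrected reflectivity:

```python
def rain_rate(z_dbz: float, a: float = 200.0, b: float = 1.6) -> float:
    """Invert the power law Z = a * R**b, with Z converted from dBZ to
    linear units (mm^6 m^-3) and R in mm/h. a=200, b=1.6 are the
    Marshall-Palmer defaults, not the paper's event-specific values."""
    z_linear = 10.0 ** (z_dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)
```

    With these defaults, 23 dBZ corresponds to roughly 1 mm/h; choosing event-specific a and b shifts this mapping, which is exactly why the paper fits them per event.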

  12. Lake number, a quantitative indicator of mixing used to estimate changes in dissolved oxygen

    USGS Publications Warehouse

    Robertson, Dale M.; Imberger, Jorg

    1994-01-01

    Lake Number, LN, values are shown to be quantitative indicators of deep mixing in lakes and reservoirs that can be used to estimate changes in deep-water dissolved oxygen (DO) concentrations. LN is a dimensionless parameter defined as the ratio of the moments, about the center of volume of the water body, of the stabilizing force of gravity associated with density stratification to the destabilizing forces supplied by wind, cooling, inflow, outflow, and other artificial mixing devices. To demonstrate the universality of this parameter, LN values are used to describe the extent of deep mixing and are compared with changes in DO concentrations in three reservoirs in Australia and four lakes in the U.S.A., which vary in productivity and mixing regime. A simple model is developed that relates changes in LN values, i.e., the extent of mixing, to changes in near-bottom DO concentrations. After calibrating the model for a specific system, it is possible to use real-time LN values, calculated from water temperature profiles and surface wind velocities, to estimate changes in DO concentrations (assuming unchanged trophic conditions).
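    The calibration step described above (relating LN to near-bottom DO change) might be sketched as a simple regression. The data points and the log-linear form below are illustrative assumptions, not the paper's model:

```python
import math

# Hypothetical calibration data for one lake: Lake Number vs. observed
# change in near-bottom DO (mg/L). Values are illustrative only.
ln_values = [0.5, 1.0, 2.0, 4.0, 8.0]
ddo_values = [1.8, 1.2, 0.7, 0.3, 0.1]

# Fit dDO ~ a + b * log(LN) by ordinary least squares.
xs = [math.log(v) for v in ln_values]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ddo_values) / n
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ddo_values))
     / sum((x - xbar) ** 2 for x in xs))
a = ybar - b * xbar

def predict_ddo(lake_number: float) -> float:
    """Predicted near-bottom DO change from a real-time LN value."""
    return a + b * math.log(lake_number)
```

    Once calibrated for a system, real-time LN values computed from temperature profiles and wind feed straight into the prediction, as the abstract describes.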

  13. The quantitative precipitation estimation system for Dallas-Fort Worth (DFW) urban remote sensing network

    NASA Astrophysics Data System (ADS)

    Chen, Haonan; Chandrasekar, V.

    2015-12-01

    The Dallas-Fort Worth (DFW) urban radar network consists of a combination of high resolution X band radars and a standard National Weather Service (NWS) Next-Generation Radar (NEXRAD) system operating at S band frequency. High spatiotemporal-resolution quantitative precipitation estimation (QPE) is one of the important applications of such a network. This paper presents a real-time QPE system developed by the Collaborative Adaptive Sensing of the Atmosphere (CASA) Engineering Research Center for the DFW urban region using both the high resolution X band radar network and the NWS S band radar observations. The specific dual-polarization radar rainfall algorithms at different frequencies (i.e., S- and X-band) and the fusion methodology combining observations at different temporal resolution are described. Radar and rain gauge observations from four rainfall events in 2013 that are characterized by different meteorological phenomena are used to compare the rainfall estimation products of the CASA DFW QPE system to conventional radar products from the national radar network provided by NWS. This high-resolution QPE system is used for urban flash flood mitigations when coupled with hydrological models.

  14. Application of quantitative structure-property relationship analysis to estimate the vapor pressure of pesticides.

    PubMed

    Goodarzi, Mohammad; Coelho, Leandro dos Santos; Honarparvar, Bahareh; Ortiz, Erlinda V; Duchowicz, Pablo R

    2016-06-01

    The application of molecular descriptors in describing Quantitative Structure Property Relationships (QSPR) for the estimation of the vapor pressure (VP) of pesticides is of ongoing interest. In this study, QSPR models were developed using multiple linear regression (MLR) methods to predict the vapor pressure values of 162 pesticides. Several feature selection methods, namely the replacement method (RM), genetic algorithms (GA), stepwise regression (SR) and forward selection (FS), were used to select the most relevant molecular descriptors from a pool of variables. The optimum subset of molecular descriptors was used to build a QSPR model to estimate the vapor pressures of the selected pesticides. The replacement method gave the best predictive ability for vapor pressure and was the most reliable feature selection method for these pesticides. The results provided MLR models with satisfactory predictive ability, which will be important for predicting vapor pressure values of compounds whose values are unknown. This study may open new opportunities for designing and developing new pesticides. PMID:26890190
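    Of the feature selection methods compared, forward selection is the simplest to sketch: greedily add the descriptor that most reduces the residual sum of squares of the MLR fit. The toy descriptor matrix below is synthetic, not the study's descriptor pool:

```python
import numpy as np

def forward_selection(X: np.ndarray, y: np.ndarray, k: int) -> list:
    """Greedy forward selection for MLR: at each step add the descriptor
    (column of X) that most reduces the residual sum of squares."""
    n, p = X.shape
    chosen = []
    for _ in range(k):
        best, best_rss = None, np.inf
        for j in range(p):
            if j in chosen:
                continue
            A = np.column_stack([np.ones(n), X[:, chosen + [j]]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((y - A @ coef) ** 2))
            if rss < best_rss:
                best, best_rss = j, rss
        chosen.append(best)
    return chosen

# Toy data: y truly depends on descriptors 0 and 2 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=50)
selected = forward_selection(X, y, 2)
```

    The replacement method favoured by the study works differently: it swaps descriptors in and out of a fixed-size subset rather than only adding, which helps it escape the greedy trap shown here.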

  15. Quantitative estimate of commercial fish enhancement by seagrass habitat in southern Australia

    NASA Astrophysics Data System (ADS)

    Blandon, Abigayil; zu Ermgassen, Philine S. E.

    2014-03-01

    Seagrass provides many ecosystem services that are of considerable value to humans, including the provision of nursery habitat for commercial fish stock. Yet few studies have sought to quantify these benefits. As seagrass habitat continues to suffer a high rate of loss globally, and with the growing emphasis on compensatory restoration, valuation of the ecosystem services associated with seagrass habitat is increasingly important. We undertook a meta-analysis of juvenile fish abundance at seagrass and control sites to derive a quantitative estimate of the enhancement of juvenile fish by seagrass habitats in southern Australia. Thirteen fish species of commercial importance were identified as being recruitment-enhanced in seagrass habitat, twelve of which were associated with sufficient life-history data to allow estimation of total biomass enhancement. We applied von Bertalanffy growth models and species-specific mortality rates to the determined values of juvenile enhancement to estimate the contribution of seagrass to commercial fish biomass. The identified species were enhanced in seagrass by 0.98 kg m^-2 y^-1, equivalent to ~$A230,000 ha^-1 y^-1. These values represent the stock enhancement where all fish species are present, as opposed to realized catches. Having accounted for the time lag between fish recruiting to a seagrass site and entering the fishery, and for a 3% annual discount rate, we find that seagrass restoration efforts costing $A10,000 ha^-1 have a potential payback time of less than five years, and that restoration costing $A629,000 ha^-1 can be justified on the basis of enhanced commercial fish recruitment where these twelve fish species are present.
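    The biomass step chains the pieces named in the abstract: juvenile enhancement decayed by species-specific mortality, with size from a von Bertalanffy growth model and a length-weight conversion. All parameter values below are illustrative placeholders, not the paper's fitted values for any species:

```python
import math

def vb_length(t: float, l_inf: float, k: float, t0: float = 0.0) -> float:
    """von Bertalanffy length-at-age: L(t) = L_inf * (1 - exp(-k*(t - t0)))."""
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

def survivors(n0: float, m: float, t: float) -> float:
    """Numbers surviving to time t under instantaneous mortality rate m."""
    return n0 * math.exp(-m * t)

# Illustrative, non-species-specific parameters:
n_recruits = 1000.0     # juveniles enhanced per unit area per year
m = 0.4                 # annual instantaneous mortality rate
l_inf, k = 40.0, 0.3    # asymptotic length (cm), growth coefficient (1/y)
a, b = 0.01, 3.0        # length-weight relation W = a * L**b (grams)
t_entry = 3.0           # years until recruits enter the fishery

n_at_entry = survivors(n_recruits, m, t_entry)
biomass_g = n_at_entry * a * vb_length(t_entry, l_inf, k) ** b
```

    Discounting the resulting biomass stream at 3% per year, as in the paper, then gives the payback-time comparison against restoration cost.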

  16. Quantitative risk estimation for a Legionella pneumophila infection due to whirlpool use.

    PubMed

    Bouwknegt, Martijn; Schijven, Jack F; Schalk, Johanna A C; de Roda Husman, Ana Maria

    2013-07-01

    Quantitative microbiological risk assessment was used to quantify the risk associated with exposure to Legionella pneumophila in a whirlpool. Conceptually, air bubbles ascend to the surface, intercepting Legionella from the traversed water. At the surface the bubble bursts into dominantly noninhalable jet drops and inhalable film drops. Assuming that film drops carry half of the intercepted Legionella, a total of 4 (95% interval: 1-9) and 4.5×10^4 (4.4×10^4-4.7×10^4) cfu/min were estimated to be aerosolized for concentrations of 1 and 1,000 legionellae per liter, respectively. Using a dose-response model for guinea pigs to represent humans, infection risks for active whirlpool use with 100 cfu/L water for 15 minutes were 0.29 (~0.11-0.48) for susceptible males and 0.22 (~0.06-0.42) for susceptible females. An L. pneumophila concentration of ≥1,000 cfu/L water was estimated to nearly always cause an infection (mean: 0.95; 95% interval: 0.9-~1). Estimated infection risks were time-dependent, ranging from 0.02 (0-0.11) for 1-minute exposures to 0.93 (0.86-0.97) for 2-hour exposures when the L. pneumophila concentration was 100 cfu/L water. Pool water in Dutch bathing establishments should contain <100 cfu Legionella/L water. This study suggests that stricter provisions might be required to assure adequate public health protection. PMID:23078231
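    The infection-risk step of such an assessment typically uses an exponential dose-response model; a minimal sketch, where the parameter r and the inhalation terms are illustrative assumptions rather than the paper's fitted guinea-pig values:

```python
import math

def p_infection(dose_cfu: float, r: float = 0.06) -> float:
    """Exponential dose-response: P = 1 - exp(-r * dose). The value of r
    here is illustrative, not the fitted parameter from the paper."""
    return 1.0 - math.exp(-r * dose_cfu)

def inhaled_dose(aerosolized_cfu_per_min: float, minutes: float,
                 inhaled_fraction: float = 0.01) -> float:
    """Dose as aerosolized output times exposure time times an assumed
    inhaled/deposited fraction (the fraction is a placeholder)."""
    return aerosolized_cfu_per_min * minutes * inhaled_fraction

# 15-minute exposure at the low-concentration aerosolization rate:
risk_15min = p_infection(inhaled_dose(4.0, 15.0))
```

    The time dependence reported in the abstract falls out naturally: dose, and hence risk, grows with exposure duration until the model saturates near 1.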

  17. Accurate 3D rigid-body target motion and structure estimation by using GMTI/HRR with template information

    NASA Astrophysics Data System (ADS)

    Wu, Shunguang; Hong, Lang

    2008-04-01

    A framework for simultaneously estimating the motion and structure parameters of a 3D object using high range resolution (HRR) and ground moving target indicator (GMTI) measurements with template information is given. By decoupling the motion and structure information and employing rigid-body constraints, we have developed the kinematic and measurement equations of the problem. Since the kinematic system is unobservable using only a single scan of HRR and GMTI measurements, we designed an architecture that runs the motion and structure filters in parallel using multi-scan measurements. Moreover, to improve the estimation accuracy in large-noise and/or false-alarm environments, an interacting multi-template joint tracking (IMTJT) algorithm is proposed. Simulation results have shown that the averaged root mean square errors for both motion and structure state vectors are significantly reduced by using the template information.

  18. Dense and accurate motion and strain estimation in high resolution speckle images using an image-adaptive approach

    NASA Astrophysics Data System (ADS)

    Cofaru, Corneliu; Philips, Wilfried; Van Paepegem, Wim

    2011-09-01

    Digital image processing methods represent a viable and well-acknowledged alternative to strain gauges and interferometric techniques for determining full-field displacements and strains in materials under stress. This paper presents an image-adaptive technique for dense motion and strain estimation using high-resolution speckle images that show the analyzed material in its original and deformed states. The algorithm starts by dividing the speckle image showing the original state into irregular cells, taking into consideration both the spatial and gradient information present in the image. Subsequently, the Newton-Raphson digital image correlation technique is applied to calculate the corresponding motion for each cell. Adaptive spatial regularization in the form of the Geman-McClure robust spatial estimator is employed to increase the spatial consistency of the motion components of a cell with respect to the components of neighbouring cells. To obtain the final strain information, local least-squares fitting using a linear displacement model is performed on the horizontal and vertical displacement fields. To evaluate the presented image partitioning and strain estimation techniques, two numerical and two real experiments are employed. The numerical experiments simulate the deformation of a specimen with constant strain across the surface as well as small rigid-body rotations, while the real experiments consist of specimens that undergo uniaxial stress. The results indicate very good accuracy of the recovered strains, as well as better rotation insensitivity compared to classical techniques.
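    The Geman-McClure estimator mentioned above replaces the quadratic penalty on neighbour-cell motion differences with a saturating one, so outlying differences cannot dominate the regularization. A minimal sketch of the penalty function:

```python
def geman_mcclure(residual: float, sigma: float = 1.0) -> float:
    """Geman-McClure robust penalty rho(r) = r^2 / (r^2 + sigma^2):
    approximately quadratic for small residuals, saturating toward 1
    for large ones, so outliers are down-weighted."""
    r2 = residual * residual
    return r2 / (r2 + sigma * sigma)
```

    Summing this penalty over differences between a cell's motion and its neighbours' motions, instead of summing squared differences, is what makes the spatial regularization robust.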

  19. Accurate estimate of the critical exponent nu for self-avoiding walks via a fast implementation of the pivot algorithm.

    PubMed

    Clisby, Nathan

    2010-02-01

    We introduce a fast implementation of the pivot algorithm for self-avoiding walks, which we use to obtain large samples of walks on the cubic lattice of up to 33×10^6 steps. Consequently the critical exponent nu for three-dimensional self-avoiding walks is determined to great accuracy; the final estimate is nu = 0.587597(7). The method can be adapted to other models of polymers with short-range interactions, on the lattice or in the continuum. PMID:20366773
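    For orientation, a naive pivot algorithm (not the paper's fast implementation, which uses specialized data structures) can be sketched in a few lines for 2D walks: each move applies a random lattice symmetry to the tail of the walk and accepts it only if the result is self-avoiding:

```python
import random

# The seven nontrivial lattice symmetries of Z^2 (rotations and
# reflections), each encoded as a 2x2 integer matrix (a, b, c, d).
SYMMETRIES = [(0, -1, 1, 0), (-1, 0, 0, -1), (0, 1, -1, 0),
              (1, 0, 0, -1), (-1, 0, 0, 1), (0, 1, 1, 0), (0, -1, -1, 0)]

def pivot_step(walk):
    """One pivot move: pick a pivot site, apply a random symmetry to the
    tail of the walk about that site, and accept the move only if the
    transformed tail does not intersect the head (self-avoidance)."""
    i = random.randrange(1, len(walk) - 1)
    a, b, c, d = random.choice(SYMMETRIES)
    px, py = walk[i]
    new_tail = [(px + a * (x - px) + b * (y - py),
                 py + c * (x - px) + d * (y - py)) for x, y in walk[i + 1:]]
    head = walk[:i + 1]
    if set(head).isdisjoint(new_tail):
        return head + new_tail
    return walk

# Start from a straight rod (trivially self-avoiding) and pivot repeatedly.
random.seed(1)
walk = [(x, 0) for x in range(30)]
for _ in range(200):
    walk = pivot_step(walk)
```

    This naive intersection test costs O(N) per move; the paper's contribution is precisely a data structure that makes the test far cheaper, enabling walks of tens of millions of steps.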

  20. Estimating bioerosion rate on fossil corals: a quantitative approach from Oligocene reefs (NW Italy)

    NASA Astrophysics Data System (ADS)

    Silvestri, Giulia

    2010-05-01

    Bioerosion of coral reefs, especially when related to the activity of macroborers, is considered to be one of the major processes influencing framework development in present-day reefs. Macroboring communities affecting both living and dead corals are widely distributed also in the fossil record and their role is supposed to be analogously important in determining flourishing vs demise of coral bioconstructions. Nevertheless, many aspects concerning environmental factors controlling the incidence of bioerosion, shifting in composition of macroboring communities and estimation of bioerosion rate in different contexts are still poorly documented and understood. This study presents an attempt to quantify bioerosion rate on reef limestones characteristic of some Oligocene outcrops of the Tertiary Piedmont Basin (NW Italy) and deposited under terrigenous sedimentation within prodelta and delta fan systems. Branching coral rubble-dominated facies have been recognized as prevailing in this context. Depositional patterns, textures, and the generally low incidence of taphonomic features, such as fragmentation and abrasion, suggest relatively quiet waters where coral remains were deposited almost in situ. Thus taphonomic signatures occurring on corals can be reliably used to reconstruct environmental parameters affecting these particular branching coral assemblages during their life and to compare them with those typical of classical clear-water reefs. Bioerosion is sparsely distributed within coral facies and consists of a limited suite of traces, mostly referred to clionid sponges and polychaete and sipunculid worms. The incidence of boring bivalves seems to be generally lower. Together with semi-quantitative analysis of bioerosion rate along vertical logs and horizontal levels, two quantitative methods have been assessed and compared. These consist in the elaboration of high resolution scanned thin sections through software for image analysis (Photoshop CS3) and point

  1. How accurate and precise are limited sampling strategies in estimating exposure to mycophenolic acid in people with autoimmune disease?

    PubMed

    Abd Rahman, Azrin N; Tett, Susan E; Staatz, Christine E

    2014-03-01

    Mycophenolic acid (MPA) is a potent immunosuppressant agent, which is increasingly being used in the treatment of patients with various autoimmune diseases. Dosing to achieve a specific target MPA area under the concentration-time curve from 0 to 12 h post-dose (AUC12) is likely to lead to better treatment outcomes in patients with autoimmune disease than a standard fixed-dose strategy. This review summarizes the available published data around concentration monitoring strategies for MPA in patients with autoimmune disease and examines the accuracy and precision of methods reported to date using limited concentration-time points to estimate MPA AUC12. A total of 13 studies were identified that assessed the correlation between single time points and MPA AUC12 and/or examined the predictive performance of limited sampling strategies in estimating MPA AUC12. The majority of studies investigated mycophenolate mofetil (MMF) rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation of MPA. Correlations between MPA trough concentrations and MPA AUC12 estimated by full concentration-time profiling ranged from 0.13 to 0.94 across ten studies, with the highest associations (r^2 = 0.90-0.94) observed in lupus nephritis patients. Correlations were generally higher in autoimmune disease patients compared with renal allograft recipients and higher after MMF compared with EC-MPS intake. Four studies investigated use of a limited sampling strategy to predict MPA AUC12 determined by full concentration-time profiling. Three studies used a limited sampling strategy consisting of a maximum combination of three sampling time points with the latest sample drawn 3-6 h after MMF intake, whereas the remaining study tested all combinations of sampling times. MPA AUC12 was best predicted when three samples were taken at pre-dose and at 1 and 3 h post-dose with a mean bias and imprecision of 0.8 and 22.6 % for multiple linear regression analysis and of -5.5 and 23.0 % for
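    A limited sampling strategy of the kind evaluated here is just a multiple linear regression on a few concentrations. The coefficients below are placeholders, not any published MPA model; bias and imprecision are computed as the mean (absolute) percentage prediction error, the metrics the review reports:

```python
# Hypothetical limited-sampling coefficients:
# AUC12 ~ b0 + b1*C0 + b2*C1 + b3*C3, with concentrations sampled at
# pre-dose, 1 h and 3 h post-dose. These numbers are illustrative only.
b0, b1, b2, b3 = 8.0, 1.2, 0.9, 2.5

def predict_auc12(c0: float, c1: float, c3: float) -> float:
    return b0 + b1 * c0 + b2 * c1 + b3 * c3

def bias_and_imprecision(predicted, observed):
    """Mean percentage prediction error (bias) and mean absolute
    percentage prediction error (imprecision)."""
    pe = [(p - o) / o * 100.0 for p, o in zip(predicted, observed)]
    return sum(pe) / len(pe), sum(abs(e) for e in pe) / len(pe)
```

    Comparing these two statistics against full concentration-time profiling is how the reviewed studies judged whether three samples suffice.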

  2. Health risks in wastewater irrigation: comparing estimates from quantitative microbial risk analyses and epidemiological studies.

    PubMed

    Mara, D D; Sleigh, P A; Blumenthal, U J; Carr, R M

    2007-03-01

    The combination of standard quantitative microbial risk analysis (QMRA) techniques and 10,000-trial Monte Carlo risk simulations was used to estimate the human health risks associated with the use of wastewater for unrestricted and restricted crop irrigation. A risk of rotavirus infection of 10^-2 per person per year (pppy) was used as the reference level of acceptable risk. Using the model scenario of involuntary soil ingestion for restricted irrigation, the risk of rotavirus infection is approximately 10^-2 pppy when the wastewater contains ≤10^6 Escherichia coli per 100 ml and when local agricultural practices are highly mechanised. For labour-intensive agriculture the risk of rotavirus infection is approximately 10^-2 pppy when the wastewater contains ≤10^5 E. coli per 100 ml; however, the wastewater quality should be ≤10^4 E. coli per 100 ml when children under 15 are exposed. With the model scenario of lettuce consumption for unrestricted irrigation, the use of wastewaters containing ≤10^4 E. coli per 100 ml results in a rotavirus infection risk of approximately 10^-2 pppy; however, again based on epidemiological evidence from Mexico, the current WHO guideline level of ≤1,000 E. coli per 100 ml should be retained for root crops eaten raw. PMID:17402278

  3. Estimation of the patient monitor alarm rate for a quantitative analysis of new alarm settings.

    PubMed

    de Waele, Stijn; Nielsen, Larry; Frassica, Joseph

    2014-01-01

    In many critical care units, default patient monitor alarm settings are not fine-tuned to the vital signs of the patient population. As a consequence there are many alarms. A large fraction of the alarms are not clinically actionable, thus contributing to alarm fatigue. Recent attention to this phenomenon has resulted in attempts in many institutions to decrease the overall alarm load of clinicians by altering the trigger thresholds for monitored parameters. Typically, new alarm settings are defined based on clinical knowledge and patient population norms and tried empirically on new patients without quantitative knowledge about the potential impact of these new settings. We introduce alarm regeneration as a method to estimate the alarm rate of new alarm settings using recorded patient monitor data. This method enables evaluation of several alarm setting scenarios prior to using these settings in the clinical setting. An expression for the alarm rate variance is derived for the calculation of statistical confidence intervals on the results. PMID:25571296
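    Alarm regeneration amounts to replaying recorded vital-sign data against candidate limits and counting the alarms that would have fired. A minimal sketch with a persistence criterion; the heart-rate trace and the limit values are illustrative, not from the paper:

```python
def count_alarms(samples, low, high, min_persist=3):
    """Replay recorded samples against candidate alarm limits; an alarm
    episode fires when the signal stays outside [low, high] for at least
    min_persist consecutive samples."""
    alarms, run, in_alarm = 0, 0, False
    for v in samples:
        if v < low or v > high:
            run += 1
            if run >= min_persist and not in_alarm:
                alarms += 1
                in_alarm = True
        else:
            run, in_alarm = 0, False
    return alarms

# Recorded heart-rate trace (illustrative):
hr = [72, 75, 118, 122, 125, 80, 78, 45, 44, 43, 42, 70, 71]
current = count_alarms(hr, low=50, high=120)    # current default limits
proposed = count_alarms(hr, low=40, high=130)   # candidate new limits
```

    Running this over a large recorded dataset gives the estimated alarm rate for each candidate setting; the paper additionally derives a variance expression so the estimates carry confidence intervals.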

  4. Quantitative Estimation of the Climatic Effects of Carbon Transferred by International Trade

    PubMed Central

    Wei, Ting; Dong, Wenjie; Moore, John; Yan, Qing; Song, Yi; Yang, Zhiyong; Yuan, Wenping; Chou, Jieming; Cui, Xuefeng; Yan, Xiaodong; Wei, Zhigang; Guo, Yan; Yang, Shili; Tian, Di; Lin, Pengfei; Yang, Song; Wen, Zhiping; Lin, Hui; Chen, Min; Feng, Guolin; Jiang, Yundi; Zhu, Xian; Chen, Juan; Wei, Xin; Shi, Wen; Zhang, Zhiguo; Dong, Juan; Li, Yexin; Chen, Deliang

    2016-01-01

    Carbon transfer via international trade affects the spatial pattern of global carbon emissions by redistributing emissions related to production of goods and services. It has potential impacts on attribution of the responsibility of various countries for climate change and formulation of carbon-reduction policies. However, the effect of carbon transfer on climate change has not been quantified. Here, we present a quantitative estimate of climatic impacts of carbon transfer based on a simple CO2 Impulse Response Function and three Earth System Models. The results suggest that carbon transfer leads to a migration of CO2 by 0.1–3.9 ppm or 3–9% of the rise in the global atmospheric concentrations from developed countries to developing countries during 1990–2005 and potentially reduces the effectiveness of the Kyoto Protocol by up to 5.3%. However, the induced atmospheric CO2 concentration and climate changes (e.g., in temperature, ocean heat content, and sea-ice) are very small and lie within observed interannual variability. Given continuous growth of transferred carbon emissions and their proportion in global total carbon emissions, the climatic effect of traded carbon is likely to become more significant in the future, highlighting the need to consider carbon transfer in future climate negotiations. PMID:27329411

  5. Quantitative Simulations of MST Visual Receptive Field Properties Using a Template Model of Heading Estimation

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, J. A.

    1997-01-01

    We previously developed a template model of primate visual self-motion processing that proposes a specific set of projections from MT-like local motion sensors onto output units to estimate heading and relative depth from optic flow. At the time, we showed that the model output units have emergent properties similar to those of MSTd neurons, although there was little physiological evidence to test the model more directly. We have now systematically examined the properties of the model using stimulus paradigms used by others in recent single-unit studies of MST: 1) 2D bell-shaped heading tuning. Most MSTd neurons and model output units show bell-shaped heading tuning. Furthermore, we found that most model output units and the finely sampled example neuron in the Duffy-Wurtz study are well fit by a 2D Gaussian (sigma approx. 35 deg, r approx. 0.9). The bandwidth of model and real units can explain why Lappe et al. found apparent sigmoidal tuning using a restricted range of stimuli (+/-40 deg). 2) Spiral tuning and invariance. Graziano et al. found that many MST neurons appear tuned to a specific combination of rotation and expansion (spiral flow) and that this tuning changes little for approx. 10 deg shifts in stimulus placement. Simulations of model output units under the same conditions quantitatively replicate this result. We conclude that a template architecture may underlie MT inputs to MST.
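    The 2D Gaussian heading-tuning fit reported above is easy to reproduce in form. Over a restricted +/-40 deg stimulus range, a Gaussian this broad falls off nearly monotonically, which is the abstract's explanation for the apparent sigmoidal tuning (the preferred heading below is hypothetical):

```python
import math

def heading_tuning(az, el, pref_az=0.0, pref_el=0.0, sigma=35.0):
    """2D Gaussian heading tuning over azimuth/elevation in degrees;
    sigma ~ 35 deg is the bandwidth reported for model output units
    and the example neuron."""
    d2 = (az - pref_az) ** 2 + (el - pref_el) ** 2
    return math.exp(-d2 / (2.0 * sigma ** 2))

# A unit preferring 40 deg azimuth, probed only over -40..+40 deg,
# yields a monotonically rising curve that can masquerade as sigmoidal:
samples = [heading_tuning(az, 0.0, pref_az=40.0) for az in range(-40, 41, 10)]
```

    Probing the same unit over the full sphere would reveal the bell shape directly.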

  6. Quantitative Estimation of the Climatic Effects of Carbon Transferred by International Trade

    NASA Astrophysics Data System (ADS)

    Wei, Ting; Dong, Wenjie; Moore, John; Yan, Qing; Song, Yi; Yang, Zhiyong; Yuan, Wenping; Chou, Jieming; Cui, Xuefeng; Yan, Xiaodong; Wei, Zhigang; Guo, Yan; Yang, Shili; Tian, Di; Lin, Pengfei; Yang, Song; Wen, Zhiping; Lin, Hui; Chen, Min; Feng, Guolin; Jiang, Yundi; Zhu, Xian; Chen, Juan; Wei, Xin; Shi, Wen; Zhang, Zhiguo; Dong, Juan; Li, Yexin; Chen, Deliang

    2016-06-01

    Carbon transfer via international trade affects the spatial pattern of global carbon emissions by redistributing emissions related to production of goods and services. It has potential impacts on attribution of the responsibility of various countries for climate change and formulation of carbon-reduction policies. However, the effect of carbon transfer on climate change has not been quantified. Here, we present a quantitative estimate of climatic impacts of carbon transfer based on a simple CO2 Impulse Response Function and three Earth System Models. The results suggest that carbon transfer leads to a migration of CO2 by 0.1–3.9 ppm or 3–9% of the rise in the global atmospheric concentrations from developed countries to developing countries during 1990–2005 and potentially reduces the effectiveness of the Kyoto Protocol by up to 5.3%. However, the induced atmospheric CO2 concentration and climate changes (e.g., in temperature, ocean heat content, and sea-ice) are very small and lie within observed interannual variability. Given continuous growth of transferred carbon emissions and their proportion in global total carbon emissions, the climatic effect of traded carbon is likely to become more significant in the future, highlighting the need to consider carbon transfer in future climate negotiations.

  7. Quantitative Estimation of the Climatic Effects of Carbon Transferred by International Trade.

    PubMed

    Wei, Ting; Dong, Wenjie; Moore, John; Yan, Qing; Song, Yi; Yang, Zhiyong; Yuan, Wenping; Chou, Jieming; Cui, Xuefeng; Yan, Xiaodong; Wei, Zhigang; Guo, Yan; Yang, Shili; Tian, Di; Lin, Pengfei; Yang, Song; Wen, Zhiping; Lin, Hui; Chen, Min; Feng, Guolin; Jiang, Yundi; Zhu, Xian; Chen, Juan; Wei, Xin; Shi, Wen; Zhang, Zhiguo; Dong, Juan; Li, Yexin; Chen, Deliang

    2016-01-01

    Carbon transfer via international trade affects the spatial pattern of global carbon emissions by redistributing emissions related to production of goods and services. It has potential impacts on attribution of the responsibility of various countries for climate change and formulation of carbon-reduction policies. However, the effect of carbon transfer on climate change has not been quantified. Here, we present a quantitative estimate of climatic impacts of carbon transfer based on a simple CO2 Impulse Response Function and three Earth System Models. The results suggest that carbon transfer leads to a migration of CO2 by 0.1-3.9 ppm or 3-9% of the rise in the global atmospheric concentrations from developed countries to developing countries during 1990-2005 and potentially reduces the effectiveness of the Kyoto Protocol by up to 5.3%. However, the induced atmospheric CO2 concentration and climate changes (e.g., in temperature, ocean heat content, and sea-ice) are very small and lie within observed interannual variability. Given continuous growth of transferred carbon emissions and their proportion in global total carbon emissions, the climatic effect of traded carbon is likely to become more significant in the future, highlighting the need to consider carbon transfer in future climate negotiations. PMID:27329411

  8. Quantitation of proteinuria in nephrotic syndrome by spot urine protein creatinine ratio estimation in children.

    PubMed

    Biswas, A; Kumar, R; Chaterjee, A; Ghosh, J K; Basu, K

    2009-01-01

    In nephrotic syndrome the amount of protein excretion is a reflection of the activity of the disease. Quantitative measurement of proteinuria by a 24-hour urine collection has been the accepted method of evaluation. Recent studies have shown that the protein/creatinine ratio calculated from a spot urine sample correlates well with the 24-hour urine protein (24-HUP) excretion. A study was conducted to compare the accuracy of the spot urinary protein/creatinine ratio (P/C ratio) and the urinary dipstick against the 24-hour urine protein. Fifty-two samples from 26 patients with nephrotic syndrome were collected. This included a 24-hour urine sample followed by the next voided random spot sample. The protein/creatinine ratio was calculated and a dipstick test was performed on the spot sample. These were compared with the 24-hour urine protein excretion. The correlation between the three measures was statistically highly significant (p<0.001) for all levels of proteinuria. The normal value of the protein/creatinine ratio in Indian children was also estimated, from 50 children without any renal disease admitted to the ward, and was calculated to be 0.053 (SE of mean +/- 0.003). PMID:19182753

  9. Accurate spike estimation from noisy calcium signals for ultrafast three-dimensional imaging of large neuronal populations in vivo

    PubMed Central

    Deneux, Thomas; Kaszas, Attila; Szalay, Gergely; Katona, Gergely; Lakner, Tamás; Grinvald, Amiram; Rózsa, Balázs; Vanzetta, Ivo

    2016-01-01

    Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where large noises often contaminate the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. It relies on a physiological model including baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can be either provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding obtained from systematic data investigation (noise level, spiking rate and so on) that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo. PMID:27432255

  10. Accurate spike estimation from noisy calcium signals for ultrafast three-dimensional imaging of large neuronal populations in vivo.

    PubMed

    Deneux, Thomas; Kaszas, Attila; Szalay, Gergely; Katona, Gergely; Lakner, Tamás; Grinvald, Amiram; Rózsa, Balázs; Vanzetta, Ivo

    2016-01-01

    Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where large noises often contaminate the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. It relies on a physiological model including baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can be either provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding obtained from systematic data investigation (noise level, spiking rate and so on) that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo. PMID:27432255

  11. A quantitative approach for estimating exposure to pesticides in the Agricultural Health Study.

    PubMed

    Dosemeci, Mustafa; Alavanja, Michael C R; Rowland, Andrew S; Mage, David; Zahm, Shelia Hoar; Rothman, Nathaniel; Lubin, Jay H; Hoppin, Jane A; Sandler, Dale P; Blair, Aaron

    2002-03-01

    We developed a quantitative method to estimate long-term chemical-specific pesticide exposures in a large prospective cohort study of more than 58000 pesticide applicators in North Carolina and Iowa. An enrollment questionnaire was administered to applicators to collect basic time- and intensity-related information on pesticide exposure such as mixing condition, duration and frequency of application, application methods and personal protective equipment used. In addition, a detailed take-home questionnaire was administered to collect further intensity-related exposure information such as maintenance or repair of mixing and application equipment, work practices and personal hygiene. More than 40% of the enrolled applicators responded to this detailed take-home questionnaire. Two algorithms were developed to identify applicators' exposure scenarios using information from the enrollment and take-home questionnaires separately in the calculation of subject-specific intensity of exposure score to individual pesticides. The 'general algorithm' used four basic variables (i.e. mixing status, application method, equipment repair status and personal protective equipment use) from the enrollment questionnaire and measurement data from the published pesticide exposure literature to calculate estimated intensity of exposure to individual pesticides for each applicator. The 'detailed' algorithm was based on variables in the general algorithm plus additional exposure information from the take-home questionnaire, including types of mixing system used (i.e. enclosed or open), having a tractor with enclosed cab and/or charcoal filter, frequency of washing equipment after application, frequency of replacing old gloves, personal hygiene and changing clothes after a spill. Weighting factors applied in both algorithms were estimated using measurement data from the published pesticide exposure literature and professional judgment. For each study subject, chemical-specific lifetime
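    The structure described for the 'general algorithm' — additive mixing, application-method and repair terms scaled by a protective-equipment factor — can be sketched as follows. The category labels and numeric weights are illustrative placeholders, not the published values.

```python
# Questionnaire-based exposure intensity score in the spirit of the
# study's 'general algorithm'. All weights below are hypothetical.

MIXING = {"never": 0, "<50% of time": 3, ">=50% of time": 9}
APPLICATION = {"in-furrow": 1, "boom tractor": 3, "hand spray": 9}
REPAIR = {"no repair": 0, "repairs equipment": 2}
PPE = {"none": 1.0, "gloves": 0.6, "gloves + respirator": 0.3}

def intensity_score(mixing, application, repair, ppe):
    """Subject-specific intensity-of-exposure score for one pesticide."""
    return (MIXING[mixing] + APPLICATION[application] + REPAIR[repair]) * PPE[ppe]

score = intensity_score(">=50% of time", "hand spray", "repairs equipment", "gloves")
print(round(score, 2))  # (9 + 9 + 2) * 0.6 = 12.0
```

    In the study, such an intensity score is further multiplied by duration and frequency of use to obtain a lifetime chemical-specific exposure estimate.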

  12. Application of (13)C ramp CPMAS NMR with phase-adjusted spinning sidebands (PASS) for the quantitative estimation of carbon functional groups in natural organic matter.

    PubMed

    Ikeya, Kosuke; Watanabe, Akira

    2016-01-01

    The composition of carbon (C) functional groups in natural organic matter (NOM), such as dissolved organic matter, soil organic matter, and humic substances, is frequently estimated using solid-state (13)C NMR techniques. A problem associated with quantitative analysis using general cross polarization/magic angle spinning (CPMAS) spectra is the appearance of spinning side bands (SSBs) split from the original center peaks of sp(2)-hybridized C species (i.e., aromatic and carbonyl C). Ramp CP/phase-adjusted side band suppressing (PASS) is a pulse sequence that integrates SSBs separately and quantitatively recovers them into their inherent center peaks. In the present study, the applicability of ramp CP/PASS to NOM analysis was compared with direct polarization (DPMAS), another quantitative method but one that requires a long operation time, and/or a ramp CP/total suppression side band (ramp CP/TOSS) technique, a popular but non-quantitative method for deleting SSBs. The test materials were six soil humic acid samples with various known degrees of aromaticity and two fulvic acids. There were no significant differences in the relative abundance of alkyl C, O-alkyl C, and aromatic C between the ramp CP/PASS and DPMAS methods, while the signal intensities corresponding to aromatic C in the ramp CP/TOSS spectra were consistently less than the values obtained in the ramp CP/PASS spectra. These results indicate that ramp CP/PASS can be used to accurately estimate the C composition of NOM samples. PMID:26522329

  13. Estimation of glacial outburst floods in Himalayan watersheds by means of quantitative modelling

    NASA Astrophysics Data System (ADS)

    Brauner, M.; Agner, P.; Vogl, A.; Leber, D.; Haeusler, H.; Wangda, D.

    2003-04-01

    In the Himalayas, intense glacier retreat combined with rapidly developing settlement activity in the downstream valleys results in dramatically increasing glacier lake outburst flood risk. Because settlement concentrates on broad, productive valley areas typically 10 to 70 kilometres downstream of the flood source, hazard awareness and preparedness are limited. Application of a quantitative assessment methodology is therefore crucial for delineating flood-prone areas and developing hazard preparedness concepts by means of scenario modelling. For dam breach back-calculation the 1D simulation tool BREACH is utilised; initiation by surge waves and the broad sediment size spectrum of tills are generally difficult to implement, so a tool with a long application history was chosen. Flood propagation is simulated with the 2D hydraulic simulation model FLO2D, which enables routing of both water floods and sediment loads. In three Himalayan watersheds (Pho Chhu valley, Bhutan; Tam Pokhari valley, Nepal), recent glacier lake outbursts (each with more than 20 million m3 volume) and the consecutive floods are simulated and calibrated by means of multi-temporal morphological information, high-water marks, geomorphological interpretation and eyewitness consultation. These calculations show that for these events the dam breach process was slow (0.75 to 3 hours), with low flood hydrographs. The flood propagation was governed by an alternating sequence of low-gradient depositional channel sections and steep channel sections with intense lateral sediment mobilisation and temporary blockage, which created a positive feedback and prolonged the flood. By means of sensitivity analysis, the influence of morphological changes during the events and the importance of the dam breach process to the whole event are estimated. It can be shown that the accuracy of the high-water limit is governed by the following processes: sediment mobilisation, breaching process, water volume, morphological changes

  14. Shorter sampling periods and accurate estimates of milk volume and components are possible for pasture based dairy herds milked with automated milking systems.

    PubMed

    Kamphuis, Claudia; Burke, Jennie K; Taukiri, Sarah; Petch, Susan-Fay; Turner, Sally-Anne

    2016-08-01

    Dairy cows grazing pasture and milked using automated milking systems (AMS) have lower milking frequencies than indoor-fed cows milked using AMS. Therefore, milk recording intervals used for herd testing indoor-fed cows may not be suitable for cows on pasture-based farms. We hypothesised that accurate standardised 24 h estimates could be determined for AMS herds with milk recording intervals shorter than the Gold Standard (48 h), but that the optimum milk recording interval would depend on the herd average for milking frequency. The Gold Standard protocol was applied on five commercial dairy farms with AMS between December 2011 and February 2013. From 12 milk recording test periods, involving 2211 cow-test days and 8049 cow milkings, standardised 24 h estimates for milk volume and milk composition were calculated for the Gold Standard protocol and compared with those collected during nine alternative sampling scenarios, including six shorter sampling periods and three in which a fixed number of milk samples per cow were collected. The results indicate that a 48 h milk recording protocol is unnecessarily long for collecting accurate estimates during milk recording on pasture-based AMS farms. Collection of only two milk samples per cow was optimal in terms of high concordance correlation coefficients for milk volume and components and a low proportion of missed cow-test days. Further research is required to determine the effects of diurnal variation in milk composition on standardised 24 h estimates for milk volume and components before a protocol based on a fixed number of samples could be considered. Based on the results of this study, New Zealand has adopted a split protocol for herd testing based on the average milking frequency for the herd (NZ Herd Test Standard 8100:2015). PMID:27600967
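    Agreement between the shortened protocols and the 48 h Gold Standard was judged by concordance correlation coefficients. A minimal pure-Python implementation of Lin's CCC, with invented paired milk-volume estimates, might look like this:

```python
# Lin's concordance correlation coefficient (CCC): penalizes both poor
# correlation and systematic offset between two sets of measurements.
# The paired 24 h milk-volume estimates below are invented.

def ccc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) / n
    sy = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

gold = [18.2, 22.5, 25.1, 19.8, 30.4]   # 48 h Gold Standard estimates (L/day)
short = [18.0, 22.9, 24.6, 20.1, 29.8]  # shorter-interval estimates (invented)
print(round(ccc(gold, short), 3))  # close to 1.0 -> strong agreement
```

    Unlike plain Pearson correlation, the CCC drops below 1 if one protocol systematically over- or under-estimates the other, which is the relevant failure mode when shortening a sampling period.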

  15. Revisiting borehole strain, typhoons, and slow earthquakes using quantitative estimates of precipitation-induced strain changes

    NASA Astrophysics Data System (ADS)

    Hsu, Ya-Ju; Chang, Yuan-Shu; Liu, Chi-Ching; Lee, Hsin-Ming; Linde, Alan T.; Sacks, Selwyn I.; Kitagawa, Genshio; Chen, Yue-Gau

    2015-06-01

    Taiwan experiences high deformation rates, particularly along its eastern margin, where a shortening rate of about 30 mm/yr is observed in the Longitudinal Valley and the Coastal Range. Four Sacks-Evertson borehole strainmeters have been installed in this area since 2003. Liu et al. (2009) proposed that a number of strain transient events, primarily coincident with low barometric pressure during passages of typhoons, were due to deep-triggered slow slip. Here we extend that investigation with a quantitative analysis of the strain responses to precipitation as well as barometric pressure and the Earth tides in order to isolate tectonic source effects. Estimates of the strain responses to barometric pressure and groundwater level changes for the different stations vary over the ranges -1 to -3 nanostrain/millibar(hPa) and -0.3 to -1.0 nanostrain/hPa, respectively, consistent with theoretical values derived using Hooke's law. Liu et al. (2009) noted that during some typhoons, including at least one with very heavy rainfall, the observed strain changes were consistent with only barometric forcing. By considering a more extensive data set, we now find that the strain response to rainfall is about -5.1 nanostrain/hPa. A larger strain response to rainfall compared to that to air pressure and water level may be associated with an additional strain from fluid pressure changes that take place due to infiltration of precipitation. Using a state-space model, we remove the strain response to rainfall, in addition to those due to air pressure changes and the Earth tides, and investigate whether the corrected strain changes are related to environmental disturbances or motions of tectonic origin. The majority of strain changes attributed to slow earthquakes seem instead to be associated with environmental factors. However, some events show remaining strain changes after all corrections.
These events include strain polarity changes during passages of typhoons (a characteristic that is
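    The environmental correction the authors describe can be illustrated in miniature: fit the strain response to a forcing variable on a quiet window, then subtract the fitted response from the whole record. The sketch below uses ordinary least squares on synthetic numbers; the paper uses a state-space model and also removes Earth tides and rainfall.

```python
# Miniature environmental correction: estimate the strain response to
# barometric pressure by least squares on a quiet pre-event window,
# then subtract it to expose a hidden tectonic step. All numbers are
# synthetic, chosen so the answer is known.

def fit_response(strain, pressure):
    """Least-squares slope (nanostrain/hPa) and intercept of strain vs. pressure."""
    n = len(strain)
    mp, ms = sum(pressure) / n, sum(strain) / n
    slope = (sum((p - mp) * (s - ms) for p, s in zip(pressure, strain))
             / sum((p - mp) ** 2 for p in pressure))
    return slope, ms - slope * mp

pressure = [0, -5, -10, -15, -10, -5, 0]   # hPa anomaly during a typhoon passage
tectonic = [0, 0, 0, 12, 12, 12, 12]       # hidden slow-slip step (nanostrain)
strain = [-2 * p + t for p, t in zip(pressure, tectonic)]  # -2 nanostrain/hPa response

slope, intercept = fit_response(strain[:3], pressure[:3])  # fit on the quiet window
corrected = [round(s - (slope * p + intercept), 6) for s, p in zip(strain, pressure)]
print(round(slope, 2), corrected)  # -2.0 [0.0, 0.0, 0.0, 12.0, 12.0, 12.0, 12.0]
```

    The point of the correction is visible in the output: once the pressure response is removed, only the step of tectonic origin remains.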

  16. [Quantitative estimation of connection of the heart rate rhythm with motor activity in rat fetuses].

    PubMed

    Vdovichenko, N D; Timofeeva, O P; Bursian, A V

    2014-01-01

    In rat fetuses at E17-20 with preserved placental circulation, the magnitude and character of the coupling between slow-wave oscillations of the heart rhythm and motor activity were assessed by mathematical analysis of 30-min recordings. In the PowerGraph 3.3.8 software, the signals were normalized and filtered in three frequency bands: D1, 0.02-0.2 Hz (5-50 s); D2, 0.0083-0.02 Hz (50 s-2 min); and D3, 0.0017-0.0083 Hz (2-10 min). The band-filtered EMG curves or piezograms were compared with periodograms of the heart rhythm variations in the corresponding bands. In the Origin 8.0 software, the degree of intersystemic interrelation in each frequency band was quantified by the Pearson correlation coefficient, the strength of the correlation, and the time shift of the maximum of the cross-correlation function. In band D1, regardless of age, the coupling between heart rhythm oscillations and motor activity was weak. In band D2 the coupling mostly fell in the zone of weak to moderate correlations, whereas in the multiminute band D3 it was more pronounced and the number of animals with significant correlations rose. In all age groups, fetal motor-activity bursts in the decasecond band were accompanied by short-lasting decelerations of the heart rhythm. In the minute band, positive coupling at E17 and E18 gave way to negative coupling at E19-20. The results are discussed in relation to age-related changes in the ratio of positive and negative heart rhythm oscillations depending on the character of motor activity. PMID:25486813
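    The band-wise coupling measure used here — the Pearson correlation at the best time shift of the cross-correlation function — can be sketched as follows. Band-pass filtering is omitted, and the two series are synthetic, with y lagging x by three samples.

```python
# Pearson correlation at each candidate time shift, and the lag where
# it peaks. Illustrative only: the study's band filtering is omitted
# and both series are synthetic.
import math

def lagged_r(x, y, k):
    """Pearson r between x[t] and y[t + k] over the overlapping samples."""
    if k >= 0:
        a, b = x[:len(x) - k], y[k:]
    else:
        a, b = x[-k:], y[:len(y) + k]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    den = (sum((u - ma) ** 2 for u in a) * sum((v - mb) ** 2 for v in b)) ** 0.5
    return num / den

x = [math.sin(0.3 * t) for t in range(200)]          # e.g. heart-rhythm band signal
y = [math.sin(0.3 * (t - 3)) for t in range(200)]    # motor activity, lagging by 3
best_lag = max(range(-10, 11), key=lambda k: lagged_r(x, y, k))
print(best_lag, round(lagged_r(x, y, best_lag), 3))  # 3 1.0
```

    A positive best lag here means the second signal follows the first, which is how a time shift of the cross-correlation maximum is read in the study.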

  17. Comparison of quantitative k-edge empirical estimators using an energy-resolved photon-counting detector

    NASA Astrophysics Data System (ADS)

    Zimmerman, Kevin C.; Gilat Schmidt, Taly

    2016-03-01

    Using an energy-resolving photon counting detector, the amount of k-edge material in the x-ray path can be estimated using a process known as material decomposition. However, non-ideal effects within the detector make it difficult to perform this decomposition accurately. This work evaluated the k-edge material decomposition accuracy of two empirical estimators. A neural network estimator and a linearized maximum-likelihood estimator with error look-up tables (the A-table method) were evaluated through simulations and experiments. Each estimator was trained on system-specific calibration data rather than on explicit models of non-ideal detector effects or the x-ray source spectrum. Projections through a step-wedge calibration phantom consisting of different path lengths through PMMA, aluminum, and a k-edge material were used to train the estimators. The estimators were tested by decomposing data acquired through different path lengths of the basis materials. The two estimators performed similarly in the chest phantom simulations with gadolinium, estimating four of the five gadolinium densities with less than 2 mg/mL bias. The neural network estimates demonstrated lower bias but higher variance than the A-table estimates in the iodine contrast agent simulations. The neural network also had an experimental variance lower than the Cramér-Rao lower bound (CRLB), indicating that it is a biased estimator. In the experimental study, the k-edge material contribution was estimated with less than 14% bias by the neural network estimator and less than 41% bias by the A-table method.

  18. Assimilation of radar quantitative precipitation estimations in the Canadian Precipitation Analysis (CaPA)

    NASA Astrophysics Data System (ADS)

    Fortin, Vincent; Roy, Guy; Donaldson, Norman; Mahidjiba, Ahmed

    2015-12-01

    The Canadian Precipitation Analysis (CaPA) is a data analysis system used operationally at the Canadian Meteorological Center (CMC) since April 2011 to produce gridded 6-h and 24-h precipitation accumulations in near real-time on a regular grid covering all of North America. The current resolution of the product is 10-km. Due to the low density of the observational network in most of Canada, the system relies on a background field provided by the Regional Deterministic Prediction System (RDPS) of Environment Canada, which is a short-term weather forecasting system for North America. For this reason, the North American configuration of CaPA is known as the Regional Deterministic Precipitation Analysis (RDPA). Early in the development of the CaPA system, weather radar reflectivity was identified as a very promising additional data source for the precipitation analysis, but necessary quality control procedures and bias-correction algorithms were lacking for the radar data. After three years of development and testing, a new version of CaPA-RDPA system was implemented in November 2014 at CMC. This version is able to assimilate radar quantitative precipitation estimates (QPEs) from all 31 operational Canadian weather radars. The radar QPE is used as an observation source and not as a background field, and is subject to a strict quality control procedure, like any other observation source. The November 2014 upgrade to CaPA-RDPA was implemented at the same time as an upgrade to the RDPS system, which brought minor changes to the skill and bias of CaPA-RDPA. This paper uses the frequency bias indicator (FBI), the equitable threat score (ETS) and the departure from the partial mean (DPM) in order to assess the improvements to CaPA-RDPA brought by the assimilation of radar QPE. 
Verification focuses on the 6-h accumulations, and is done against a network of 65 synoptic stations (approximately two stations per radar) that were withheld from the station data assimilated by Ca
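    The two verification scores named above are standard functions of a 2×2 forecast/observation contingency table (hits a, false alarms b, misses c, correct negatives d). A sketch with invented counts:

```python
# Frequency bias (FBI) and equitable threat score (ETS) from a 2x2
# contingency table of forecast vs. observed precipitation exceedance:
# a = hits, b = false alarms, c = misses, d = correct negatives.
# The counts below are invented for illustration.

def fbi(a, b, c, d):
    """Frequency bias: forecast event count over observed event count."""
    return (a + b) / (a + c)

def ets(a, b, c, d):
    """Equitable threat score: threat score corrected for chance hits."""
    n = a + b + c + d
    a_random = (a + b) * (a + c) / n   # hits expected by chance
    return (a - a_random) / (a + b + c - a_random)

a, b, c, d = 40, 10, 20, 130
print(round(fbi(a, b, c, d), 3))  # 0.833 -> slight underforecasting
print(round(ets(a, b, c, d), 3))  # 0.455
```

    An FBI of 1 means the analysis produces events as often as they are observed; ETS above 0 means it beats random chance, with 1 being a perfect score.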

  19. The ACCE method: an approach for obtaining quantitative or qualitative estimates of residual confounding that includes unmeasured confounding

    PubMed Central

    Smith, Eric G.

    2015-01-01

    Background: Nonrandomized studies typically cannot account for confounding from unmeasured factors. Method: A method is presented that exploits the recently identified phenomenon of "confounding amplification" to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors. Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure. Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results: Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method's requirements and assumptions are met. Previously published data are used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, it appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations: Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method's estimates (although bootstrapping is one plausible approach). Conclusions: To this author's knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is
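    In its simplest form, the arithmetic of the method divides the change in treatment-effect estimate between the nested models by the estimated confounding amplification. The sketch below uses hypothetical hazard ratios, works on the log scale, and takes the excess amplification (amplification − 1) as the divisor — one plausible reading, not the paper's exact recipe.

```python
# One plausible reading of the ACCE arithmetic: residual confounding is
# the change in log-scale effect estimate between the nested propensity
# models, divided by the excess confounding amplification. The hazard
# ratios and amplification factor are hypothetical, and the divisor
# (amplification - 1) is a simplifying assumption.
import math

def residual_confounding(log_effect_base, log_effect_amplified, amplification):
    return (log_effect_amplified - log_effect_base) / (amplification - 1.0)

hr_base = 1.30       # effect estimate from the base propensity-score model
hr_amplified = 1.45  # after deliberately adding a strong predictor of exposure
amplification = 1.5  # estimated amplification factor (hypothetical)

bias = residual_confounding(math.log(hr_base), math.log(hr_amplified), amplification)
ratio = math.exp(bias)   # residual confounding expressed on the hazard-ratio scale
print(round(ratio, 3))
```

    A ratio above 1 would suggest the base estimate is inflated by residual confounding in the direction of the treatment effect; the adjustment for the introduced variable's own association with outcome is omitted here.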

  20. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006.

    PubMed

    Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina

    2016-09-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76-0.92, P < .05). Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation

  1. NEXRAD quantitative precipitation estimates, data acquisition, and processing for the DuPage County, Illinois, streamflow-simulation modeling system

    USGS Publications Warehouse

    Ortel, Terry W.; Spies, Ryan R.

    2015-01-01

    Next-Generation Radar (NEXRAD) has become an integral component in the estimation of precipitation (Kitzmiller and others, 2013). The high spatial and temporal resolution of NEXRAD has revolutionized the ability to estimate precipitation across vast regions, which is especially beneficial in areas without a dense rain-gage network. With the improved precipitation estimates, hydrologic models can produce reliable streamflow forecasts for areas across the United States. NEXRAD data from the National Weather Service (NWS) has been an invaluable tool used by the U.S. Geological Survey (USGS) for numerous projects and studies; NEXRAD data processing techniques similar to those discussed in this Fact Sheet have been developed within the USGS, including the NWS Quantitative Precipitation Estimates archive developed by Blodgett (2013).

  2. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    NASA Astrophysics Data System (ADS)

    Rybynok, V. O.; Kyriacou, P. A.

    2007-10-01

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean that prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. Despite many attempts, there is to date no widely accepted and readily available non-invasive technique to measure blood glucose. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach intended to enable accurate, calibration-free estimation of glucose concentration in blood. The approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.

  3. Estimating the Potential Toxicity of Chemicals Associated with Hydraulic Fracturing Operations Using Quantitative Structure-Activity Relationship Modeling.

    PubMed

    Yost, Erin E; Stanek, John; DeWoskin, Robert S; Burgoon, Lyle D

    2016-07-19

    The United States Environmental Protection Agency (EPA) identified 1173 chemicals associated with hydraulic fracturing fluids, flowback, or produced water, of which 1026 (87%) lack chronic oral toxicity values for human health assessments. To facilitate the ranking and prioritization of chemicals that lack toxicity values, it may be useful to employ toxicity estimates from quantitative structure-activity relationship (QSAR) models. Here we describe an approach for applying the results of a QSAR model from the TOPKAT program suite, which provides estimates of the rat chronic oral lowest-observed-adverse-effect level (LOAEL). Of the 1173 chemicals, TOPKAT was able to generate LOAEL estimates for 515 (44%). To address the uncertainty associated with these estimates, we assigned qualitative confidence scores (high, medium, or low) to each TOPKAT LOAEL estimate, and found 481 to be high-confidence. For 48 chemicals that had both a high-confidence TOPKAT LOAEL estimate and a chronic oral reference dose from EPA's Integrated Risk Information System (IRIS) database, Spearman rank correlation identified 68% agreement between the two values (permutation p-value = 1 × 10(-11)). These results provide support for the use of TOPKAT LOAEL estimates in identifying and prioritizing potentially hazardous chemicals. High-confidence TOPKAT LOAEL estimates were available for 389 of the 1026 hydraulic fracturing-related chemicals that lack chronic oral reference values (RfVs) and oral slope factors (OSFs) from EPA-identified sources, including a subset of chemicals that are frequently used in hydraulic fracturing fluids. PMID:27172125
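    The reported comparison — Spearman rank correlation with a permutation p-value — can be reproduced in miniature on invented data (no tie handling; 2000 permutations):

```python
# Spearman rank correlation with a permutation p-value. The paired
# LOAEL / reference-dose values below are invented for illustration,
# and ties are not handled.
import random

def ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    m = (len(x) - 1) / 2.0
    num = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    den = sum((a - m) ** 2 for a in rx)   # rank variance; identical for rx and ry
    return num / den

random.seed(0)
topkat_loael = [0.5, 1.2, 3.4, 0.8, 2.2, 5.0, 0.3, 4.1]   # hypothetical values
iris_rfd     = [0.6, 1.0, 2.9, 1.1, 2.0, 4.2, 0.4, 3.8]
obs = spearman(topkat_loael, iris_rfd)
n_perm = 2000
hits = sum(spearman(topkat_loael, random.sample(iris_rfd, len(iris_rfd))) >= obs
           for _ in range(n_perm))
p = (hits + 1) / (n_perm + 1)
print(round(obs, 3), p < 0.05)  # 0.976 True
```

    The permutation p-value asks how often shuffled data match or exceed the observed rank correlation; the +1 in numerator and denominator keeps the estimate conservative.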

  4. Quantitative estimation of minimum offset for multichannel surface-wave survey with actively exciting source

    USGS Publications Warehouse

    Xu, Y.; Xia, J.; Miller, R.D.

    2006-01-01

    Multichannel analysis of surface waves is a developing method widely used in shallow subsurface investigations. The field procedures and related parameters are very important for successful applications. Among these parameters, the source-receiver offset range is seldom discussed in theory and is normally determined by empirical or semi-quantitative methods in current practice. This paper discusses the problem from a theoretical perspective. A formula for quantitatively evaluating a layered homogeneous elastic model was developed. The analytical results based on simple models and experimental data demonstrate that the formula is correct for surface-wave surveys in near-surface applications. © 2005 Elsevier B.V. All rights reserved.

  5. Quantitative Assessment of Protein Structural Models by Comparison of H/D Exchange MS Data with Exchange Behavior Accurately Predicted by DXCOREX

    NASA Astrophysics Data System (ADS)

    Liu, Tong; Pantazatos, Dennis; Li, Sheng; Hamuro, Yoshitomo; Hilser, Vincent J.; Woods, Virgil L.

    2012-01-01

    Peptide amide hydrogen/deuterium exchange mass spectrometry (DXMS) data are often used to qualitatively support models for protein structure. We have developed and validated a method (DXCOREX) by which exchange data can be used to quantitatively assess the accuracy of three-dimensional (3-D) models of protein structure. The method utilizes the COREX algorithm to predict a protein's amide hydrogen exchange rates by reference to a hypothesized structure, and these values are used to generate a virtual data set (deuteron incorporation per peptide) that can be quantitatively compared with the deuteration level of the peptide probes measured by hydrogen exchange experimentation. The accuracy of DXCOREX was established in studies performed with 13 proteins for which both high-resolution structures and experimental data were available. The DXCOREX-calculated and experimental data for each protein was highly correlated. We then employed correlation analysis of DXCOREX-calculated versus DXMS experimental data to assess the accuracy of a recently proposed structural model for the catalytic domain of a Ca2+-independent phospholipase A2. The model's calculated exchange behavior was highly correlated with the experimental exchange results available for the protein, supporting the accuracy of the proposed model. This method of analysis will substantially increase the precision with which experimental hydrogen exchange data can help decipher challenging questions regarding protein structure and dynamics.

  6. The Overall Impact of Testing on Medical Student Learning: Quantitative Estimation of Consequential Validity

    ERIC Educational Resources Information Center

    Kreiter, Clarence D.; Green, Joseph; Lenoch, Susan; Saiki, Takuya

    2013-01-01

    Given medical education's longstanding emphasis on assessment, it seems prudent to evaluate whether our current research and development focus on testing makes sense. Since any intervention within medical education must ultimately be evaluated based upon its impact on student learning, this report seeks to provide a quantitative accounting of…

  7. Validation and Estimation of Additive Genetic Variation Associated with DNA Tests for Quantitative Beef Cattle Traits

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The U.S. National Beef Cattle Evaluation Consortium (NBCEC) has been involved in the validation of commercial DNA tests for quantitative beef quality traits since their first appearance on the U.S. market in the early 2000s. The NBCEC Advisory Council initially requested that the NBCEC set up a syst...

  8. Differential Label-free Quantitative Proteomic Analysis of Shewanella oneidensis Cultured under Aerobic and Suboxic Conditions by Accurate Mass and Time Tag Approach

    SciTech Connect

    Fang, Ruihua; Elias, Dwayne A.; Monroe, Matthew E.; Shen, Yufeng; McIntosh, Martin; Wang, Pei; Goddard, Carrie D.; Callister, Stephen J.; Moore, Ronald J.; Gorby, Yuri A.; Adkins, Joshua N.; Fredrickson, Jim K.; Lipton, Mary S.; Smith, Richard D.

    2006-04-01

    We describe the application of liquid chromatography coupled to mass spectrometry (LC/MS), without the use of stable isotope labeling, for differential quantitative proteomics analysis of whole-cell lysates of Shewanella oneidensis MR-1 cultured under aerobic and sub-oxic conditions. Liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) was used to initially identify peptide sequences, and LC coupled to Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR) was used to confirm these identifications and to measure relative peptide abundances. In total, 2343 peptides covering 668 proteins were identified with high confidence and quantified. Among these proteins, a subset of 56 changed significantly according to statistical approaches such as SAM, while another subset of 56 annotated as performing housekeeping functions remained essentially unchanged in relative abundance. Numerous proteins involved in anaerobic energy metabolism exhibited up to a 10-fold increase in relative abundance when S. oneidensis was transitioned from aerobic to sub-oxic conditions.

  9. Methodologies for the quantitative estimation of toxicant dose to cigarette smokers using physical, chemical and bioanalytical data.

    PubMed

    St Charles, Frank Kelley; McAughey, John; Shepperd, Christopher J

    2013-06-01

    Methodologies have been developed, described and demonstrated that convert mouth exposure estimates of cigarette smoke constituents to dose by accounting for smoke spilled from the mouth prior to inhalation (mouth-spill (MS)) and the respiratory retention (RR) during the inhalation cycle. The methodologies are applicable to virtually any chemical compound in cigarette smoke that can be measured analytically and can be used in ambulatory population studies. Conversion of exposure to dose improves the relevance for risk assessment paradigms. Except for urinary nicotine plus metabolites, biomarkers generally do not provide quantitative exposure or dose estimates. In addition, many smoke constituents have no reliable biomarkers. We describe methods to estimate the RR of chemical compounds in smoke based on their vapor pressure (VP) and to estimate the MS for a given subject. Data from two clinical studies were used to demonstrate dose estimation for 13 compounds, of which only 3 have urinary biomarkers. Compounds with VP > 10⁻⁵ Pa generally have RRs of 88% or greater, which do not vary appreciably with inhalation volume (IV). Compounds with VP < 10⁻⁷ Pa generally have RRs dependent on IV and lung exposure time. For MS, mean subject values from both studies were slightly greater than 30%. For constituents with urinary biomarkers, correlations with the calculated dose were significantly improved over correlations with mouth exposure. Of toxicological importance is that the dose correlations provide an estimate of the metabolic conversion of a constituent to its respective biomarker. PMID:23742081
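
    The exposure-to-dose conversion described above can be sketched as a simple multiplicative correction. This is an illustrative reading of the abstract: the assumption that the mouth-spill and retention corrections combine as dose = exposure × (1 − MS) × RR is ours, as are the input numbers.

```python
def inhaled_dose(mouth_exposure_mg, mouth_spill_frac, respiratory_retention_frac):
    # Assumed combination rule: the mouth-spill fraction is removed first,
    # then the respiratory retention fraction of what is inhaled is kept.
    inhaled = mouth_exposure_mg * (1.0 - mouth_spill_frac)
    return inhaled * respiratory_retention_frac

# Mean mouth-spill of ~30% and an RR of 88% (orders of magnitude taken from
# the abstract); 1.0 mg of mouth exposure is a made-up figure.
dose = inhaled_dose(1.0, 0.30, 0.88)  # 0.616 mg retained
```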

  11. A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data

    NASA Astrophysics Data System (ADS)

    Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.

    2006-06-01

    Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method, before applying the method on actual patient study cases to compare with the conventional fitting approach and other established renal indices. The various parameter estimates obtained using the proposed method were found to be consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.
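
    A biphasic fit of this kind can be sketched as a sum of a fast and a slow exponential phase fitted by nonlinear least squares. The model and numbers below are generic stand-ins, not the paper's actual renal model:

```python
import numpy as np
from scipy.optimize import curve_fit

def biphasic(t, a1, k1, a2, k2):
    # Fast ("vascular") plus slow ("parenchymal") exponential phases.
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

t = np.linspace(0.0, 20.0, 60)
y = biphasic(t, 5.0, 1.2, 2.0, 0.1)           # noiseless synthetic curve

popt, _ = curve_fit(biphasic, t, y, p0=(4.0, 1.0, 1.0, 0.2))
# popt recovers (5.0, 1.2, 2.0, 0.1) to high precision
```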

  12. Influence of storage time on DNA of Chlamydia trachomatis, Ureaplasma urealyticum, and Neisseria gonorrhoeae for accurate detection by quantitative real-time polymerase chain reaction.

    PubMed

    Lu, Y; Rong, C Z; Zhao, J Y; Lao, X J; Xie, L; Li, S; Qin, X

    2016-01-01

    The shipment and storage conditions of clinical samples pose a major challenge to the detection accuracy of Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Ureaplasma urealyticum (UU) when using quantitative real-time polymerase chain reaction (qRT-PCR). The aim of the present study was to explore the influence of storage time at 4°C on the DNA of these pathogens and its effect on their detection by qRT-PCR. CT, NG, and UU positive genital swabs from 70 patients were collected, and DNA from all samples was extracted and divided into eight aliquots. One aliquot was immediately analyzed with qRT-PCR to assess the initial pathogen load, whereas the remaining samples were stored at 4°C and analyzed after 1, 2, 3, 7, 14, 21, and 28 days. No significant differences in CT, NG, and UU DNA loads were observed between baseline (day 0) and the subsequent time points (days 1, 2, 3, 7, 14, 21, and 28) in any of the 70 samples. Although a slight increase in DNA levels was observed at day 28 compared to day 0, paired sample t-test results revealed no significant differences between the mean DNA levels at different time points following storage at 4°C (all P>0.05). Overall, the CT, UU, and NG DNA loads from all genital swab samples were stable at 4°C over a 28-day period. PMID:27580005
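
    The paired comparison used in the study can be reproduced in miniature with a paired t-test. The DNA loads below are invented; the point is only the shape of the test:

```python
from scipy.stats import ttest_rel

# Hypothetical DNA loads (log10 copies/mL) for five swabs measured at
# day 0 and again after 28 days of storage at 4 °C.
day0  = [5.1, 4.8, 6.0, 5.5, 4.9]
day28 = [5.2, 4.7, 6.1, 5.4, 5.0]

t_stat, p_value = ttest_rel(day0, day28)
# p_value > 0.05 would be read, as in the study, as no significant
# change in DNA load over the storage period.
```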

  14. Modeling Bone Surface Morphology: A Fully Quantitative Method for Age-at-Death Estimation Using the Pubic Symphysis.

    PubMed

    Slice, Dennis E; Algee-Hewitt, Bridget F B

    2015-07-01

    The pubic symphysis is widely used in age estimation for the adult skeleton. Standard practice requires the visual comparison of surface morphology against criteria representing predefined phases and the estimation of case-specific age from an age range associated with the chosen phase. Known problems of method and observer error necessitate alternative tools to quantify age-related change in pubic morphology. This paper presents an objective, fully quantitative method for estimating age-at-death from the skeleton, which exploits a variance-based score of surface complexity computed from vertices obtained from a scanner sampling the pubic symphysis. For laser scans from 41 modern American male skeletons, this method produces results that are significantly associated with known age-at-death (RMSE = 17.15 years). Chronological age is therefore predicted equally well, if not better, with this robust, objective, and fully quantitative method than with prevailing phase-aging systems. This method contributes to forensic casework by responding to medico-legal expectations for evidence standards. PMID:25929827
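
    A variance-based surface score of the kind described can be sketched in a few lines. Taking the plain variance of vertex heights is our simplification of the paper's complexity score, and the height values are invented:

```python
import statistics

def complexity_score(heights):
    # Population variance of vertex heights; a rougher, more complex
    # symphyseal surface yields a larger score.
    return statistics.pvariance(heights)

smooth = [0.00, 0.01, -0.01, 0.00, 0.01]     # hypothetical scanner heights (mm)
rough  = [0.30, -0.25, 0.40, -0.35, 0.20]
# complexity_score(rough) > complexity_score(smooth)
```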

  15. Satellite and Surface-based Quantitative Precipitation Estimation during the Colorado Flood

    NASA Astrophysics Data System (ADS)

    Kucera, Paul; Klepp, Christian; Newman, Andrew

    2015-04-01

    During the period of 9-16 September 2013, a large area of greater than 150 mm of rain, with local amounts of up to 450 mm, fell over a large part of the Colorado Front Range foothills and adjacent plains. This extreme rainfall event caused severe flooding of main river channels and some localized flash flooding which resulted in millions of dollars of damage to private and public properties. The rainfall regime associated with this extreme precipitation event was atypical of storms usually observed in this region. As a result, the satellite and radar rainfall algorithms tuned for this region significantly underestimated the total amount of rainfall. In order to quantify the underestimation and provide insight for improving the radar rainfall estimates for this unique precipitation regime, a comparison study has been conducted using data from several disdrometers that were operating throughout the event. Disdrometers observed over 5000 minutes of rainfall during the event. Analysis of the raindrop spectra indicated that most of the rainfall was comprised of a large number of small drops (< 2 mm in diameter). The raindrop spectra have been stratified by the precipitation regime. For these different regimes, new radar rainfall estimators have been derived from the raindrop spectra. The new estimators have been applied to the radar data to provide new rainfall estimates. These estimates along with satellite-based precipitation estimates have been evaluated using independent rain gauge data. The presentation will provide an overview of the Colorado flood and a summary of results from the precipitation estimation development and analysis.
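
    Deriving a radar rainfall estimator of the usual power-law form Z = aR^b reduces to a linear fit in log-log space. The (R, Z) pairs below are synthetic, generated from the classic a = 200, b = 1.6 relationship rather than from the study's disdrometer spectra:

```python
import math

# Synthetic (rain rate R [mm/h], reflectivity Z [mm^6/m^3]) pairs.
pairs = [(r, 200.0 * r ** 1.6) for r in (0.5, 1, 2, 5, 10, 20, 50)]

# Fit Z = a * R**b by ordinary least squares on log Z vs log R.
xs = [math.log(r) for r, _ in pairs]
ys = [math.log(z) for _, z in pairs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)
# a ≈ 200, b ≈ 1.6 recovered
```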

  16. Validation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in strawberry fruits using different cultivars and osmotic stresses.

    PubMed

    Galli, Vanessa; Borowski, Joyce Moura; Perin, Ellen Cristina; Messias, Rafael da Silva; Labonde, Julia; Pereira, Ivan dos Santos; Silva, Sérgio Delmar Dos Anjos; Rombaldi, Cesar Valmor

    2015-01-10

    The increasing demand for strawberry (Fragaria × ananassa Duch.) fruits is associated mainly with their sensorial characteristics and their content of antioxidant compounds. Nevertheless, strawberry production has been hampered by the crop's sensitivity to abiotic stresses. Understanding the molecular mechanisms underlying the stress response is therefore of great importance for genetic engineering approaches aiming to improve strawberry tolerance. However, the study of gene expression in strawberry requires suitable reference genes. In the present study, seven traditional and novel candidate reference genes were evaluated for transcript normalization in fruits of ten strawberry cultivars and under two abiotic stresses, using RefFinder, which integrates the four major currently available software programs: geNorm, NormFinder, BestKeeper and the comparative delta-Ct method. The results indicate that expression stability depends on the experimental conditions. The candidate reference gene DBP (DNA binding protein) was considered the most suitable for normalizing expression data across strawberry cultivars and under drought stress, and the candidate reference gene HISTH4 (histone H4) was the most stable under osmotic and salt stresses. The traditional genes GAPDH (glyceraldehyde-3-phosphate dehydrogenase) and 18S (18S ribosomal RNA) were the most unstable genes under all conditions. The expression of the phenylalanine ammonia lyase (PAL) and 9-cis epoxycarotenoid dioxygenase (NCED1) genes was used to further confirm the validated candidate reference genes, showing that use of an inappropriate reference gene may yield erroneous results. This study is the first survey of reference gene stability across strawberry cultivars and osmotic stresses and provides guidelines for obtaining more accurate RT-qPCR results in future breeding efforts. PMID:25445290
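
    The comparative delta-Ct method mentioned above ranks candidate reference genes by how variable their pairwise Ct differences are across samples. The sketch below is a simplified version with invented Ct values ("DBP" is deliberately constructed to be the stable gene):

```python
import statistics
from itertools import combinations

def delta_ct_stability(ct):
    # For each gene, average the stdev of its Ct difference against every
    # other candidate across samples; lower means more stable expression.
    scores = {g: [] for g in ct}
    for g1, g2 in combinations(ct, 2):
        sd = statistics.stdev(a - b for a, b in zip(ct[g1], ct[g2]))
        scores[g1].append(sd)
        scores[g2].append(sd)
    return {g: sum(v) / len(v) for g, v in scores.items()}

ct = {                                   # hypothetical Ct values, four samples
    "DBP":   [20.0, 20.0, 20.1, 19.9],
    "GAPDH": [18.0, 21.0, 17.0, 22.0],
    "18S":   [12.0, 9.0, 14.0, 8.0],
}
scores = delta_ct_stability(ct)          # "DBP" gets the lowest (best) score
```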

  17. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions

    PubMed Central

    Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-01-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely
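
    Simulated annealing, the global optimizer named above, can be sketched on a toy one-parameter problem: recovering a transition rate from synthetic survival data. Everything here (model, data, cooling schedule) is a stand-in for the paper's state-based model, not its implementation:

```python
import math
import random

def model(rate, t):
    # Toy model: fraction of pathogens surviving after time t.
    return math.exp(-rate * t)

times = [0.0, 0.5, 1.0, 2.0, 4.0]
data = [model(0.8, t) for t in times]            # synthetic "experiment"

def sse(rate):
    return sum((model(rate, t) - d) ** 2 for t, d in zip(times, data))

random.seed(0)
rate = 2.0                                       # deliberately bad start
best, best_err = rate, sse(rate)
temp = 1.0
for _ in range(5000):
    cand = rate + random.gauss(0.0, 0.1)
    if cand <= 0.0:
        continue
    d_err = sse(cand) - sse(rate)
    # Metropolis rule: always accept improvements, occasionally accept
    # worse moves so the search can escape local minima.
    if d_err < 0.0 or random.random() < math.exp(-d_err / temp):
        rate = cand
        if sse(rate) < best_err:
            best, best_err = rate, sse(rate)
    temp *= 0.999                                # geometric cooling
# best ends up close to the true rate of 0.8
```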

  18. Methods for the quantitative comparison of molecular estimates of clade age and the fossil record.

    PubMed

    Clarke, Julia A; Boyd, Clint A

    2015-01-01

    Approaches quantifying the relative congruence, or incongruence, of molecular divergence estimates and the fossil record have been limited. Previously proposed methods are largely node specific, assessing incongruence at particular nodes for which both fossil data and molecular divergence estimates are available. These existing metrics, and other methods that quantify incongruence across topologies including entirely extinct clades, have so far not taken into account uncertainty surrounding both the divergence estimates and the ages of fossils. They have also treated molecular divergence estimates younger than previously assessed fossil minimum estimates of clade age as if they were the same as cases in which they were older. However, these cases are not the same. Recovered divergence dates younger than compared oldest known occurrences require prior hypotheses regarding the phylogenetic position of the compared fossil record and standard assumptions about the relative timing of morphological and molecular change to be incorrect. Older molecular dates, by contrast, are consistent with an incomplete fossil record and do not require prior assessments of the fossil record to be unreliable in some way. Here, we compare previous approaches and introduce two new descriptive metrics. Both metrics explicitly incorporate information on uncertainty by utilizing the 95% confidence intervals on estimated divergence dates and data on stratigraphic uncertainty concerning the age of the compared fossils. Metric scores are maximized when these ranges are overlapping. MDI (minimum divergence incongruence) discriminates between situations where molecular estimates are younger or older than known fossils reporting both absolute fit values and a number score for incompatible nodes. DIG range (divergence implied gap range) allows quantification of the minimum increase in implied missing fossil record induced by enforcing a given set of molecular-based estimates. These metrics are used ...
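
    The core of both metrics is a comparison between the 95% CI on a molecular divergence date and the stratigraphic age range of the relevant fossil. A loose sketch in that spirit (not the authors' exact formulas) is:

```python
def divergence_fossil_gap(mol_ci, fossil_range):
    # Both arguments are (younger, older) bounds in Ma. Returns 0.0 when
    # the intervals overlap (maximal congruence), otherwise the size of
    # the gap between them.
    lo = max(mol_ci[0], fossil_range[0])
    hi = min(mol_ci[1], fossil_range[1])
    return 0.0 if lo <= hi else lo - hi

gap = divergence_fossil_gap((66.0, 72.0), (58.0, 61.0))   # 5.0 Ma implied gap
none = divergence_fossil_gap((60.0, 70.0), (58.0, 61.0))  # 0.0, ranges overlap
```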

  19. FPGA-based fused smart-sensor for tool-wear area quantitative estimation in CNC machine inserts.

    PubMed

    Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto

    2010-01-01

    Manufacturing processes are of great relevance nowadays, when there is constant demand for better productivity with high quality at low cost. The contribution of this work is the development of an FPGA-based fused smart-sensor to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier and a 3-axis accelerometer. Results from experimentation show that the fusion of both parameters makes it possible to obtain three times better accuracy than that obtained from the current and vibration signals used individually. PMID:22319304

  1. Quantitative software models for the estimation of cost, size, and defects

    NASA Technical Reports Server (NTRS)

    Hihn, J.; Bright, L.; Decker, B.; Lum, K.; Mikulski, C.; Powell, J.

    2002-01-01

    The presentation will provide a brief overview of the SQI measurement program as well as describe each of these models and how they are currently being used in supporting JPL project, task and software managers to estimate and plan future software systems and subsystems.

  2. Toward a Quantitative Estimate of Future Heat Wave Mortality under Global Climate Change

    PubMed Central

    Peng, Roger D.; Bobb, Jennifer F.; Tebaldi, Claudia; McDaniel, Larry; Bell, Michelle L.; Dominici, Francesca

    2011-01-01

    Background Climate change is anticipated to affect human health by changing the distribution of known risk factors. Heat waves have had debilitating effects on human mortality, and global climate models predict an increase in the frequency and severity of heat waves. The extent to which climate change will harm human health through changes in the distribution of heat waves and the sources of uncertainty in estimating these effects have not been studied extensively. Objectives We estimated the future excess mortality attributable to heat waves under global climate change for a major U.S. city. Methods We used a database comprising daily data from 1987 through 2005 on mortality from all nonaccidental causes, ambient levels of particulate matter and ozone, temperature, and dew point temperature for the city of Chicago, Illinois. We estimated the associations between heat waves and mortality in Chicago using Poisson regression models. Results Under three different climate change scenarios for 2081–2100 and in the absence of adaptation, the city of Chicago could experience between 166 and 2,217 excess deaths per year attributable to heat waves, based on estimates from seven global climate models. We noted considerable variability in the projections of annual heat wave mortality; the largest source of variation was the choice of climate model. Conclusions The impact of future heat waves on human health will likely be profound, and significant gains can be expected by lowering future carbon dioxide emissions. PMID:21193384
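
    The Poisson regression at the heart of the Methods can be sketched with iteratively reweighted least squares (IRLS), the standard fitting algorithm for log-link GLMs. The covariate and counts below are simulated, not the Chicago data:

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    # Fit a log-link Poisson regression by iteratively reweighted
    # least squares (Fisher scoring).
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu        # working response
        XtW = X.T * mu                      # working weights are mu
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

rng = np.random.default_rng(0)
n = 5000
heat = rng.uniform(0.0, 1.0, n)             # hypothetical heat-wave index
X = np.column_stack([np.ones(n), heat])
y = rng.poisson(np.exp(0.5 + 0.3 * heat))   # simulated daily death counts
beta = poisson_irls(X, y)                   # beta ≈ (0.5, 0.3)
```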

  3. Estimation of Current Breed Differences in Multibreed Genetic Evaluations Using Quantitative and Molecular Approaches

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objectives of this presentation were to review methods used in multibreed approaches in national cattle evaluation, suggest guidelines for utilizing information for research estimates of breed differences, describe the design of the multibreed research program (germplasm evaluation; GPE) at the ...

  4. A quantitative framework for estimating risk of collision between marine mammals and boats

    USGS Publications Warehouse

    Martin, Julien; Sabatier, Quentin; Gowan, Timothy A.; Giraud, Christophe; Gurarie, Eliezer; Calleson, Scott; Ortega-Ortiz, Joel G.; Deutsch, Charles J.; Rycyk, Athena; Koslovsky, Stacie M.

    2016-01-01

    By applying encounter rate theory to the case of boat collisions with marine mammals, we gained new insights about encounter processes between wildlife and watercraft. Our work emphasizes the importance of considering uncertainty when estimating wildlife mortality. Finally, our findings are relevant to other systems and ecological processes involving the encounter between moving agents.
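
    Encounter rate theory in its simplest ("ideal gas") form gives the expected encounter rate as the product of animal density, an effective collision width, and the relative speed of the two movers. The sketch below uses a root-sum-of-squares approximation for the mean relative speed and entirely hypothetical numbers; the paper's framework is considerably richer:

```python
def encounter_rate(animal_density, boat_speed, animal_speed, collision_width):
    # Ideal-gas encounter model: rate = density * width * relative speed.
    # Root-sum-of-squares is a common approximation to the mean relative
    # speed of two agents moving in random directions.
    v_rel = (boat_speed ** 2 + animal_speed ** 2) ** 0.5
    return animal_density * collision_width * v_rel

# 0.002 animals/km^2, boat at 5 km/h, animal at 1 km/h, 0.05 km width.
rate = encounter_rate(0.002, 5.0, 1.0, 0.05)   # expected encounters per hour
```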

  5. Accurate quantitative measurements of brachial artery cross-sectional vascular area and vascular volume elastic modulus using automated oscillometric measurements: comparison with brachial artery ultrasound

    PubMed Central

    Tomiyama, Yuuki; Yoshinaga, Keiichiro; Fujii, Satoshi; Ochi, Noriki; Inoue, Mamiko; Nishida, Mutumi; Aziki, Kumi; Horie, Tatsunori; Katoh, Chietsugu; Tamaki, Nagara

    2015-01-01

    Increasing vascular diameter and attenuated vascular elasticity may be reliable markers for atherosclerotic risk assessment. However, previous measurements have been complex, operator-dependent or invasive. Recently, we developed a new automated oscillometric method to measure a brachial artery's estimated area (eA) and volume elastic modulus (VE). The aim of this study was to investigate the reliability of the new automated oscillometric measurement of eA and VE. Resting eA and VE were measured using the recently developed automated detector with the oscillometric method. eA was estimated using pressure/volume curves and VE was defined as VE = Δpressure/(100 × Δarea/area) mm Hg/%. Sixteen volunteers (age 35.2±13.1 years) underwent the oscillometric measurements and brachial ultrasound at rest and under nitroglycerin (NTG) administration. Oscillometric measurement was performed twice on different days. The resting eA correlated with ultrasound-measured brachial artery area (r=0.77, P<0.001). Resting eA and VE measurements showed good reproducibility (eA: intraclass correlation coefficient (ICC)=0.88, VE: ICC=0.78). Under NTG stress, eA was significantly increased (12.3±3.0 vs. 17.1±4.6 mm2, P<0.001), similar to the ultrasound evaluation (4.46±0.72 vs. 4.73±0.75 mm, P<0.001). VE was also decreased (0.81±0.16 vs. 0.65±0.11 mm Hg/%, P<0.001) after NTG. Cross-sectional vascular area calculated using this automated oscillometric measurement correlated with the ultrasound measurement and showed good reproducibility. Therefore, this is a reliable approach and the modality may have practical application in automatically assessing muscular artery diameter and elasticity in clinical or epidemiological settings. PMID:25693851
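
    The VE definition quoted in the abstract is directly computable. The pressure and area values below are hypothetical, chosen only to land in the reported ~0.8 mm Hg/% range:

```python
def volume_elastic_modulus(d_pressure_mmhg, area_rest_mm2, area_pressurized_mm2):
    # VE = Δpressure / (100 × Δarea / area), in mm Hg/%.
    pct_area_change = 100.0 * (area_pressurized_mm2 - area_rest_mm2) / area_rest_mm2
    return d_pressure_mmhg / pct_area_change

ve = volume_elastic_modulus(10.0, 12.3, 13.8)   # ≈ 0.82 mm Hg/%
```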

  6. Quantitative estimation of thermal contact conductance of a real front-end component at SPring-8 front-ends.

    PubMed

    Sano, Mutsumi; Takahashi, Sunao; Mochizuki, Tetsuro; Watanabe, Atsuo; Oura, Masaki; Kitamura, Hideo

    2008-01-01

    The thermal contact conductance (TCC) of a real front-end component at SPring-8 has been quantitatively estimated by comparing the results of experiments with those of finite-element analyses. In this paper one of the methods of predicting the TCC of a real instrument is presented. A metal filter assembly, which is an indirect-cooling instrument, was selected for the estimation of the TCC. The temperature of the metal filter assembly for the maximum heat load of synchrotron radiation was calculated from the TCC that is expected under normal conditions. This study contributes towards the ongoing research program being conducted to investigate the real thermal limitation of all front-end high-heat-load components. PMID:18097071
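
    The quantity being estimated, the thermal contact conductance h, links the heat flux crossing an interface to the temperature jump across it: ΔT = q / h. A minimal worked example with invented numbers (not SPring-8 data):

```python
def interface_temperature_jump(heat_flux_w_m2, tcc_w_m2k):
    # Steady-state temperature discontinuity across a contact interface.
    return heat_flux_w_m2 / tcc_w_m2k

# 2e5 W/m^2 across a contact with h = 5000 W/(m^2 K) gives a 40 K jump.
dt = interface_temperature_jump(2.0e5, 5.0e3)   # 40.0 K
```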

  7. [Quantitative estimation of CaO content in surface rocks using hyperspectral thermal infrared emissivity].

    PubMed

    Zhang, Li-Fu; Zhang, Xue-Wen; Huang, Zhao-Qiang; Yang, Hang; Zhang, Fei-Zhou

    2011-11-01

    The objective of the present paper is to study the quantitative relationship between CaO content and thermal infrared emissivity spectra. The surface spectral emissivity of 23 solid rock samples was measured in the field, and the first derivative of the spectral emissivity was also calculated. Multiple linear regression (MLR), principal component regression (PCR) and partial least squares regression (PLSR) models were built and the regression results compared. The results show that there is a good relationship between CaO content and thermal emissivity spectral features; emissivity becomes lower as CaO content increases in the 10.3-13 μm region; and the first-derivative spectra have better predictive ability than the original emissivity spectra. PMID:22242490
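
    The first-derivative preprocessing mentioned above is a simple finite difference along the wavelength axis. Wavelengths and emissivity values here are invented:

```python
# Emissivity spectrum sampled across the thermal infrared (values invented).
wavelengths = [10.3, 10.8, 11.3, 11.8, 12.3, 12.8]   # micrometres
emissivity  = [0.96, 0.94, 0.91, 0.89, 0.88, 0.88]

# Forward-difference first derivative, the preprocessing step applied
# before regressing CaO content on the spectra.
first_derivative = [
    (emissivity[i + 1] - emissivity[i]) / (wavelengths[i + 1] - wavelengths[i])
    for i in range(len(emissivity) - 1)
]
# first_derivative[0] ≈ -0.04 (emissivity falling with wavelength)
```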

  8. Skill Assessment of a Hybrid Technique to Estimate Quantitative Precipitation Forecasts for Galicia (NW Spain)

    NASA Astrophysics Data System (ADS)

    Lage, A.; Taboada, J. J.

    Precipitation is the most obvious of the weather elements in its effects on normal life. Numerical weather prediction (NWP) is generally used to produce quantitative precipitation forecasts (QPF) beyond the 1-3 h time frame. These models often fail to predict small-scale variations of rain because of spin-up problems and their coarse spatial and temporal resolution (Antolik, 2000). Moreover, there are some uncertainties about the behaviour of the NWP models in extreme situations (de Bruijn and Brandsma, 2000). Hybrid techniques, combining the benefits of NWP and statistical approaches in a flexible way, are very useful for achieving a good QPF. In this work, a new QPF technique for Galicia (NW Spain) is presented. This region has rain on more than 50% of days per year, with quantities that may cause floods and human and economic damage. The technique combines an NWP model (ARPS) with a statistical downscaling process based on an automated classification scheme of atmospheric circulation patterns for the Iberian Peninsula (J. Ribalaygua and R. Boren, 1995). Results show that QPF for Galicia is improved using this hybrid technique. [1] Antolik, M.S. 2000 "An Overview of the National Weather Service's centralized statistical quantitative precipitation forecasts". Journal of Hydrology, 239, pp:306-337. [2] de Bruijn, E.I.F and T. Brandsma "Rainfall prediction for a flooding event in Ireland caused by the remnants of Hurricane Charley". Journal of Hydrology, 239, pp:148-161. [3] Ribalaygua, J. and Boren R. "Clasificación de patrones espaciales de precipitación diaria sobre la España Peninsular". Informes N 3 y 4 del Servicio de Análisis e Investigación del Clima. Instituto Nacional de Meteorología. Madrid. 53 pp.

  9. A quantitative method to estimate high gloss polished tool steel surfaces

    NASA Astrophysics Data System (ADS)

    Rebeggiani, S.; Rosén, B.-G.; Sandberg, A.

    2011-08-01

    Visual estimation is today the most common way to assess the surface quality of moulds and dies; a method that is both subjective and, with today's high demands on surfaces, hardly able to distinguish between the finest surface qualities. Instead, a method based on non-contact 3D surface texture analysis is suggested. Several types of tool steel samples, manually as well as machine polished, were analysed to study different types of surface defects such as pitting, orange peel and outward features. The classification of the defect structures serves as a catalogue in which known defects are described. Suggestions for different levels of 'high surface quality', defined in numerical values adapted to high-gloss polished tool steel surfaces, are presented. The final goal is to develop a new manual that can serve as a 'standard' for the assessment of tool steel surfaces for steel producers, mould makers, polishers etc.
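
    Non-contact 3D texture analysis of this kind typically reduces a height map to areal parameters such as Sa (the mean absolute height deviation, one of the standard ISO 25178 parameters). A minimal sketch with invented scanner heights:

```python
# Hypothetical surface heights (micrometres) sampled by a 3D scanner.
heights = [0.02, -0.01, 0.03, -0.04, 0.00, 0.02, -0.02]

mean_h = sum(heights) / len(heights)
sa = sum(abs(h - mean_h) for h in heights) / len(heights)   # Sa ≈ 0.02 um
```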

  10. Quantitative Estimate of the Relation Between Rolling Resistance on Fuel Consumption of Class 8 Tractor Trailers Using Both New and Retreaded Tires (SAE Paper 2014-01-2425)

    EPA Science Inventory

    Road tests of class 8 tractor trailers were conducted by the US Environmental Protection Agency on new and retreaded tires of varying rolling resistance in order to provide estimates of the quantitative relationship between rolling resistance and fuel consumption.

  11. Quantitative television fluoroangiography - the optical measurement of dye concentrations and estimation of retinal blood flow

    SciTech Connect

    Greene, M.; Thomas, A.L. Jr.

    1985-06-01

    The development of a system for the measurement of dye concentrations from single retinal vessels during retinal fluorescein angiography is presented and discussed. The system uses a fundus camera modified for TV viewing. Video gating techniques define the areas of the retina to be studied, and video peak detection yields dye concentrations from retinal vessels. The time course of dye concentration is presented and blood flow into the retina is estimated by a time of transit technique.

  12. Coupling radar and lightning data to improve the quantitative estimation of precipitation

    NASA Astrophysics Data System (ADS)

    François, B.; Molinié, G.; Betz, H. D.

    2009-09-01

    Forecasts in hydrology require rainfall intensity estimates at temporal scales of a few tens of minutes and at spatial scales of a few square kilometres. Radars are the most efficient instruments to provide such data. However, estimating the rainfall intensity (R) from the radar reflectivity (Z) relies on empirical Z-R relationships, which are not robust. Indeed, the Z-R relationships depend on hydrometeor types. The role of lightning flashes in thunderclouds is to relax the electrical constraints. Thundercloud electrical charges are generated by thermodynamical and microphysical processes. Based on these physical considerations, Blyth et al. (2001) derived a relationship between the product of ascending and descending hydrometeor fluxes and the lightning flash rate. Deierling et al. (2008) successfully applied this relationship to data from the STERAO-A and STEPS field campaigns. We have applied the methodology described in Deierling et al. (2008) to operational radar (Météo-France network) and lightning (LINET) data. As these data do not allow computation of the ascending hydrometeor flux, and as the descending mass flux is highly parameterized, thundercloud simulations (MésoNH) are used to assess the role of ascending fluxes and the estimated precipitating fluxes. In order to assess the budget of the terms of the Blyth et al. (2001) equation, the electrified version of MésoNH, including lightning, is run.

  13. Quantitative Estimates of Sequence Divergence for Comparative Analyses of Mammalian Genomes

    PubMed Central

    Cooper, Gregory M.; Brudno, Michael; Program, NISC Comparative Sequencing; Green, Eric D.; Batzoglou, Serafim; Sidow, Arend

    2003-01-01

    Comparative sequence analyses on a collection of carefully chosen mammalian genomes could facilitate identification of functional elements within the human genome and allow quantification of evolutionary constraint at the single nucleotide level. High-resolution quantification would be informative for determining the distribution of important positions within functional elements and for evaluating the relative importance of nucleotide sites that carry single nucleotide polymorphisms (SNPs). Because the level of resolution in comparative sequence analyses is a direct function of sequence diversity, we propose that the information content of a candidate mammalian genome be defined as the sequence divergence it would add relative to already-sequenced genomes. We show that reliable estimates of genomic sequence divergence can be obtained from small genomic regions. On the basis of a multiple sequence alignment of ∼1.4 megabases each from eight mammals, we generate such estimates for five unsequenced mammals. Estimates of the neutral divergence in these data suggest that a small number of diverse mammalian genomes in addition to human, mouse, and rat would allow single nucleotide resolution in comparative sequence analyses. [The multiple sequence alignment of the CFTR region and a spreadsheet with the calculations performed, will be available as supplementary information online at www.genome.org.] PMID:12727901

  14. Quantitative estimates of tropical temperature change in lowland Central America during the last 42 ka

    NASA Astrophysics Data System (ADS)

    Grauel, Anna-Lena; Hodell, David A.; Bernasconi, Stefano M.

    2016-03-01

    Determining the magnitude of tropical temperature change during the last glacial period is a fundamental problem in paleoclimate research. Large discrepancies exist in estimates of tropical cooling inferred from marine and terrestrial archives. Here we present a reconstruction of temperature for the last 42 ka from a lake sediment core from Lake Petén Itzá, Guatemala, located at 17°N in lowland Central America. We compared three independent methods of glacial temperature reconstruction: pollen-based temperature estimates, tandem measurements of δ18O in biogenic carbonate and gypsum hydration water, and clumped isotope thermometry. Pollen provides a near-continuous record of temperature change for most of the glacial period but the occurrence of a no-analog pollen assemblage during cold, dry stadials renders temperature estimates unreliable for these intervals. In contrast, the gypsum hydration and clumped isotope methods are limited mainly to the stadial periods when gypsum and biogenic carbonate co-occur. The combination of palynological and geochemical methods leads to a continuous record of tropical temperature change in lowland Central America over the last 42 ka. Furthermore, the gypsum hydration water method and clumped isotope thermometry provide independent estimates of not only temperature, but also the δ18O of lake water that is dependent on the hydrologic balance between evaporation and precipitation over the lake surface and its catchment. The results show that average glacial temperature was cooler in lowland Central America by 5-10 °C relative to the Holocene. The coldest and driest times occurred during North Atlantic stadial events, particularly Heinrich stadials (HSs), when temperature decreased by up to 6 to 10 °C relative to today. This magnitude of cooling is much greater than estimates derived from Caribbean marine records and model simulations. The extreme dry and cold conditions during HSs in the lowland Central America were associated

  15. The Impact of 3D Volume-of-Interest Definition on Accuracy and Precision of Activity Estimation in Quantitative SPECT and Planar Processing Methods

    PubMed Central

    He, Bin; Frey, Eric C.

    2010-01-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise, and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT), and planar (QPlanar) processing. Another important effect impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimations. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in the same transaxial plane in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g., in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from −1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ

  16. Quantitative PCR-based genome size estimation of the astigmatid mites Sarcoptes scabiei, Psoroptes ovis and Dermatophagoides pteronyssinus

    PubMed Central

    2012-01-01

    Background The lack of genomic data available for mites limits our understanding of their biology. Evolving high-throughput sequencing technologies promise to deliver rapid advances in this area; however, estimates of genome size are initially required to ensure sufficient coverage. Methods Quantitative real-time PCR was used to estimate the genome sizes of the burrowing ectoparasitic mite Sarcoptes scabiei, the non-burrowing ectoparasitic mite Psoroptes ovis, and the free-living house dust mite Dermatophagoides pteronyssinus. Additionally, the chromosome number of S. scabiei was determined by chromosomal spreads of embryonic cells derived from single eggs. Results S. scabiei cells were shown to contain 17 or 18 small (< 2 μm) chromosomes, suggesting an XO sex-determination mechanism. The average estimated genome sizes of S. scabiei and P. ovis were 96 (± 7) Mb and 86 (± 2) Mb respectively, among the smallest arthropod genomes reported to date. The D. pteronyssinus genome was estimated to be larger than those of its parasitic counterparts, at 151 Mb in female mites and 218 Mb in male mites. Conclusions These data provide a starting point for understanding the genetic organisation and evolution of these astigmatid mites, informing future sequencing projects. A comparative genomic approach including these three closely related mites is likely to reveal key insights into mite biology, parasitic adaptations and immune evasion. PMID:22214472
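The arithmetic behind qPCR-based genome sizing can be sketched as follows: the copy number of a single-copy gene measured in a known mass of genomic DNA gives the mass per genome, which converts to base pairs via the average molar mass of a base pair. The input figures below are hypothetical, chosen only to land near the S. scabiei estimate; this is not the paper's code.

```python
AVOGADRO = 6.022e23
BP_MOLAR_MASS = 650.0  # average g/mol per base pair of dsDNA

def genome_size_bp(dna_mass_ng, single_copy_gene_copies):
    """Estimate genome size from the qPCR copy number of a
    single-copy gene measured in a known mass of genomic DNA:
    mass per genome -> moles of bp -> number of bp."""
    mass_g = dna_mass_ng * 1e-9
    mass_per_genome_g = mass_g / single_copy_gene_copies
    return mass_per_genome_g * AVOGADRO / BP_MOLAR_MASS

# Hypothetical: 1 ng of genomic DNA yielding ~9650 copies of a
# single-copy locus implies a genome of roughly 96 Mb.
size = genome_size_bp(1.0, 9650)
print(f"{size / 1e6:.0f} Mb")
```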

  17. Results from the HARPS-N 2014 Campaign to Estimate Accurately the Densities of Planets Smaller than 2.5 Earth Radii

    NASA Astrophysics Data System (ADS)

    Charbonneau, David; Harps-N Collaboration

    2015-01-01

    Although the NASA Kepler Mission has determined the physical sizes of hundreds of small planets, and we have in many cases characterized the star in detail, we know virtually nothing about the planetary masses: There are only 7 planets smaller than 2.5 Earth radii for which there exist published mass estimates with a precision better than 20 percent, the bare minimum value required to begin to distinguish between different models of composition. HARPS-N is an ultra-stable fiber-fed high-resolution spectrograph optimized for the measurement of very precise radial velocities. We have 80 nights of guaranteed time per year, of which half are dedicated to the study of small Kepler planets. In preparation for the 2014 season, we compared all available Kepler Objects of Interest to identify the ones for which our 40 nights could be used most profitably. We analyzed the Kepler light curves to constrain the stellar rotation periods, the lifetimes of active regions on the stellar surface, and the noise that would result in our radial velocities. We assumed various mass-radius relations to estimate the observing time required to achieve a mass measurement with a precision of 15%, giving preference to stars that had been well characterized through asteroseismology. We began by monitoring our long list of targets. Based on preliminary results we then selected our final short list, gathering typically 70 observations per target during summer 2014. The resulting mass measurements will have a significant impact on our understanding of these so-called super-Earths and small Neptunes. They would form a core dataset with which the international astronomical community can meaningfully seek to understand these objects and their formation in a quantitative fashion. HARPS-N was funded by the Swiss Space Office, the Harvard Origin of Life Initiative, the Scottish Universities Physics Alliance, the University of Geneva, the Smithsonian Astrophysical Observatory, the Italian National

  18. Multiple automated headspace in-tube extraction for the accurate analysis of relevant wine aroma compounds and for the estimation of their relative liquid-gas transfer rates.

    PubMed

    Zapata, Julián; Lopez, Ricardo; Herrero, Paula; Ferreira, Vicente

    2012-11-30

    An automated headspace in-tube extraction (ITEX) method combined with multiple headspace extraction (MHE) has been developed to provide simultaneously information about the accurate wine content in 20 relevant aroma compounds and about their relative transfer rates to the headspace, and hence about the relative strength of their interactions with the matrix. In the method, 5 μL (for alcohols, acetates and carbonyl alcohols) or 200 μL (for ethyl esters) of wine sample was introduced into a 2 mL vial, heated at 35°C and extracted with 32 (for alcohols, acetates and carbonyl alcohols) or 16 (for ethyl esters) 0.5 mL pumping strokes in four consecutive extraction and analysis cycles. The application of the classical theory of multiple extractions makes it possible to obtain a highly reliable estimate of the total amount of volatile compound present in the sample and a second parameter, β, which is simply the proportion of volatile not transferred to the trap in one extraction cycle, but which seems to be a reliable indicator of the actual volatility of the compound in that particular wine. A study with 20 wines of different types and 1 synthetic sample has revealed the existence of significant differences in the relative volatility of 15 out of 20 odorants. Differences are particularly intense for acetaldehyde and other carbonyls, but are also notable for alcohols and long-chain fatty acid ethyl esters. It is expected that these differences, likely linked to sulphur dioxide and some unknown specific compositional aspects of the wine matrix, can be responsible for relevant sensory changes, and may even explain why the same aroma composition can produce different aroma perceptions in two different wines. PMID:23102525
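The classical multiple-extraction treatment the abstract invokes can be sketched as a geometric series: successive peak areas decay by the factor β, so the total analyte signal is A₁/(1−β). A minimal illustration with hypothetical peak areas, not data from the study:

```python
import math

def mhe_totals(areas):
    """Classical multiple headspace extraction (MHE) treatment:
    successive peak areas decay geometrically, A_i = A_1 * beta**(i-1),
    so the total analyte signal is the geometric-series sum
    A_1 / (1 - beta). beta (the fraction NOT transferred per cycle)
    is recovered by least squares on ln(A_i) versus cycle index."""
    n = len(areas)
    xs = list(range(n))
    ys = [math.log(a) for a in areas]
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    beta = math.exp(slope)
    total = areas[0] / (1.0 - beta)
    return total, beta

# Hypothetical four-cycle peak-area series decaying by beta = 0.6
total, beta = mhe_totals([1000.0, 600.0, 360.0, 216.0])
print(round(total), round(beta, 2))
```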

  19. Quantitative DNA metabarcoding: improved estimates of species proportional biomass using correction factors derived from control material.

    PubMed

    Thomas, Austen C; Deagle, Bruce E; Eveson, J Paige; Harsch, Corie H; Trites, Andrew W

    2016-05-01

    DNA metabarcoding is a powerful new tool allowing characterization of species assemblages using high-throughput amplicon sequencing. The utility of DNA metabarcoding for quantifying relative species abundances is currently limited by both biological and technical biases which influence sequence read counts. We tested the idea of sequencing 50/50 mixtures of target species and a control species in order to generate relative correction factors (RCFs) that account for multiple sources of bias and are applicable to field studies. RCFs will be most effective if they are not affected by input mass ratio or co-occurring species. In a model experiment involving three target fish species and a fixed control, we found RCFs did vary with input ratio but in a consistent fashion, and that 50/50 RCFs applied to DNA sequence counts from various mixtures of the target species still greatly improved relative abundance estimates (e.g. average per species error of 19 ± 8% for uncorrected vs. 3 ± 1% for corrected estimates). To demonstrate the use of correction factors in a field setting, we calculated 50/50 RCFs for 18 harbour seal (Phoca vitulina) prey species (RCFs ranging from 0.68 to 3.68). Applying these corrections to field-collected seal scats affected species percentages from individual samples (Δ 6.7 ± 6.6%) more than population-level species estimates (Δ 1.7 ± 1.2%). Our results indicate that the 50/50 RCF approach is an effective tool for evaluating and correcting biases in DNA metabarcoding studies. The decision to apply correction factors will be influenced by the feasibility of creating tissue mixtures for the target species, and the level of accuracy needed to meet research objectives. PMID:26602877
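The correction step described above amounts to dividing each species' read count by its RCF and renormalising. A minimal sketch with hypothetical species, counts and RCF values:

```python
def corrected_proportions(read_counts, rcfs):
    """Divide each species' raw read count by its relative correction
    factor (RCF, derived from a 50/50 mix with the control species),
    then renormalise to proportional abundance."""
    adjusted = {sp: n / rcfs[sp] for sp, n in read_counts.items()}
    total = sum(adjusted.values())
    return {sp: adj / total for sp, adj in adjusted.items()}

# Hypothetical prey species, read counts and RCFs
reads = {"herring": 6000, "salmon": 3000, "hake": 1000}
rcfs = {"herring": 2.0, "salmon": 1.0, "hake": 0.5}
props = corrected_proportions(reads, rcfs)
print({sp: round(p, 3) for sp, p in props.items()})
```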

  20. Long-term accounting for raindrop size distribution variations improves quantitative precipitation estimation by weather radar

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2016-04-01

    Weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources. The current study focuses on the impact of variations of the raindrop size distribution (DSD) on radar rainfall estimates. Such variations lead to errors in the estimated rainfall intensity (R) and specific attenuation (k) when fixed relations are used for the conversion of the observed reflectivity (Z) into R and k. For non-polarimetric radar, this error source has received relatively little attention compared to other error sources. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from a Parsivel and a Thies LPM disdrometer, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed in The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product compared to applying climatological Z-R and Z-k relations. This especially holds for situations where widespread stratiform precipitation is observed. The best results are obtained when the DSD parameters are optimized. However, the optimized Z-R and Z-k relations show an unrealistic variability that arises from uncorrected error sources. As such, the optimization approach does not result in a realistic DSD shape but instead also accounts for uncorrected error sources, resulting in the best radar rainfall adjustment. Therefore, to further improve the quality of precipitation estimates by weather radar, use should be made either of polarimetric radar or of an extended network of disdrometers.
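The sensitivity to Z-R coefficients that motivates this work can be illustrated by inverting a fixed power law Z = aR^b. The coefficients below are the classic Marshall-Palmer values plus one hypothetical alternative, not the study's DSD-derived ones:

```python
def rain_rate_from_reflectivity(dbz, a=200.0, b=1.6):
    """Invert a fixed Z = a * R**b power law to get rain rate R (mm/h)
    from reflectivity in dBZ. Defaults are the classic Marshall-Palmer
    coefficients; DSD-aware retrievals tie (a, b) to the observed
    drop size distribution instead."""
    z_linear = 10.0 ** (dbz / 10.0)  # dBZ -> Z in mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)

# The same 40 dBZ echo maps to noticeably different rain rates under
# different (a, b) pairs -- the error source discussed above.
for a, b in [(200.0, 1.6), (300.0, 1.4)]:
    r = rain_rate_from_reflectivity(40.0, a, b)
    print(f"a={a:.0f}, b={b:.1f}: R = {r:.1f} mm/h")
```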

  1. Bragg peak prediction from quantitative proton computed tomography using different path estimates.

    PubMed

    Wang, Dongxu; Mackie, T Rockwell; Tomé, Wolfgang A

    2011-02-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ∼0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy. PMID:21212472
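The cubic spline path can be sketched as cubic Hermite interpolation between the proton's measured entry and exit positions and directions; tangent-scaling conventions vary, so this is an illustrative form rather than the paper's exact construction:

```python
def cubic_spline_path(p_in, d_in, p_out, d_out, t):
    """Cubic Hermite interpolation between entry state (position p_in,
    direction d_in) and exit state (p_out, d_out), with t in [0, 1]
    along the path. An illustrative cubic spline path; tangent-scaling
    conventions vary between implementations."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return tuple(
        h00 * pi + h10 * di + h01 * po + h11 * do
        for pi, di, po, do in zip(p_in, d_in, p_out, d_out)
    )

# With parallel entry and exit directions the spline collapses to the
# straight-line path: the midpoint of (0,0) -> (1,0) is (0.5, 0.0).
mid = cubic_spline_path((0.0, 0.0), (1.0, 0.0), (1.0, 0.0), (1.0, 0.0), 0.5)
print(mid)
```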

  2. Raman spectroscopy of human skin: looking for a quantitative algorithm to reliably estimate human age.

    PubMed

    Pezzotti, Giuseppe; Boffelli, Marco; Miyamori, Daisuke; Uemura, Takeshi; Marunaka, Yoshinori; Zhu, Wenliang; Ikegaya, Hiroshi

    2015-06-01

    The possibility of examining soft tissues by Raman spectroscopy is challenged in an attempt to probe human age for the changes in biochemical composition of skin that accompany aging. We present a proof-of-concept report for explicating the biophysical links between vibrational characteristics and the specific compositional and chemical changes associated with aging. The actual existence of such links is then phenomenologically proved. In an attempt to foster the basics for a quantitative use of Raman spectroscopy in assessing aging from human skin samples, a precise spectral deconvolution is performed as a function of donors' ages on five cadaveric samples, which emphasizes the physical significance and the morphological modifications of the Raman bands. The outputs suggest the presence of spectral markers for age identification from skin samples. Some of them appeared as authentic "biological clocks" for the apparent exactness with which they are related to age. Our spectroscopic approach yields clear compositional information of protein folding and crystallization of lipid structures, which can lead to a precise identification of age from infants to adults. Once statistically validated, these parameters might be used to link vibrational aspects at the molecular scale for practical forensic purposes. PMID:26112367

  3. Raman spectroscopy of human skin: looking for a quantitative algorithm to reliably estimate human age

    NASA Astrophysics Data System (ADS)

    Pezzotti, Giuseppe; Boffelli, Marco; Miyamori, Daisuke; Uemura, Takeshi; Marunaka, Yoshinori; Zhu, Wenliang; Ikegaya, Hiroshi

    2015-06-01

    The possibility of examining soft tissues by Raman spectroscopy is challenged in an attempt to probe human age for the changes in biochemical composition of skin that accompany aging. We present a proof-of-concept report for explicating the biophysical links between vibrational characteristics and the specific compositional and chemical changes associated with aging. The actual existence of such links is then phenomenologically proved. In an attempt to foster the basics for a quantitative use of Raman spectroscopy in assessing aging from human skin samples, a precise spectral deconvolution is performed as a function of donors' ages on five cadaveric samples, which emphasizes the physical significance and the morphological modifications of the Raman bands. The outputs suggest the presence of spectral markers for age identification from skin samples. Some of them appeared as authentic "biological clocks" for the apparent exactness with which they are related to age. Our spectroscopic approach yields clear compositional information of protein folding and crystallization of lipid structures, which can lead to a precise identification of age from infants to adults. Once statistically validated, these parameters might be used to link vibrational aspects at the molecular scale for practical forensic purposes.

  4. Quantitative estimation of pulegone in Mentha longifolia growing in Saudi Arabia. Is it safe to use?

    PubMed

    Alam, Prawez; Saleh, Mahmoud Fayez; Abdel-Kader, Maged Saad

    2016-03-01

    Our TLC study of the volatile oil isolated from Mentha longifolia showed a major UV-active spot with a higher Rf value than menthol. Based on the fact that the components of the oil from the same plant differ quantitatively due to environmental conditions, the major spot was isolated using different chromatographic techniques and identified by spectroscopic means as pulegone. The presence of pulegone in M. longifolia, a plant widely used in Saudi Arabia, raised a hot debate due to its known toxicity. The Scientific Committee on Food, Health & Consumer Protection Directorate General, European Commission set a limit for the presence of pulegone in foodstuffs and beverages. In this paper we attempted to determine the exact amount of pulegone in different extracts, volatile oil, and tea flavoured with M. longifolia (Habak) by validated densitometric HPTLC methods using normal phase (Method I) and reverse phase (Method II) TLC plates. The study indicated that the style of use of Habak in Saudi Arabia resulted in a pulegone level well below the allowed limit. PMID:27087088

  5. Theoretical framework for quantitatively estimating ultrasound beam intensities using infrared thermography.

    PubMed

    Myers, Matthew R; Giridhar, Dushyanth

    2011-06-01

    In the characterization of high-intensity focused ultrasound (HIFU) systems, it is desirable to know the intensity field within a tissue phantom. Infrared (IR) thermography is a potentially useful method for inferring this intensity field from the heating pattern within the phantom. However, IR measurements require an air layer between the phantom and the camera, making inferences about the thermal field in the absence of the air complicated. For example, convection currents can arise in the air layer and distort the measurements relative to the phantom-only situation. Quantitative predictions of intensity fields based upon IR temperature data are also complicated by axial and radial diffusion of heat. In this paper, mathematical expressions are derived for use with IR temperature data acquired at times long enough that noise is a relatively small fraction of the temperature trace, but small enough that convection currents have not yet developed. The relations were applied to simulated IR data sets derived from computed pressure and temperature fields. The simulation was performed in a finite-element geometry involving a HIFU transducer sonicating upward in a phantom toward an air interface, with an IR camera mounted atop an air layer, looking down at the heated interface. It was found that, when compared to the intensity field determined directly from acoustic propagation simulations, intensity profiles could be obtained from the simulated IR temperature data with an accuracy of better than 10%, at pre-focal, focal, and post-focal locations. PMID:21682428

  6. Quantitative modelling to estimate the transfer of pharmaceuticals through the food production system.

    PubMed

    Chiţescu, Carmen Lidia; Nicolau, Anca Ioana; Römkens, Paul; Van Der Fels-Klerx, H J

    2014-01-01

    Use of pharmaceuticals in animal production may create an indirect route of contamination of food products of animal origin. This study aimed to assess, through mathematical modelling, the transfer of pharmaceuticals from contaminated soil, through plant uptake, into the dairy food production chain. The scenarios, model parameters, and values cover contaminant emission in slurry production, storage time, immission into soil, plant uptake, bioaccumulation in the animal's body, and transfer to meat and milk. Modelling results confirm the possibility of contamination of dairy cows' meat and milk due to the ingestion of contaminated feed by the cattle. The estimated concentration of pharmaceutical residues obtained for meat ranged from 0 to 6 ng kg(-1) for oxytetracycline, from 0.011 to 0.181 μg kg(-1) for sulfamethoxazole, and from 4.70 to 11.86 μg kg(-1) for ketoconazole. The estimated concentrations for milk were: zero for oxytetracycline, lower than 40 ng L(-1) for sulfamethoxazole, and from 0.98 to 2.48 μg L(-1) for ketoconazole. Results obtained for the three selected pharmaceuticals indicate a minor risk for human health. This study showed that supply chain modelling can be an effective tool for assessing the indirect contamination of feedstuff and animal products by residues of pharmaceuticals. The model can easily be adjusted to other contaminants and supply chains and thus presents a valuable tool to underpin decision making. PMID:24813980

  7. Object orientated automated image analysis: quantitative and qualitative estimation of inflammation in mouse lung

    PubMed Central

    Apfeldorfer, Coralie; Ulrich, Kristina; Jones, Gareth; Goodwin, David; Collins, Susie; Schenck, Emanuel; Richard, Virgile

    2008-01-01

    Historically, histopathology evaluation is performed by a pathologist generating a qualitative assessment on thin tissue sections on glass slides. In the past decade, there has been a growing interest in tools able to reduce human subjectivity and improve workload. Whole slide scanning technology combined with object orientated image analysis can offer the capacity to generate fast and reliable results. In the present study, we combined the use of these emerging technologies to characterise a mouse model of chronic asthma. We monitored the inflammatory changes over five weeks by measuring the number of neutrophils and eosinophils present in the tissue, as well as the bronchiolar associated lymphoid tissue (BALT) area, on whole lung sections. We showed that inflammation assessment could be automated efficiently and reliably. In comparison to human evaluation performed on the same set of sections, computer generated data were more descriptive and fully quantitative. Moreover, optimisation of our detection parameters allowed us to be more sensitive and to generate data over a larger dynamic range than traditional experimental evaluation, such as bronchoalveolar lavage (BAL) inflammatory cell counts obtained by flow cytometry. We also took advantage of the fact that we could increase the number of samples to be analysed within a day. Such optimisation allowed us to determine the best study design and experimental conditions to increase statistical significance between groups. In conclusion, we showed that the combination of whole slide digital scanning and image analysis can be fully automated and deliver more descriptive and biologically relevant data than traditional methods evaluating histopathological pulmonary changes observed in this mouse model of chronic asthma. PMID:18673504

  8. Quantitative estimation of aesthesiometric thresholds for assessing impaired tactile sensation in workers exposed to vibration.

    PubMed

    Bovenzi, M; Zadini, A

    1989-01-01

    To evaluate the usefulness of aesthesiometric threshold testing in the quantitative assessment of peripheral sensorineural disorders occurring in the hand-arm vibration syndrome, two point discrimination (TPD) and depth sense perception (DSP) thresholds were measured by means of two aesthesiometers in the fingertips of 65 forestry workers exposed to chain saw vibration and 91 healthy males unexposed to local vibration or neurotoxic chemicals. Among the healthy subjects, divided into three age groups, there was no difference in the mean values of TPD and DSP thresholds. Assuming 1.28 or 2 standard deviations above the mean to be the upper limits of normality, in the present study the threshold values for TPD were 2.5 and 3.13 mm, respectively. Using the same assumptions, the normal threshold values for DSP were 0.36 and 0.49 mm. Among the 65 chain saw operators the prevalence of peripheral sensory disturbances was 70.8%. On the basis of the aesthesiometric results obtained for the group of 46 chain sawyers affected by sensorineural symptoms and a control group of 46 manual workers, the specificity of the aesthesiometric testing method was found to range between 93.4 and 100%, while the sensitivity varied from 52.2 to 71.7%. In its predictive value, aesthesiometry had a positive accuracy of 84.6-96.0% and a negative accuracy of 42.8-50.0%. Aesthesiometric testing was able to differentiate between normals and vibration workers with sensory disturbances on a group basis (P less than 0.001), but due to the high rate of false negatives among vibration exposed patients, it was unsuitable for objectively confirming sensorineural symptoms on an individual basis.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2777386
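The two statistical steps in the abstract, defining an upper limit of normality as the mean plus 1.28 or 2 standard deviations and summarising test performance from a 2x2 table, can be sketched as follows; all input numbers are hypothetical, not the study's data:

```python
import math

def upper_limit_of_normality(values, k=1.28):
    """Upper limit of normality as mean + k sample standard deviations
    (k = 1.28 or 2 in the abstract)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return mean + k * sd

def screening_performance(tp, fp, tn, fn):
    """Sensitivity, specificity and predictive accuracies from a 2x2
    table of a dichotomised threshold test."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
        "npv": tn / (tn + fn),
    }

# Hypothetical TPD thresholds (mm) from a reference group
uln = upper_limit_of_normality([1.2, 1.5, 1.4, 1.3, 1.6], k=1.28)
# Hypothetical 2x2 table: 46 symptomatic workers vs 46 controls
perf = screening_performance(tp=33, fp=0, tn=46, fn=13)
print(round(uln, 2), round(perf["sensitivity"], 3))
```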

  9. Evaluating the capabilities of Sentinel-2 for quantitative estimation of biophysical variables in vegetation

    NASA Astrophysics Data System (ADS)

    Frampton, William James; Dash, Jadunandan; Watmough, Gary; Milton, Edward James

    2013-08-01

    The red edge position (REP) in the vegetation spectral reflectance is a surrogate measure of vegetation chlorophyll content, and hence can be used to monitor the health and function of vegetation. The Multi-Spectral Instrument (MSI) aboard the future ESA Sentinel-2 (S-2) satellite will provide the opportunity for estimation of the REP at much higher spatial resolution (20 m) than has previously been possible with spaceborne sensors such as the Medium Resolution Imaging Spectrometer (MERIS) aboard ENVISAT. This study aims to evaluate the potential of the S-2 MSI sensor for estimation of canopy chlorophyll content, leaf area index (LAI) and leaf chlorophyll concentration (LCC) using data from multiple field campaigns. Included in the assessed field campaigns are results from SEN3Exp in Barrax, Spain, comprising 35 elementary sampling units (ESUs) of LCC and LAI, which were assessed for correlation with MSI data simulated from a CASI airborne imaging spectrometer. The analysis also presents results from SicilyS2EVAL, a campaign consisting of 25 ESUs in Sicily, Italy, supported by a simultaneous Specim Aisa-Eagle data acquisition. In addition, these results were compared to outputs from the PROSAIL model for similar values of the biophysical variables in the ESUs. The paper in turn assesses the scope of S-2 for retrieval of biophysical variables using these combined datasets by investigating the performance of the relevant vegetation indices (VIs), as well as presenting the novel Inverted Red-Edge Chlorophyll Index (IRECI) and Sentinel-2 Red-Edge Position (S2REP). Results indicated significant relationships with both canopy chlorophyll content and LAI for simulated MSI data using IRECI or the Normalised Difference Vegetation Index (NDVI), while S2REP and the MERIS Terrestrial Chlorophyll Index (MTCI) were found to have the strongest correlation for retrieval of LCC.

  10. Teratogenic potency of valproate analogues evaluated by quantitative estimation of cellular morphology in vitro.

    PubMed

    Berezin, V; Kawa, A; Bojic, U; Foley, A; Nau, H; Regan, C; Edvardsen, K; Bock, E

    1996-10-01

    To develop a simple prescreening system for teratogenicity testing, a novel in vitro assay was established using computer assisted microscopy allowing automatic delineation of contours of stained cells and thereby quantitative determination of cellular morphology. The effects of valproic acid (VPA) and analogues with high as well as low teratogenic activities (as previously determined in vivo) were used as probes for study of the discrimination power of the in vitro model. VPA, a teratogenic analogue (+/-)-4-en-VPA, and a non-teratogenic analogue (E)-2-en-VPA, as well as the purified (S)- and (R)-enantiomers of 4-yn-VPA (teratogenic and non-teratogenic, respectively), were tested for their effects on cellular morphology of cloned mouse fibroblastoid L-cell lines, neuroblastoma N2a cells, and rat glioma BT4Cn cells, and were found to induce varying increases in cellular area. Furthermore, it was demonstrated that under the chosen conditions the increase in area correlated statistically significantly with the teratogenic potency of the employed compounds. Setting the cellular area of mouse L-cells to 100% under control conditions, the most pronounced effect was observed for (S)-4-yn-VPA (211%, P < 0.001) followed by VPA (186%, P < 0.001), 4-en-VPA (169%, P < 0.001) and non-teratogenic 2-en-VPA (137%, P < 0.005) and (R)-4-yn-VPA (105%). This effect was independent of the choice of substrata, since it was observed on L-cells grown on plastic, fibronectin, laminin and Matrigel. However, when VPA-treated cells were exposed to an arginyl-glycyl-aspartate (RGD)-containing peptide to test whether VPA treatment was able to modulate RGD-dependent integrin interactions with components of the extracellular matrix, hardly any effect could be observed, whereas control cells readily detached from the substratum, indicating a changed substrate adhesion of the VPA-treated cells. The data thus indicate that measurement of cellular area may serve as a simple in vitro test in the

  11. THE EVOLUTION OF SOLAR FLUX FROM 0.1 nm TO 160 μm: QUANTITATIVE ESTIMATES FOR PLANETARY STUDIES

    SciTech Connect

    Claire, Mark W.; Sheets, John; Meadows, Victoria S.; Cohen, Martin; Ribas, Ignasi; Catling, David C.

    2012-09-20

    Understanding changes in the solar flux over geologic time is vital for understanding the evolution of planetary atmospheres because it affects atmospheric escape and chemistry, as well as climate. We describe a numerical parameterization for wavelength-dependent changes to the non-attenuated solar flux appropriate for most times and places in the solar system. We combine data from the Sun and solar analogs to estimate enhanced UV and X-ray fluxes for the young Sun and use standard solar models to estimate changing visible and infrared fluxes. The parameterization, a series of multipliers relative to the modern top of the atmosphere flux at Earth, is valid from 0.1 nm through the infrared, and from 0.6 Gyr through 6.7 Gyr, and is extended from the solar zero-age main sequence to 8.0 Gyr subject to additional uncertainties. The parameterization is applied to a representative modern day flux, providing quantitative estimates of the wavelength dependence of solar flux for paleodates relevant to the evolution of atmospheres in the solar system (or around other G-type stars). We validate the code by Monte Carlo analysis of uncertainties in stellar age and flux, and with comparisons to the solar proxies κ¹ Cet and EK Dra. The model is applied to the computation of photolysis rates on the Archean Earth.
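The parameterization described above is a set of age-dependent flux multipliers. The sketch below illustrates the general shape of such multipliers, not the paper's fitted coefficients: the short-wavelength enhancement is modeled as a power law in stellar age (the exponent −1.2 is a rounded literature-style placeholder), and the bolometric brightening follows the standard Gough (1981) approximation for solar luminosity.

```python
SUN_AGE_GYR = 4.57  # adopted age of the modern Sun

def xray_uv_multiplier(age_gyr, exponent=-1.2):
    """Illustrative power-law enhancement of short-wavelength flux
    for a younger (more magnetically active) Sun."""
    return (age_gyr / SUN_AGE_GYR) ** exponent

def bolometric_multiplier(age_gyr):
    """Gough (1981)-style approximation: L(t)/L0 = 1 / (1 + 0.4*(1 - t/t0))."""
    return 1.0 / (1.0 + 0.4 * (1.0 - age_gyr / SUN_AGE_GYR))

if __name__ == "__main__":
    for t in (0.6, 2.0, 4.57):
        print(t, round(xray_uv_multiplier(t), 2),
              round(bolometric_multiplier(t), 3))
```

With these placeholder values, the 0.6 Gyr Sun is roughly an order of magnitude brighter in X-ray/UV while about 25% dimmer bolometrically, the qualitative behavior the parameterization captures per wavelength bin.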

  12. Quantitative Estimates of the Numbers of Casualties to be Expected due to Major Earthquakes Near Megacities

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Wenzel, F.

    2004-12-01

    Defining casualties as the sum of fatalities plus injured, we use their mean number, as calculated by QUAKELOSS (developed by the Extreme Situations Research Center, Moscow), as a measure of the extent of possible disasters due to earthquakes. Examples of cities we examined include Algiers, Cairo, Istanbul, Mumbai and Teheran, with populations ranging from about 3 to 20 million. With the assumption that the properties of the building stock have not changed since 1950, we find that the number of expected casualties will have increased about 5 to 10 fold by the year 2015. This increase is directly proportional to the increase of the population. For the assumed magnitude, we used M7 and M6.5, because shallow earthquakes in this range can occur in the seismogenic layer without rupturing the surface. This means they could occur anywhere in a seismically active area, not only along known faults. As a function of epicentral distance, the fraction of the population that becomes casualties decreases from about 6% at 20 km, to 3% at 30 km and 0.5% at 50 km, for an earthquake of M7. At 30 km distance, the assumed variation of the properties of the building stock from country to country gives rise to variations of 1% to 5% in the estimate of the percentage of the population that becomes casualties. As a function of earthquake size, the expected number of casualties drops by approximately an order of magnitude for an M6.5, compared to an M7, at 30 km distance. Because the computer code and database in QUAKELOSS are calibrated based on about 1000 earthquakes with fatalities, and verified by real-time loss estimates for about 60 cases, these results are probably of the correct order of magnitude. However, the results should not be taken as overly reliable, because (1) the probability calculations of the losses result in uncertainties of about a factor of two, (2) the method has been tested for medium size cities, not for megacities, and (3) many assumptions were made. 
Nevertheless, it is
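The distance dependence quoted above (6% at 20 km, 3% at 30 km, 0.5% at 50 km for M7) can be interpolated on a log scale for intermediate distances. This is purely a reading aid for the quoted numbers, not the QUAKELOSS model itself:

```python
import math

# Casualty fraction (% of population) vs. epicentral distance for M7,
# taken from the values quoted in the abstract.
POINTS = [(20.0, 6.0), (30.0, 3.0), (50.0, 0.5)]

def casualty_percent(distance_km):
    """Log-linear interpolation between the tabulated points;
    clamped to the end values outside the tabulated range."""
    if distance_km <= POINTS[0][0]:
        return POINTS[0][1]
    if distance_km >= POINTS[-1][0]:
        return POINTS[-1][1]
    for (d0, p0), (d1, p1) in zip(POINTS, POINTS[1:]):
        if d0 <= distance_km <= d1:
            frac = (distance_km - d0) / (d1 - d0)
            return math.exp(math.log(p0) + frac * (math.log(p1) - math.log(p0)))

if __name__ == "__main__":
    for d in (20, 25, 30, 40, 50):
        print(d, round(casualty_percent(d), 2))
```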

  13. Quantitative estimates of changes in marine and terrestrial primary productivity over the past 300 million years

    PubMed Central

    Beerling, D. J.

    1999-01-01

    Changes in marine primary production over geological time have influenced a network of global biogeochemical cycles with corresponding feedbacks on climate. However, these changes continue to remain largely unquantified because of uncertainties in calculating global estimates from sedimentary palaeoproductivity indicators. I therefore describe a new approach to the problem using a mass balance analysis of the stable isotopes (18O/16O) of oxygen with modelled O2 fluxes and isotopic exchanges by terrestrial vegetation for 300, 150, 100 and 50 million years before present, and the treatment of the Earth as a closed system with respect to the cycling of O2. Calculated in this way, oceanic net primary productivity was low in the Carboniferous but high (up to four times that of modern oceans) during the Late Jurassic, mid-Cretaceous and early Eocene greenhouse eras with a greater requirement for key nutrients. Such a requirement would be compatible with accelerated rates of continental weathering under the greenhouse conditions of the Mesozoic and early Tertiary. These results indicate possible changes in the strength of a key component of the oceanic carbon (organic and carbonate) pump in the geological past, with a corresponding feedback on atmospheric CO2 and climate, and provide an improved framework for understanding the role of ocean biota in the evolution of the global biogeochemical cycles of C, N and P.

  14. [Quantitative estimation of glycyrrhizic acid and liquiritin contents using in-situ canopy spectroscopy].

    PubMed

    Ding, Ling; Li, Hong-Yi; Zhang, Xue-Wen

    2014-07-01

    The present study is the first attempt to apply in situ hyperspectral data of the Glycyrrhiza uralensis canopy in the visible-shortwave infrared region (Vis-SWIR) to the quantitative estimation of glycyrrhizic acid (GA) and liquiritin (LQ) contents. After first-derivative preprocessing and feature band selection by Wilks' lambda stepwise method, partial least squares (PLS) regression models, with high performance liquid chromatography (HPLC) as the reference, were constructed to predict GA and LQ contents, respectively. With the nine selected bands, the GA model achieved a calibration R2 of 0.953, a root mean square error of calibration (RMSEC) of 0.31, a prediction R2 of 0.875, and a root mean square error of prediction (RMSEP) of 0.39; the LQ model achieved a calibration R2 of 0.932, an RMSEC of 0.22, a prediction R2 of 0.883, and an RMSEP of 0.27. The results showed that our methods provided acceptable accuracy and demonstrated the feasibility of determining GA and LQ contents from remotely sensed data. It is recommended that a follow-up study be conducted under field conditions using airborne and/or spaceborne hyperspectral sensors. PMID:25269311
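The accuracy metrics quoted above (R2, RMSEC, RMSEP) are standard regression diagnostics. A minimal sketch of how they are computed from reference and predicted values; the PLS model and Wilks' lambda band selection are not reproduced here, and the data are hypothetical stand-ins for HPLC reference measurements:

```python
import math

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean square error (RMSEC on the calibration set,
    RMSEP on the validation set)."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

if __name__ == "__main__":
    measured = [2.1, 2.9, 3.8, 5.2, 6.0]    # e.g. HPLC reference contents
    predicted = [2.0, 3.1, 3.6, 5.4, 5.9]   # e.g. PLS model predictions
    print(round(r_squared(measured, predicted), 3))  # -> 0.986
    print(round(rmse(measured, predicted), 3))       # -> 0.167
```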

  15. A generalized estimating equations approach to quantitative trait locus detection of non-normal traits

    PubMed Central

    Thomson, Peter C

    2003-01-01

    To date, most statistical developments in QTL detection methodology have been directed at continuous traits with an underlying normal distribution. This paper presents a method for QTL analysis of non-normal traits using a generalized linear mixed model approach. Development of this method has been motivated by a backcross experiment involving two inbred lines of mice that was conducted in order to locate a QTL for litter size. A Poisson regression form is used to model litter size, with allowances made for under- as well as over-dispersion, as suggested by the experimental data. In addition to fixed parity effects, random animal effects have also been included in the model. However, the method is not fully parametric as the model is specified only in terms of means, variances and covariances, and not as a full probability model. Consequently, a generalized estimating equations (GEE) approach is used to fit the model. For statistical inferences, permutation tests and bootstrap procedures are used. This method is illustrated with simulated as well as experimental mouse data. Overall, the method is found to be quite reliable, and with modification, can be used for QTL detection for a range of other non-normally distributed traits. PMID:12729549
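The under- and over-dispersion the model allows for can be checked with a Pearson-type dispersion statistic: under the Poisson assumption Var(y) = E(y), the statistic is near 1, while values below or above 1 indicate under- or over-dispersion. A minimal constant-mean sketch with hypothetical litter sizes (the paper's GEE machinery with parity and animal effects is not reproduced):

```python
def dispersion(counts):
    """Pearson dispersion under a constant-mean Poisson model.
    Values near 1 are consistent with Poisson variance; < 1 suggests
    under-dispersion, > 1 over-dispersion."""
    n = len(counts)
    mu = sum(counts) / n
    pearson = sum((y - mu) ** 2 / mu for y in counts)
    return pearson / (n - 1)  # divide by residual degrees of freedom

if __name__ == "__main__":
    litters = [6, 8, 7, 9, 5, 10, 8, 7, 6, 9]  # hypothetical litter sizes
    phi = dispersion(litters)
    print(round(phi, 3))  # phi < 1 here, i.e. under-dispersed
```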

  16. A generalized estimating equations approach to quantitative trait locus detection of non-normal traits.

    PubMed

    Thomson, Peter C

    2003-01-01

    To date, most statistical developments in QTL detection methodology have been directed at continuous traits with an underlying normal distribution. This paper presents a method for QTL analysis of non-normal traits using a generalized linear mixed model approach. Development of this method has been motivated by a backcross experiment involving two inbred lines of mice that was conducted in order to locate a QTL for litter size. A Poisson regression form is used to model litter size, with allowances made for under- as well as over-dispersion, as suggested by the experimental data. In addition to fixed parity effects, random animal effects have also been included in the model. However, the method is not fully parametric as the model is specified only in terms of means, variances and covariances, and not as a full probability model. Consequently, a generalized estimating equations (GEE) approach is used to fit the model. For statistical inferences, permutation tests and bootstrap procedures are used. This method is illustrated with simulated as well as experimental mouse data. Overall, the method is found to be quite reliable, and with modification, can be used for QTL detection for a range of other non-normally distributed traits. PMID:12729549

  17. A new quantitative approach for estimating bone cell connections from nano-CT images.

    PubMed

    Dong, Pei; Pacureanu, Alexandra; Zuluaga, Maria A; Olivier, Cécile; Frouin, Frédérique; Grimal, Quentin; Peyrin, Françoise

    2013-01-01

    Recent works have highlighted the crucial role of the osteocyte system in bone fragility. The number of canaliculi per osteocyte lacuna (Lc.NCa) is an important parameter that reflects the functionality of bone tissue, but it is rarely reported due to the limitations of current microscopy techniques, and has only been assessed from 2D histology sections. Previously, we showed that synchrotron radiation nanotomography (SR-nanoCT) is a promising technique for imaging the 3D lacunar-canalicular network. Here we present, for the first time, an automatic method to quantify the connectivity of bone cells in 3D. After segmentation, our method first separates and labels each lacuna in the network. Then, by creating a bounding surface around each lacuna, the Lc.NCa is calculated through estimating 3D topological parameters. The proposed method was successfully applied to a 3D SR-nanoCT image of cortical femoral bone. Statistical results on 165 lacunae are reported, showing a mean Lc.NCa of 51, which is consistent with the literature. PMID:24110532

  18. Estimating the persistence of organic contaminants in indirect potable reuse systems using quantitative structure activity relationship (QSAR).

    PubMed

    Lim, Seung Joo; Fox, Peter

    2012-09-01

    Predictions from the quantitative structure activity relationship (QSAR) model EPI Suite were modified to estimate the persistence of organic contaminants in indirect potable reuse systems. The modified prediction included the effects of sorption, biodegradation, and oxidation that may occur during sub-surface transport. A retardation factor was used to simulate the mobility of adsorbed compounds during sub-surface transport to a recovery well. A set of compounds with measured persistence properties during sub-surface transport was used to validate the modified EPI Suite predictions. A comparison of predicted and measured values was performed, and the residual sum of squares showed the importance of including oxidation and sorption. Sorption was the most important factor to include in predicting the fates of organic chemicals in the sub-surface environment. PMID:22766422
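The retardation factor mentioned above is commonly written, for linear sorption, as R = 1 + (ρ_b / n)·Kd with Kd = Koc·foc. A minimal sketch in that standard form; the parameter values are illustrative, not the study's:

```python
def retardation_factor(bulk_density_kg_L, porosity, koc_L_kg, foc):
    """Linear-sorption retardation factor R = 1 + (rho_b / n) * Kd.
    A compound with factor R travels ~R times slower than the groundwater."""
    kd = koc_L_kg * foc  # soil-water partition coefficient (L/kg)
    return 1.0 + (bulk_density_kg_L / porosity) * kd

if __name__ == "__main__":
    # A weakly sorbing contaminant in a sandy aquifer (hypothetical values).
    r = retardation_factor(bulk_density_kg_L=1.6, porosity=0.35,
                           koc_L_kg=100.0, foc=0.001)
    print(round(r, 2))  # -> 1.46
```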

  19. Application of quantitative estimates of fecal hemoglobin concentration for risk prediction of colorectal neoplasia

    PubMed Central

    Liao, Chao-Sheng; Lin, Yu-Min; Chang, Hung-Chuen; Chen, Yu-Hung; Chong, Lee-Won; Chen, Chun-Hao; Lin, Yueh-Shih; Yang, Kuo-Ching; Shih, Chia-Hui

    2013-01-01

    AIM: To determine the role of the fecal immunochemical test (FIT), used to evaluate fecal hemoglobin concentration, in the prediction of histological grade and risk of colorectal tumors. METHODS: We enrolled 17881 individuals who attended the two-step colorectal cancer screening program in a single hospital between January 2010 and October 2011. Colonoscopy was recommended to the participants with an FIT of ≥ 12 ngHb/mL buffer. We classified colorectal lesions as cancer (C), advanced adenoma (AA), adenoma (A), and others (O) by their colonoscopic and histological findings. Multiple linear regression analysis adjusted for age and gender was used to determine the association between the FIT results and colorectal tumor grade. The risk of adenomatous neoplasia was estimated by calculating the positive predictive values for different FIT concentrations. RESULTS: The positive rate of the FIT was 10.9% (1948/17881). The attendance rate for colonoscopy was 63.1% (1229/1948). The number of false positive results was 23. Of these 1229 cases, the numbers of O, A, AA, and C were 759, 221, 201, and 48, respectively. Regression analysis revealed a positive association between histological grade and FIT concentration (β = 0.088, P < 0.01). A significant log-linear relationship was found between the concentration and positive predictive value of the FIT for predicting colorectal tumors (R2 > 0.95, P < 0.001). CONCLUSION: Higher FIT concentrations are associated with more advanced histological grades. Risk prediction for colorectal neoplasia based on individual FIT concentrations is significant and may help to improve the performance of screening programs. PMID:24363529
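The log-linear relationship between FIT concentration and positive predictive value reported above can be sketched as a least-squares fit of PPV against log10(concentration) per concentration bin. The counts and bin midpoints below are hypothetical, chosen only to illustrate the computation:

```python
import math

def ppv(true_pos, false_pos):
    """Positive predictive value within one concentration bin."""
    return true_pos / (true_pos + false_pos)

def fit_log_linear(concentrations, ppvs):
    """Least-squares slope/intercept of ppv = a * log10(conc) + b."""
    xs = [math.log10(c) for c in concentrations]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ppvs) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ppvs))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

if __name__ == "__main__":
    conc = [25.0, 100.0, 400.0]  # ngHb/mL buffer, bin midpoints (hypothetical)
    rates = [ppv(30, 70), ppv(45, 55), ppv(60, 40)]
    slope, intercept = fit_log_linear(conc, rates)
    print(round(slope, 3), round(intercept, 3))
```

A positive slope reproduces the abstract's finding that higher FIT concentrations predict colorectal neoplasia with higher probability.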

  20. Quantitative estimates of metamorphic equilibria: Tallassee synform, Dadeville belt, Alabama's Inner Piedmont

    SciTech Connect

    Drummond, M.S.; Neilson, M.J. (Dept. of Geology)

    1993-03-01

    The Tallassee synform is the major structural feature in the western part of the Dadeville belt. This megascopic F2 structure folds amphibolite (Ropes Creek Amphibolite) and metasedimentary units (Agricola Schist, AS), as well as tonalitic (Camp Hill Gneiss, CHG), granitic (Chattasofka Creek Gneiss, CCG), and mafic-ultramafic plutons (Doss Mt. and Slaughters suites). Acadian-age prograde regional metamorphism preceded the F2 folding event, producing the pervasive S1 foliation and metamorphic recrystallization. Prograde mineralogy in the metapelites and metagraywackes of the AS includes garnet, biotite, muscovite, plagioclase, kyanite, sillimanite, and epidote. The intrusive rocks, both felsic and mafic-ultramafic, are occasionally garnetiferous and provide suitable mineral assemblages for P-T evaluation. The AS yields a range of T-P from 512-635 °C and 5.1-5.5 kbar. Muscovite from the AS exhibits an increase in Ti content from 0.07 to 0.15 Ti per 22 O formula unit with progressively increasing temperatures from 512 to 635 °C. This observation is consistent with other studies that show increasing Ti content with increasing grade. A CHG sample records an average metamorphic T-P of 604 °C and 5.79 kbar. Hornblende-garnet pairs from a Doss Mt. amphibolite sample provide an average metamorphic temperature of 607 °C. These data are consistent with regional Barrovian-type middle to upper amphibolite facies metamorphism for the Tallassee synform. Peak metamorphism is represented by kyanite-sillimanite zone conditions and localized migmatization of the AS. The lithotectonic belts bounding the Dadeville belt to the NW and SE are the eastern Blue Ridge and Opelika belts. Studies have shown that these belts have also experienced Acadian-age amphibolite facies metamorphism, with P-T estimates comparable to those presented here. These data suggest that the eastern Blue Ridge and Inner Piedmont of Alabama experienced the same pervasive dynamothermal Barrovian-type metamorphic episode during Acadian orogenesis.

  1. A Quantitative Method to Estimate Vulnerability. Case Study: Motozintla de Mendoza, Chiapas

    NASA Astrophysics Data System (ADS)

    Rodriguez, F.; Novelo-Casanova, D. A.

    2011-12-01

    The community of Motozintla de Mendoza is located in the State of Chiapas, México (15°22′ N, 92°15′ W), near the international border with Guatemala. Due to its location, this community is continuously exposed to many different hazards. Motozintla has a population of 20,000 inhabitants. This community has suffered the impact of two disasters in recent years. In view of these scenarios, we carried out the present research with the objective of quantifying the vulnerability of this community. We prepared a tool that allows us to document the physical vulnerability by conducting interviews with people in risk situations. Our tool included the analysis of five elements: household structure, public services, socioeconomic characteristics, community preparedness for facing a disaster situation, and risk perception of the inhabitants, using a statistically significant sample. Three field campaigns were carried out (October and November 2009, and October 2010) and 444 interviews were recorded. Five levels of vulnerability were considered: very high, high, middle, moderate and low. Our region of study was classified spatially and the different estimated levels of vulnerability were georeferenced on maps. Our results indicate that the locality has a high level of physical vulnerability: about 74% of the population reports that their household has suffered damage in the past; 86% of the households are built of low-resistance materials; 70% of the interviewed families have a daily income of between five and fifteen dollars; 66% of the population does not know of any existing Civil Protection Plan; 83% of the population considers that they live at a high level of risk due to floods; finally, community organization is practically nonexistent. In conclusion, the level of vulnerability of Motozintla is high due to the many hazards to which it is exposed, in addition to the structural, socioeconomic and cultural characteristics of its inhabitants. 
Evidently, those elements of

  2. Validation of quantitative IR thermography for estimating the U-value by a hot box apparatus

    NASA Astrophysics Data System (ADS)

    Nardi, I.; Paoletti, D.; Ambrosini, D.; de Rubeis, T.; Sfarra, S.

    2015-11-01

    Energy saving plays a key role in the reduction of energy consumption and carbon emissions, and is therefore essential for reaching the goals of the 20-20-20 policy. In particular, buildings are responsible for about 30% of Europe's total energy consumption; increasing their energy efficiency by reducing the thermal transmittance of the envelope is therefore a central element of policy makers' actions and strategies. Currently, the study of the energy performance of buildings is based on international standards; in particular, the Italian framework allows the U-value to be calculated according to ISO 6946 or measured in situ with a heat flow meter (HFM), following the recommendations of ISO 9869. In the last few years, a new technique based on infrared thermography (IRT), also referred to as the Infrared Thermovision Technique (ITT), has been proposed for the in situ determination of the thermal transmittance of opaque building elements, and some case studies have been reported. This method has already been applied to existing buildings, providing reliable results but also revealing some weaknesses. In order to overcome these weak points and to establish a systematic procedure for the application of IRT, a validation of the method has been performed in a monitored environment. An infrared camera, heat flow meter sensors and a nearby meteorological station have been used for the thermal transmittance measurement. The U-values measured in a hot box with IRT were compared with values calculated following international standards and with HFM results. The results give a good description of the advantages, as well as of the open problems, of IR thermography for estimating the U-value. Further studies will help to refine the technique and to identify the best operative conditions.
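IRT-based U-value estimation is often written in a single-point form, U = (h_c + h_r)·(T_in − T_s)/(T_in − T_out), where T_s is the inside wall surface temperature read from the thermogram. A minimal sketch in that form; the convective coefficient and emissivity are textbook-style placeholders, not the calibrated values of the hot-box experiment above:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def u_value(t_in_C, t_out_C, t_surface_C, emissivity=0.9, h_c=3.0):
    """Single-point IRT estimate of thermal transmittance (W/(m^2 K)).
    h_c: assumed convective coefficient; radiation is linearized at the
    mean of air and surface temperatures."""
    t_m = (t_in_C + t_surface_C) / 2.0 + 273.15   # mean temperature (K)
    h_r = 4.0 * emissivity * SIGMA * t_m ** 3     # linearized radiative coeff.
    return (h_c + h_r) * (t_in_C - t_surface_C) / (t_in_C - t_out_C)

if __name__ == "__main__":
    # Hypothetical hot-box readings: warm side 20 C, cold side 0 C,
    # inside surface at 17.5 C as seen by the IR camera.
    print(round(u_value(20.0, 0.0, 17.5), 2))  # ~1 W/(m^2 K)
```

A larger drop between air and surface temperature on the warm side indicates a more transmissive (higher-U) wall, which is what the formula captures.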

  3. Quantitative assessment of the microbial risk of leafy greens from farm to consumption: preliminary framework, data, and risk estimates.

    PubMed

    Danyluk, Michelle D; Schaffner, Donald W

    2011-05-01

    This project was undertaken to relate what is known about the behavior of Escherichia coli O157:H7 under laboratory conditions and integrate this information with what is known regarding the 2006 E. coli O157:H7 spinach outbreak in the context of a quantitative microbial risk assessment. The risk model explicitly assumes that all contamination arises from exposure in the field. Extracted data, models, and user inputs were entered into an Excel spreadsheet, and the modeling software @RISK was used to perform Monte Carlo simulations. The model predicts that cut leafy greens that are temperature abused will support the growth of E. coli O157:H7, and populations of the organism may increase by as much as 1 log CFU/day under optimal temperature conditions. When the risk model used a starting level of -1 log CFU/g, with 0.1% of incoming servings contaminated, the predicted numbers of cells per serving were within the range of best available estimates of pathogen levels during the outbreak. The model predicts that levels in the field of -1 log CFU/g and 0.1% prevalence could have resulted in an outbreak approximately the size of the 2006 E. coli O157:H7 outbreak. This quantitative microbial risk assessment model represents a preliminary framework that identifies available data and provides initial risk estimates for pathogenic E. coli in leafy greens. Data gaps include retail storage times, correlations between storage time and temperature, determining the importance of E. coli O157:H7 in leafy greens lag time models, and validation of the importance of cross-contamination during the washing process. PMID:21549039
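The core Monte Carlo step can be sketched as follows: servings start at the modeled field level (-1 log CFU/g), a small fraction are contaminated, and temperature abuse adds up to ~1 log CFU/day of growth. This is a simplified stand-in for the @RISK spreadsheet model; the serving size, abuse duration and uniform growth distribution are assumptions for illustration only:

```python
import random

def simulate_serving(rng, serving_g=85.0, prevalence=0.001,
                     initial_log_cfu_g=-1.0, abuse_days=2.0):
    """Return the E. coli O157:H7 dose (CFU) in one simulated serving."""
    if rng.random() > prevalence:
        return 0.0  # serving not contaminated
    growth = rng.uniform(0.0, 1.0) * abuse_days  # log CFU/g gained
    log_cfu_g = initial_log_cfu_g + growth
    return serving_g * 10.0 ** log_cfu_g

if __name__ == "__main__":
    rng = random.Random(42)
    doses = [simulate_serving(rng) for _ in range(100_000)]
    contaminated = [d for d in doses if d > 0]
    print(len(contaminated))  # ~0.1% of servings
    print(round(sum(contaminated) / len(contaminated), 1))  # mean CFU/serving
```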

  4. Quantitative Estimates of Cloudiness over the Gulf Stream Locale Using GOES VAS Observations.

    NASA Astrophysics Data System (ADS)

    Alliss, Randall J.; Raman, Sethu

    1995-02-01

    Fields of cloudiness derived from the Geostationary Operational Environmental Satellite VISSR (Visible Infrared Spin Scan Radiometer) Atmospheric Sounder are analyzed over the Gulf Stream locale (GSL) to investigate seasonal and geographical variations. The GSL in this study is defined as the region bounded from 31° to 38°N and 82° to 66°W. This region covers an area that includes the United States mid-Atlantic coast states, the Gulf Stream, and portions of the Sargasso Sea. Clouds over the GSL are found approximately three-quarters of the time between 1985 and 1993. However, large seasonal variations in the frequency of cloudiness exist. These seasonal variations show a distinct relationship to gradients in sea surface temperature (SST). For example, during winter when large SST gradients are present, large gradients in cloudiness are found. Clouds are observed least often during summer over the ocean portion of the GSL. This minimum coincides with an increase in atmospheric stability due to large-scale subsidence. Cloudiness is also found over the GSL in response to mesoscale convergence areas induced by sea surface temperature gradients. Geographical variations in cloudiness are found to be related to the meteorology of the region. During periods of cold-air advection, which are found most frequently in winter, clouds are found less often between the coastline and the core of the Gulf Stream and more often over the Sargasso Sea. During cyclogenesis, large cloud shields often develop and cover the entire domain. Satellite estimates of cloudiness are found to be least reliable over land at night during the cold months. In these situations, the cloud retrieval algorithm often mistakes clear sky for low clouds. Satellite-derived cloudiness over land is compared with daytime surface observations of cloudiness. Results indicate that retrieved cloudiness agrees well with surface observations. Relative humidity fields taken from global analyses are compared with

  5. Contribution of Quantitative Methods of Estimating Mortality Dynamics to Explaining Mechanisms of Aging.

    PubMed

    Shilovsky, G A; Putyatina, T S; Markov, A V; Skulachev, V P

    2015-12-01

    . makes it possible to approximately divide animals and plants only by their levels of the Gompertz type of senescence (i.e. actuarial senescence), whereas susceptibility to biological senescence can be estimated only when principally different models are applied. PMID:26638679

  6. Development of combination tapered fiber-optic biosensor dip probe for quantitative estimation of interleukin-6 in serum samples

    NASA Astrophysics Data System (ADS)

    Wang, Chun Wei; Manne, Upender; Reddy, Vishnu B.; Oelschlager, Denise K.; Katkoori, Venkat R.; Grizzle, William E.; Kapoor, Rakesh

    2010-11-01

    A combination tapered fiber-optic biosensor (CTFOB) dip probe for rapid and cost-effective quantification of proteins in serum samples has been developed. This device relies on diode laser excitation and a charge-coupled device spectrometer, and functions as a sandwich immunoassay. As a proof of principle, this technique was applied to the quantitative estimation of interleukin-6 (IL-6). The probes detected IL-6 at picomolar levels in serum samples obtained from a patient with lupus, an autoimmune disease, and a patient with lymphoma. The estimated concentration of IL-6 in the lupus sample was 5.9 +/- 0.6 pM, and in the lymphoma sample, it was below the detection limit. These concentrations were verified by a procedure involving bead-based xMAP technology, and a similar trend in the concentrations was observed. The specificity of the CTFOB dip probes was assessed by receiver operating characteristic analysis. This analysis suggests that the dip probes can detect 5-pM or higher concentrations of IL-6 in these samples with specificities of 100%. The results provide information for guiding further studies in the utilization of these probes to quantify other analytes in body fluids with high specificity and sensitivity.

  7. PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies

    PubMed Central

    Dai, Xinbin; Wang, Qishan; Xu, Shizhong; Zhao, Patrick X.

    2016-01-01

    The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the ‘missing heritability,’ which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step with the original prototype R code, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome-wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publicly available at http://bioinfo.noble.org/PolyGenic_QTL/. PMID:27224861
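The kinship-matrix sub-pipeline mentioned above computes a genomic relationship matrix from marker genotypes. A plain-Python sketch of one common additive form, K = W Wᵀ / m from centered {0,1,2} genotype codes; this is a generic stand-in for illustration, not PEPIS's C/C++ parallel implementation or its exact kinship definition:

```python
def kinship(genotypes):
    """Additive genomic relationship matrix from an n x m genotype matrix
    coded 0/1/2 (copies of one allele per marker)."""
    n = len(genotypes)
    m = len(genotypes[0])
    # Center each marker by twice its allele frequency.
    freqs = [sum(row[j] for row in genotypes) / (2.0 * n) for j in range(m)]
    w = [[row[j] - 2.0 * freqs[j] for j in range(m)] for row in genotypes]
    # K = W W^T / m
    return [[sum(w[i][k] * w[j][k] for k in range(m)) / m
             for j in range(n)] for i in range(n)]

if __name__ == "__main__":
    geno = [[0, 1, 2, 1],   # three individuals, four markers (hypothetical)
            [1, 1, 2, 0],
            [2, 0, 0, 1]]
    for row in kinship(geno):
        print([round(v, 2) for v in row])
```

The matrix is symmetric, and because each marker column of W is centered, every row of K sums to zero; both properties are cheap sanity checks after a large parallel computation.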

  8. PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies.

    PubMed

    Zhang, Wenchao; Dai, Xinbin; Wang, Qishan; Xu, Shizhong; Zhao, Patrick X

    2016-05-01

    The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the 'missing heritability,' which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step with the original prototype R code, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome-wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publicly available at http://bioinfo.noble.org/PolyGenic_QTL/. PMID:27224861

  9. How accurately can students estimate their performance on an exam and how does this relate to their actual performance on the exam?

    NASA Astrophysics Data System (ADS)

    Rebello, N. Sanjay

    2012-02-01

    Research has shown that students' beliefs regarding their own abilities in math and science can influence their performance in these disciplines. I investigated the relationship between students' estimated performance and actual performance on five exams in a second-semester calculus-based physics class. Students were given about 72 hours after the completion of each of the five exams to estimate their individual score and the class mean score on each exam. Students were given extra credit worth 1% of the exam points for estimating their own score within 2% of the actual score, and another 1% extra credit for estimating the class mean score within 2% of the correct value. I compared students' individual and mean score estimations with the actual scores to investigate the relationship between estimation accuracy and exam performance, as well as trends over the semester.

  10. Quantitative estimation of farmland soil loss by wind-erosion using improved particle-size distribution comparison method (IPSDC)

    NASA Astrophysics Data System (ADS)

    Rende, Wang; Zhongling, Guo; Chunping, Chang; Dengpan, Xiao; Hongjun, Jiang

    2015-12-01

    The rapid and accurate estimation of soil loss by wind erosion remains a challenge. This study presents an improved scheme for estimating the soil loss by wind erosion of farmland. The method estimates the soil loss based on a comparison of the relative contents of erodible and non-erodible particles between the surface and sub-surface layers of the farmland ploughed layer after wind erosion. It relies on two features: the soil particle-size distribution of the sampled soil layer (approximately 2 cm) is relatively uniform, and in the surface layer, wind erosion causes the relative numbers of erodible and non-erodible particles to decrease and increase, respectively. Estimations were performed using this method for the wind erosion periods (WEP) from Oct. 2012 to May 2013 and from Oct. 2013 to April 2014, and for a large wind-erosion event (WEE) on May 3, 2014 in the Bashang area of Hebei Province. The results showed that the average soil loss of farmland by wind erosion from Oct. 2012 to May 2013 was 2852.14 g/m2 with an average depth of 0.21 cm, while soil loss by wind from Oct. 2013 to April 2014 was 1199.17 g/m2 with a mean depth of 0.08 cm. During the severe WEE on May 3, 2014, the average soil loss of farmland by wind erosion was 1299.19 g/m2 with an average depth of 0.10 cm. The soil loss by wind erosion of ploughed and raked fields (PRF) was approximately twice as large as that of oat-stubble fields (OSF). The improved particle-size distribution comparison method (IPSDC) has several advantages. It can calculate not only the wind erosion amount but also the wind deposition amount. Slight changes in the sampling thickness and in the particle diameter range of the non-erodible particles do not obviously influence the results. Furthermore, the method is convenient, rapid, and simple to implement.
It is suitable for estimating the soil loss or deposition by wind erosion of farmland with flat surfaces and high

  11. A Quantitative Method for Comparing the Brightness of Antibody-dye Reagents and Estimating Antibodies Bound per Cell.

    PubMed

    Kantor, Aaron B; Moore, Wayne A; Meehan, Stephen; Parks, David R

    2016-01-01

    We present a quantitative method for comparing the brightness of antibody-dye reagents and estimating antibodies bound per cell. The method is based on complementary binding of test and fill reagents to antibody capture microspheres. Several aliquots of antibody capture beads are stained with varying amounts of the test conjugate. The remaining binding sites on the beads are then filled with a second conjugate containing a different fluorophore. Finally, the fluorescence of the test conjugate compared to the fill conjugate is used to measure the relative brightness of the test conjugate. The fundamental assumption of the test-fill method is that if it takes X molecules of one test antibody to lower the fill signal by Y units, it will take the same X molecules of any other test antibody to give the same effect. We apply a quadratic fit to evaluate the test-fill signal relationship across different amounts of test reagent. If the fit is close to linear, we consider the test reagent to be suitable for quantitative evaluation of antibody binding. To calibrate the antibodies bound per bead, a PE conjugate with 1 PE molecule per antibody is used as a test reagent and the fluorescence scale is calibrated with Quantibrite PE beads. When the fluorescence per antibody molecule has been determined for a particular conjugate, that conjugate can be used for measurement of antibodies bound per cell. This provides comparisons of the brightness of different conjugates when conducted on an instrument whose statistical photoelectron (Spe) scales are known. © 2016 by John Wiley & Sons, Inc. PMID:27367287
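    The quadratic-fit linearity check described above can be sketched as follows. The numbers are hypothetical, not data from the protocol; the idea is simply that a small quadratic coefficient relative to the linear term indicates a near-linear test-fill relationship.

```python
import numpy as np

# Hypothetical test-fill data: amount of test conjugate added (x) vs. the
# resulting drop in fill-conjugate signal (y), in arbitrary units.
test_amount = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
fill_drop = np.array([0.0, 0.49, 1.01, 1.52, 1.98])

# Quadratic fit y ~ c2*x^2 + c1*x + c0. A near-zero c2 means the
# relationship is close to linear, so the test reagent would be suitable
# for quantitative evaluation of antibody binding.
c2, c1, c0 = np.polyfit(test_amount, fill_drop, 2)

# Ratio of the quadratic contribution to the linear term at the largest dose.
nonlinearity = abs(c2) * test_amount.max() / abs(c1)
```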

  12. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method.

    PubMed

    Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and q
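    The Monte Carlo idea can be illustrated with a deliberately simplified sketch (not the authors' full Law-of-Total-Probability model): sensitivity, false-positive background, and log-scale precision error are drawn from assumed distributions, and each draw is inverted to yield one sample of the true concentration. All distribution parameters here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_conc_distribution(measured_log10, n_draws=10000):
    """Monte Carlo inversion (illustrative only): assumes
    measured = true * sensitivity + cross_reaction on a linear scale,
    with qPCR precision error on the log10 scale."""
    sens = rng.beta(40, 2, n_draws)          # P(detect | marker present)
    cross = rng.normal(0.0, 0.05, n_draws)   # false-positive background
    noise = rng.normal(0.0, 0.1, n_draws)    # replicate precision error (log10)
    measured = 10.0 ** (measured_log10 + noise)
    true = np.maximum((measured - np.maximum(cross, 0.0)) / sens, 0.0)
    return np.log10(true + 1e-12)

true_log10 = true_conc_distribution(3.0)  # measured: 10^3 copies per reaction
```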

  13. A quantitative estimation of the energetic cost of brown ring disease in the Manila clam using Dynamic Energy Budget theory

    NASA Astrophysics Data System (ADS)

    Flye-Sainte-Marie, Jonathan; Jean, Fred; Paillard, Christine; Kooijman, Sebastiaan A. L. M.

    2009-08-01

    Brown ring disease (BRD) in the Manila clam, Ruditapes philippinarum, is a bacterial disease caused by the pathogen Vibrio tapetis. This disease induces the formation of a characteristic brown conchiolin deposit on the inner shell and is associated with a decrease in condition index, indicating that the development of the disease affects the energy balance of the clam. A previous study showed that the energy budget of the host was affected by a decrease in filtration activity, and hypothesized that a second way the energy balance is degraded is an increase in maintenance costs associated with the cost of immune response and lesion repair. This paper focuses on this second mode of degradation of the energy balance. A starvation experiment confirmed that the energy balance was affected by BRD independently of the effects on filtration activity, indicating an increase in maintenance costs. An energy budget model of the Manila clam, based on DEB theory, was developed and properly predicted weight loss during starvation. Vibrio development and its effects on the energy budget of the host were then introduced into the model theoretically. Coupling modelling and experimental observations provided a quantitative and dynamic estimate of the increase in maintenance costs associated with the development of BRD. The estimate given here indicates that during an infection the maintenance costs can almost double compared with the uninfected situation. Further development of the model, especially focused on Vibrio dynamics and its effects on filtration activity, is needed to provide a more extensive description of the energetic cost of BRD in the Manila clam.
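    The role of maintenance costs in starvation weight loss can be caricatured with a one-line model, a deliberately minimal stand-in for the full DEB formulation with made-up rate constants: during starvation, structure is burned to pay maintenance, so weight decays exponentially, and doubling the maintenance rate (as BRD is estimated to do) steepens the decay.

```python
import numpy as np

def starvation_weight(w0, maint_rate, days):
    """Weight trajectory under starvation: dW/dt = -maint_rate * W.
    A toy surrogate for the DEB model, with hypothetical rates."""
    t = np.arange(days + 1)
    return w0 * np.exp(-maint_rate * t)

w_healthy = starvation_weight(1.0, 0.005, 60)   # baseline maintenance
w_infected = starvation_weight(1.0, 0.010, 60)  # maintenance ~doubled by BRD
```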

  14. Towards a quantitative, measurement-based estimate of the uncertainty in photon mass attenuation coefficients at radiation therapy energies

    NASA Astrophysics Data System (ADS)

    Ali, E. S. M.; Spencer, B.; McEwen, M. R.; Rogers, D. W. O.

    2015-02-01

    In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy—i.e. 100 keV (orthovoltage) to 25 MeV—using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990-6003) for 10-30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ˜0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. Photon cross section uncertainty is shown to be smaller than the current qualitative ‘envelope of uncertainty’ of the order of 1-2%, as given by Hubbell (1999 Phys. Med. Biol 44 R1-22).
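    The optimum energy-independent scaling factor reduces to a one-parameter least-squares fit. The numbers below are invented for illustration and are not from either dataset in the paper; only the estimator s = Σμ_c·μ_m / Σμ_c² comes from the standard closed-form least-squares solution.

```python
import numpy as np

# Hypothetical measured vs. calculated mass attenuation coefficients
# (arbitrary units); invented values, not the paper's data.
mu_calc = np.array([0.0215, 0.0187, 0.0169, 0.0158])
mu_meas = np.array([0.0216, 0.0188, 0.0170, 0.0159])

# Energy-independent scaling factor s minimizing sum((mu_meas - s*mu_calc)^2):
# closed-form least-squares solution for a single multiplicative parameter.
s = float(np.dot(mu_calc, mu_meas) / np.dot(mu_calc, mu_calc))

# Residual discrepancy after scaling, in percent.
residual_pct = 100.0 * (mu_meas - s * mu_calc) / mu_meas
```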

  15. Quantitative microbial risk assessment combined with hydrodynamic modelling to estimate the public health risk associated with bathing after rainfall events.

    PubMed

    Eregno, Fasil Ejigu; Tryland, Ingun; Tjomsland, Torulv; Myrmel, Mette; Robertson, Lucy; Heistad, Arve

    2016-04-01

    This study investigated the public health risk from exposure to infectious microorganisms at Sandvika recreational beaches, Norway, and the associated dose-response relationships, by combining hydrodynamic modelling with Quantitative Microbial Risk Assessment (QMRA). Meteorological and hydrological data were collected to produce a calibrated hydrodynamic model using Escherichia coli as an indicator of faecal contamination. Based on average concentrations of reference pathogens (norovirus, Campylobacter, Salmonella, Giardia and Cryptosporidium) relative to E. coli in Norwegian sewage from previous studies, the hydrodynamic model was used to simulate the concentrations of pathogens at the local beaches during and after a heavy rainfall event, using three different decay rates. The simulated concentrations were used as input for QMRA, and the public health risk was estimated as the probability of infection from a single exposure of bathers during the three consecutive days after the rainfall event. The level of risk on the first day after the rainfall event was acceptable for the bacterial and parasitic reference pathogens, but high for the viral reference pathogen at all beaches, and severe at the Kalvøya-small and Kalvøya-big beaches, supporting the advice to avoid swimming in the day(s) after heavy rainfall. The study demonstrates the potential of combining discharge-based hydrodynamic modelling with QMRA in the context of bathing water as a tool to evaluate public health risk and support beach management decisions. PMID:26802355
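    A minimal sketch of the QMRA exposure step, using two standard dose-response forms; the concentration, ingestion volume, and parameter values are placeholders for illustration, not values from this study.

```python
import math

def p_infection_exponential(dose, r):
    """Exponential dose-response model: P(inf) = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

def p_infection_beta_poisson(dose, alpha, n50):
    """Approximate beta-Poisson model, often used in QMRA for pathogens
    such as norovirus and Campylobacter."""
    return 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / n50) ** (-alpha)

# Illustrative exposure: swallowing 50 mL of water containing c organisms/L.
c = 10.0                    # organisms per litre (hypothetical)
dose = c * 0.05             # organisms ingested per bathing event
p_single = p_infection_exponential(dose, r=0.0199)  # r is a placeholder
```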

  16. Development and Validation of an RP-HPLC Method for Quantitative Estimation of Eslicarbazepine Acetate in Bulk Drug and Tablets

    PubMed Central

    Singh, M.; Kumar, L.; Arora, P.; Mathur, S. C.; Saini, P. K.; Singh, R. M.; Singh, G. N.

    2013-01-01

    A convenient, simple, accurate, precise and reproducible RP-HPLC method was developed and validated for the estimation of eslicarbazepine acetate in bulk drug and tablet dosage form. This objective was achieved under optimised chromatographic conditions on a Dionex RP-HPLC system with a Dionex C18 column (250×4.6 mm, 5 μm particle size) using a mobile phase composed of methanol and ammonium acetate (0.005 M) in the ratio of 70:30 v/v. The separation was achieved using an isocratic elution method with a flow rate of 1.0 ml/min at room temperature. The effluent was monitored at 230 nm using a diode array detector. The retention time of eslicarbazepine acetate was found to be 4.9 min, and the standard calibration plot was linear over a concentration range of 10-90 μg/ml with r2=0.9995. The limits of detection and quantification were found to be 3.144 and 9.52 μg/ml, respectively. The amount of eslicarbazepine acetate in bulk and tablet dosage form was found to be 99.19 and 97.88%, respectively. The method was validated statistically using the percent relative standard deviation, and the values were found to be within limits. Recovery studies were performed and the percentage recovery was found to be 98.33±0.5%. PMID:24591752
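    The linearity and detection-limit calculations reported above can be sketched generically. The calibration points below are invented, and the LOD/LOQ use the common 3.3σ/S and 10σ/S rules applied to the residual standard deviation of the regression, not the authors' raw data.

```python
import numpy as np

# Hypothetical calibration data: concentration (ug/mL) vs. peak area.
conc = np.array([10, 30, 50, 70, 90], dtype=float)
area = np.array([102, 298, 505, 697, 901], dtype=float)

# Ordinary least-squares calibration line and residual standard deviation.
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
sigma = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))

lod = 3.3 * sigma / slope    # limit of detection
loq = 10.0 * sigma / slope   # limit of quantification
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
```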

  17. Development and Validation of an RP-HPLC Method for Quantitative Estimation of Eslicarbazepine Acetate in Bulk Drug and Tablets.

    PubMed

    Singh, M; Kumar, L; Arora, P; Mathur, S C; Saini, P K; Singh, R M; Singh, G N

    2013-11-01

    A convenient, simple, accurate, precise and reproducible RP-HPLC method was developed and validated for the estimation of eslicarbazepine acetate in bulk drug and tablet dosage form. This objective was achieved under optimised chromatographic conditions on a Dionex RP-HPLC system with a Dionex C18 column (250×4.6 mm, 5 μm particle size) using a mobile phase composed of methanol and ammonium acetate (0.005 M) in the ratio of 70:30 v/v. The separation was achieved using an isocratic elution method with a flow rate of 1.0 ml/min at room temperature. The effluent was monitored at 230 nm using a diode array detector. The retention time of eslicarbazepine acetate was found to be 4.9 min, and the standard calibration plot was linear over a concentration range of 10-90 μg/ml with r(2)=0.9995. The limits of detection and quantification were found to be 3.144 and 9.52 μg/ml, respectively. The amount of eslicarbazepine acetate in bulk and tablet dosage form was found to be 99.19 and 97.88%, respectively. The method was validated statistically using the percent relative standard deviation, and the values were found to be within limits. Recovery studies were performed and the percentage recovery was found to be 98.33±0.5%. PMID:24591752

  18. High resolution fire danger modeling : integration of quantitative precipitation amount estimates derived from weather radars as an input of FWI

    NASA Astrophysics Data System (ADS)

    Cloppet, E.; Regimbeau, M.

    2009-09-01

    Fire meteorological indices provide efficient guidance tools for the prevention, early warning and surveillance of forest fires. The indices are based on meteorological input data. The underlying approach is to exploit meteorological information as fully as possible to model the soil water content, biomass condition and fire danger. Fire meteorological danger is estimated by Météo-France at the national level through the use of the Fire Weather Index (FWI). The fire index services developed within the PREVIEW project (2005-2008) offer for the first time very high resolution mapping of forest fire risk. The high resolution FWI has been implemented in France as a complement to the existing EFFIS operated by the Joint Research Centre. A new method (the ANTILOPE method) of combining precipitation data originating from different sources, such as rain gauges and weather radar measurements, has been applied in the new service. The advantages of this new service include improved detection of local features of fire risk; more accurate analysis of the meteorological input data used in forest fire index models, providing added value for forest fire risk forecasts; and use of radar precipitation data "as is", utilizing the higher resolution, i.e. avoiding averaging operations. The improved accuracy and spatial resolution of the indices provide a powerful early warning tool for national and regional civil protection and fire fighting authorities to alert and initiate forest fire fighting actions and measures.

  19. Quantitative estimation of landslide risk from rapid debris slides on natural slopes in the Nilgiri hills, India

    NASA Astrophysics Data System (ADS)

    Jaiswal, P.; van Westen, C. J.; Jetten, V.

    2011-06-01

    A quantitative procedure for estimating landslide risk to life and property is presented and applied in a mountainous area in the Nilgiri hills of southern India. Risk is estimated for elements at risk located in both initiation zones and run-out paths of potential landslides. Loss of life is expressed as individual risk and as societal risk using F-N curves, whereas the direct loss of properties is expressed in monetary terms. An inventory of 1084 landslides was prepared from historical records available for the period between 1987 and 2009. A substantially complete inventory was obtained for landslides on cut slopes (1042 landslides), while for natural slopes information on only 42 landslides was available. Most landslides were shallow translational debris slides and debris flowslides triggered by rainfall. On natural slopes most landslides occurred as first-time failures. For landslide hazard assessment the following information was derived: (1) landslides on natural slopes grouped into three landslide magnitude classes, based on landslide volumes, (2) the number of future landslides on natural slopes, obtained by establishing a relationship between the number of landslides on natural slopes and cut slopes for different return periods using a Gumbel distribution model, (3) landslide susceptible zones, obtained using a logistic regression model, and (4) distribution of landslides in the susceptible zones, obtained from the model fitting performance (success rate curve). The run-out distance of landslides was assessed empirically using landslide volumes, and the vulnerability of elements at risk was subjectively assessed based on limited historic incidents. Direct specific risk was estimated individually for tea/coffee and horticulture plantations, transport infrastructures, buildings, and people both in initiation and run-out areas. Risks were calculated by considering the minimum, average, and maximum landslide volumes in each magnitude class and the
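    The return-period step, relating the expected number of landslides to a return period through a Gumbel distribution, can be sketched as follows. The annual counts are hypothetical and the fit is a simple method-of-moments version, not the study's actual calibration.

```python
import numpy as np

def gumbel_fit_moments(x):
    """Method-of-moments Gumbel fit: beta = std * sqrt(6) / pi,
    mu = mean - 0.5772 * beta (Euler-Mascheroni constant)."""
    x = np.asarray(x, dtype=float)
    beta = x.std(ddof=1) * np.sqrt(6.0) / np.pi
    mu = x.mean() - 0.5772 * beta
    return mu, beta

def gumbel_quantile(T, mu, beta):
    """Expected annual maximum for a T-year return period."""
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

# Hypothetical annual maximum landslide counts on cut slopes.
counts = [12, 18, 9, 25, 14, 20, 11, 30, 16, 22]
mu, beta = gumbel_fit_moments(counts)
n_50yr = gumbel_quantile(50, mu, beta)  # expected count for a 50-yr event
```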

  20. Quantitative estimation of Tropical Rainfall Mapping Mission precipitation radar signals from ground-based polarimetric radar observations

    NASA Astrophysics Data System (ADS)

    Bolen, Steven M.; Chandrasekar, V.

    2003-06-01

    The Tropical Rainfall Mapping Mission (TRMM) is the first mission dedicated to measuring rainfall from space using radar. The precipitation radar (PR) is one of several instruments aboard the TRMM satellite, which operates in a nearly circular orbit with a nominal altitude of 350 km, inclination of 35°, and period of 91.5 min. The PR is a single-frequency Ku-band instrument designed to yield information about the vertical storm structure so as to gain insight into the intensity and distribution of rainfall. Attenuation effects on PR measurements, however, can be significant and as high as 10-15 dB. This can seriously impair the accuracy of rain rate retrieval algorithms derived from PR signal returns. Quantitative estimation of PR attenuation is made along the PR beam via ground-based polarimetric observations to validate the attenuation correction procedures used by the PR. The reflectivity (Zh) at horizontal polarization and the specific differential phase (Kdp) are found along the beam from S-band ground radar measurements, and theoretical modeling is used to determine the expected specific attenuation (k) along the space-Earth path at Ku-band frequency from these measurements. A theoretical k-Kdp relationship is determined for rain when Kdp ≥ 0.5°/km, and a power law relationship, k = aZh^b, is determined for light rain and other types of hydrometeors encountered along the path. After alignment and resolution volume matching between ground and PR measurements, the two-way path-integrated attenuation (PIA) is calculated along the PR propagation path by integrating the specific attenuation along the path. The PR reflectivity derived after removing the PIA is also compared against ground radar observations.
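    The attenuation bookkeeping described above can be sketched as follows. The coefficients in k = aZh^b and the k-Kdp slope are placeholders of plausible magnitude for Ku band, not the paper's fitted values.

```python
import numpy as np

def specific_attenuation(zh_dbz, kdp, a=3.3e-4, b=0.79, c_kdp=0.233):
    """Ku-band specific attenuation k (dB/km), illustrative coefficients:
    k = c_kdp * Kdp where Kdp >= 0.5 deg/km (rain), otherwise the power
    law k = a * Zh^b with Zh in linear units (mm^6/m^3)."""
    zh_lin = 10.0 ** (np.asarray(zh_dbz) / 10.0)
    return np.where(np.asarray(kdp) >= 0.5, c_kdp * kdp, a * zh_lin ** b)

def two_way_pia(k, dr_km):
    """Two-way path-integrated attenuation (dB): 2 * integral of k."""
    return 2.0 * float(np.sum(k)) * dr_km

# Hypothetical 10-km path sampled every 0.5 km.
kdp = np.array([0.1, 0.3, 0.6, 1.2, 2.0, 1.5, 0.8, 0.4, 0.2, 0.1] * 2)
zh = np.full_like(kdp, 30.0)  # 30 dBZ everywhere, for simplicity
k = specific_attenuation(zh, kdp)
pia = two_way_pia(k, 0.5)
```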

  1. Quantitative estimation of the replication kinetics of genotype 2 PRRSV strains with different levels of virulence in vitro.

    PubMed

    Dong, Jianguo; Wang, Gang; Liu, Yonggang; Shi, Wenda; Wu, Jianan; Wen, Huiqiang; Wang, Shujie; Tian, Zhijun; Cai, Xuehui

    2016-08-01

    Porcine reproductive and respiratory syndrome virus (PRRSV) has become an important pathogen for the swine industry and has resulted in substantial economic losses. In 2006, highly pathogenic PRRSV (HP-PRRSV) belonging to genotype 2 was first identified in China. Here, the replication kinetics of genotype 2 PRRSV strains were estimated in vitro in MARC-145 cells and porcine alveolar macrophages (PAMs) using a TaqMan-based real-time quantitative reverse transcription polymerase chain reaction (RT-qPCR) assay. The lower limit of detection was 10 copies/μL, and the assay was linear between 10(1) and 10(8) copies/μL. The intra-assay coefficients of variation were 0.81-1.36%, and the inter-assay coefficients of variation were 1.77-2.56%. Compared to the low-pathogenicity CH-1a-F45 strain, the viral loads of the highly pathogenic HuN4-F45 strain were 10(0.5)-10(1.05) and 10(0.84)-10(1.35) times greater in MARC-145 cells and PAMs, respectively, from 12 to 96 h after infection (P<0.01). This study is the first to demonstrate that the HuN4-F45 strain replicates at higher levels than CH-1a-F45 in MARC-145 cells and PAMs, suggesting that HuN4-F45 has greater amplification efficiency than CH-1a-F45 in vitro. PMID:27091099
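    A linear standard curve of the kind underlying this assay can be sketched generically. The Ct values below are synthetic, constructed with a slope of -3.32 (which corresponds to roughly 100% amplification efficiency), not the authors' calibration data.

```python
import numpy as np

# Hypothetical 10-fold dilution series: log10 copies/uL vs. Ct.
log_copies = np.arange(1, 9, dtype=float)   # 10^1 .. 10^8 copies/uL
ct = 38.0 - 3.32 * (log_copies - 1.0)       # ideal-efficiency standard curve

slope, intercept = np.polyfit(log_copies, ct, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0   # 1.0 means 100% per cycle

def copies_from_ct(ct_obs):
    """Invert the standard curve: estimated copies/uL from an observed Ct."""
    return 10.0 ** ((ct_obs - intercept) / slope)
```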

  2. Comparison Of Quantitative Precipitation Estimates Derived From Rain Gauge And Radar Derived Algorithms For Operational Flash Flood Support.

    NASA Astrophysics Data System (ADS)

    Streubel, D. P.; Kodama, K.

    2014-12-01

    To provide continuous flash flood situational awareness and to better differentiate the severity of ongoing individual precipitation events, the National Weather Service Research Distributed Hydrologic Model (RDHM) is being implemented over Hawaii and Alaska. In the implementation of RDHM, three gridded precipitation analyses are used as forcing. The first analysis is a radar-only precipitation estimate derived from WSR-88D digital hybrid reflectivity and a Z-R relationship, aggregated onto an hourly ¼ HRAP grid. The second analysis is derived from a rain gauge network and interpolated onto an hourly ¼ HRAP grid using PRISM climatology. The third analysis is derived from a rain gauge network in which rain gauges are assigned static pre-determined weights to derive a uniform mean areal precipitation that is applied over a catchment on a ¼ HRAP grid. To assess the effect of the different QPE analyses on the accuracy of RDHM simulations, and to potentially identify a preferred analysis for operational use, each QPE was used to force RDHM to simulate stream flow for 20 USGS peak flow events. The evaluation of the RDHM simulations focused on peak flow magnitude, peak flow timing, and event volume accuracy, as these are most relevant for operational use. Results showed that RDHM simulations based on the observed rain gauge amounts were more accurate in simulating peak flow magnitude and event volume than those based on the radar-derived analysis. However, this result was not consistent across all 20 events, nor for the few rainfall events in which an annual peak flow was recorded at more than one USGS gage. This suggests that a more robust QPE forcing, with the inclusion of uncertainty derived from the three analyses, may provide a better input for simulating extreme peak flow events.

  3. Quantitative Analysis of Radar Returns from Insects

    NASA Technical Reports Server (NTRS)

    Riley, J. R.

    1979-01-01

    When the number of flying insects is low enough to permit their resolution as individual radar targets, quantitative estimates of their aerial density can be developed. Accurate measurements of heading distribution, using a rotating polarization radar to enhance the wingbeat frequency method of identification, are also presented.

  4. Improvement of radar quantitative precipitation estimation based on real-time adjustments to Z-R relationships and inverse distance weighting correction schemes

    NASA Astrophysics Data System (ADS)

    Wang, Gaili; Liu, Liping; Ding, Yuanyuan

    2012-05-01

    The errors in radar quantitative precipitation estimation consist not only of systematic biases and random noise but also of spatially nonuniform biases in radar rainfall at individual rain-gauge stations. In this study, a real-time adjustment to the radar reflectivity-rain rate (Z-R) relationship scheme and a gauge-corrected, radar-based estimation scheme with inverse distance weighting interpolation were developed. Based on the characteristics of the two schemes, a two-step correction technique for radar quantitative precipitation estimation is proposed. To minimize the errors between radar quantitative precipitation estimates and rain gauge observations, the real-time adjustment to the Z-R relationship is applied first, to remove systematic bias in the time domain. The gauge-corrected, radar-based estimation scheme is then used to eliminate non-uniform errors in space. Based on radar data and rain gauge observations near the Huaihe River, the two-step correction technique was evaluated using two heavy-precipitation events. The results show that the proposed scheme not only mitigated the underestimation of rainfall but also reduced the root-mean-square error and the mean relative error of radar-rain gauge pairs.
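    The real-time Z-R adjustment step can be sketched in miniature: invert Z = aR^b for rain rate, then pick the coefficient a that best matches gauge totals over the current window. The coefficients, grid, and data below are illustrative only, not the paper's scheme.

```python
import numpy as np

def rain_rate_from_z(z_dbz, a=300.0, b=1.4):
    """Invert Z = a * R^b (Z in linear mm^6/m^3) to rain rate R (mm/h)."""
    z_lin = 10.0 ** (np.asarray(z_dbz) / 10.0)
    return (z_lin / a) ** (1.0 / b)

def adjust_a(z_dbz, gauge_mm, b=1.4, a_grid=np.arange(100, 601, 10)):
    """Real-time adjustment (illustrative): choose the multiplicative
    coefficient a that minimizes the total bias against gauge totals."""
    z_lin = 10.0 ** (np.asarray(z_dbz) / 10.0)
    errors = [abs(np.sum((z_lin / a) ** (1.0 / b)) - np.sum(gauge_mm))
              for a in a_grid]
    return a_grid[int(np.argmin(errors))]

# Hypothetical radar/gauge pairs for one hour; the "gauges" are generated
# from a = 200, so the adjustment should recover that value.
z_obs = [35.0, 40.0, 38.0]
gauges = rain_rate_from_z(np.array(z_obs), a=200.0)
best_a = adjust_a(z_obs, gauges)
```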

  5. QUANTITATIVE ESTIMATES OF SOIL INGESTION IN NORMAL CHILDREN BETWEEN THE AGES OF 2 AND 7 YEARS: POPULATION-BASED ESTIMATES USING ALUMINUM, SILICON, AND TITANIUM AS SOIL TRACER ELEMENTS

    EPA Science Inventory

    This investigation was undertaken to provide quantitative estimates of soil ingestion in young children on a population basis, and to identify demographic and behavioral characteristics that influence the amount of soil ingested. 04 children between the ages of 2 and 7 yr were se...

  6. Polydimethylsiloxane-air partition ratios for semi-volatile organic compounds by GC-based measurement and COSMO-RS estimation: Rapid measurements and accurate modelling.

    PubMed

    Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M

    2016-08-01

    Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of PDMS: values of PDMS-to-air partition ratios or coefficients (K_PDMS-Air), and the time to equilibrium of a range of SVOCs. Measured values of K_PDMS-Air,Exp at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log K_PDMS-Air,Exp and estimates made using the pp-LFER model (log K_PDMS-Air,pp-LFER) and the COSMOtherm program (log K_PDMS-Air,COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, indicating the anticipated better performance of the pp-LFER model. Calculations made using measured K_PDMS-Air,Exp values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) and ∼500 years for tris(4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of K_PDMS-Air. PMID:27179237
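    The strong dependence of equilibration time on the partition ratio can be illustrated with a first-order uptake sketch. This assumes air-side-controlled exchange with a placeholder mass-transfer velocity; it is a caricature of PAS uptake kinetics, not the authors' calculation.

```python
import math

def time_to_fraction(frac, k_pdms_air, thickness_cm, v_air_cm_per_s=0.1):
    """Time (s) for a PDMS film to reach a given fraction of equilibrium,
    assuming first-order, air-side-controlled uptake:
    f(t) = 1 - exp(-k_v * t / (K * L)), with mass-transfer velocity k_v,
    partition ratio K and film thickness L. Parameter values are placeholders."""
    k_elim = v_air_cm_per_s / (k_pdms_air * thickness_cm)
    return -math.log(1.0 - frac) / k_elim

# A volatile compound (low K) equilibrates far faster than a heavy one (high K).
t_volatile = time_to_fraction(0.25, 1e5, 0.1)
t_heavy = time_to_fraction(0.25, 1e10, 0.1)
```

Because the model is linear in K, the ratio of the two times equals the ratio of the partition coefficients, which is why time-to-equilibrium spans days to centuries across the volatility range.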

  7. ESTIMATION OF MICROBIAL REDUCTIVE TRANSFORMATION RATES FOR CHLORINATED BENZENES AND PHENOLS USING A QUANTITATIVE STRUCTURE-ACTIVITY RELATIONSHIP APPROACH

    EPA Science Inventory

    A set of literature data was used to derive several quantitative structure-activity relationships (QSARs) to predict the rate constants for the microbial reductive dehalogenation of chlorinated aromatics. Dechlorination rate constants for 25 chloroaromatics were corrected for th...

  8. Towards a Quantitative Use of Satellite Remote Sensing in Crop Growth Models for Large Scale Agricultural Production Estimate (Invited)

    NASA Astrophysics Data System (ADS)

    Defourny, P.

    2013-12-01

    Biophysical variables such as the Green Area Index (GAI), fAPAR and fcover, usually retrieved from MODIS, MERIS and SPOT-Vegetation, describe the quality of green vegetation development. The GLOBAM (Belgium) and EU FP-7 MOCCCASIN (Russia) projects improved the standard products and were demonstrated over large scales. The GAI retrieved from MODIS time series using a purity index criterion successfully depicted the inter-annual variability. Furthermore, the quantitative assimilation of these GAI time series into a crop growth model improved the yield estimate across years. These results showed that GAI assimilation works best at the district or provincial level. In the context of the GEO Ag., the Joint Experiment of Crop Assessment and Monitoring (JECAM) was designed to enable the global agricultural monitoring community to compare such methods and results over a variety of regional cropping systems. For a network of test sites around the world, satellite and field measurements are currently being collected and will be made available for collaborative effort. This experiment should facilitate international standards for data products and reporting, eventually supporting the development of a global system of systems for agricultural crop assessment and monitoring.

  9. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, David C.; Goorvitch, D.

    1994-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation yields a simple, highly accurate numerical method for solving the Schrödinger equation. Importantly, the method provides error estimates, and one can extrapolate the expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to high accuracy.
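
    The core numerical idea above, repeated Richardson extrapolation, can be sketched generically. The following is not the authors' Schrödinger solver, just a minimal illustration of the extrapolation tableau applied to a second-derivative stencil; all names are illustrative.

```python
import math

def second_derivative(f, x, h):
    """O(h^2) central-difference approximation of f''(x)."""
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / h**2

def richardson(f, x, h, levels=4):
    """Repeated Richardson extrapolation on a halved-step sequence.
    The truncation error contains only even powers of h, so level j
    cancels the h^(2j) term using the ratio 4^j between halved steps."""
    T = [[second_derivative(f, x, h / 2**i)] for i in range(levels)]
    for j in range(1, levels):
        for i in range(j, levels):
            k = 4.0**j
            T[i].append((k * T[i][j - 1] - T[i - 1][j - 1]) / (k - 1.0))
    return T[levels - 1][levels - 1]

# A crude starting mesh (h = 0.5) still yields very high accuracy
# after extrapolation; f'' of sin is -sin.
approx = richardson(math.sin, 1.0, 0.5)
```

    The same tableau applies to mesh-dependent expectation values, as the abstract suggests: compute them on meshes h, h/2, h/4, ... and extrapolate the values themselves.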

  10. Estimation of genetic parameters and their sampling variances of quantitative traits in the type 2 modified augmented design

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We proposed a method to estimate the error variance among non-replicated genotypes, thus to estimate the genetic parameters by using replicated controls. We derived formulas to estimate sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...

  11. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    SciTech Connect

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-04-01

    traditionally used to estimate spinal cord NTCP may not apply to the dosimetry of SRS. Further research with additional NTCP models is needed.

  12. Quantitative Estimation of the Amount of Fibrosis in the Rat Liver Using Fractal Dimension of the Shape of Power Spectrum

    NASA Astrophysics Data System (ADS)

    Kikuchi, Tsuneo; Nakazawa, Toshihiro; Furukawa, Tetsuo; Higuchi, Toshiyuki; Maruyama, Yukio; Sato, Sojun

    1995-05-01

    This paper describes the quantitative measurement of the amount of fibrosis in the rat liver using the fractal dimension of the shape of power spectrum. The shape of the power spectrum of the scattered echo from biotissues is strongly affected by its internal structure. The fractal dimension, which is one of the important parameters of the fractal theory, is useful to express the complexity of shape of figures such as the power spectrum. From in vitro experiments using rat liver, it was found that this method can be used to quantitatively measure the amount of fibrosis in the liver, and has the possibility for use in the diagnosis of human liver cirrhosis.
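
    As a concrete illustration of estimating a fractal dimension for the shape of a sampled curve such as a power spectrum, here is a minimal box-counting sketch. The abstract does not specify the authors' exact estimator, so the scale choices and grid normalization below are assumptions.

```python
import math

def box_count_dimension(y, scales=(1, 2, 4, 8, 16)):
    """Box-counting dimension of the graph of a sampled curve y[0..n-1].
    For each box size s, count grid boxes touched by the curve, then fit
    log N(s) ~ -D log s by least squares; D is the negated slope."""
    n = len(y)
    ymin, ymax = min(y), max(y)
    span = (ymax - ymin) or 1.0
    # rescale values so the graph lives on an n x n grid
    z = [(v - ymin) * (n - 1) / span for v in y]
    logs, log_counts = [], []
    for s in scales:
        boxes = {(i // s, int(z[i]) // s) for i in range(n)}
        logs.append(math.log(s))
        log_counts.append(math.log(len(boxes)))
    m = len(scales)
    sx, sy = sum(logs), sum(log_counts)
    sxx = sum(a * a for a in logs)
    sxy = sum(a * b for a, b in zip(logs, log_counts))
    slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    return -slope

d_line = box_count_dimension(list(range(256)))  # graph of a straight line: D = 1
```

    A smooth curve gives D near 1, while a rougher spectrum fills more boxes at fine scales and yields a larger D, which is the contrast the paper exploits for fibrosis scoring.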

  13. Estimation of the genome sizes of the chigger mites Leptotrombidium pallidum and Leptotrombidium scutellare based on quantitative PCR and k-mer analysis

    PubMed Central

    2014-01-01

    Background Leptotrombidium pallidum and Leptotrombidium scutellare are the major vector mites for Orientia tsutsugamushi, the causative agent of scrub typhus. Before these organisms can be subjected to whole-genome sequencing, it is necessary to estimate their genome sizes to obtain basic information for establishing the strategies that should be used for genome sequencing and assembly. Method The genome sizes of L. pallidum and L. scutellare were estimated by a method based on quantitative real-time PCR. In addition, a k-mer analysis of the whole-genome sequences obtained through Illumina sequencing was conducted to verify the mutual compatibility and reliability of the results. Results The genome sizes estimated using qPCR were 191 ± 7 Mb for L. pallidum and 262 ± 13 Mb for L. scutellare. The k-mer analysis-based genome lengths were estimated to be 175 Mb for L. pallidum and 286 Mb for L. scutellare. The estimates from these two independent methods were mutually complementary and within a similar range to those of other Acariform mites. Conclusions The estimation method based on qPCR appears to be a useful alternative when the standard methods, such as flow cytometry, are impractical. The relatively small estimated genome sizes should facilitate whole-genome analysis, which could contribute to our understanding of Arachnida genome evolution and provide key information for scrub typhus prevention and mite vector competence. PMID:24947244
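
    The k-mer half of the study can be sketched computationally: the classic estimator divides the total number of k-mer instances by the modal k-mer depth, which for error-free reads approximates the sequencing coverage. A toy version, assuming error-free reads and synthetic data (not the authors' pipeline):

```python
import random
from collections import Counter

def kmer_genome_size(reads, k=21):
    """Genome-size estimate: total k-mer instances divided by the
    peak (modal) k-mer multiplicity across all reads."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    depth_hist = Counter(counts.values())
    peak_depth = max(depth_hist, key=depth_hist.get)  # modal multiplicity
    return sum(counts.values()) // peak_depth

# synthetic check: a random 500 bp "genome" read 20x with full-length reads
random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(500))
est = kmer_genome_size([genome] * 20, k=21)
# the estimator recovers the number of 21-mer positions, 500 - 21 + 1 = 480
assert est == 480
```

    Real data require error filtering and a proper depth-histogram peak fit, which is why the paper cross-checks the k-mer estimate against qPCR.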

  14. Quantitative estimation of bioclimatic parameters from presence/absence vegetation data in North America by the modern analog technique

    USGS Publications Warehouse

    Thompson, R.S.; Anderson, K.H.; Bartlein, P.J.

    2008-01-01

    The method of modern analogs is widely used to obtain estimates of past climatic conditions from paleobiological assemblages, and despite its frequent use, the method involves so-far untested assumptions. We applied four analog approaches to a continental-scale set of bioclimatic and plant-distribution presence/absence data for North America to assess how well this method works under near-optimal modern conditions. For each point on the grid, we calculated the similarity between its vegetation assemblage and those of all other points on the grid (excluding nearby points). The climate of the points with the most similar vegetation was used to estimate the climate at the target grid point. Estimates based on the use of the Jaccard similarity coefficient had smaller errors than those based on the use of a new similarity coefficient, although the latter may be more robust because it does not assume that the "fossil" assemblage is complete. The results of these analyses indicate that presence/absence vegetation assemblages provide a valid basis for estimating bioclimates on the continental scale. However, the accuracy of the estimates is strongly tied to the number of species in the target assemblage, and the analog method is necessarily constrained to produce estimates that fall within the range of observed values. We applied the four modern analog approaches and the mutual overlap (or "mutual climatic range") method to estimate bioclimatic conditions represented by the plant macrofossil assemblage from a packrat midden of Last Glacial Maximum age from southern Nevada. In general, the estimation approaches produced similar results in regard to moisture conditions, but there was a greater range of estimates for growing-degree days. Despite its limitations, the modern analog technique can provide paleoclimatic reconstructions that serve as the starting point for the interpretation of past climatic conditions.
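
    The analog step itself is simple to sketch: compute the Jaccard similarity between the target presence/absence assemblage and every grid point, then take the climate of the best match. This single-nearest-analog sketch with hypothetical taxa and climate values simplifies the paper's four approaches, which use multiple analogs:

```python
def jaccard(a, b):
    """Jaccard similarity of two presence/absence sets of taxa."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def best_analog(target, grid):
    """Climate of the grid point whose assemblage is most similar to
    the target assemblage (one-nearest-analog variant)."""
    best = max(grid, key=lambda site: jaccard(target, site["taxa"]))
    return best["climate"]

# toy grid: taxa sets with an associated mean annual temperature (hypothetical)
grid = [
    {"taxa": {"pine", "fir", "spruce"}, "climate": 4.0},
    {"taxa": {"oak", "hickory", "maple"}, "climate": 12.0},
    {"taxa": {"creosote", "mesquite"}, "climate": 18.0},
]
fossil = {"oak", "maple", "pine"}
assert best_analog(fossil, grid) == 12.0
```

    Averaging the climates of the top few matches, rather than taking only the best, is the usual way to stabilize such estimates.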

  15. Fully Automated Quantitative Estimation of Volumetric Breast Density from Digital Breast Tomosynthesis Images: Preliminary Results and Comparison with Digital Mammography and MR Imaging.

    PubMed

    Pertuz, Said; McDonald, Elizabeth S; Weinstein, Susan P; Conant, Emily F; Kontos, Despina

    2016-04-01

    Purpose To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Materials and Methods Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board-approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration-cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Results Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging-based estimates with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VBD estimates from DBT and MR imaging were not significant (P = .26). Conclusion Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment. (©) RSNA, 2015 Online supplemental material is available for this article. PMID:26491909

  16. Shared spatial effects on quantitative genetic parameters: accounting for spatial autocorrelation and home range overlap reduces estimates of heritability in wild red deer.

    PubMed

    Stopher, Katie V; Walling, Craig A; Morris, Alison; Guinness, Fiona E; Clutton-Brock, Tim H; Pemberton, Josephine M; Nussey, Daniel H

    2012-08-01

    Social structure, limited dispersal, and spatial heterogeneity in resources are ubiquitous in wild vertebrate populations. As a result, relatives share environments as well as genes, and environmental and genetic sources of similarity between individuals are potentially confounded. Quantitative genetic studies in the wild therefore typically account for easily captured shared environmental effects (e.g., parent, nest, or region). Fine-scale spatial effects are likely to be just as important in wild vertebrates, but have been largely ignored. We used data from wild red deer to build "animal models" to estimate additive genetic variance and heritability in four female traits (spring and rut home range size, offspring birth weight, and lifetime breeding success). We then, separately, incorporated spatial autocorrelation and a matrix of home range overlap into these models to estimate the effect of location or shared habitat on phenotypic variation. These terms explained a substantial amount of variation in all traits, and their inclusion resulted in reductions in heritability estimates of up to an order of magnitude for home range size. Our results highlight the potential of multiple covariance matrices to dissect environmental, social, and genetic contributions to phenotypic variation, and the importance of considering fine-scale spatial processes in quantitative genetic studies. PMID:22834741

  17. A Correlative Study of Splenic Parasite Score and Peripheral Blood Parasite Load Estimation by Quantitative PCR in Visceral Leishmaniasis.

    PubMed

    Sudarshan, Medhavi; Singh, Toolika; Chakravarty, Jaya; Sundar, Shyam

    2015-12-01

    Parasitological diagnosis of visceral leishmaniasis (VL) by splenic smear is highly sensitive, but it is associated with the risk of severe hemorrhage. In this study, the diagnosis of VL using quantitative PCR (qPCR) in peripheral blood was evaluated in 100 patients with VL. Blood parasitemia ranged from 5 to 93,688 Leishmania parasite genomes/ml of blood and positively correlated with splenic score (P<0.0001; r2=0.58). Therefore, quantification of parasite genomes by qPCR can replace invasive procedures for diagnostic and prognostic evaluations. PMID:26400788

  18. Sensitivity Analyses of Exposure Estimates from a Quantitative Job-exposure Matrix (SYN-JEM) for Use in Community-based Studies

    PubMed Central

    Peters, Susan

    2013-01-01

    Objectives: We describe the elaboration and sensitivity analyses of a quantitative job-exposure matrix (SYN-JEM) for respirable crystalline silica (RCS). The aim was to gain insight into the robustness of the SYN-JEM RCS estimates based on critical decisions taken in the elaboration process. Methods: SYN-JEM for RCS exposure consists of three axes (job, region, and year) based on estimates derived from a previously developed statistical model. To elaborate SYN-JEM, several decisions were taken: i.e. the application of (i) a single time trend; (ii) region-specific adjustments in RCS exposure; and (iii) a prior job-specific exposure level (by the semi-quantitative DOM-JEM), with an override of 0 mg/m3 for jobs a priori defined as non-exposed. Furthermore, we assumed that exposure levels reached a ceiling in 1960 and remained constant prior to this date. We applied SYN-JEM to the occupational histories of subjects from a large international pooled community-based case–control study. Cumulative exposure levels derived with SYN-JEM were compared with those from alternative models, described by Pearson correlation (Rp) and differences in unit of exposure (mg/m3-year). Alternative models concerned changes in application of job- and region-specific estimates and exposure ceiling, and omitting the a priori exposure ranking. Results: Cumulative exposure levels for the study subjects ranged from 0.01 to 60 mg/m3-years, with a median of 1.76 mg/m3-years. Exposure levels derived from SYN-JEM and alternative models were overall highly correlated (Rp > 0.90), although somewhat lower when omitting the region estimate (Rp = 0.80) or not taking into account the assigned semi-quantitative exposure level (Rp = 0.65). Modification of the time trend (i.e. exposure ceiling at 1950 or 1970, or assuming a decline before 1960) caused the largest changes in absolute exposure levels (26–33% difference), but without changing the relative ranking (Rp = 0.99). Conclusions: Exposure estimates
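
    The cumulative-exposure calculation underlying these comparisons can be sketched as intensity × duration summed over an occupational history, with the pre-1960 ceiling applied. The dict-based JEM and all job codes and values below are illustrative stand-ins for SYN-JEM, not its actual estimates:

```python
def cumulative_exposure(history, jem):
    """Cumulative RCS exposure (mg/m3-years) from an occupational
    history. The JEM is a plain dict keyed by (job, region, year);
    exposure is assumed constant before the 1960 ceiling, mirroring
    the assumption described in the abstract."""
    total = 0.0
    for job, region, start, end in history:          # end year exclusive
        for year in range(start, end):
            total += jem[(job, region, max(year, 1960))]
    return total

# hypothetical JEM: one job/region, a step change in intensity in 1980
jem = {("miner", "EU", y): (0.10 if y < 1980 else 0.05)
       for y in range(1960, 2000)}
history = [("miner", "EU", 1955, 1965), ("miner", "EU", 1985, 1990)]
# 1955-1959 use the 1960 ceiling value: 10 y at 0.10 + 5 y at 0.05
assert abs(cumulative_exposure(history, jem) - 1.25) < 1e-9
```

    Sensitivity analyses like those in the paper amount to swapping in alternative `jem` tables (different time trends, no region adjustment) and comparing the resulting totals.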

  19. Evaluation of quantitative imaging methods for organ activity and residence time estimation using a population of phantoms having realistic variations in anatomy and uptake

    PubMed Central

    He, Bin; Du, Yong; Segars, W. Paul; Wahl, Richard L.; Sgouros, George; Jacene, Heather; Frey, Eric C.

    2009-01-01

    Estimating organ residence times is an essential part of patient-specific dosimetry for radioimmunotherapy (RIT). Quantitative imaging methods for RIT are often evaluated using a single physical or simulated phantom but are intended to be applied clinically where there is variability in patient anatomy, biodistribution, and biokinetics. To provide a more relevant evaluation, the authors have thus developed a population of phantoms with realistic variations in these factors and applied it to the evaluation of quantitative imaging methods both to find the best method and to demonstrate the effects of these variations. Using whole body scans and SPECT∕CT images, organ shapes and time-activity curves of 111In ibritumomab tiuxetan were measured in dosimetrically important organs in seven patients undergoing a high dose therapy regimen. Based on these measurements, we created a 3D NURBS-based cardiac-torso (NCAT)-based phantom population. SPECT and planar data at realistic count levels were then simulated using previously validated Monte Carlo simulation tools. The projections from the population were used to evaluate the accuracy and variation in accuracy of residence time estimation methods that used a time series of SPECT and planar scans. Quantitative SPECT (QSPECT) reconstruction methods were used that compensated for attenuation, scatter, and the collimator-detector response. Planar images were processed with a conventional (CPlanar) method that used geometric mean attenuation and triple-energy window scatter compensation and a quantitative planar (QPlanar) processing method that used model-based compensation for image degrading effects. Residence times were estimated from activity estimates made at each of five time points. The authors also evaluated hybrid methods that used CPlanar or QPlanar time-activity curves rescaled to the activity estimated from a single QSPECT image. The methods were evaluated in terms of mean relative error and standard deviation of
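
    The residence-time step described above (activity estimates at five time points integrated over time) can be sketched as trapezoidal integration of a time-activity curve plus an analytic tail. Assuming only physical decay after the last scan is a simplification, and all numbers below are illustrative rather than taken from the study:

```python
import math

def residence_time_h(times_h, activities_mbq, injected_mbq, half_life_h):
    """Organ residence time (hours): time-integrated activity divided
    by injected activity. Trapezoids between measured time points,
    plus a tail assuming pure physical decay after the last scan."""
    auc = 0.0
    for i in range(1, len(times_h)):
        auc += 0.5 * (activities_mbq[i - 1] + activities_mbq[i]) \
                   * (times_h[i] - times_h[i - 1])
    lam = math.log(2.0) / half_life_h
    auc += activities_mbq[-1] / lam   # integral of A_last * exp(-lam * t)
    return auc / injected_mbq

# illustrative 111In-like example (physical half-life ~67.3 h), five scans
t = [1.0, 24.0, 48.0, 96.0, 144.0]
a = [100.0, 80.0, 60.0, 35.0, 20.0]
rt = residence_time_h(t, a, injected_mbq=185.0, half_life_h=67.3)
```

    The methods compared in the paper differ in how the five activity estimates are obtained (QSPECT, CPlanar, QPlanar, or hybrids), not in this integration step.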

  20. Evaluation of quantitative imaging methods for organ activity and residence time estimation using a population of phantoms having realistic variations in anatomy and uptake

    SciTech Connect

    He Bin; Du Yong; Segars, W. Paul; Wahl, Richard L.; Sgouros, George; Jacene, Heather; Frey, Eric C.

    2009-02-15

    Estimating organ residence times is an essential part of patient-specific dosimetry for radioimmunotherapy (RIT). Quantitative imaging methods for RIT are often evaluated using a single physical or simulated phantom but are intended to be applied clinically where there is variability in patient anatomy, biodistribution, and biokinetics. To provide a more relevant evaluation, the authors have thus developed a population of phantoms with realistic variations in these factors and applied it to the evaluation of quantitative imaging methods both to find the best method and to demonstrate the effects of these variations. Using whole body scans and SPECT/CT images, organ shapes and time-activity curves of 111In ibritumomab tiuxetan were measured in dosimetrically important organs in seven patients undergoing a high dose therapy regimen. Based on these measurements, we created a 3D NURBS-based cardiac-torso (NCAT)-based phantom population. SPECT and planar data at realistic count levels were then simulated using previously validated Monte Carlo simulation tools. The projections from the population were used to evaluate the accuracy and variation in accuracy of residence time estimation methods that used a time series of SPECT and planar scans. Quantitative SPECT (QSPECT) reconstruction methods were used that compensated for attenuation, scatter, and the collimator-detector response. Planar images were processed with a conventional (CPlanar) method that used geometric mean attenuation and triple-energy window scatter compensation and a quantitative planar (QPlanar) processing method that used model-based compensation for image degrading effects. Residence times were estimated from activity estimates made at each of five time points. The authors also evaluated hybrid methods that used CPlanar or QPlanar time-activity curves rescaled to the activity estimated from a single QSPECT image. 
The methods were evaluated in terms of mean relative error and standard deviation of the

  1. Evaluation of quantitative imaging methods for organ activity and residence time estimation using a population of phantoms having realistic variations in anatomy and uptake.

    PubMed

    He, Bin; Du, Yong; Segars, W Paul; Wahl, Richard L; Sgouros, George; Jacene, Heather; Frey, Eric C

    2009-02-01

    Estimating organ residence times is an essential part of patient-specific dosimetry for radioimmunotherapy (RIT). Quantitative imaging methods for RIT are often evaluated using a single physical or simulated phantom but are intended to be applied clinically where there is variability in patient anatomy, biodistribution, and biokinetics. To provide a more relevant evaluation, the authors have thus developed a population of phantoms with realistic variations in these factors and applied it to the evaluation of quantitative imaging methods both to find the best method and to demonstrate the effects of these variations. Using whole body scans and SPECT/CT images, organ shapes and time-activity curves of 111In ibritumomab tiuxetan were measured in dosimetrically important organs in seven patients undergoing a high dose therapy regimen. Based on these measurements, we created a 3D NURBS-based cardiac-torso (NCAT)-based phantom population. SPECT and planar data at realistic count levels were then simulated using previously validated Monte Carlo simulation tools. The projections from the population were used to evaluate the accuracy and variation in accuracy of residence time estimation methods that used a time series of SPECT and planar scans. Quantitative SPECT (QSPECT) reconstruction methods were used that compensated for attenuation, scatter, and the collimator-detector response. Planar images were processed with a conventional (CPlanar) method that used geometric mean attenuation and triple-energy window scatter compensation and a quantitative planar (QPlanar) processing method that used model-based compensation for image degrading effects. Residence times were estimated from activity estimates made at each of five time points. The authors also evaluated hybrid methods that used CPlanar or QPlanar time-activity curves rescaled to the activity estimated from a single QSPECT image. 
The methods were evaluated in terms of mean relative error and standard deviation of the

  2. Quantitative trait locus (QTL) mapping using different testers and independent population samples in maize reveals low power of QTL detection and large bias in estimates of QTL effects.

    PubMed

    Melchinger, A E; Utz, H F; Schön, C C

    1998-05-01

    The efficiency of marker-assisted selection (MAS) depends on the power of quantitative trait locus (QTL) detection and unbiased estimation of QTL effects. Two independent samples (N = 344 and N = 107) of F2 plants were genotyped for 89 RFLP markers. For each sample, testcross (TC) progenies of the corresponding F3 lines with two testers were evaluated in four environments. QTL for grain yield and other agronomically important traits were mapped in both samples. QTL effects were estimated from the same data as used for detection and mapping of QTL (calibration) and, based on QTL positions from calibration, from the second, independent sample (validation). For all traits and both testers we detected a total of 107 QTL with N = 344, and 39 QTL with N = 107, of which only 20 were in common. Consistency of QTL effects across testers was in agreement with corresponding genotypic correlations between the two TC series. Most QTL displayed no significant QTL × environment or epistatic interactions. Estimates of the proportion of the phenotypic and genetic variance explained by QTL were considerably reduced when derived from the independent validation sample as opposed to estimates from the calibration sample. We conclude that, unless QTL effects are estimated from an independent sample, they can be inflated, resulting in an overly optimistic assessment of the efficiency of MAS. PMID:9584111

  3. Comprehensive Energetic Scale for Quantitatively Estimating the Fluorinating Potential of N-F Reagents in Electrophilic Fluorinations.

    PubMed

    Xue, Xiao-Song; Wang, Ya; Li, Man; Cheng, Jin-Pei

    2016-05-20

    Quantitative knowledge of the fluorinating strength of electrophilic N-F reagents is of crucial importance for the rational design and optimization of novel reagents and new reactions. Herein, we report the first systematic computation of fluorinating potentials of 130 electrophilic N-F reagents in two commonly used solvents, dichloromethane and acetonitrile, in terms of the N-F bond heterolysis energies as expressed by the fluorine plus detachment (FPD) values. The calculated FPD scales of the 130 N-F reagents cover ranges from 112.3 to 290.4 kcal/mol and 110.9 to 278.4 kcal/mol in dichloromethane and acetonitrile, respectively. This comprehensive FPD database provides a valuable quantitative guide for studying the influence of structural variation on the fluorinating strength of the N-F reagents, opening the door to the rational design of novel reagents with appropriate fluorinating strength for specific purposes. It is demonstrated that the FPD values can reproduce the reactivity order of electrophilic N-F reagents better than other parameters. PMID:27120313

  4. Quantitative Estimates of the Social Benefits of Learning, 2: Health (Depression and Obesity). Wider Benefits of Learning Research Report.

    ERIC Educational Resources Information Center

    Feinstein, Leon

    This report used information from two United Kingdom national cohorts to estimate the magnitude of the effects of learning on depression and obesity. Members of the two cohorts were surveyed in 1999-00, when those in the 1970 cohort were age 33 years and those in the 1958 cohort were age 42 years. Overall, education was an important risk factor…

  5. A Concurrent Mixed Methods Approach to Examining the Quantitative and Qualitative Meaningfulness of Absolute Magnitude Estimation Scales in Survey Research

    ERIC Educational Resources Information Center

    Koskey, Kristin L. K.; Stewart, Victoria C.

    2014-01-01

    This small "n" observational study used a concurrent mixed methods approach to address a void in the literature with regard to the qualitative meaningfulness of the data yielded by absolute magnitude estimation scaling (MES) used to rate subjective stimuli. We investigated whether respondents' scales progressed from less to more and…

  6. Apparent Polyploidization after Gamma Irradiation: Pitfalls in the Use of Quantitative Polymerase Chain Reaction (qPCR) for the Estimation of Mitochondrial and Nuclear DNA Gene Copy Numbers

    PubMed Central

    Kam, Winnie W. Y.; Lake, Vanessa; Banos, Connie; Davies, Justin; Banati, Richard

    2013-01-01

    Quantitative polymerase chain reaction (qPCR) has been widely used to quantify changes in gene copy numbers after radiation exposure. Here, we show that gamma irradiation ranging from 10 to 100 Gy of cells and cell-free DNA samples significantly affects the measured qPCR yield, due to radiation-induced fragmentation of the DNA template and, therefore, introduces errors into the estimation of gene copy numbers. The radiation-induced DNA fragmentation and, thus, measured qPCR yield varies with temperature not only in living cells, but also in isolated DNA irradiated under cell-free conditions. In summary, the variability in measured qPCR yield from irradiated samples introduces a significant error into the estimation of both mitochondrial and nuclear gene copy numbers and may give spurious evidence for polyploidization. PMID:23722662
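
    The reported bias can be rationalized with a toy Poisson fragmentation model: if strand breaks are random along the template, the fraction of molecules amplifiable over a given amplicon decays as exp(-breaks per bp × amplicon length), so longer amplicons lose proportionally more signal. This model is our illustration of the mechanism, not the paper's analysis:

```python
import math

def apparent_copies(true_copies, breaks_per_kb, amplicon_bp):
    """Expected qPCR-apparent copy number when breaks are Poisson-
    distributed along the template: only molecules with zero breaks
    inside the amplicon amplify."""
    p_intact = math.exp(-breaks_per_kb / 1000.0 * amplicon_bp)
    return true_copies * p_intact

# longer amplicons lose proportionally more signal after irradiation
short = apparent_copies(1000, breaks_per_kb=0.5, amplicon_bp=100)
long_ = apparent_copies(1000, breaks_per_kb=0.5, amplicon_bp=1000)
assert short > long_
```

    Under this model, a ratio of two targets measured with different amplicon lengths (e.g. mitochondrial vs. nuclear) shifts after irradiation even when the true copy ratio is unchanged, which is the spurious "polyploidization" the authors warn about.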

  7. Quantitative estimation of granitoid composition from thermal infrared multispectral scanner (TIMS) data, Desolation Wilderness, northern Sierra Nevada, California

    NASA Technical Reports Server (NTRS)

    Sabine, Charles; Realmuto, Vincent J.; Taranik, James V.

    1994-01-01

    We have produced images that quantitatively depict modal and chemical parameters of granitoids using an image processing algorithm called MINMAP that fits Gaussian curves to normalized emittance spectra recovered from thermal infrared multispectral scanner (TIMS) radiance data. We applied the algorithm to TIMS data from the Desolation Wilderness, an extensively glaciated area near the northern end of the Sierra Nevada batholith that is underlain by Jurassic and Cretaceous plutons that range from diorite and anorthosite to leucogranite. The wavelength corresponding to the calculated emittance minimum, λmin, varies linearly with quartz content, SiO2, and other modal and chemical parameters. Thematic maps of quartz and silica content derived from λmin values distinguish bodies of diorite from surrounding granite, identify outcrops of anorthosite, and separate felsic, intermediate, and mafic rocks.
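
    MINMAP fits Gaussian curves to the emittance spectra; a much-simplified stand-in for locating λmin is a parabolic vertex fit through the lowest band and its two neighbours, sketched below. Equal band spacing is assumed here (real TIMS bands are unequally spaced), and the band wavelengths in the example are synthetic:

```python
def lambda_min(wavelengths_um, emittance):
    """Estimate the wavelength of the emittance minimum from discrete
    bands by fitting a parabola through the lowest sample and its two
    neighbours (a simplified stand-in for MINMAP's Gaussian fit)."""
    i = min(range(1, len(emittance) - 1), key=lambda j: emittance[j])
    x0, x1, x2 = wavelengths_um[i - 1:i + 2]
    y0, y1, y2 = emittance[i - 1:i + 2]
    h = x1 - x0                       # equal band spacing assumed
    denom = y0 - 2.0 * y1 + y2
    # vertex of the interpolating parabola; fall back to the band center
    return x1 + h * (y0 - y2) / (2.0 * denom) if denom else x1

# synthetic spectrum with a true minimum at 9.2 um
w = [8.0, 8.5, 9.0, 9.5, 10.0]
e = [(x - 9.2) ** 2 + 0.9 for x in w]
assert abs(lambda_min(w, e) - 9.2) < 1e-9
```

    Mapping such a λmin estimate per pixel, then regressing it against quartz or SiO2 content, reproduces the linear-relationship step the abstract describes.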

  8. Quantitative estimation of the spin-wave features supported by a spin-torque-driven magnetic waveguide

    SciTech Connect

    Consolo, Giancarlo; Currò, Carmela; Valenti, Giovanna

    2014-12-07

    The main features of the spin-waves excited at the threshold via spin-polarized currents in a one-dimensional normally-to-plane magnetized waveguide are quantitatively determined both analytically and numerically. In particular, the dependence of the threshold current, frequency, wavenumber, and decay length is investigated as a function of the size of the nanocontact area through which the electric current is injected. From the analytical viewpoint, such a goal required solving the linearized Landau-Lifshitz-Gilbert-Slonczewski equation together with boundary and matching conditions associated with the waveguide geometry. Owing to the complexity of the resulting transcendental system, particular solutions have been obtained in the cases of elongated and contracted nanocontacts. These results have been successfully compared with those arising from numerical integration of the abovementioned transcendental system and with micromagnetic simulations. This quantitative agreement has been achieved thanks to the model considered here, which explicitly takes into account the diagonal demagnetizing factors of a rectangular prism as well as the dependence of the relaxation rate on the wavenumber. Our analysis confirmed that the spin-wave features supported by such a waveguide geometry are significantly different from the ones observed in classical two-dimensional nanocontact devices. Moreover, it has been proved that the characteristic parameters depend strongly on the material properties and on the modulus of the external field, but they can be independent of the nanocontact length. Finally, it is shown that spin-transfer oscillators based on contracted nanocontacts have a better capability to transmit spin-waves over large distances.

  9. Moving from pixel to object scale when inverting radiative transfer models for quantitative estimation of biophysical variables in vegetation (Invited)

    NASA Astrophysics Data System (ADS)

    Atzberger, C.

    2013-12-01

    The robust and accurate retrieval of vegetation biophysical variables using RTM is seriously hampered by the ill-posedness of the inverse problem. This contribution presents our object-based inversion approach and evaluates it against measured data. The proposed method takes advantage of the fact that nearby pixels are generally more similar than those at a larger distance. For example, within a given vegetation patch, nearby pixels often share similar leaf angular distributions. This leads to spectral co-variations in the n-dimensional spectral feature space, which can be used for regularization purposes. Using a set of leaf area index (LAI) measurements (n=26) acquired over alfalfa, sugar beet, and garlic crops of the Barrax test site (Spain), it is demonstrated that the proposed regularization using neighbourhood information yields more accurate results than the traditional pixel-based inversion. Figure: principle of the ill-posed inverse problem and the proposed solution, illustrated in the red-nIR feature space using PROSAIL. [A] Spectral trajectory ('soil trajectory') obtained for one average leaf angle (ALA) and one soil brightness (αsoil) as LAI varies between 0 and 10. [B] 'Soil trajectories' for five soil brightness values and three leaf angles. [C] The ill-posed inverse problem: different combinations of ALA × αsoil yield an identical crossing point. [D] Object-based RTM inversion: only one 'soil trajectory' fits all nine pixels within a gliding (3×3) window. The black dots (plus the rectangle, marking the central pixel) represent the hypothetical positions of nine pixels within a 3×3 gliding window. Assuming that over short distances (~1 pixel) variations in soil brightness can be neglected, the proposed object-based inversion searches for one common set of ALA × αsoil such that the resulting 'soil trajectory' best fits the nine measured pixels. Figure: ground-measured vs. retrieved LAI values for three crops; left: proposed object-based approach; right: pixel-based inversion.

  10. Hawaii Clean Energy Initiative (HCEI) Scenario Analysis: Quantitative Estimates Used to Facilitate Working Group Discussions (2008-2010)

    SciTech Connect

    Braccio, R.; Finch, P.; Frazier, R.

    2012-03-01

    This report provides details on the Hawaii Clean Energy Initiative (HCEI) Scenario Analysis, which was conducted to identify potential policy options and evaluate their impact on reaching the 70% HCEI goal, to present possible pathways to attain the goal based on currently available technology, with an eye to initiatives under way in Hawaii, and to provide an 'order-of-magnitude' cost estimate and a jump-start to action that would be adjusted with a better understanding of the technologies and market.

  11. A semianalytical algorithm for quantitatively estimating sediment and atmospheric deposition flux from MODIS-derived sea ice albedo in the Bohai Sea, China

    NASA Astrophysics Data System (ADS)

    Xu, Zhantang; Hu, Shuibo; Wang, Guifen; Zhao, Jun; Yang, Yuezhong; Cao, Wenxi; Lu, Peng

    2016-05-01

    Quantitative estimates of particulate matter (PM) concentration in sea ice using remote sensing data are helpful for studies of sediment transport and atmospheric dust deposition flux. In this study, the difference between the measured dirty and estimated clean albedo of sea ice was calculated, and a relationship between this albedo difference and PM concentration was found using field and laboratory measurements. A semianalytical algorithm for estimating PM concentration in sea ice was established. The algorithm was then applied to MODIS data over the Bohai Sea, China. Comparisons between MODIS-derived and in situ measured PM concentrations showed good agreement, with a mean absolute percentage difference of 31.2%. From 2005 to 2010, the MODIS-derived annual average PM concentration was approximately 0.025 g/L at the beginning of January. After a month of atmospheric dust deposition, it increased to 0.038 g/L. Atmospheric dust deposition flux was estimated to be 2.50 t/km2/month, similar to the 2.20 t/km2/month reported in a previous study. The result was compared with on-site measurements at a nearby ground station, which was close to industrial and residential areas where dust deposition was larger than over the sea; although the absolute magnitudes of the two data sets differed, they demonstrated similar trends.
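    The agreement metric quoted above (mean absolute percentage difference of 31.2%) can be sketched as follows; the paired values below are illustrative placeholders, not the study's data.

```python
def mean_abs_pct_diff(estimated, measured):
    """Mean absolute percentage difference between paired estimates
    and in situ measurements, expressed in percent."""
    assert len(estimated) == len(measured) and len(measured) > 0
    return 100.0 * sum(abs(e - m) / m for e, m in zip(estimated, measured)) / len(measured)

# Hypothetical paired PM concentrations (g/L): MODIS-derived vs. in situ
modis = [0.024, 0.040, 0.031]
in_situ = [0.025, 0.038, 0.029]
print(round(mean_abs_pct_diff(modis, in_situ), 1))
```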

  12. Rat- and human-based risk estimates of lung cancer from occupational exposure to poorly-soluble particles: A quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Kuempel, E. D.; Smith, R. J.; Dankovic, D. A.; Stayner, L. T.

    2009-02-01

    In risk assessment there is a need for quantitative evaluation of the capability of animal models to predict disease risks in humans. In this paper, we compare the rat- and human-based excess risk estimates for lung cancer from working lifetime exposures to inhaled poorly-soluble particles. The particles evaluated include those for which long-term dose-response data are available in both species, i.e., coal dust, carbon black, titanium dioxide, silica, and diesel exhaust particulate. The excess risk estimates derived from the rat data were generally lower than those derived from the human studies, and none of the rat- and human-based risk estimates were significantly different (all p-values > 0.05). Residual uncertainty in whether the rat-based risk estimates would over- or under-predict the true excess risks of lung cancer from inhaled poorly-soluble particles in humans is due in part to the low power of the available human studies, limited particle size exposure data for humans, and ambiguity about the best animal models and extrapolation methods.

  13. Relationship between N2O Fluxes from an Almond Soil and Denitrifying Bacterial Populations Estimated by Quantitative PCR

    NASA Astrophysics Data System (ADS)

    Matiasek, M.; Suddick, E. C.; Smart, D. R.; Scow, K. M.

    2008-12-01

    Cultivated soils emit substantial quantities of nitrous oxide (N2O), a greenhouse gas with almost 300 times the radiative forcing potential of CO2. Agriculture-related activities generate from 6 to 35 Tg N2O-N per year, or about 60 to 70% of global production. The microbial processes of nitrification, denitrification and nitrifier denitrification are major biogenic sources of N2O to the atmosphere from soils. Denitrification is considered the major source of N2O, especially when soils are wet. The microbial N transformations that produce N2O depend primarily on nitrogen (N) fertilizer, with water content, available carbon and soil temperature being secondary controllers. Although microbial processes are responsible for N2O emissions, very little is known about the numbers or types of populations involved. The objective of this study was to relate changes in denitrifying population densities, measured using quantitative PCR (qPCR) of functional genes, to N2O emissions in a fertilized almond orchard. Quantitative PCR targeted three specific genes involved in denitrification: nirS, nirK and nosZ. Copy numbers of the genes were related back to population densities and the portion of organisms likely to produce nitrous oxide. The study site, a 21.7 acre almond orchard fitted with micro-sprinklers, was fertigated (irrigated and fertilized simultaneously) with 50 lbs/acre sodium nitrate in late March 2008, then irrigated weekly. Immediately after the initial fertigation, fluxes of N2O and CO2, moisture content, inorganic N and denitrification gene copy numbers were measured 6 times over 24 days. Although N2O emissions increased following fertigation, there was no consistent increase in any of the targeted genes. The genes nirK and nirS ranged from 0.4-1.4 × 10⁷ and 0.4-1.4 × 10⁸ copies per g soil, respectively, whereas nosZ ranged from 2-8 × 10⁶ copies per g soil. Considerable variation, compounded by the small sample sizes used for DNA analysis, made it difficult
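    Absolute qPCR quantification of the kind described above converts a quantification cycle (Cq) to gene copies through a log-linear standard curve. A minimal sketch follows; the slope, intercept, and soil mass are hypothetical placeholders, not the study's calibration values.

```python
def copies_from_cq(cq, slope=-3.32, intercept=38.0):
    """Convert a qPCR quantification cycle (Cq) to gene copy number via
    the standard curve Cq = slope * log10(copies) + intercept.
    A slope of -3.32 corresponds to ~100% amplification efficiency.
    The slope/intercept here are hypothetical, not the study's values."""
    return 10 ** ((cq - intercept) / slope)

# Normalize to copies per gram of soil (hypothetical 0.5 g extracted):
copies = copies_from_cq(15.0)
print(f"{copies / 0.5:.3g} copies per g soil")
```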

  14. Quantitative estimates of Mid- to late Holocene Climate Variability in northeastern Siberia inferred from chironomids in lake sediments

    NASA Astrophysics Data System (ADS)

    Nazarova, Larisa; Diekmann, Bernhard; Pestrjakova, Ludmila; Herzschuh, Ulrike; Subetto, Dmitry

    2010-05-01

    Yakutia (Russia, northeastern Eurasia) represents one of Earth's most extreme climatic settings, with deep-reaching frozen ground and a semiarid continental climate showing the highest seasonal temperature contrasts in the northern hemisphere. The amplitude of temperature variations over the year sometimes exceeds 100°C. There are few quantitative palaeoecological studies in Siberia, and their data need to be tested against quantitative studies from other sites in this region, inferred from different proxies and using regional calibration datasets and temperature models that are still lacking. Chironomid midges (Insecta, Diptera, Chironomidae) have been widely used to reconstruct past climate variability in many areas of Western Europe and North America. A chironomid-based mean July air temperature inference model has been developed, based on a modern calibration set of 200 lakes sampled along a transect from 110° to 159° E and 61° to 73° N in northern Russia. The inference model was applied to sediment cores from two lakes in Central Yakutia in order to reconstruct past July air temperatures. The lacustrine records span the mid- to late Holocene. The downcore variability in the chironomid assemblages and the composition of organic matter give evidence of climate-driven and interrelated changes in biological productivity, lacustrine trophic states, and lake-level fluctuations. Three phases of climate development in Central Yakutia can be derived from the geochemical composition of the lake cores and from the chironomid-inferred mean July air temperatures. Organic matter content reached maximum values between 7000 and 4500 yr BP; the sedimentation rate is especially high, and numerous mollusc shells are found in the sediments. All this, along with the reconstructed air temperatures, confirms that the mid-Holocene optimum in Central Yakutia took place in this period, with maximum temperatures up to 4°C above present-day values. Strong

  15. Quantitative estimation of Hyblaea puera NPV production in three larval stages of the teak defoliator, Hyblaea puera (Cramer).

    PubMed

    Biji, C P; Sudheendrakumar, V V; Sajeev, T V

    2006-09-01

    Hyblaea puera nucleopolyhedrovirus (HpNPV) is a potential biocontrol agent of the teak defoliator, Hyblaea puera (Cramer) (Lepidoptera: Hyblaeidae). To quantify the growth of the virus in the host larvae, three larval stages of the teak defoliator were subjected to quantitative bioassays using specified dilutions of HpNPV. HpNPV production was found to depend on the dose, the incubation period and the stage-specific responses of the host insect. As larvae matured, virus production per mg body weight was not in constant proportion to the increase in body weight. The combination yielding the greatest virus production, 3.55 × 10⁹ polyhedral occlusion bodies (POBs), was one in which larvae weighing 26-37 mg were fed 1 × 10⁶ POBs, incubated for 6 h and harvested at 72 h post infection (h p.i.). Fourth-instar larvae were more productive than third- and fifth-instar larvae, making the fourth instar an ideal candidate for mass production of the virus in vivo. PMID:16687178

  16. A quantitative estimate of the function of soft-bottom sheltered coastal areas as essential flatfish nursery habitat

    NASA Astrophysics Data System (ADS)

    Trimoreau, E.; Archambault, B.; Brind'Amour, A.; Lepage, M.; Guitton, J.; Le Pape, O.

    2013-11-01

    Essential fish habitat suitability (EFHS) models and geographic information system (GIS) were combined to describe nursery habitats for three flatfish species (Solea solea, Pleuronectes platessa, Dicologlossa cuneata) in the Bay of Biscay (Western Europe), using physical parameters known or suspected to influence juvenile flatfish spatial distribution and density (i.e. bathymetry, sediment, estuarine influence and wave exposure). The effects of habitat features on juvenile distribution were first calculated from EFHS models, used to identify the habitats in which juveniles are concentrated. The EFHS model for S. solea confirmed previous findings regarding its preference for shallow soft bottom areas and provided new insights relating to the significant effect of wave exposure on nursery habitat suitability. The two other models extended these conclusions with some discrepancies among species related to their respective niches. Using a GIS, quantitative density maps were produced from EFHS model predictions. The respective areas of the different habitats were determined and their relative contributions (density × area) to the total amount of juveniles were calculated at the scale of stock management, in the Bay of Biscay. Shallow and muddy areas contributed 70% of total juvenile relative abundance while representing only 16% of the coastal area, suggesting that they should be considered as essential habitats for these three flatfish species. For S. solea and P. platessa, wave exposure explained the propensity for sheltered areas, where concentration of juveniles was higher. Distribution maps of P. platessa and D. cuneata juveniles also revealed opposite spatial and temporal trends which were explained by the respective biogeographical distributions of these two species, close to their southern and northern limit respectively, and by their responses to hydroclimatic trends.
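    The "relative contribution (density × area)" calculation above is a simple weighted share. The sketch below uses hypothetical densities chosen only to illustrate how a small, dense habitat (16% of the area) can supply roughly 70% of juveniles; these are not the study's measurements.

```python
# Relative contribution of each habitat = density * area, normalized.
# Densities (juveniles per unit area) and areas (% of coastal zone)
# are illustrative placeholders, not the study's data.
habitats = {
    "shallow muddy": {"density": 30.0, "area": 16.0},
    "other coastal": {"density": 2.45, "area": 84.0},
}
totals = {name: h["density"] * h["area"] for name, h in habitats.items()}
grand = sum(totals.values())
for name, t in totals.items():
    print(f"{name}: {100 * t / grand:.0f}% of juveniles")
```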

  17. Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty

    PubMed Central

    Kellman, Philip J.; Mnookin, Jennifer L.; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E.

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  18. A quantitative way to estimate clinical off-target effects for human membrane brain targets in CNS research and development

    PubMed Central

    Spiros, Athan; Geerts, Hugo

    2012-01-01

    Although many preclinical programs in central nervous system research and development intend to develop highly selective and potent molecules directed at the primary target, they often act upon other off-target receptors. The simple rule of taking the ratios of affinities for the candidate drug at the different receptors is flawed since the affinity of the endogenous ligand for that off-target receptor or drug exposure is not taken into account. We have developed a mathematical receptor competition model that takes into account the competition between active drug moiety and the endogenous neurotransmitter to better assess the off-target effects on postsynaptic receptor activation under the correct target exposure conditions. As an example, we investigate the possible functional effects of the weak off-target effects for dopamine-1 receptor (D1R) in a computer simulation of a dopaminergic cortical synapse that is calibrated using published fast-cyclic rodent voltammetry and human imaging data in subjects with different catechol-O-methyltransferase genotypes. We identify the conditions under which off-target effects at the D1R can lead to clinically detectable consequences on cognitive tests, such as the N-back working memory test. We also demonstrate that certain concentrations of dimebolin (Dimebon), a recently tested Alzheimer drug, can affect D1R activation resulting in clinically detectable cognitive decrease. This approach can be extended to other receptor systems and can improve the selection of clinical candidate compounds by potentially dialing-out harmful off-target effects or dialing-in beneficial off-target effects in a quantitative and controlled way.
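    The abstract's point that affinity ratios alone are misleading follows directly from competitive-binding equilibrium: drug occupancy at an off-target receptor also depends on the endogenous neurotransmitter's concentration and affinity. A minimal sketch of standard competitive occupancy (not the authors' full synapse model; all numbers hypothetical):

```python
def drug_occupancy(drug_conc, drug_kd, ligand_conc, ligand_kd):
    """Fractional receptor occupancy by a drug competing with the
    endogenous ligand at equilibrium (standard competitive binding).
    Concentrations and Kd values must share the same units (e.g., nM)."""
    d = drug_conc / drug_kd
    l = ligand_conc / ligand_kd
    return d / (1.0 + d + l)

# Identical drug exposure and affinity, very different occupancy
# depending on endogenous dopamine tone (hypothetical values):
low_tone = drug_occupancy(10.0, 100.0, 5.0, 50.0)
high_tone = drug_occupancy(10.0, 100.0, 500.0, 50.0)
print(round(low_tone, 3), round(high_tone, 3))
```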

  19. Accurate ab initio energy gradients in chemical compound space.

    PubMed

    Anatole von Lilienfeld, O

    2009-10-28

    Analytical potential energy derivatives, based on the Hellmann-Feynman theorem, are presented for any pair of isoelectronic compounds. Since energies are not necessarily monotonic functions between compounds, these derivatives can fail to predict the right trends of the effect of alchemical mutation. However, quantitative estimates without additional self-consistency calculations can be made when the Hellmann-Feynman derivative is multiplied by a linearization coefficient that is obtained from a reference pair of compounds. These results suggest that accurate predictions can be made regarding any molecule's energetic properties as long as energies and gradients of three other molecules have been provided. The linearization coefficient can be interpreted as a quantitative measure of chemical similarity. Presented numerical evidence includes predictions of electronic eigenvalues of saturated and aromatic molecular hydrocarbons. PMID:19894922

  20. Quantitative testing of the methodology for genome size estimation in plants using flow cytometry: a case study of the Primulina genus

    PubMed Central

    Wang, Jing; Liu, Juan; Kang, Ming

    2015-01-01

    Flow cytometry (FCM) is a commonly used method for estimating genome size in many organisms. The use of FCM in plants is influenced by endogenous fluorescence inhibitors, which may cause inaccurate genome size estimates and thus falsify the relationship between genome size and phenotypic traits/ecological performance. Quantitative optimization of the FCM methodology minimizes such errors, yet few studies detail this methodology. We selected the genus Primulina, one of the most representative and diverse genera of the Old World Gesneriaceae, to evaluate the effect of methodology on genome size determination. Our results showed that buffer choice significantly affected genome size estimation in six of the eight species examined and altered the 2C-value (DNA content) by as much as 21.4%. Staining duration and propidium iodide (PI) concentration slightly affected the 2C-value. Our experiments showed better histogram quality when the samples were stained for 40 min at a PI concentration of 100 μg ml⁻¹. The quality of the estimates was not improved by 1-day incubation in the dark at 4°C or by centrifugation. Thus, our study determined an optimum protocol for genome size measurement in Primulina: LB01 buffer supplemented with 100 μg ml⁻¹ PI and staining for 40 min. This protocol also demonstrated high universality in other Gesneriaceae genera. We report the genome size of nine Gesneriaceae species for the first time. The results showed substantial genome size variation both within and among the species, with 2C-values ranging between 1.62 and 2.71 pg. Our study highlights the necessity of optimizing the FCM methodology prior to obtaining reliable genome size estimates in a given taxon. PMID:26042140
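    Once the FCM protocol is optimized, the 2C-value itself comes from a simple ratio of fluorescence peaks against an internal reference standard, which is the standard calculation in plant flow cytometry. The peak positions below are hypothetical.

```python
def genome_size_2c(sample_peak, reference_peak, reference_2c_pg):
    """Estimate a sample's 2C DNA content (pg) from PI-fluorescence peak
    positions relative to an internal reference standard:
    sample 2C = reference 2C * (sample peak / reference peak)."""
    return reference_2c_pg * (sample_peak / reference_peak)

# Hypothetical peak channels, with tomato (2C ~ 1.96 pg) as the standard:
print(round(genome_size_2c(165.0, 200.0, 1.96), 2))
```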

  1. Quantitative estimate of heat flow from a mid-ocean ridge axial valley, Raven field, Juan de Fuca Ridge: Observations and inferences

    NASA Astrophysics Data System (ADS)

    Salmi, Marie S.; Johnson, H. Paul; Tivey, Maurice A.; Hutnak, Michael

    2014-09-01

    A systematic heat flow survey using thermal blankets within the Endeavour segment of the Juan de Fuca Ridge axial valley provides quantitative estimates of the magnitude and distribution of conductive heat flow at a mid-ocean ridge, with the goal of testing current models of hydrothermal circulation within newly formed oceanic crust. Thermal blankets were deployed over an area of 700 by 450 m in the Raven hydrothermal vent field, located 400 m north of the Main Endeavour hydrothermal field. Heat flow values at the 176 successful blanket deployment sites ranged from 0 to 31 W m⁻². Approximately 53% of the sites recorded values lower than 100 mW m⁻², suggesting large areas of seawater recharge and advective extraction of lithospheric heat. High heat flow values were concentrated around relatively small "hot spots." Integration of heat flow values over the Raven survey area gives an estimated conductive heat output of 0.3 MW, an average of 0.95 W m⁻² over the survey area. Fluid circulation cell dimensions and scaling equations allow calculation of a Rayleigh number of approximately 700 in Layer 2A. The close proximity of high and low heat flow areas, coupled with previous estimates of surficial seafloor permeability, argues for the presence of small-scale hydrothermal fluid circulation cells within the high-porosity uppermost crustal layer of the axial seafloor.
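    The reported numbers are internally consistent: multiplying the average heat flow by the 700 m × 450 m survey footprint recovers the quoted total output, as this quick check shows.

```python
# Consistency check of the reported Raven-field figures: average
# conductive heat flow integrated over the survey footprint should
# recover the quoted ~0.3 MW total output.
avg_heat_flow = 0.95          # W m^-2, reported average
area = 700.0 * 450.0          # m^2, survey footprint
total_mw = avg_heat_flow * area / 1e6
print(f"{total_mw:.2f} MW")
```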

  2. Comparison of optical microscopy and quantitative polymerase chain reaction for estimating parasitaemia in patients with kala-azar and modelling infectiousness to the vector Lutzomyia longipalpis

    PubMed Central

    Silva, Jailthon C; Zacarias, Danielle A; Silva, Vladimir C; Rolão, Nuno; Costa, Dorcas L; Costa, Carlos HN

    2016-01-01

    Currently, the only method for identifying infective hosts with Leishmania infantum to the vector Lutzomyia longipalpis is xenodiagnosis. More recently, quantitative polymerase chain reaction (qPCR) has been used to model human reservoir competence by assuming that detection of parasite DNA indicates the presence of viable parasites for infecting vectors. Since this assumption has not been proven, this study aimed to verify this hypothesis. The concentration of amastigotes in the peripheral blood of 30 patients with kala-azar was microscopically verified by leukoconcentration and was compared to qPCR estimates. Parasites were identified in 4.8 mL of peripheral blood from 67% of the patients, at a very low concentration (average 0.3 parasites/mL). However, qPCR showed 93% sensitivity and the estimated parasitaemia was over a thousand times greater, both in blood and plasma, with higher levels in plasma than in blood. Furthermore, the microscopic count of circulating parasites and the qPCR parasitaemia estimates were not mathematically compatible with the published proportions of infected sandflies in xenodiagnostic studies. These findings suggest that qPCR does not measure the concentration of circulating parasites, but rather measures DNA from other sites, and that blood might not be the main source of infection for vectors. PMID:27439033

  3. Comparison of optical microscopy and quantitative polymerase chain reaction for estimating parasitaemia in patients with kala-azar and modelling infectiousness to the vector Lutzomyia longipalpis.

    PubMed

    Silva, Jailthon C; Zacarias, Danielle A; Silva, Vladimir C; Rolão, Nuno; Costa, Dorcas L; Costa, Carlos HN

    2016-07-18

    Currently, the only method for identifying infective hosts with Leishmania infantum to the vector Lutzomyia longipalpis is xenodiagnosis. More recently, quantitative polymerase chain reaction (qPCR) has been used to model human reservoir competence by assuming that detection of parasite DNA indicates the presence of viable parasites for infecting vectors. Since this assumption has not been proven, this study aimed to verify this hypothesis. The concentration of amastigotes in the peripheral blood of 30 patients with kala-azar was microscopically verified by leukoconcentration and was compared to qPCR estimates. Parasites were identified in 4.8 mL of peripheral blood from 67% of the patients, at a very low concentration (average 0.3 parasites/mL). However, qPCR showed 93% sensitivity and the estimated parasitaemia was over a thousand times greater, both in blood and plasma, with higher levels in plasma than in blood. Furthermore, the microscopic count of circulating parasites and the qPCR parasitaemia estimates were not mathematically compatible with the published proportions of infected sandflies in xenodiagnostic studies. These findings suggest that qPCR does not measure the concentration of circulating parasites, but rather measures DNA from other sites, and that blood might not be the main source of infection for vectors. PMID:27439033

  4. Anatomical and Functional Estimations of Brachial Artery Diameter and Elasticity Using Oscillometric Measurements with a Quantitative Approach

    PubMed Central

    Yoshinaga, Keiichiro; Fujii, Satoshi; Tomiyama, Yuuki; Takeuchi, Keisuke; Tamaki, Nagara

    2016-01-01

    Noninvasive vascular function measurement plays an important role in detecting early stages of atherosclerosis and in evaluating therapeutic responses. In this regard, recently, new vascular function measurements have been developed. These new measurements have been used to evaluate vascular function in coronary arteries, large aortic arteries, or peripheral arteries. Increasing vascular diameter represents vascular remodeling related to atherosclerosis. Attenuated vascular elasticity may be a reliable marker for atherosclerotic risk assessment. However, previous measurements for vascular diameter and vascular elasticity have been complex, operator-dependent, or invasive. Therefore, simple and reliable approaches have been sought. We recently developed a new automated oscillometric method to measure the estimated area (eA) of a brachial artery and its volume elastic modulus (VE). In this review, we further report on this new measurement and other vascular measurements. We report on the reliability of the new automated oscillometric measurement of eA and VE. Based on our findings, this measurement technique should be a reliable approach, and this modality may have practical application to automatically assess muscular artery diameter and elasticity in clinical or epidemiological settings. In this review, we report the characteristics of our new oscillometric measurements and other related vascular function measurements. PMID:27493898

  5. Quantitative volcanic susceptibility analysis of Lanzarote and Chinijo Islands based on kernel density estimation via a linear diffusion process

    NASA Astrophysics Data System (ADS)

    Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.

    2016-06-01

    Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and the Chinijo Islands and their submarine flanks, based on updated chronostratigraphical and volcano-structural data, as well as on geomorphological analysis of bathymetric data from the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; and main faults have been taken into account only where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures.

  6. Quantitative volcanic susceptibility analysis of Lanzarote and Chinijo Islands based on kernel density estimation via a linear diffusion process

    PubMed Central

    Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.

    2016-01-01

    Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and the Chinijo Islands and their submarine flanks, based on updated chronostratigraphical and volcano-structural data, as well as on geomorphological analysis of bathymetric data from the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; and main faults have been taken into account only where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures. PMID:27265878

  7. Quantitative volcanic susceptibility analysis of Lanzarote and Chinijo Islands based on kernel density estimation via a linear diffusion process.

    PubMed

    Galindo, I; Romero, M C; Sánchez, N; Morales, J M

    2016-01-01

    Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and the Chinijo Islands and their submarine flanks, based on updated chronostratigraphical and volcano-structural data, as well as on geomorphological analysis of bathymetric data from the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; and main faults have been taken into account only where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures. PMID:27265878
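    The susceptibility surface in these records rests on a kernel density estimate over structural data (vents, fissures, faults). The study uses a linear-diffusion-based KDE; as a stand-in, the idea can be sketched with a plain 2-D Gaussian KDE over hypothetical vent coordinates (this is not the authors' estimator, and the points are invented).

```python
import math

def gaussian_kde_2d(points, bandwidth):
    """Plain 2-D Gaussian kernel density estimate over point data
    (e.g., vent coordinates). A simple stand-in for the linear-diffusion
    KDE used in the study, not the authors' estimator."""
    n = len(points)
    norm = 1.0 / (n * 2.0 * math.pi * bandwidth ** 2)

    def density(x, y):
        # Sum a Gaussian bump centred on each structural point.
        return norm * sum(
            math.exp(-((x - px) ** 2 + (y - py) ** 2) / (2.0 * bandwidth ** 2))
            for px, py in points
        )
    return density

# Hypothetical vent coordinates (km); density is highest near the cluster:
vents = [(0.0, 0.0), (1.0, 0.2), (0.5, -0.3), (8.0, 8.0)]
f = gaussian_kde_2d(vents, bandwidth=1.0)
print(f(0.5, 0.0) > f(5.0, 5.0))
```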

  8. A New UPLC Approach for the Simultaneous Quantitative Estimation of Four Compounds in a Cough Syrup Formulation.

    PubMed

    Turak, Fatma; Güzel, Remziye; Dinç, Erdal

    2016-01-01

    A new ultra-performance liquid chromatographic (UPLC) method was developed for the simultaneous estimation of potassium guaiacolsulfonate (PGS), guaifenesin (GUA), diphenhydramine HCl (DIP) and carbetapentane citrate (CAR) in a commercial cough syrup. The chromatographic separation of the four compounds PGS, GUA, DIP and CAR was performed on a BEH phenyl column (100 × 2.1 mm, 1.7 µm i.d.) using a mobile phase consisting of acetonitrile and 0.1 M HCl (50 : 50, v/v). The optimized conditions of the chromatographic analysis were a flow rate of 0.38 mL/min, a column temperature of 30°C and an injection volume of 1.2 µL, with photodiode array detection at 220 nm. Calibration curves in the concentration ranges of 10-98 µg/mL for PGS, 5-80 µg/mL for GUA, and 5-25 µg/mL for DIP and CAR were computed by regression of the analyte concentration on the chromatographic peak area. The newly developed UPLC method was validated by analyzing quaternary mixtures of the related compounds, intraday and interday experiments, and standard addition samples. After method validation, the proposed UPLC approach was successfully applied to the analysis of the commercial syrup formulation containing the PGS, GUA, DIP and CAR compounds. PMID:26202585

  9. Estimation and Preparation of the Hypervariable Regions I/II Templates for Mitochondrial DNA Typing From Human Bones and Teeth Remains Using Singleplex Quantitative Polymerase Chain Reaction.

    PubMed

    Le, Thien Ngoc; Van Phan, Hieu; Dang, Anh Tuan Mai; Nguyen, Vy Thuy

    2016-09-01

    A method was designed for estimating and sequencing mitochondrial DNA (mtDNA) that provides a complete mtDNA profile more effectively and quickly. In this context, we developed this novel strategy for typing mtDNA from 10 bone and tooth remains (3 months to 44 years old). The quantification of mtDNA was achieved by singleplex real-time polymerase chain reaction of the hypervariable region I fragment (445 bp) and the hypervariable region II fragment (617 bp). Combined with melting curve analysis, we determined that as little as 10 pg of mtDNA template is suitable for sequence analysis. Furthermore, quantitative polymerase chain reaction products were used directly in the following step of mtDNA typing by Sanger sequencing. This method allows a complete profile to be provided for faster human identification. PMID:27356010

  10. [Study on the quantitative estimation method for VOCs emission from petrochemical storage tanks based on tanks 4.0.9d model].

    PubMed

    Li, Jing; Wang, Min-Yan; Zhang, Jian; He, Wan-Qing; Nie, Lei; Shao, Xia

    2013-12-01

    VOCs emission from petrochemical storage tanks is one of the important emission sources in the petrochemical industry. To determine the amount of VOCs emitted from petrochemical storage tanks, the Tanks 4.0.9d model was utilized to calculate the VOCs emissions from different kinds of storage tanks. As an example, VOCs emissions from a horizontal tank, a vertical fixed-roof tank, an internal floating-roof tank and an external floating-roof tank were calculated. The handling of site meteorological information, sealing information, tank content information and unit conversion when using the Tanks 4.0.9d model in China is also discussed. The Tanks 4.0.9d model can be used as a simple and highly accurate method to estimate VOCs emissions from petrochemical storage tanks in China. PMID:24640914

  11. Quantitative and rapid estimations of human sub-surface skin mass using ultra-high-resolution spectral domain optical coherence tomography.

    PubMed

    Kuo, Wen-Chuan; Kuo, Yue-Ming; Wen, Su-Ying

    2016-04-01

    Non-invasive and quantitative estimation of sub-surface tumor margins could greatly aid in the early detection and monitoring of the morphological appearance of tumor growth, ensure complete tumor excision without the unnecessary sacrifice of healthy tissue, and facilitate post-operative follow-up for recurrence. In this study, a high-speed, non-invasive, ultra-high-resolution spectral domain optical coherence tomography (UHR-SDOCT) imaging platform was developed for the quantitative measurement of human sub-surface skin mass. With a proposed robust, semi-automatic analysis, the system can rapidly quantify lesion area and shape regularity by an en-face-oriented algorithm. Nylon sutures of various sizes embedded in pork skin were first used as a phantom to verify the accuracy of our algorithm, and in vivo feasibility was then demonstrated on benign human angiomas and pigmented nevi. Clinically, this is the first step towards an automated skin lesion measurement system. (Graphical abstract: in vivo optical coherence tomography (OCT) image of an angioma (A); thin red arrows point to a blood vessel (BV).) PMID:25755214

  12. A first calibration of nonmarine ostracod species for the quantitative estimation of Pleistocene climate change in southern Africa

    NASA Astrophysics Data System (ADS)

    Horne, D. J.; Martens, K.

    2009-04-01

    Although qualitative statements have been made about general climatic conditions in southern Africa during the Pleistocene, there are few quantifiable palaeoclimatic data based on field evidence, especially regarding whether the area was wetter or drier during the Last Glacial Maximum. Such information is critical in validating models of climate change, both in spatial and temporal dimensions. As an essential preliminary step towards palaeoclimate reconstructions using fossil ostracods from cored lake sediment sequences, we have calibrated a training set of living ostracod species' distributions against a modern climate dataset and other available environmental data. The modern ostracod dataset is based on the collections in the Royal Belgian Institute of Natural Sciences in Brussels, which constitutes the most diverse and comprehensive collection of southern African nonmarine ostracods available anywhere in the world. To date, c. 150 nominal species have been described from southern Africa (Martens, 2001) out of c. 450 species in the total Afrotropical area (Martens et al., 2008). Here we discuss the potential value and limitations of the training set for the estimation of past climatic parameters including air temperature (July and January means, maxima and minima, Mean Annual Air Temperature), precipitation, water conductivity and pH. The next step will be to apply the Mutual Ostracod Temperature Range method (Horne, 2007; Horne & Mezquita, 2008) to the palaeoclimatic analysis of fossil ostracod assemblages from sequences recording the Last Glacial Maximum in southern Africa. Ultimately this work will contribute to the development of a glacier-climate modelling project based on evidence of former niche glaciation of the Drakensberg Escarpment. Horne, D. J. 2007. A Mutual Temperature Range method for Quaternary palaeoclimatic analysis using European nonmarine Ostracoda. Quaternary Science Reviews, 26, 1398-1415. Horne, D. J. & Mezquita, F. 2008. Palaeoclimatic
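The mutual-range logic behind the Mutual Ostracod Temperature Range method cited above (Horne, 2007) can be sketched very simply: the estimate for a fossil assemblage is the overlap of the calibrated modern tolerance ranges of the species present. A toy illustration with hypothetical species and ranges (the real method works per climate parameter on gridded calibration data):

```python
# Hypothetical calibration set: (min, max) tolerated mean July air
# temperature in deg C for each species. Names and values are invented.
calibration = {
    "Species A": (4.0, 22.0),
    "Species B": (8.0, 26.0),
    "Species C": (6.0, 18.0),
}

def mutual_range(assemblage, ranges):
    """Intersect the tolerance ranges of all species in the assemblage."""
    lows, highs = zip(*(ranges[sp] for sp in assemblage))
    low, high = max(lows), min(highs)
    if low > high:
        raise ValueError("no mutual range: assemblage is climatically inconsistent")
    return low, high

print(mutual_range(["Species A", "Species B", "Species C"], calibration))
# -> (8.0, 18.0)
```

A narrower intersection (more species, tighter ranges) gives a more precise palaeotemperature estimate, which is why the diversity of the Brussels training set matters.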

  13. Uptake and recycling of lead by boreal forest plants: Quantitative estimates from a site in northern Sweden

    NASA Astrophysics Data System (ADS)

    Klaminder, Jonatan; Bindler, Richard; Emteryd, Ove; Renberg, Ingemar

    2005-05-01

    As a consequence of the deposition of atmospheric pollution, the lead concentration in the mor layer (the organic horizon) of remote boreal forest soils in Sweden is raised far above natural levels. How the mor will respond to decreased atmospheric pollution is not well known and depends on future deposition rates, downward migration losses and upward fluxes in the soil profile. Plants may contribute to the upward flux of lead by 'pumping' lead back to the mor surface through root uptake and subsequent litter fall. We use lead concentration and stable isotope (206Pb, 207Pb and 208Pb) measurements of forest vegetation to quantify plant uptake rates from the soil and directly from the atmosphere at two sites in northern Sweden: an undisturbed mature forest and a disturbed site with Scots pine (Pinus sylvestris) growing on a recently exposed mineral soil (C-horizon) containing a minimum of atmospherically derived pollution lead. Analyses of forest mosses from a herbarium collection (spanning the last ~100 yr) and soil matrix samples suggest that the atmospheric lead deposited on plants and soil has an average 206Pb/207Pb ratio of 1.15, while lead derived from local soil minerals has an average ratio of ~1.47. Since the biomass of trees and field-layer shrubs has an average 206Pb/207Pb ratio of ~1.25, this indicates that 70% ± 10% of the inventory of 1 ± 0.8 mg Pb m⁻² stored in plants in the mature forest originates from pollution. Needles, bark and apical stemwood of the pine growing on the disturbed soil show lower 206Pb/207Pb ratios (as low as 1.21) than the roots and basal stemwood (having ratios > 1.36), which indicates that plants are able to incorporate lead directly from the atmosphere (~50% of the total tree uptake). By partitioning the total uptake of lead into uptake from the atmosphere and from different soil layers using an isotopic mixing model, we estimate that ~0.03 ± 0.01, 0.02 ± 0.01 and 0.05 ± 0.01 mg Pb m⁻² yr⁻¹ (mean ± SD) is taken up
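The two-endmember mixing arithmetic behind the ~70% pollution figure can be reproduced directly from the ratios quoted in the abstract (a first-order form that treats the isotope ratio as mixing linearly, ignoring concentration weighting of the endmembers):

```python
# Two-endmember mixing with the 206Pb/207Pb ratios reported in the abstract:
# pollution endmember 1.15, local soil-mineral endmember ~1.47, and a
# measured biomass ratio of ~1.25.
def pollution_fraction(r_sample, r_pollution=1.15, r_mineral=1.47):
    """Fraction of Pb from pollution under simple linear two-endmember mixing."""
    return (r_mineral - r_sample) / (r_mineral - r_pollution)

f = pollution_fraction(1.25)
print(f"pollution-derived fraction: {f:.0%}")  # ~69%, matching the ~70% quoted
```

The same formula, applied layer by layer with soil-horizon endmember ratios, is the basis of the partitioning of uptake between atmosphere and soil layers described at the end of the abstract.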

  14. Ecosystem services - from assessments of estimations to quantitative, validated, high-resolution, continental-scale mapping via airborne LIDAR

    NASA Astrophysics Data System (ADS)

    Zlinszky, András; Pfeifer, Norbert

    2016-04-01

    service potential" which is the ability of the local ecosystem to deliver various functions (water retention, carbon storage etc.), but can't quantify how much of these are actually used by humans or what the estimated monetary value is. Due to its ability to measure both terrain relief and vegetation structure in high resolution, airborne LIDAR supports direct quantification of the properties of an ecosystem that lead to it delivering a given service (such as biomass, water retention, micro-climate regulation or habitat diversity). In addition, its high resolution allows direct calibration with field measurements: routine harvesting-based ecological measurements, local biodiversity indicator surveys or microclimate recordings all take place at the human scale and can be directly linked to the local value of LIDAR-based indicators at meter resolution. Therefore, if some field measurements with standard ecological methods are performed on site, the accuracy of LIDAR-based ecosystem service indicators can be rigorously validated. With this conceptual and technical approach high resolution ecosystem service assessments can be made with well established credibility. These would consolidate the concept of ecosystem services and support both scientific research and evidence-based environmental policy at local and - as data coverage is continually increasing - continental scale.

  15. A method to accurately quantitate intensities of (32)P-DNA bands when multiple bands appear in a single lane of a gel is used to study dNTP insertion opposite a benzo[a]pyrene-dG adduct by Sulfolobus DNA polymerases Dpo4 and Dbh.

    PubMed

    Sholder, Gabriel; Loechler, Edward L

    2015-01-01

    Quantitating relative (32)P-band intensity in gels is desired, e.g., to study primer-extension kinetics of DNA polymerases (DNAPs). Following imaging, multiple (32)P-bands are often present in lanes. Though individual bands appear by eye to be simple and well-resolved, scanning reveals they are actually skewed-Gaussian in shape and neighboring bands are overlapping, which complicates quantitation, because slower migrating bands often have considerable contributions from the trailing edges of faster migrating bands. A method is described to accurately quantitate adjacent (32)P-bands, which relies on having a standard: a simple skewed-Gaussian curve from an analogous pure, single-component band (e.g., primer alone). This single-component scan/curve is superimposed on its corresponding band in an experimentally determined scan/curve containing multiple bands (e.g., generated in a primer-extension reaction); intensity exceeding the single-component scan/curve is attributed to other components (e.g., insertion products). Relative areas/intensities are determined via pixel analysis, from which relative molarity of components is computed. Common software is used. Commonly used alternative methods (e.g., drawing boxes around bands) are shown to be less accurate. Our method was used to study kinetics of dNTP primer-extension opposite a benzo[a]pyrene-N(2)-dG-adduct with four DNAPs, including Sulfolobus solfataricus Dpo4 and Sulfolobus acidocaldarius Dbh. Vmax/Km is similar for correct dCTP insertion with Dpo4 and Dbh. Compared to Dpo4, Dbh misinsertion is slower for dATP (∼20-fold), dGTP (∼110-fold) and dTTP (∼6-fold), due to decreases in Vmax. These findings provide support that Dbh is in the same Y-Family DNAP class as eukaryotic DNAP κ and bacterial DNAP IV, which accurately bypass N(2)-dG adducts, as well as establish the scan-method described herein as an accurate method to quantitate relative intensity of overlapping bands in a single lane, whether generated
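The subtraction scheme described above can be illustrated with a toy profile. Plain Gaussians stand in for the paper's skewed-Gaussian band shapes, and all peak positions, widths and amplitudes are invented: a pure single-component scan is scaled to the primer band of a mixed lane, subtracted, and the excess intensity attributed to the extension product.

```python
import numpy as np

# Synthetic lane scan: position axis and Gaussian band model (illustrative).
x = np.linspace(0.0, 100.0, 2001)
dx = x[1] - x[0]

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2)

standard = gauss(x, 40.0, 4.0)                                 # primer alone
lane = 3.0 * gauss(x, 40.0, 4.0) + 1.0 * gauss(x, 55.0, 4.0)   # primer + product

# Scale the pure-standard curve to the lane at the primer peak, subtract,
# and attribute the clipped residual to the product band.
scale = lane[np.argmax(standard)] / standard.max()
residual = np.clip(lane - scale * standard, 0.0, None)

primer_area = (scale * standard).sum() * dx
product_area = residual.sum() * dx
print(f"product fraction: {product_area / (primer_area + product_area):.2f}")
```

With overlapping bands, simply "drawing boxes" around each band would assign part of the primer's trailing edge to the product; the scaled-standard subtraction avoids that, which is the point the abstract makes.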

  16. Quantitative Estimation of the Metal-Induced Negative Oxide Charge Density in n-Type Silicon Wafers from Measurements of Frequency-Dependent AC Surface Photovoltage

    NASA Astrophysics Data System (ADS)

    Shimizu, Hirofumi; Shin, Ryuhei; Ikeda, Masanori

    2006-03-01

    A quantitative estimation of metal-induced oxide charge (Qmi) density is performed on the surface of n-type silicon (Si) wafers rinsed with trivalent aluminum (Al)- and iron (Fe)-contaminated RCA alkaline solution by analyzing the frequency-dependent AC surface photovoltage (SPV). Qmi arises from (AlOSi)- or (FeOSi)- networks in native oxide which are responsible for inducing negative oxide charge. On the basis of Munakata and Nishimatsu’s half-sided junction model [C. Munakata and S. Nishimatsu: Jpn. J. Appl. Phys. 25 (1986) 807], the network densities are estimated in depletion and/or weak inversion in which the cutoff frequencies of the frequency-dependent AC SPV curves are defined. It is found that the charge density Qmi increases with the time of exposure to air and it is calculated that about 4% of Al atoms in the native oxide are activated in the form of an (AlOSi)- network for 1 h of exposure. The (FeOSi)- network density is calculated as a function of Fe concentration. As a result, the frequency-dependent AC SPV measurements carried out here enable a successful evaluation of impurity level in a nondestructive and noncontact manner.

  17. Uncertainty Quantification for Quantitative Imaging Holdup Measurements

    SciTech Connect

    Bevill, Aaron M; Bledsoe, Keith C

    2016-01-01

    In nuclear fuel cycle safeguards, special nuclear material "held up" in pipes, ducts, and glove boxes causes significant uncertainty in material-unaccounted-for estimates. Quantitative imaging is a proposed non-destructive assay technique with potential to estimate the holdup mass more accurately and reliably than current techniques. However, uncertainty analysis for quantitative imaging remains a significant challenge. In this work we demonstrate an analysis approach for data acquired with a fast-neutron coded aperture imager. The work includes a calibrated forward model of the imager. Cross-validation indicates that the forward model predicts the imager data typically within 23%; further improvements are forthcoming. A new algorithm based on the chi-squared goodness-of-fit metric then uses the forward model to calculate a holdup confidence interval. The new algorithm removes geometry approximations that previous methods require, making it a more reliable uncertainty estimator.
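A chi-squared confidence interval of the kind described can be sketched as follows. The linear forward model and the data are invented stand-ins (the real imager's calibrated forward model is far richer): candidate holdup masses are scanned, and the interval is the set of masses whose chi-squared statistic lies within a threshold of the minimum.

```python
import numpy as np

# Hypothetical forward model: predicted detector counts per unit holdup mass
# in four channels (invented numbers), plus Poisson-sampled "measured" data.
rng = np.random.default_rng(0)
true_mass = 5.0

def forward(m):
    return m * np.array([2.0, 3.5, 1.2, 4.1])

data = rng.poisson(forward(true_mass)).astype(float)

# Scan candidate masses and compute a Pearson chi-squared for each.
masses = np.linspace(0.1, 10.0, 1000)
chi2_vals = np.array([np.sum((data - forward(m)) ** 2 / forward(m)) for m in masses])

best = masses[np.argmin(chi2_vals)]
inside = masses[chi2_vals <= chi2_vals.min() + 3.84]  # ~95% for one parameter
print(f"best estimate {best:.2f}, 95% CI [{inside.min():.2f}, {inside.max():.2f}]")
```

The appeal of the goodness-of-fit formulation is visible even in this toy: no geometric approximation of the source enters the interval, only the forward model and the counting statistics.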

  18. Quantitative assessment of future development of copper/silver resources in the Kootenai National Forest, Idaho/Montana: Part I-Estimation of the copper and silver endowments

    USGS Publications Warehouse

    Spanski, G.T.

    1992-01-01

    Faced with an ever-increasing diversity of demand for the use of public lands, managers and planners are turning more often to a multiple-use approach to meet those demands. This approach requires the uses to be mutually compatible and to utilize the more valuable attributes or resource values of the land. Therefore, it is imperative that planners be provided with all available information on attribute and resource values in a timely fashion and in a format that facilitates a comparative evaluation. The Kootenai National Forest administration enlisted the U.S. Geological Survey and U.S. Bureau of Mines to perform a quantitative assessment of future copper/silver production potential within the forest from sediment-hosted copper deposits in the Revett Formation that are similar to those being mined at the Troy Mine near Spar Lake. The U.S. Geological Survey employed a quantitative assessment technique that compared the favorable host terrane in the Kootenai area with worldwide examples of known sediment-hosted copper deposits. The assessment produced probabilistic estimates of the number of undiscovered deposits that may be present in the area and of the copper and silver endowment that might be contained in them. Results of the assessment suggest that the copper/silver deposit potential is highest in the southwestern one-third of the forest. In this area there is an estimated 50 percent probability of at least 50 additional deposits occurring mostly within approximately 260,000 acres where the Revett Formation is thought to be present in the subsurface at depths of less than 1,500 meters. A Monte Carlo type simulation using data on the grade and tonnage characteristics of other known silver-rich, sediment-hosted copper deposits predicts a 50 percent probability that these undiscovered deposits will contain at least 19 million tonnes of copper and 100,000 tonnes of silver. 
Combined with endowments estimated for identified, but not thoroughly explored deposits, and
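A Monte Carlo endowment calculation of the type described can be sketched as below. All distribution parameters are illustrative placeholders, not the USGS grade-tonnage models: each trial draws a number of undiscovered deposits, then a tonnage and copper grade for each, and accumulates the contained metal.

```python
import numpy as np

# Illustrative Monte Carlo endowment simulation (invented parameters).
rng = np.random.default_rng(42)
n_trials = 5_000

totals = np.empty(n_trials)
for i in range(n_trials):
    n_deposits = rng.poisson(50)                       # undiscovered deposits
    tonnes_ore = rng.lognormal(mean=16.0, sigma=1.0, size=n_deposits)
    cu_grade = rng.lognormal(mean=-4.0, sigma=0.4, size=n_deposits)  # mass fraction
    totals[i] = np.sum(tonnes_ore * cu_grade)          # tonnes of contained Cu

p50 = np.percentile(totals, 50)
print(f"median simulated copper endowment: {p50 / 1e6:.1f} million tonnes")
```

Reporting the 50th percentile of the simulated totals is what yields statements of the form "a 50 percent probability that the undiscovered deposits contain at least X tonnes of copper".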

  19. Quantitative Estimation of Chemical Weathering versus Total Denudation Ratio within Tributaries of Yangtze River Basin Based on Size Dependent Chemical Composition Ratio of River Sediment

    NASA Astrophysics Data System (ADS)

    Kuboki, Y.; Chao, L.; Tada, R.; Saito, K.; Zheng, H.; Irino, T.; He, M.; Ke, W.; Suzuki, Y.

    2014-12-01

    Quantitative estimation of the chemical weathering rate and evaluation of its controlling factors are critical to understanding its role in landscape evolution and the carbon cycle on long time scales. In order to reconstruct past changes in the intensities of chemical weathering and erosion, it is necessary to establish a proxy for chemical versus physical weathering intensity based on the chemical composition of sediments. However, the chemical composition of sediments is controlled not only by chemical weathering, but also by the type of source rock and by grain size. This study aims to develop a method to quantitatively evaluate the contribution of chemical weathering relative to total denudation in the entire Yangtze River basin based on the chemical composition of three different grain size fractions of river sediments. Chemical compositions of three different grain size fractions and grain size distributions of suspended particles and river bed sediments, as well as the chemical composition of dissolved materials in water samples, were analyzed. The results revealed that suspended particles and river bed sediments are composed of three components: aluminosilicate, quartz, and carbonate. K/Al is smaller in the smallest size fraction. We preliminarily interpret that the original composition of aluminosilicates within different size fractions of the same sample is the same, and that the decrease in K/Al with decreasing grain size reflects the increasing influence of chemical weathering. If correct, the K/Al of the fine fraction relative to the coarse fraction can be used as an index of chemical weathering intensity. To test this idea, we examined the relationship between this K/Al ratio and the ratio of the chemical weathering contribution to the total denudation rate based on observational data. The result will be presented and its implications discussed.
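The proposed proxy reduces to a simple ratio of ratios; a minimal sketch with hypothetical molar K/Al values (not data from the study):

```python
# Fine-fraction K/Al relative to coarse-fraction K/Al of the same sample:
# values below 1 indicate preferential K loss in the fines, consistent with
# stronger chemical weathering. Input values are hypothetical.
def weathering_index(k_al_fine, k_al_coarse):
    """Fine/coarse K/Al ratio; < 1 suggests K loss by chemical weathering."""
    return k_al_fine / k_al_coarse

print(weathering_index(0.18, 0.24))  # -> 0.75
```

Normalizing the fine fraction by the coarse fraction of the same sample is what removes the source-rock dependence the abstract worries about, provided the assumption of a common original aluminosilicate composition holds.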

  20. Grading More Accurately

    ERIC Educational Resources Information Center

    Rom, Mark Carl

    2011-01-01

    Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…

  1. Predict amine solution properties accurately

    SciTech Connect

    Cheng, S.; Meisen, A.; Chakma, A.

    1996-02-01

    Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form, but graphs are not convenient for computer-based calculations. The equations developed here allow improved correlation of derived physical property estimates with published data. Expressions are given which can be used to estimate the physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.

  2. Biotransformation of dichlorodiphenyltrichloroethane in the benthic polychaete, Nereis succinea: quantitative estimation by analyzing the partitioning of chemicals between gut fluid and lipid.

    PubMed

    Wang, Fei; Pei, Yuan-yuan; You, Jing

    2015-02-01

    Biotransformation plays an important role in the bioaccumulation and toxicity of a chemical in biota. Dichlorodiphenyltrichloroethane (DDT) commonly co-occurs with its metabolites (dichlorodiphenyldichloroethane [DDD] and dichlorodiphenyldichloroethylene [DDE]), in the environment; thus it is a challenge to accurately quantify the biotransformation rates of DDT and distinguish the sources of the accumulated metabolites in an organism. The present study describes a method developed to quantitatively analyze the biotransformation of p,p'-DDT in the benthic polychaete, Nereis succinea. The lugworms were exposed to sediments spiked with DDT at various concentrations for 28 d. Degradation of DDT to DDD and DDE occurred in sediments during the aging period, and approximately two-thirds of the DDT remained in the sediment. To calculate the biotransformation rates, residues of individual compounds measured in the bioaccumulation testing (after biotransformation) were compared with residues predicted by analyzing the partitioning of the parent and metabolite compounds between gut fluid and tissue lipid (before biotransformation). The results suggest that sediment ingestion rates decreased when DDT concentrations in sediment increased. Extensive biotransformation of DDT occurred in N. succinea, with 86% of DDT being metabolized to DDD and <2% being transformed to DDE. Of the DDD that accumulated in the lugworms, approximately 70% was the result of DDT biotransformation, and the remaining 30% was from direct uptake of sediment-associated DDD. In addition, the biotransformation was not dependent on bulk sediment concentrations, but rather on bioaccessible concentrations of the chemicals in sediment, which were quantified by gut fluid extraction. The newly established method improved the accuracy of prediction of the bioaccumulation and toxicity of DDTs. PMID:25470143

  3. Towards cheminformatics-based estimation of drug therapeutic index: Predicting the protective index of anticonvulsants using a new quantitative structure-index relationship approach.

    PubMed

    Chen, Shangying; Zhang, Peng; Liu, Xin; Qin, Chu; Tao, Lin; Zhang, Cheng; Yang, Sheng Yong; Chen, Yu Zong; Chui, Wai Keung

    2016-06-01

    The overall efficacy and safety profile of a new drug is partially evaluated by the therapeutic index in clinical studies and by the protective index (PI) in preclinical studies. In-silico predictive methods may facilitate the assessment of these indicators. Although QSAR and QSTR models can be used for predicting PI, their predictive capability has not been evaluated. To test this capability, we developed QSAR and QSTR models for predicting the activity and toxicity of anticonvulsants at accuracy levels above the literature-reported threshold (LT) of good QSAR models as tested by both the internal 5-fold cross validation and external validation method. These models showed significantly compromised PI predictive capability due to the cumulative errors of the QSAR and QSTR models. Therefore, in this investigation a new quantitative structure-index relationship (QSIR) model was devised and it showed improved PI predictive capability that superseded the LT of good QSAR models. The QSAR, QSTR and QSIR models were developed using support vector regression (SVR) method with the parameters optimized by using the greedy search method. The molecular descriptors relevant to the prediction of anticonvulsant activities, toxicities and PIs were analyzed by a recursive feature elimination method. The selected molecular descriptors are primarily associated with the drug-like, pharmacological and toxicological features and those used in the published anticonvulsant QSAR and QSTR models. This study suggested that QSIR is useful for estimating the therapeutic index of drug candidates. PMID:27262528
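The modelling step described above (descriptors mapped directly to the index by support vector regression) can be sketched with scikit-learn. The descriptor matrix and "PI" values below are random placeholders, and a small grid search stands in for the paper's greedy parameter optimization:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Synthetic stand-in data: 60 "compounds" x 8 "descriptors" and a noisy
# linear target playing the role of the protective index (all invented).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=60)

# Cross-validated hyperparameter search over kernel and regularization,
# analogous in spirit (not detail) to the greedy search in the paper.
grid = GridSearchCV(
    SVR(),
    {"kernel": ["linear", "rbf"], "C": [1.0, 10.0, 100.0]},
    cv=5,
)
grid.fit(X, y)
print("best params:", grid.best_params_, "CV R^2: %.2f" % grid.best_score_)
```

The paper's point is that fitting the index directly (QSIR) avoids compounding the errors of separate activity (QSAR) and toxicity (QSTR) models; in this sketch that corresponds to regressing on y itself rather than on a ratio of two fitted quantities.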

  4. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
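The baseline behaviour this paper improves on can be demonstrated with SciPy's PCHIP interpolant (the classic Fritsch-Carlson monotone cubic, not Huynh's higher-order median-based scheme): monotone data in, monotone interpolant out, with no overshoot near the step.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone data with a sharp step: an unlimited cubic spline would
# overshoot here, but a monotone cubic interpolant cannot.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.0, 1.0, 1.0, 1.0])

pchip = PchipInterpolator(x, y)
xs = np.linspace(0.0, 4.0, 401)
ys = pchip(xs)

# Monotonicity and boundedness of the interpolant.
print(bool(np.all(np.diff(ys) >= -1e-12)), float(ys.min()), float(ys.max()))
```

The cost of this guarantee is the accuracy drop near extrema that the abstract mentions; Huynh's relaxation via the median function is designed to recover third- and fourth-order accuracy there while keeping the shape-preservation.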

  5. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  6. A Quantitative Assessment of the Size-Frequency Distribution of Terrestrial Dust Devils, Comparison with Qualitative Estimates, and Applications to Mars

    NASA Astrophysics Data System (ADS)

    Pathare, A.; Balme, M. R.; Metzger, S.; Towner, M.; Spiga, A.; Renno, N. O.; Elliott, H. M.; Russell, P. S.; Fenton, L. K.; Michaels, T. I.

    2011-12-01

    Dust devils are particle-loaded vertical convective vortices commonly observed on Earth and especially Mars. Qualitative estimates of terrestrial dust devil frequency based upon visual field surveys have varied by several orders of magnitude. We will present the results of our quantitative characterization of the size-frequency distribution (SFD) of terrestrial dust devils, which utilizes stereo photography to calculate dust devil diameters via parallax displacement. In 2009, we conducted field campaigns in Eloy, Arizona and Eldorado Valley, Nevada to survey terrestrial dust devils: the latter site was revisited in 2010. During each survey period, at least two and usually three observers were positioned at spotter stations located approximately 100 m apart, thereby allowing triangular study areas (bounded by three meteorological masts) of A = 0.83 sq. km and A = 0.55 sq. km to be surveyed in Eloy and Eldorado Valley, respectively. Each spotter station was equipped with a tripod-mounted, weatherproof digital camera: whenever possible, any dust devils observed within the study area were photographed simultaneously by camera operators in radio contact. All dust devils observed within the survey sites were assigned a qualitative diameter estimate (i.e., Tiny/Small/Medium/Large) by a third spotter positioned near the center of the study area. Thus even if small dust devils occurred that existed too fleetingly to be photographed, they were still recorded. Methodology: The positions of both survey tripods were measured to ~ 0.5 m precision using GPS. In addition, a full 360-degree panorama was generated from each survey position, corrected for lens distortion, and then imported into a GIS. The photographs of dust devils from each camera are then also incorporated into the GIS and aligned against the corresponding background panorama. 
The width and center points of each dust devil are then digitized and its bearings and angular width outputted from the GIS, together with
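The parallax geometry sketched in the abstract reduces to triangulation from two stations plus an angular-width measurement. A hedged sketch with invented numbers (the survey used GIS alignment against panoramas rather than raw bearings, and the bearing convention below is an assumption for illustration):

```python
import math

def triangulate_range(baseline_m, bearing1_deg, bearing2_deg):
    """Range from station 1 to the dust devil.

    Both bearings are measured from the station1 -> station2 baseline
    direction, with the target on the same side of the baseline
    (illustrative convention; law of sines in the survey triangle).
    """
    a1 = math.radians(bearing1_deg)
    a2 = math.radians(bearing2_deg)
    return baseline_m * math.sin(a2) / math.sin(a2 - a1)

def diameter(range_m, angular_width_deg):
    """Dust devil diameter from range and measured angular width."""
    return 2.0 * range_m * math.tan(math.radians(angular_width_deg) / 2.0)

# Invented example: 100 m baseline, bearings 60 and 75 degrees, 1.5 degree width.
r = triangulate_range(100.0, 60.0, 75.0)
print(f"range {r:.0f} m, diameter {diameter(r, 1.5):.1f} m")
```

The sensitivity is worth noting: with a ~100 m baseline, small bearing errors translate into large range errors at several hundred metres, which is why tripod positions were surveyed to ~0.5 m and panoramas were lens-corrected before digitizing.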

  7. Accurate 3D quantification of the bronchial parameters in MDCT

    NASA Astrophysics Data System (ADS)

    Saragaglia, A.; Fetita, C.; Preteux, F.; Brillet, P. Y.; Grenier, P. A.

    2005-08-01

    The assessment of bronchial reactivity and wall remodeling in asthma plays a crucial role in better understanding the disease and evaluating therapeutic responses. Today, multi-detector computed tomography (MDCT) makes it possible to perform an accurate estimation of bronchial parameters (lumen and wall areas) by allowing a quantitative analysis in a cross-section plane orthogonal to the bronchus axis. This paper provides the tools for such an analysis by developing a 3D investigation method which relies on 3D reconstruction of the bronchial lumen and central axis computation. Cross-section images at bronchial locations interactively selected along the central axis are generated at appropriate spatial resolution. An automated approach is then developed for accurately segmenting the inner and outer bronchial contours on the cross-section images. It combines mathematical morphology operators, such as "connection cost", and energy-controlled propagation in order to overcome the difficulties raised by vessel adjacencies and wall irregularities. The segmentation accuracy was validated with respect to a 3D mathematically modeled phantom of a bronchus-vessel pair which mimics the characteristics of real data in terms of gray-level distribution, caliber and orientation. When applying the developed quantification approach to such a model with calibers ranging from 3 to 10 mm in diameter, the relative errors in lumen area varied from 3.7% to 0.15%, while the bronchus area was estimated with a relative error of less than 5.1%.

  8. Discordance between Prevalent Vertebral Fracture and Vertebral Strength Estimated by the Finite Element Method Based on Quantitative Computed Tomography in Patients with Type 2 Diabetes Mellitus

    PubMed Central

    2015-01-01

    Background Bone fragility is increased in patients with type 2 diabetes mellitus (T2DM), but a useful method to estimate bone fragility in T2DM patients is lacking because bone mineral density alone is not sufficient to assess the risk of fracture. This study investigated the association between prevalent vertebral fractures (VFs) and the vertebral strength index estimated by the quantitative computed tomography-based nonlinear finite element method (QCT-based nonlinear FEM) using multi-detector computed tomography (MDCT) for clinical practice use. Research Design and Methods A cross-sectional observational study was conducted on 54 postmenopausal women and 92 men over 50 years of age, all of whom had T2DM. The vertebral strength index was compared in patients with and without VFs confirmed by spinal radiographs. A standard FEM procedure was performed with the application of known parameters for the bone material properties obtained from nondiabetic subjects. Results A total of 20 women (37.0%) and 39 men (42.4%) with VFs were identified. The vertebral strength index was significantly higher in the men than in the women (P<0.01). Multiple regression analysis demonstrated that the vertebral strength index was significantly and positively correlated with the spinal bone mineral density (BMD) and inversely associated with age in both genders. There were no significant differences in the parameters, including the vertebral strength index, between patients with and without VFs. Logistic regression analysis adjusted for age, spine BMD, BMI, HbA1c, and duration of T2DM did not indicate a significant relationship between the vertebral strength index and the presence of VFs. Conclusion The vertebral strength index calculated by QCT-based nonlinear FEM using material property parameters obtained from nondiabetic subjects, whose risk of fracture is lower than that of T2DM patients, was not significantly associated with bone fragility in patients with T2DM. This discordance

  9. Improving Quantitative Precipitation Estimation via Data Fusion of High-Resolution Ground-based Radar Network and CMORPH Satellite-based Product

    NASA Astrophysics Data System (ADS)

    Cifelli, R.; Chen, H.; Chandrasekar, V.; Xie, P.

    2015-12-01

    A large number of precipitation products at multiple scales have been developed based upon satellite, radar, and/or rain gauge observations. However, how to produce optimal rainfall estimation for a given region is still challenging due to the differing spatial and temporal sampling of the various sensors. In this study, we develop a data fusion mechanism to improve regional quantitative precipitation estimation (QPE) by utilizing the satellite-based CMORPH product, ground radar measurements, as well as numerical model simulations. The CMORPH global precipitation product is essentially derived from retrievals of passive microwave measurements and infrared observations onboard satellites (Joyce et al. 2004). Its fine spatial-temporal resolution of 0.05° lat/lon and 30 min is appropriate for regional hydrologic and climate studies. However, it is inadequate for localized hydrometeorological applications such as urban flash flood forecasting. Via fusion of the regional CMORPH product and local precipitation sensors, the high-resolution QPE performance can be improved. The area of interest is the Dallas-Fort Worth (DFW) Metroplex, which is the largest land-locked metropolitan area in the U.S. In addition to an NWS dual-polarization S-band WSR-88DP radar (i.e., the KFWS radar), DFW hosts the high-resolution dual-polarization X-band radar network developed by the Center for Collaborative Adaptive Sensing of the Atmosphere (CASA). This talk will present a general framework for precipitation data fusion based on satellite and ground observations. The detailed prototype architecture for using regional rainfall instruments to improve the regional CMORPH precipitation product via multi-scale fusion techniques will also be discussed. In particular, the temporal and spatial fusion algorithms developed for the DFW Metroplex, which utilize the CMORPH product, S-band WSR-88DP, and X-band CASA radar measurements, will be described. In order to investigate the uncertainties associated with each

  10. Quantitative Estimation of the Number of Contaminated Hatching Eggs Released from an Infected, Undetected Turkey Breeder Hen Flock During a Highly Pathogenic Avian Influenza Outbreak.

    PubMed

    Malladi, Sasidhar; Weaver, J Todd; Alexander, Catherine Y; Middleton, Jamie L; Goldsmith, Timothy J; Snider, Timothy; Tilley, Becky J; Gonder, Eric; Hermes, David R; Halvorson, David A

    2015-09-01

    The regulatory response to an outbreak of highly pathogenic avian influenza (HPAI) in the United States may involve quarantine and stop movement orders that have the potential to disrupt continuity of operations in the U.S. turkey industry--particularly in the event that an uninfected breeder flock is located within an HPAI Control Area. A group of government-academic-industry leaders developed an approach to minimize the unintended consequences associated with outbreak response, which incorporates HPAI control measures to be implemented prior to moving hatching eggs off of the farm. Quantitative simulation models were used to evaluate the movement of potentially contaminated hatching eggs from a breeder henhouse located in an HPAI Control Area, given that active surveillance testing, elevated biosecurity, and a 2-day on-farm holding period were employed. The risk analysis included scenarios of HPAI viruses differing in characteristics as well as scenarios in which infection resulted from artificial insemination. The mean model-predicted number of internally contaminated hatching eggs released per movement from an HPAI-infected turkey breeder henhouse ranged from 0 to 0.008 under the four scenarios evaluated. The results indicate a 95% chance of no internally contaminated eggs being present per movement from an infected house before detection. Sensitivity analysis indicates that these results are robust to variation in key transmission model parameters within the range of their estimates from available literature. Infectious birds at the time of egg collection are a potential pathway of external contamination for eggs stored and then moved off of the farm; the predicted number of such infectious birds was estimated to be low. To date, there has been no evidence of vertical transmission of HPAI virus or low pathogenic avian influenza virus to day-old poults from hatching eggs originating from infected breeders. 
The application of risk analysis methods was beneficial
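    The kind of stochastic simulation summarized above (a mean number of contaminated eggs per movement and a probability of zero contaminated eggs) can be sketched with a toy Monte Carlo. The per-egg contamination probability and eggs-per-movement values below are hypothetical placeholders, not parameters from the study's transmission model.

```python
import random

def simulate_release(n_runs, eggs_per_move, p_contaminated, seed=0):
    """Toy Monte Carlo: internally contaminated eggs per movement.

    p_contaminated is the per-egg contamination probability implied by the
    disease-transmission and surveillance assumptions (hypothetical here).
    Returns the mean count and the fraction of runs with zero contaminated eggs.
    """
    rng = random.Random(seed)
    counts = [sum(rng.random() < p_contaminated for _ in range(eggs_per_move))
              for _ in range(n_runs)]
    mean = sum(counts) / n_runs
    p_zero = sum(c == 0 for c in counts) / n_runs
    return mean, p_zero

mean, p_zero = simulate_release(2000, eggs_per_move=500, p_contaminated=1e-4)
```

    With these illustrative inputs the expected count per movement is 0.05 and the chance of zero contaminated eggs is about 95%, mirroring the structure (though not the values) of the study's reported results.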

  11. High-Performance Liquid Chromatographic and High-Performance Thin-Layer Chromatographic Method for the Quantitative Estimation of Dolutegravir Sodium in Bulk Drug and Pharmaceutical Dosage Form.

    PubMed

    Bhavar, Girija B; Pekamwar, Sanjay S; Aher, Kiran B; Thorat, Ravindra S; Chaudhari, Sanjay R

    2016-01-01

    Simple, sensitive, precise, and specific high-performance liquid chromatographic (HPLC) and high-performance thin-layer chromatographic (HPTLC) methods for the determination of dolutegravir sodium in bulk drug and pharmaceutical dosage form were developed and validated. In the HPLC method, analysis of the drug was carried out on an ODS C18 column (150 × 4.6 mm, 5 μm particle size) using a mixture of acetonitrile:water (pH 7.5) in the ratio of 80:20 v/v as the mobile phase at a flow rate of 1 mL/min at 260 nm. This method was found to be linear in the concentration range of 5-35 μg/mL. The peak for dolutegravir sodium was observed at 3.0 ± 0.1 minutes. In the HPTLC method, analysis was performed on aluminum-backed plates pre-coated with silica gel G60 F254 using methanol:chloroform:formic acid in the proportion of 8:2:0.5 v/v/v as the mobile phase. This solvent system was found to give compact spots for dolutegravir sodium with an Rf value of 0.77 ± 0.01. Densitometric analysis of dolutegravir sodium was carried out in the absorbance mode at 265 nm. Linear regression analysis showed good linearity with respect to peak area in the concentration range of 200-900 ng/spot. The methods were validated for precision, limit of detection (LOD), limit of quantitation (LOQ), accuracy, and specificity. Statistical analysis showed that both methods are repeatable and specific for the estimation of the drug. The methods can be used for routine quality control analysis of dolutegravir sodium. PMID:27222606
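    The linearity and LOD/LOQ validation described above is conventionally done with a least-squares calibration line and the standard ICH-style formulas LOD = 3.3σ/S and LOQ = 10σ/S (σ = residual standard deviation, S = slope). A hedged sketch follows; the calibration concentrations and peak areas are invented for illustration, not data from this paper.

```python
import statistics

def calibrate(concs, areas):
    """Least-squares calibration line (area = slope*conc + intercept)
    plus ICH-style LOD and LOQ from the residual standard deviation."""
    n = len(concs)
    mx, my = statistics.fmean(concs), statistics.fmean(areas)
    sxx = sum((x - mx) ** 2 for x in concs)
    slope = sum((x - mx) * (y - my) for x, y in zip(concs, areas)) / sxx
    intercept = my - slope * mx
    resid_sd = (sum((y - (slope * x + intercept)) ** 2
                    for x, y in zip(concs, areas)) / (n - 2)) ** 0.5
    return slope, intercept, 3.3 * resid_sd / slope, 10 * resid_sd / slope

# Hypothetical 5-35 ug/mL calibration standards and measured peak areas.
concs = [5, 10, 15, 20, 25, 30, 35]
areas = [51, 102, 148, 205, 251, 298, 352]
slope, intercept, lod, loq = calibrate(concs, areas)
```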

  12. High-Performance Liquid Chromatographic and High-Performance Thin-Layer Chromatographic Method for the Quantitative Estimation of Dolutegravir Sodium in Bulk Drug and Pharmaceutical Dosage Form

    PubMed Central

    Bhavar, Girija B.; Pekamwar, Sanjay S.; Aher, Kiran B.; Thorat, Ravindra S.; Chaudhari, Sanjay R.

    2016-01-01

    Simple, sensitive, precise, and specific high-performance liquid chromategraphic (HPLC) and high-performance thin-layer chromatographic (HPTLC) methods for the determination of dolutegravir sodium in bulk drug and pharmaceutical dosage form were developed and validated. In the HPLC method, analysis of the drug was carried out on the ODS C18 column (150 × 4.6 mm, 5 μm particle size) using a mixture of acetonitrile: water (pH 7.5) in the ratio of 80:20 v/v as the mobile phase at the flow rate 1 mL/min at 260 nm. This method was found to be linear in the concentration range of 5–35 μg/mL. The peak for dolutegravir sodium was observed at 3.0 ± 0.1 minutes. In the HPTLC method, analysis was performed on aluminum-backed plates pre-coated with silica gel G60 F254 using methanol: chloroform: formic acid in the proportion of 8:2:0.5 v/v/v as the mobile phase. This solvent system was found to give compact spots for dolutegravir sodium with the Rf value 0.77 ± 0.01. Densitometric analysis of dolutegravir sodium was carried out in the absorbance mode at 265 nm. Linear regression analysis showed good linearity with respect to peak area in the concentration range of 200–900 ng/spot. The methods were validated for precision, limit of detection (LOD), limit of quantitation (LOQ), accuracy, and specificity. Statistical analysis showed that both of the methods are repeatable and specific for the estimation of the said drug. The methods can be used for routine quality control analysis of dolutegravir sodium. PMID:27222606

  13. Rapid Quantitative Estimation of Chlorinated Methane Utilizing Bacteria in Drinking Water and the Effect of Nanosilver on Biodegradation of the Trichloromethane in the Environment

    PubMed Central

    Zamani, Isaac; Bouzari, Majid; Emtiazi, Giti; Fanaei, Maryam

    2015-01-01

    Background: Halomethanes are toxic and carcinogenic chemicals that are widely used in industry. They can also be formed during water disinfection with chlorine. Biodegradation by methylotrophs is the most important way to remove these pollutants from the environment. Objectives: This study aimed to present a simple and rapid method for the quantitative study of halomethane-utilizing bacteria in drinking water, and also a method to facilitate the biodegradation of these compounds in the environment compared to cometabolism. Materials and Methods: Enumeration of chlorinated methane-utilizing bacteria in drinking water was carried out by the most probable number (MPN) method in two steps. First, the presence and number of methylotrophic bacteria were confirmed on methanol-containing medium. Then, utilization of dichloromethane was determined by measuring the released chloride after the addition of 0.04 mol/L of it to the growth medium. Also, the effect of nanosilver particles on biodegradation of multiple chlorinated methanes was studied by bacterial growth on Bushnell-Haas Broth containing chloroform (trichloromethane) treated with 0.2 ppm nanosilver. Results: The most probable numbers of methylotrophs and chlorinated methane-utilizing bacteria in the tested drinking water were 10 and 4 MPN Index/L, respectively. Chloroform treatment with nanosilver led to dechlorination and the production of formaldehyde. The highest bacterial growth and formic acid production were observed in the tubes containing 1% chloroform treated with nanosilver. Conclusions: By combining the two tests, a rapid approach to estimating the most probable number of chlorinated methane-utilizing bacteria is introduced. Treatment with nanosilver particles resulted in easier and faster biodegradation of chloroform by bacteria. Thus, degradation of these chlorinated compounds is more efficient compared to cometabolism. PMID:25834716
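    MPN counting rests on a Poisson assumption: a tube inoculated with volume v is negative with probability exp(-λv), so the density λ can be recovered from the fraction of negative tubes. A single-dilution sketch follows; the tube counts and inoculum volume are illustrative, not the study's protocol.

```python
import math

def mpn_single_dilution(n_tubes, n_positive, volume_l):
    """Most-probable-number estimate for one dilution level.

    Assumes organisms are Poisson-distributed, so a tube receiving
    volume_l litres is negative with probability exp(-lambda * volume_l);
    solving for lambda gives the MPN per litre.
    """
    n_negative = n_tubes - n_positive
    if n_negative == 0:
        raise ValueError("all tubes positive: MPN unbounded at this dilution")
    return -math.log(n_negative / n_tubes) / volume_l

# Hypothetical run: 10 tubes, 0.1 L of sample each, 4 tubes positive.
mpn = mpn_single_dilution(10, 4, 0.1)  # organisms per litre
```

    Multi-dilution MPN tables generalize this by maximizing the joint Poisson likelihood across dilution levels.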

  14. Recapturing Quantitative Biology.

    ERIC Educational Resources Information Center

    Pernezny, Ken; And Others

    1996-01-01

    Presents a classroom activity on estimating animal populations. Uses shoe boxes and candies to emphasize the importance of mathematics in biology while introducing the methods of quantitative ecology. (JRH)

  15. Accurate measurement of time

    NASA Astrophysics Data System (ADS)

    Itano, Wayne M.; Ramsey, Norman F.

    1993-07-01

    The paper discusses current methods for the accurate measurement of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountains, and to the potential use of strings of trapped mercury ions as a timekeeping device more stable than conventional atomic clocks. The areas of application of ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as navigation of space vehicles and pinpointing the exact position of ships and other objects on Earth using GPS.

  16. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  17. [Quantitative ultrasound].

    PubMed

    Barkmann, R; Glüer, C-C

    2006-10-01

    Methods of quantitative ultrasound (QUS) can be used to obtain knowledge about bone fragility. Comprehensive study results exist showing the power of QUS for the estimation of osteoporotic fracture risk. Nevertheless, the variety of technologies, devices, and variables, as well as the differing degrees of validation of the individual devices, have to be taken into account. Using methods to simulate ultrasound propagation, the complex interaction between ultrasound and bone could be understood and the propagation could be visualized. Before widespread clinical use, it has to be clarified whether patients with low QUS values will profit from therapy, as has been shown for DXA. Moreover, the introduction of quality assurance measures is essential. The user should know the limitations of the methods and be able to interpret the results correctly. Applied in an adequate manner, QUS methods could then, due to lower costs and the absence of ionizing radiation, become important players in osteoporosis management. PMID:16896637

  18. Optimal estimator for tomographic fluorescence lifetime multiplexing

    PubMed Central

    Hou, Steven S.; Bacskai, Brian J.; Kumar, Anand T. N.

    2016-01-01

    We use the model resolution matrix to analytically derive an optimal Bayesian estimator for multiparameter inverse problems that simultaneously minimizes inter-parameter cross talk and the total reconstruction error. Application of this estimator to time-domain diffuse fluorescence imaging shows that the optimal estimator for lifetime multiplexing is identical to a previously developed asymptotic time-domain (ATD) approach, except for the inclusion of a diagonal regularization term containing decay amplitude uncertainties. We show that, while the optimal estimator and ATD provide zero cross talk, the optimal estimator provides lower reconstruction error, while ATD results in superior relative quantitation. The framework presented here is generally applicable to other multiplexing problems where the simultaneous and accurate relative quantitation of multiple parameters is of interest. PMID:27192234
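    The structure of an estimator of this kind (a linear inverse with a diagonal regularization term, whose model resolution matrix R = WA quantifies inter-parameter cross talk through its off-diagonal entries) can be sketched generically. This is not the paper's time-domain fluorescence forward model: the matrix A and regularization values below are random placeholders standing in for decay-amplitude uncertainties.

```python
import numpy as np

def regularized_estimator(A, diag_reg):
    """Linear MAP-style estimator W = (A^T A + D)^-1 A^T with a diagonal
    regularization D. The model resolution matrix R = W A measures
    cross talk between reconstructed parameters (off-diagonal terms)."""
    AtA = A.T @ A
    W = np.linalg.solve(AtA + np.diag(diag_reg), A.T)
    return W, W @ A

# Placeholder forward model: 50 measurements of 2 parameters.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 2))
W, R = regularized_estimator(A, diag_reg=[0.1, 0.1])
cross_talk = abs(R[0, 1]) + abs(R[1, 0])
```

    With small diagonal regularization, R stays close to the identity, i.e. low cross talk at the price of a modest bias, which is the trade-off the abstract optimizes.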

  19. Quantitative Estimates of Temporal Mixing across a 4th-order Depositional Sequence: Variation in Time-averaging along the Holocene Marine Succession of the Po Plain, Italy

    NASA Astrophysics Data System (ADS)

    Scarponi, D.; Kaufman, D.; Bright, J.; Kowalewski, M.

    2009-04-01

    Single fossiliferous beds contain biotic remnants that commonly vary in age over a time span of hundreds to thousands of years. Multiple recent studies suggest that such temporal mixing is a widespread phenomenon in marine depositional systems. This research focuses on quantitative estimates of temporal mixing obtained by direct dating of individual corbulid bivalve shells (Lentidium mediterraneum and Corbula gibba) from Po plain marine units of the Holocene 4th-order depositional sequence, including the Transgressive Systems Tract [TST] and Highstand Systems Tract [HST]. These units display a distinctive succession of facies consisting of brackish to marginal marine retrogradational deposits (early TST), overlain by fully marine fine to coarse gray sands (late TST), and capped with progradational deltaic clays and sands (HST). More than 300 corbulid specimens, representing 19 shell-rich horizons evenly distributed along the depositional sequence and sampled from 9 cores, have been dated by means of aspartic acid racemization calibrated using 23 AMS-radiocarbon dates (14 dates for Lentidium mediterraneum and 9 dates for Corbula gibba). The results indicate that the scale of time-averaging is comparable when similar depositional environments from the same systems tract are compared across cores. However, time-averaging is notably different when similar depositional environments from the TST and HST segments of the sequence are compared. Specifically, late HST horizons (n=8) display relatively low levels of time-averaging: the mean within-horizon range of shell ages is 537 years and the standard deviation averages 165 years. In contrast, late TST horizons (n=7) are dramatically more time-averaged: a mean range of 5104 years and a mean standard deviation of 1420 years. Thus, late TST horizons experience an order of magnitude more time-averaging than environmentally comparable late HST horizons. 
In conclusion the HST and TST systems tracts of the Po Plain display
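    The summary statistics reported in this record (mean within-horizon age range and mean within-horizon standard deviation across dated shells) can be computed as follows. The shell-age lists below are hypothetical, not the study's amino-acid-racemization dates.

```python
import statistics

def time_averaging_summary(horizons):
    """Mean within-horizon age range and mean within-horizon standard
    deviation across a set of horizons, each a list of shell ages (years)."""
    ranges = [max(h) - min(h) for h in horizons]
    sds = [statistics.stdev(h) for h in horizons]
    return statistics.fmean(ranges), statistics.fmean(sds)

# Hypothetical calibrated shell ages (years) for two HST-like horizons.
horizons = [[300, 450, 520, 610], [120, 260, 400, 500]]
mean_range, mean_sd = time_averaging_summary(horizons)
```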

  20. Quantitative Graphics in Newspapers.

    ERIC Educational Resources Information Center

    Tankard, James W., Jr.

    The use of quantitative graphics in newspapers requires achieving a balance between being accurate and getting the attention of the reader. The statistical representations in newspapers are drawn by graphic designers whose key technique is fusion--the striking combination of two visual images. This technique often results in visual puns,…

  1. Quantitative Photoacoustic Image Reconstruction using Fluence Dependent Chromophores

    PubMed Central

    Cox, B.T.; Laufer, J.G.; Beard, P.C.

    2010-01-01

    In biomedical photoacoustic imaging the images are proportional to the absorbed optical energy density, and not the optical absorption, which makes it difficult to obtain a quantitatively accurate image showing the concentration of a particular absorbing chromophore from photoacoustic measurements alone. Here it is shown that the spatially varying concentration of a chromophore whose absorption becomes zero above a threshold light fluence can be estimated from photoacoustic images obtained at increasing illumination strengths. This technique provides an alternative to model-based multiwavelength approaches to quantitative photoacoustic imaging, and a new approach to photoacoustic molecular and functional imaging. PMID:21258458

  2. Lower reference limits of quantitative cord glucose-6-phosphate dehydrogenase estimated from healthy term neonates according to the clinical and laboratory standards institute guidelines: a cross sectional retrospective study

    PubMed Central

    2013-01-01

    Background Previous studies have reported the lower reference limit (LRL) of quantitative cord glucose-6-phosphate dehydrogenase (G6PD), but they have not used approved international statistical methodology. Using common standards is expecting to yield more true findings. Therefore, we aimed to estimate LRL of quantitative G6PD detection in healthy term neonates by using statistical analyses endorsed by the International Federation of Clinical Chemistry (IFCC) and the Clinical and Laboratory Standards Institute (CLSI) for reference interval estimation. Methods This cross sectional retrospective study was performed at King Abdulaziz Hospital, Saudi Arabia, between March 2010 and June 2012. The study monitored consecutive neonates born to mothers from one Arab Muslim tribe that was assumed to have a low prevalence of G6PD-deficiency. Neonates that satisfied the following criteria were included: full-term birth (37 weeks); no admission to the special care nursery; no phototherapy treatment; negative direct antiglobulin test; and fathers of female neonates were from the same mothers’ tribe. The G6PD activity (Units/gram Hemoglobin) was measured spectrophotometrically by an automated kit. This study used statistical analyses endorsed by IFCC and CLSI for reference interval estimation. The 2.5th percentiles and the corresponding 95% confidence intervals (CI) were estimated as LRLs, both in presence and absence of outliers. Results 207 males and 188 females term neonates who had cord blood quantitative G6PD testing met the inclusion criteria. Method of Horn detected 20 G6PD values as outliers (8 males and 12 females). Distributions of quantitative cord G6PD values exhibited a normal distribution in absence of the outliers only. The Harris-Boyd method and proportion criteria revealed that combined gender LRLs were reliable. The combined bootstrap LRL in presence of the outliers was 10.0 (95% CI: 7.5-10.7) and the combined parametric LRL in absence of the outliers was 11
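    A nonparametric lower reference limit of the kind described (2.5th percentile with a bootstrap confidence interval, in the spirit of the CLSI nonparametric approach) can be sketched as follows. The simulated G6PD activities are synthetic stand-ins for the cord-blood data, and the bootstrap settings are illustrative choices, not the paper's exact procedure.

```python
import random
import statistics

def lower_reference_limit(values, n_boot=2000, seed=1):
    """Nonparametric LRL (2.5th percentile) with a bootstrap-percentile
    95% confidence interval."""
    # Index 24 of the 999 cut points from n=1000 is the 2.5th percentile.
    lrl = statistics.quantiles(values, n=1000)[24]
    rng = random.Random(seed)
    boots = sorted(
        statistics.quantiles(rng.choices(values, k=len(values)), n=1000)[24]
        for _ in range(n_boot))
    return lrl, (boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)])

# Synthetic cord G6PD activities (U/g Hb) for 395 neonates.
rng = random.Random(0)
g6pd = [rng.gauss(15.0, 2.0) for _ in range(395)]
lrl, ci = lower_reference_limit(g6pd)
```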

  3. Quantitative film radiography

    SciTech Connect

    Devine, G.; Dobie, D.; Fugina, J.; Hernandez, J.; Logan, C.; Mohr, P.; Moss, R.; Schumacher, B.; Updike, E.; Weirup, D.

    1991-02-26

    We have developed a system of quantitative radiography in order to produce quantitative images displaying homogeneity of parts. The materials that we characterize are synthetic composites and may contain important subtle density variations not discernible by examining a raw film x-radiograph. In order to quantitatively interpret film radiographs, it is necessary to digitize, interpret, and display the images. Our integrated system of quantitative radiography displays accurate, high-resolution pseudo-color images in units of density. We characterize approximately 10,000 parts per year in hundreds of different configurations and compositions with this system. This report discusses: the method; film processor monitoring and control; verifying film and processor performance; and correction of scatter effects.

  4. Does the Spectrum model accurately predict trends in adult mortality? Evaluation of model estimates using empirical data from a rural HIV community cohort study in north-western Tanzania

    PubMed Central

    Michael, Denna; Kanjala, Chifundo; Calvert, Clara; Pretorius, Carel; Wringe, Alison; Todd, Jim; Mtenga, Balthazar; Isingo, Raphael; Zaba, Basia; Urassa, Mark

    2014-01-01

    Introduction Spectrum epidemiological models are used by UNAIDS to provide global, regional and national HIV estimates and projections, which are then used for evidence-based health planning for HIV services. However, there are no validations of the Spectrum model against empirical serological and mortality data from populations in sub-Saharan Africa. Methods Serologic, demographic and verbal autopsy data have been regularly collected among over 30,000 residents in north-western Tanzania since 1994. Five-year age-specific mortality rates (ASMRs) per 1,000 person-years and the probability of dying between 15 and 60 years of age (45Q15) were calculated and compared with the Spectrum model outputs. Mortality trends by HIV status are shown for periods before the introduction of antiretroviral therapy (1994–1999, 2000–2005) and the first 5 years afterwards (2005–2009). Results Among 30–34 year olds of both sexes, observed ASMRs per 1,000 person-years were 13.33 (95% CI: 10.75–16.52) in the period 1994–1999, 11.03 (95% CI: 8.84–13.77) in 2000–2004, and 6.22 (95% CI: 4.75–8.15) in 2005–2009. Among the same age group, the ASMRs estimated by the Spectrum model were 10.55, 11.13 and 8.15 for the periods 1994–1999, 2000–2004 and 2005–2009, respectively. The cohort data, for both sexes combined, showed that the 45Q15 declined from 39% (95% CI: 27–55%) in 1994 to 22% (95% CI: 17–29%) in 2009, whereas the Spectrum model predicted a decline from 43% in 1994 to 37% in 2009. Conclusion From 1994 to 2009, the observed decrease in ASMRs was steeper in younger age groups than that predicted by the Spectrum model, perhaps because the Spectrum model under-estimated the ASMRs in 30–34 year olds in 1994–99. However, the Spectrum model predicted a greater decrease in 45Q15 mortality than observed in the cohort, although the reasons for this over-estimate are unclear. PMID:24438873
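    The 45Q15 summary measure used above can be derived from the nine 5-year ASMRs (ages 15-19 through 55-59). Under a constant-hazard assumption within each age band, 45q15 = 1 - exp(-5 * Σ m_x). The ASMR values below are illustrative, not the cohort's estimates.

```python
import math

def q_45_15(asmr_per_1000):
    """Probability of dying between exact ages 15 and 60 from nine 5-year
    age-specific mortality rates (per 1,000 person-years), assuming a
    constant hazard within each 5-year band."""
    total_hazard = 5 * sum(m / 1000 for m in asmr_per_1000)
    return 1 - math.exp(-total_hazard)

# Hypothetical ASMRs per 1,000 person-years, ages 15-19 ... 55-59.
asmrs = [2.0, 4.0, 8.0, 13.3, 12.0, 10.0, 9.0, 11.0, 14.0]
q = q_45_15(asmrs)
```

    Life-table implementations instead convert each m_x to a band-specific 5q_x before chaining survival probabilities; for low mortality rates the two approaches agree closely.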

  5. Foucault test: a quantitative evaluation method.

    PubMed

    Rodríguez, Gustavo; Villa, Jesús; Ivanov, Rumen; González, Efrén; Martínez, Geminiano

    2016-08-01

    Reliable and accurate testing methods are essential to guiding the polishing process during the figuring of optical telescope mirrors. With the natural advancement of technology, the procedures and instruments used to carry out this delicate task have consistently increased in sensitivity, but also in complexity and cost. Fortunately, throughout history, the Foucault knife-edge test has shown the potential to measure transverse aberrations on the order of the wavelength, mainly when described in terms of physical theory, which allows a quantitative interpretation of its characteristic shadow maps. Our previous publication on this topic derived a closed mathematical formulation that directly relates the knife-edge position with the observed irradiance pattern. The present work addresses the largely unexplored problem of estimating the wavefront's gradient from experimental captures of the test, which is achieved by means of an optimization algorithm featuring a proposed ad hoc cost function. The partial derivatives thereby calculated are then integrated by means of a Fourier-based algorithm to retrieve the mirror's actual surface profile. To date and to the best of our knowledge, this is the very first time that a complete mathematically grounded treatment of this optical phenomenon is presented, complemented by an image-processing algorithm which allows a quantitative calculation of the corresponding slope at any given point of the mirror's surface, so that it becomes possible to accurately estimate the aberrations present in the analyzed concave device just through its associated foucaultgrams. PMID:27505659
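    The final Fourier-based integration step (recovering a surface from its measured partial derivatives) can be sketched with a generic Frankot-Chellappa-style least-squares integrator. This is a standard reconstruction technique assumed for illustration, not necessarily the authors' exact algorithm; it also assumes periodic boundary conditions.

```python
import numpy as np

def integrate_gradients(gx, gy):
    """Least-squares Fourier integration of a gradient field (gx, gy)
    into a surface z, up to an unknown constant (the mean is set to 0).
    For each spatial frequency, solves |kx*Z - Gx|^2 + |ky*Z - Gy|^2."""
    ny, nx = gx.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx)[None, :]
    ky = 2j * np.pi * np.fft.fftfreq(ny)[:, None]
    denom = kx * np.conj(kx) + ky * np.conj(ky)
    denom[0, 0] = 1.0  # avoid 0/0 at DC; the numerator there is 0 anyway
    num = np.conj(kx) * np.fft.fft2(gx) + np.conj(ky) * np.fft.fft2(gy)
    return np.real(np.fft.ifft2(num / denom))
```

    Because the slopes enter only through their Fourier transforms, noise in the estimated gradients is averaged globally rather than accumulated along integration paths.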

  6. Doppler derived quantitative flow estimate in coronary artery bypass graft: a computational multiscale model for the evaluation of the current clinical procedure.

    PubMed

    Ponzini, Raffaele; Lemma, Massimo; Morbiducci, Umberto; Montevecchi, Franco M; Redaelli, Alberto

    2008-09-01

    In order to investigate the reliability of the so-called mean velocity/vessel area formula adopted in clinical practice for the estimation of flow rate using intravascular Doppler guide wire instrumentation, a multiscale computational model was used to give detailed predictions of flow profiles within Y-shaped coronary artery bypass graft (CABG) models. For this purpose, three CABG models were built from clinical patient data and used to evaluate and compare, in each model, the computed flow rate and the flow rate estimated under the assumption of a parabolic velocity profile. A consistent difference between the exact and estimated values of the flow rate was found in every branch of all the graft models. In this study we showed that this discrepancy in flow rate estimation is consistent with Womersley's theory of spatial velocity profiles under unsteady flow conditions. In particular, this work showed that the error in flow rate estimation can be reduced by using the estimation formula recently proposed by Ponzini et al. [Ponzini R, Vergara C, Redaelli A, Veneziani A. Reliable CFD-based estimation of flow rate in haemodynamics measures. Ultrasound Med Biol 2006;32(10):1545-55], which accounts for the unsteady nature of blood flow and is applicable in clinical practice without resorting to further measurements. PMID:17980641
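    The clinical formula under scrutiny is simple: assuming a parabolic (Poiseuille) profile, the mean velocity is half the Doppler-measured centerline maximum, and Q = v_mean × A. A minimal sketch with hypothetical input values follows; the abstract's point is precisely that this parabolic assumption breaks down in pulsatile flow.

```python
import math

def flow_rate_parabolic(v_max_cm_s, diameter_mm):
    """Clinical mean-velocity/vessel-area estimate: assumes a parabolic
    (Poiseuille) profile, so v_mean = v_max / 2 and Q = v_mean * A.
    Returns flow in mL/min."""
    area_cm2 = math.pi * (diameter_mm / 20) ** 2  # mm diameter -> cm radius
    return (v_max_cm_s / 2) * area_cm2 * 60       # cm^3/s -> mL/min

# Hypothetical graft: 3 mm lumen, 40 cm/s peak centerline velocity.
q = flow_rate_parabolic(v_max_cm_s=40.0, diameter_mm=3.0)
```

    Womersley-corrected formulas such as the one cited replace the fixed factor 1/2 with a coefficient that depends on the pulsation frequency and vessel radius.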

  7. Estimating potential evapotranspiration with improved radiation estimation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Potential evapotranspiration (PET) is of great importance to estimation of surface energy budget and water balance calculation. The accurate estimation of PET will facilitate efficient irrigation scheduling, drainage design, and other agricultural and meteorological applications. However, accuracy o...

  8. Estimation of environmental properties for inorganic compounds using LSER

    USGS Publications Warehouse

    Hickey, James P.

    1999-01-01

    The Great Lakes Science Center has devised values for inorganic species for use in the environmental property-predictive quantitative structure-activity relationship (QSAR) known as the Linear Solvation Energy Relationship (LSER). Property estimation has been difficult for inorganic species. In this presentation, aqueous solubility, bioconcentration, and acute aquatic toxicity are estimated for inorganic compounds using existing LSER equations. The best estimations arise from the most accurate description of the predominant solution species, many within an order of magnitude. The toxicities also depend on an estimation of the bioactive amount and configuration. A number of anion/cation combinations (salts) still resist accurate property estimation, and the reasons are not currently understood. These new variable values will greatly extend the application and utility of LSER for the estimation of environmental properties.
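    An LSER is a linear free-energy model of the general form log(property) = c + m·V + s·π + a·α + b·β, where V is a cavity/volume term and π, α, β are solvatochromic descriptors. The sketch below shows only this generic structure; every coefficient and descriptor value is an illustrative placeholder, not a value from the presentation.

```python
def lser_estimate(c, m, s, a, b, descriptors):
    """Generic LSER form: log(property) = c + m*V + s*pi + a*alpha + b*beta.
    descriptors = (V, pi, alpha, beta) for the solute of interest.
    All coefficients and descriptor values here are illustrative."""
    V, pi, alpha, beta = descriptors
    return c + m * V + s * pi + a * alpha + b * beta

# Placeholder coefficients and descriptors for a hypothetical solute.
log_prop = lser_estimate(0.05, -3.4, 0.5, 0.1, 4.2,
                         descriptors=(0.30, 0.9, 0.3, 0.6))
```

    The work described above amounts to assigning defensible descriptor values (V, π, α, β) to inorganic solution species so that equations of this form, fitted on organic compounds, can be extended to them.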

  9. Teleseismic Lg of Semipalatinsk and Novaya Zemlya Nuclear Explosions Recorded by the GRF (Gräfenberg) Array: Comparison with Regional Lg (BRV) and their Potential for Accurate Yield Estimation

    NASA Astrophysics Data System (ADS)

    Schlittenhardt, J.

    - A comparison of regional and teleseismic log rms (root-mean-square) Lg amplitude measurements has been made for 14 underground nuclear explosions from the East Kazakh test site recorded both by the BRV (Borovoye) station in Kazakhstan and the GRF (Gräfenberg) array in Germany. The log rms Lg amplitudes observed at the BRV regional station at a distance of 690 km and at the teleseismic GRF array at a distance exceeding 4700 km show very similar relative values (standard deviation 0.048 magnitude units) for underground explosions of different sizes at the Shagan River test site. This result, as well as the comparison of BRV rms Lg magnitudes (which were calculated from the log rms amplitudes using an appropriate calibration) with magnitude determinations for P waves of global seismic networks (standard deviation 0.054 magnitude units), points to a high precision in estimating the relative source sizes of explosions from Lg-based single-station data. Similar results were also obtained by other investigators (Patton, 1988; Ringdal et al., 1992) using Lg data from different stations at different distances. Additionally, GRF log rms Lg and P-coda amplitude measurements were made for a larger data set from Novaya Zemlya and East Kazakh explosions, which were supplemented with mb(Lg) amplitude measurements using a modified version of Nuttli's (1973, 1986a) method. From this test of the relative performance of the three different magnitude scales, it was found that the Lg- and P-coda-based magnitudes performed equally well, whereas the modified Nuttli mb(Lg) magnitudes show greater scatter when compared to the worldwide mb reference magnitudes. Whether this result indicates that the rms amplitude measurements are superior to the zero-to-peak amplitude measurement of a single cycle used for the modified Nuttli method, however, cannot be finally assessed, since the calculated mb(Lg) magnitudes are only preliminary until appropriate attenuation corrections are available for the
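    The basic log rms Lg measurement underlying these magnitude comparisons can be sketched simply: take the rms amplitude of the trace over the Lg group-velocity window and its base-10 logarithm; a station magnitude then adds a station/path calibration constant. The trace samples and window indices below are synthetic placeholders.

```python
import math

def log_rms_lg(trace, window):
    """log10 of the rms amplitude over an Lg group-velocity window,
    given as (start, end) sample indices. A station magnitude is this
    value plus a station/path calibration constant (not shown)."""
    i0, i1 = window
    seg = trace[i0:i1]
    rms = math.sqrt(sum(x * x for x in seg) / len(seg))
    return math.log10(rms)

# Synthetic trace; the Lg window covers samples 1..5.
trace = [0.0, 1.0, -1.0, 2.0, -2.0, 1.0, 0.0, 0.0]
log_rms = log_rms_lg(trace, window=(1, 6))
```

    Averaging over a whole wave train this way, rather than picking the zero-to-peak amplitude of a single cycle, is the likely reason the rms measurements scatter less in the comparisons above.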

  10. Isolation and Quantitative Estimation of Diesel Exhaust and Carbon Black Particles Ingested by Lung Epithelial Cells and Alveolar Macrophages In Vitro

    EPA Science Inventory

    A new procedure for isolating and estimating ingested carbonaceous diesel exhaust particles (DEP) or carbon black (CB) particles by lung epithelial cells and macrophages is described. Cells were incubated with DEP or CB to examine cell-particle interaction and ingestion. After va...

  11. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than in the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.

  12. Quantitative fibrosis estimation