A NOVEL TECHNIQUE FOR QUANTITATIVE ESTIMATION OF UPTAKE OF DIESEL EXHAUST PARTICLES BY LUNG CELLS
While airborne particulates such as diesel exhaust particles (DEP) exert significant toxicological effects on the lungs, quantitative estimation of DEP accumulation inside lung cells has not been reported, owing to the lack of an accurate and quantitative technique for this purpose. I...
McGarry, Bryony L; Rogers, Harriet J; Knight, Michael J; Jokivarsi, Kimmo T; Sierra, Alejandra; Gröhn, Olli Hj; Kauppinen, Risto A
2016-08-01
Quantitative T2 relaxation magnetic resonance imaging allows estimation of stroke onset time. We aimed to examine the accuracy of quantitative T1 and quantitative T2 relaxation times, alone and in combination, in providing estimates of stroke onset time in a rat model of permanent focal cerebral ischemia, and to map the spatial distribution of elevated quantitative T1 and quantitative T2 to assess tissue status. Permanent middle cerebral artery occlusion was induced in Wistar rats. Animals were scanned at 9.4T for quantitative T1, quantitative T2, and the trace of the diffusion tensor (Dav) up to 4 h post-middle cerebral artery occlusion. Time courses of the differences in quantitative T1 and quantitative T2 between ischemic and non-ischemic contralateral brain tissue (ΔT1, ΔT2) and the volumes of tissue with elevated T1 and T2 relaxation times (f1, f2) were determined. TTC staining was used to highlight permanent ischemic damage. ΔT1, ΔT2, f1, f2, and the volume of tissue with both elevated quantitative T1 and quantitative T2 (V(Overlap)) increased with time post-middle cerebral artery occlusion, allowing stroke onset time to be estimated. V(Overlap) provided the most accurate estimate, with an uncertainty of ±25 min. At all time-points, regions with elevated relaxation times were smaller than areas with Dav-defined ischemia. Stroke onset time can be determined from quantitative T1 and quantitative T2 relaxation times and tissue volumes. Combining quantitative T1 and quantitative T2 provides the most accurate estimate and potentially identifies irreversibly damaged brain tissue. © 2016 World Stroke Organization.
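A minimal sketch of how an overlap volume such as V(Overlap) could be computed from thresholded relaxation-time maps and turned into an onset-time estimate by inverting a fitted growth curve. The threshold rule, voxel size, and calibration coefficients below are illustrative assumptions, not the study's values.

```python
import numpy as np

# Illustrative quantitative T1/T2 maps (ms) for one slice; real maps come from MRI fitting.
rng = np.random.default_rng(1)
qT1 = 1600 + 100 * rng.standard_normal((128, 128))
qT2 = 45 + 4 * rng.standard_normal((128, 128))
qT1[40:80, 40:80] += 300   # mock ischemic region with elevated T1
qT2[40:80, 50:90] += 12    # mock ischemic region with elevated T2

# "Elevated" relaxation time: above contralateral mean + 2 SD (assumed rule).
f1_mask = qT1 > (1600 + 2 * 100)
f2_mask = qT2 > (45 + 2 * 4)
voxel_volume_ul = 0.5      # assumed voxel volume in microliters
v_overlap = np.logical_and(f1_mask, f2_mask).sum() * voxel_volume_ul

# If V(Overlap) grows roughly linearly with time post-occlusion, onset time
# follows by inverting a calibration line t = (V - a) / b (a, b fitted elsewhere).
a, b = 50.0, 12.0          # hypothetical calibration coefficients
onset_minutes_ago = (v_overlap - a) / b
print(f"V(Overlap) = {v_overlap:.0f} uL, estimated time since onset = {onset_minutes_ago:.0f} min")
```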
Quantitative Analysis of Radar Returns from Insects
NASA Technical Reports Server (NTRS)
Riley, J. R.
1979-01-01
When the number of flying insects is low enough to permit their resolution as individual radar targets, quantitative estimates of their aerial density can be developed. Accurate measurements of heading distribution, obtained with a rotating-polarization radar to enhance the wingbeat-frequency method of identification, are also presented.
A quantitative reconstruction software suite for SPECT imaging
NASA Astrophysics Data System (ADS)
Namías, Mauro; Jeraj, Robert
2017-11-01
Quantitative Single Photon Emission Computed Tomography (SPECT) imaging allows measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT-based attenuation correction and scatter correction in hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid or dedicated SPECT systems with a separate CT scanner. Attenuation, scatter and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm. A novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom, and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at the organ level. This software suite helps increase the quantitative accuracy of SPECT scanners.
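A minimal sketch of the core OSEM multiplicative update used in emission tomography, with a toy random system matrix in place of a real SPECT projector. In a real implementation the attenuation, scatter, and collimator-response corrections mentioned above enter through the system matrix and an additive scatter term, both omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_bins = 64, 96
A = rng.random((n_bins, n_pix))            # toy system matrix (real SPECT: corrected projector)
x_true = np.zeros(n_pix); x_true[20:30] = 5.0
y = rng.poisson(A @ x_true)                # noisy projection data

n_subsets = 4
subsets = np.array_split(np.arange(n_bins), n_subsets)
x = np.ones(n_pix)                         # flat initial estimate
for it in range(10):
    for sub in subsets:                    # one OSEM pass = one update per subset
        As = A[sub]
        ratio = y[sub] / np.maximum(As @ x, 1e-12)
        x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
print("recovered peak activity ~", x[20:30].mean())
```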
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, given a specific set of measurements of birefringence and SNR. The PDF was pre-computed by Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and by in vivo measurements of the anterior and posterior eye segments as well as of skin. The new estimator shows superior performance and clearer image contrast.
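A minimal sketch of the general idea: pre-compute, by Monte Carlo simulation of an assumed noise model, the likelihood of a measured retardation given each candidate true retardation at a given SNR, then report the posterior maximum. The noise model, grids, and flat prior here are placeholders, not the JM-OCT forward model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
true_grid = np.linspace(0, 90, 91)              # candidate true retardations (deg)
meas_bins = np.linspace(0, 90, 46)              # histogram bins for measured values

def noisy_measurement(theta_true, snr, n):
    # Placeholder noise model: bias and spread grow as SNR drops (stands in for JM-OCT physics).
    sigma = 40.0 / snr
    return np.clip(theta_true + sigma * rng.standard_normal(n) + 0.5 * sigma, 0, 90)

def build_likelihood_table(snr, n_mc=20000):
    # P(measured bin | true value) for one SNR, estimated by Monte Carlo.
    table = np.empty((true_grid.size, meas_bins.size - 1))
    for i, t in enumerate(true_grid):
        h, _ = np.histogram(noisy_measurement(t, snr, n_mc), bins=meas_bins, density=True)
        table[i] = h + 1e-12
    return table

table = build_likelihood_table(snr=5.0)
measured = 32.5                                  # one measured retardation (deg)
j = np.searchsorted(meas_bins, measured) - 1
posterior = table[:, j]                          # flat prior over true_grid assumed
print("MAP estimate of true retardation:", true_grid[np.argmax(posterior)], "deg")
```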
Distinguishing ferritin from apoferritin using magnetic force microscopy
NASA Astrophysics Data System (ADS)
Nocera, Tanya M.; Zeng, Yuzhi; Agarwal, Gunjan
2014-11-01
Estimating the amount of iron-replete ferritin versus iron-deficient apoferritin proteins is important in biomedical and nanotechnology applications. This work introduces a simple and novel approach to quantifying ferritin using magnetic force microscopy (MFM). We demonstrate how high-magnetic-moment probes enhance the magnitude of the MFM signal, thus enabling accurate quantitative estimation of ferritin content in ferritin/apoferritin mixtures in vitro. We envisage that MFM could be adapted to accurately determine ferritin content in protein mixtures or in small aliquots of clinical samples.
Lau, Darryl; Hervey-Jumper, Shawn L; Han, Seunggu J; Berger, Mitchel S
2018-05-01
OBJECTIVE There is ample evidence that extent of resection (EOR) is associated with improved outcomes for glioma surgery. However, it is often difficult to accurately estimate EOR intraoperatively, and surgeon accuracy has yet to be reviewed. In this study, the authors quantitatively assessed the accuracy of intraoperative perception of EOR during awake craniotomy for tumor resection. METHODS A single-surgeon experience of performing awake craniotomies for tumor resection over a 17-year period was examined. Operative reports were retrospectively reviewed for quantitative estimates of EOR. Definitive EOR was based on postoperative MRI. Accuracy of EOR estimation was examined both as a general outcome (gross-total resection [GTR] or subtotal resection [STR]) and quantitatively (within 5% of the EOR measured on postoperative MRI). Patient demographics, tumor characteristics, and surgeon experience were examined. The effects of accuracy on motor and language outcomes were assessed. RESULTS A total of 451 patients were included in the study. Overall accuracy of intraoperative perception of whether GTR or STR was achieved was 79.6%, and overall accuracy of quantitative perception of resection (within 5% of postoperative MRI) was 81.4%. There was a significant difference (p = 0.049) in accuracy for gross perception over the 17-year period, with improvement in the later years: 1997-2000 (72.6%), 2001-2004 (78.5%), 2005-2008 (80.7%), and 2009-2013 (84.4%). Similarly, there was a significant improvement (p = 0.015) in accuracy of quantitative perception of EOR over the 17-year period: 1997-2000 (72.2%), 2001-2004 (69.8%), 2005-2008 (84.8%), and 2009-2013 (93.4%). This improvement in accuracy is demonstrated by the significantly higher odds of correctly estimating quantitative EOR in the later years of the series on multivariate logistic regression. Insular tumors were associated with the highest accuracy of gross perception (89.3%; p = 0.034) but the lowest accuracy of quantitative perception (61.1% correct; p < 0.001) compared with tumors in other locations. Even after adjusting for surgeon experience, this trend for insular tumors remained true. The absence of 1p19q co-deletion was associated with higher quantitative perception accuracy (96.9% vs 81.5%; p = 0.051). Tumor grade, recurrence, diagnosis, and isocitrate dehydrogenase-1 (IDH-1) status were not associated with accurate perception of EOR. Overall, new neurological deficits occurred in 8.4% of cases, and 42.1% of those persisted after the 3-month follow-up. Correct quantitative perception was associated with a lower rate of postoperative motor deficits (2.4%) compared with incorrect perception (8.0%; p = 0.029). There were no detectable differences in language outcomes based on perception of EOR. CONCLUSIONS The findings from this study suggest that there is a learning curve associated with the ability to accurately assess intraoperative EOR during glioma surgery, and that it may take more than a decade to become truly proficient. Understanding the factors associated with this ability will help provide safer surgeries while maximizing tumor resection.
Üstündağ, Özgür; Dinç, Erdal; Özdemir, Nurten; Tilkan, M Günseli
2015-01-01
In the development of new drug products and generic drug products, the simultaneous in-vitro dissolution behavior of oral dosage formulations is the most important indicator for quantitative estimation of the efficiency and biopharmaceutical characteristics of drug substances. This drives scientists in the field to develop powerful analytical methods to obtain more reliable, precise and accurate results in the quantitative analysis and dissolution testing of drug formulations. In this context, two chemometric tools, partial least squares (PLS) and principal component regression (PCR), were developed for the simultaneous quantitative estimation and dissolution testing of zidovudine (ZID) and lamivudine (LAM) in a tablet dosage form. The results obtained in this study strongly encourage their use for quality control, routine analysis and dissolution testing of marketed tablets containing ZID and LAM.
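A minimal sketch of how PLS and PCR map multiwavelength spectra to analyte concentrations, using synthetic two-component spectra in place of real ZID/LAM absorbance data; the component counts, band shapes, and concentration ranges are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
wavelengths = np.linspace(220, 320, 101)
s1 = np.exp(-0.5 * ((wavelengths - 260) / 12) ** 2)   # mock ZID spectrum
s2 = np.exp(-0.5 * ((wavelengths - 280) / 15) ** 2)   # mock LAM spectrum
C = rng.uniform(5, 25, size=(40, 2))                  # training concentrations (ug/mL)
X = C @ np.vstack([s1, s2]) + 0.005 * rng.standard_normal((40, 101))

pls = PLSRegression(n_components=2).fit(X, C)
pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, C)

x_new = np.array([12.0, 18.0]) @ np.vstack([s1, s2])  # "unknown" mixture spectrum
print("PLS estimate:", pls.predict(x_new[None, :])[0])
print("PCR estimate:", pcr.predict(x_new[None, :])[0])
```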
Sasaki, Tomonari; Tahira, Tomoko; Suzuki, Akari; Higasa, Koichiro; Kukita, Yoji; Baba, Shingo; Hayashi, Kenshi
2001-01-01
We show that single-nucleotide polymorphisms (SNPs) of moderate to high heterozygosity (minor allele frequencies >10%) can be efficiently detected, and their allele frequencies accurately estimated, by pooling the DNA samples and applying a capillary-based SSCP analysis. In this method, alleles are separated into peaks, and their frequencies can be reliably and accurately quantified from their peak heights (SD <1.8%). We found that as many as 40% of publicly available SNPs analyzed by this method have widely differing allele frequency distributions between groups of different ethnicity (parents of Centre d'Etude du Polymorphisme Humain families vs. Japanese individuals). These results demonstrate the effectiveness of the present pooling method in the reevaluation of candidate SNPs that have been collected by examination of limited numbers of individuals. The method should also serve as a robust quantitative technique for studies in which a precise estimate of SNP allele frequencies is essential, for example in linkage disequilibrium analysis. PMID:11083945
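In outline, the pooled-sample estimate reduces to a peak-height ratio. For a biallelic SNP with pooled peak heights $H_A$ and $H_a$, and a correction factor $k$ calibrated from a known heterozygote to absorb allele-specific signal differences (the calibration step is an assumption of this sketch, not a detail stated above):

$$\hat{p}_A = \frac{H_A}{H_A + k\,H_a}, \qquad k = \left(\frac{H_A}{H_a}\right)_{\text{het}} .$$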
Genomic Quantitative Genetics to Study Evolution in the Wild.
Gienapp, Phillip; Fior, Simone; Guillaume, Frédéric; Lasky, Jesse R; Sork, Victoria L; Csilléry, Katalin
2017-12-01
Quantitative genetic theory provides a means of estimating the evolutionary potential of natural populations. However, this approach was previously feasible only in systems where the genetic relatedness between individuals could be inferred from pedigrees or experimental crosses. The genomic revolution opened up the possibility of obtaining the realized proportion of the genome shared among individuals in natural populations of virtually any species, promising (more) accurate estimates of quantitative genetic parameters. Such a 'genomic' quantitative genetics approach relies on fewer assumptions, offers greater methodological flexibility, and is thus expected to greatly enhance our understanding of evolution in natural populations, for example in the context of adaptation to environmental change, eco-evolutionary dynamics, and biodiversity conservation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie
2012-06-01
Soil loss prediction models such as the universal soil loss equation (USLE) and the revised universal soil loss equation (RUSLE) are useful tools for risk assessment of soil erosion and planning of soil conservation at the regional scale. Rational estimation of the vegetation cover and management factor, one of the most important parameters in USLE and RUSLE, is particularly important for accurate prediction of soil erosion. Traditional estimation based on field survey and measurement is time-consuming, laborious, and costly, and cannot rapidly extract the vegetation cover and management factor at the macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for estimating the vegetation cover and management factor over broad geographic areas. This paper summarizes research findings on the quantitative estimation of the vegetation cover and management factor from remote sensing data and analyzes the advantages and disadvantages of the various methods, with the aim of providing a reference for further research on quantitative estimation of the vegetation cover and management factor at large scales.
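For context, the (R)USLE family predicts mean annual soil loss as a product of factors, the cover and management factor C being the one targeted above; in the standard textbook form,

$$A = R \cdot K \cdot L \cdot S \cdot C \cdot P,$$

where A is mean annual soil loss, R rainfall erosivity, K soil erodibility, L and S slope length and steepness, C cover and management, and P support practice. Remote sensing-based estimation of C typically maps vegetation indices to this dimensionless factor.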
Global, long-term surface reflectance records from Landsat
USDA-ARS?s Scientific Manuscript database
Global, long-term monitoring of changes in Earth’s land surface requires quantitative comparisons of satellite images acquired under widely varying atmospheric conditions. Although physically based estimates of surface reflectance (SR) ultimately provide the most accurate representation of Earth’s s...
Revisiting soil carbon and nitrogen sampling: quantitative pits versus rotary cores
USDA-ARS?s Scientific Manuscript database
Increasing atmospheric carbon dioxide and its feedbacks with global climate have sparked renewed interest in quantifying ecosystem carbon (C) budgets, including quantifying belowground pools. Belowground nutrient budgets require accurate estimates of soil mass, coarse fragment content, and nutrient ...
Reconstructing Dynamic Promoter Activity Profiles from Reporter Gene Data.
Kannan, Soumya; Sams, Thomas; Maury, Jérôme; Workman, Christopher T
2018-03-16
Accurate characterization of promoter activity is important when designing expression systems for systems biology and metabolic engineering applications. Promoters that respond to changes in the environment enable the dynamic control of gene expression without the necessity of inducer compounds, for example. However, the dynamic nature of these processes poses challenges for estimating promoter activity. Most experimental approaches utilize reporter gene expression to estimate promoter activity. Typically the reporter gene encodes a fluorescent protein that is used to infer a constant promoter activity despite the fact that the observed output may be dynamic and is a number of steps away from the transcription process. In fact, some promoters that are often thought of as constitutive can show changes in activity when growth conditions change. For these reasons, we have developed a system of ordinary differential equations for estimating dynamic promoter activity for promoters that change their activity in response to the environment that is robust to noise and changes in growth rate. Our approach, inference of dynamic promoter activity (PromAct), improves on existing methods by more accurately inferring known promoter activity profiles. This method is also capable of estimating the correct scale of promoter activity and can be applied to quantitative data sets to estimate quantitative rates.
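A minimal sketch of the kind of ODE reporter model such methods invert: promoter activity p(t) drives mRNA, which drives immature and then mature (fluorescent) protein, so the observed fluorescence lags the activity. The rate constants and the piecewise p(t) below are illustrative assumptions, not PromAct's fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

def p(t):                      # hypothetical time-varying promoter activity
    return 1.0 if 60 <= t <= 180 else 0.2

def rhs(t, y, d_m=0.1, k_p=0.5, k_mat=0.05, d_p=0.01):
    m, f_dark, f = y           # mRNA, immature protein, mature fluorescent protein
    return [p(t) - d_m * m,
            k_p * m - (k_mat + d_p) * f_dark,
            k_mat * f_dark - d_p * f]

t_eval = np.linspace(0, 400, 401)
sol = solve_ivp(rhs, (0, 400), [0, 0, 0], t_eval=t_eval, max_step=1.0)
fluorescence = sol.y[2]        # what a plate reader would see, lagging p(t)
print("peak fluorescence at t =", t_eval[np.argmax(fluorescence)], "min")
```

Inverting this forward model for p(t), given noisy fluorescence and a varying growth (dilution) rate, is the estimation problem the paper addresses.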
Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S
2015-01-16
Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
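The traditional calibration step referred to above is typically a standard-curve inversion; in the simplest hedged form, with y the assay readout (e.g. a Ct value) and D the pathogen density,

$$y = \alpha + \beta \log_{10} D + \varepsilon, \qquad \hat{D} = 10^{(y - \alpha)/\beta},$$

whereas the Bayesian mixed-model alternative lets α, β, and the error variance vary by assay, with assay-level terms drawn from shared distributions, which is what allows it to pool information across replicate assays.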
Measurement of lung expansion with computed tomography and comparison with quantitative histology.
Coxson, H O; Mayo, J R; Behzad, H; Moore, B J; Verburgt, L M; Staples, C A; Paré, P D; Hogg, J C
1995-11-01
The total and regional lung volumes were estimated from computed tomography (CT), and the pleural pressure gradient was determined by using the milliliters of gas per gram of tissue estimated from the X-ray attenuation values and the pressure-volume curve of the lung. The data show that CT accurately estimated the volume of the resected lobe but overestimated its weight by 24 +/- 19%. The volume of gas per gram of tissue was less in the gravity-dependent regions due to a pleural pressure gradient of 0.24 +/- 0.08 cmH2O/cm of descent in the thorax. The proportion of tissue to air obtained with CT was similar to that obtained by quantitative histology. We conclude that the CT scan can be used to estimate total and regional lung volumes and that measurements of the proportions of tissue and air within the thorax by CT can be used in conjunction with quantitative histology to evaluate lung structure.
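The milliliters-of-gas-per-gram quantity can be derived from CT numbers under the usual assumptions that air is -1000 HU, lung tissue is approximately water-equivalent (0 HU, density about 1 g/mL), and attenuation mixes linearly; a sketch of that derivation, not necessarily the paper's exact formulation:

$$f_{gas} = \frac{-HU}{1000}, \qquad f_{tis} = 1 + \frac{HU}{1000}, \qquad \frac{V_{gas}}{m_{tis}} = \frac{f_{gas}}{f_{tis}\,\rho_{tis}} = \frac{-HU}{1000 + HU}\ \text{mL/g}.$$

For example, a voxel averaging -800 HU contains about 800/200 = 4 mL of gas per gram of tissue.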
A nonlinear generalization of the Savitzky-Golay filter and the quantitative analysis of saccades
Dai, Weiwei; Selesnick, Ivan; Rizzo, John-Ross; Rucker, Janet; Hudson, Todd
2017-01-01
The Savitzky-Golay (SG) filter is widely used to smooth and differentiate time series, especially biomedical data. However, time series that exhibit abrupt departures from their typical trends, such as sharp waves or steps, which are of physiological interest, tend to be oversmoothed by the SG filter. Hence, the SG filter tends to systematically underestimate physiological parameters in certain situations. This article proposes a generalization of the SG filter to more accurately track abrupt deviations in time series, leading to more accurate parameter estimates (e.g., peak velocity of saccadic eye movements). The proposed filtering methodology models a time series as the sum of two component time series: a low-frequency time series for which the conventional SG filter is well suited, and a second time series that exhibits instantaneous deviations (e.g., sharp waves, steps, or more generally, discontinuities in a higher order derivative). The generalized SG filter is then applied to the quantitative analysis of saccadic eye movements. It is demonstrated that (a) the conventional SG filter underestimates the peak velocity of saccades, especially those of small amplitude, and (b) the generalized SG filter estimates peak saccadic velocity more accurately than the conventional filter. PMID:28813566
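For reference, the conventional SG filter the paper generalizes is available in SciPy; this toy example shows the peak-velocity underestimation the authors describe, on a synthetic saccade-like velocity pulse (the waveform and window settings are illustrative choices):

```python
import numpy as np
from scipy.signal import savgol_filter

fs = 1000.0                                                 # sampling rate (Hz)
t = np.arange(0, 0.2, 1 / fs)
velocity = 400 * np.exp(-0.5 * ((t - 0.1) / 0.006) ** 2)    # sharp ~400 deg/s saccade
noisy = velocity + 15 * np.random.default_rng(0).standard_normal(t.size)

smoothed = savgol_filter(noisy, window_length=41, polyorder=3)
print("true peak velocity:        ", velocity.max())
print("SG-filtered peak (blunted):", smoothed.max())
```

Because the 41-sample window is long relative to the pulse, the fitted low-order polynomial cannot follow the sharp peak, which is exactly the oversmoothing the generalized filter is designed to avoid.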
NASA Astrophysics Data System (ADS)
Lazariev, A.; Allouche, A.-R.; Aubert-Frécon, M.; Fauvelle, F.; Piotto, M.; Elbayed, K.; Namer, I.-J.; van Ormondt, D.; Graveron-Demilly, D.
2011-11-01
High-resolution magic angle spinning (HRMAS) nuclear magnetic resonance (NMR) is playing an increasingly important role in diagnosis. This technique enables setting up metabolite profiles of ex vivo pathological and healthy tissue. The need to monitor diseases and pharmaceutical follow-up requires automatic quantitation of HRMAS 1H signals. However, for several metabolites, the chemical shifts of proton groups may differ slightly according to the micro-environment in the tissue or cells, in particular its pH. This hampers accurate estimation of the metabolite concentrations, mainly when using quantitation algorithms based on a metabolite basis set: the metabolite fingerprints are no longer correct. In this work, we propose an accurate method coupling quantum mechanical simulations and quantitation algorithms to handle basis-set changes. The proposed algorithm automatically corrects mismatches between the signals of the simulated basis set and the signal under analysis by maximizing the normalized cross-correlation between the two. Optimized chemical shift values of the metabolites are obtained. This method, QM-QUEST, provides more robust fitting while limiting user involvement and respects the correct fingerprints of metabolites. Its efficiency is demonstrated by accurately quantitating 33 signals from tissue samples of human brains with oligodendroglioma, obtained at 11.7 tesla. The corresponding chemical shift changes of several metabolites within the series are also analyzed.
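A minimal sketch of the alignment idea described above: displace a simulated metabolite signal over a small chemical-shift range and keep the shift that maximizes normalized cross-correlation with the measured spectrum. The Lorentzian line shapes and shift grid are illustrative stand-ins for the quantum-mechanically simulated basis set.

```python
import numpy as np

ppm = np.linspace(0, 4, 2048)
lorentz = lambda x0, w=0.01: 1.0 / (1.0 + ((ppm - x0) / w) ** 2)

measured = lorentz(2.013) + 0.6 * lorentz(3.205)      # pH-shifted peaks in the "tissue" signal
basis = lorentz(2.000) + 0.6 * lorentz(3.192)         # simulated basis metabolite, slightly off

def ncc(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.dot(a, b)) / a.size

shifts = np.linspace(-0.05, 0.05, 201)                # candidate chemical-shift corrections (ppm)
scores = []
for s in shifts:
    shifted = np.interp(ppm, ppm + s, basis)          # basis displaced by s ppm
    scores.append(ncc(measured, shifted))
best = shifts[int(np.argmax(scores))]
print(f"estimated chemical-shift correction: {best:+.3f} ppm")
```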
Winzer, Eva; Luger, Maria; Schindler, Karin
2018-06-01
Regular monitoring of food intake is rarely integrated into clinical routine. Therefore, the aim was to examine the validity, accuracy, and applicability of an appropriate, quick and easy-to-use tool for recording food intake in a clinical setting. Two digital photography methods (the postMeal method, with a picture after the meal, and the pre-postMeal method, with a picture before and after the meal) and the visual estimation method (plate diagram; PD) were compared against the reference method (weighed food records; WFR). A total of 420 dishes from lunch (7 weeks) were estimated with both photography methods and the visual method. Validity, applicability, accuracy, and precision of the estimation methods, and additionally food waste, macronutrient composition, and energy content, were examined. Tests of validity revealed stronger correlations for the photography methods (postMeal: r = 0.971, p < 0.001; pre-postMeal: r = 0.995, p < 0.001) than for the visual estimation method (r = 0.810; p < 0.001). The pre-postMeal method showed smaller variability (bias < 1 g) and also smaller overestimation and underestimation. This method accurately and precisely estimated portion sizes in all food items. Furthermore, the total food waste was 22% for lunch over the study period. The highest food waste was observed in salads and the lowest in desserts. The pre-postMeal digital photography method is valid, accurate, and applicable for monitoring food intake in a clinical setting, enabling quantitative and qualitative dietary assessment. Thus, nutritional care might be initiated earlier. The method might also be advantageous for quantitative and qualitative evaluation of food waste, with a resulting reduction in costs.
NASA Astrophysics Data System (ADS)
Bravo, Jaime; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.
2015-03-01
Quantification of targeted fluorescence markers during neurosurgery has the potential to improve and standardize surgical distinction between normal and cancerous tissues. However, quantitative analysis of marker fluorescence is complicated by tissue background absorption and scattering properties. Correction algorithms that transform raw fluorescence intensity into quantitative units, independent of absorption and scattering, require a paired measurement of localized white light reflectance to provide estimates of the optical properties. This study focuses on the unique problem of developing a spectral analysis algorithm to extract tissue absorption and scattering properties from white light spectra that contain contributions from both elastically scattered photons and fluorescence emission from a strong fluorophore (i.e. fluorescein). A fiber-optic reflectance device was used to perform measurements in a small set of optical phantoms, constructed with Intralipid (1% lipid), whole blood (1% volume fraction) and fluorescein (0.16-10 μg/mL). Results show that the novel spectral analysis algorithm yields accurate estimates of tissue parameters independent of fluorescein concentration, with relative errors of blood volume fraction, blood oxygenation fraction (BOF), and the reduced scattering coefficient (at 521 nm) of <7%, <1%, and <22%, respectively. These data represent a first step towards quantification of fluorescein in tissue in vivo.
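A toy version of the fitting problem: model the measured white-light spectrum as power-law scattering attenuated by blood absorption plus a fluorescein emission term, and recover the parameters by nonlinear least squares. The model form, band shapes, and parameter values are deliberately simplified stand-ins for the paper's algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

wl = np.linspace(450, 650, 201)                       # wavelength (nm)
mu_a_blood = np.exp(-((wl - 540) / 25) ** 2)          # mock blood absorption band
fluor = np.exp(-0.5 * ((wl - 521) / 15) ** 2)         # mock fluorescein emission peak

def model(wl, bvf, b, f):
    scatter = (wl / 500.0) ** (-b)                    # power-law reduced scattering
    return scatter * np.exp(-bvf * mu_a_blood) + f * fluor

rng = np.random.default_rng(0)
truth = (0.8, 1.2, 0.3)
meas = model(wl, *truth) + 0.005 * rng.standard_normal(wl.size)
popt, _ = curve_fit(model, wl, meas, p0=(0.5, 1.0, 0.1))
print("recovered (blood, scatter power, fluorescein):", np.round(popt, 3))
```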
Automated selected reaction monitoring software for accurate label-free protein quantification.
Teleman, Johan; Karlsson, Christofer; Waldemarson, Sofia; Hansson, Karin; James, Peter; Malmström, Johan; Levander, Fredrik
2012-07-06
Selected reaction monitoring (SRM) is a mass spectrometry method with documented ability to quantify proteins accurately and reproducibly using labeled reference peptides. However, the use of labeled reference peptides becomes impractical if large numbers of peptides are targeted and when high flexibility is desired when selecting peptides. We have developed a label-free quantitative SRM workflow that relies on a new automated algorithm, Anubis, for accurate peak detection. Anubis efficiently removes interfering signals from contaminating peptides to estimate the true signal of the targeted peptides. We evaluated the algorithm on a published multisite data set and achieved results in line with manual data analysis. In complex peptide mixtures from whole proteome digests of Streptococcus pyogenes we achieved a technical variability across the entire proteome abundance range of 6.5-19.2%, which was considerably below the total variation across biological samples. Our results show that the label-free SRM workflow with automated data analysis is feasible for large-scale biological studies, opening up new possibilities for quantitative proteomics and systems biology.
Automated comprehensive Adolescent Idiopathic Scoliosis assessment using MVC-Net.
Wu, Hongbo; Bailey, Chris; Rasoulinejad, Parham; Li, Shuo
2018-05-18
Automated quantitative estimation of spinal curvature is an important task for the ongoing evaluation and treatment planning of Adolescent Idiopathic Scoliosis (AIS). It addresses the widely accepted disadvantages of manual Cobb angle measurement (time-consuming and unreliable), which is currently the gold standard for AIS assessment. Attempts have been made to improve the reliability of automated Cobb angle estimation. However, it is very challenging to achieve accurate and robust estimation of Cobb angles because all the required vertebrae must be correctly identified in both anterior-posterior (AP) and lateral (LAT) view x-rays. The challenge is especially evident in LAT x-rays, where occlusion of vertebrae by the ribcage occurs. We therefore propose a novel Multi-View Correlation Network (MVC-Net) architecture that provides a fully automated end-to-end framework for spinal curvature estimation in multi-view (both AP and LAT) x-rays. The proposed MVC-Net uses our newly designed multi-view convolution layers to incorporate joint features of multi-view x-rays, which allows the network to mitigate the occlusion problem by utilizing the structural dependencies of the two views. The MVC-Net consists of three closely linked components: (1) a series of X-modules for joint representation of spinal structure, (2) a Spinal Landmark Estimator network for robust spinal landmark estimation, and (3) a Cobb Angle Estimator network for accurate Cobb angle estimation. By utilizing an iterative multi-task training algorithm to train the Spinal Landmark Estimator and Cobb Angle Estimator in tandem, the MVC-Net leverages the multi-task relationship between landmark and angle estimation to reliably detect all the required vertebrae for accurate Cobb angle estimation. Experimental results on 526 x-ray images from 154 patients show an impressive 4.04° circular mean absolute error (CMAE) in AP Cobb angle and 4.07° CMAE in LAT Cobb angle estimation, which demonstrates the MVC-Net's capability of robust and accurate estimation of Cobb angles in multi-view x-rays. Our method therefore provides clinicians with a framework for efficient, accurate, and reliable estimation of spinal curvature for comprehensive AIS assessment. Copyright © 2018. Published by Elsevier B.V.
USDA-ARS?s Scientific Manuscript database
N-nitroso compounds are recognized as important dietary carcinogens. Accurate assessment of N-nitroso intake is fundamental to advancing research regarding its role with cancer. Previous studies have not used a quantitative database to estimate the intake of these compounds in a US population. To ad...
NASA Astrophysics Data System (ADS)
Abou-Khousa, M. A.; Zoughi, R.
2007-03-01
Non-invasive monitoring of dielectric slab thickness is of great interest in various industrial applications. This paper focuses on estimating the thickness of dielectric slabs, and consequently monitoring their variations, utilizing wideband microwave signals and the MUltiple SIgnal Classification (MUSIC) algorithm. The performance of the proposed approach is assessed by validating simulation results with laboratory experiments. The results clearly indicate the utility of this overall approach for accurate dielectric slab thickness evaluation.
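A minimal sketch of delay (hence thickness) estimation with MUSIC from wideband reflection data: build a covariance across frequency-domain snapshots, split signal and noise subspaces, and scan delay "steering" vectors. For the subspace split to work, the two echoes must be decorrelated; here their amplitudes are randomized per snapshot as a stand-in for the subband smoothing used in practice. All numbers (band, delays, permittivity) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
freqs = np.linspace(8e9, 12e9, 64)                   # swept frequencies (Hz), assumed band
taus_true = np.array([0.50e-9, 1.30e-9])             # echo round-trip delays (s)
A_true = np.exp(-2j * np.pi * np.outer(freqs, taus_true))

snapshots = 200
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
X = A_true @ S + 0.1 * (rng.standard_normal((64, snapshots))
                        + 1j * rng.standard_normal((64, snapshots)))

R = X @ X.conj().T / snapshots                       # frequency-domain covariance
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, :-2]                                 # noise subspace (2 echoes assumed known)

tau_scan = np.linspace(0, 3e-9, 3000)
A_scan = np.exp(-2j * np.pi * np.outer(freqs, tau_scan))
pseudo = 1.0 / (np.linalg.norm(En.conj().T @ A_scan, axis=0) ** 2)

idx, _ = find_peaks(pseudo)
top2 = np.sort(tau_scan[idx[np.argsort(pseudo[idx])[-2:]]])
eps_r, c = 4.0, 3e8                                  # relative permittivity assumed known
d = (top2[1] - top2[0]) * c / (2 * np.sqrt(eps_r))   # thickness from delay difference
print(f"estimated delays (ns): {top2 * 1e9}, thickness ~ {d * 1e3:.0f} mm")
```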
Pulsar distances and the galactic distribution of free electrons
NASA Technical Reports Server (NTRS)
Taylor, J. H.; Cordes, J. M.
1993-01-01
The present quantitative model for Galactic free electron distribution abandons the assumption of axisymmetry and explicitly incorporates spiral arms; their shapes and locations are derived from existing radio and optical observations of H II regions. The Gum Nebula's dispersion-measure contributions are also explicitly modeled. Adjustable quantities are calibrated by reference to three different types of data. The new model is estimated to furnish distance estimates to known pulsars that are accurate to about 25 percent.
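The observable underlying such models is the pulsar dispersion measure: with an electron-density model n_e along the line of sight,

$$\mathrm{DM} = \int_0^{d} n_e\, dl,$$

so distance follows by inverting the accumulated DM through the model. As a rough worked example for a locally uniform density, a pulsar with DM = 30 pc cm^-3 seen through a mean n_e of about 0.03 cm^-3 lies at roughly d = DM/n_e = 1 kpc.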
Koloušková, Pavla; Stone, James D.
2017-01-01
Accurate gene expression measurements are essential in studies of both crop and wild plants. Reverse transcription quantitative real-time PCR (RT-qPCR) has become a preferred tool for gene expression estimation. A selection of suitable reference genes for the normalization of transcript levels is an essential prerequisite of accurate RT-qPCR results. We evaluated the expression stability of eight candidate reference genes across roots, leaves, flower buds and pollen of Silene vulgaris (bladder campion), a model plant for the study of gynodioecy. As random priming of cDNA is recommended for the study of organellar transcripts and poly(A) selection is indicated for nuclear transcripts, we estimated gene expression with both random-primed and oligo(dT)-primed cDNA. Accordingly, we determined reference genes that perform well with oligo(dT)- and random-primed cDNA, making it possible to estimate levels of nucleus-derived transcripts in the same cDNA samples as used for organellar transcripts, a key benefit in studies of cyto-nuclear interactions. Gene expression variance was estimated by RefFinder, which integrates four different analytical tools. The SvACT and SvGAPDH genes were the most stable candidates across various organs of S. vulgaris, regardless of whether pollen was included or not. PMID:28817728
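For context, reference genes selected this way enter the standard relative-quantification step; in the common Livak form, assuming amplification efficiency near 2,

$$\text{relative expression} = 2^{-\Delta\Delta C_t}, \qquad \Delta\Delta C_t = \left(C_t^{target} - C_t^{ref}\right)_{sample} - \left(C_t^{target} - C_t^{ref}\right)_{control},$$

which is why the stability of the reference gene's expression directly bounds the accuracy of the estimate.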
Robust estimation of adaptive tensors of curvature by tensor voting.
Tong, Wai-Shun; Tang, Chi-Keung
2005-03-01
Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (sign of Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision, which also rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm has been proven with detailed quantitative experiments, performing better in a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.
How large is the typical subarachnoid hemorrhage? A review of current neurosurgical knowledge.
Whitmore, Robert G; Grant, Ryan A; LeRoux, Peter; El-Falaki, Omar; Stein, Sherman C
2012-01-01
Despite the morbidity and mortality of subarachnoid hemorrhage (SAH), the average volume of a typical hemorrhage is not well defined. Animal models of SAH often do not accurately mimic the human disease process. The purpose of this study is to estimate the average SAH volume, allowing standardization of animal models of the disease. We performed a MEDLINE search for SAH volume and erythrocyte counts in human cerebrospinal fluid, as well as for volumes of blood used in animal injection models of SAH, from 1956 to 2010. We polled members of the American Association of Neurological Surgeons (AANS) for estimates of typical SAH volume. Using quantitative data from the literature, we calculated the total volume of SAH as equal to the volume of blood clotted in the basal cisterns plus the volume of dispersed blood in cerebrospinal fluid. The results of the AANS poll confirmed our estimates. The human literature search yielded 322 publications and the animal literature search 237 studies. Four quantitative human studies reported blood clot volumes ranging from 0.2 to 170 mL, with a mean of ∼20 mL. There was only one quantitative study reporting cerebrospinal fluid red blood cell counts from serial lumbar punctures after SAH. Dispersed blood volume ranged from 2.9 to 45.9 mL, and we used the mean of 15 mL for our calculation. Therefore, the total volume of SAH equals 35 mL. The AANS poll yielded 176 responses, ranging from 2 to 350 mL, with a mean of 33.9 ± 4.4 mL. Based on our estimate of a total SAH volume of 35 mL, animal injection models may now become standardized for more accurate portrayal of the human disease process. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Shichao; Zhu, Yizheng
2017-02-01
Sensitivity is a critical index measuring the temporal fluctuation of the retrieved optical pathlength in a quantitative phase imaging system. However, an accurate and comprehensive analysis for sensitivity evaluation is still lacking in the current literature. In particular, previous theoretical studies of fundamental sensitivity based on Gaussian noise models are not applicable to modern cameras and detectors, which are dominated by shot noise. In this paper, we derive two shot noise-limited theoretical sensitivities, the Cramér-Rao bound and the algorithmic sensitivity, for wavelength shifting interferometry, a major category of on-axis interferometry techniques in quantitative phase imaging. Based on the derivations, we show that the shot noise-limited model permits accurate estimation of theoretical sensitivities directly from measured data. These results can provide important insights into fundamental constraints on system performance and can be used to guide system design and optimization. The same concepts can be generalized to other quantitative phase imaging techniques as well.
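As a point of reference (not the paper's derivation), the textbook shot-noise limit for a two-beam interferometric phase estimate scales with the detected photon number N as

$$\sigma_\phi \gtrsim \frac{1}{\sqrt{N}},$$

and the corresponding pathlength sensitivity follows via $\delta L = (\lambda/2\pi)\,\sigma_\phi$; for example, N = 10^6 detected photons at λ = 633 nm give δL on the order of 0.1 nm. The Cramér-Rao bound derived in the paper makes this kind of limit precise for the multi-frame wavelength-shifting estimator.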
The linearized multistage model and the future of quantitative risk assessment.
Crump, K S
1996-10-01
The linearized multistage (LMS) model has for over 15 years been the default dose-response model used by the U.S. Environmental Protection Agency (USEPA) and other federal and state regulatory agencies in the United States for calculating quantitative estimates of low-dose carcinogenic risks from animal data. The LMS model is in essence a flexible statistical model that can describe both linear and non-linear dose-response patterns, and that produces an upper confidence bound on the linear low-dose slope of the dose-response curve. Unlike its namesake, the Armitage-Doll multistage model, the parameters of the LMS do not correspond to actual physiological phenomena. Thus the LMS is 'biological' only to the extent that the true biological dose response is linear at low dose and that low-dose slope is reflected in the experimental data. If the true dose response is non-linear the LMS upper bound may overestimate the true risk by many orders of magnitude. However, competing low-dose extrapolation models, including those derived from 'biologically-based models' that are capable of incorporating additional biological information, have not shown evidence to date of being able to produce quantitative estimates of low-dose risks that are any more accurate than those obtained from the LMS model. Further, even if these attempts were successful, the extent to which more accurate estimates of low-dose risks in a test animal species would translate into improved estimates of human risk is questionable. Thus, it does not appear possible at present to develop a quantitative approach that would be generally applicable and that would offer significant improvements upon the crude bounding estimates of the type provided by the LMS model. Draft USEPA guidelines for cancer risk assessment incorporate an approach similar to the LMS for carcinogens having a linear mode of action. However, under these guidelines quantitative estimates of low-dose risks would not be developed for carcinogens having a non-linear mode of action; instead dose-response modelling would be used in the experimental range to calculate an LED10* (a statistical lower bound on the dose corresponding to a 10% increase in risk), and safety factors would be applied to the LED10* to determine acceptable exposure levels for humans. This approach is very similar to the one presently used by USEPA for non-carcinogens. Rather than using one approach for carcinogens believed to have a linear mode of action and a different approach for all other health effects, it is suggested herein that it would be more appropriate to use an approach conceptually similar to the 'LED10*-safety factor' approach for all health effects, and not to routinely develop quantitative risk estimates from animal data.
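The multistage dose-response form underlying the LMS is standard: the model is fitted with non-negative coefficients, extra risk over background is formed, and an upper confidence limit on the linear coefficient drives the low-dose extrapolation:

$$P(d) = 1 - \exp\!\left[-\left(q_0 + q_1 d + q_2 d^2 + \cdots + q_k d^k\right)\right], \qquad A(d) = \frac{P(d) - P(0)}{1 - P(0)},$$

so at low dose $A(d) \approx q_1 d$, and the LMS reports an upper bound $q_1^{*}$ on $q_1$ as the low-dose slope.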
Improved Correction of Misclassification Bias With Bootstrap Imputation.
van Walraven, Carl
2018-07-01
Diagnostic codes used in administrative database research can create bias due to misclassification. Quantitative bias analysis (QBA) can correct for this bias, requires only code sensitivity and specificity, but may return invalid results. Bootstrap imputation (BI) can also address misclassification bias but traditionally requires multivariate models to accurately estimate disease probability. This study compared misclassification bias correction using QBA and BI. Serum creatinine measures were used to determine severe renal failure status in 100,000 hospitalized patients. Prevalence of severe renal failure in 86 patient strata and its association with 43 covariates was determined and compared with results in which renal failure status was determined using diagnostic codes (sensitivity 71.3%, specificity 96.2%). Differences in results (misclassification bias) were then corrected with QBA or BI (using progressively more complex methods to estimate disease probability). In total, 7.4% of patients had severe renal failure. Imputing disease status with diagnostic codes exaggerated prevalence estimates [median relative change (range), 16.6% (0.8%-74.5%)] and its association with covariates [median (range) exponentiated absolute parameter estimate difference, 1.16 (1.01-2.04)]. QBA produced invalid results 9.3% of the time and increased bias in estimates of both disease prevalence and covariate associations. BI decreased misclassification bias with increasingly accurate disease probability estimates. QBA can produce invalid results and increase misclassification bias. BI avoids invalid results and can importantly decrease misclassification bias when accurate disease probability estimates are used.
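The QBA correction referenced above is, in its simplest form, the classic misclassification inversion. Using the abstract's sensitivity (71.3%) and specificity (96.2%), a true prevalence of 7.4% would be observed as about $p^{*} = 0.074 \times 0.713 + 0.926 \times 0.038 \approx 0.088$, and inverting recovers it:

$$\hat{p} = \frac{p^{*} + Sp - 1}{Se + Sp - 1} = \frac{0.088 + 0.962 - 1}{0.713 + 0.962 - 1} \approx 0.074 .$$

The "invalid results" arise when the numerator goes negative, i.e. when the observed prevalence in a stratum falls below the false-positive rate $1 - Sp$.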
Computation of mass-density images from x-ray refraction-angle images.
Wernick, Miles N; Yang, Yongyi; Mondal, Indrasis; Chapman, Dean; Hasnah, Moumen; Parham, Christopher; Pisano, Etta; Zhong, Zhong
2006-04-07
In this paper, we investigate the possibility of computing quantitatively accurate images of mass density variations in soft tissue. This is a challenging task, because density variations in soft tissue, such as the breast, can be very subtle. Beginning from an image of refraction angle created by either diffraction-enhanced imaging (DEI) or multiple-image radiography (MIR), we estimate the mass-density image using a constrained least squares (CLS) method. The CLS algorithm yields accurate density estimates while effectively suppressing noise. Our method improves on an analytical method proposed by Hasnah et al (2005 Med. Phys. 32 549-52), which can produce significant artefacts when even a modest level of noise is present. We present a quantitative evaluation study to determine the accuracy with which mass density can be determined in the presence of noise. Based on computer simulations, we find that the mass-density estimation error can be as low as a few per cent for typical density variations found in the breast. Example images computed from less-noisy real data are also shown to illustrate the feasibility of the technique. We anticipate that density imaging may have application in assessment of water content of cartilage resulting from osteoarthritis, in evaluation of bone density, and in mammographic interpretation.
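In generic form (the paper's exact operators may differ), a constrained/regularized least-squares estimate of the density image x from refraction-angle data b solves

$$\hat{x} = \arg\min_x \; \|Ax - b\|_2^2 + \lambda \|Lx\|_2^2 \;\;\Rightarrow\;\; \hat{x} = \left(A^{\top}A + \lambda L^{\top}L\right)^{-1} A^{\top} b,$$

where A links density variations to measured refraction angles, L is a roughness penalty (e.g. a discrete gradient), and λ trades noise suppression against resolution; this regularization is what suppresses the artefacts seen in the purely analytical inversion.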
Lee, Vinson R.; Blew, Rob M.; Farr, Josh N.; Tomas, Rita; Lohman, Timothy G.; Going, Scott B.
2013-01-01
Objective Assess the utility of peripheral quantitative computed tomography (pQCT) for estimating whole body fat in adolescent girls. Research Methods and Procedures Our sample included 458 girls (aged 10.7 ± 1.1 y, mean BMI = 18.5 ± 3.3 kg/m2) who had DXA scans for whole body percent fat (DXA %Fat). Soft tissue analysis of pQCT scans provided thigh and calf subcutaneous fat area percent fat (SFA %Fat) and thigh and calf muscle density (surrogates of muscle fat content). Anthropometric variables included weight, height and BMI. Indices of maturity included age and maturity offset. The total sample was split into validation (VS; n = 304) and cross-validation (CS; n = 154) samples. Linear regression was used to develop prediction equations for estimating DXA %Fat from anthropometric variables and pQCT-derived soft tissue components in VS, and the best prediction equation was applied to CS. Results Thigh and calf SFA %Fat were positively correlated with DXA %Fat (r = 0.84 to 0.85; p < 0.001), and thigh and calf muscle densities were inversely related to DXA %Fat (r = −0.30 to −0.44; p < 0.001). The best equation for estimating %Fat included thigh and calf SFA %Fat and thigh and calf muscle density (adj. R2 = 0.90; SEE = 2.7%). Bland-Altman analysis in CS showed accurate estimates of percent fat (adj. R2 = 0.89; SEE = 2.7%) with no bias. Discussion pQCT-derived indices of adiposity can be used to accurately estimate whole body percent fat in adolescent girls. PMID:25147482
Under-sampling trajectory design for compressed sensing based DCE-MRI.
Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting
2013-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed sensing (CS) has the potential to meet both requirements simultaneously. However, the randomness in a CS under-sampling trajectory designed with the traditional variable density (VD) scheme may translate into uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation with the VD scheme usually needs multiple adjustments of the parameters of the probability density function (PDF), and multiple reconstructions even with a fixed PDF, which is inapplicable for DCE-MRI. In this paper, an under-sampling trajectory design which is robust to changes in the PDF parameters and to the randomness under a fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains, and to apply the VD scheme only in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared with the VD design.
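A minimal sketch of a variable-density Cartesian under-sampling mask with a fully sampled low-frequency core, the kind of design being modified above; the density exponent, core size, and acceleration factor are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ky, reduction = 256, 4                       # phase-encode lines, target acceleration
ky = np.linspace(-1, 1, n_ky)

core = np.abs(ky) < 0.08                       # fully sampled low-frequency core (assumed size)
pdf = (1 - np.abs(ky)) ** 4                    # variable-density PDF, decaying toward high-k
pdf[core] = 1.0
# Rescale so the expected number of samples matches the target acceleration.
n_target = n_ky // reduction
pdf *= (n_target - core.sum()) / pdf[~core].sum()
mask = core | (rng.random(n_ky) < pdf)

print(f"sampled {mask.sum()} of {n_ky} lines (R = {n_ky / mask.sum():.1f})")
```

The paper's point is that with the core guaranteed deterministically, the reconstruction (and hence the kinetic fit) becomes far less sensitive to the random draw and to the exact PDF shape in the high-frequency region.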
Toward Accurate and Quantitative Comparative Metagenomics
Nayfach, Stephen; Pollard, Katherine S.
2016-01-01
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
Reducing misfocus-related motion artefacts in laser speckle contrast imaging.
Ringuette, Dene; Sigal, Iliya; Gad, Raanan; Levi, Ofer
2015-01-01
Laser Speckle Contrast Imaging (LSCI) is a flexible, easy-to-implement technique for measuring blood flow speeds in vivo. In order to obtain reliable quantitative data from LSCI, the object must remain in the focal plane of the imaging system for the duration of the measurement session. However, since LSCI suffers from inherent frame-to-frame noise, it often requires a moving average filter to produce quantitative results. This frame-to-frame noise also makes the implementation of a rapid autofocus system challenging. In this work, we demonstrate an autofocus method and system based on a novel measure of misfocus which serves as an accurate and noise-robust feedback mechanism. This measure of misfocus is shown to enable localization of best focus with sub-depth-of-field sensitivity, yielding more accurate estimates of blood flow speeds and blood vessel diameters.
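For reference, LSCI quantifies flow through the local speckle contrast, computed over a small spatial or temporal window, and relates it to the speckle decorrelation time τ_c, which is to first order inversely proportional to flow speed; in one commonly used approximation,

$$K = \frac{\sigma_I}{\langle I \rangle}, \qquad K^2 = \frac{\tau_c}{2T}\left(1 - e^{-2T/\tau_c}\right),$$

with T the camera exposure time, so faster flow means shorter τ_c and lower contrast. Staying in focus matters because defocus blurs the speckle pattern and biases K, and hence the inferred speeds.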
Validation of Bayesian analysis of compartmental kinetic models in medical imaging.
Sitek, Arkadiusz; Li, Quanzheng; El Fakhri, Georges; Alpert, Nathaniel M
2016-10-01
Kinetic compartmental analysis is frequently used to compute physiologically relevant quantitative values from time series of images. In this paper, a new approach based on Bayesian analysis to obtain information about these parameters is presented and validated. The closed form of the posterior distribution of kinetic parameters is derived with a hierarchical prior to model the standard deviation of normally distributed noise. Markov chain Monte Carlo methods are used for numerical estimation of the posterior distribution. Computer simulations of the kinetics of F18-fluorodeoxyglucose (FDG) are used to demonstrate drawing statistical inferences about kinetic parameters and to validate the theory and implementation. Additionally, point estimates of kinetic parameters and the covariance of those estimates are determined using the classical non-linear least squares approach. Posteriors obtained using the methods proposed in this work are accurate, as no significant deviation from the expected shape of the posterior was found (one-sided P>0.08). It is demonstrated that the results obtained by the standard non-linear least-squares method fail to provide accurate estimation of uncertainty for the same data set (P<0.0001). These results validate the new methods on computer simulations of FDG kinetics and show that, in situations where the classical approach fails to estimate uncertainty accurately, Bayesian estimation provides accurate information about the uncertainties in the parameters. Although a particular example of FDG kinetics was used in the paper, the methods can be extended to different pharmaceuticals and imaging modalities. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
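A minimal Metropolis sketch of the approach: simulate noisy uptake data from a one-tissue compartment model and sample the posterior of (K1, k2) under a Gaussian likelihood with flat positivity priors. The model, input function, noise level, and fixed noise standard deviation are illustrative; the paper's hierarchical noise prior and closed-form posterior are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.1, 60, 40)                       # minutes
cp = 5.0 * t * np.exp(-t / 4.0)                    # toy plasma input function

def tissue_curve(K1, k2):
    # One-tissue compartment model: Ct(t) = K1 * exp(-k2 t) convolved with Cp(t).
    dt = t[1] - t[0]
    kern = np.exp(-k2 * t)
    return K1 * np.convolve(cp, kern)[: t.size] * dt

truth = (0.3, 0.15)
data = tissue_curve(*truth) + 0.3 * rng.standard_normal(t.size)

def log_post(theta, sigma=0.3):
    K1, k2 = theta
    if K1 <= 0 or k2 <= 0:
        return -np.inf                             # flat prior on the positive quadrant
    r = data - tissue_curve(K1, k2)
    return -0.5 * np.sum((r / sigma) ** 2)

theta, lp = np.array([0.5, 0.5]), -np.inf
samples = []
for i in range(20000):                             # random-walk Metropolis
    prop = theta + 0.02 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[5000:])                    # discard burn-in
print("posterior mean:", post.mean(axis=0), "posterior sd:", post.std(axis=0))
```

The posterior standard deviations are the uncertainty estimates that, per the abstract, the non-linear least-squares covariance can fail to reproduce.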
Chow, Steven Kwok Keung; Yeung, David Ka Wai; Ahuja, Anil T; King, Ann D
2012-01-01
Purpose To quantitatively evaluate kinetic parameter estimation for head and neck (HN) dynamic contrast-enhanced (DCE) MRI with dual-flip-angle (DFA) T1 mapping. Materials and methods Clinical DCE-MRI datasets of 23 patients with HN tumors were included in this study. T1 maps were generated based on the multiple-flip-angle (MFA) method and different DFA combinations. Tofts model parameter maps of kep, Ktrans and vp based on MFA and DFAs were calculated and compared. Fitted parameters from MFA and DFAs were quantitatively evaluated in primary tumor, salivary gland and muscle. Results T1 mapping deviations from DFAs produced substantial deviations in kinetic parameter estimation in head and neck tissues. In particular, the DFA of [2º, 7º] significantly overestimated, while [7º, 12º] and [7º, 15º] significantly underestimated, Ktrans and vp (P<0.01). [2º, 15º] achieved the smallest, but still statistically significant, overestimation of Ktrans and vp in primary tumors, 32.1% and 16.2% respectively. kep fitting results from DFAs were relatively close to the MFA reference compared with Ktrans and vp. Conclusions T1 deviations induced by DFA can result in significant errors in kinetic parameter estimation, particularly of Ktrans and vp, through Tofts model fitting. The MFA method should be more reliable and robust for accurate quantitative pharmacokinetic analysis in head and neck. PMID:23289084
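The flip-angle T1 estimate at issue comes from the spoiled gradient-echo signal equation; with two (or more) flip angles the linearized form yields T1 directly, which is why errors in the chosen angle pair propagate into T1 and then into the Tofts fit:

$$S(\alpha) = M_0 \sin\alpha \,\frac{1 - E_1}{1 - E_1\cos\alpha}, \quad E_1 = e^{-TR/T_1}; \qquad \frac{S}{\sin\alpha} = E_1\,\frac{S}{\tan\alpha} + M_0\left(1 - E_1\right),$$

so a line fit of S/sin α against S/tan α over the flip angles gives slope E1 and hence T1 = -TR/ln E1; with only two angles the "fit" is exactly determined and has no redundancy against noise or B1 error.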
A quantitative test of population genetics using spatiogenetic patterns in bacterial colonies.
Korolev, Kirill S; Xavier, João B; Nelson, David R; Foster, Kevin R
2011-10-01
It is widely accepted that population-genetics theory is the cornerstone of evolutionary analyses. Empirical tests of the theory, however, are challenging because of the complex relationships between space, dispersal, and evolution. Critically, we lack quantitative validation of the spatial models of population genetics. Here we combine analytics, on- and off-lattice simulations, and experiments with bacteria to perform quantitative tests of the theory. We study two bacterial species, the gut microbe Escherichia coli and the opportunistic pathogen Pseudomonas aeruginosa, and show that spatiogenetic patterns in colony biofilms of both species are accurately described by an extension of the one-dimensional stepping-stone model. We use one empirical measure, genetic diversity at the colony periphery, to parameterize our models and show that we can then accurately predict another key variable: the degree of short-range cell migration along an edge. Moreover, the model allows us to estimate other key parameters, including the effective population size (density) at the expansion frontier. While our experimental system is a simplification of natural microbial communities, we argue that it constitutes proof of principle that the spatial models of population genetics can quantitatively capture organismal evolution.
End-to-end deep neural network for optical inversion in quantitative photoacoustic imaging.
Cai, Chuangjian; Deng, Kexin; Ma, Cheng; Luo, Jianwen
2018-06-15
An end-to-end deep neural network, ResU-net, is developed for quantitative photoacoustic imaging. A residual learning framework is used to facilitate optimization and to gain better accuracy from considerably increased network depth. The contracting and expanding paths enable ResU-net to extract comprehensive context information from multispectral initial pressure images and, subsequently, to infer a quantitative image of chromophore concentration or oxygen saturation (sO2). According to our numerical experiments, the estimations of sO2 and indocyanine green concentration are accurate and robust against variations in both optical property and object geometry. An extremely short reconstruction time of 22 ms is achieved.
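The residual-learning building block referenced above can be sketched as follows, assuming PyTorch is available; the channel count and layer choices are illustrative and do not reproduce the authors' ResU-net architecture.

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Two 3x3 convolutions wrapped by an identity shortcut (residual learning)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # the shortcut lets gradients bypass the convolutions, easing
        # optimization as network depth grows
        return torch.relu(self.body(x) + x)

x = torch.randn(1, 16, 64, 64)     # e.g. feature maps from multispectral inputs
print(ResBlock(16)(x).shape)       # torch.Size([1, 16, 64, 64])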
Contact inspection of Si nanowire with SEM voltage contrast
NASA Astrophysics Data System (ADS)
Ohashi, Takeyoshi; Yamaguchi, Atsuko; Hasumi, Kazuhisa; Ikota, Masami; Lorusso, Gian; Horiguchi, Naoto
2018-03-01
A methodology to evaluate the electrical contact between nanowire (NW) and source/drain (SD) in NW FETs was investigated with SEM voltage contrast (VC). Electrical defects were robustly detected by VC, and the validity of the inspection result was verified by TEM physical observations. Moreover, estimation of the parasitic resistance and capacitance was achieved from quantitative analysis of VC images acquired with different electron beam (EB) scan conditions. A model considering the dynamics of EB-induced charging was proposed to calculate the VC. The resistance and capacitance can be determined by comparing the model-based VC with the experimentally obtained VC. Quantitative estimation of resistance and capacitance would be valuable not only for more accurate inspection, but also for identification of the defect point.
Murdande, Sharad B; Pikal, Michael J; Shanker, Ravi M; Bogner, Robin H
2010-12-01
To quantitatively assess the solubility advantage of amorphous forms of nine insoluble drugs with a wide range of physico-chemical properties utilizing a previously reported thermodynamic approach. Thermal properties of amorphous and crystalline forms of the drugs were measured using modulated differential scanning calorimetry. Equilibrium moisture sorption by amorphous drugs was measured with a gravimetric moisture sorption analyzer, and ionization constants were determined from the pH-solubility profiles. Solubilities of crystalline and amorphous forms of the drugs were measured in de-ionized water at 25°C. Polarized light microscopy was used to provide qualitative information about the crystallization of amorphous drug in solution during solubility measurement. For three out of the nine compounds, the estimated solubility based on thermodynamic considerations was within two-fold of the experimental measurement. For one compound, the estimated solubility enhancement was lower than the experimental value, likely due to extensive ionization in solution and hence its sensitivity to error in pKa measurement. For the remaining five compounds, the estimated solubility was about 4- to 53-fold higher than the experimental results. In all cases where the theoretical solubility estimates were significantly higher, it was observed that the amorphous drug crystallized rapidly during the experimental determination of solubility, thus preventing an accurate experimental assessment of the solubility advantage. It has been demonstrated that the theoretical approach does provide an accurate estimate of the maximum solubility enhancement by an amorphous drug relative to its crystalline form for structurally diverse insoluble drugs when recrystallization during dissolution is minimal.
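The thermodynamic estimate referred to above can be sketched in its simplest form: the solubility ratio follows from the crystal-to-amorphous free-energy difference, here approximated with the Hoffman equation. The corrections for moisture-induced plasticization and ionization used in the full treatment are omitted, and all numerical values are illustrative.

import numpy as np

R = 8.314        # gas constant, J/(mol K)
T = 298.15       # solubility measured at 25 °C
Tm = 450.0       # melting temperature, K (illustrative)
dHm = 30.0e3     # enthalpy of fusion, J/mol (illustrative)

# Hoffman approximation to the crystal -> amorphous free-energy difference
dG = dHm * (Tm - T) * T / Tm**2
ratio = np.exp(dG / (R * T))   # predicted amorphous/crystalline solubility ratio
print(f"predicted solubility advantage: {ratio:.0f}-fold")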
Jha, Abhinav K; Caffo, Brian; Frey, Eric C
2016-01-01
The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation. PMID:26982626
Mark B. Green; John L. Campbell; Ruth D. Yanai; Scott W. Bailey; Amey S. Bailey; Nicholas Grant; Ian Halm; Eric P. Kelsey; Lindsey E. Rustad
2018-01-01
The design of a precipitation monitoring network must balance the demand for accurate estimates with the resources needed to build and maintain the network. If there are changes in the objectives of the monitoring or the availability of resources, network designs should be adjusted. At the Hubbard Brook Experimental Forest in New Hampshire, USA, precipitation has been...
Analysis of ribosomal RNA stability in dead cells of wine yeast by quantitative PCR.
Sunyer-Figueres, Merce; Wang, Chunxiao; Mas, Albert
2018-04-02
During wine production, some yeasts enter a Viable But Not Culturable (VBNC) state, which may influence the quality and stability of the final wine through remnant metabolic activity or by resuscitation. Culture-independent techniques are used to obtain an accurate estimate of the number of live cells, and quantitative PCR could be the most accurate technique. As a marker of cell viability, rRNA was evaluated by analyzing its stability in dead cells. The species-specific stability of rRNA was tested in Saccharomyces cerevisiae, as well as in three species of non-Saccharomyces yeast (Hanseniaspora uvarum, Torulaspora delbrueckii and Starmerella bacillaris). High-temperature and antimicrobial dimethyl dicarbonate (DMDC) treatments were effective in lysing the yeast cells. rRNA gene and rRNA (as cDNA) were analyzed over 48 h after cell lysis by quantitative PCR. The results confirmed the stability of rRNA for 48 h after the cell lysis treatments. In summary, rRNA may not be a good marker of cell viability in the wine yeasts that were tested. Copyright © 2018 Elsevier B.V. All rights reserved.
Simultaneous Estimation of Withaferin A and Z-Guggulsterone in Marketed Formulation by RP-HPLC.
Agrawal, Poonam; Vegda, Rashmi; Laddha, Kirti
2015-07-01
A simple, rapid, precise and accurate high-performance liquid chromatography (HPLC) method was developed for simultaneous estimation of withaferin A and Z-guggulsterone in a polyherbal formulation containing Withania somnifera and Commiphora wightii. The chromatographic separation was achieved on a Purospher RP-18 column (particle size 5 µm) with a mobile phase consisting of Solvent A (acetonitrile) and Solvent B (water) with the following gradients: 0-7 min, 50% A in B; 7-9 min, 50-80% A in B; 9-20 min, 80% A in B, at a flow rate of 1 mL/min and detection at 235 nm. The marker compounds were well separated on the chromatogram within 20 min. The results indicate the accuracy and reliability of the developed method for quantification of withaferin A and Z-guggulsterone, and the method was found to be reproducible, specific and precise for simultaneous estimation of these marker compounds in a combined dosage form. The method can be successfully used for quantitative analysis of these two marker constituents in the marketed polyherbal formulation. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Opportunities to Intercalibrate Radiometric Sensors From International Space Station
NASA Technical Reports Server (NTRS)
Roithmayr, C. M.; Lukashin, C.; Speth, P. W.; Thome, K. J.; Young, D. F.; Wielicki, B. A.
2012-01-01
Highly accurate measurements of Earth's thermal infrared and reflected solar radiation are required for detecting and predicting long-term climate change. We consider the concept of using the International Space Station to test instruments and techniques that would eventually be used on a dedicated mission such as the Climate Absolute Radiance and Refractivity Observatory. In particular, a quantitative investigation is performed to determine whether it is possible to use measurements obtained with a highly accurate reflected solar radiation spectrometer to calibrate similar, less accurate instruments in other low Earth orbits. Estimates of numbers of samples useful for intercalibration are made with the aid of year-long simulations of orbital motion. We conclude that the International Space Station orbit is ideally suited for the purpose of intercalibration.
NASA Astrophysics Data System (ADS)
Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.
2013-06-01
In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVH estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose-rate distributions were estimated by convolution of the activity image with a voxel S-value kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.
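To make the dose-volume histogram step concrete, the sketch below tallies a cumulative DVH from a voxelized dose map and an organ mask. The toy dose field and spherical mask stand in for the S-value-kernel convolution output described above; all names are illustrative.

import numpy as np

def cumulative_dvh(dose, mask, bins=100):
    """Fraction of the organ volume receiving at least each dose level."""
    d = dose[mask]
    levels = np.linspace(0.0, d.max(), bins)
    volume_fraction = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_fraction

# toy 3D dose map and a spherical "organ" mask
z, y, x = np.mgrid[:40, :40, :40]
r2 = (x - 20)**2 + (y - 20)**2 + (z - 20)**2
mask = r2 < 15**2
dose = np.exp(-r2 / 200.0)

levels, vf = cumulative_dvh(dose, mask)
print(levels[vf <= 0.5][0])   # approximate dose received by half the organ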
Estimation of Temporal Gait Parameters Using a Human Body Electrostatic Sensing-Based Method.
Li, Mengxuan; Li, Pengfei; Tian, Shanshan; Tang, Kai; Chen, Xi
2018-05-28
Accurate estimation of gait parameters is essential for obtaining quantitative information on motor deficits in Parkinson's disease and other neurodegenerative diseases, which helps determine disease progression and therapeutic interventions. Due to the demand for high accuracy, unobtrusive measurement methods such as optical motion capture systems, foot pressure plates, and other systems have been commonly used in clinical environments. However, the high cost of existing lab-based methods greatly hinders their wider usage, especially in developing countries. In this study, we present a low-cost, noncontact, and accurate method for estimating temporal gait parameters by sensing and analyzing the electrostatic field generated by human foot stepping. The proposed method achieved an average 97% accuracy in gait phase detection and was further validated by comparison to a foot pressure system in 10 healthy subjects. The two sets of results were compared using the Pearson coefficient r and showed excellent consistency (r = 0.99, p < 0.05). The between-day repeatability of the proposed method was assessed using intraclass correlation coefficients (ICC) and showed good test-retest reliability (ICC = 0.87, p < 0.01). The proposed method could be an affordable and accurate tool to measure temporal gait parameters in hospital laboratories and in patients' home environments.
Spacecraft Complexity Subfactors and Implications on Future Cost Growth
NASA Technical Reports Server (NTRS)
Leising, Charles J.; Wessen, Randii; Ellyin, Ray; Rosenberg, Leigh; Leising, Adam
2013-01-01
During the last ten years the Jet Propulsion Laboratory has used a set of cost-risk subfactors to independently estimate the magnitude of development risks that may not be covered in the high-level cost models employed during early concept development. Within the last several years the Laboratory has also developed a scale of Concept Maturity Levels, with associated criteria, to quantitatively assess a concept's maturity. This latter effort has been helpful in determining whether a concept is mature enough for accurate costing, but it does not provide any quantitative estimate of cost risk. Unfortunately, today's missions are significantly more complex than when the original cost-risk subfactors were first formulated. Risks associated with complex missions are not being adequately evaluated, and future cost growth is being underestimated. The risk subfactor process therefore needed to be updated.
Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C
2015-04-13
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective way to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e., we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of a gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.
Nolte, Tom M; Ragas, Ad M J
2017-03-22
Many organic chemicals are ionizable by nature. After use and release into the environment, various fate processes determine their concentrations, and hence exposure to aquatic organisms. In the absence of suitable data, such fate processes can be estimated using Quantitative Structure-Property Relationships (QSPRs). In this review we compiled available QSPRs from the open literature and assessed their applicability towards ionizable organic chemicals. Using quantitative and qualitative criteria we selected the 'best' QSPRs for sorption, (a)biotic degradation, and bioconcentration. The results indicate that many suitable QSPRs exist, but some critical knowledge gaps remain. Specifically, future focus should be directed towards the development of QSPR models for biodegradation in wastewater and sediment systems, direct photolysis and reaction with singlet oxygen, as well as additional reactive intermediates. Adequate QSPRs for bioconcentration in fish exist, but more accurate assessments can be achieved using pharmacologically based toxicokinetic (PBTK) models. No adequate QSPRs exist for bioconcentration in non-fish species. Due to the high variability of chemical and biological species as well as environmental conditions in QSPR datasets, accurate predictions for specific systems and inter-dataset conversions are problematic, for which standardization is needed. For all QSPR endpoints, additional data requirements involve supplementing the current chemical space covered and accurately characterizing the test systems used.
Tichauer, Kenneth M.; Wang, Yu; Pogue, Brian W.; Liu, Jonathan T. C.
2015-01-01
The development of methods to accurately quantify cell-surface receptors in living tissues would have a seminal impact in oncology. For example, accurate measures of receptor density in vivo could enhance early detection or surgical resection of tumors via protein-based contrast, allowing removal of cancer with high phenotype specificity. Alternatively, accurate receptor expression estimation could be used as a biomarker to guide patient-specific clinical oncology targeting of the same molecular pathway. Unfortunately, conventional molecular contrast-based imaging approaches are not well adapted to accurately estimating the nanomolar-level cell-surface receptor concentrations in tumors, as most images are dominated by nonspecific sources of contrast such as high vascular permeability and lymphatic inhibition. This article reviews approaches for overcoming these limitations based upon tracer kinetic modeling and the use of emerging protocols to estimate binding potential and the related receptor concentration. Methods such as using single time point imaging or a reference-tissue approach tend to have low accuracy in tumors, whereas paired-agent methods or advanced kinetic analyses are more promising to eliminate the dominance of interstitial space in the signals. Nuclear medicine and optical molecular imaging are the primary modalities used, as they have the nanomolar level sensitivity needed to quantify cell-surface receptor concentrations present in tissue, although each likely has a different clinical niche. PMID:26134619
Li, Chunhui; Guan, Guangying; Zhang, Fan; Song, Shaozhen; Wang, Ruikang K; Huang, Zhihong; Nabi, Ghulam
2014-12-01
The maintenance of urinary bladder elasticity is essential to its functions, including the storage and voiding phases of the micturition cycle. Bladder stiffness can be changed by various pathophysiological conditions. Quantitative measurement of bladder elasticity is an essential step toward understanding various urinary bladder disease processes and improving patient care. As a nondestructive and noncontact method, laser-induced surface acoustic waves (SAWs) can accurately characterize the elastic properties of different layers of organs such as the urinary bladder. This initial investigation evaluates the feasibility of a noncontact, all-optical method of generating and detecting SAWs to estimate the elasticity of the urinary bladder. Quantitative elasticity measurements of ex vivo porcine urinary bladder were made using the laser-induced SAW technique. A pulsed laser was used to excite SAWs that propagated on the bladder wall surface. A dedicated phase-sensitive optical coherence tomography (PhS-OCT) system remotely recorded the SAWs, from which the elastic properties of different layers of the bladder were estimated. During the experiments, a series of measurements was performed at five precisely controlled water-filled bladder volumes to estimate changes in elasticity in relation to bladder content. The results, validated by optical coherence elastography, show that the laser-induced SAW technique combined with PhS-OCT can be a feasible method for quantitative estimation of biomechanical properties.
The effect of respiratory induced density variations on non-TOF PET quantitation in the lung.
Holman, Beverley F; Cuplov, Vesna; Hutton, Brian F; Groves, Ashley M; Thielemans, Kris
2016-04-21
Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant (18)F-FDG and (18)F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.
A unified material decomposition framework for quantitative dual- and triple-energy CT imaging.
Zhao, Wei; Vernekohl, Don; Han, Fei; Han, Bin; Peng, Hao; Yang, Yong; Xing, Lei; Min, James K
2018-04-21
Many clinical applications depend critically on the accurate differentiation and classification of different types of materials in patient anatomy. This work introduces a unified framework for accurate nonlinear material decomposition and applies it, for the first time, to the concept of triple-energy CT (TECT) for enhanced material differentiation and classification, as well as to dual-energy CT (DECT). We express the polychromatic projection as a linear combination of line integrals of material-selective images. The material decomposition is then turned into a problem of minimizing the least-squares difference between measured and estimated CT projections. The optimization problem is solved iteratively by updating the line integrals. The proposed technique is evaluated using several numerical phantom measurements under different scanning protocols. The triple-energy data acquisition is implemented at the scales of micro-CT and clinical CT imaging with a commercial "TwinBeam" dual-source DECT configuration and a fast kV-switching DECT configuration. Material decomposition and quantitative comparison with a photon-counting detector and with the presence of a bow-tie filter are also performed. The proposed method provides quantitative material- and energy-selective images examining realistic configurations for both DECT and TECT measurements. Compared with the polychromatic kV CT images, virtual monochromatic images show superior image quality. For the mouse phantom, quantitative measurements show that the differences between gadodiamide and iodine concentrations obtained using TECT and idealized photon-counting CT (PCCT) are smaller than 8 and 1 mg/mL, respectively. TECT outperforms DECT for multicontrast CT imaging and is robust with respect to spectrum estimation. For the thorax phantom, the differences between the concentrations of the contrast map and the corresponding true reference values are smaller than 7 mg/mL for all of the realistic configurations. A unified framework for both DECT and TECT imaging has been established for the accurate extraction of material compositions using currently available commercial DECT configurations. The novel technique is promising to provide an urgently needed solution for several CT-based diagnostic and therapy applications, especially for the diagnosis of cardiovascular and abdominal diseases where multicontrast imaging is involved. © 2018 American Association of Physicists in Medicine.
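In the idealized monochromatic limit, the decomposition above reduces to a small linear solve per ray; the sketch below shows that two-material case with illustrative attenuation coefficients. The paper's actual framework replaces this linear solve with an iterative nonlinear least-squares fit of the same line integrals to polychromatic projections.

import numpy as np

# mass-attenuation coefficients (cm^2/g) at two energies; rows = energy bins,
# columns = basis materials (water, iodine); values are illustrative
A = np.array([[0.20, 5.0],
              [0.17, 2.0]])

def decompose(p):
    # solve p = A @ x for the basis-material line integrals x (g/cm^2)
    return np.linalg.solve(A, p)

truth = np.array([10.0, 0.05])   # water and iodine line integrals along one ray
p = A @ truth                    # idealized monochromatic projections
print(decompose(p))              # recovers [10.0, 0.05]

A third energy bin adds a row to A, turning the square solve into an overdetermined least-squares problem, which is the sense in which TECT constrains a third basis material.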
Accurate estimation of sigma(exp 0) using AIRSAR data
NASA Technical Reports Server (NTRS)
Holecz, Francesco; Rignot, Eric
1995-01-01
During recent years, signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
Genetic interactions contribute less than additive effects to quantitative trait variation in yeast
Bloom, Joshua S.; Kotenko, Iulia; Sadhu, Meru J.; Treusch, Sebastian; Albert, Frank W.; Kruglyak, Leonid
2015-01-01
Genetic mapping studies of quantitative traits typically focus on detecting loci that contribute additively to trait variation. Genetic interactions are often proposed as a contributing factor to trait variation, but the relative contribution of interactions to trait variation is a subject of debate. Here we use a very large cross between two yeast strains to accurately estimate the fraction of phenotypic variance due to pairwise QTL–QTL interactions for 20 quantitative traits. We find that this fraction is 9% on average, substantially less than the contribution of additive QTL (43%). Statistically significant QTL–QTL pairs typically have small individual effect sizes, but collectively explain 40% of the pairwise interaction variance. We show that pairwise interaction variance is largely explained by pairs of loci at least one of which has a significant additive effect. These results refine our understanding of the genetic architecture of quantitative traits and help guide future mapping studies. PMID:26537231
Quantitative, spectrally-resolved intraoperative fluorescence imaging
Valdés, Pablo A.; Leblond, Frederic; Jacobs, Valerie L.; Wilson, Brian C.; Paulsen, Keith D.; Roberts, David W.
2012-01-01
Intraoperative visual fluorescence imaging (vFI) has emerged as a promising aid to surgical guidance, but does not fully exploit the potential of the fluorescent agents that are currently available. Here, we introduce a quantitative fluorescence imaging (qFI) approach that converts spectrally-resolved data into images of absolute fluorophore concentration pixel-by-pixel across the surgical field of view (FOV). The resulting estimates are linear, accurate, and precise relative to true values, and spectral decomposition of multiple fluorophores is also achieved. Experiments with protoporphyrin IX in a glioma rodent model demonstrate in vivo quantitative and spectrally-resolved fluorescence imaging of infiltrating tumor margins for the first time. Moreover, we present images from human surgery which detect residual tumor not evident with state-of-the-art vFI. The wide-field qFI technique has broad implications for intraoperative surgical guidance because it provides near real-time quantitative assessment of multiple fluorescent biomarkers across the operative field. PMID:23152935
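The spectral decomposition step mentioned above is commonly posed as non-negative unmixing of a measured spectrum against known basis spectra; a minimal sketch follows. The Gaussian basis shapes and concentrations are illustrative, and the qFI pipeline additionally corrects raw spectra for tissue optical attenuation before unmixing, which is omitted here.

import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(600.0, 720.0, 60)

def peak(center, width):
    # Gaussian stand-in for a measured basis emission spectrum
    return np.exp(-((wavelengths - center) / width) ** 2)

# two basis spectra, e.g. a PpIX-like peak and a broad background fluorophore
E = np.column_stack([peak(635.0, 10.0), peak(670.0, 30.0)])

c_true = np.array([2.0, 0.5])    # fluorophore concentrations, arbitrary units
rng = np.random.default_rng(2)
measured = E @ c_true + 0.01 * rng.normal(size=wavelengths.size)

c_hat, _ = nnls(E, measured)     # non-negative unmixing, applied per pixel
print(c_hat)                     # close to [2.0, 0.5]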
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and the number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a given number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experimental validation. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire set of time sampling points were used.
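A minimal sketch of D-optimal time-point selection for a bi-exponential FLIM-FRET decay, using a greedy search over the determinant of the information matrix; lifetimes, the quenched fraction, and the greedy strategy are illustrative assumptions rather than the paper's exact procedure.

import numpy as np

tau_q, tau_u, f = 0.5, 2.5, 0.4      # quenched/unquenched lifetimes (ns), quenched fraction
t = np.linspace(0.1, 10.0, 90)       # the full 90-point sampling grid

# analytic sensitivity (Jacobian) columns: d(decay)/d(f), d/d(tau_q), d/d(tau_u)
J = np.column_stack([
    np.exp(-t / tau_q) - np.exp(-t / tau_u),
    f * t / tau_q**2 * np.exp(-t / tau_q),
    (1 - f) * t / tau_u**2 * np.exp(-t / tau_u),
])

# greedy D-optimal selection of 10 time points: maximize det(J_S^T J_S)
selected = []
for _ in range(10):
    best_i, best_det = -1, -np.inf
    for i in range(len(t)):
        if i in selected:
            continue
        S = J[selected + [i]]
        # small ridge keeps the information matrix nonsingular for tiny subsets
        d = np.linalg.det(S.T @ S + 1e-12 * np.eye(3))
        if d > best_det:
            best_i, best_det = i, d
    selected.append(best_i)

print(np.sort(t[selected]))          # the 10 most informative sampling times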
Lin, Huan-Ting; Okumura, Takashi; Yatsuda, Yukinori; Ito, Satoru; Nakauchi, Hiromitsu; Otsu, Makoto
2016-10-01
Stable gene transfer into target cell populations via integrating viral vectors is widely used in stem cell gene therapy (SCGT). Accurate vector copy number (VCN) estimation has become increasingly important. However, existing methods of estimation such as real-time quantitative PCR are more restricted in practicality, especially during clinical trials, given the limited availability of sample materials from patients. This study demonstrates the application of an emerging technology called droplet digital PCR (ddPCR) in estimating VCN states in the context of SCGT. Induced pluripotent stem cells (iPSCs) derived from a patient with X-linked chronic granulomatous disease were used as clonable target cells for transduction with alpharetroviral vectors harboring codon-optimized CYBB cDNA. Precise primer-probe design followed by multiplex analysis conferred assay specificity. Accurate estimation of per-cell VCN values was possible without reliance on a reference standard curve. Sensitivity was high and the dynamic range of detection was wide. Assay reliability was validated by observation of consistent, reproducible, and distinct VCN clustering patterns for clones of transduced iPSCs with varying numbers of transgene copies. Taken together, use of ddPCR appears to offer a practical and robust approach to VCN estimation with a wide range of clinical and research applications.
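The arithmetic behind ddPCR-based copy-number calls is the standard Poisson correction of the negative-droplet fraction; the sketch below shows the idea with illustrative droplet counts and assumes an autosomal single-copy reference locus (two copies per diploid genome), neither of which is taken from the study.

import numpy as np

def copies_per_droplet(n_total, n_negative):
    # Poisson correction: lambda = -ln(fraction of negative droplets)
    return -np.log(n_negative / n_total)

# illustrative droplet counts for the transgene and a reference assay
lam_target = copies_per_droplet(15000, 9000)    # transgene (e.g. CYBB) channel
lam_ref = copies_per_droplet(15000, 11000)      # autosomal single-copy reference

vcn = 2.0 * lam_target / lam_ref   # reference locus: 2 copies per diploid genome
print(f"estimated vector copy number per cell: {vcn:.2f}")

Because the estimate depends only on counted droplet fractions, no reference standard curve is needed, which is the practical advantage the abstract highlights.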
Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W
2018-04-01
The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory-gated-only and cardiac-gated-only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation, while the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVFs using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets at two more noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images, and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for noisy cases in terms of quantitative accuracy of the estimated MVF. Methods 4 and 3 showed comparable results and achieved RMSEs up to 35% lower than Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates that separate R&C estimation with modeling of RM before CM estimation (Method 3) is the best option for accurate estimation of dual R&C motion in clinical situations. © 2018 American Association of Physicists in Medicine.
Gyawali, P; Sidhu, J P S; Ahmed, W; Jagals, P; Toze, S
2017-06-01
Accurate quantitative measurement of viable hookworm ova from environmental samples is key to controlling hookworm re-infections in endemic regions. In this study, the accuracy of three quantitative detection methods [culture-based, vital stain and propidium monoazide-quantitative polymerase chain reaction (PMA-qPCR)] was evaluated by enumerating 1,000 ± 50 Ancylostoma caninum ova in the laboratory. The culture-based method was able to quantify an average of 397 ± 59 viable hookworm ova. Similarly, the vital stain and PMA-qPCR methods quantified 644 ± 87 and 587 ± 91 viable ova, respectively. The numbers of viable ova estimated by the culture-based method were significantly (P < 0.05) lower than those from the vital stain and PMA-qPCR methods. Therefore, both the PMA-qPCR and vital stain methods appear to be suitable for the quantitative detection of viable hookworm ova. However, PMA-qPCR would be preferable over the vital stain method in scenarios where ova speciation is needed.
NASA Astrophysics Data System (ADS)
Zhang, Kai; Yang, Fanlin; Zhang, Hande; Su, Dianpeng; Li, QianQian
2017-06-01
The correlation between seafloor morphological features and biological complexity has been identified in numerous recent studies. This research focused on the potential for accurate characterization of coral reefs based on high-resolution bathymetry from multiple sources. A standard deviation (STD) based method for quantitatively characterizing terrain complexity was developed that includes robust estimation to correct for irregular bathymetry and a calibration for the depth-dependent variability of measurement noise. Airborne lidar and shipborne sonar bathymetry measurements from Yuanzhi Island, South China Sea, were merged to generate seamless high-resolution coverage of coral bathymetry from the shoreline to deep water. The new algorithm was applied to the Yuanzhi Island surveys to generate maps of quantitative terrain complexity, which were then compared to in situ video observations of coral abundance. The terrain complexity parameter is significantly correlated with seafloor coral abundance, demonstrating the potential for accurately and efficiently mapping coral abundance through seafloor surveys, including combinations of surveys using different sensors.
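A minimal sketch of an STD-based terrain-complexity map with a robust spread estimate and a depth-dependent noise correction, in the spirit of the method described above; the window size, noise coefficient, and MAD-based robust estimator are illustrative assumptions, not the authors' calibration.

import numpy as np

def terrain_complexity(depth, window=5, noise_coeff=0.002):
    """Moving-window STD of bathymetry, made robust with the median absolute
    deviation and reduced by a simple depth-dependent noise term (illustrative)."""
    pad = window // 2
    out = np.zeros_like(depth)
    rows, cols = depth.shape
    for i in range(pad, rows - pad):
        for j in range(pad, cols - pad):
            patch = depth[i - pad:i + pad + 1, j - pad:j + pad + 1].ravel()
            mad = np.median(np.abs(patch - np.median(patch)))
            robust_std = 1.4826 * mad                 # resists spikes/outliers
            noise = noise_coeff * abs(depth[i, j])    # noise grows with depth
            out[i, j] = max(robust_std - noise, 0.0)
    return out

rng = np.random.default_rng(3)
depth = -10.0 + rng.normal(0.0, 0.1, (50, 50))   # flat seafloor + sensor noise
print(terrain_complexity(depth).mean())          # near zero for flat terrain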
Elschot, Mattijs; Vermolen, Bart J.; Lam, Marnix G. E. H.; de Keizer, Bart; van den Bosch, Maurice A. A. J.; de Jong, Hugo W. A. M.
2013-01-01
Background: After yttrium-90 (90Y) microsphere radioembolization (RE), evaluation of extrahepatic activity and liver dosimetry is typically performed on 90Y Bremsstrahlung SPECT images. Since these images demonstrate a low quantitative accuracy, 90Y PET has been suggested as an alternative. The aim of this study is to quantitatively compare SPECT and state-of-the-art PET on the ability to detect small accumulations of 90Y and on the accuracy of liver dosimetry. Methodology/Principal Findings: SPECT/CT and PET/CT phantom data were acquired using several acquisition and reconstruction protocols, including resolution recovery and Time-Of-Flight (TOF) PET. Image contrast and noise were compared using a torso-shaped phantom containing six hot spheres of various sizes. The ability to detect extra- and intrahepatic accumulations of activity was tested by quantitative evaluation of the visibility and unique detectability of the phantom hot spheres. Image-based dose estimates of the phantom were compared to the true dose. For clinical illustration, the SPECT- and PET-based estimated liver dose distributions of five RE patients were compared. At equal noise level, PET showed higher contrast recovery coefficients than SPECT. The highest contrast recovery coefficients were obtained with TOF PET reconstruction including resolution recovery. All six spheres were consistently visible on SPECT and PET images, but PET was able to uniquely detect smaller spheres than SPECT. TOF PET-based estimates of the dose in the phantom spheres were more accurate than SPECT-based dose estimates, with underestimations ranging from 45% (10-mm sphere) to 11% (37-mm sphere) for PET, and 75% to 58% for SPECT, respectively. The differences between TOF PET and SPECT dose estimates were supported by the patient data. Conclusions/Significance: In this study we quantitatively demonstrated that the image quality of state-of-the-art PET is superior to that of Bremsstrahlung SPECT for the assessment of the 90Y microsphere distribution after radioembolization. PMID:23405207
Optimally weighted least-squares steganalysis
NASA Astrophysics Data System (ADS)
Ker, Andrew D.
2007-02-01
Quantitative steganalysis aims to estimate the amount of payload in a stego object, and such estimators seem to arise naturally in steganalysis of Least Significant Bit (LSB) replacement in digital images. However, as with all steganalysis, the estimators are subject to errors, and their magnitude seems heavily dependent on properties of the cover. In very recent work we have given the first derivation of estimation error, for a certain method of steganalysis (the Least-Squares variant of Sample Pairs Analysis) of LSB replacement steganography in digital images. In this paper we make use of our theoretical results to find an improved estimator and detector. We also extend the theoretical analysis to another (more accurate) steganalysis estimator (Triples Analysis) and hence derive an improved version of that estimator too. Experimental results show that the new steganalyzers have improved accuracy, particularly in the difficult case of never-compressed covers.
Phommasone, Koukeo; Althaus, Thomas; Souvanthong, Phonesavanh; Phakhounthong, Khansoudaphone; Soyvienvong, Laxoy; Malapheth, Phatthaphone; Mayxay, Mayfong; Pavlicek, Rebecca L; Paris, Daniel H; Dance, David; Newton, Paul; Lubell, Yoel
2016-02-04
C-reactive protein (CRP) has been shown to be an accurate biomarker for discriminating bacterial from viral infections in febrile patients in Southeast Asia. Here we investigate the accuracy of existing rapid qualitative and semi-quantitative tests as compared with a quantitative reference test to assess their potential for use in remote tropical settings. Blood samples were obtained from consecutive patients recruited to a prospective fever study at three sites in rural Laos. At each site, one of three rapid qualitative or semi-quantitative tests was performed, as well as a corresponding quantitative NycoCard Reader II as a reference test. We estimated the sensitivity and specificity of the three tests against a threshold of 10 mg/L, and kappa values for the agreement of the two semi-quantitative tests with the results of the reference test. All three tests showed high sensitivity, specificity and kappa values as compared with the NycoCard Reader II. With a threshold of 10 mg/L, the sensitivity of the tests ranged from 87-98% and the specificity from 91-98%. The weighted kappa values for the semi-quantitative tests were 0.7 and 0.8. The use of CRP rapid tests could offer an inexpensive and effective approach to improve the targeting of antibiotics in remote settings where health facilities are basic and laboratories are absent. This study demonstrates that accurate CRP rapid tests are commercially available; evaluations of their clinical impact and cost-effectiveness at the point of care are warranted.
Wu, J; Awate, S P; Licht, D J; Clouchoux, C; du Plessis, A J; Avants, B B; Vossough, A; Gee, J C; Limperopoulos, C
2015-07-01
Traditional methods of dating a pregnancy based on history or sonographic assessment have a large variation in the third trimester. We aimed to assess the ability of various quantitative measures of brain cortical folding on MR imaging to determine fetal gestational age in the third trimester. We evaluated 8 different quantitative cortical folding measures to predict gestational age in 33 healthy fetuses by using T2-weighted fetal MR imaging. We compared the accuracy of the prediction of gestational age by these cortical folding measures with the accuracy of prediction by brain volume measurement and by a previously reported semiquantitative visual scale of brain maturity. Regression models were constructed, and measurement biases and variances were determined via a cross-validation procedure. The cortical folding measures are accurate in the estimation and prediction of gestational age (mean absolute error, 0.43 ± 0.45 weeks) and perform better (P = .024) than brain volume (mean absolute error, 0.72 ± 0.61 weeks) or sonography measures (SDs approximately 1.5 weeks, as reported in the literature). Prediction accuracy is comparable with that of the semiquantitative visual assessment score (mean, 0.57 ± 0.41 weeks). Quantitative cortical folding measures such as global average curvedness can be an accurate and reliable estimator of gestational age and brain maturity for healthy fetuses in the third trimester and have the potential to be an indicator of brain-growth delays for at-risk fetuses and preterm neonates. © 2015 by American Journal of Neuroradiology.
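The cross-validated regression step can be sketched as follows for a single folding measure; the synthetic stand-in for global average curvedness, the linear model, and the leave-one-out scheme are assumptions for illustration, not the study's actual 8-measure models.

import numpy as np

rng = np.random.default_rng(4)
ga = rng.uniform(28.0, 38.0, 33)    # gestational ages (weeks) for 33 fetuses
curvedness = 0.05 * ga + rng.normal(0.0, 0.02, ga.size)   # synthetic folding measure

errors = []
for i in range(ga.size):            # leave-one-out cross-validation
    train = np.delete(np.arange(ga.size), i)
    slope, intercept = np.polyfit(curvedness[train], ga[train], 1)
    errors.append(abs(slope * curvedness[i] + intercept - ga[i]))

print(f"cross-validated mean absolute error: {np.mean(errors):.2f} weeks")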
Henshall, John M; Dierens, Leanne; Sellars, Melony J
2014-09-02
While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are sufficiently accurate to provide useful information for a breeding program. Treating genotypes as quantitative values is an alternative to perturbing genotypes using an assumed error distribution, but can produce very different results. An understanding of the distribution of the error is required for SNP genotyping platforms.
Shen, Xiaomeng; Hu, Qiang; Li, Jun; Wang, Jianmin; Qu, Jun
2015-10-02
Comprehensive and accurate evaluation of data quality and false-positive biomarker discovery is critical to direct the method development/optimization for quantitative proteomics, which nonetheless remains challenging largely due to the high complexity and unique features of proteomic data. Here we describe an experimental null (EN) method to address this need. Because the method experimentally measures the null distribution (either technical or biological replicates) using the same proteomic samples, the same procedures and the same batch as the case-vs-contol experiment, it correctly reflects the collective effects of technical variability (e.g., variation/bias in sample preparation, LC-MS analysis, and data processing) and project-specific features (e.g., characteristics of the proteome and biological variation) on the performances of quantitative analysis. To show a proof of concept, we employed the EN method to assess the quantitative accuracy and precision and the ability to quantify subtle ratio changes between groups using different experimental and data-processing approaches and in various cellular and tissue proteomes. It was found that choices of quantitative features, sample size, experimental design, data-processing strategies, and quality of chromatographic separation can profoundly affect quantitative precision and accuracy of label-free quantification. The EN method was also demonstrated as a practical tool to determine the optimal experimental parameters and rational ratio cutoff for reliable protein quantification in specific proteomic experiments, for example, to identify the necessary number of technical/biological replicates per group that affords sufficient power for discovery. Furthermore, we assessed the ability of EN method to estimate levels of false-positives in the discovery of altered proteins, using two concocted sample sets mimicking proteomic profiling using technical and biological replicates, respectively, where the true-positives/negatives are known and span a wide concentration range. It was observed that the EN method correctly reflects the null distribution in a proteomic system and accurately measures false altered proteins discovery rate (FADR). In summary, the EN method provides a straightforward, practical, and accurate alternative to statistics-based approaches for the development and evaluation of proteomic experiments and can be universally adapted to various types of quantitative techniques.
Linkage disequilibrium interval mapping of quantitative trait loci.
Boitard, Simon; Abdallah, Jihad; de Rochambeau, Hubert; Cierco-Ayrolles, Christine; Mangin, Brigitte
2006-03-16
For many years gene mapping studies have been performed through linkage analyses based on pedigree data. Recently, linkage disequilibrium methods based on unrelated individuals have been advocated as powerful tools to refine estimates of gene location. Many strategies have been proposed to deal with simply inherited disease traits. However, locating quantitative trait loci is statistically more challenging and considerable research is needed to provide robust and computationally efficient methods. Under a three-locus Wright-Fisher model, we derived approximate expressions for the expected haplotype frequencies in a population. We considered haplotypes comprising one trait locus and two flanking markers. Using these theoretical expressions, we built a likelihood-maximization method, called HAPim, for estimating the location of a quantitative trait locus. For each postulated position, the method only requires information from the two flanking markers. Over a wide range of simulation scenarios it was found to be more accurate than a two-marker composite likelihood method. It also performed as well as identity by descent methods, whilst being valuable in a wider range of populations. Our method makes efficient use of marker information, and can be valuable for fine mapping purposes. Its performance is increased if multiallelic markers are available. Several improvements can be developed to account for more complex evolution scenarios or provide robust confidence intervals for the location estimates.
Daniel, Hubert Darius J; Fletcher, John G; Chandy, George M; Abraham, Priya
2009-01-01
Sensitive nucleic acid testing for the detection and accurate quantitation of hepatitis B virus (HBV) is necessary to reduce transmission through blood and blood products and for monitoring patients on antiviral therapy. The aim of this study is to standardize an "in-house" real-time HBV polymerase chain reaction (PCR) for accurate quantitation and screening of HBV. The "in-house" real-time assay was compared with a commercial assay using 30 chronically infected individuals and 70 blood donors who are negative for hepatitis B surface antigen, hepatitis C virus (HCV) antibody and human immunodeficiency virus (HIV) antibody. Further, 30 HBV-genotyped samples were tested to evaluate the "in-house" assay's capacity to detect genotypes prevalent among individuals attending this tertiary care hospital. The lower limit of detection of this "in-house" HBV real-time PCR was assessed against the WHO international standard and found to be 50 IU/mL. The inter-assay and intra-assay coefficients of variation (CV) of this "in-house" assay ranged from 1.4% to 9.4% and 0.0% to 2.3%, respectively. Virus loads as estimated with this "in-house" HBV real-time assay correlated well with the commercial artus HBV RG PCR assay (r = 0.95, P < 0.0001). This assay can be used for the detection and accurate quantitation of HBV viral loads in plasma samples. This assay can be employed for the screening of blood donations and can potentially be adapted to a multiplex format for simultaneous detection of HBV, HIV and HCV to reduce the cost of testing in blood banks.
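For reference, the inter- and intra-assay CVs reported above are conventionally computed from replicate measurements as follows; the numbers in this sketch are hypothetical, not the study's data:

import numpy as np

# Hypothetical log10 IU/mL viral loads: 3 runs x 4 within-run replicates.
runs = np.array([
    [4.02, 3.98, 4.05, 4.00],
    [4.10, 4.12, 4.08, 4.11],
    [3.95, 3.97, 3.96, 3.99],
])

# Intra-assay CV: variability of replicates within each run.
intra_cv = runs.std(axis=1, ddof=1) / runs.mean(axis=1) * 100

# Inter-assay CV: variability of run means across runs.
run_means = runs.mean(axis=1)
inter_cv = run_means.std(ddof=1) / run_means.mean() * 100

print([f"{cv:.1f}%" for cv in intra_cv], f"inter: {inter_cv:.1f}%")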
Quantitative Oxygenation Venography from MRI Phase
Fan, Audrey P.; Bilgic, Berkin; Gagnon, Louis; Witzel, Thomas; Bhat, Himanshu; Rosen, Bruce R.; Adalsteinsson, Elfar
2014-01-01
Purpose: To demonstrate acquisition and processing methods for quantitative oxygenation venograms that map in vivo oxygen saturation (SvO2) along the cerebral venous vasculature. Methods: Regularized quantitative susceptibility mapping (QSM) is used to reconstruct susceptibility values and estimate SvO2 in veins. QSM reconstructions with ℓ1 and ℓ2 regularization are compared in numerical simulations of vessel structures with known magnetic susceptibility. Dual-echo, flow-compensated phase images are collected in three healthy volunteers to create QSM images. Bright veins in the susceptibility maps are vectorized and used to form a three-dimensional vascular mesh, or venogram, along which to display SvO2 values from QSM. Results: Quantitative oxygenation venograms that map SvO2 along brain vessels of arbitrary orientation and geometry are shown in vivo. SvO2 values in major cerebral veins lie within the normal physiological range reported by ¹⁵O positron emission tomography. SvO2 from QSM is consistent with previous MR susceptometry methods for vessel segments oriented parallel to the main magnetic field. In vessel simulations, ℓ1 regularization results in less than 10% SvO2 absolute error across all vessel tilt orientations and provides more accurate SvO2 estimation than ℓ2 regularization. Conclusion: The proposed analysis of susceptibility images enables reliable mapping of quantitative SvO2 along venograms and may facilitate clinical use of venous oxygenation imaging. PMID:24006229
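Although the abstract does not spell out the conversion, QSM-based SvO2 estimation generally relies on the linear relation between venous susceptibility and deoxyhemoglobin content. A hedged sketch using typical literature constants, not values from this paper:

# Deoxyhemoglobin makes venous blood paramagnetic relative to fully
# oxygenated blood, so delta_chi = dchi_do * Hct * (1 - SvO2).
# Both constants below are assumptions (typical literature values).
DCHI_DO_PPM = 3.39   # susceptibility shift of fully deoxygenated RBCs (ppm, SI)
HCT = 0.40           # assumed hematocrit

def svo2_from_susceptibility(delta_chi_ppm: float) -> float:
    """Venous oxygen saturation from vein-vs-water susceptibility (ppm)."""
    return 1.0 - delta_chi_ppm / (DCHI_DO_PPM * HCT)

# A vein measured at ~0.45 ppm would map to roughly 67% saturation.
print(f"SvO2 = {svo2_from_susceptibility(0.45):.2f}")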
Comparative analysis of quantitative methodologies for Vibrionaceae biofilms.
Chavez-Dozal, Alba A; Nourabadi, Neda; Erken, Martina; McDougald, Diane; Nishiguchi, Michele K
2016-11-01
Multiple symbiotic and free-living Vibrio spp. grow as a form of microbial community known as a biofilm. In the laboratory, methods to quantify Vibrio biofilm mass include crystal violet staining, direct colony-forming unit (CFU) counting, dry biofilm cell mass measurement, and observation of development of wrinkled colonies. Another approach for bacterial biofilms also involves the use of tetrazolium (XTT) assays (used widely in studies of fungi) that are an appropriate measure of metabolic activity and vitality of cells within the biofilm matrix. This study systematically tested five techniques, among which the XTT assay and wrinkled colony measurement provided the most reproducible, accurate, and efficient methods for the quantitative estimation of Vibrionaceae biofilms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn
Purpose: Cerebral perfusion computed tomography (PCT) imaging, as an accurate and fast acute ischemic stroke examination, has been widely used in the clinic. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher-order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on a patient with an old infarction were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) demonstrated that the PD-STV approach outperformed other existing approaches in terms of noise-induced artifact reduction and accurate perfusion hemodynamic map (PHM) estimation. In the patient data study, the present PD-STV approach yielded accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation in cerebral PCT imaging in the case of low mAs.
[Quantitative relationships between hyper-spectral vegetation indices and leaf area index of rice].
Tian, Yong-Chao; Yang, Jie; Yao, Xia; Zhu, Yan; Cao, Wei-Xing
2009-07-01
Based on field experiments with different rice varieties under different nitrogen application levels, the quantitative relationships of rice leaf area index (LAI) with canopy hyper-spectral parameters at different growth stages were analyzed. Rice LAI had good relationships with several hyper-spectral vegetation indices, the correlation coefficient being highest with DI (difference index), followed by RI (ratio index) and NI (normalized index), based on the spectral reflectance or the first derivative spectra. The two best spectral indices for estimating LAI were the difference index DI (854, 760) (based on the two spectral bands at 854 nm and 760 nm) and the difference index DI (D676, D778) (based on the two first derivative bands at 676 nm and 778 nm). In general, the hyper-spectral vegetation indices based on spectral reflectance performed better than those based on the first derivative spectra. Tests with an independent dataset suggested that rice LAI monitoring models with the difference index DI (854, 760) as the variable could give an accurate LAI estimation, making them suitable for estimating rice LAI.
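The three index families compared above have simple closed forms. A small sketch with made-up reflectance values; the linear LAI model coefficients are placeholders, since the fitted model is not given in the abstract:

import numpy as np

def difference_index(r_a: np.ndarray, r_b: np.ndarray) -> np.ndarray:
    """DI(a, b): difference of reflectance at two bands."""
    return r_a - r_b

def ratio_index(r_a: np.ndarray, r_b: np.ndarray) -> np.ndarray:
    """RI(a, b): ratio of reflectance at two bands."""
    return r_a / r_b

def normalized_index(r_a: np.ndarray, r_b: np.ndarray) -> np.ndarray:
    """NI(a, b): normalized difference, as in NDVI."""
    return (r_a - r_b) / (r_a + r_b)

# Hypothetical canopy reflectances at 854 nm and 760 nm for three plots.
r854 = np.array([0.42, 0.48, 0.55])
r760 = np.array([0.30, 0.31, 0.33])
di = difference_index(r854, r760)

# LAI would then come from a fitted model, e.g. LAI = a * DI + b with
# coefficients calibrated on field data (a and b here are placeholders).
a, b = 18.0, -1.0
print(a * di + b)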
Simon, Aaron B.; Griffeth, Valerie E. M.; Wong, Eric C.; Buxton, Richard B.
2013-01-01
Simultaneous implementation of magnetic resonance imaging methods for Arterial Spin Labeling (ASL) and Blood Oxygenation Level Dependent (BOLD) imaging makes it possible to quantitatively measure the changes in cerebral blood flow (CBF) and cerebral oxygen metabolism (CMRO2) that occur in response to neural stimuli. To date, however, the range of neural stimuli amenable to quantitative analysis is limited to those that may be presented in a simple block or event related design such that measurements may be repeated and averaged to improve precision. Here we examined the feasibility of using the relationship between cerebral blood flow and the BOLD signal to improve dynamic estimates of blood flow fluctuations as well as to estimate metabolic-hemodynamic coupling under conditions where a stimulus pattern is unknown. We found that by combining the information contained in simultaneously acquired BOLD and ASL signals through a method we term BOLD Constrained Perfusion (BCP) estimation, we could significantly improve the precision of our estimates of the hemodynamic response to a visual stimulus and, under the conditions of a calibrated BOLD experiment, accurately determine the ratio of the oxygen metabolic response to the hemodynamic response. Importantly we were able to accomplish this without utilizing a priori knowledge of the temporal nature of the neural stimulus, suggesting that BOLD Constrained Perfusion estimation may make it feasible to quantitatively study the cerebral metabolic and hemodynamic responses to more natural stimuli that cannot be easily repeated or averaged. PMID:23382977
Hyponatremia in liver cirrhosis: pathophysiological principles of management.
Castello, L; Pirisi, M; Sainaghi, P P; Bartoli, E
2005-02-01
Hyponatremia is common in cirrhosis, where it aggravates encephalopathy. It can be due to excess water, reduced Na, or a combination of both. The diagnosis can be established with clinical skills aided by simple data such as weight, blood pressure, and plasma electrolytes. Quantitative estimates of the water surfeit or solute deficit, easily obtained with simple formulas and measurements, guide accurate and planned treatment, avoiding the ominous complication of central pontine myelinolysis.
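The "simple formulas" referred to above are presumably of the standard bedside kind; a sketch using textbook relations (the authors' exact formulas are not given in the abstract):

def total_body_water(weight_kg: float, fraction: float = 0.6) -> float:
    """TBW in litres; 0.6 is a typical fraction for men, 0.5 for women."""
    return fraction * weight_kg

def water_excess_l(weight_kg: float, plasma_na: float,
                   target_na: float = 140.0) -> float:
    """Litres of excess free water implied by dilutional hyponatremia."""
    return total_body_water(weight_kg) * (1.0 - plasma_na / target_na)

def sodium_deficit_mmol(weight_kg: float, plasma_na: float,
                        target_na: float = 140.0) -> float:
    """Sodium deficit (mmol) if the hyponatremia were purely depletional."""
    return total_body_water(weight_kg) * (target_na - plasma_na)

# 70 kg patient with plasma Na of 120 mmol/L:
print(f"{water_excess_l(70, 120):.1f} L excess water or "
      f"{sodium_deficit_mmol(70, 120):.0f} mmol Na deficit")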
Martin, Daniel E; Severns, Anne E; Kabo, J M J Michael
2004-08-01
Mechanical tests of bone provide valuable information about material and structural properties important for understanding bone pathology in both clinical and research settings, but no previous studies have produced applicable non-invasive, quantitative estimates of bending stiffness. The goal of this study was to evaluate the effectiveness of using peripheral quantitative computed tomography (pQCT) data to accurately compute the bending stiffness of bone. Normal rabbit humeri (N=8) were scanned at their mid-diaphyses using pQCT. The average bone mineral densities and the cross-sectional moments of inertia were computed from the pQCT cross-sections. Bending stiffness was determined as a function of the elastic modulus of compact bone (based on the local bone mineral density), cross-sectional moment of inertia, and simulated quasistatic strain rate. The actual bending stiffness of the bones was determined using four-point bending tests. Comparison of the bending stiffness estimated from the pQCT data and the mechanical bending stiffness revealed excellent correlation (R2=0.96). The bending stiffness from the pQCT data was on average 103% of that obtained from the four-point bending tests. The results indicate that pQCT data can be used to accurately determine the bending stiffness of normal bone. Possible applications include temporal quantification of fracture healing and risk management of osteoporosis or other bone pathologies.
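The stiffness estimate above combines a density-derived elastic modulus with the cross-sectional moment of inertia. A sketch with a generic power-law modulus-density relation; the coefficients are illustrative, not those fitted in the study:

def elastic_modulus_gpa(rho_mg_cm3: float) -> float:
    """Cortical-bone modulus (GPa) from mineral density via E = c * rho^b.
    Hypothetical coefficients in the style of literature relations."""
    c, b = 8.0e-9, 3.0
    return c * rho_mg_cm3 ** b

def bending_stiffness(rho_mg_cm3: float, csmi_mm4: float) -> float:
    """EI in N*mm^2: modulus (converted to MPa) times moment of inertia."""
    e_mpa = elastic_modulus_gpa(rho_mg_cm3) * 1.0e3
    return e_mpa * csmi_mm4

# Mid-diaphysis with density 1100 mg/cm^3 and CSMI of 25 mm^4:
print(f"EI = {bending_stiffness(1100.0, 25.0):.0f} N*mm^2")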
Reconstruction algorithm for polychromatic CT imaging: application to beam hardening correction
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.
2000-01-01
This paper presents a new reconstruction algorithm for both single- and dual-energy computed tomography (CT) imaging. By incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process, the algorithm is capable of eliminating beam hardening artifacts. The single energy version of the algorithm assumes that each voxel in the scan field can be expressed as a mixture of two known substances, for example, a mixture of trabecular bone and marrow, or a mixture of fat and flesh. These assumptions are easily satisfied in a quantitative computed tomography (QCT) setting. We have compared our algorithm to three commonly used single-energy correction techniques. Experimental results show that our algorithm is much more robust and accurate. We have also shown that QCT measurements obtained using our algorithm are five times more accurate than those from current QCT systems (using calibration). The dual-energy mode does not require any prior knowledge of the object in the scan field, and can be used to estimate the attenuation coefficient function of unknown materials. We have tested the dual-energy setup to obtain an accurate estimate for the attenuation coefficient function of K₂HPO₄ solution.
Estimating malaria transmission from humans to mosquitoes in a noisy landscape
Reiner, Robert C.; Guerra, Carlos; Donnelly, Martin J.; Bousema, Teun; Drakeley, Chris; Smith, David L.
2015-01-01
A basic quantitative understanding of malaria transmission requires measuring the probability a mosquito becomes infected after feeding on a human. Parasite prevalence in mosquitoes is highly age-dependent, and the unknown age-structure of fluctuating mosquito populations impedes estimation. Here, we simulate mosquito infection dynamics, where mosquito recruitment is modelled seasonally with fractional Brownian noise, and we develop methods for estimating mosquito infection rates. We find that noise introduces bias, but the magnitude of the bias depends on the 'colour' of the noise. Some of these problems can be overcome by increasing the sampling frequency, but estimates of transmission rates (and estimated reductions in transmission) are most accurate and precise if they combine parity, oocyst rates and sporozoite rates. These studies provide a basis for evaluating the adequacy of various entomological sampling procedures for measuring malaria parasite transmission from humans to mosquitoes and for evaluating the direct transmission-blocking effects of a vaccine. PMID:26400195
Estimating Driving Performance Based on EEG Spectrum Analysis
NASA Astrophysics Data System (ADS)
Lin, Chin-Teng; Wu, Ruei-Cheng; Jung, Tzyy-Ping; Liang, Sheng-Fu; Huang, Teng-Yi
2005-12-01
The growing number of traffic accidents in recent years has become a serious concern to society. Accidents caused by drivers' drowsiness behind the steering wheel have a high fatality rate because of the marked decline in perception, recognition, and vehicle control abilities while sleepy. Preventing such accidents is highly desirable but requires techniques for continuously detecting, estimating, and predicting the level of alertness of drivers and delivering effective feedback to maintain their maximum performance. This paper proposes an EEG-based drowsiness estimation system that combines electroencephalogram (EEG) log subband power spectrum, correlation analysis, principal component analysis, and linear regression models to indirectly estimate the driver's drowsiness level in a virtual-reality-based driving simulator. Our results demonstrate that it is feasible to accurately and quantitatively estimate driving performance, expressed as the deviation between the center of the vehicle and the center of the cruising lane, in a realistic driving simulator.
Selecting good regions to deblur via relative total variation
NASA Astrophysics Data System (ADS)
Li, Lerenhan; Yan, Hao; Fan, Zhihua; Zheng, Hanqing; Gao, Changxin; Sang, Nong
2018-03-01
Image deblurring aims to estimate the blur kernel and restore the latent image. It is usually divided into two stages: kernel estimation and image restoration. In kernel estimation, selecting a good region that contains structure information improves the accuracy of the estimated kernel. Good regions to deblur are usually expert-chosen or found by trial and error. In this paper, we apply a metric named relative total variation (RTV) to discriminate structure regions from smooth and textured ones. Given a blurry image, we first calculate the RTV of each pixel to determine whether it lies in a structure region, after which we sample the image in an overlapping way. Finally, the sampled region that contains the most structure pixels is taken as the best region to deblur. Both qualitative and quantitative experiments show that our proposed method helps estimate the kernel accurately.
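One plausible reading of the RTV metric (following Xu et al.'s structure-texture measure, on which the name is based; the paper's exact variant may differ) is sketched below: windowed total variation is compared against windowed inherent variation, and low values flag structure pixels.

import numpy as np
from scipy.ndimage import uniform_filter

def relative_total_variation(img: np.ndarray, win: int = 9,
                             eps: float = 1e-3) -> np.ndarray:
    """Per-pixel RTV: windowed total variation over windowed inherent
    variation. Aligned gradients (structure) give low values; oscillating
    gradients (texture) give high values."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    d = uniform_filter(np.abs(gx), win) + uniform_filter(np.abs(gy), win)
    l = np.abs(uniform_filter(gx, win)) + np.abs(uniform_filter(gy, win))
    return d / (l + eps)

# Pixels whose RTV falls below a threshold count as "structure"; the
# overlapping patch containing the most such pixels would be selected
# for kernel estimation.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
img[:, 32:] += 1.0                      # a strong vertical edge (structure)
rtv = relative_total_variation(img)
structure_mask = rtv < np.median(rtv)
print(structure_mask.mean())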
Fujiwara, Yasuhiro; Maruyama, Hirotoshi; Toyomaru, Kanako; Nishizaka, Yuri; Fukamatsu, Masahiro
2018-06-01
Magnetic resonance imaging (MRI) is widely used to detect carotid atherosclerotic plaques. Although it is important to evaluate vulnerable carotid plaques containing lipids and intra-plaque hemorrhages (IPHs) using T1-weighted images, the image contrast changes depending on the imaging settings. Moreover, to distinguish between a thrombus and a hemorrhage, it is useful to evaluate the iron content of the plaque using both T1-weighted and T2*-weighted images. Therefore, a quantitative evaluation of carotid atherosclerotic plaques using T1 and T2* values may be necessary for the accurate evaluation of plaque components. The purpose of this study was to determine whether the multi-echo phase-sensitive inversion recovery (mPSIR) sequence can improve T1 contrast while simultaneously providing accurate T1 and T2* values of an IPH. T1 and T2* values measured using mPSIR were compared to values from conventional methods in phantom and in vivo studies. In the phantom study, the T1 and T2* values estimated using mPSIR were linearly correlated with those of conventional methods. In the in vivo study, mPSIR demonstrated higher T1 contrast between the IPH phantom and sternocleidomastoid muscle than the conventional method. Moreover, the T1 and T2* values of the blood vessel wall and sternocleidomastoid muscle estimated using mPSIR were correlated with values measured by conventional methods and with values reported previously. The mPSIR sequence improved T1 contrast while simultaneously providing accurate T1 and T2* values of the neck region. Although further study is required to evaluate the clinical utility, mPSIR may improve carotid atherosclerotic plaque detection and provide detailed information about plaque components.
Low rank magnetic resonance fingerprinting.
Mazor, Gal; Weizman, Lior; Tal, Assaf; Eldar, Yonina C
2016-08-01
Magnetic Resonance Fingerprinting (MRF) is a relatively new approach that provides quantitative MRI using randomized acquisition. Extraction of physical quantitative tissue values is performed off-line, based on acquisition with varying parameters and a dictionary generated according to the Bloch equations. MRF uses hundreds of radio frequency (RF) excitation pulses for acquisition, and therefore a high under-sampling ratio in the sampling domain (k-space) is required. This under-sampling causes spatial artifacts that hamper the ability to accurately estimate the quantitative tissue values. In this work, we introduce a new approach for quantitative MRI using MRF, called Low Rank MRF. We exploit the low rank property of the temporal domain, on top of the well-known sparsity of the MRF signal in the generated dictionary domain. We present an iterative scheme that consists of a gradient step followed by a low rank projection using the singular value decomposition. Experiments on real MRI data demonstrate superior results compared to a conventional implementation of compressed sensing for MRF at a 15% sampling ratio.
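A toy sketch of the iterative scheme just described, assuming a Cartesian Fourier sampling model so everything fits in a few lines; real MRF uses non-Cartesian trajectories and a Bloch-dictionary matching step that is omitted here.

import numpy as np

def low_rank_projection(X: np.ndarray, rank: int) -> np.ndarray:
    """Best rank-r approximation of the (space x time) series via SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def low_rank_mrf(y, mask, rank, n_iter=100):
    """Gradient step on k-space data consistency, then low-rank projection."""
    X = np.zeros_like(y)
    for _ in range(n_iter):
        resid = mask * np.fft.fft(X, axis=0) - y    # k-space residual
        X = X - np.fft.ifft(resid, axis=0)          # scaled adjoint step
        X = low_rank_projection(X, rank)            # temporal low rank
    return X

# Rank-3 space-time series, 15% random sampling, noiseless for simplicity.
rng = np.random.default_rng(2)
x_true = rng.standard_normal((128, 3)) @ rng.standard_normal((3, 64))
mask = rng.random((128, 64)) < 0.15
y = mask * np.fft.fft(x_true, axis=0)
x_hat = low_rank_mrf(y, mask, rank=3)
print(np.linalg.norm(x_hat.real - x_true) / np.linalg.norm(x_true))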
NASA Astrophysics Data System (ADS)
Zhong, L.; Ma, Y.; Ma, W.; Zou, M.; Hu, Y.
2016-12-01
Actual evapotranspiration (ETa) is an important component of the water cycle in the Tibetan Plateau. It is controlled by many hydrological and meteorological factors, so estimating ETa accurately and continuously is of great significance. Understanding land surface parameters and land-atmosphere water exchange processes in small watershed-scale areas is also drawing much attention from the scientific community. Based on in-situ meteorological data in the Nagqu river basin and surrounding regions, the main meteorological factors affecting the evaporation process were quantitatively analyzed and point-scale ETa estimation models for the study area were successfully built. In addition, multi-source satellite data (such as SPOT, MODIS, and FY-2C) were used to derive the surface characteristics of the river basin. A time-series processing technique was applied to remove cloud cover and reconstruct the data series. Improved land surface albedo, improved downward shortwave radiation flux, and reconstructed normalized difference vegetation index (NDVI) were then coupled into the topographically enhanced surface energy balance system to estimate ETa. The model-estimated results were compared with ETa values determined by the combinatory method. The results indicated that the model-estimated ETa agreed well with in-situ measurements, with a correlation coefficient, mean bias error, and root mean square error of 0.836, 0.087 mm/h, and 0.140 mm/h, respectively.
NASA Astrophysics Data System (ADS)
Kim, Beomgeun; Seo, Dong-Jun; Noh, Seong Jin; Prat, Olivier P.; Nelson, Brian R.
2018-01-01
A new technique for merging radar precipitation estimates and rain gauge data is developed and evaluated to improve multisensor quantitative precipitation estimation (QPE), in particular, of heavy-to-extreme precipitation. Unlike the conventional cokriging methods which are susceptible to conditional bias (CB), the proposed technique, referred to herein as conditional bias-penalized cokriging (CBPCK), explicitly minimizes Type-II CB for improved quantitative estimation of heavy-to-extreme precipitation. CBPCK is a bivariate version of extended conditional bias-penalized kriging (ECBPK) developed for gauge-only analysis. To evaluate CBPCK, cross validation and visual examination are carried out using multi-year hourly radar and gauge data in the North Central Texas region in which CBPCK is compared with the variant of the ordinary cokriging (OCK) algorithm used operationally in the National Weather Service Multisensor Precipitation Estimator. The results show that CBPCK significantly reduces Type-II CB for estimation of heavy-to-extreme precipitation, and that the margin of improvement over OCK is larger in areas of higher fractional coverage (FC) of precipitation. When FC > 0.9 and hourly gauge precipitation is > 60 mm, the reduction in root mean squared error (RMSE) by CBPCK over radar-only (RO) is about 12 mm while the reduction in RMSE by OCK over RO is about 7 mm. CBPCK may be used in real-time analysis or in reanalysis of multisensor precipitation for which accurate estimation of heavy-to-extreme precipitation is of particular importance.
NASA Astrophysics Data System (ADS)
Buss, S.; Wernli, H.; Peter, T.; Kivi, R.; Bui, T. P.; Kleinböhl, A.; Schiller, C.
Stratospheric winter temperatures play a key role in the chain of microphysical and chemical processes that lead to the formation of polar stratospheric clouds (PSCs), chlorine activation, and eventually stratospheric ozone depletion. Here the temperature conditions during the Arctic winters 1999/2000 and 2000/2001 are quantitatively investigated using observed profiles of water vapour and nitric acid, temperatures from high-resolution radiosondes and aircraft observations, global ECMWF and UKMO analyses, and mesoscale model simulations over Scandinavia and Greenland. The ECMWF model resolves part of the gravity wave activity and generally agrees well with the observations. However, for the very cold temperatures near the ice frost point the ECMWF analyses have a warm bias of 1-6 K compared to radiosondes. For the mesoscale model HRM, this bias is generally reduced owing to a more accurate representation of gravity waves. Quantitative estimates of the impact of the mesoscale temperature perturbations indicate that over Scandinavia and Greenland the wave-induced stratospheric cooling (as simulated by the HRM) affects the estimated chlorine activation and homogeneous NAT particle formation only moderately, but strongly enhances the potential for ice formation.
Quantitative analysis of benzodiazepines in vitreous humor by high-performance liquid chromatography
Bazmi, Elham; Behnoush, Behnam; Akhgari, Maryam; Bahmanabadi, Leila
2016-01-01
Objective: Benzodiazepines are frequently screened drugs in emergency toxicology, drugs-of-abuse testing, and forensic cases. Because benzodiazepine concentrations in biological samples can be biased by bleeding, postmortem changes, and redistribution, selecting a suitable sample and a validated, accurate method is essential for the quantitative analysis of these major drug categories. The aim of this study was to develop a valid method for the determination of four benzodiazepines (flurazepam, lorazepam, alprazolam, and diazepam) in vitreous humor using liquid–liquid extraction and high-performance liquid chromatography. Methods: Sample preparation was carried out using liquid–liquid extraction with n-hexane:ethyl acetate, with subsequent detection by high-performance liquid chromatography coupled to a diode array detector. This method was applied to quantify benzodiazepines in 21 authentic vitreous humor samples. A linear curve for each drug was obtained within the range of 30–3000 ng/mL with a coefficient of correlation higher than 0.99. Results: The limits of detection and quantitation were 30 and 100 ng/mL, respectively, for all four drugs. The method showed appropriate intra- and inter-day precision (coefficient of variation < 10%). Benzodiazepine recoveries were estimated to be over 80%. The method showed high selectivity; no additional peaks due to interfering substances were observed in the samples. Conclusion: The present method is selective, sensitive, accurate, and precise for the quantitative analysis of benzodiazepines in vitreous humor samples in the forensic toxicology laboratory. PMID:27635251
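The back-calculation behind the reported linearity is a straight-line calibration; a sketch with hypothetical peak areas for one of the analytes:

import numpy as np

# Hypothetical peak areas for a diazepam calibration series (ng/mL),
# illustrating the linear calibration behind the reported 30-3000 ng/mL
# range and r > 0.99. Values are made up, not the study's data.
conc = np.array([30, 100, 300, 1000, 3000], dtype=float)
area = np.array([1.1, 3.4, 10.2, 33.9, 101.5])

slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]

def back_calculate(peak_area: float) -> float:
    """Concentration (ng/mL) of an unknown from its peak area."""
    return (peak_area - intercept) / slope

print(f"r = {r:.4f}; unknown with area 7.5 -> {back_calculate(7.5):.0f} ng/mL")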
NASA Astrophysics Data System (ADS)
Niu, Xiaofeng; Ye, Hongwei; Xia, Ting; Asma, Evren; Winkler, Mark; Gagnon, Daniel; Wang, Wenli
2015-07-01
Quantitative PET imaging is widely used in clinical diagnosis in oncology and neuroimaging. Accurate normalization correction for the efficiency of each line-of-response is essential for accurate quantitative PET image reconstruction. In this paper, we propose a normalization calibration method that uses the delayed-window coincidence events from the scanned phantom or patient. The proposed method can dramatically reduce the 'ring' artifacts caused by mismatched system count-rates between the calibration and phantom/patient datasets. Moreover, a modified algorithm for mean detector efficiency estimation is proposed, which can generate crystal efficiency maps with more uniform variance. Both phantom and real patient datasets are used for evaluation. The results show that the proposed method leads to better uniformity in reconstructed images by removing ring artifacts, and to more uniform axial variance profiles, especially around the axial edge slices of the scanner. The proposed method also has the potential to simplify the normalization calibration procedure, since the calibration can be performed using the on-the-fly acquired delayed-window dataset.
Ulmer, Candice Z; Ragland, Jared M; Koelmel, Jeremy P; Heckert, Alan; Jones, Christina M; Garrett, Timothy J; Yost, Richard A; Bowden, John A
2017-12-19
As advances in analytical separation techniques, mass spectrometry instrumentation, and data processing platforms continue to spur growth in the lipidomics field, more structurally unique lipid species are detected and annotated. The lipidomics community is in need of benchmark reference values to assess the validity of various lipidomics workflows in providing accurate quantitative measurements across the diverse lipidome. LipidQC addresses the harmonization challenge in lipid quantitation by providing a semiautomated process, independent of analytical platform, for visual comparison of experimental results of National Institute of Standards and Technology Standard Reference Material (SRM) 1950, "Metabolites in Frozen Human Plasma", against benchmark consensus mean concentrations derived from the NIST Lipidomics Interlaboratory Comparison Exercise.
4D cone-beam CT reconstruction using multi-organ meshes for sliding motion modeling
NASA Astrophysics Data System (ADS)
Zhong, Zichun; Gu, Xuejun; Mao, Weihua; Wang, Jing
2016-02-01
A simultaneous motion estimation and image reconstruction (SMEIR) strategy was proposed for 4D cone-beam CT (4D-CBCT) reconstruction and showed excellent results in both phantom and lung cancer patient studies. In the original SMEIR algorithm, the deformation vector field (DVF) was defined on voxel grid and estimated by enforcing a global smoothness regularization term on the motion fields. The objective of this work is to improve the computation efficiency and motion estimation accuracy of SMEIR for 4D-CBCT through developing a multi-organ meshing model. Feature-based adaptive meshes were generated to reduce the number of unknowns in the DVF estimation and accurately capture the organ shapes and motion. Additionally, the discontinuity in the motion fields between different organs during respiration was explicitly considered in the multi-organ mesh model. This will help with the accurate visualization and motion estimation of the tumor on the organ boundaries in 4D-CBCT. To further improve the computational efficiency, a GPU-based parallel implementation was designed. The performance of the proposed algorithm was evaluated on a synthetic sliding motion phantom, a 4D NCAT phantom, and four lung cancer patients. The proposed multi-organ mesh based strategy outperformed the conventional Feldkamp-Davis-Kress, iterative total variation minimization, original SMEIR and single meshing method based on both qualitative and quantitative evaluations.
NASA Technical Reports Server (NTRS)
1979-01-01
Satellites provide an excellent platform from which to observe crops on the scale and frequency required to provide accurate crop production estimates on a worldwide basis. Multispectral imaging sensors aboard these platforms are capable of providing data from which to derive acreage and production estimates. The issue of sensor swath width was examined. The quantitative trade study necessary to resolve the combined issues of sensor swath width, number of platforms, and their orbits was generated and is included. Problems with different swath width sensors were analyzed, and an assessment of system trade-offs of swath width versus number of satellites was made for achieving Global Crop Production Forecasting.
Oligomeric cationic polymethacrylates: a comparison of methods for determining molecular weight.
Locock, Katherine E S; Meagher, Laurence; Haeussler, Matthias
2014-02-18
This study compares three common laboratory methods, size-exclusion chromatography (SEC), ¹H nuclear magnetic resonance (NMR), and matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF), to determine the molecular weight of oligomeric cationic copolymers. The potential bias of each method was examined across a series of polymers that varied in molecular weight and cationic character (both choice of cation (amine versus guanidine) and relative proportion present). SEC was found to be the least accurate, overestimating Mn by an average of 140%, owing to the lack of appropriate cationic standards and the complexity involved in estimating the hydrodynamic volume of copolymers. MALDI-TOF approximated Mn well for the highly monodisperse (Đ < 1.1), low molecular weight (degree of polymerization (DP) < 50) species but appeared unsuitable for the largest polymers in the series due to the mass bias associated with the technique. ¹H NMR was found to estimate Mn most accurately in this study, differing from theoretical values by only 5.2%. ¹H NMR end-group analysis is therefore an inexpensive and facile primary quantitative method to estimate the molecular weight of oligomeric cationic polymethacrylates, provided suitably distinct end-group signals are present in the spectrum.
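End-group analysis reduces to a ratio of per-proton integrals; a sketch with illustrative integrals and masses (not the paper's polymers):

def degree_of_polymerization(backbone_integral: float, backbone_protons: int,
                             endgroup_integral: float,
                             endgroup_protons: int) -> float:
    """DP = per-proton integral of a repeat-unit signal divided by the
    per-proton integral of a distinct end-group signal."""
    return ((backbone_integral / backbone_protons)
            / (endgroup_integral / endgroup_protons))

def number_average_mw(dp: float, monomer_mw: float,
                      endgroup_mw: float) -> float:
    """Mn from DP, repeat-unit mass, and combined end-group mass."""
    return dp * monomer_mw + endgroup_mw

# e.g. a methacrylate backbone CH2 (2H) against a 3-proton end group;
# all numbers below are hypothetical.
dp = degree_of_polymerization(42.0, 2, 3.0, 3)
print(f"DP = {dp:.0f}, Mn = {number_average_mw(dp, 170.0, 250.0):.0f} g/mol")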
Rapid and accurate estimation of release conditions in the javelin throw.
Hubbard, M; Alaways, L W
1989-01-01
We have developed a system to measure initial conditions in the javelin throw rapidly enough to be used by the thrower for feedback in performance improvement. The system consists of three subsystems whose main tasks are: (A) acquisition of automatically digitized high-speed (200 Hz) video x, y position data for the first 0.1-0.2 s of javelin flight after release; (B) estimation of five javelin release conditions from the x, y position data; and (C) graphical presentation to the thrower of these release conditions and a simulation of the subsequent flight, together with optimal conditions and flight for the same release velocity. The estimation scheme relies on a simulation model and is at least an order of magnitude more accurate than previously reported measurements of javelin release conditions. The system provides, for the first time in any throwing event, the ability to critique nearly instantly, in a precise and quantitative manner, the crucial factors in the throw that determine the range. This should be expected to lead to much greater control and consistency of throwing variables by athletes who use the system and could even lead to an evolution of new throwing techniques.
Real-Time PCR Quantification Using A Variable Reaction Efficiency Model
Platts, Adrian E.; Johnson, Graham D.; Linnemann, Amelia K.; Krawetz, Stephen A.
2008-01-01
Quantitative real-time PCR remains a cornerstone technique in gene expression analysis and sequence characterization. Despite the importance of the approach to experimental biology the confident assignment of reaction efficiency to the early cycles of real-time PCR reactions remains problematic. Considerable noise may be generated where few cycles in the amplification are available to estimate peak efficiency. An alternate approach that uses data from beyond the log-linear amplification phase is explored with the aim of reducing noise and adding confidence to efficiency estimates. PCR reaction efficiency is regressed to estimate the per-cycle profile of an asymptotically departed peak efficiency, even when this is not closely approximated in the measurable cycles. The process can be repeated over replicates to develop a robust estimate of peak reaction efficiency. This leads to an estimate of the maximum reaction efficiency that may be considered primer-design specific. Using a series of biological scenarios we demonstrate that this approach can provide an accurate estimate of initial template concentration. PMID:18570886
Evaluation of spatial filtering on the accuracy of wheat area estimate
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Moreira, M. A.; Chen, S. C.; Delima, A. M.
1982-01-01
A 3 x 3 pixel spatial filter for postclassification was used in wheat classification to evaluate the effects of this procedure on the accuracy of area estimation using LANDSAT digital data obtained from a single pass. Quantitative analyses were carried out in five test sites (approx 40 sq km each), and t tests showed that filtering with threshold values significantly decreased errors of commission and omission. In area estimation, filtering reduced the overestimate from 4.5% to 2.7%, and the root-mean-square error decreased from 126.18 ha to 107.02 ha. Extrapolating the same automatic classification procedure with spatial filtering for postclassification to the whole study area reduced the overestimate in the area estimate from 10.9% to 9.7%. It is concluded that when single-pass LANDSAT data are used for crop identification and area estimation, postclassification with a spatial filter provides a more accurate area estimate by reducing classification errors.
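A 3 x 3 postclassification filter of this kind is commonly implemented as a thresholded majority vote; a sketch in which the exact threshold rule is an assumption, since the study does not specify it:

import numpy as np
from scipy.ndimage import generic_filter

def majority_filter(class_map: np.ndarray, min_votes: int = 5) -> np.ndarray:
    """Relabel the centre pixel to the modal class of its 3x3 neighbourhood
    when that class reaches a vote threshold; otherwise keep the original."""
    def vote(window):
        values, counts = np.unique(window.astype(int), return_counts=True)
        k = counts.argmax()
        return values[k] if counts[k] >= min_votes else window[4]  # centre
    return generic_filter(class_map, vote, size=3, mode="nearest")

# 0 = other, 1 = wheat; an isolated misclassified pixel is cleaned up.
m = np.zeros((5, 5), dtype=float)
m[2, 2] = 1.0
print(majority_filter(m).astype(int))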
Validation of a quantitative magnetic resonance method for measuring human body composition.
Napolitano, Antonella; Miller, Sam R; Murgatroyd, Peter R; Coward, W Andrew; Wright, Antony; Finer, Nick; De Bruin, Tjerk W; Bullmore, Edward T; Nunez, Derek J
2008-01-01
To evaluate a novel quantitative magnetic resonance (QMR) methodology (EchoMRI-AH, Echo Medical Systems) for measurement of whole-body fat and lean mass in humans. We have studied (i) the in vitro accuracy and precision by measuring 18 kg Canola oil with and without 9 kg water (ii) the accuracy and precision of measures of simulated fat mass changes in human subjects (n = 10) and (iii) QMR fat and lean mass measurements compared to those obtained using the established 4-compartment (4-C) model method (n = 30). (i) QMR represented 18 kg of oil at 40 degrees C as 17.1 kg fat and 1 kg lean while at 30 degrees C 15.8 kg fat and 4.7 kg lean were reported. The s.d. of repeated estimates was 0.13 kg for fat and 0.23 kg for lean mass. Adding 9 kg of water reduced the fat estimates, increased misrepresentation of fat as lean, and degraded the precision. (ii) the simulated change in the fat mass of human volunteers was accurately represented, independently of added water. (iii) compared to the 4-C model, QMR underestimated fat and over-estimated lean mass. The extent of difference increased with body mass. The s.d. of repeated measurements increased with adiposity, from 0.25 kg (fat) and 0.51 kg (lean) with BMI <25 kg/m(2) to 0.43 kg and 0.81 kg respectively with BMI >30 kg/m(2). EchoMRI-AH prototype showed shortcomings in absolute accuracy and specificity of fat mass measures, but detected simulated body composition change accurately and with precision roughly three times better than current best measures. This methodology should reduce the study duration and cohort number needed to evaluate anti-obesity interventions.
Hunter, Margaret; Dorazio, Robert M.; Butterfield, John S.; Meigs-Friend, Gaia; Nico, Leo; Ferrante, Jason A.
2017-01-01
A set of universal guidelines is needed to determine the limit of detection (LOD) in PCR-based analyses of low concentration DNA. In particular, environmental DNA (eDNA) studies require sensitive and reliable methods to detect rare and cryptic species through shed genetic material in environmental samples. Current strategies for assessing detection limits of eDNA are either too stringent or subjective, possibly resulting in biased estimates of species’ presence. Here, a conservative LOD analysis grounded in analytical chemistry is proposed to correct for overestimated DNA concentrations predominantly caused by the concentration plateau, a nonlinear relationship between expected and measured DNA concentrations. We have used statistical criteria to establish formal mathematical models for both quantitative and droplet digital PCR. To assess the method, a new Grass Carp (Ctenopharyngodon idella) TaqMan assay was developed and tested on both PCR platforms using eDNA in water samples. The LOD adjustment reduced Grass Carp occupancy and detection estimates while increasing uncertainty – indicating that caution needs to be applied to eDNA data without LOD correction. Compared to quantitative PCR, digital PCR had higher occurrence estimates due to increased sensitivity and dilution of inhibitors at low concentrations. Without accurate LOD correction, species occurrence and detection probabilities based on eDNA estimates are prone to a source of bias that cannot be reduced by an increase in sample size or PCR replicates. Other applications also could benefit from a standardized LOD such as GMO food analysis, and forensic and clinical diagnostics.
Global estimates of shark catches using trade records from commercial markets.
Clarke, Shelley C; McAllister, Murdoch K; Milner-Gulland, E J; Kirkwood, G P; Michielsens, Catherine G J; Agnew, David J; Pikitch, Ellen K; Nakano, Hideki; Shivji, Mahmood S
2006-10-01
Despite growing concerns about overexploitation of sharks, lack of accurate, species-specific harvest data often hampers quantitative stock assessment. In such cases, trade studies can provide insights into exploitation unavailable from traditional monitoring. We applied Bayesian statistical methods to trade data, in combination with genetic identification, to estimate, by species, the annual number of globally traded shark fins, the most commercially valuable product from a group of species often unrecorded in harvest statistics. Our results provide the first fishery-independent estimate of the scale of shark catches worldwide and indicate that shark biomass in the fin trade is three to four times higher than shark catch figures reported in the only global database. Comparison of our estimates to approximated stock assessment reference points for one of the most commonly traded species, blue shark, suggests that current trade volumes in numbers of sharks are close to or possibly exceeding maximum sustainable yield levels.
NASA Technical Reports Server (NTRS)
Frouin, Robert
1993-01-01
Current satellite algorithms to estimate photosynthetically available radiation (PAR) at the earth's surface are reviewed. PAR is deduced either from an insolation estimate or obtained directly from top-of-atmosphere solar radiances. The characteristics of both approaches are contrasted and typical results are presented. The reported uncertainties, about 10 percent and 6 percent on daily and monthly time scales, respectively, are adequate for modeling oceanic and terrestrial primary productivity. At those time scales, cloud-driven variability in the ratio of PAR to insolation is reduced, making it possible to deduce PAR directly from insolation climatologies (satellite or other) that are currently available or being produced. Improvements, however, are needed in conditions of broken cloudiness and over ice/snow. If not addressed properly, calibration/validation issues may prevent quantitative use of the PAR estimates in studies of climatic change. The prospects are good for an accurate, long-term climatology of PAR over the globe.
Skeletal Correlates for Body Mass Estimation in Modern and Fossil Flying Birds
Field, Daniel J.; Lynner, Colton; Brown, Christian; Darroch, Simon A. F.
2013-01-01
Scaling relationships between skeletal dimensions and body mass in extant birds are often used to estimate body mass in fossil crown-group birds, as well as in stem-group avialans. However, useful statistical measurements for constraining the precision and accuracy of fossil mass estimates are rarely provided, which prevents the quantification of robust upper- and lower-bound body mass estimates for fossils. Here, we generate thirteen body mass correlations and associated measures of statistical robustness using a sample of 863 extant flying birds. By providing robust body mass regressions with upper- and lower-bound prediction intervals for individual skeletal elements, we address the longstanding problem of body mass estimation for highly fragmentary fossil birds. We demonstrate that the most precise proxy for estimating body mass in the overall dataset, measured both as the coefficient of determination of an ordinary least squares regression and as percent prediction error, is the maximum diameter of the coracoid's humeral articulation facet (the glenoid). We further demonstrate that this result is consistent among the majority of investigated avian orders (10 out of 18). As a result, we suggest that, in the majority of cases, this proxy may provide the most accurate estimates of body mass for volant fossil birds. Additionally, by presenting statistical measurements of body mass prediction error for thirteen different body mass regressions, this study provides a much-needed quantitative framework for the accurate estimation of body mass and associated ecological correlates in fossil birds. The application of these regressions will enhance the precision and robustness of many mass-based inferences in future paleornithological studies. PMID:24312392
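The regression-with-prediction-interval framework can be reproduced in a few lines; the data below are synthetic stand-ins for the 863-bird sample, and the coefficients are not the paper's:

import numpy as np
from scipy import stats

# Synthetic log-log data: body mass regressed on glenoid diameter.
rng = np.random.default_rng(3)
log_glenoid = rng.uniform(0.5, 1.5, size=200)                   # log10 mm
log_mass = 0.4 + 2.3 * log_glenoid + rng.normal(0, 0.08, 200)   # log10 g

n = len(log_glenoid)
slope, intercept, r, _, _ = stats.linregress(log_glenoid, log_mass)
resid = log_mass - (intercept + slope * log_glenoid)
s = np.sqrt((resid ** 2).sum() / (n - 2))
xbar = log_glenoid.mean()
sxx = ((log_glenoid - xbar) ** 2).sum()

def predict_with_interval(x0: float, alpha: float = 0.05):
    """Point estimate and 95% prediction interval for body mass (g)."""
    y0 = intercept + slope * x0
    se = s * np.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)
    t = stats.t.ppf(1 - alpha / 2, n - 2)
    return 10 ** y0, 10 ** (y0 - t * se), 10 ** (y0 + t * se)

print([f"{v:.0f} g" for v in predict_with_interval(1.2)])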
Krummen, David E; Patel, Mitul; Nguyen, Hong; Ho, Gordon; Kazi, Dhruv S; Clopton, Paul; Holland, Marian C; Greenberg, Scott L; Feld, Gregory K; Faddis, Mitchell N; Narayan, Sanjiv M
2010-11-01
Optimal atrial tachyarrhythmia management is facilitated by accurate electrocardiogram interpretation, yet typical atrial flutter (AFl) may present without sawtooth F-waves or RR regularity, and atrial fibrillation (AF) may be difficult to separate from atypical AFl or rapid focal atrial tachycardia (AT). We analyzed whether improved diagnostic accuracy using a validated analysis tool significantly impacts costs and patient care. We performed a prospective, blinded, multicenter study using a novel quantitative computerized algorithm to identify atrial tachyarrhythmia mechanism from the surface ECG in patients referred for electrophysiology study (EPS). In 122 consecutive patients (age 60 ± 12 years) referred for EPS, 91 sustained atrial tachyarrhythmias were studied. ECGs were also interpreted by 9 physicians from 3 specialties for comparison and to allow healthcare system modeling. Diagnostic accuracy was compared to the diagnosis at EPS. A Markov model was used to estimate the impact of improved arrhythmia diagnosis. We found that 13% of typical AFl ECGs had neither sawtooth flutter waves nor RR regularity; these were misdiagnosed by the majority of clinicians (0/6 correctly diagnosed by consensus visual interpretation) but diagnosed correctly by quantitative analysis in 83% of cases (5/6, P = 0.03). AF diagnosis was also improved with the algorithm (92%) versus visual interpretation (primary care: 76%, P < 0.01). Economically, we found that these improvements in diagnostic accuracy resulted in an average cost savings of $1,303 and 0.007 quality-adjusted life-years per patient. Typical AFl and AF are frequently misdiagnosed using visual criteria. Quantitative analysis improves diagnostic accuracy and results in improved healthcare costs and patient outcomes. © 2010 Wiley Periodicals, Inc.
Generalized PSF modeling for optimized quantitation in PET imaging.
Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman
2017-06-21
Point-spread function (PSF) modeling offers the ability to account for resolution-degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF-modeled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an overestimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF modeling does not offer optimized PET quantitation, and that PSF overestimation may provide enhanced SUV quantitation. Furthermore, generalized PSF modeling may provide a valuable approach for quantitative tasks such as treatment-response assessment and prognostication.
Direct and simultaneous estimation of cardiac four chamber volumes by multioutput sparse regression.
Zhen, Xiantong; Zhang, Heye; Islam, Ali; Bhaduri, Mousumi; Chan, Ian; Li, Shuo
2017-02-01
Cardiac four-chamber volume estimation plays a fundamental and crucial role in clinical quantitative analysis of whole heart function. It is a challenging task due to the complexity of the four chambers, including great appearance variations, large shape deformation, and interference between chambers. Direct estimation has recently emerged as an effective and convenient tool for cardiac ventricular volume estimation. However, existing direct estimation methods were specifically developed for a single ventricle, i.e., the left ventricle (LV), or for bi-ventricles; they cannot be directly used for four-chamber volume estimation due to the great combinatorial variability and highly complex anatomical interdependency of the four chambers. In this paper, we propose a new, general framework for direct and simultaneous four-chamber volume estimation. We address two key issues, i.e., cardiac image representation and simultaneous four-chamber volume estimation, which enables accurate and efficient four-chamber volume estimation. We generate compact and discriminative image representations by supervised descriptor learning (SDL), which can remove irrelevant information and extract discriminative features. We propose direct and simultaneous four-chamber volume estimation by multioutput sparse latent regression (MSLR), which enables jointly modeling nonlinear input-output relationships and capturing four-chamber interdependence. The proposed method is highly generalized and independent of imaging modalities, providing a general regression framework that can be extensively used for clinical data prediction to achieve automated diagnosis. Experiments on both MR and CT images show that our method achieves high performance, with a correlation coefficient of up to 0.921 with ground truth obtained manually by human experts, which is clinically significant and enables more accurate, convenient and comprehensive assessment of cardiac functions. Copyright © 2016 Elsevier B.V. All rights reserved.
Willenburg, Elize; Divol, Benoit
2012-11-15
Quantitative PCR as a tool has been used to detect Brettanomyces bruxellensis directly from wine samples. Accurate and timely detection of this yeast is important to prevent unwanted spoilage of wines and beverages. The aim of this study was to distinguish differences between DNA and mRNA as template for the detection of this yeast. The study was also used to determine if it is possible to accurately detect cells in the viable but not culturable (VBNC) state of B. bruxellensis by qPCR. Several methods including traditional plating, epifluorescence counts and qPCR were used to amplify DNA and mRNA. It was observed that mRNA was a better template for the detection in terms of standard curve analysis and qPCR efficiencies. Various primers previously published were tested for their specificity, qPCR efficiency and accuracy of enumeration. A single primer set was selected which amplified a region of the actin-encoding gene. The detection limit for this assay was 10 cells/mL. B. bruxellensis could also be quantified in naturally contaminated wines with this assay. The mRNA gave a better indication of the viability of the cells which compared favourably to fluorescent microscopy and traditional cell counts. The ability of the assay to accurately estimate the number of cells in the VBNC state was also demonstrated. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Leser, William P.; Yuan, Fuh-Gwo; Leser, William P.
2013-01-01
A method of numerically estimating dynamic Green's functions using the finite element method is proposed. These Green's functions are accurate in a limited frequency range dependent on the mesh size used to generate them. This range can often match or exceed the frequency sensitivity of the traditional acoustic emission sensors. An algorithm is also developed to characterize an acoustic emission source by obtaining information about its strength and temporal dependence. This information can then be used to reproduce the source in a finite element model for further analysis. Numerical examples are presented that demonstrate the ability of the band-limited Green's functions approach to determine the moment tensor coefficients of several reference signals to within seven percent, as well as accurately reproduce the source-time function.
A general method for bead-enhanced quantitation by flow cytometry
Montes, Martin; Jaensson, Elin A.; Orozco, Aaron F.; Lewis, Dorothy E.; Corry, David B.
2009-01-01
Flow cytometry provides accurate relative cellular quantitation (percent abundance) of cells from diverse samples, but technical limitations of most flow cytometers preclude accurate absolute quantitation. Several quantitation standards are now commercially available which, when added to samples, permit absolute quantitation of CD4+ T cells. However, these reagents are limited by their cost, technical complexity, requirement for additional software and/or limited applicability. Moreover, few studies have validated the use of such reagents in complex biological samples, especially for quantitation of non-T cells. Here we show that addition to samples of known quantities of polystyrene fluorescence standardization beads permits accurate quantitation of CD4+ T cells from complex cell samples. This procedure, here termed single bead-enhanced cytofluorimetry (SBEC), was equally capable of enumerating eosinophils as well as subcellular fragments of apoptotic cells, moieties with very different optical and fluorescent characteristics. Relative to other proprietary products, SBEC is simple, inexpensive and requires no special software, suggesting that the method is suitable for the routine quantitation of most cells and other particles by flow cytometry. PMID:17067632
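The bead-based scaling at the heart of SBEC reduces to a single ratio. A minimal sketch with hypothetical event counts:

```python
# Single bead-enhanced cytofluorimetry (SBEC) arithmetic: spiking a
# known number of beads into the sample lets the ratio of gated cell
# events to gated bead events give absolute counts. Numbers are invented.
beads_added      = 50_000    # beads spiked into the tube
sample_volume_ul = 100.0     # sample volume stained
bead_events      = 9_800     # beads actually recorded by the cytometer
cd4_events       = 21_300    # gated CD4+ T-cell events in the same file

# Each recorded bead "stands for" beads_added / bead_events of the tube,
# so the same scaling applies to every other gated population.
cd4_absolute = cd4_events * beads_added / bead_events
cd4_per_ul   = cd4_absolute / sample_volume_ul
print(f"CD4+ cells in tube: {cd4_absolute:,.0f} (~{cd4_per_ul:,.0f} cells/µL)")
```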
Liao, Yalin; Weber, Darren; Xu, Wei; Durbin-Johnson, Blythe P; Phinney, Brett S; Lönnerdal, Bo
2017-11-03
Whey proteins and caseins in breast milk provide bioactivities and also differ in amino acid composition. Accurate determination of these two major protein classes provides a better understanding of human milk composition and function, and further aids in developing improved infant formulas based on bovine whey proteins and caseins. In this study, we implemented an LC-MS/MS quantitative analysis based on iBAQ label-free quantitation to estimate absolute concentrations of α-casein, β-casein, and κ-casein in human milk samples (n = 88) collected between day 1 and day 360 postpartum. Total protein concentration ranged from 2.03 to 17.52 g/L, with a mean of 9.37 ± 3.65 g/L. Casein subunits ranged from 0.04 to 1.68 g/L (α-), 0.04 to 4.42 g/L (β-), and 0.10 to 1.72 g/L (κ-), with β-casein having the highest average concentration among the three subunits. The calculated whey/casein ratio ranged from 45:55 to 97:3. Linear regression analyses show significant decreases in total protein, β-casein, κ-casein, and total casein, and a significant increase in the whey/casein ratio, over the course of lactation. Our study presents a novel and accurate quantitative analysis of human milk casein content, demonstrating a lower casein content than earlier believed, which has implications for improved infant formulas.
Darrington, Richard T; Jiao, Jim
2004-04-01
Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at the intended storage conditions or determining initial rates of degradant formation under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial-rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors into shelf-life prediction. In this study, shelf-life prediction for a model pharmaceutical preparation is proposed that uses sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition. This method was compared to traditional shelf-life prediction approaches in terms of the time required to predict shelf life and the associated error in shelf-life estimation. Results demonstrated that the proposed LC/MS method using initial-rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimate than the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
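A minimal sketch of the initial-rates calculation described above: fit a degradant-formation rate from early time points at the storage condition and extrapolate to a specification limit. The data, the 0.5% limit, and the zero-order assumption are all illustrative, not the study's values.

```python
# Fit a zero-order degradant-formation rate from early LC/MS time points
# and extrapolate to the specification limit, with a crude 95% CI from
# the slope's standard error.
import numpy as np
from scipy import stats

t_days   = np.array([0, 7, 14, 28, 56, 84], dtype=float)
degr_pct = np.array([0.002, 0.010, 0.021, 0.044, 0.086, 0.131])  # hypothetical

res = stats.linregress(t_days, degr_pct)    # slope in %/day
spec_limit = 0.5                             # % degradant allowed (assumed)
shelf_life = spec_limit / res.slope

slope_hi = res.slope + 1.96 * res.stderr     # faster degradation -> shorter life
slope_lo = res.slope - 1.96 * res.stderr     # slower degradation -> longer life
print(f"rate = {res.slope:.5f} %/day, shelf life ~ {shelf_life:.0f} d "
      f"(95% CI {spec_limit/slope_hi:.0f}-{spec_limit/slope_lo:.0f} d)")
```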
Crotta, M; Limon, G; Blake, D P; Guitian, J
2017-11-16
Toxoplasma gondii is recognized as a widely prevalent zoonotic parasite worldwide. Although several studies have clearly identified meat products as an important source of T. gondii infections in humans, quantitative understanding of the risk posed to humans through the food chain is surprisingly scant. While probabilistic risk assessments for pathogens such as Campylobacter jejuni, Listeria monocytogenes or Escherichia coli are well established, attempts to quantify the probability of human exposure to T. gondii through consumption of food products of animal origin are at an early stage. The biological complexity of the life cycle of T. gondii and limited understanding of several fundamental aspects of the host/parasite interaction require the adoption of numerous critical assumptions and significant simplifications. In this study, we present a hypothetical quantitative model for the assessment of human exposure to T. gondii through meat products. The model has been conceptualized to capture the dynamics leading to the presence of the parasite in meat and, for illustrative purposes, used to estimate the probability of at least one viable cyst occurring in 100 g of fresh pork meat in England. Available data, including the results of a serological survey of pigs raised in England, were used as a starting point to implement a probabilistic model and assess the fate of the parasite along the food chain. Uncertainty distributions were included to describe and account for the lack of knowledge where necessary. To quantify the impact of the key model inputs, sensitivity and scenario analyses were performed. The overall probability of 100 g of a hypothetical edible tissue containing at least 1 cyst was 5.54%. Sensitivity analysis indicated that the variables exerting the greatest effect on the output mean were the number of cysts and the number of bradyzoites per cyst. Under the best and worst scenarios, the probability of a single portion of fresh pork meat containing at least 1 viable cyst was 1.14% and 9.97%, respectively, indicating that the uncertainty and lack of data surrounding key input parameters of the model preclude accurate estimation of T. gondii exposure through consumption of meat products. The hypothetical model conceptualized here is coherent with current knowledge of the biology of the parasite. Simulation outputs clearly identify the key gaps in our knowledge of the host-parasite interaction that, when filled, will support quantitative assessments and much-needed accurate estimates of the risk of human exposure. Copyright © 2017 Elsevier B.V. All rights reserved.
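The following is a rough Monte Carlo sketch of this style of farm-to-fork exposure model; every distribution, constant, and the viability step is a hypothetical placeholder rather than the authors' parameterization.

```python
# Monte Carlo sketch: probability that a 100 g fresh pork portion
# contains at least one viable cyst, under invented input distributions.
import numpy as np

rng = np.random.default_rng(1)
n_iter    = 100_000
carcass_g = 55_000.0                    # edible tissue per pig (assumed)
portion_g = 100.0

infected = rng.random(n_iter) < rng.beta(20, 80, n_iter)   # uncertain seroprevalence
n_cysts  = rng.poisson(lam=rng.gamma(2.0, 15.0, n_iter))   # cysts per infected carcass
viable   = rng.binomial(n_cysts, 0.5)                      # fraction surviving processing

# With cysts scattered at random, P(portion holds >=1) = 1-(1-w)^k,
# where w is the portion's weight share of the carcass.
p_portion  = 1.0 - (1.0 - portion_g / carcass_g) ** viable
p_exposure = np.mean(np.where(infected, p_portion, 0.0))
print(f"P(>=1 viable cyst in a 100 g portion) ~ {p_exposure:.2%}")
```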
Clark, Samuel A; Hickey, John M; Daetwyler, Hans D; van der Werf, Julius H J
2012-02-09
The theory of genomic selection is based on the prediction of the effects of genetic markers in linkage disequilibrium with quantitative trait loci. However, genomic selection also relies on relationships between individuals to accurately predict genetic value. This study aimed to examine the importance of information on relatives versus that of unrelated or more distantly related individuals for the estimation of genomic breeding values. Simulated and real data were used to examine the effects of various degrees of relationship on the accuracy of genomic selection. Genomic Best Linear Unbiased Prediction (gBLUP) was compared to two pedigree-based BLUP methods, one with a shallow one-generation pedigree and the other with a deep ten-generation pedigree. The accuracy of estimated breeding values for different groups of selection candidates that had varying degrees of relationship to a reference data set of 1750 animals was investigated. The gBLUP method predicted breeding values more accurately than BLUP. The most accurate breeding values were estimated using gBLUP for closely related animals. Similarly, the pedigree-based BLUP methods were also accurate for closely related animals; however, when they were used to predict unrelated animals, the accuracy was close to zero. In contrast, gBLUP breeding values for animals that had no pedigree relationship with animals in the reference data set retained substantial accuracy. An animal's relationship to the reference data set is an important factor for the accuracy of genomic predictions. Animals that share a close relationship to the reference data set had the highest accuracy from genomic predictions. However, a baseline accuracy, driven by the size of the reference data set and the effective population size, enables gBLUP to estimate breeding values for unrelated animals within a population (breed), using information ignored by pedigree-based BLUP methods.
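A minimal sketch of the genomic relationship matrix underlying gBLUP (VanRaden's first method), built from a toy genotype matrix, illustrates how realized relationships are obtained even for animals with no recorded pedigree link:

```python
# Build VanRaden's (2008) G matrix from 0/1/2 genotype codes; the
# off-diagonals are the realized genomic relationships gBLUP uses.
import numpy as np

rng = np.random.default_rng(2)
n_animals, n_snps = 20, 500
p = rng.uniform(0.1, 0.9, n_snps)                  # allele frequencies
M = rng.binomial(2, p, size=(n_animals, n_snps))   # genotypes coded 0/1/2

Z = M - 2.0 * p                                    # centre each SNP by 2p
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))        # genomic relationship matrix

print("diagonal (self-relationship) mean:", G.diagonal().mean().round(3))
print("example off-diagonal relationship:", G[0, 1].round(3))
```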
A novel mesh processing based technique for 3D plant analysis
2012-01-01
Background In recent years, imaging-based, automated, non-invasive, and non-destructive high-throughput plant phenotyping platforms have become popular tools for plant biology, underpinning the field of plant phenomics. Such platforms acquire and record large amounts of raw data that must be accurately and robustly calibrated, reconstructed, and analysed, requiring the development of sophisticated image understanding and quantification algorithms. The raw data can be processed in different ways, and the past few years have seen the emergence of two main approaches: 2D image processing and 3D mesh processing algorithms. Direct image quantification methods (usually 2D) dominate the current literature due to their comparative simplicity. However, 3D mesh analysis offers tremendous potential to accurately estimate specific morphological features and monitor them over time. Results In this paper, we present a novel 3D mesh based technique developed for temporal high-throughput plant phenomics and perform initial tests on the analysis of Gossypium hirsutum vegetative growth. Based on plant meshes previously reconstructed from multi-view images, the methodology involves several stages, including morphological mesh segmentation, phenotypic parameter estimation, and tracking of plant organs over time. The initial study focuses on presenting and validating the accuracy of the methodology on dicotyledons such as cotton, but we believe the approach will be more broadly applicable. This study involved applying our technique to a set of six Gossypium hirsutum (cotton) plants studied over four time-points. Manual measurements, performed for each plant at every time-point, were used to assess the accuracy of our pipeline and quantify the error on the morphological parameters estimated. Conclusions By directly comparing our automated mesh-based quantitative data with manual measurements of individual stem height, leaf width and leaf length, we obtained mean absolute errors of 9.34%, 5.75%, and 8.78%, and correlation coefficients of 0.88, 0.96, and 0.95, respectively. The temporal matching of leaves was accurate in 95% of cases, and the average execution time required to analyse a plant over four time-points was 4.9 minutes. The mesh processing based methodology is thus considered suitable for quantitative 4D monitoring of plant phenotypic features. PMID:22553969
Estimating the fates of organic contaminants in an aquifer using QSAR.
Lim, Seung Joo; Fox, Peter
2013-01-01
The quantitative structure-activity relationship (QSAR) model BIOWIN was modified to more accurately estimate the fates of organic contaminants in an aquifer. The predictions from BIOWIN were modified to include oxidation and sorption effects, so that the predictive model accounted for sorption, biodegradation, and oxidation. A total of 35 organic compounds were used to validate the predictive model. The majority of the ratios of predicted half-life to measured half-life were within a factor of 2, and no ratio exceeded a factor of 5. In addition, the accuracy of estimating the persistence of organic compounds in the subsurface was superior when predictions were modified by the relative fraction adsorbed to the solid phase, 1/Rf, rather than by the remaining fraction of a given compound adsorbed to a solid, 1 - fs.
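A minimal sketch of a sorption adjustment in the spirit described above, using the classic linear-sorption retardation factor Rf = 1 + (ρb/θ)Kd; the paper's exact coupling to BIOWIN is more involved, and the parameter values here are hypothetical:

```python
# Adjust a biodegradation half-life for sorption: only the dissolved
# fraction (1/Rf) is assumed bioavailable, so the apparent first-order
# rate scales down by 1/Rf and the half-life up by Rf. (Assumption for
# illustration, not the paper's exact formulation.)
def retardation_factor(kd_l_per_kg, bulk_density_kg_l=1.6, porosity=0.35):
    """Linear-sorption retardation factor for an aquifer solid."""
    return 1.0 + (bulk_density_kg_l / porosity) * kd_l_per_kg

def effective_half_life(biotic_half_life_d, kd_l_per_kg):
    """Half-life of the sorbing compound in the subsurface."""
    return biotic_half_life_d * retardation_factor(kd_l_per_kg)

print(f"Rf = {retardation_factor(0.8):.1f}")
print(f"effective t1/2 = {effective_half_life(30.0, 0.8):.0f} d (vs 30 d unsorbed)")
```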
NASA Astrophysics Data System (ADS)
Fakhri, G. El; Kijewski, M. F.; Moore, S. C.
2001-06-01
Estimates of SPECT activity within certain deep brain structures could be useful for clinical tasks such as early prediction of Alzheimer's disease with Tc-99m or Parkinson's disease with I-123; however, such estimates are biased by poor spatial resolution and inaccurate scatter and attenuation corrections. We compared a fast analytical approach (AA) for more accurate quantitation with a slower iterative approach (IA). Monte Carlo simulated projections of 12 normal and 12 pathologic Tc-99m perfusion studies, as well as 12 normal and 12 pathologic I-123 neurotransmission studies, were generated using a digital brain phantom and corrected for scatter by a multispectral fitting procedure. The AA included attenuation correction by a modified Metz-Fan algorithm and activity estimation by a technique that incorporated Metz filtering to compensate for variable collimator response (VCR); the IA modeled attenuation and VCR in the projector/backprojector of an ordered subsets-expectation maximization (OSEM) algorithm. Bias and standard deviation over the 12 normal and 12 pathologic patients were calculated with respect to the reference values in the corpus callosum, caudate nucleus, and putamen. The IA and AA yielded similar quantitation results for both Tc-99m and I-123 studies in all brain structures considered, in both normal and pathologic patients. The bias with respect to the reference activity distributions was less than 7% for Tc-99m studies but greater than 30% for I-123 studies, due to partial volume effects in the striata. Our results were validated using I-123 physical acquisitions of an anthropomorphic brain phantom. The AA yielded quantitation accuracy comparable to that obtained with the IA, while requiring much less processing time. However, in most conditions, the IA yielded lower noise for the same bias than did the AA.
Quantitating Organoleptic Volatile Phenols in Smoke-Exposed Vitis vinifera Berries.
Noestheden, Matthew; Thiessen, Katelyn; Dennis, Eric G; Tiet, Ben; Zandberg, Wesley F
2017-09-27
Accurate methods for quantitating volatile phenols (i.e., guaiacol, syringol, 4-ethylphenol, etc.) in smoke-exposed Vitis vinifera berries prior to fermentation are needed to predict the likelihood of perceptible smoke taint following vinification. Reported here is a complete, cross-validated analytical workflow to accurately quantitate free and glycosidically bound volatile phenols in smoke-exposed berries using liquid-liquid extraction, acid-mediated hydrolysis, and gas chromatography-tandem mass spectrometry. The reported workflow addresses critical gaps in existing methods for volatile phenols that impact quantitative accuracy, most notably the effect of injection port temperature and the variability in acid-mediated hydrolytic procedures currently used. Addressing these deficiencies will help the wine industry make accurate, informed decisions when producing wines from smoke-exposed berries.
Calibration-free assays on standard real-time PCR devices
Debski, Pawel R.; Gewartowski, Kamil; Bajer, Seweryn; Garstecki, Piotr
2017-01-01
Quantitative Polymerase Chain Reaction (qPCR) is one of the central techniques in molecular biology and an important tool in medical diagnostics. While being the gold standard, qPCR techniques depend on reference measurements and are susceptible to large errors caused by even small changes of reaction efficiency or conditions, errors that are typically not marked by decreased precision. Digital PCR (dPCR) technologies should alleviate the need for calibration by providing absolute quantitation using binary (yes/no) signals from partitions, provided that the basic assumption of amplification of a single target molecule into a positive signal is met. Still, access to digital techniques is limited because they require new instruments. We show an analog-digital method that can be executed on standard (real-time) qPCR devices. It benefits from real-time readout, providing calibration-free assessment. The method combines the advantages of qPCR and dPCR and bypasses their drawbacks. The protocols provide for small, simplified partitioning that can be fitted within a standard well-plate format. We demonstrate that, with the use of synergistic assay design, standard qPCR devices are capable of absolute quantitation where normal qPCR protocols fail to provide accurate estimates. We list practical recipes for designing assays for required parameters and for analyzing signals to estimate concentration. PMID:28327545
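For context, the calibration-free property of partition-based quantitation comes from simple Poisson statistics: the fraction of negative partitions alone fixes the mean copies per partition, with no standard curve. A minimal sketch with invented counts:

```python
# Poisson arithmetic of digital (partition-based) PCR quantitation.
import math

n_partitions = 96      # e.g. one plate of small partitioned reactions
n_negative   = 30      # partitions with no amplification signal
volume_ul    = 10.0    # volume per partition (assumed)

lam = -math.log(n_negative / n_partitions)   # mean copies per partition
copies_per_ul = lam / volume_ul
print(f"lambda = {lam:.3f} copies/partition "
      f"-> {copies_per_ul:.3f} copies/µL of reaction mix")
```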
Estimating malaria transmission from humans to mosquitoes in a noisy landscape.
Reiner, Robert C; Guerra, Carlos; Donnelly, Martin J; Bousema, Teun; Drakeley, Chris; Smith, David L
2015-10-06
A basic quantitative understanding of malaria transmission requires measuring the probability a mosquito becomes infected after feeding on a human. Parasite prevalence in mosquitoes is highly age-dependent, and the unknown age-structure of fluctuating mosquito populations impedes estimation. Here, we simulate mosquito infection dynamics, where mosquito recruitment is modelled seasonally with fractional Brownian noise, and we develop methods for estimating mosquito infection rates. We find that noise introduces bias, but the magnitude of the bias depends on the 'colour' of the noise. Some of these problems can be overcome by increasing the sampling frequency, but estimates of transmission rates (and estimated reductions in transmission) are most accurate and precise if they combine parity, oocyst rates and sporozoite rates. These studies provide a basis for evaluating the adequacy of various entomological sampling procedures for measuring malaria parasite transmission from humans to mosquitoes and for evaluating the direct transmission-blocking effects of a vaccine. © 2015 The Authors.
A fast cross-validation method for alignment of electron tomography images based on Beer-Lambert law
Yan, Rui; Edwards, Thomas J.; Pankratz, Logan M.; Kuhn, Richard J.; Lanman, Jason K.; Liu, Jun; Jiang, Wen
2015-01-01
In electron tomography, accurate alignment of tilt series is an essential step in attaining high-resolution 3D reconstructions. Nevertheless, quantitative assessment of alignment quality has remained a challenging issue, even though many alignment methods have been reported. Here, we report a fast and accurate method, tomoAlignEval, based on the Beer-Lambert law, for the evaluation of alignment quality. Our method is able to globally estimate the alignment accuracy by measuring the goodness of the log-linear relationship of beam intensity attenuation at different tilt angles. Extensive tests with experimental data demonstrated its robust performance with stained and cryo samples. Our method is not only significantly faster but also more sensitive than measurements of tomogram resolution using the Fourier shell correlation method (FSCe/o). From these tests, we also conclude that while current alignment methods are sufficiently accurate for stained samples, inaccurate alignments remain a major limitation for high-resolution cryo-electron tomography. PMID:26455556
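A minimal sketch of the Beer-Lambert relationship this method exploits: for a slab of thickness d, transmitted intensity at tilt angle θ falls as exp(-d/(Λ cos θ)), so log-intensity should be linear in 1/cos θ. The R² of that fit is used below as a stand-in quality score (tomoAlignEval's actual statistic may be defined differently); the data are synthetic.

```python
# Check log-linearity of beam attenuation across a tilt series.
import numpy as np

tilt_deg = np.arange(-60, 61, 3)
theta = np.radians(tilt_deg)
d_over_L = 1.2                                  # thickness / mean free path (assumed)
rng = np.random.default_rng(3)
intensity = np.exp(-d_over_L / np.cos(theta)) * (1 + 0.02 * rng.normal(size=theta.size))

x = 1.0 / np.cos(theta)                         # effective path length factor
y = np.log(intensity)
slope, intercept = np.polyfit(x, y, 1)
r2 = np.corrcoef(x, y)[0, 1] ** 2
print(f"fitted d/L = {-slope:.3f}, log-linearity R^2 = {r2:.4f}")
```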
Image-derived input function with factor analysis and a-priori information.
Simončič, Urban; Zanotti-Fregonara, Paolo
2015-02-01
Quantitative PET studies often require the cumbersome and invasive procedure of arterial cannulation to measure the input function. This study sought to minimize the number of necessary blood samples by developing a factor-analysis-based image-derived input function (IDIF) methodology for dynamic PET brain studies. IDIF estimation was performed as follows: (a) carotid and background regions were segmented manually on an early PET time frame; (b) blood-weighted and tissue-weighted time-activity curves (TACs) were extracted with factor analysis; (c) factor analysis results were denoised and scaled using the voxels with the highest blood signal; (d) using population data and one blood sample at 40 min, the whole-blood TAC was estimated from the postprocessed factor analysis results; and (e) the parent concentration was finally estimated by correcting the whole-blood curve with measured radiometabolite concentrations. The methodology was tested using data from 10 healthy individuals imaged with [(11)C](R)-rolipram. The accuracy of the IDIFs was assessed against full arterial sampling by comparing the areas under the curve of the input functions and by calculating the total distribution volume (VT). The shape of the image-derived whole-blood TAC matched the reference arterial curves well, and the whole-blood areas under the curve were accurately estimated (mean error 1.0±4.3%). The relative Logan-VT error was -4.1±6.4%. Compartmental modeling and spectral analysis gave less accurate VT results compared with Logan analysis. A factor-analysis-based IDIF for [(11)C](R)-rolipram brain PET studies that relies on a single blood sample and population data can thus be used for accurate quantification of Logan-VT values.
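A minimal sketch of the Logan graphical analysis used for VT above: past a time t*, the plot of ∫C_T/C_T versus ∫C_p/C_T becomes linear with slope VT. The curves below are synthetic one-tissue-compartment data, not rolipram kinetics.

```python
# Logan plot: recover VT from a tissue curve and an input function.
import numpy as np
from scipy.integrate import cumulative_trapezoid

t  = np.linspace(0, 90, 181)                 # minutes
Cp = 10.0 * t * np.exp(-t / 8.0)             # toy plasma input function
K1, k2 = 0.10, 0.05                          # true VT = K1/k2 = 2.0
dt = t[1] - t[0]
# tissue curve via convolution: C_T(t) = K1 * int Cp(s) exp(-k2 (t-s)) ds
Ct = K1 * np.convolve(Cp, np.exp(-k2 * t))[: t.size] * dt

int_Cp = cumulative_trapezoid(Cp, t, initial=0.0)
int_Ct = cumulative_trapezoid(Ct, t, initial=0.0)
mask = t > 30                                # t* chosen by eye
x = int_Cp[mask] / Ct[mask]
y = int_Ct[mask] / Ct[mask]
VT, intercept = np.polyfit(x, y, 1)          # slope of the linear tail
print(f"Logan VT = {VT:.2f} (ground truth 2.00)")
```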
ITALICS: an algorithm for normalization and DNA copy number calling for Affymetrix SNP arrays.
Rigaill, Guillem; Hupé, Philippe; Almeida, Anna; La Rosa, Philippe; Meyniel, Jean-Philippe; Decraene, Charles; Barillot, Emmanuel
2008-03-15
Affymetrix SNP arrays can be used to measure the DNA copy number of 11,000-500,000 SNPs along the genome. Their high density facilitates the precise localization of genomic alterations and makes them a powerful tool for studies of cancers and copy number polymorphism. Like other microarray technologies, they are influenced by non-relevant sources of variation, requiring correction. Moreover, the amplitude of variation induced by non-relevant effects is similar to or greater than the biologically relevant effect (i.e. the true copy number), making it difficult to estimate non-relevant effects accurately without including the biologically relevant effect. We addressed this problem by developing ITALICS, a normalization method that estimates both biological and non-relevant effects in an alternate, iterative manner, accurately eliminating irrelevant effects. We compared our normalization method with other existing and available methods and found that ITALICS outperformed them on several in-house datasets and one public dataset. These results were validated biologically by quantitative PCR. The R package ITALICS (ITerative and Alternative normaLIzation and Copy number calling for affymetrix Snp arrays) has been submitted to Bioconductor.
New Equation for Prediction of Martensite Start Temperature in High Carbon Ferrous Alloys
NASA Astrophysics Data System (ADS)
Park, Jihye; Shim, Jae-Hyeok; Lee, Seok-Jae
2018-02-01
Since previous equations fail to predict the martensite start (MS) temperature of high-carbon ferrous alloys, we propose the first equation for predicting the MS temperature of ferrous alloys containing >2 wt pct C. The presence of carbides (Fe3C and Cr-rich M7C3) is considered thermodynamically to estimate the C concentration in austenite. In particular, equations individually specialized for lean and high Cr alloys reproduce experimental results very accurately. The chemical driving force for the martensitic transformation is quantitatively analyzed based on the calculation of the T0 temperature.
Two schemes for quantitative photoacoustic tomography based on Monte Carlo simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yubin; Yuan, Zhen, E-mail: zhenyuan@umac.mo
Purpose: The aim of this study was to develop novel methods for photoacoustically determining the optical absorption coefficient of biological tissues using Monte Carlo (MC) simulation. Methods: In this study, the authors propose two quantitative photoacoustic tomography (PAT) methods for mapping the optical absorption coefficient. The reconstruction methods combine conventional PAT with MC simulation in a novel way to determine the optical absorption coefficient of biological tissues or organs. Specifically, the authors' two schemes were examined theoretically and experimentally using simulations, tissue-mimicking phantoms, and ex vivo and in vivo tests. In particular, the authors explored these methods using several objects with different absorption contrasts embedded in turbid media, and using high-absorption media in which the diffusion approximation is not effective at describing photon transport. Results: The simulations and experimental tests showed that the reconstructions were quantitatively accurate in terms of the locations, sizes, and optical properties of the targets. The positions of the recovered targets were assessed from the property profiles, where the authors found that the off-center error was less than 0.1 mm for the circular target. Meanwhile, the sizes and quantitative optical properties of the targets were quantified by estimating the full width at half maximum of the optical absorption property. Interestingly, for the reconstructed sizes, the errors ranged from 0 for relatively small targets to 26% for relatively large targets, whereas for the recovered optical properties, the errors ranged from 0% to 12.5% across cases. Conclusions: The authors found that their methods can quantitatively reconstruct absorbing objects of different sizes and optical contrasts even when the diffusion approximation is unable to accurately describe photon propagation in biological tissues. In particular, their methods resolve the intrinsic difficulties that arise when quantitative PAT is conducted by combining conventional PAT with the diffusion approximation or with radiation transport modeling.
NASA Astrophysics Data System (ADS)
Sadeghipour, N.; Davis, S. C.; Tichauer, K. M.
2017-01-01
New precision medicine drugs oftentimes act through binding to specific cell-surface cancer receptors, and thus their efficacy is highly dependent on the availability of those receptors and the receptor concentration per cell. Paired-agent molecular imaging can provide quantitative information on receptor status in vivo, especially in tumor tissue; however, to date, published approaches to paired-agent quantitative imaging require that only ‘trace’ levels of imaging agent exist compared to receptor concentration. This strict requirement may limit applicability, particularly in drug binding studies, which seek to report on a biological effect in response to saturating receptors with a drug moiety. To extend the regime over which paired-agent imaging may be used, this work presents a generalized simplified reference tissue model (GSRTM) for paired-agent imaging developed to approximate receptor concentration in both non-receptor-saturated and receptor-saturated conditions. Extensive simulation studies show that tumor receptor concentration estimates recovered using the GSRTM are more accurate in receptor-saturation conditions than the standard simple reference tissue model (SRTM) (% error (mean ± sd): GSRTM 0 ± 1 and SRTM 50 ± 1) and match the SRTM accuracy in non-saturated conditions (% error (mean ± sd): GSRTM 5 ± 5 and SRTM 0 ± 5). To further test the approach, GSRTM-estimated receptor concentration was compared to SRTM-estimated values extracted from tumor xenograft in vivo mouse model data. The GSRTM estimates were observed to deviate from the SRTM in tumors with low receptor saturation (which are likely in a saturated regime). Finally, a general ‘rule-of-thumb’ algorithm is presented to estimate the expected level of receptor saturation that would be achieved in a given tissue provided dose and pharmacokinetic information about the drug or imaging agent being used, and physiological information about the tissue. These studies suggest that the GSRTM is necessary when receptor saturation exceeds 20% and highlight the potential for GSRTM to accurately measure receptor concentrations under saturation conditions, such as might be required during high dose drug studies, or for imaging applications where high concentrations of imaging agent are required to optimize signal-to-noise conditions. This model can also be applied to PET and SPECT imaging studies that tend to suffer from noisier data, but require one less parameter to fit if images are converted to imaging agent concentration (quantitative PET/SPECT).
Chiò, A; Logroscino, G; Traynor, BJ; Collins, J; Simeone, JC; Goldstein, LA; White, LA
2014-01-01
Background Amyotrophic lateral sclerosis (ALS) is relatively rare, yet the economic and social burden is substantial. Having accurate incidence and prevalence estimates would facilitate efficient allocation of healthcare resources. Objective To provide a comprehensive and critical review of the epidemiologic literature on ALS. Methods MEDLINE and EMBASE (1995–2011) databases of population-based studies on ALS incidence and prevalence reporting quantitative data were analyzed. Data extracted included study location and time, design and data sources, case ascertainment methods, and incidence and/or prevalence rates. Medians and inter-quartile ranges (IQRs) were calculated, and ALS case estimates derived using 2010 population estimates. Results In all, 37 articles met inclusion criteria. In Europe, the median (IQR) incidence rate (/100,000 population) was 2.08 (1.47–2.43), corresponding to an estimated 15,355 (10,852–17,938) cases. Median (IQR) prevalence (/100,000 population) was 5.40 (4.06–7.89), or 39,863 (29,971–58,244) prevalent cases. Conclusions Disparity in rates among ALS incidence and prevalence studies may be due to differences in study design or true variations in population demographics, such as age, and geography, including environmental factors and genetic predisposition. Additional large-scale studies that use standardized case ascertainment methods are needed to more accurately assess the true global burden of ALS. PMID:23860588
Dulohery, Kate; Papavdi, Asteria; Michalodimitrakis, Manolis; Kranioti, Elena F
2012-11-01
Coronary artery atherosclerosis is a hugely prevalent condition in the Western world and is often encountered during autopsy. Atherosclerotic plaques can cause luminal stenosis which, if over a significant level (75%), is said to contribute to the cause of death. Stenosis can be estimated macroscopically by the forensic pathologist at the time of autopsy or by microscopic examination. This study compares macroscopic estimation with quantitative microscopic image analysis, with a particular focus on the assessment of significant stenosis (>75%). A total of 131 individuals were analysed. The sample consists of an atherosclerotic group (n=122) and a control group (n=9). The results of the two methods were significantly different from each other (p=0.001), and the macroscopic method gave a greater percentage stenosis by an average of 3.5%. Histological examination of coronary artery stenosis also yielded a different finding of significant stenosis in 11.5% of cases. The differences were attributed to underestimation by histological quantitative image analysis, overestimation by gross examination, or a combination of both. The underestimation may have come from tissue shrinkage during processing of the histological specimen. The overestimation in the macroscopic assessment can be attributed to the lumen shape, to observer error, or to a possible bias toward diagnosing coronary disease when no other cause of death is apparent. The results indicate that macroscopic estimation is open to more biases and that histological quantitative image analysis gives a precise assessment of stenosis only ex vivo. Once tissue shrinkage, if any, is accounted for, histological quantitative image analysis will yield a more accurate assessment of in vivo stenosis. It may then be considered a complementary tool for the examination of coronary stenosis. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
CLICK: The new USGS center for LIDAR information coordination and knowledge
Stoker, Jason M.; Greenlee, Susan K.; Gesch, Dean B.; Menig, Jordan C.
2006-01-01
Elevation data is rapidly becoming an important tool for the visualization and analysis of geographic information. The creation and display of three-dimensional models representing bare earth, vegetation, and structures have become major requirements for geographic research in the past few years. Light Detection and Ranging (lidar) has been increasingly accepted as an effective and accurate technology for acquiring high-resolution elevation data for bare earth, vegetation, and structures. Lidar is an active remote sensing system that records the distance, or range, of a laser fired from an airborne or spaceborne platform such as an airplane, helicopter, or satellite to objects or features on the Earth's surface. By converting lidar data into bare ground topography and vegetation or structural morphologic information, extremely accurate, high-resolution elevation models can be derived to visualize and quantitatively represent scenes in three dimensions. In addition to high-resolution digital elevation models (Evans et al., 2001), other lidar-derived products include quantitative estimates of vegetative features such as canopy height, canopy closure, and biomass (Lefsky et al., 2002), and models of urban areas such as building footprints and three-dimensional city models (Maas, 2001).
Dias, Olívia Meira; Baldi, Bruno Guedes; Pennati, Francesca; Aliverti, Andrea; Chate, Rodrigo Caruso; Sawamura, Márcio Valente Yamada; Carvalho, Carlos Roberto Ribeiro de; Albuquerque, André Luis Pereira de
2018-01-01
Hypersensitivity pneumonitis (HP) is a disease with variable clinical presentation in which inflammation in the lung parenchyma is caused by the inhalation of specific organic antigens or low-molecular-weight substances in genetically susceptible individuals. Features of the acute, subacute and chronic forms may eventually overlap, and a classification based on temporality and the presence of fibrosis (acute/inflammatory HP vs. chronic HP) seems more feasible and useful in clinical practice. Differential diagnosis of chronic HP from other interstitial fibrotic diseases is challenging due to the overlap of the clinical history and of the functional and imaging findings of these pathologies in the terminal stages. Areas covered: This article reviews the essential features of HP with emphasis on imaging features. Moreover, the main methodological limitations of high-resolution computed tomography (HRCT) interpretation are discussed, as well as new perspectives with volumetric quantitative CT analysis as a useful tool for retrieving detailed and accurate information from the lung parenchyma. Expert commentary: Mosaic attenuation is a prominent feature of this disease, but air trapping in chronic HP seems overestimated. Quantitative analysis has the potential to estimate the involvement of the pulmonary parenchyma more accurately and could correlate better with pulmonary function results.
NASA Astrophysics Data System (ADS)
Shakeel, Hira; Haq, S. U.; Aisha, Ghulam; Nadeem, Ali
2017-06-01
Quantitative analysis of a standard aluminum-silicon alloy has been performed using calibration-free laser-induced breakdown spectroscopy (CF-LIBS). The plasma was produced using the fundamental harmonic (1064 nm) of an Nd:YAG laser, and the emission spectra were recorded at a detector gate delay of 3.5 μs. Qualitative analysis of the emission spectra confirms the presence of Mg, Al, Si, Ti, Mn, Fe, Ni, Cu, Zn, Sn, and Pb in the alloy. The background-subtracted and self-absorption-corrected emission spectra were used to estimate the plasma temperature as 10,100 ± 300 K. The plasma temperature and the self-absorption-corrected emission lines of each element were then used to determine the concentration of each species present in the alloy. The use of corrected emission intensities and accurate evaluation of the plasma temperature yield reliable quantitative analysis, with a maximum deviation of 2.2% from the reference sample concentrations.
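A minimal sketch of the Boltzmann-plot step by which CF-LIBS extracts the plasma temperature: for emission lines of one species, ln(Iλ/(g_k A_ki)) is linear in the upper-level energy E_k with slope -1/(k_B T). The line data below are invented placeholders, not actual Al or Si transitions.

```python
# Boltzmann plot: recover the plasma temperature from line intensities.
import numpy as np

k_B = 8.617333e-5                    # Boltzmann constant, eV/K
T_true = 10_100.0                    # K, used only to generate fake intensities

E_k = np.array([3.14, 4.02, 4.83, 5.61, 6.44])        # eV (hypothetical)
g_A = np.array([2.4e8, 9.8e7, 4.1e8, 1.6e8, 7.7e7])   # g_k * A_ki (hypothetical)
lam = np.array([396.2, 308.2, 394.4, 309.3, 305.5])   # nm (hypothetical)
I   = g_A / lam * np.exp(-E_k / (k_B * T_true))       # synthetic line intensities

y = np.log(I * lam / g_A)
slope, _ = np.polyfit(E_k, y, 1)     # slope = -1 / (k_B * T)
T_est = -1.0 / (k_B * slope)
print(f"Boltzmann-plot temperature: {T_est:,.0f} K")
```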
Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images
Frey, Eric C.; Humm, John L.; Ljungberg, Michael
2012-01-01
The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429
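As a concrete example of one such quantitative metric, the standard uptake value mentioned above reduces to a one-line normalization; the numbers below are hypothetical:

```python
# SUV: tissue activity concentration normalised by injected dose per
# unit body weight; decay correction to a common time point is assumed
# to have been done upstream.
def suv(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """SUV (g/mL) = C_tissue / (dose / weight); 1 g/mL tissue density assumed."""
    dose_kbq = injected_dose_mbq * 1000.0
    return activity_kbq_per_ml / (dose_kbq / (body_weight_kg * 1000.0))

print(f"SUV = {suv(5.3, 370.0, 70.0):.2f}")   # hypothetical patient values
```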
Shivanandan, Arun; Unnikrishnan, Jayakrishnan; Radenovic, Aleksandra
2015-01-01
Single Molecule Localization Microscopy techniques like PhotoActivated Localization Microscopy, with their sub-diffraction-limit spatial resolution, have been widely used to characterize the spatial organization of membrane proteins by means of quantitative cluster analysis. However, such quantitative studies remain challenged by the techniques' inherent sources of error, such as a limited detection efficiency of less than 60%, due to incomplete photo-conversion, and a limited localization precision in the range of 10-30 nm that varies across the detected molecules, depending mainly on the number of photons collected from each. We provide analytical methods to estimate the effect of these errors on cluster analysis and to correct for them. These methods, based on the Ripley's L(r) - r or pair correlation function popularly used by the community, can facilitate potentially breakthrough results in quantitative biology by providing a more accurate and precise quantification of protein spatial organization. PMID:25794150
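A minimal sketch of the Ripley's L(r) - r statistic referred to above, computed naively (no edge correction) for a synthetic point pattern; the detection-efficiency and localization-precision corrections the authors derive are not reproduced here.

```python
# Naive Ripley K / L(r)-r for a 2D point pattern: values near zero
# indicate complete spatial randomness, positive values clustering.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(4)
side, n = 1000.0, 400                     # nm; localization coordinates
pts = rng.uniform(0, side, size=(n, 2))   # CSR pattern for illustration

d = pdist(pts)                            # all unordered pairwise distances
lam = n / (side * side)                   # point density
for r in (25, 50, 100):                   # nm, scales of interest
    K = 2.0 * np.sum(d < r) / (n * lam)   # K = (1/(n*lam)) * sum over ordered pairs
    L = np.sqrt(K / np.pi)
    print(f"r={r:4d} nm: L(r)-r = {L - r:+.1f}")
```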
Sampling for Soil Carbon Stock Assessment in Rocky Agricultural Soils
NASA Technical Reports Server (NTRS)
Beem-Miller, Jeffrey P.; Kong, Angela Y. Y.; Ogle, Stephen; Wolfe, David
2016-01-01
Coring methods commonly employed in soil organic C (SOC) stock assessment may not accurately capture soil rock fragment (RF) content or soil bulk density (ρb) in rocky agricultural soils, potentially biasing SOC stock estimates. Quantitative pits are considered less biased than coring methods but are invasive and often cost-prohibitive. We compared fixed-depth and mass-based estimates of SOC stocks (0-0.3 m depth) for hammer, hydraulic push, and rotary coring methods relative to quantitative pits at four agricultural sites ranging in RF content from <0.01 to 0.24 m³ m⁻³. Sampling costs were also compared. Coring methods significantly underestimated RF content at all rocky sites, but significant differences (p < 0.05) in SOC stocks between pits and corers were only found with the hammer method using the fixed-depth approach at the <0.01 m³ m⁻³ RF site (pit, 5.80 kg C m⁻²; hammer, 4.74 kg C m⁻²) and at the 0.14 m³ m⁻³ RF site (pit, 8.81 kg C m⁻²; hammer, 6.71 kg C m⁻²). The hammer corer also underestimated ρb at all sites, as did the hydraulic push corer at the 0.21 m³ m⁻³ RF site. No significant differences in mass-based SOC stock estimates were observed between pits and corers. Our results indicate that (i) calculating SOC stocks on a mass basis can overcome biases in RF and ρb estimates introduced by sampling equipment and (ii) a quantitative pit is the optimal sampling method for establishing reference soil masses, followed by rotary and then hydraulic push corers.
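A minimal sketch of why RF content and ρb feed directly into fixed-depth SOC stocks, using the common formulation stock = C × ρb,fine × depth × (1 - RFvol); the values are illustrative, not the study's data.

```python
# Fixed-depth SOC stock and how corer biases in bulk density and rock
# fragment content propagate into it.
def soc_stock_kg_m2(c_frac, bulk_density_fine_mg_m3, depth_m, rf_vol_frac):
    """SOC stock (kg C/m^2); bulk density of the fine (<2 mm) fraction in
    Mg/m^3, carbon as a mass fraction of fine soil, RF as a volume fraction."""
    return c_frac * bulk_density_fine_mg_m3 * 1000.0 * depth_m * (1.0 - rf_vol_frac)

# Same pedon, but the corer misses rocks and compacts the core:
pit_style  = soc_stock_kg_m2(0.020, 1.30, 0.30, rf_vol_frac=0.14)
core_style = soc_stock_kg_m2(0.020, 1.10, 0.30, rf_vol_frac=0.02)
print(f"pit-style estimate:   {pit_style:.2f} kg C/m^2")
print(f"biased core estimate: {core_style:.2f} kg C/m^2")
```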
Hunter, Margaret E; Dorazio, Robert M; Butterfield, John S S; Meigs-Friend, Gaia; Nico, Leo G; Ferrante, Jason A
2017-03-01
A set of universal guidelines is needed to determine the limit of detection (LOD) in PCR-based analyses of low-concentration DNA. In particular, environmental DNA (eDNA) studies require sensitive and reliable methods to detect rare and cryptic species through shed genetic material in environmental samples. Current strategies for assessing detection limits of eDNA are either too stringent or subjective, possibly resulting in biased estimates of species' presence. Here, a conservative LOD analysis grounded in analytical chemistry is proposed to correct for overestimated DNA concentrations, which are predominantly caused by the concentration plateau, a nonlinear relationship between expected and measured DNA concentrations. We used statistical criteria to establish formal mathematical models for both quantitative and droplet digital PCR. To assess the method, a new Grass Carp (Ctenopharyngodon idella) TaqMan assay was developed and tested on both PCR platforms using eDNA in water samples. The LOD adjustment reduced Grass Carp occupancy and detection estimates while increasing uncertainty, indicating that caution needs to be applied to eDNA data without LOD correction. Compared to quantitative PCR, digital PCR had higher occurrence estimates due to increased sensitivity and dilution of inhibitors at low concentrations. Without accurate LOD correction, species occurrence and detection probabilities based on eDNA estimates are prone to a source of bias that cannot be reduced by increasing sample size or PCR replicates. Other applications, such as GMO food analysis and forensic and clinical diagnostics, could also benefit from a standardized LOD. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
Accuracy of Blood Loss Measurement during Cesarean Delivery.
Doctorvaladan, Sahar V; Jelks, Andrea T; Hsieh, Eric W; Thurer, Robert L; Zakowski, Mark I; Lagrew, David C
2017-04-01
Objective This study aims to compare the accuracy of visual, quantitative gravimetric, and colorimetric methods used to determine blood loss during cesarean delivery procedures, employing a hemoglobin extraction assay as the reference standard. Study Design In 50 patients having cesarean deliveries, blood loss determined by assays of hemoglobin content on surgical sponges and in suction canisters was compared with obstetricians' visual estimates, a quantitative gravimetric method, and the blood loss determined by a novel colorimetric system. Agreement between the reference assay and the other measures was evaluated by the Bland-Altman method. Results Compared with the blood loss measured by the reference assay (470 ± 296 mL), the colorimetric system (572 ± 334 mL) was more accurate than either visual estimation (928 ± 261 mL) or gravimetric measurement (822 ± 489 mL). The correlation between the assay method and the colorimetric system was more predictive (standardized coefficient = 0.951, adjusted R² = 0.902) than either visual estimation (standardized coefficient = 0.700, adjusted R² = 0.479) or the gravimetric determination (standardized coefficient = 0.564, adjusted R² = 0.304). Conclusion During cesarean delivery, measuring blood loss using colorimetric image analysis is superior to visual estimation and a gravimetric method. Implementation of colorimetric analysis may enhance the ability of management protocols to improve clinical outcomes.
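A minimal sketch of the Bland-Altman agreement analysis used above, computing bias and 95% limits of agreement for invented paired measurements:

```python
# Bland-Altman: bias (mean difference) and 95% limits of agreement
# between a test method and the reference assay. Paired values invented.
import numpy as np

reference = np.array([310, 420, 505, 630, 298, 760, 455, 540.0])  # mL, assay
test      = np.array([360, 470, 610, 700, 350, 905, 520, 600.0])  # mL, e.g. colorimetric

diff = test - reference
bias = diff.mean()
sd   = diff.std(ddof=1)
loa  = (bias - 1.96 * sd, bias + 1.96 * sd)
print(f"bias = {bias:.0f} mL, 95% limits of agreement "
      f"{loa[0]:.0f} to {loa[1]:.0f} mL")
```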
Assessing non-additive effects in GBLUP model.
Vieira, I C; Dos Santos, J P R; Pires, L P M; Lima, B M; Gonçalves, F M A; Balestre, M
2017-05-10
Understanding non-additive effects in the expression of quantitative traits is very important in genotype selection, especially in species where the commercial products are clones or hybrids. The use of molecular markers has allowed the study of non-additive genetic effects at the genomic level, in addition to a better understanding of their importance for quantitative traits. The purpose of this study was therefore to evaluate the behavior of the GBLUP model under different genetic models and relationship matrices, and their influence on estimates of genetic parameters. We used real data on circumference at breast height in Eucalyptus spp. and simulated data from an F2 population. Three kinship structures commonly reported in the literature were adopted. The simulation results showed that including epistatic kinship improved the prediction of genomic breeding values. However, the non-additive effects were not accurately recovered. The Fisher information matrix for the real dataset showed high collinearity among the estimates of additive, dominance, and epistatic variance, causing convergence problems and no gain in prediction of unobserved data. Estimates of genetic parameters and correlations differed across the kinship structures considered. Our results show that including non-additive effects can improve predictive ability, or even the prediction of additive effects. However, the large distortions observed in the variance estimates when the Hardy-Weinberg equilibrium assumption is violated, due to selection or inbreeding, can result in zero gain from models that consider epistasis in the genomic kinship.
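A minimal sketch of how an additive-by-additive epistatic kinship is commonly constructed for GBLUP-type models, as the Hadamard (element-wise) square of the additive G matrix (the same VanRaden construction shown earlier); whether such a matrix recovers non-additive variance well is precisely what the study questions.

```python
# Additive G matrix, then the epistatic (A x A) kinship as its
# element-wise square; toy genotypes throughout.
import numpy as np

rng = np.random.default_rng(5)
n, m = 30, 800
p = rng.uniform(0.1, 0.9, m)
M = rng.binomial(2, p, size=(n, m)).astype(float)
Z = M - 2 * p
G = Z @ Z.T / (2 * np.sum(p * (1 - p)))   # additive kinship (VanRaden)

G_aa = G * G                              # epistatic (A x A) kinship, Hadamard square
print("additive vs epistatic kinship for animals (0,1):",
      round(G[0, 1], 3), round(G_aa[0, 1], 3))
```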
Validation of Satellite-based Rainfall Estimates for Severe Storms (Hurricanes & Tornados)
NASA Astrophysics Data System (ADS)
Nourozi, N.; Mahani, S.; Khanbilvardi, R.
2005-12-01
Severe storms such as hurricanes and tornadoes cause devastating damage, almost every year, over a large section of the United States. More accurate forecasting of the intensity and track of a heavy storm can help to reduce, if not prevent, its damage to lives, infrastructure, and the economy. Deriving accurate high-resolution quantitative precipitation estimates (QPE) for a hurricane, which are required to improve forecasting and warning capabilities, is still a challenging problem because of the physical characteristics of the hurricane, even while it is still over the ocean. Satellite imagery is a valuable source of information for estimating and forecasting heavy precipitation and flash floods, particularly over the oceans, where the traditional ground-based gauge and radar sources cannot provide any information. To improve the capability of rainfall retrieval algorithms for estimating QPE of severe storms, their products are evaluated in this study. High-resolution (hourly, 4 km x 4 km) satellite infrared-based rainfall products from the NESDIS Hydro-Estimator (HE) and PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) algorithms have been tested against NEXRAD Stage-IV and rain gauge observations in this project. Three strong hurricanes, Charley (category 4), Jeanne (category 3), and Ivan (category 3), which caused devastating damage over Florida in the summer of 2004, were investigated. Preliminary results demonstrate that the HE tends to underestimate rain rates when NEXRAD shows heavy rainfall (rain rates greater than 25 mm/hr) and to overestimate when NEXRAD gives low rainfall amounts, while PERSIANN tends to underestimate rain rates in general.
A comparison of manual and quantitative elbow strength testing.
Shahgholi, Leili; Bengtson, Keith A; Bishop, Allen T; Shin, Alexander Y; Spinner, Robert J; Basford, Jeffrey R; Kaufman, Kenton R
2012-10-01
The aim of this study was to compare the clinical ratings of elbow strength obtained by skilled clinicians with objective strength measurement obtained through quantitative testing. A retrospective comparison of subject clinical records with quantitative strength testing results in a motion analysis laboratory was conducted. A total of 110 individuals between the ages of 8 and 65 yrs with traumatic brachial plexus injuries were identified. Patients underwent manual muscle strength testing as assessed on the 5-point British Medical Research Council Scale (5/5, normal; 0/5, absent) and quantitative elbow flexion and extension strength measurements. A total of 92 subjects had elbow flexion testing. Half of the subjects clinically assessed as having normal (5/5) elbow flexion strength on manual muscle testing exhibited less than 42% of their age-expected strength on quantitative testing. Eighty-four subjects had elbow extension strength testing. Similarly, half of those displaying normal elbow extension strength on manual muscle testing were found to have less than 62% of their age-expected values on quantitative testing. Significant differences between manual muscle testing and quantitative findings were not detected for the lesser (0-4) strength grades. Manual muscle testing, even when performed by experienced clinicians, may be more misleading than expected for subjects graded as having normal (5/5) strength. Manual muscle testing estimates for the lesser strength grades (1-4/5) seem reasonably accurate.
Inference for Stochastic Chemical Kinetics Using Moment Equations and System Size Expansion.
Fröhlich, Fabian; Thomas, Philipp; Kazeroonian, Atefeh; Theis, Fabian J; Grima, Ramon; Hasenauer, Jan
2016-07-01
Quantitative mechanistic models are valuable tools for disentangling biochemical pathways and for achieving a comprehensive understanding of biological systems. However, to be quantitative the parameters of these models have to be estimated from experimental data. In the presence of significant stochastic fluctuations this is a challenging task as stochastic simulations are usually too time-consuming and a macroscopic description using reaction rate equations (RREs) is no longer accurate. In this manuscript, we therefore consider moment-closure approximation (MA) and the system size expansion (SSE), which approximate the statistical moments of stochastic processes and tend to be more precise than macroscopic descriptions. We introduce gradient-based parameter optimization methods and uncertainty analysis methods for MA and SSE. Efficiency and reliability of the methods are assessed using simulation examples as well as by an application to data for Epo-induced JAK/STAT signaling. The application revealed that even if merely population-average data are available, MA and SSE improve parameter identifiability in comparison to RRE. Furthermore, the simulation examples revealed that the resulting estimates are more reliable for an intermediate volume regime. In this regime the estimation error is reduced and we propose methods to determine the regime boundaries. These results illustrate that inference using MA and SSE is feasible and possesses a high sensitivity.
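A minimal sketch of moment-based inference for the simplest case, a birth-death process with production rate k1 and degradation rate k2, where the moment equations close exactly; the data are synthetic and the model is far simpler than JAK/STAT signaling.

```python
# Fit (k1, k2) of a birth-death process by matching the moment-equation
# solution for the mean to noisy population-average data, instead of
# running stochastic simulations.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def moments(y, t, k1, k2):
    m, v = y                                  # mean and variance
    return [k1 - k2 * m,                      # dm/dt
            k1 + k2 * m - 2 * k2 * v]         # dv/dt (closes exactly here)

t = np.linspace(0, 10, 21)
true = (2.0, 0.4)
data = odeint(moments, [0.0, 0.0], t, args=true)[:, 0]
data = data * (1 + 0.05 * np.random.default_rng(6).normal(size=t.size))

def residuals(theta):
    k1, k2 = np.exp(theta)                    # log-parameterisation keeps rates positive
    return odeint(moments, [0.0, 0.0], t, args=(k1, k2))[:, 0] - data

fit = least_squares(residuals, x0=np.log([1.0, 1.0]))
print("estimated (k1, k2):", np.exp(fit.x).round(3), "true:", true)
```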
Quantitative estimation of itopride hydrochloride and rabeprazole sodium from capsule formulation.
Pillai, S; Singhvi, I
2008-09-01
Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method for simultaneous estimation of the two-component drug mixture of itopride hydrochloride and rabeprazole sodium from a combined capsule dosage form have been developed. The first method involves forming and solving simultaneous equations using 265.2 nm and 290.8 nm as the two analytical wavelengths. The second method is based on two-wavelength calculation; the wavelengths selected for estimation of itopride hydrochloride were 278.0 nm and 298.8 nm, and for rabeprazole sodium 253.6 nm and 275.2 nm. The HPLC method is a reversed-phase chromatographic method using a Phenomenex C18 column and acetonitrile:phosphate buffer (35:65 v/v, pH 7.0) as the mobile phase. All developed methods obey Beer's law in the concentration ranges employed. Results of analysis were validated statistically and by recovery studies.
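The simultaneous-equation (Vierordt) step can be written in a few lines. The absorptivity and absorbance numbers below are illustrative placeholders, not the published values for these two drugs:

```python
import numpy as np

# Vierordt's method for a two-component mixture: at each wavelength,
# A = a_X * C_X + a_Y * C_Y, giving two equations in two unknowns.
a = np.array([[0.045, 0.012],    # absorptivities at 265.2 nm (drug X, drug Y)
              [0.008, 0.051]])   # absorptivities at 290.8 nm (drug X, drug Y)
A = np.array([0.52, 0.47])       # measured absorbances of the mixture

C = np.linalg.solve(a, A)        # concentrations, e.g., in ug/mL
print(f"C_X = {C[0]:.1f}, C_Y = {C[1]:.1f}")
```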
Detector Position Estimation for PET Scanners.
Pierce, Larry; Miyaoka, Robert; Lewellen, Tom; Alessio, Adam; Kinahan, Paul
2012-06-11
Physical positioning of scintillation crystal detector blocks in Positron Emission Tomography (PET) scanners is not always exact. We test a proof of concept methodology for the determination of the six degrees of freedom for detector block positioning errors by utilizing a rotating point source over stepped axial intervals. To test our method, we created computer simulations of seven Micro Crystal Element Scanner (MiCES) PET systems with randomized positioning errors. The computer simulations show that our positioning algorithm can estimate the positions of the block detectors to an average of one-seventh of the crystal pitch tangentially, and one-third of the crystal pitch axially. Virtual acquisitions of a point source grid and a distributed phantom show that our algorithm improves both the quantitative and qualitative accuracy of the reconstructed objects. We believe this estimation algorithm is a practical and accurate method for determining the spatial positions of scintillation detector blocks.
Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com; Y
Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or after reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average μ-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of brain and chest were used to evaluate TOF-MLAA with various initial estimates, which included the object filled uniformly with water (the conventional initial estimate), uniformly with bone, uniformly with the average μ-value (the IAM magnitude initialization method), and the perfect spatial μ-distribution but with a wrong magnitude (initialization in terms of distribution). A 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR-derived μ-maps was also evaluated using computed tomography μ-maps as the gold standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., the average μ-value) was observed to reach the reference more quickly and naturally than in all other cases. The 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce quantitative μ-map/emission estimates when the corrections for physical effects such as scatter and randoms are included. The average μ-value obtained from the MR-derived μ-map was accurate to within 5% with corrections for bone, fat, and uniform lungs. Conclusions: The proposed IAM-TOF-MLAA can produce a quantitative μ-map without any calibration, provided that there are sufficient counts in the measured data. For low-count data, noise reduction and additional regularization/rescaling techniques need to be applied and investigated. The average μ-value within the object is prior information which can be extracted from MR and patient databases, and it is feasible to obtain an accurate average μ-value using an MR-derived μ-map with corrections, as demonstrated in this work.
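A sketch of an IAM-style initialization under stated assumptions: the soft-tissue-like average μ of 0.0975 cm^-1 at 511 keV used below is an assumed placeholder, whereas in the paper the average value is derived from MR or a patient database:

```python
import numpy as np

def iam_initial_mu_map(object_mask, mu_average=0.0975):
    """Fill the object's support with one average attenuation value (1/cm) so
    that early forward-projections start close to the right scale."""
    mu0 = np.zeros(object_mask.shape, dtype=float)
    mu0[object_mask] = mu_average
    return mu0

mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True          # toy object support
print(iam_initial_mu_map(mask).sum())
```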
Impact of TRMM and SSM/I Rainfall Assimilation on Global Analysis and QPF
NASA Technical Reports Server (NTRS)
Hou, Arthur; Zhang, Sara; Reale, Oreste
2002-01-01
Evaluation of QPF skills requires quantitatively accurate precipitation analyses. We show that assimilation of surface rain rates derived from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager and the Special Sensor Microwave/Imager (SSM/I) improves quantitative precipitation estimates (QPE) and many aspects of global analyses. Short-range forecasts initialized with analyses that include satellite rainfall data generally yield significantly higher QPF threat scores and better storm track predictions. These results were obtained using a variational procedure that minimizes the difference between the observed and model rain rates by correcting the moist physics tendency of the forecast model over a 6-h assimilation window. In two case studies of Hurricanes Bonnie and Floyd, synoptic analysis shows that this procedure produces initial conditions with better-defined tropical storm features and stronger precipitation intensity associated with the storm.
Quantifying and predicting Drosophila larvae crawling phenotypes
NASA Astrophysics Data System (ADS)
Günther, Maximilian N.; Nettesheim, Guilherme; Shubeita, George T.
2016-06-01
The fruit fly Drosophila melanogaster is a widely used model for cell biology, development, disease, and neuroscience. The fly’s power as a genetic model for disease and neuroscience can be augmented by a quantitative description of its behavior. Here we show that we can accurately account for the complex and unique crawling patterns exhibited by individual Drosophila larvae using a small set of four parameters obtained from the trajectories of a few crawling larvae. The values of these parameters change for larvae from different genetic mutants, as we demonstrate for fly models of Alzheimer’s disease and the Fragile X syndrome, allowing applications such as genetic or drug screens. Using the quantitative model of larval crawling developed here we use the mutant-specific parameters to robustly simulate larval crawling, which allows estimating the feasibility of laborious experimental assays and aids in their design.
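As an illustration of how a handful of parameters can generate crawling-like tracks, here is a generic correlated random walk with four parameters (mean speed, speed noise, turning noise, turning persistence). This is a stand-in for, not a reproduction of, the authors' published four-parameter model:

```python
import numpy as np

def simulate_crawl(n_steps=500, speed=1.0, speed_sd=0.2,
                   turn_sd=0.1, turn_persistence=0.8, seed=0):
    """Correlated random walk: the turning rate is autocorrelated, producing
    smooth, meandering tracks reminiscent of larval crawling."""
    rng = np.random.default_rng(seed)
    xy = np.zeros((n_steps, 2))
    heading, turn = 0.0, 0.0
    for i in range(1, n_steps):
        turn = turn_persistence * turn + rng.normal(0.0, turn_sd)
        heading += turn
        step = max(0.0, rng.normal(speed, speed_sd))
        xy[i] = xy[i - 1] + step * np.array([np.cos(heading), np.sin(heading)])
    return xy

track = simulate_crawl()
print("net displacement:", round(float(np.linalg.norm(track[-1] - track[0])), 1))
```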
An inverse approach to determining spatially varying arterial compliance using ultrasound imaging
NASA Astrophysics Data System (ADS)
Mcgarry, Matthew; Li, Ronny; Apostolakis, Iason; Nauleau, Pierre; Konofagou, Elisa E.
2016-08-01
The mechanical properties of arteries are implicated in a wide variety of cardiovascular diseases, many of which are expected to involve a strong spatial variation in properties that can be depicted by diagnostic imaging. A pulse wave inverse problem (PWIP) is presented, which can produce spatially resolved estimates of vessel compliance from ultrasound measurements of the vessel wall displacements. The 1D equations governing pulse wave propagation in a flexible tube are parameterized by the spatially varying properties, discrete cosine transform components of the inlet pressure boundary conditions, a viscous loss constant and a resistance outlet boundary condition. Gradient descent optimization is used to fit displacements from the model to the measured data by updating the model parameters. Inversion of simulated data showed that the PWIP can accurately recover the correct compliance distribution and inlet pressure under realistic conditions, even with high simulated measurement noise. Silicone phantoms with known compliance contrast were imaged with a clinical ultrasound system. The PWIP produced spatially and quantitatively accurate maps of the phantom compliance compared to independent static property estimates and the known locations of stiff inclusions (which were as small as 7 mm). The PWIP is necessary for these phantom experiments because the spatiotemporal resolution, measurement noise and compliance contrast do not allow accurate tracking of the pulse wave velocity using traditional approaches (e.g. 50% upstroke markers). Results from simulations indicate that reflections generated from material interfaces may negatively affect wave velocity estimates, whereas these reflections are accounted for in the PWIP and do not cause problems.
Recent Progress in the Remote Detection of Vapours and Gaseous Pollutants.
ERIC Educational Resources Information Center
Moffat, A. J.; And Others
Work has been continuing on the correlation spectrometry techniques described at previous remote sensing symposiums. Advances in the techniques are described which enable accurate quantitative measurements of diffused atmospheric gases to be made using controlled light sources, accurate quantitative measurements of gas clouds relative to…
The Sense of Confidence during Probabilistic Learning: A Normative Account.
Meyniel, Florent; Schlunegger, Daniel; Dehaene, Stanislas
2015-06-01
Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable "feeling of knowing" or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. This ability cannot be reduced to simple heuristics, it seems instead a core property of the learning process.
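In the stationary special case, the link between an estimate and its confidence can be made concrete with a Beta posterior over a transition probability; the full model in the paper additionally handles unexpected changes in the environment, so this is only a simplified sketch:

```python
import numpy as np
from scipy.stats import beta

# After observing a transitions of one kind and b of the other, the posterior
# over the transition probability is Beta(a+1, b+1). Report the posterior mean
# as the estimate and, as one possible confidence measure, the negative log
# posterior standard deviation (higher = more confident).
a, b = 14, 6
post = beta(a + 1, b + 1)
print(f"estimate = {post.mean():.3f}, confidence = {-np.log(post.std()):.2f}")
```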
Zhang, Zhijun; Ashraf, Muhammad; Sahn, David J; Song, Xubo
2014-05-01
Quantitative analysis of cardiac motion is important for evaluation of heart function. Three-dimensional (3D) echocardiography is among the most frequently used imaging modalities for motion estimation because it is convenient, real-time, low-cost, and nonionizing. However, motion estimation from 3D echocardiographic sequences is still a challenging problem due to low image quality and image corruption by noise and artifacts. The authors have developed a temporally diffeomorphic motion estimation approach in which the velocity field, instead of the displacement field, is optimized. The optimal velocity field optimizes a novel similarity function, which the authors call the intensity consistency error, defined over multiple consecutive frames evolved to each time point. The optimization problem is solved using the steepest descent method. Experiments with simulated datasets, images of an ex vivo rabbit phantom, images of in vivo open-chest pig hearts, and healthy human images were used to validate the authors' method. Tests with simulated and real cardiac sequences showed that the authors' method is more accurate than other competing temporal diffeomorphic methods. Tests with sonomicrometry showed that the tracked crystal positions are in good agreement with ground truth and that the authors' method has higher accuracy than the temporal diffeomorphic free-form deformation (TDFFD) method. Validation with an open-access human cardiac dataset showed that the authors' method has smaller feature tracking errors than both TDFFD and frame-to-frame methods. The authors proposed a diffeomorphic motion estimation method with temporal smoothness obtained by constraining the velocity field to have maximum local intensity consistency within multiple consecutive frames. The estimated motion using the authors' method has good temporal consistency and is more accurate than other temporally diffeomorphic motion estimation methods.
Fakhri, Georges El
2011-01-01
82Rb cardiac PET allows the assessment of myocardial perfusion using a column generator in clinics that lack a cyclotron. We and others have previously shown that quantitation of myocardial blood flow (MBF) and coronary flow reserve (CFR) is feasible using dynamic 82Rb PET and factor and compartment analyses. The aim of the present work was to determine the intra- and inter-observer variability of MBF estimation using 82Rb PET, as well as the reproducibility of our generalized factor + compartment analysis methodology for estimating MBF, and to assess its accuracy by comparing, in the same subjects, 82Rb estimates of MBF to those obtained using 13N-ammonia. Methods: Twenty-two subjects were included in the reproducibility study and twenty subjects in the validation study. Patients were injected with 60 ± 5 mCi of 82Rb and imaged dynamically for 6 minutes at rest and during dipyridamole stress. Left and right ventricular (LV+RV) time-activity curves were estimated by GFADS and used as input to a 2-compartment kinetic analysis that estimates parametric maps of myocardial tissue extraction (K1) and egress (k2), as well as LV+RV contributions (fv, rv). Results: Our results show excellent reproducibility of the quantitative dynamic approach itself, with coefficients of repeatability of 1.7% for estimation of MBF at rest, 1.4% for MBF at peak stress, and 2.8% for CFR estimation. The inter-observer reproducibility between the four observers who participated in this study was also very good, with correlation coefficients greater than 0.87 between any two given observers when estimating coronary flow reserve. The reproducibility of MBF in repeated 82Rb studies was good at rest and excellent at peak stress (r2=0.835). Furthermore, the slope of the correlation line was very close to 1 when estimating stress MBF and CFR in repeated 82Rb studies. The correlation between myocardial flow estimates obtained at rest and during peak stress in 82Rb and 13N-ammonia studies was very good at rest (r2=0.843) and stress (r2=0.761). The Bland-Altman plots show no significant presence of proportional error at rest or stress, nor a dependence of the variations on the amplitude of the myocardial blood flow at rest or stress. A small systematic overestimation of 13N-ammonia MBF was observed with 82Rb at rest (0.129 ml/g/min) and the opposite, i.e., underestimation, at stress (0.22 ml/g/min). Conclusions: Our results show that absolute quantitation of myocardial blood flow is reproducible and accurate with 82Rb dynamic cardiac PET as compared to 13N-ammonia. The reproducibility of the quantitation approach itself was very good, as was the inter-observer reproducibility. PMID:19525467
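A sketch of the repeated-measures summary used in studies like this one, assuming paired rest-MBF estimates from two sessions; the numbers are toy values and the paper's exact repeatability definition may differ:

```python
import numpy as np

def repeatability(test, retest):
    """Bland-Altman style summary for paired flow estimates: mean difference
    (bias) and the coefficient of repeatability (1.96 * SD of the differences)."""
    d = np.asarray(test, float) - np.asarray(retest, float)
    return {"bias": float(d.mean()),
            "coeff_repeatability": float(1.96 * d.std(ddof=1))}

mbf_1 = np.array([0.92, 1.10, 0.85, 1.30, 0.98])   # toy rest MBF, ml/g/min
mbf_2 = np.array([0.95, 1.05, 0.88, 1.28, 1.01])
print(repeatability(mbf_1, mbf_2))
```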
UV Spectrophotometric Method for Estimation of Polypeptide-K in Bulk and Tablet Dosage Forms
NASA Astrophysics Data System (ADS)
Kaur, P.; Singh, S. Kumar; Gulati, M.; Vaidya, Y.
2016-01-01
An analytical method for estimation of polypeptide-k using UV spectrophotometry has been developed and validated for bulk as well as tablet dosage form. The developed method was validated for linearity, precision, accuracy, specificity, robustness, detection, and quantitation limits. The method has shown good linearity over the range from 100.0 to 300.0 μg/ml with a correlation coefficient of 0.9943. The percentage recovery of 99.88% showed that the method was highly accurate. The precision demonstrated relative standard deviation of less than 2.0%. The LOD and LOQ of the method were found to be 4.4 and 13.33, respectively. The study established that the proposed method is reliable, specific, reproducible, and cost-effective for the determination of polypeptide-k.
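The detection and quantitation limits quoted above are conventionally derived from the calibration line as LOD = 3.3σ/S and LOQ = 10σ/S, with S the slope and σ the residual standard deviation of the fit; a sketch with made-up calibration data (not the study's measurements):

```python
import numpy as np

conc = np.array([100.0, 150.0, 200.0, 250.0, 300.0])    # ug/mL, toy calibration
absorb = np.array([0.210, 0.318, 0.422, 0.531, 0.638])  # toy absorbances

slope, intercept = np.polyfit(conc, absorb, 1)
resid = absorb - (slope * conc + intercept)
sigma = resid.std(ddof=2)                   # residual SD of the linear fit
print("LOD =", 3.3 * sigma / slope, "LOQ =", 10 * sigma / slope)
```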
Identification and Quantitation of Flavanols and Proanthocyanidins in Foods: How Good are the Datas?
Kelm, Mark A.; Hammerstone, John F.; Schmitz, Harold H.
2005-01-01
Evidence suggesting that dietary polyphenols, flavanols, and proanthocyanidins in particular offer significant cardiovascular health benefits is rapidly increasing. Accordingly, reliable and accurate methods are needed to provide qualitative and quantitative food composition data necessary for high quality epidemiological and clinical research. Measurements for flavonoids and proanthocyanidins have employed a range of analytical techniques, with various colorimetric assays still being popular for estimating total polyphenolic content in foods and other biological samples despite advances made with more sophisticated analyses. More crudely, estimations of polyphenol content as well as antioxidant activity are also reported with values relating to radical scavenging activity. High-performance liquid chromatography (HPLC) is the method of choice for quantitative analysis of individual polyphenols such as flavanols and proanthocyanidins. Qualitative information regarding proanthocyanidin structure has been determined by chemical methods such as thiolysis and by HPLC-mass spectrometry (MS) techniques at present. The lack of appropriate standards is the single most important factor that limits the aforementioned analyses. However, with ever expanding research in the arena of flavanols, proanthocyanidins, and health and the importance of their future inclusion in food composition databases, the need for standards becomes more critical. At present, sufficiently well-characterized standard material is available for selective flavanols and proanthocyanidins, and construction of at least a limited food composition database is feasible. PMID:15712597
Xu, Yihua; Pitot, Henry C
2006-03-01
In studies of the quantitative stereology of rat hepatocarcinogenesis, we have used image analysis technology (automatic particle analysis) to obtain data such as liver tissue area, the size and location of altered hepatic focal lesions (AHF), and nuclei counts. These data are then used for three-dimensional estimation of AHF occurrence and nuclear labeling index analysis. These are important parameters for quantitative studies of carcinogenesis, for screening and classifying carcinogens, and for risk estimation. To take such measurements, structures or cells of interest must be separated from the other components based on differences in color and density. Common background problems in captured sample images, such as uneven illumination or color shading, can cause severe measurement errors. Two application programs (BK_Correction and Pixel_Separator) have been developed to solve these problems. With BK_Correction, common background problems such as incorrect color temperature settings, color shading, and uneven illumination can be corrected. With Pixel_Separator, different types of objects can be separated from each other by color, as with the different colors seen in immunohistochemically stained slides. The resulting images of objects separated from the other components are then ready for particle analysis. Objects that have the same darkness but different colors can be accurately differentiated in a grayscale image analysis system after application of these programs.
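A minimal flat-field-style correction illustrates the kind of uneven-illumination problem a tool like BK_Correction addresses; this is a generic approach, not the program's actual algorithm:

```python
import numpy as np

def correct_background(image, background):
    """Divide by a reference background image and rescale to remove a smooth
    illumination gradient before particle analysis."""
    background = np.maximum(background.astype(float), 1e-6)  # avoid divide-by-zero
    corrected = image.astype(float) / background
    return corrected * background.mean()

img = np.random.default_rng(0).uniform(80, 120, size=(64, 64))
shading = np.linspace(0.7, 1.3, 64)[None, :] * np.ones((64, 1))  # synthetic gradient
print(correct_background(img * shading, shading).std() < (img * shading).std())
```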
Kamble, Bhagyashree; Gupta, Ankur; Patil, Dada; Janrao, Shirish; Khatal, Laxman; Duraiswamy, B
2013-02-01
Gymnema sylvestre, with gymnemic acids as its pharmacologically active constituents, is a popular ayurvedic herb that has been used to treat diabetes, as a remedy for cough, and as a diuretic. However, very few analytical methods are available for quality control of this herb and its marketed formulations. The objective was to develop and validate a new, rapid, sensitive and selective HPLC-ESI (electrospray ionisation)-MS/MS method for quantitative estimation of gymnemagenin in G. sylvestre and its marketed formulations. An HPLC-ESI-MS/MS method in multiple reaction monitoring mode was used for quantitation of gymnemagenin. Separation was carried out on a Luna C-18 column using gradient elution of water and methanol (with 0.1% formic acid and 0.3% ammonia). The developed method was validated as per International Conference on Harmonisation Guideline ICH-Q2B and found to be accurate, precise and linear over a relatively wide range of concentrations (5.280-305.920 ng/mL). Gymnemagenin contents ranged from 0.056 ± 0.002 to 4.77 ± 0.59% w/w in G. sylvestre and its marketed formulations. The established method is simple and rapid, offers high sample throughput, and can be used as a tool for quality control of G. sylvestre and its formulations. Copyright © 2012 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Zargari, Abolfazl; Du, Yue; Thai, Theresa C.; Gunderson, Camille C.; Moore, Kathleen; Mannel, Robert S.; Liu, Hong; Zheng, Bin; Qiu, Yuchen
2018-02-01
The objective of this study is to investigate the performance of global and local features in estimating the characteristics of highly heterogeneous metastatic tumours, for accurately predicting the treatment effectiveness of advanced-stage ovarian cancer patients. To achieve this, a quantitative image analysis scheme was developed to estimate a total of 103 features from three different groups: shape and density, wavelet, and Gray Level Difference Method (GLDM) features. Shape and density features are global features, which are applied directly to the entire target image; wavelet and GLDM features are local features, which are applied to divided blocks of the target image. To assess performance, the new scheme was applied to a retrospective dataset containing 120 recurrent, high-grade ovarian cancer patients. The results indicate that the three best-performing features are skewness, root-mean-square (rms), and the mean of local GLDM texture, indicating the importance of integrating local features. In addition, the average predictive performance is comparable among the three categories. This investigation concluded that local features contain at least as much tumour heterogeneity information as global features, which may be meaningful in improving the predictive performance of quantitative image markers for the diagnosis and prognosis of ovarian cancer patients.
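A simplified version of the three best-performing features named above, with a crude neighbour-difference statistic standing in for a full GLDM implementation (all values illustrative):

```python
import numpy as np
from scipy.stats import skew

def simple_tumour_features(roi):
    """Intensity skewness, root-mean-square, and a mean absolute grey-level
    difference to the right-hand neighbour as a rough local-texture proxy."""
    x = roi.astype(float).ravel()
    gldm_like = np.abs(np.diff(roi.astype(float), axis=1)).mean()
    return {"skewness": float(skew(x)),
            "rms": float(np.sqrt(np.mean(x ** 2))),
            "local_texture": float(gldm_like)}

roi = np.random.default_rng(1).gamma(2.0, 20.0, size=(32, 32))  # toy tumour block
print(simple_tumour_features(roi))
```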
Chen, Li-Li; Xu, Tian-Min; Jiang, Jiu-Hui; Zhang, Xing-Zhong; Lin, Jiu-Xiang
2008-12-01
The purpose of this study was to establish a quantitative cervical vertebral maturation (CVM) system for adolescents with normal occlusion. Mixed longitudinal data were used. The subjects included 87 children and adolescents from 8 to 18 years old with normal occlusion (32 boys, 55 girls) selected from 901 candidates. Sequential lateral cephalograms and hand-wrist films were taken once a year for 6 years. The lateral cephalograms of all subjects were divided into 11 maturation groups according to the Fishman skeletal maturity indicators. The morphologic characteristics of the second, third, and fourth cervical vertebrae at the 11 developmental stages were measured and analyzed. Three characteristic parameters (H4/W4, AH3/PH3, @2) were selected to determine the classification of CVM. With these 3 morphologic variables, the quantitative CVM system comprising 4 maturational stages was established, along with an equation that can accurately estimate the maturation of the cervical vertebrae: CVM stage = -4.13 + 3.57 × H4/W4 + 4.07 × AH3/PH3 + 0.03 × @2. The quantitative CVM method is an efficient, objective, and relatively simple approach to assess the level of skeletal maturation during adolescence.
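The reported regression can be applied directly; the input values below are illustrative measurements, with @2 taken to be the angular parameter of the second cervical vertebra as defined in the study:

```python
def cvm_stage(h4_w4, ah3_ph3, angle2_deg):
    """Quantitative CVM estimate from the regression reported above:
    CVM stage = -4.13 + 3.57*(H4/W4) + 4.07*(AH3/PH3) + 0.03*@2."""
    return -4.13 + 3.57 * h4_w4 + 4.07 * ah3_ph3 + 0.03 * angle2_deg

print(round(cvm_stage(0.85, 0.60, 95.0), 2))   # illustrative inputs
```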
Analysis and Modeling of Ground Operations at Hub Airports
NASA Technical Reports Server (NTRS)
Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.
2000-01-01
Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model aimed at representing airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.
Quantification of electrical field-induced flow reversal in a microchannel.
Pirat, C; Naso, A; van der Wouden, E J; Gardeniers, J G E; Lohse, D; van den Berg, A
2008-06-01
We characterize the electroosmotic flow in a microchannel with field effect flow control. High resolution measurements of the flow velocity, performed by micro particle image velocimetry, evidence the flow reversal induced by a local modification of the surface charge due to the presence of the gate. The shape of the microchannel cross-section is accurately extracted from these measurements. Experimental velocity profiles show a quantitative agreement with numerical results accounting for this exact shape. Analytical predictions assuming a rectangular cross-section are found to give a reasonable estimate of the velocity far enough from the walls.
NASA Astrophysics Data System (ADS)
Jablonski, A.
2018-01-01
Growing availability of synchrotron facilities stimulates interest in quantitative applications of hard X-ray photoemission spectroscopy (HAXPES) using linearly polarized radiation. An advantage of this approach is the possibility of continuously varying the radiation energy, which makes it possible to control the sampling depth of a measurement. Quantitative applications are based on an accurate and reliable theory relating the measured spectral features to the needed characteristics of the surface region of solids. A major complication in the case of polarized radiation is the involved structure of the photoemission cross-section for hard X-rays. In the present work, details of the relevant formalism are described, and algorithms implementing this formalism for different experimental configurations are proposed. The photoelectron signal intensity may be considerably affected by variation in the positioning of the polarization vector with respect to the surface plane. This information is critical for any quantitative application of HAXPES with polarized X-rays. Different quantitative applications based on photoelectrons with energies up to 10 keV are considered here: (i) determination of surface composition, (ii) estimation of sampling depth, and (iii) measurement of an overlayer thickness. Parameters facilitating these applications (mean escape depths, information depths, effective attenuation lengths) were calculated for a number of photoelectron lines in four elemental solids (Si, Cu, Ag and Au) in different experimental configurations and locations of the polarization vector. One of the considered configurations, with the polarization vector located in a plane perpendicular to the surface, is recommended for quantitative applications of HAXPES. In this configuration, the considered parameters were found to vary weakly over the range of photoelectron emission angles from normal emission to about 50° with respect to the surface normal. The averaged values of the mean escape depth and effective attenuation length were approximated with accurate predictive formulas. The predicted effective attenuation lengths were compared with published values; the major discrepancies observed can be ascribed to a possibly discontinuous structure of the deposited overlayer.
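For the overlayer-thickness application, the standard exponential attenuation model gives d = EAL · cos(θ) · ln(I0/I); a sketch with illustrative numbers, where the EAL itself would come from calculations like those described above:

```python
import numpy as np

def overlayer_thickness(I0_substrate, I_substrate, eal_nm, theta_deg):
    """Thickness of a uniform overlayer from attenuation of a substrate
    photoelectron line, assuming I = I0 * exp(-d / (EAL * cos(theta)))."""
    return eal_nm * np.cos(np.radians(theta_deg)) * np.log(I0_substrate / I_substrate)

# Toy values: substrate signal drops to 55% of its clean-surface intensity.
print(f"{overlayer_thickness(1.00, 0.55, eal_nm=2.1, theta_deg=30.0):.2f} nm")
```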
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction, and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction; the quantification error relative to a dose-calibrator-derived measurement was <1%, -26%, and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction than with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since the factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
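The TEW scatter estimate mentioned above is a simple trapezoidal rule over two narrow windows flanking the photopeak; the window widths and counts below are toy values for the 364 keV I-131 photopeak, not acquisition settings from this study:

```python
def tew_scatter(counts_lower, counts_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window estimate of scatter in the photopeak window:
    scatter ~= (C_low/W_low + C_up/W_up) * W_peak / 2, widths in keV."""
    return 0.5 * w_peak * (counts_lower / w_lower + counts_upper / w_upper)

# Toy numbers: 6 keV flanking windows around a 20% (~73 keV) photopeak window.
print(tew_scatter(counts_lower=1200, counts_upper=800,
                  w_lower=6.0, w_upper=6.0, w_peak=72.8))
```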
Manabe, Sho; Morimoto, Chie; Hamano, Yuya; Fujimoto, Shuntaro; Tamaki, Keiji
2017-01-01
In criminal investigations, forensic scientists need to evaluate DNA mixtures. The estimation of the number of contributors and evaluation of the contribution of a person of interest (POI) from these samples are challenging. In this study, we developed a new open-source software "Kongoh" for interpreting DNA mixture based on a quantitative continuous model. The model uses quantitative information of peak heights in the DNA profile and considers the effect of artifacts and allelic drop-out. By using this software, the likelihoods of 1-4 persons' contributions are calculated, and the most optimal number of contributors is automatically determined; this differs from other open-source software. Therefore, we can eliminate the need to manually determine the number of contributors before the analysis. Kongoh also considers allele- or locus-specific effects of biological parameters based on the experimental data. We then validated Kongoh by calculating the likelihood ratio (LR) of a POI's contribution in true contributors and non-contributors by using 2-4 person mixtures analyzed through a 15 short tandem repeat typing system. Most LR values obtained from Kongoh during true-contributor testing strongly supported the POI's contribution even for small amounts or degraded DNA samples. Kongoh correctly rejected a false hypothesis in the non-contributor testing, generated reproducible LR values, and demonstrated higher accuracy of the estimated number of contributors than another software based on the quantitative continuous model. Therefore, Kongoh is useful in accurately interpreting DNA evidence like mixtures and small amounts or degraded DNA samples.
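The quantity such software reports is a likelihood ratio. Stripped of the continuous peak-height model that actually produces the two probabilities, the final step reduces to the following, with placeholder numbers:

```python
def likelihood_ratio(p_evidence_given_hp, p_evidence_given_hd):
    """LR = P(E | POI is a contributor) / P(E | POI is not a contributor).
    In practice both probabilities come from the continuous model of peak
    heights, drop-out, and artifacts; the values here are placeholders."""
    return p_evidence_given_hp / p_evidence_given_hd

print(likelihood_ratio(3.2e-12, 4.5e-18))   # LR >> 1 supports the POI's contribution
```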
A method for modeling bias in a person's estimates of likelihoods of events
NASA Technical Reports Server (NTRS)
Nygren, Thomas E.; Morera, Osvaldo
1988-01-01
It is of practical importance in decision situations involving risk to train individuals to transform uncertainties into subjective probability estimates that are both accurate and unbiased. We have found that in decision situations involving risk, people often introduce subjective bias in their estimation of the likelihoods of events depending on whether the possible outcomes are perceived as being good or bad. Until now, however, the successful measurement of individual differences in the magnitude of such biases has not been attempted. In this paper we illustrate a modification of a procedure originally outlined by Davidson, Suppes, and Siegel (3) to allow for a quantitatively-based methodology for simultaneously estimating an individual's subjective utility and subjective probability functions. The procedure is now an interactive computer-based algorithm, DSS, that allows for the measurement of biases in probability estimation by obtaining independent measures of two subjective probability functions (S+ and S-) for winning (i.e., good outcomes) and for losing (i.e., bad outcomes) respectively for each individual, and for different experimental conditions within individuals. The algorithm and some recent empirical data are described.
Reilhac, Anthonin; Merida, Ines; Irace, Zacharie; Stephenson, Mary; Weekes, Ashley; Chen, Christopher; Totman, John; Townsend, David W; Fayad, Hadi; Costes, Nicolas
2018-04-13
Objective: Head motion occurring during brain PET studies leads to image blurring and to bias in measured local quantities. Our first objective was to implement an accurate list-mode-based rigid motion correction method for PET data acquired with the mMR synchronous Positron Emission Tomography/Magnetic Resonance (PET/MR) scanner. Our second objective was to optimize the correction for [11C]-PIB scans using simulated and actual data with well-controlled motions. Results: An efficient list-mode based motion correction approach has been implemented, fully optimized, and validated using simulated as well as actual PET data. The average spatial resolution loss induced by inaccuracies in motion parameter estimates, as well as by the rebinning process, was estimated to correspond to a 1 mm increase in Full Width at Half Maximum (FWHM), with motion parameters estimated directly from the PET data at a temporal frequency of 20 s. The results show that the method can be safely applied to [11C]-PIB scans, allowing almost complete removal of motion-induced artifacts. The application of the correction method to a large cohort of [11C]-PIB scans led to the following observations: i) more than 21% of the scans were affected by a motion greater than 10 mm (39% for subjects with Mini-Mental State Examination (MMSE) scores below 20), and ii) the correction led to quantitative changes in Alzheimer-specific cortical regions of up to 30%. Conclusion: The rebinner allows accurate motion correction at the cost of a minimal resolution reduction. The application of the correction to a large cohort of [11C]-PIB scans confirmed the necessity of systematically correcting for motion to obtain quantitative results. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Krajbich, Ian; Rangel, Antonio
2011-08-16
How do we make decisions when confronted with several alternatives (e.g., on a supermarket shelf)? Previous work has shown that accumulator models, such as the drift-diffusion model, can provide accurate descriptions of the psychometric data for binary value-based choices, and that the choice process is guided by visual attention. However, the computational processes used to make choices in more complicated situations involving three or more options are unknown. We propose a model of trinary value-based choice that generalizes what is known about binary choice, and test it using an eye-tracking experiment. We find that the model provides a quantitatively accurate description of the relationship between choice, reaction time, and visual fixation data using the same parameters that were estimated in previous work on binary choice. Our findings suggest that the brain uses similar computational processes to make binary and trinary choices.
Hu, Xinyao; Zhao, Jun; Peng, Dongsheng; Sun, Zhenglong; Qu, Xingda
2018-02-01
Postural control is a complex skill based on the interaction of dynamic sensorimotor processes, and can be challenging for people with deficits in sensory functions. The foot plantar center of pressure (COP) has often been used for quantitative assessment of postural control. Previously, the foot plantar COP was mainly measured by force plates or complicated and expensive insole-based measurement systems. Although some low-cost instrumented insoles have been developed, their ability to accurately estimate the foot plantar COP trajectory was not robust. In this study, a novel individual-specific nonlinear model was proposed to estimate the foot plantar COP trajectories with an instrumented insole based on low-cost force sensitive resistors (FSRs). The model coefficients were determined by a least square error approximation algorithm. Model validation was carried out by comparing the estimated COP data with the reference data in a variety of postural control assessment tasks. We also compared our data with the COP trajectories estimated by the previously well accepted weighted mean approach. Comparing with the reference measurements, the average root mean square errors of the COP trajectories of both feet were 2.23 mm (±0.64) (left foot) and 2.72 mm (±0.83) (right foot) along the medial-lateral direction, and 9.17 mm (±1.98) (left foot) and 11.19 mm (±2.98) (right foot) along the anterior-posterior direction. The results are superior to those reported in previous relevant studies, and demonstrate that our proposed approach can be used for accurate foot plantar COP trajectory estimation. This study could provide an inexpensive solution to fall risk assessment in home settings or community healthcare center for the elderly. It has the potential to help prevent future falls in the elderly.
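The weighted-mean baseline that the individual-specific nonlinear model is compared against is straightforward; the sensor coordinates and readings below are illustrative, not the instrumented insole's actual layout:

```python
import numpy as np

def cop_weighted_mean(pressures, positions):
    """Weighted-mean COP estimate: COP = sum(p_i * x_i) / sum(p_i).
    pressures: (n,) FSR readings; positions: (n, 2) sensor coordinates in mm."""
    p = np.asarray(pressures, float)
    x = np.asarray(positions, float)
    return (p[:, None] * x).sum(axis=0) / p.sum()

sensor_xy = np.array([[20.0, 40.0], [40.0, 120.0], [60.0, 200.0], [30.0, 230.0]])
readings = np.array([5.0, 8.0, 2.0, 1.0])
print(cop_weighted_mean(readings, sensor_xy))   # [x_ML, y_AP] in mm
```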
Dzubak, Allison L.; Krogel, Jaron T.; Reboredo, Fernando A.
2017-07-10
The necessarily approximate evaluation of non-local pseudopotentials in diffusion Monte Carlo (DMC) introduces localization errors. In this paper, we estimate these errors for two families of non-local pseudopotentials for the first-row transition metal atoms Sc–Zn using an extrapolation scheme and multideterminant wavefunctions. Sensitivities of the error in the DMC energies to the Jastrow factor are used to estimate the quality of two sets of pseudopotentials with respect to locality error reduction. The locality approximation and T-moves scheme are also compared for accuracy of total energies. After estimating the removal of the locality and T-moves errors, we present the range of fixed-node energies between a single determinant description and a full valence multideterminant complete active space expansion. The results for these pseudopotentials agree with previous findings that the locality approximation is less sensitive to changes in the Jastrow than T-moves, yielding more accurate total energies, however not necessarily more accurate energy differences. For both the locality approximation and T-moves, we find decreasing Jastrow sensitivity moving left to right across the series Sc–Zn. The recently generated pseudopotentials of Krogel et al. reduce the magnitude of the locality error compared with the pseudopotentials of Burkatzki et al. by an average estimated 40% using the locality approximation. The estimated locality error is equivalent for both sets of pseudopotentials when T-moves is used. Finally, for the Sc–Zn atomic series with these pseudopotentials, and using up to three-body Jastrow factors, our results suggest that the fixed-node error is dominant over the locality error when a single determinant is used.
NASA Astrophysics Data System (ADS)
Michalik, Daniel; Lindegren, Lennart; Hobbs, David; Lammers, Uwe; Yamada, Yoshiyuki
2013-02-01
Starting in 2013, Gaia will deliver highly accurate astrometric data, which eventually will supersede most other stellar catalogues in accuracy and completeness. It is, however, limited to observations from magnitude 6 to 20 and will therefore not include the brightest stars. Nano-JASMINE, an ultrasmall Japanese astrometry satellite, will observe these bright stars, but with much lower accuracy. Hence, the Hipparcos catalogue from 1997 will likely remain the main source of accurate distances to bright nearby stars. We are investigating how this might be improved by optimally combining data from all three missions through a joint astrometric solution. This would take advantage of the unique features of each mission: the historic bright-star measurements of Hipparcos, the updated bright-star observations of Nano-JASMINE, and the very accurate reference frame of Gaia. The long temporal baseline between the missions provides additional benefits for the determination of proper motions and binary detection, which indirectly improve the parallax determination further. We present a quantitative analysis of the expected gains based on simulated data for all three missions.
NASA Astrophysics Data System (ADS)
Cifelli, R.; Chen, H.; Chandra, C. V.
2016-12-01
The San Francisco Bay area is home to over 5 million people. In February 2016, the area also hosted the NFL Super Bowl, bringing additional people and focusing national attention on the region. Based on the El Nino forecast, public officials expressed concern about heavy rainfall and flooding, with the potential for threats to public safety, costly flood damage to infrastructure, negative impacts on water quality (e.g., combined sewer overflows), and major disruptions in transportation. Mitigation of these negative impacts requires accurate precipitation monitoring (quantitative precipitation estimation, QPE) and prediction (including radar nowcasting). The proximity to terrain and maritime conditions, as well as the siting of existing NEXRAD radars, are all challenges to providing accurate, short-term, near-surface rainfall estimates in the Bay Area urban region. As part of a collaborative effort between the National Oceanic and Atmospheric Administration (NOAA) Earth System Research Laboratory, Colorado State University (CSU), and the Santa Clara Valley Water District (SCVWD), an X-band dual-polarization radar was deployed in Santa Clara Valley in February 2016 to provide support for the National Weather Service during the Super Bowl and NOAA's El Nino Rapid Response field campaign. This high-resolution radar was deployed on the roof of one of the buildings at the Penitencia Water Treatment Plant. The main goal was to provide detailed precipitation information for use in weather forecasting and to assist the water district in its ability to predict rainfall and streamflow with real-time rainfall data over Santa Clara County, especially during a potentially large El Nino year. (A figure in the original shows the radar's coverage map and sample reflectivity observations from 00:04 UTC on March 6, 2016.) This paper presents results from a pilot study from February to May 2016 demonstrating the use of X-band weather radar for quantitative precipitation estimation (QPE) in the Bay Area. The radar rainfall products are evaluated with rain gauge observations collected by SCVWD; the comparison with gauges shows the excellent performance of X-band radar for rainfall monitoring in the Bay Area.
The role of lung imaging in pulmonary embolism
Mishkin, Fred S.; Johnson, Philip M.
1973-01-01
The advantages of lung scanning in suspected pulmonary embolism are its diagnostic sensitivity, simplicity and safety. The ability to delineate regional pulmonary ischaemia, to quantitate its extent and to follow its response to therapy provides valuable clinical data available by no other simple means. The negative scan effectively excludes pulmonary embolism but, although certain of its features favour the diagnosis of embolism, the positive scan inherently lacks specificity and requires angiographic confirmation when embolectomy, caval plication or infusion of a thrombolytic agent are contemplated. The addition of simple ventilation imaging techniques with radioxenon overcomes this limitation by providing accurate analog estimation or digital quantitation of regional ventilation:perfusion (V/Q) ratios fundamental to understanding the pathophysiologic consequences of embolism and other diseases of the lung. PMID:4602128
Imaging spectroscopy of solar radio burst fine structures.
Kontar, E P; Yu, S; Kuznetsov, A A; Emslie, A G; Alcock, B; Jeffrey, N L S; Melnik, V N; Bian, N H; Subramanian, P
2017-11-15
Solar radio observations provide a unique diagnostic of the outer solar atmosphere. However, the inhomogeneous turbulent corona strongly affects the propagation of the emitted radio waves, so decoupling the intrinsic properties of the emitting source from the effects of radio wave propagation has long been a major challenge in solar physics. Here we report quantitative spatial and frequency characterization of solar radio burst fine structures observed with the Low Frequency Array, an instrument with high-time resolution that also permits imaging at scales much shorter than those corresponding to radio wave propagation in the corona. The observations demonstrate that radio wave propagation effects, and not the properties of the intrinsic emission source, dominate the observed spatial characteristics of radio burst images. These results permit more accurate estimates of source brightness temperatures, and open opportunities for quantitative study of the mechanisms that create the turbulent coronal medium through which the emitted radiation propagates.
Quantitative dose-response assessment of inhalation exposures to toxic air pollutants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarabek, A.M.; Foureman, G.L.; Gift, J.S.
1997-12-31
Implementation of the 1990 Clean Air Act Amendments, including evaluation of residual risks, requires accurate human health risk estimates for both acute and chronic inhalation exposures to toxic air pollutants. The U.S. Environmental Protection Agency's National Center for Environmental Assessment, Research Triangle Park, NC, has a research program that addresses several key issues in the development of improved quantitative approaches to dose-response assessment. This paper describes three projects underway in the program. Project A describes a Bayesian approach that was developed to base dose-response estimates on combined data sets and to express these estimates as probability density functions. A categorical regression model has been developed that allows for the combination of all available acute data, with toxicity expressed as severity categories (e.g., mild, moderate, severe), and with both duration and concentration as governing factors. Project C encompasses two refinements to uncertainty factors (UFs) often applied to extrapolate dose-response estimates from laboratory animal data to human equivalent concentrations. Traditional UFs have been based on analyses of oral administration and may not be appropriate for extrapolation of inhalation exposures. Refinement of the UF applied to account for the use of subchronic rather than chronic data was based on an analysis of data from inhalation exposures (Project C-1). Mathematical modeling using the BMD approach was used to calculate the dose-response estimates for comparison between the subchronic and chronic data, so that the estimates were not subject to dose-spacing or sample-size variability. The second UF that was refined for extrapolation of inhalation data was the adjustment for the use of a LOAEL rather than a NOAEL (Project C-2).
Using beta binomials to estimate classification uncertainty for ensemble models.
Clark, Robert D; Liang, Wenkel; Lee, Adam C; Lawless, Michael S; Fraczkiewicz, Robert; Waldman, Marvin
2014-01-01
Quantitative structure-activity relationship (QSAR) models have enormous potential for reducing drug discovery and development costs as well as the need for animal testing. Great strides have been made in estimating their overall reliability, but to fully realize that potential, researchers and regulators need to know how confident they can be in individual predictions. Submodels in an ensemble model that have been trained on different subsets of a shared training pool represent multiple samples of the model space, and the degree of agreement among them contains information on the reliability of ensemble predictions. For artificial neural network ensembles (ANNEs) using two different methods for determining ensemble classification (one using vote tallies, the other averaging individual network outputs), we have found that the distribution of predictions across positive vote tallies can be reasonably well-modeled as a beta binomial distribution, as can the distribution of errors. Together, these two distributions can be used to estimate the probability that a given predictive classification will be in error. Large data sets comprising logP, Ames mutagenicity, and CYP2D6 inhibition data are used to illustrate and validate the method. The distributions of predictions and errors for the training pool accurately predicted the distributions of predictions and errors for large external validation sets, even when the numbers of positive and negative examples in the training pool were not balanced. Moreover, the likelihood of a given compound being prospectively misclassified as a function of the degree of consensus between networks in the ensemble could in most cases be estimated accurately from the fitted beta binomial distributions for the training pool. Confidence in an individual predictive classification by an ensemble model can be accurately assessed by examining the distributions of predictions and errors as a function of the degree of agreement among the constituent submodels. Further, ensemble uncertainty estimation can often be improved by adjusting the voting or classification threshold based on the parameters of the error distribution. Finally, the profiles for models whose predictive uncertainty estimates are not reliable provide clues to that effect without the need for comparison to an external test set.
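As a concrete illustration of the vote-tally modeling described above, here is a minimal sketch assuming synthetic vote counts and a maximum-likelihood fit with scipy's betabinom; the ensemble size, parameters, and Bayes-rule step are illustrative assumptions rather than the authors' exact procedure.

```python
# Hedged sketch: model the spread of positive votes across an ensemble, and
# the error counts, as beta-binomial distributions, then estimate
# P(error | k positive votes) by Bayes' rule. Data are synthetic.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

n_nets = 33  # ensemble size (number of submodels casting votes)

def fit_betabinom(counts, n):
    """Maximum-likelihood fit of (alpha, beta) to vote tallies."""
    def nll(params):
        a, b = np.exp(params)          # enforce positivity
        return -betabinom.logpmf(counts, n, a, b).sum()
    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return np.exp(res.x)

rng = np.random.default_rng(1)
votes_all = betabinom.rvs(n_nets, 2.0, 1.2, size=5000, random_state=rng)
votes_err = betabinom.rvs(n_nets, 1.1, 2.5, size=400, random_state=rng)

a_all, b_all = fit_betabinom(votes_all, n_nets)
a_err, b_err = fit_betabinom(votes_err, n_nets)
p_err_overall = len(votes_err) / len(votes_all)   # overall error rate

# P(error | k votes) from the two fitted distributions.
k = np.arange(n_nets + 1)
p_k = betabinom.pmf(k, n_nets, a_all, b_all)
p_k_err = betabinom.pmf(k, n_nets, a_err, b_err)
p_err_given_k = p_err_overall * p_k_err / np.maximum(p_k, 1e-12)
for kk in (0, 8, 16, 24, 33):
    print(f"P(error | {kk:2d} positive votes) ~ {p_err_given_k[kk]:.3f}")
```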
Automatic detection and quantitative analysis of cells in the mouse primary motor cortex
NASA Astrophysics Data System (ADS)
Meng, Yunlong; He, Yong; Wu, Jingpeng; Chen, Shangbin; Li, Anan; Gong, Hui
2014-09-01
Neuronal cells play a very important role in metabolism regulation and mechanism control, so cell number is a fundamental determinant of brain function. By combining suitable cell-labeling approaches with recently proposed three-dimensional optical imaging techniques, whole mouse brain coronal sections can be acquired with 1-μm voxel resolution. We have developed a completely automatic pipeline to perform cell centroid detection, providing three-dimensional quantitative information on cells in the primary motor cortex of the C57BL/6 mouse. It involves four principal steps: i) preprocessing; ii) image binarization; iii) cell centroid extraction and contour segmentation; iv) laminar density estimation. Investigations of the presented method reveal promising detection accuracy in terms of recall and precision, with an average recall rate of 92.1% and an average precision rate of 86.2%. We also analyze the laminar density distribution of cells from the pial surface to the corpus callosum from the output vectorizations of detected cell centroids in mouse primary motor cortex, and find significant variations in cellular density distribution across layers. This automatic cell centroid detection approach will be beneficial for fast cell counting and accurate density estimation, as time-consuming and error-prone manual identification is avoided.
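To make the four-step pipeline concrete, here is a minimal 2D sketch using scikit-image; the synthetic image, filter settings, and depth binning are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of an automatic cell-centroid pipeline of the kind the
# abstract outlines (preprocess -> binarize -> extract centroids -> density).
import numpy as np
from skimage import filters, measure, morphology

# Synthetic 2D "section": dim background plus bright cell-like spots
# (stands in for real data; in practice a coronal section image is loaded).
rng = np.random.default_rng(2)
img = rng.normal(10.0, 2.0, size=(512, 512))
yy, xx = np.mgrid[0:512, 0:512]
for cy, cx in rng.integers(20, 492, size=(150, 2)):
    img += 50.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 3.0 ** 2))

# i) preprocessing: light Gaussian smoothing to suppress shot noise
smoothed = filters.gaussian(img, sigma=1.0)

# ii) binarization with Otsu's global threshold
binary = smoothed > filters.threshold_otsu(smoothed)
binary = morphology.remove_small_objects(binary, min_size=20)

# iii) centroid extraction from connected components
labels = measure.label(binary)
centroids = np.array([r.centroid for r in measure.regionprops(labels)])
print(f"detected {len(centroids)} candidate cells")

# iv) laminar density: bin centroid depths from the pial surface (row 0)
# toward the white matter, assuming 1 um per pixel as in the abstract.
depth_um = centroids[:, 0]
counts, edges = np.histogram(depth_um, bins=20)
width_um = img.shape[1]
for c, lo, hi in zip(counts, edges[:-1], edges[1:]):
    area_mm2 = (hi - lo) * width_um * 1e-6   # px^2 -> mm^2 at 1 um/px
    print(f"{lo:6.0f}-{hi:6.0f} um: {c / area_mm2:8.1f} cells/mm^2")
```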
Spontaneous polyploidization in cucumber.
Ramírez-Madera, Axel O; Miller, Nathan D; Spalding, Edgar P; Weng, Yiqun; Havey, Michael J
2017-07-01
This is the first quantitative estimation of spontaneous polyploidy in cucumber; we detected 2.2% polyploids in a greenhouse study and provide evidence that polyploidization is consistent with endoreduplication and is an on-going process during plant growth. Cucumber occasionally produces polyploid plants, which are problematic for growers because these plants produce misshaped fruits with non-viable seeds. In this study, we undertook the first quantitative study to estimate the relative frequency of spontaneous polyploids in cucumber. Seeds of recombinant inbred lines were produced in different environments, plants were grown in the field and greenhouse, and flow cytometry was used to establish ploidies. From 1422 greenhouse-grown plants, the overall relative frequency of spontaneous polyploidy was 2.2%. Plants possessed nuclei of different ploidies in the same leaves (mosaic) and on different parts of the same plant (chimeric). Our results provide evidence of endoreduplication and polysomaty in cucumber, and show that polyploidization is an on-going and dynamic process. There was a significant effect (p = 0.018) of seed production environment on the occurrence of polyploid plants. Seed and seedling traits were not accurate predictors of eventual polyploids, and we recommend that cucumber producers rogue plants based on stature and leaf serration to remove potential polyploids.
NASA Astrophysics Data System (ADS)
Wang, Quanzeng; Cheng, Wei-Chung; Suresh, Nitin; Hua, Hong
2016-05-01
With improved diagnostic capabilities and complex optical designs, endoscopic technologies are advancing. As one of several important optical performance characteristics, geometric distortion can negatively affect size estimation and feature identification during diagnosis. A quantitative and simple distortion evaluation method is therefore needed by both the endoscopic industry and medical device regulatory agencies, but no such method is yet available. While image correction techniques are rather mature, they depend heavily on computational power to process multidimensional image data based on complex mathematical models, making them difficult to understand. Commonly used distortion evaluation methods, such as picture height distortion (DPH) or radial distortion (DRAD), are either too simple to describe the distortion accurately or subject to the error of deriving a reference image. We developed the basic local magnification (ML) method to evaluate endoscope distortion and, based on it, ways to calculate DPH and DRAD. The method overcomes the aforementioned limitations, has clear physical meaning over the whole field of view, and can facilitate lesion size estimation during diagnosis. Most importantly, the method can help bring endoscopic technology to market and could potentially be adopted in an international endoscope standard.
Joucla, Sébastien; Franconville, Romain; Pippow, Andreas; Kloppenburg, Peter; Pouzat, Christophe
2013-08-01
Calcium imaging has become a routine technique in neuroscience for subcellular to network level investigations. The rapid progress in the development of new indicators and imaging techniques calls for dedicated, reliable analysis methods. In particular, efficient and quantitative background fluorescence subtraction routines would benefit most of the calcium imaging research field. A background-subtracted fluorescence transient estimation method that does not require any independent background measurement is therefore developed. This method is based on a fluorescence model fitted to single-trial data using a classical nonlinear regression approach. The model includes an appropriate probabilistic description of the acquisition system's noise, leading to accurate confidence intervals on all quantities of interest (background fluorescence, normalized background-subtracted fluorescence time course) when background fluorescence is homogeneous. An automatic procedure detecting background inhomogeneities inside the region of interest is also developed and is shown to be efficient on simulated data. The implementation and performance of the proposed method on experimental recordings from the mouse hypothalamus are presented in detail. This method, which applies to recordings from both single cells and bulk-stained tissues, should help improve the statistical comparison of fluorescence calcium signals between experiments and studies. Copyright © 2013 Elsevier Ltd. All rights reserved.
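The core of such an approach can be sketched in a few lines: fit a parametric fluorescence model to a single trial by nonlinear regression and derive parameter confidence intervals from the fit. The model form, signal-dependent noise, and parameter values below are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch: single-trial fit of a background-plus-transient fluorescence
# model, with 1-sigma parameter errors from the covariance of the fit.
import numpy as np
from scipy.optimize import curve_fit

def model(t, background, amplitude, t_on, tau):
    """Background plus a transient with fast rise and exponential decay."""
    rise = np.where(t >= t_on, 1.0 - np.exp(-(t - t_on) / 0.05), 0.0)
    return background + amplitude * rise * np.exp(-np.clip(t - t_on, 0, None) / tau)

rng = np.random.default_rng(3)
t = np.linspace(0.0, 2.0, 400)                 # s
truth = dict(background=120.0, amplitude=40.0, t_on=0.5, tau=0.4)
f = model(t, **truth)
# CCD-like noise: standard deviation grows with the signal level.
data = f + rng.normal(scale=np.sqrt(f) * 0.5)

popt, pcov = curve_fit(model, t, data, p0=[100.0, 30.0, 0.4, 0.3])
perr = np.sqrt(np.diag(pcov))                  # 1-sigma parameter errors
for name, v, e in zip(["background", "amplitude", "t_on", "tau"], popt, perr):
    print(f"{name:10s} = {v:7.3f} +/- {e:.3f}")

# Background-subtracted, normalized transient (dF/F) using the fitted offset.
dff = (data - popt[0]) / popt[0]
```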
Jenkins, R H; Tuma, R; Juuti, J T; Bamford, D H; Thomas, G J
1999-01-01
A novel spectrophotometric method, based upon Raman spectroscopy, has been developed for accurate quantitative determination of nucleoside triphosphate phosphohydrolase (NTPase) activity. The method relies upon simultaneous measurement in real time of the intensities of Raman marker bands diagnostic of the triphosphate (1115 cm⁻¹) and diphosphate (1085 cm⁻¹) moieties of the NTPase substrate and product, respectively. The reliability of the method is demonstrated for the NTPase-active RNA-packaging enzyme (protein P4) of bacteriophage phi6, for which comparative NTPase activities have been estimated independently by radiolabeling assays. The Raman-determined rate for adenosine triphosphate substrate (8.6 ± 1.3 μmol mg⁻¹ min⁻¹ at 40 °C) is in good agreement with previous estimates. The versatility of the Raman method is demonstrated by its applicability to a variety of nucleotide substrates of P4, including the natural ribonucleoside triphosphates (ATP, GTP) and dideoxynucleoside triphosphates (ddATP, ddGTP). Advantages of the present protocol include conservative sample requirements (approximately 10⁻⁶ g enzyme per protocol) and relative ease of data collection and analysis. The latter conveniences are particularly advantageous for the measurement of activation energies of phosphohydrolase activity.
Wang, Ying Yi; Wang, Kai; Xu, Zuo Yu; Song, Yan; Wang, Chu Nan; Zhang, Chong Qing; Sun, Xi Lin; Shen, Bao Zhong
2017-01-01
Because the general application of dedicated small-animal positron emission tomography/computed tomography is limited, clinical PET/CT might be an acceptable alternative in many situations. We aimed to estimate the feasibility of using clinical PET/CT with [F-18]-fluoro-2-deoxy-D-glucose for high-resolution dynamic imaging and quantitative analysis of cancer xenografts in nude mice. Dynamic clinical PET/CT scans were performed on xenografts for 60 min after injection with [F-18]-fluoro-2-deoxy-D-glucose. Scans were reconstructed with or without the SharpIR method in two phases, and mice were sacrificed to extract major organs and tumors, using ex vivo γ-counting as a reference. Strikingly, we observed that image quality and the correlation between the quantitative data from clinical PET/CT and the ex vivo counting were better with the SharpIR reconstructions than without. Our data demonstrate that a clinical PET/CT scanner with SharpIR reconstruction is a valuable tool for imaging small animals in preclinical cancer research, offering dynamic imaging parameters, good image quality and accurate data quantification. PMID:28881772
Effect of Diffusion Limitations on Multianalyte Determination from Biased Biosensor Response
Baronas, Romas; Kulys, Juozas; Lančinskas, Algirdas; Žilinskas, Antanas
2014-01-01
The optimization-based quantitative determination of multianalyte concentrations from biased biosensor responses is investigated under internal and external diffusion-limited conditions. A computational model of a biocatalytic amperometric biosensor utilizing a mono-enzyme-catalyzed (nonspecific) competitive conversion of two substrates was used to generate pseudo-experimental responses to mixtures of compounds. The influence of possible perturbations of the biosensor signal, due to a white noise- and temperature-induced trend, on the precision of the concentration determination has been investigated for different configurations of the biosensor operation. The optimization method was found to be suitable and accurate enough for the quantitative determination of the concentrations of the compounds from a given biosensor transient response. The computational experiments showed a complex dependence of the precision of the concentration estimation on the relative thickness of the outer diffusion layer, as well as on whether the biosensor operates under diffusion- or kinetics-limited conditions. When the biosensor response is affected by the induced exponential trend, the duration of the biosensor action can be optimized for increasing the accuracy of the quantitative analysis. PMID:24608006
Budischak, Sarah A; Hoberg, Eric P; Abrams, Art; Jolles, Anna E; Ezenwa, Vanessa O
2015-09-01
Most hosts are concurrently or sequentially infected with multiple parasites; thus, fully understanding interactions between individual parasite species and their hosts depends on accurate characterization of the parasite community. For parasitic nematodes, noninvasive methods for obtaining quantitative, species-specific infection data in wildlife are often unreliable. Consequently, characterization of gastrointestinal nematode communities of wild hosts has largely relied on lethal sampling to isolate and enumerate adult worms directly from the tissues of dead hosts. The necessity of lethal sampling severely restricts the host species that can be studied, the adequacy of sample sizes to assess diversity, the geographic scope of collections and the research questions that can be addressed. Focusing on gastrointestinal nematodes of wild African buffalo, we evaluated whether accurate characterization of nematode communities could be made using a noninvasive technique that combined conventional parasitological approaches with molecular barcoding. To establish the reliability of this new method, we compared estimates of gastrointestinal nematode abundance, prevalence, richness and community composition derived from lethal sampling with estimates derived from our noninvasive approach. Our noninvasive technique accurately estimated total and species-specific worm abundances, as well as worm prevalence and community composition when compared to the lethal sampling method. Importantly, the rate of parasite species discovery was similar for both methods, and only a modest number of barcoded larvae (n = 10) were needed to capture key aspects of parasite community composition. Overall, this new noninvasive strategy offers numerous advantages over lethal sampling methods for studying nematode-host interactions in wildlife and can readily be applied to a range of study systems. © 2015 John Wiley & Sons Ltd.
Martin, L David; Ziegelstein, Roy C; Howell, Eric E; Martire, Carol; Hellmann, David B; Hirsch, Glenn A
2013-12-01
Access to hand-carried ultrasound technology for noncardiologists has increased significantly, yet development and evaluation of training programs are limited. We studied a focused program to teach hospitalists image acquisition of inferior vena cava (IVC) diameter and IVC collapsibility index with interpretation of estimated central venous pressure (CVP). Ten hospitalists completed an online educational module prior to attending a 1-day in-person training session that included directly supervised IVC imaging on volunteer subjects. In addition to making quantitative assessments, hospitalists were also asked to visually assess whether the IVC collapsed more than 50% during rapid inspiration or a sniff maneuver. Skills in image acquisition and interpretation were assessed immediately after training on volunteer patients and prerecorded images, and again on volunteer patients at least 6 weeks later. Eight of 10 hospitalists acquired adequate IVC images and interpreted them correctly on 5 of the 5 volunteer subjects and interpreted all 10 prerecorded images correctly at the end of the 1-day training session. At 7.4 ± 0.7 weeks (range, 6.9-8.6 weeks) follow-up, 9 of 10 hospitalists accurately acquired and interpreted all IVC images in 5 of 5 volunteers. Hospitalists were also able to accurately determine whether the IVC collapsibility index was more than 50% by visual assessment in 180 of 198 attempts (91% of the time). After a brief training program, hospitalists acquired adequate skills to perform and interpret hand-carried ultrasound IVC images and retained these skills in the near term. Though calculation of the IVC collapsibility index is more accurate, coupling a qualitative assessment with the IVC maximum diameter measurement may be acceptable in aiding bedside estimation of CVP. © 2013 Society of Hospital Medicine.
NASA Astrophysics Data System (ADS)
Koeppe, Robert Allen
Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three-dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass-specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations were compared to those predicted from the expired air and venous blood samples. The glucose analog (18)F-3-deoxy-3-fluoro-D-glucose (3-FDG) was used for quantitating the membrane transport rate of glucose. The measured data indicated that the phosphorylation rate of 3-FDG was low enough to allow accurate estimation of the transport rate using a two compartment model.
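The two-compartment estimation step can be illustrated compactly: simulate a tissue time-activity curve as the arterial input convolved with K1·exp(-k2·t), then recover K1 and k2 by nonlinear least squares. The input-function shape and parameter values are illustrative assumptions, not the dissertation's data.

```python
# Hedged sketch of a one-tissue (two-compartment) kinetic model: tissue
# concentration is the arterial input convolved with K1*exp(-k2*t).
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 10.0, 121)                  # minutes
dt = t[1] - t[0]
ca = (t / 0.5) * np.exp(1.0 - t / 0.5)           # gamma-variate-like input

def tissue_curve(t, K1, k2):
    """C_T(t) = [K1 * exp(-k2 t)] convolved with C_a(t)."""
    irf = K1 * np.exp(-k2 * t)
    return np.convolve(ca, irf)[: len(t)] * dt

rng = np.random.default_rng(4)
truth = (0.6, 0.3)                               # K1 [ml/g/min], k2 [1/min]
data = tissue_curve(t, *truth) + rng.normal(scale=0.005, size=t.size)

(K1, k2), pcov = curve_fit(tissue_curve, t, data, p0=(0.3, 0.1))
# For a freely diffusible tracer, flow ~ K1 and partition coefficient = K1/k2.
print(f"K1 = {K1:.3f} ml/g/min, k2 = {k2:.3f} 1/min, "
      f"partition coefficient K1/k2 = {K1 / k2:.3f} ml/g")
```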
NASA Astrophysics Data System (ADS)
Eck, Brendan L.; Fahmi, Rachid; Levi, Jacob; Fares, Anas; Wu, Hao; Li, Yuemeng; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.
2016-03-01
Myocardial perfusion imaging using CT (MPI-CT) has the potential to provide quantitative measures of myocardial blood flow (MBF) which can aid the diagnosis of coronary artery disease. We evaluated the quantitative accuracy of MPI-CT in a porcine model of balloon-induced LAD coronary artery ischemia guided by fractional flow reserve (FFR). We quantified MBF at baseline (FFR = 1.0) and under moderate ischemia (FFR = 0.7) using MPI-CT and compared to fluorescent microsphere-based MBF from high-resolution cryo-images. Dynamic, contrast-enhanced CT images were obtained using a spectral detector CT (Philips Healthcare). Projection-based mono-energetic images were reconstructed and processed to obtain MBF. Three MBF quantification approaches were evaluated: singular value decomposition (SVD) with fixed Tikhonov regularization (ThSVD), SVD with regularization determined by the L-curve criterion (LSVD), and Johnson-Wilson parameter estimation (JW). The three approaches over-estimated MBF compared to cryo-images. JW produced the most accurate MBF, with average error 33.3 +/- 19.2 mL/min/100g, whereas LSVD and ThSVD had greater over-estimation, 59.5 +/- 28.3 mL/min/100g and 78.3 +/- 25.6 mL/min/100g, respectively. Relative blood flow as assessed by a flow ratio of LAD-to-remote myocardium was strongly correlated between JW and cryo-imaging, with R2 = 0.97, compared to R2 = 0.88 and 0.78 for LSVD and ThSVD, respectively. We assessed tissue impulse response functions (IRFs) from each approach for sources of error. While JW was constrained to physiologic solutions, both LSVD and ThSVD produced IRFs with non-physiologic properties due to noise. The L-curve provided noise-adaptive regularization but did not eliminate non-physiologic IRF properties or optimize for MBF accuracy. These findings suggest that model-based MPI-CT approaches may be more appropriate for quantitative MBF estimation and that cryo-imaging can support the development of MPI-CT by providing spatial distributions of MBF.
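For readers unfamiliar with the SVD-based approaches compared here, the following is a minimal sketch of Tikhonov-damped SVD deconvolution on synthetic data; the input function, impulse response, and regularization weight are illustrative assumptions.

```python
# Hedged sketch: build the AIF convolution matrix, solve for the tissue
# impulse response with Tikhonov-damped singular values, take flow as the
# IRF peak. All signals and the damping weight are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n, dt = 60, 1.0                                   # samples, s
t = np.arange(n) * dt
aif = (t / 4.0) ** 3 * np.exp(-t / 4.0)           # arterial input function

# Ground truth: flow-scaled plateau-with-washout impulse response.
mbf_true = 0.01                                   # 1/s (~60 mL/min/100g scaled)
irf_true = mbf_true * np.exp(-np.maximum(t - 6.0, 0.0) / 8.0)
A = dt * np.array([[aif[i - j] if i >= j else 0.0
                    for j in range(n)] for i in range(n)])
tissue = A @ irf_true + rng.normal(scale=2e-4, size=n)

# Tikhonov-regularized SVD inversion: damp the small singular values.
U, s, Vt = np.linalg.svd(A)
lam = 0.1 * s[0]
filt = s / (s ** 2 + lam ** 2)
irf_est = Vt.T @ (filt * (U.T @ tissue))

print(f"true MBF {mbf_true * 6000:.1f}, "
      f"estimated {irf_est.max() * 6000:.1f} mL/min/100g")
```

Heavier damping biases the estimate low, which is one motivation for the model-constrained (JW) approach the abstract favors.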
Varughese, Eunice A; Brinkman, Nichole E; Anneken, Emily M; Cashdollar, Jennifer L; Fout, G Shay; Furlong, Edward T; Kolpin, Dana W; Glassmeyer, Susan T; Keely, Scott P
2018-04-01
Drinking water treatment plants rely on purification of contaminated source waters to provide communities with potable water. One group of possible contaminants is enteric viruses. Measurement of viral quantities in environmental water systems is often performed using polymerase chain reaction (PCR) or quantitative PCR (qPCR). However, true values may be underestimated due to challenges involved in a multi-step viral concentration process and due to PCR inhibition. In this study, water samples were concentrated from 25 drinking water treatment plants (DWTPs) across the US to study the occurrence of enteric viruses in source water and their removal after treatment. The five types of viruses studied were adenovirus, norovirus GI, norovirus GII, enterovirus, and polyomavirus. Quantitative PCR was performed on all samples to determine the presence or absence of these viruses in each sample. Ten DWTPs showed the presence of one or more viruses in source water, with four DWTPs having treated drinking water testing positive. Furthermore, PCR inhibition was assessed for each sample using an exogenous amplification control, which indicated that all of the DWTP samples, including source and treated water samples, had some level of inhibition, confirming that inhibition plays an important role in PCR-based assessments of environmental samples. PCR inhibition measurements, viral recovery, and other assessments were incorporated into a Bayesian model to more accurately determine viral load in both source and treated water. Results of the Bayesian model indicated that viruses are present in source water and treated water. By using a Bayesian framework that incorporates inhibition, as well as many other parameters that affect viral detection, this study offers an approach for more accurately estimating the occurrence of viral pathogens in environmental waters. Published by Elsevier B.V.
Pirat, Bahar; Little, Stephen H; Igo, Stephen R; McCulloch, Marti; Nosé, Yukihiko; Hartley, Craig J; Zoghbi, William A
2009-03-01
The proximal isovelocity surface area (PISA) method is useful in the quantitation of aortic regurgitation (AR). We hypothesized that actual measurement of PISA provided by real-time 3-dimensional (3D) color Doppler yields more accurate regurgitant volumes than those estimated by 2-dimensional (2D) color Doppler PISA. We developed a pulsatile flow model for AR with an imaging chamber in which interchangeable regurgitant orifices with defined shapes and areas were incorporated. An ultrasonic flow meter was used to calculate the reference regurgitant volumes. A total of 29 different flow conditions for 5 orifices with different shapes were tested at a rate of 72 beats/min. 2D PISA was calculated as 2πr², and 3D PISA was measured from 8 equidistant radial planes of the 3D PISA. Regurgitant volume was derived as PISA × aliasing velocity × time-velocity integral of AR / peak AR velocity. Regurgitant volumes by flow meter ranged between 12.6 and 30.6 mL/beat (mean 21.4 +/- 5.5 mL/beat). Regurgitant volumes estimated by 2D PISA correlated well with volumes measured by flow meter (r = 0.69); however, a significant underestimation was observed (y = 0.5x + 0.6). Correlation with flow meter volumes was stronger for 3D PISA-derived regurgitant volumes (r = 0.83); significantly less underestimation of regurgitant volumes was seen, with a regression line close to identity (y = 0.9x + 3.9). Direct measurement of PISA is feasible, without geometric assumptions, using real-time 3D color Doppler. Calculation of aortic regurgitant volumes with 3D color Doppler using this methodology is more accurate than the conventional 2D method with its hemispheric PISA assumption.
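The volume arithmetic used above is simple enough to capture in a few lines; the following sketch applies RVol = PISA × Va × (VTI / Vpeak), with all input values illustrative rather than taken from the study.

```python
# Hedged sketch of the PISA regurgitant-volume calculation. Units check:
# cm^2 * cm/s = cm^3/s of flow; VTI/Vpeak has units of seconds, so the
# product is a volume per beat in cm^3 (mL).
import math

def regurgitant_volume(pisa_cm2, aliasing_vel_cm_s, vti_cm, peak_vel_cm_s):
    """Regurgitant volume per beat (mL) from the flow-convergence method."""
    return pisa_cm2 * aliasing_vel_cm_s * (vti_cm / peak_vel_cm_s)

# 2D hemispheric assumption: PISA = 2*pi*r^2 from the aliasing radius r.
r_cm = 0.9
pisa_2d = 2.0 * math.pi * r_cm ** 2
print(f"2D PISA = {pisa_2d:.2f} cm^2")
# With a directly measured 3D PISA (no hemispheric assumption), the same
# formula applies with the measured surface area substituted.
print(f"RVol = {regurgitant_volume(pisa_2d, 38.0, 250.0, 450.0):.1f} mL/beat")
```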
Marine, Rachel; McCarren, Coleen; Vorrasane, Vansay; Nasko, Dan; Crowgey, Erin; Polson, Shawn W; Wommack, K Eric
2014-01-30
Shotgun metagenomics has become an important tool for investigating the ecology of microorganisms. Underlying these investigations is the assumption that metagenome sequence data accurately estimates the census of microbial populations. Multiple displacement amplification (MDA) of microbial community DNA is often used in cases where it is difficult to obtain enough DNA for sequencing; however, MDA can result in amplification biases that may impact subsequent estimates of population census from metagenome data. Some have posited that pooling replicate MDA reactions negates these biases and restores the accuracy of population analyses. This assumption has not been empirically tested. Using mock viral communities, we examined the influence of pooling on population-scale analyses. In pooled and single reaction MDA treatments, sequence coverage of viral populations was highly variable and coverage patterns across viral genomes were nearly identical, indicating that initial priming biases were reproducible and that pooling did not alleviate biases. In contrast, control unamplified sequence libraries showed relatively even coverage across phage genomes. MDA should be avoided for metagenomic investigations that require quantitative estimates of microbial taxa and gene functional groups. While MDA is an indispensable technique in applications such as single-cell genomics, amplification biases cannot be overcome by combining replicate MDA reactions. Alternative library preparation techniques should be utilized for quantitative microbial ecology studies utilizing metagenomic sequencing approaches.
Leveraging unsupervised training sets for multi-scale compartmentalization in renal pathology
NASA Astrophysics Data System (ADS)
Lutnick, Brendon; Tomaszewski, John E.; Sarder, Pinaki
2017-03-01
Clinical pathology relies on manual compartmentalization and quantification of biological structures, which is time consuming and often error-prone. Application of computer vision segmentation algorithms to histopathological image analysis, in contrast, can offer fast, reproducible, and accurate quantitative analysis to aid pathologists. Algorithms tunable to different biologically relevant structures can allow accurate, precise, and reproducible estimates of disease states. In this direction, we have developed a fast, unsupervised computational method for simultaneously separating all biologically relevant structures from histopathological images in multi-scale. Segmentation is achieved by solving an energy optimization problem. Representing the image as a graph, nodes (pixels) are grouped by minimizing a Potts model Hamiltonian, adopted from theoretical physics, where it models interacting electron spins. Pixel relationships (modeled as edges) are used to update the energy of the partitioned graph. By iteratively improving the clustering, the optimal number of segments is revealed. To reduce computational time, the graph is simplified using a Cantor pairing function to intelligently reduce the number of included nodes. The classified nodes are then used to train a multiclass support vector machine to apply the segmentation over the full image. Accurate segmentations of images with as many as 10^6 pixels can be completed in only 5 s, allowing for attainable multi-scale visualization. To establish clinical potential, we employed our method in renal biopsies to quantitatively visualize, for the first time, scale-variant compartments of heterogeneous intra- and extraglomerular structures simultaneously. The utility of our method extends to fields such as oncology, genomics, and non-biological problems.
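Two building blocks named above, the Potts Hamiltonian and the Cantor pairing function, are easy to demonstrate in isolation. The sketch below evaluates a 4-neighbor Potts energy and folds pixel coordinates into unique node indices; the grid size, coupling J, and labelings are illustrative assumptions, not the authors' full optimizer.

```python
# Hedged sketch: Potts-model energy over a pixel grid, plus Cantor pairing
# for node indexing. Lower (more negative) energy = more coherent labeling,
# which is what iterative minimization drives toward.
import numpy as np

def cantor_pair(x, y):
    """Cantor pairing: a bijection from coordinate pairs to single integers."""
    return (x + y) * (x + y + 1) // 2 + y

def potts_energy(labels, J=1.0):
    """H = -J * sum over 4-neighbor edges of delta(s_i, s_j)."""
    same_h = labels[:, 1:] == labels[:, :-1]
    same_v = labels[1:, :] == labels[:-1, :]
    return -J * (same_h.sum() + same_v.sum())

rng = np.random.default_rng(6)
labels = rng.integers(0, 3, size=(64, 64))          # random 3-class labeling
print("random labeling energy:   ", potts_energy(labels))
labels[:, :32] = 0
labels[:, 32:] = 1                                   # two coherent segments
print("segmented labeling energy:", potts_energy(labels))

ys, xs = np.mgrid[0:64, 0:64]
node_ids = cantor_pair(xs, ys)                       # unique id per pixel
assert np.unique(node_ids).size == node_ids.size
```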
Estimation of diastolic intraventricular pressure gradients by Doppler M-mode echocardiography
NASA Technical Reports Server (NTRS)
Greenberg, N. L.; Vandervoort, P. M.; Firstenberg, M. S.; Garcia, M. J.; Thomas, J. D.
2001-01-01
Previous studies have shown that small intraventricular pressure gradients (IVPG) are important for efficient filling of the left ventricle (LV) and as a sensitive marker for ischemia. Unfortunately, there has previously been no way of measuring these noninvasively, severely limiting their research and clinical utility. Color Doppler M-mode (CMM) echocardiography provides a spatiotemporal velocity distribution along the inflow tract throughout diastole, which we hypothesized would allow direct estimation of IVPG by using the Euler equation. Digital CMM images, obtained simultaneously with intracardiac pressure waveforms in six dogs, were processed by numerical differentiation for the Euler equation, then integrated to estimate IVPG and the total (left atrial to left ventricular apex) pressure drop. CMM-derived estimates agreed well with invasive measurements (IVPG: y = 0.87x + 0.22, r = 0.96, P < 0.001, standard error of the estimate = 0.35 mmHg). Quantitative processing of CMM data allows accurate estimation of IVPG and tracking of changes induced by beta-adrenergic stimulation. This novel approach provides unique information on LV filling dynamics in an entirely noninvasive way that has previously not been available for assessment of diastolic filling and function.
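The Euler-equation step described above amounts to numerical differentiation and integration of the velocity map. Here is a minimal sketch on a synthetic color M-mode field; the filling-wave shape, dimensions, and amplitudes are illustrative assumptions, not the canine data.

```python
# Hedged sketch: from a velocity map v(s, t) along the inflow scanline,
# dp/ds = -rho * (dv/dt + v * dv/ds), integrated over s to give the
# base-to-apex intraventricular pressure difference.
import numpy as np

rho = 1060.0                                   # blood density, kg/m^3
s = np.linspace(0.0, 0.06, 61)                 # base-to-apex distance, m
t = np.linspace(0.0, 0.5, 251)                 # diastolic time window, s
S, T = np.meshgrid(s, t, indexing="ij")

# Synthetic filling wave: an E-wave-like pulse propagating toward the apex.
v = 0.7 * np.exp(-((T - 0.1 - S / 0.5) / 0.05) ** 2)   # m/s

dv_dt = np.gradient(v, t, axis=1)              # local acceleration
dv_ds = np.gradient(v, s, axis=0)              # convective term
dp_ds = -rho * (dv_dt + v * dv_ds)             # Pa/m along the scanline

# Total base-to-apex pressure difference vs time, converted to mmHg.
delta_p = np.trapz(dp_ds, s, axis=0) / 133.322
print(f"peak IVPG magnitude: {np.abs(delta_p).max():.2f} mmHg")
```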
Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.
Donné, Simon; Goossens, Bart; Philips, Wilfried
2017-08-23
Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations of each of the snapshots to be known: the disparity of an object between images is related both to the distance of the camera from the object and to the distance between the camera positions for the two images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, for the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimation. Our proposed approach is a valid alternative to sparse techniques, while still executing in reasonable time on a graphics card due to its highly parallelizable nature.
A convenient method for X-ray analysis in TEM that measures mass thickness and composition
NASA Astrophysics Data System (ADS)
Statham, P.; Sagar, J.; Holland, J.; Pinard, P.; Lozano-Perez, S.
2018-01-01
We consider a new approach for quantitative analysis in transmission electron microscopy (TEM) that offers the same convenience as single-standard quantitative analysis in scanning electron microscopy (SEM). Instead of a bulk standard, a thin film with known mass thickness is used as a reference. The procedure involves recording an X-ray spectrum from the reference film for each session of acquisitions on real specimens. There is no need to measure the beam current; the current only needs to be stable for the duration of the session. A new reference standard with a large (1 mm x 1 mm) area of silicon nitride with a uniform thickness of 100 nm is used to reveal regions of X-ray detector occlusion that would give misleading results for any X-ray method that measures thickness. Unlike previous methods, the new X-ray method does not require an accurate beam current monitor but delivers equivalent accuracy in mass thickness measurement. Quantitative compositional results are also automatically corrected for specimen self-absorption. The new method is tested using a wedge specimen of Inconel 600, in which the high angle annular dark field (HAADF) signal is calibrated to provide a thickness reference, and results are compared with electron energy-loss spectrometry (EELS) measurements. For the new X-ray method, element composition results are consistent with the expected composition of the alloy, and the mass thickness measurement is shown to provide an accurate alternative to EELS for thickness determination in TEM without the uncertainty associated with mean free path estimates.
RIVER LEVEL ESTIMATION USING ARTIFICIAL NEURAL NETWORK FOR URBAN SMALL RIVER IN TIDAL REACH
NASA Astrophysics Data System (ADS)
Takasaki, Tadakatsu; Kawamura, Akira; Amaguchi, Hideo
Prediction of water level in small rivers is of great interest for flood control in urban areas located near the river mouth. The tidal river water level is affected not only by flood discharge but also by tide, atmospheric pressure, and wind direction and speed. We propose a method of estimating river water level that accounts for these factors using an artificial neural network model for the Kanda River, located in the center of Tokyo. The effects of these factors are quantitatively investigated. As for the effect of atmospheric pressure, the river water level rises by about 7 cm per 5 hPa increase in pressure regardless of river discharge, under conditions of 1 m/s wind speed and a north wind direction. An accurate rating curve for the tidal river is finally obtained.
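A minimal version of such a model can be sketched with a small multilayer perceptron; the predictor set, network size, and synthetic data below are illustrative assumptions, not the Kanda River model itself.

```python
# Hedged sketch: ANN regression of river stage on discharge, tide level,
# atmospheric pressure, and wind components, on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([
    rng.gamma(2.0, 5.0, n),                       # discharge (m^3/s)
    np.sin(rng.uniform(0, 2 * np.pi, n)),         # tide level (m)
    rng.normal(1013.0, 8.0, n),                   # pressure (hPa)
    rng.normal(0.0, 2.0, n),                      # wind u (m/s)
    rng.normal(0.0, 2.0, n),                      # wind v (m/s)
])
# Synthetic stage: discharge- and tide-driven, with a small pressure term.
y = (0.02 * X[:, 0] + 0.8 * X[:, 1] + 0.014 * (X[:, 2] - 1013.0)
     + rng.normal(0.0, 0.05, n))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=3000, random_state=0))
model.fit(X[:1500], y[:1500])
print("held-out R^2:", round(model.score(X[1500:], y[1500:]), 3))
```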
A Probabilistic Method for Estimation of Bowel Wall Thickness in MR Colonography
Menys, Alex; Jaffer, Asif; Bhatnagar, Gauraang; Punwani, Shonit; Atkinson, David; Halligan, Steve; Hawkes, David J.; Taylor, Stuart A.
2017-01-01
MRI has recently been applied as a tool to quantitatively evaluate the response to therapy in patients with Crohn's disease, and is the preferred choice for repeated imaging. Bowel wall thickness on MRI is an important biomarker of underlying inflammatory activity, being abnormally increased in the acute phase and reducing in response to successful therapy; however, a poor level of interobserver agreement in measured thickness is reported, and a system for accurate, robust and reproducible measurements is therefore desirable. We propose a novel method for estimating bowel wall thickness to improve the poor interobserver agreement of the manual procedure. We show that the variability of wall thickness measurement between the algorithm and observer measurements (0.25 ± 0.81 mm) is similar to the interobserver variability (0.16 ± 0.64 mm). PMID:28072831
Ruschke, Stefan; Eggers, Holger; Kooijman, Hendrik; Diefenbach, Maximilian N; Baum, Thomas; Haase, Axel; Rummeny, Ernst J; Hu, Houchun H; Karampinos, Dimitrios C
2017-09-01
To propose a phase error correction scheme for monopolar time-interleaved multi-echo gradient echo water-fat imaging that allows accurate and robust complex-based quantification of the proton density fat fraction (PDFF). A three-step phase correction scheme is proposed to address a) a phase term induced by echo misalignments that can be measured with a reference scan using reversed readout polarity, b) a phase term induced by the concomitant gradient field that can be predicted from the gradient waveforms, and c) a phase offset between time-interleaved echo trains. Simulations were carried out to characterize the concomitant gradient field-induced PDFF bias and the performance of estimating the phase offset between time-interleaved echo trains. Phantom experiments and in vivo liver and thigh imaging were performed to study the relevance of each of the three phase correction steps for PDFF accuracy and robustness. The simulation, phantom, and in vivo results showed, in agreement with theory, an echo time-dependent PDFF bias introduced by the three phase error sources. The proposed phase correction scheme was found to provide accurate PDFF estimation independent of the employed echo time combination. Complex-based time-interleaved water-fat imaging was found to give accurate and robust PDFF measurements after applying the proposed phase error correction scheme. Magn Reson Med 78:984-996, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc
2012-11-01
Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling curves before explaining differences in diversity.
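Two of the incidence-based estimators evaluated above are simple to compute from a plot-by-species matrix. The sketch below implements the standard Chao2 and first-order jackknife (Jack1) formulas on a synthetic patchy community; the community parameters are illustrative assumptions.

```python
# Hedged sketch: nonparametric richness estimators from incidence data.
# Chao2 = S_obs + ((m-1)/m) * Q1*(Q1-1) / (2*(Q2+1));  Jack1 = S_obs + Q1*(m-1)/m,
# where Q1/Q2 are species found in exactly one/two plots and m is plot count.
import numpy as np

rng = np.random.default_rng(8)
n_plots, n_species = 500, 120
# Patchy community: detection probabilities span several orders of magnitude.
p = 10 ** rng.uniform(-3, -0.5, n_species)
incidence = rng.random((n_plots, n_species)) < p   # presence/absence per plot

s_obs = incidence.any(axis=0).sum()
freq = incidence.sum(axis=0)                       # plots occupied per species
q1 = (freq == 1).sum()                             # uniques
q2 = (freq == 2).sum()                             # duplicates
m = n_plots

chao2 = s_obs + ((m - 1) / m) * q1 * (q1 - 1) / (2 * (q2 + 1))
jack1 = s_obs + q1 * (m - 1) / m

print(f"true richness {n_species}, observed {s_obs}, "
      f"Chao2 {chao2:.1f}, Jack1 {jack1:.1f}")
```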
Quantitative aspects of inductively coupled plasma mass spectrometry
NASA Astrophysics Data System (ADS)
Bulska, Ewa; Wagner, Barbara
2016-10-01
Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'.
NASA Astrophysics Data System (ADS)
Smallwood, John R.
2018-01-01
Charles Hutton suggested in 1821 that the pyramids of Egypt be used to site an experiment to measure the deflection of the vertical by a large mass. The suggestion arose as he had estimated the attraction of a Scottish mountain as part of Nevil Maskelyne's (1774) "Schiehallion Experiment", a demonstration of Isaac Newton's law of gravitational attraction and the earliest reasonable quantitative estimate of Earth's mean density. I present a virtual realization of an experiment at the Giza pyramids to investigate how Hutton's concept might have emerged had it been undertaken as he suggested. The attraction of the Great Pyramid would have led to inward north-south deflections of the vertical totalling 1.8 arcsec (0.0005°), and east-west deflections totalling 2.0 arcsec (0.0006°), which although small, would have been within the contemporaneous detectable range, and potentially given, as Hutton wished, a more accurate Earth density measurement than he reported from the Schiehallion experiment.
Stability basin estimates fall risk from observed kinematics, demonstrated on the Sit-to-Stand task.
Shia, Victor; Moore, Talia Yuki; Holmes, Patrick; Bajcsy, Ruzena; Vasudevan, Ram
2018-04-27
The ability to quantitatively measure stability is essential to ensuring the safety of locomoting systems. While the response to perturbation directly reflects the stability of a motion, this experimental method puts human subjects at risk. Unfortunately, existing indirect methods for estimating stability from unperturbed motion have been shown to have limited predictive power. This paper leverages recent advances in dynamical systems theory to accurately estimate the stability of human motion without requiring perturbation. This approach relies on kinematic observations of a nominal Sit-to-Stand motion to construct an individual-specific dynamic model, input bounds, and feedback control that are then used to compute the set of perturbations from which the model can recover. This set, referred to as the stability basin, was computed for 14 individuals, and was able to successfully differentiate between less and more stable Sit-to-Stand strategies for each individual with greater accuracy than existing methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
Rayne, Sierra; Forest, Kaya; Friesen, Ken J
2009-08-01
A quantitative structure-activity model has been validated for estimating congener-specific gas-phase hydroxyl radical reaction rates for perfluoroalkyl sulfonic acids (PFSAs), carboxylic acids (PFCAs), aldehydes (PFAls) and their dihydrates, fluorotelomer olefins (FTOls), alcohols (FTOHs), aldehydes (FTAls) and acids (FTAcs), and sulfonamides (SAs), sulfonamidoethanols (SEs) and sulfonamido carboxylic acids (SAAs), along with their alkylated derivatives, based on ionization potentials calculated with the semi-empirical PM6 method. Corresponding gas-phase reaction rates with nitrate radicals and ozone have also been estimated using the computationally derived ionization potentials. Henry's law constants for these classes of perfluorinated compounds also appear to be reasonably approximated by the SPARC software program, thereby allowing estimation of wet and dry atmospheric deposition rates. Congener-specific fractionation of these compounds, both in the gas phase and at the air-water interface, is expected, complicating current source apportionment perspectives and necessitating integration of such differential partitioning influences into future multimedia models. The findings will allow development and refinement of more accurate and detailed local- through global-scale atmospheric models for the atmospheric fate of perfluoroalkyl compounds.
SymPS: BRDF Symmetry Guided Photometric Stereo for Shape and Light Source Estimation.
Lu, Feng; Chen, Xiaowu; Sato, Imari; Sato, Yoichi
2018-01-01
We propose uncalibrated photometric stereo methods that address the problem posed by unknown isotropic reflectance. At the core of our methods is the notion of "constrained half-vector symmetry" for general isotropic BRDFs. We show that such symmetry can be observed in various real-world materials, and it leads to new techniques for shape and light source estimation. Based on the 1D and 2D representations of the symmetry, we propose two methods for surface normal estimation; one focuses on accurate elevation angle recovery for surface normals when the light sources only cover the visible hemisphere, and the other on comprehensive surface normal optimization in the case that the light sources are also non-uniformly distributed. The proposed robust light source estimation method also plays an essential role in letting our methods work in an uncalibrated manner with good accuracy. Quantitative evaluations are conducted with both synthetic and real-world scenes, producing state-of-the-art accuracy for all of the non-Lambertian materials in the MERL database and the real-world datasets.
Remote sensing for grassland management in the arid Southwest
Marsett, R.C.; Qi, J.; Heilman, P.; Biedenbender, S.H.; Watson, M.C.; Amer, S.; Weltz, M.; Goodrich, D.; Marsett, R.
2006-01-01
We surveyed a group of rangeland managers in the Southwest about vegetation monitoring needs on grassland. Based on their responses, the objective of the RANGES (Rangeland Analysis Utilizing Geospatial Information Science) project was defined to be the accurate conversion of remotely sensed data (satellite imagery) to quantitative estimates of total (green and senescent) standing cover and biomass on grasslands and semidesert grasslands. Although remote sensing has been used to estimate green vegetation cover, in arid grasslands herbaceous vegetation is senescent much of the year and is not detected by current remote sensing techniques. We developed a ground-truth protocol compatible with both range management requirements and Landsat's 30 m resolution imagery. The resulting ground-truth data were then used to develop image processing algorithms that quantified total herbaceous vegetation cover, height, and biomass. Cover was calculated based on a newly developed Soil Adjusted Total Vegetation Index (SATVI), and height and biomass were estimated based on reflectance in the near infrared (NIR) band. Comparison of the remotely sensed estimates with independent ground measurements produced r2 values of 0.80, 0.85, and 0.77 and Nash-Sutcliffe values of 0.78, 0.70, and 0.77 for cover, plant height, and biomass, respectively. The approach for estimating plant height and biomass did not work for sites where forbs comprised more than 30% of total vegetative cover. The ground reconnaissance protocol and image processing techniques together offer land managers accurate and timely methods for monitoring extensive grasslands. The time-consuming requirement to collect concurrent field data for each image implies a need to share the high fixed costs of processing an image across multiple users to reduce the costs for individual rangeland managers.
Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi
2017-01-18
Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
NASA Astrophysics Data System (ADS)
Chopra, Shruti; Motwani, Sanjay K.; Ahmad, Farhan J.; Khar, Roop K.
2007-11-01
Simple, accurate, reproducible, selective, sensitive and cost-effective UV-spectrophotometric methods were developed and validated for the estimation of trigonelline in bulk and pharmaceutical formulations. Trigonelline was estimated at 265 nm in deionised water and at 264 nm in phosphate buffer (pH 4.5). Beer's law was obeyed in the concentration ranges of 1-20 μg mL⁻¹ (r² = 0.9999) in deionised water and 1-24 μg mL⁻¹ (r² = 0.9999) in the phosphate buffer medium. The apparent molar absorptivity and Sandell's sensitivity coefficient were found to be 4.04 × 10³ L mol⁻¹ cm⁻¹ and 0.0422 μg cm⁻²/0.001 A in deionised water; and 3.05 × 10³ L mol⁻¹ cm⁻¹ and 0.0567 μg cm⁻²/0.001 A in phosphate buffer medium, respectively. These methods were tested and validated for various parameters according to ICH guidelines. The detection and quantitation limits were found to be 0.12 and 0.37 μg mL⁻¹ in deionised water and 0.13 and 0.40 μg mL⁻¹ in phosphate buffer medium, respectively. The proposed methods were successfully applied for the determination of trigonelline in pharmaceutical formulations (vaginal tablets and bioadhesive vaginal gels). The results demonstrated that the procedure is accurate, precise, specific and reproducible (percent relative standard deviation <2%), while being simple and less time consuming, and hence can be suitably applied for the estimation of trigonelline in different dosage forms and dissolution studies.
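The ICH-style validation figures quoted above (detection and quantitation limits) follow from the calibration line; here is a minimal sketch of that arithmetic, with illustrative absorbance data rather than the paper's measurements.

```python
# Hedged sketch: linear Beer's-law calibration, then ICH-style limits
# LOD = 3.3*sigma/S and LOQ = 10*sigma/S, where S is the slope and sigma
# the residual standard deviation of the calibration line.
import numpy as np

conc = np.array([1, 2, 4, 8, 12, 16, 20], dtype=float)          # ug/mL
absorb = np.array([0.030, 0.059, 0.118, 0.236, 0.355, 0.472, 0.590])

coef = np.polyfit(conc, absorb, 1)       # least-squares line A = S*c + b
S, b = coef
pred = S * conc + b
sigma = np.sqrt(np.sum((absorb - pred) ** 2) / (len(conc) - 2))

lod = 3.3 * sigma / S
loq = 10.0 * sigma / S
r2 = 1 - np.sum((absorb - pred) ** 2) / np.sum((absorb - absorb.mean()) ** 2)
print(f"slope {S:.4f} AU*mL/ug, r^2 {r2:.5f}, "
      f"LOD {lod:.2f} ug/mL, LOQ {loq:.2f} ug/mL")
```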
Medhi, Biswajit; Hegde, Gopalakrishna M; Gorthi, Sai Siva; Reddy, Kalidevapura Jagannath; Roy, Debasish; Vasu, Ram Mohan
2016-08-01
A simple noninterferometric optical probe is developed to estimate the wavefront distortion suffered by a plane wave in its passage through density variations in a hypersonic flow obstructed by a test model in a typical shock tunnel. The probe has a plane light wave trans-illuminating the flow and casting a shadow of a continuous-tone sinusoidal grating. Through a geometrical-optics (eikonal) approximation, a bilinear approximation to the distorted wavefront is related to the location-dependent shift (distortion) suffered by the grating, which can be read out space-continuously from the projected grating image. The processing of the grating shadow is done through an efficient Fourier fringe analysis scheme, with either a windowed or a global Fourier transform (WFT and FT). For comparison, wavefront slopes are also estimated from shadows of random-dot patterns, processed through cross-correlation. The measured slopes are suitably unwrapped by using a discrete cosine transform (DCT)-based phase unwrapping procedure, and also through iterative procedures. The unwrapped phase information is used in an iterative scheme for a full quantitative recovery of the density distribution in the shock around the model through refraction tomographic inversion. Hypersonic flow field parameters around a missile-shaped body at a free-stream Mach number of ∼8 measured using this technique are compared with numerically estimated values. It is shown that, when processing a wavefront with a small space-bandwidth product (SBP), the FT inversion gave accurate results with computational efficiency, while the computation-intensive WFT was needed for similar results when dealing with larger-SBP wavefronts.
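The global Fourier-transform fringe analysis mentioned above can be sketched compactly: isolate the grating's carrier lobe in the spectrum, shift it to DC, and take the phase of the inverse transform as the distortion map. The synthetic fringe pattern and carrier frequency below are illustrative assumptions, not the shock-tunnel data.

```python
# Hedged sketch of global FT fringe analysis (Takeda-style carrier-phase
# demodulation) on a synthetic sinusoidal-grating image.
import numpy as np

n = 256
X, Y = np.meshgrid(np.arange(n), np.arange(n))
f0 = 16 / n                                   # carrier frequency, cycles/px
phi = 3.0 * np.exp(-((X - 128) ** 2 + (Y - 128) ** 2) / (2 * 40.0 ** 2))
fringes = 1.0 + 0.5 * np.cos(2 * np.pi * f0 * X + phi)

F = np.fft.fftshift(np.fft.fft2(fringes))
# Band-pass: keep a window around the +f0 carrier lobe only.
cx = n // 2 + int(round(f0 * n))
mask = np.zeros_like(F)
mask[:, cx - 8: cx + 9] = 1.0
F_side = F * mask

# Shift the lobe to the origin and invert; the argument is the phase map.
g = np.fft.ifft2(np.fft.ifftshift(np.roll(F_side, -int(round(f0 * n)), axis=1)))
phase = np.angle(g)
print("recovered peak phase ~", round(float(phase.max()), 2), "rad (true 3.0)")
```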
Agashiwala, Rajiv M; Louis, Elan D; Hof, Patrick R; Perl, Daniel P
2008-10-21
Non-biased systematic sampling using the principles of stereology provides accurate quantitative estimates of objects within neuroanatomic structures. However, the basic principles of stereology are not optimally suited for counting objects that selectively exist within a limited but complex and convoluted portion of the sample, such as occurs when counting cerebellar Purkinje cells. In an effort to quantify Purkinje cells in association with certain neurodegenerative disorders, we developed a new method for stereologic sampling of the cerebellar cortex, involving calculating the volume of the cerebellar tissues, identifying and isolating the Purkinje cell layer and using this information to extrapolate non-biased systematic sampling data to estimate the total number of Purkinje cells in the tissues. Using this approach, we counted Purkinje cells in the right cerebella of four human male control specimens, aged 41, 67, 70 and 84 years, and estimated the total Purkinje cell number for the four entire cerebella to be 27.03, 19.74, 20.44 and 22.03 million cells, respectively. The precision of the method is seen when comparing the density of the cells within the tissue: 266,274, 173,166, 167,603 and 183,575 cells/cm3, respectively. Prior literature documents Purkinje cell counts ranging from 14.8 to 30.5 million cells. These data demonstrate the accuracy of our approach. Our novel approach, which offers an improvement over previous methodologies, is of value for quantitative work of this nature. This approach could be applied to morphometric studies of other similarly complex tissues as well.
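The extrapolation at the heart of this kind of estimate is straightforward to illustrate: convert systematically sampled counts into a density and scale by the reference volume. All numbers in the sketch below are illustrative, not the study's raw data.

```python
# Hedged sketch of the stereologic extrapolation arithmetic: counts in
# systematically sampled volumes -> density -> total number in the
# reference volume, with a simple SEM on the density.
import numpy as np

rng = np.random.default_rng(9)
# Systematic sampling: counts in 120 sample volumes of 0.002 cm^3 each.
sample_vol_cm3 = 0.002
counts = rng.poisson(lam=350.0, size=120)

density = counts.mean() / sample_vol_cm3             # cells/cm^3
reference_volume_cm3 = 110.0                         # hypothetical tissue volume
total_cells = density * reference_volume_cm3

sem = counts.std(ddof=1) / np.sqrt(counts.size) / sample_vol_cm3
print(f"density {density:,.0f} cells/cm^3 (SEM {sem:,.0f})")
print(f"estimated total {total_cells / 1e6:.2f} million cells")
```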
ASTRAL, DRAGON and SEDAN scores predict stroke outcome more accurately than physicians.
Ntaios, G; Gioulekas, F; Papavasileiou, V; Strbian, D; Michel, P
2016-11-01
ASTRAL, SEDAN and DRAGON are three well-validated scores for stroke outcome prediction. Whether these scores predict stroke outcome more accurately than physicians interested in stroke was investigated. Physicians interested in stroke were invited to an anonymous online survey to provide outcome estimates for randomly allocated structured scenarios of recent real-life stroke patients. Their estimates were compared to the scores' predictions in the same scenarios. An estimate was considered accurate if it was within the 95% confidence interval of the actual outcome. In all, 244 participants from 32 different countries responded, assessing 720 real scenarios and 2636 outcomes. The majority of physicians' estimates were inaccurate (1422/2636, 53.9%). 400 (56.8%) of physicians' estimates about the percentage probability of 3-month modified Rankin score (mRS) > 2 were accurate, compared with 609 (86.5%) of ASTRAL score estimates (P < 0.0001). 394 (61.2%) of physicians' estimates about the percentage probability of post-thrombolysis symptomatic intracranial haemorrhage were accurate, compared with 583 (90.5%) of SEDAN score estimates (P < 0.0001). 160 (24.8%) of physicians' estimates about the post-thrombolysis 3-month percentage probability of mRS 0-2 were accurate, compared with 240 (37.3%) of DRAGON score estimates (P < 0.0001). 260 (40.4%) of physicians' estimates about the percentage probability of post-thrombolysis mRS 5-6 were accurate, compared with 518 (80.4%) of DRAGON score estimates (P < 0.0001). ASTRAL, DRAGON and SEDAN scores predict the outcome of acute ischaemic stroke patients with higher accuracy than physicians interested in stroke. © 2016 EAN.
Quantitative aspects of inductively coupled plasma mass spectrometry
Wagner, Barbara
2016-01-01
Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644971
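For the simplest of the calibration approaches named above, external calibration against pure standards, quantitation is a straight-line fit of signal versus concentration. A hedged Python sketch; the standard concentrations and count rates are invented:

    import numpy as np

    conc = np.array([0.0, 1.0, 5.0, 10.0, 50.0])           # standards, ng/mL
    counts = np.array([120., 980., 4805., 9610., 47900.])  # hypothetical counts/s
    slope, intercept = np.polyfit(conc, counts, 1)
    sample = 23400.0                                       # unknown's signal
    print((sample - intercept) / slope, "ng/mL")           # interpolated result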
Reliability of digital reactor protection system based on extenics.
Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng
2016-01-01
After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) has drawn widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to evaluate the reliability of a digital RPS accurately. Methods based on probability estimation carry uncertainties and cannot reflect the reliability status of the RPS dynamically or support maintenance and troubleshooting. In this paper, a quantitative reliability analysis method based on extenics is proposed for the digital (safety-critical) RPS, by which the relationship between the reliability and the response time of the RPS is constructed. As an example, the reliability of the RPS for a CPR1000 NPP is modeled and analyzed by the proposed method. The results show that the proposed method can estimate the RPS reliability effectively and provide support for maintenance and troubleshooting of the digital RPS.
Non-equilibrium thermionic electron emission for metals at high temperatures
NASA Astrophysics Data System (ADS)
Domenech-Garret, J. L.; Tierno, S. P.; Conde, L.
2015-08-01
Stationary thermionic electron emission currents from heated metals are compared against an analytical expression derived using a non-equilibrium quantum kappa energy distribution for the electrons. The latter depends on the temperature-dependent parameter κ(T), which decreases with increasing temperature, can be estimated from raw experimental data, and characterizes the departure of the electron energy spectrum from equilibrium Fermi-Dirac statistics. The calculations accurately predict the measured thermionic emission currents for both high and moderate temperature ranges. The Richardson-Dushman law governs electron emission for large values of kappa or, equivalently, moderate metal temperatures. The high-energy tail in the electron energy distribution function that develops at higher temperatures or lower kappa values increases the emission currents well over the predictions of the classical expression. This also permits the quantitative estimation of the departure of the metal electrons from equilibrium Fermi-Dirac statistics.
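The classical baseline against which those departures are measured is the Richardson-Dushman law, J = A T^2 exp(-W / kT). A small Python sketch of that equilibrium limit (the kappa-modified expression is not reproduced in the abstract, so it is not attempted here; the 4.5 eV work function is a tungsten-like assumption):

    import numpy as np

    A0 = 1.20173e6        # universal Richardson constant, A m^-2 K^-2
    KB = 8.617333e-5      # Boltzmann constant, eV/K

    def richardson_dushman(T, work_fn_eV):
        """Equilibrium thermionic current density J = A T^2 exp(-W / kT)."""
        return A0 * T**2 * np.exp(-work_fn_eV / (KB * T))

    for T in (1500.0, 2000.0, 2500.0):                 # metal temperature, K
        print(T, richardson_dushman(T, 4.5), "A/m^2")  # W = 4.5 eV assumed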
Li, Jing; Wang, Min-Yan; Zhang, Jian; He, Wan-Qing; Nie, Lei; Shao, Xia
2013-12-01
VOC emission from petrochemical storage tanks is one of the important emission sources in the petrochemical industry. To determine the amount of VOC emissions from petrochemical storage tanks, the Tanks 4.0.9d model was used to calculate VOC emissions from different kinds of storage tanks. As an example, VOC emissions from a horizontal tank, a vertical fixed-roof tank, an internal floating-roof tank and an external floating-roof tank were calculated. The handling of site meteorological information, seal information, tank content information and unit conversion when applying the Tanks 4.0.9d model in China is also discussed. The Tanks 4.0.9d model can be used as a simple and highly accurate method to estimate VOC emissions from petrochemical storage tanks in China.
Large-scale structure non-Gaussianities with modal methods
NASA Astrophysics Data System (ADS)
Schmittfull, Marcel
2016-10-01
Relying on a separable modal expansion of the bispectrum, the implementation of a fast estimator for the full bispectrum of a 3d particle distribution is presented. The computational cost of accurate bispectrum estimation is negligible relative to simulation evolution, so the bispectrum can be used as a standard diagnostic whenever the power spectrum is evaluated. As an application, the time evolution of gravitational and primordial dark matter bispectra was measured in a large suite of N-body simulations. The bispectrum shape changes characteristically when the cosmic web becomes dominated by filaments and halos, therefore providing a quantitative probe of 3d structure formation. Our measured bispectra are determined by ~ 50 coefficients, which can be used as fitting formulae in the nonlinear regime and for non-Gaussian initial conditions. We also compare the measured bispectra with predictions from the Effective Field Theory of Large Scale Structures (EFTofLSS).
NASA Astrophysics Data System (ADS)
Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.
2017-02-01
CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillation-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model-based) crude approximation to the final perfusion quantities (blood flow, blood volume, mean transit time and delay) using the Welch-Satterthwaite approximation for gamma-fitted concentration time curves (CTC). The second is a fast, accurate deconvolution method, which we call Analytical Fourier Filtering (AFF). The third is another fast, accurate deconvolution technique using Showalter's method, which we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution-based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
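For orientation, the frequency-domain route these methods accelerate amounts to a regularized spectral division recovering k(t) = CBF·R(t) from the tissue curve and the AIF. A generic Python sketch with a Tikhonov-style filter; this is not the AFF/ASSF filters themselves, whose analytical forms are given in the paper, and the test curves are synthetic:

    import numpy as np

    def fdd_irf(tissue, aif, dt, reg=0.1):
        """Regularized frequency-domain deconvolution of C = CBF*(AIF conv R)."""
        n = len(tissue)
        A = np.fft.fft(aif, 2 * n)               # zero-pad against circular wrap
        C = np.fft.fft(tissue, 2 * n)
        H = np.conj(A) / (np.abs(A) ** 2 + (reg * np.abs(A).max()) ** 2)
        k = np.real(np.fft.ifft(C * H))[:n] / dt     # k(t) = CBF * R(t)
        cbf = k.max()
        mtt = k.sum() * dt / cbf                     # CBV / CBF
        return k, cbf, mtt

    t = np.arange(0, 60.0)                           # seconds, dt = 1
    aif = (t / 5.0) ** 3 * np.exp(-t / 2.0)          # gamma-variate bolus
    tissue = 0.25 * np.convolve(aif, np.exp(-t / 4.0))[:60]  # CBF=0.25, MTT=4
    _, cbf, mtt = fdd_irf(tissue, aif, dt=1.0)
    print(cbf, mtt)   # close to 0.25 and 4, up to regularization bias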
MRI volumetry of prefrontal cortex
NASA Astrophysics Data System (ADS)
Sheline, Yvette I.; Black, Kevin J.; Lin, Daniel Y.; Pimmel, Joseph; Wang, Po; Haller, John W.; Csernansky, John G.; Gado, Mokhtar; Walkup, Ronald K.; Brunsden, Barry S.; Vannier, Michael W.
1995-05-01
Prefrontal cortex volumetry by brain magnetic resonance (MR) imaging is required to estimate changes postulated to occur in certain psychiatric and neurologic disorders. A semiautomated method with quantitative characterization of its performance is sought to reliably distinguish small prefrontal cortex volume changes within individuals and between groups. Stereological methods were tested by a blinded comparison of measurements applied to 3D MR scans obtained using an MPRAGE protocol. Fixed-grid stereologic methods were used to estimate prefrontal cortex volumes on a graphics workstation, after the images were scaled from 16 to 8 bits using a histogram method. In addition, images were resliced into coronal sections perpendicular to the bicommissural plane. Prefrontal cortex volumes were defined as all sections of the frontal lobe anterior to the anterior commissure. Ventricular volumes were excluded. Stereological measurement yielded high repeatability and precision, and was time-efficient for the raters. The coefficient of error was
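Fixed-grid point counting yields volume through the Cavalieri estimator, V = t · (a/p) · ΣP: section spacing times the grid area per point times the total point count. A toy Python calculation with hypothetical counts:

    # Cavalieri estimator with hypothetical numbers.
    t_mm = 5.0               # spacing between counted coronal sections, mm
    a_per_p_mm2 = 25.0       # grid area associated with each point, mm^2
    points = [96, 104, 111, 108, 99, 87, 70, 52]   # points hitting cortex
    volume_mm3 = t_mm * a_per_p_mm2 * sum(points)
    print(volume_mm3 / 1000.0, "cm^3")             # 90.875 cm^3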
NASA Astrophysics Data System (ADS)
Li, Xia; Welch, E. Brian; Arlinghaus, Lori R.; Bapsi Chakravarthy, A.; Xu, Lei; Farley, Jaime; Loveless, Mary E.; Mayer, Ingrid A.; Kelley, Mark C.; Meszoely, Ingrid M.; Means-Powell, Julie A.; Abramson, Vandana G.; Grau, Ana M.; Gore, John C.; Yankeelov, Thomas E.
2011-09-01
Quantitative analysis of dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) data requires the accurate determination of the arterial input function (AIF). A novel method for obtaining the AIF is presented here and pharmacokinetic parameters derived from individual and population-based AIFs are then compared. A Philips 3.0 T Achieva MR scanner was used to obtain 20 DCE-MRI data sets from ten breast cancer patients prior to and after one cycle of chemotherapy. Using a semi-automated method to estimate the AIF from the axillary artery, we obtain the AIF for each patient, AIFind, and compute a population-averaged AIF, AIFpop. The extended standard model is used to estimate the physiological parameters using the two types of AIFs. The mean concordance correlation coefficient (CCC) for the AIFs segmented manually and by the proposed AIF tracking approach is 0.96, indicating accurate and automatic tracking of an AIF in DCE-MRI data of the breast is possible. Regarding the kinetic parameters, the CCC values for Ktrans, vp and ve as estimated by AIFind and AIFpop are 0.65, 0.74 and 0.31, respectively, based on the region of interest analysis. The average CCC values for the voxel-by-voxel analysis are 0.76, 0.84 and 0.68 for Ktrans, vp and ve, respectively. This work indicates that Ktrans and vp show good agreement between AIFpop and AIFind while there is a weak agreement on ve.
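The agreement statistic used throughout, the concordance correlation coefficient, penalizes both scatter and systematic offset. A short Python sketch with hypothetical per-patient Ktrans values:

    import numpy as np

    def ccc(x, y):
        """Lin's concordance correlation coefficient."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxy = np.mean((x - x.mean()) * (y - y.mean()))
        return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

    ktrans_ind = [0.12, 0.31, 0.22, 0.45, 0.09, 0.27]   # hypothetical, AIFind
    ktrans_pop = [0.10, 0.33, 0.25, 0.41, 0.11, 0.30]   # hypothetical, AIFpop
    print(round(ccc(ktrans_ind, ktrans_pop), 3))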
Reliable enumeration of malaria parasites in thick blood films using digital image analysis.
Frean, John A
2009-09-23
Quantitation of malaria parasite density is an important component of laboratory diagnosis of malaria. Microscopy of Giemsa-stained thick blood films is the conventional method for parasite enumeration. Accurate and reproducible parasite counts are difficult to achieve because of inherent technical limitations and human inconsistency. Inaccurate parasite density estimation may have adverse clinical and therapeutic implications for patients, and for endpoints of clinical trials of anti-malarial vaccines or drugs. Digital image analysis provides an opportunity to improve the performance of parasite density quantitation. Accurate manual parasite counts were done on 497 images of a range of thick blood films with varying densities of malaria parasites, to establish a uniformly reliable standard against which to assess the digital technique. By utilizing descriptive statistical parameters of parasite size frequency distributions, particle counting algorithms of the digital image analysis programme were semi-automatically adapted to variations in parasite size, shape and staining characteristics, to produce optimum signal/noise ratios. A reliable counting process was developed that requires no operator decisions that might bias the outcome. Digital counts were highly correlated with manual counts for medium to high parasite densities, and slightly less well correlated with conventional counts. At low densities (fewer than 6 parasites per analysed image), signal/noise ratios were compromised and correlation between digital and manual counts was poor. Conventional counts were consistently lower than both digital and manual counts. Using open-access software and avoiding custom programming or any special operator intervention, accurate digital counts were obtained, particularly at high parasite densities that are difficult to count conventionally. The technique is potentially useful for laboratories that routinely perform malaria parasite enumeration. The requirements, a digital microscope camera, a personal computer and good-quality staining of slides, are reasonably easy to meet.
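The core counting step, thresholding followed by size-gated connected-component counting, can be sketched in a few lines of Python with scipy; the threshold and size gates below stand in for the size-frequency adaptation described above and are invented:

    import numpy as np
    from scipy import ndimage

    def count_parasites(img, threshold, min_px, max_px):
        """Count dark, parasite-sized blobs in a grayscale image."""
        labels, n = ndimage.label(img < threshold)   # dark objects, light film
        sizes = np.bincount(labels.ravel())[1:]      # pixel count per blob
        return int(np.sum((sizes >= min_px) & (sizes <= max_px)))

    img = np.ones((64, 64))
    img[10:14, 10:14] = 0.0      # synthetic 16 px "parasite"
    img[40:43, 50:53] = 0.0      # synthetic 9 px "parasite"
    print(count_parasites(img, 0.5, 4, 100))   # -> 2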
Accurate Virus Quantitation Using a Scanning Transmission Electron Microscopy (STEM) Detector in a Scanning Electron Microscope
Blancett, Candace D; ...; Norris, L; Rossi, Cynthia A; Glass, Pamela J; Sun, Mei G
2017-06-29
Fortier, Véronique; Levesque, Ives R
2018-06-01
Phase processing impacts the accuracy of quantitative susceptibility mapping (QSM). Techniques for phase unwrapping and background removal have been proposed and demonstrated mostly in brain. In this work, phase processing was evaluated in the context of large susceptibility variations (Δχ) and negligible signal, in particular for susceptibility estimation using the iterative phase replacement (IPR) algorithm. Continuous Laplacian, region-growing, and quality-guided unwrapping were evaluated. For background removal, Laplacian boundary value (LBV), projection onto dipole fields (PDF), sophisticated harmonic artifact reduction for phase data (SHARP), variable-kernel sophisticated harmonic artifact reduction for phase data (V-SHARP), regularization enabled sophisticated harmonic artifact reduction for phase data (RESHARP), and 3D quadratic polynomial field removal were studied. Each algorithm was quantitatively evaluated in simulation and qualitatively in vivo. Additionally, IPR-QSM maps were produced to evaluate the impact of phase processing on the susceptibility in the context of large Δχ with negligible signal. Quality-guided unwrapping was the most accurate technique, whereas continuous Laplacian performed poorly in this context. All background removal algorithms tested resulted in important phase inaccuracies, suggesting that techniques used for brain do not translate well to situations where large Δχ and no or low signal are expected. LBV produced the smallest errors, followed closely by PDF. Results suggest that quality-guided unwrapping should be preferred, with PDF or LBV for background removal, for QSM in regions with large Δχ and negligible signal. This reduces the susceptibility inaccuracy introduced by phase processing. Accurate background removal remains an open question. Magn Reson Med 79:3103-3113, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Geoff A.; Wymore, Ann M.; King, Andrew J.
2016-07-15
Two genes, hgcA and hgcB, are essential for microbial mercury (Hg) methylation. Detection and estimation of their abundance, in conjunction with Hg concentration, bioavailability and biogeochemistry, is critical in determining potential hot spots of methylmercury (MeHg) generation in at-risk environments. We developed broad-range degenerate PCR primers spanning known hgcAB genes to determine the presence of both genes in diverse environments. These primers were tested against an extensive set of pure cultures with published genomes, including 13 Deltaproteobacteria, nine Firmicutes, and nine methanogenic Archaea. A distinct PCR product at the expected size was confirmed for all hgcAB+ strains tested via Sanger sequencing. Additionally, we developed clade-specific degenerate quantitative PCR (qPCR) primers that targeted hgcA for each of the three dominant Hg-methylating clades. The clade-specific qPCR primers amplified hgcA from 64%, 88% and 86% of tested pure cultures of Deltaproteobacteria, Firmicutes and Archaea, respectively, and were highly specific for each clade. Amplification efficiencies and detection limits were quantified for each organism. Primer sensitivity varied among species based on sequence conservation. Finally, to begin to evaluate the utility of our primer sets in nature, we tested hgcA and hgcAB recovery from pure cultures spiked into sand and soil. These novel quantitative molecular tools will allow for more accurate identification and quantification of the individual Hg-methylating groups of microorganisms in the environment. The resulting data will be essential in developing accurate and robust predictive models of Hg-methylation potential, ideally integrating the geochemistry of Hg methylation with the microbiology and genetics of hgcAB.
Electromagnetic Model Reliably Predicts Radar Scattering Characteristics of Airborne Organisms
NASA Astrophysics Data System (ADS)
Mirkovic, Djordje; Stepanian, Phillip M.; Kelly, Jeffrey F.; Chilson, Phillip B.
2016-10-01
The radar scattering characteristics of aerial animals are typically obtained from controlled laboratory measurements of a freshly harvested specimen. These measurements are tedious to perform, difficult to replicate, and typically yield only a small subset of the full azimuthal, elevational, and polarimetric radio scattering data. As an alternative, biological applications of radar often assume that the radar cross sections of flying animals are isotropic, since sophisticated computer models are required to estimate the 3D scattering properties of objects having complex shapes. Using the method of moments implemented in the WIPL-D software package, we show for the first time that such electromagnetic modeling techniques (typically applied to man-made objects) can accurately predict organismal radio scattering characteristics from an anatomical model: here the Brazilian free-tailed bat (Tadarida brasiliensis). The simulated scattering properties of the bat agree with controlled measurements and radar observations made during a field study of bats in flight. This numerical technique can produce the full angular set of quantitative polarimetric scattering characteristics, while eliminating many practical difficulties associated with physical measurements. Such a modeling framework can be applied for bird, bat, and insect species, and will help drive a shift in radar biology from a largely qualitative and phenomenological science toward quantitative estimation of animal densities and taxonomic identification.
Torr, Peter; Spiridonov, Sergei E; Heritage, Stuart; Wilson, Michael J
2007-03-01
1. Despite nematodes being the most abundant animals on earth, very few animal ecologists study them, probably because of the difficulties of identifying them to species by morphological methods. 2. A group of nematodes that are important both ecologically and economically is the entomopathogenic nematodes, which play a key role in regulating soil food webs and are sold throughout the world as biological insecticides, yet for which very little is known of their population ecology. 3. A novel detection and quantification method was developed for soil nematodes using real-time polymerase chain reaction (PCR), and the technique was used to estimate numbers of two closely related species of entomopathogenic nematodes, Steinernema kraussei and S. affine in 50 soil samples from 10 sites in Scotland representing two distinct habitats (woodland and grassland). 4. There was a high degree of correlation between our molecular and traditional morphological estimates of population size and our data clearly showed that Steinernema affine occurred only in grassland areas, whereas S. kraussei was found in grassland and woodland samples to a similar degree. 5. Real-time PCR offers a rapid and accurate method of detecting individual nematode species from soil samples without the need for a specialist taxonomist, and has much potential for use in studies of nematode population ecology.
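When real-time PCR is used quantitatively, as here, species abundance is read off a standard curve relating the threshold cycle to the log of template amount. A hedged Python sketch; the calibrator values are invented:

    import numpy as np

    copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])     # calibrator template copies
    ct = np.array([31.1, 27.8, 24.4, 21.0, 17.7])    # hypothetical Ct values
    m, b = np.polyfit(np.log10(copies), ct, 1)
    efficiency = 10 ** (-1 / m) - 1                  # ~1.0 means doubling/cycle
    unknown_ct = 25.6
    estimate = 10 ** ((unknown_ct - b) / m)
    print(f"slope {m:.2f}, efficiency {efficiency:.0%}, {estimate:.0f} copies")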
[Cardiac Synchronization Function Estimation Based on ASM Level Set Segmentation Method].
Zhang, Yaonan; Gao, Yuan; Tang, Liang; He, Ying; Zhang, Huie
At present, there are no accurate, quantitative methods for determining cardiac mechanical synchronism, and quantitative determination of the synchronization function of the four cardiac cavities from medical images has great clinical value. This paper uses whole-heart ultrasound image sequences and segments the left and right atria and the left and right ventricles in each frame. After segmentation, the number of pixels in each cavity in each frame is recorded, giving the areas of the four cavities across the image sequence. The area-change curves of the four cavities are then extracted, yielding the synchronization information of the four cavities. Because of the low SNR of ultrasound images, the boundaries of the cardiac cavities are vague, so the extraction of cardiac contours remains a challenging problem. Therefore, ASM model information is added to the traditional level set method to guide the curve evolution. According to the experimental results, the improved method increases the accuracy of the segmentation. Furthermore, based on the ventricular segmentation, the right and left ventricular systolic functions are evaluated, mainly from the area changes. The synchronization of the four cavities of the heart is estimated from the area and volume changes.
Quantitative assessment of 12-lead ECG synthesis using CAVIAR.
Scherer, J A; Rubel, P; Fayn, J; Willems, J L
1992-01-01
The objective of this study is to assess the performance of patient-specific, segment-specific (PSSS) synthesis of QRST complexes using CAVIAR, a new method for the serial comparison of electrocardiograms and vectorcardiograms. A collection of 250 multi-lead recordings from the Common Standards for Quantitative Electrocardiography (CSE) diagnostic pilot study is employed. QRS and ST-T segments are independently synthesized using the PSSS algorithm so that the mean-squared error between the original and estimated waveforms is minimized. CAVIAR compares the recorded and synthesized QRS and ST-T segments and calculates the mean-quadratic deviation as a measure of error. The results of this study indicate that estimated QRS complexes are good representatives of their recorded counterparts, and the integrity of the spatial information is maintained by the PSSS synthesis process. Analysis of the ST-T segments suggests that the deviations between recorded and synthesized waveforms are considerably greater than those associated with the QRS complexes. The poorer performance of the ST-T segments is attributed to magnitude normalization of the spatial loops, low-voltage passages, and noise interference. Using the mean-quadratic deviation and CAVIAR as methods of performance assessment, this study indicates that the PSSS-synthesis algorithm accurately maintains the signal information within the 12-lead electrocardiogram.
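The synthesis itself is an ordinary least-squares mapping from a measured lead subset to all 12 leads, fitted per patient (and, in PSSS, per segment). A minimal Python sketch; the choice of measured leads and the plain MSE standing in for CAVIAR's mean-quadratic deviation are assumptions:

    import numpy as np

    def synth_coeffs(ecg12, measured_idx):
        """Least-squares coefficients mapping measured leads to all 12 leads."""
        X = ecg12[:, measured_idx]                    # samples x measured leads
        W, *_ = np.linalg.lstsq(X, ecg12, rcond=None)
        return W

    rng = np.random.default_rng(0)
    ecg = rng.standard_normal((500, 12))              # stand-in 12-lead record
    W = synth_coeffs(ecg, [0, 1, 6])                  # e.g. leads I, II, V1
    recon = ecg[:, [0, 1, 6]] @ W
    print(np.mean((ecg - recon) ** 2))                # mean-quadratic deviation proxy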
Li, Ming Ze; Gao, Yuan Ke; Di, Xue Ying; Fan, Wen Yi
2016-03-01
The moisture content of forest surface soil is an important parameter in forest ecosystems, and rapid, accurate estimation of it by microwave remote sensing is of practical significance for forest ecosystem research. With a TDR-300 soil moisture meter, the moisture contents of forest surface soils in 120 sample plots at the Tahe Forestry Bureau of the Daxing'anling region in Heilongjiang Province were measured. Taking the moisture content of forest surface soil as the dependent variable and the polarization decomposition parameters of C-band quad-pol SAR data as independent variables, two types of quantitative estimation models (a multilinear regression model and a BP neural network model) for predicting the moisture content of forest surface soils were developed. The spatial distribution of the moisture content of forest surface soil at the regional scale was then derived by model inversion. Results showed that the model precision was 86.0% and 89.4%, with RMSE of 3.0% and 2.7%, for the multilinear regression model and the BP neural network model, respectively, indicating that the BP neural network model performed better than the multilinear regression model in quantitative estimation of the moisture content of forest surface soil. The spatial distribution of forest surface soil moisture content in the study area was then obtained using the BP neural network model with the quad-pol SAR data.
Horowitz, A.J.
1986-01-01
Centrifugation, settling/centrifugation, and backflush-filtration procedures have been tested for the concentration of suspended sediment from water for subsequent trace-metal analysis. Either of the first two procedures is comparable with in-line filtration and can be carried out precisely, accurately, and with a facility that makes the procedures amenable to large-scale sampling and analysis programs. There is less potential for post-sampling alteration of suspended sediment-associated metal concentrations with the centrifugation procedure because sample stabilization is accomplished more rapidly than with settling/centrifugation. Sample preservation can be achieved by chilling. Suspended sediment associated metal levels can best be determined by direct analysis but can also be estimated from the difference between a set of unfiltered-digested and filtered subsamples. However, when suspended sediment concentrations (<150 mg/L) or trace-metal levels are low, the direct analysis approach makes quantitation more accurate and precise and can be accomplished with simpler analytical procedures.
Predicting human blood viscosity in silico
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fedosov, Dmitry A.; Pan, Wenxiao; Caswell, Bruce
2011-07-05
Cellular suspensions such as blood are a part of living organisms, and their rheological and flow characteristics determine and affect the majority of vital functions. The rheological and flow properties of cell suspensions are determined by the collective dynamics of cells, their structure or arrangement, cell properties and interactions. We study these relations for blood in silico using a mesoscopic particle-based method and two different models (multi-scale/low-dimensional) of red blood cells. The models yield accurate quantitative predictions of the dependence of blood viscosity on shear rate and hematocrit. We explicitly model cell aggregation interactions and demonstrate the formation of reversible rouleaux structures resulting in a tremendous increase of blood viscosity at low shear rates and a yield stress, in agreement with experiments. The non-Newtonian behavior of such cell suspensions (e.g., shear thinning, yield stress) is analyzed and related to the suspension's microstructure, deformation and dynamics of single cells. We provide the first quantitative estimates of normal stress differences and the magnitude of aggregation forces in blood. Finally, the flexibility of the cell models allows them to be employed for quantitative analysis of a much wider class of complex fluids including cell, capsule, and vesicle suspensions.
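Shear thinning with a yield stress of the kind reported here is conventionally summarized by fitting a Herschel-Bulkley law, tau = tau_y + K*gdot^n, to the flow curve. A Python sketch with illustrative, blood-like numbers (not the paper's data):

    import numpy as np
    from scipy.optimize import curve_fit

    def herschel_bulkley(gdot, tau_y, K, n):
        """Shear stress of a yield-stress, shear-thinning fluid."""
        return tau_y + K * gdot ** n

    gdot = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])    # shear rate, 1/s
    eta = np.array([60.0, 9.5, 6.2, 4.8, 4.0]) * 1e-3   # apparent viscosity, Pa s
    tau = eta * gdot                                    # shear stress, Pa
    (tau_y, K, n), _ = curve_fit(herschel_bulkley, gdot, tau,
                                 p0=(0.005, 0.01, 0.8))
    print(tau_y, K, n)    # yield stress (Pa), consistency, flow index < 1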
Mountain Heavy Rainfall Measurement Experiments in a Subtropical Monsoon Environment
NASA Astrophysics Data System (ADS)
Jong-Dao Jou, Ben; Chi-June Jung, Ultimate; Lai, Hsiao-Wei; Feng, Lei
2014-05-01
Quantitative rainfall measurement experiments have been conducted in the Taiwan area for the past five years (since 2008), especially over complex terrain. In this paper, results from these experiments are analyzed and discussed, especially those associated with heavy rain events in the summer monsoon season. Observations from an S-band polarimetric radar (the NCAR SPOL) and an X-band vertically pointing radar are analyzed to reveal the high-resolution temporal and spatial variation of precipitation structure. May and June, the Meiyu season in the area, are months with subtropical frontal rainfall events. Mesoscale convective systems, i.e., pre-frontal squall lines and frontal convective rainbands, are very active and frequently produce heavy rain events over mountain areas. Accurate quantitative precipitation measurements are needed to meet the requirements of landslide and flood early warning. Using ground-based disdrometers and a vertically pointing radar, we have been working to improve quantitative precipitation estimation in the mountain region from the coastal operational radar. The methodology applied is presented and the potential of its application is discussed.
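Operationally, radar QPE of this kind starts from a Z-R power law, Z = a R^b, whose coefficients the disdrometer data are used to retune. A small Python sketch with the textbook Marshall-Palmer coefficients (a = 200, b = 1.6; locally retuned values would differ):

    def rain_rate(dbz, a=200.0, b=1.6):
        """Rain rate (mm/h) from reflectivity (dBZ) via Z = a R^b."""
        z_linear = 10.0 ** (dbz / 10.0)      # mm^6 m^-3
        return (z_linear / a) ** (1.0 / b)

    for dbz in (20, 35, 50):
        print(dbz, "dBZ ->", round(rain_rate(dbz), 1), "mm/h")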
Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib
2016-04-15
In quantitative PET/MR imaging, attenuation correction (AC) of PET data is markedly challenged by the need to derive accurate attenuation maps from MR images. A number of strategies have been developed for MRI-guided attenuation correction with different degrees of success. In this work, we compare the quantitative performance of three generic AC methods, including standard 3-class MR segmentation-based, advanced atlas-registration-based and emission-based approaches in the context of brain time-of-flight (TOF) PET/MRI. Fourteen patients referred for diagnostic MRI and (18)F-FDG PET/CT brain scans were included in this comparative study. For each study, PET images were reconstructed using four different attenuation maps derived from CT-based AC (CTAC) serving as reference, standard 3-class MR-segmentation, atlas-registration and emission-based AC methods. To generate 3-class attenuation maps, T1-weighted MRI images were segmented into background air, fat and soft-tissue classes followed by assignment of constant linear attenuation coefficients of 0, 0.0864 and 0.0975 cm(-1) to each class, respectively. A robust atlas-registration based AC method was developed for pseudo-CT generation using local weighted fusion of atlases based on their morphological similarity to target MR images. Our recently proposed MRI-guided maximum likelihood reconstruction of activity and attenuation (MLAA) algorithm was employed to estimate the attenuation map from TOF emission data. The performance of the different AC algorithms in terms of prediction of bones and quantification of PET tracer uptake was objectively evaluated with respect to reference CTAC maps and CTAC-PET images. Qualitative evaluation showed that the MLAA-AC method could sparsely estimate bones and accurately differentiate them from air cavities. It was found that the atlas-AC method can accurately predict bones with variable errors in defining air cavities. Quantitative assessment of bone extraction accuracy based on the Dice similarity coefficient (DSC) showed that MLAA-AC and atlas-AC resulted in DSC mean values of 0.79 and 0.92, respectively, in all patients. The MLAA-AC and atlas-AC methods predicted mean linear attenuation coefficients of 0.107 and 0.134 cm(-1), respectively, for the skull compared to the reference CTAC mean value of 0.138 cm(-1). The evaluation of the relative change in tracer uptake within 32 distinct regions of the brain with respect to CTAC PET images showed that the 3-class MRAC, MLAA-AC and atlas-AC methods resulted in quantification errors of -16.2 ± 3.6%, -13.3 ± 3.3% and 1.0 ± 3.4%, respectively. Linear regression and Bland-Altman concordance plots showed that both 3-class MRAC and MLAA-AC methods result in a significant systematic bias in PET tracer uptake, while the atlas-AC method results in a negligible bias. The standard 3-class MRAC method significantly underestimated cerebral PET tracer uptake. While current state-of-the-art MLAA-AC methods look promising, they were unable to noticeably reduce quantification errors in the context of brain imaging. Conversely, the proposed atlas-AC method provided the most accurate attenuation maps, and thus the lowest quantification bias. Copyright © 2016 Elsevier Inc. All rights reserved.
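Bone-extraction accuracy above is scored with the Dice similarity coefficient, DSC = 2|A∩B| / (|A| + |B|). A minimal Python sketch with toy masks:

    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two binary segmentations."""
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    ct_bone = np.zeros((8, 8), bool); ct_bone[2:6, 2:6] = True   # reference
    pseudo = np.zeros((8, 8), bool); pseudo[3:7, 2:6] = True     # predicted
    print(dice(ct_bone, pseudo))   # 0.75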
Markov Logic Networks for Adverse Drug Event Extraction from Text.
Natarajan, Sriraam; Bangera, Vishal; Khot, Tushar; Picado, Jose; Wazalwar, Anurag; Costa, Vitor Santos; Page, David; Caldwell, Michael
2017-05-01
Adverse drug events (ADEs) are a major concern and point of emphasis for the medical profession, government, and society. A diverse set of techniques from epidemiology, statistics, and computer science are being proposed and studied for ADE discovery from observational health data (e.g., EHR and claims data), social network data (e.g., Google and Twitter posts), and other information sources. Methodologies are needed for evaluating, quantitatively measuring, and comparing the ability of these various approaches to accurately discover ADEs. This work is motivated by the observation that text sources such as the Medline/Medinfo library provide a wealth of information on human health. Unfortunately, ADEs often result from unexpected interactions, and the connection between conditions and drugs is not explicit in these sources. Thus, in this work we address the question of whether we can quantitatively estimate relationships between drugs and conditions from the medical literature. This paper proposes and studies a state-of-the-art NLP-based extraction of ADEs from text.
Hong, Jungeui; Gresham, David
2017-11-01
Quantitative analysis of next-generation sequencing (NGS) data requires discriminating duplicate reads generated by PCR from identical molecules that are of unique origin. Typically, PCR duplicates are identified as sequence reads that align to the same genomic coordinates using reference-based alignment. However, identical molecules can be independently generated during library preparation. Misidentification of these molecules as PCR duplicates can introduce unforeseen biases during analyses. Here, we developed a cost-effective sequencing adapter design by modifying Illumina TruSeq adapters to incorporate a unique molecular identifier (UMI) while maintaining the capacity to undertake multiplexed, single-index sequencing. Incorporation of UMIs into TruSeq adapters (TrUMIseq adapters) enables identification of bona fide PCR duplicates as identically mapped reads with identical UMIs. Using TrUMIseq adapters, we show that accurate removal of PCR duplicates results in improved accuracy of both allele frequency (AF) estimation in heterogeneous populations using DNA sequencing and gene expression quantification using RNA-Seq.
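With such adapters, deduplication keys on the mapping coordinates plus the UMI, so independently generated identical molecules survive. A minimal Python sketch of that rule:

    def dedupe(reads):
        """Keep one read per (chrom, pos, strand, UMI); reads sharing all four
        are treated as PCR duplicates, first occurrence wins."""
        seen, kept = set(), []
        for chrom, pos, strand, umi, name in reads:
            key = (chrom, pos, strand, umi)
            if key not in seen:
                seen.add(key)
                kept.append(name)
        return kept

    reads = [("chr1", 100, "+", "ACGT", "r1"),
             ("chr1", 100, "+", "ACGT", "r2"),   # PCR duplicate of r1
             ("chr1", 100, "+", "TTAG", "r3")]   # same locus, distinct molecule
    print(dedupe(reads))                         # ['r1', 'r3']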
Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà
2010-03-01
Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieve analyte quantitation in the presence of unexpected interferences. Both for simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates the achievement of the so-called second-order advantage for the analyte of interest, which is known to be present for more complex higher-order richer instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.
A novel 3D imaging system for strawberry phenotyping.
He, Joe Q; Harrison, Richard J; Li, Bo
2017-01-01
Accurate and quantitative phenotypic data are vital in plant breeding programmes for assessing the performance of genotypes and making selections. Traditional strawberry phenotyping relies on the human eye to assess most external fruit quality attributes, which is time-consuming and subjective. 3D imaging is a promising high-throughput technique that allows multiple external fruit quality attributes to be measured simultaneously. A low-cost multi-view stereo (MVS) imaging system was developed, which captured data from 360° around a target strawberry fruit. A 3D point cloud of the sample was derived and analysed with custom-developed software to estimate berry height, length, width, volume, calyx size, colour and achene number. Analysis of these traits in 100 fruits showed good concordance with manual assessment methods. This study demonstrates the feasibility of an MVS-based 3D imaging system for the rapid and quantitative phenotyping of seven agronomically important external strawberry traits. With further improvement, this method could be applied in strawberry breeding programmes as a cost-effective phenotyping technique.
NASA Astrophysics Data System (ADS)
Trusiak, Maciej; Micó, Vicente; Patorski, Krzysztof; García-Monreal, Javier; Sluzewski, Lukasz; Ferreira, Carlos
2016-08-01
In this contribution we propose two Hilbert-Huang transform based algorithms for fast and accurate single-shot and two-shot quantitative phase imaging, applicable in both on-axis and off-axis configurations. In the first scheme, a single fringe pattern containing information about the biological phase sample under study is adaptively pre-filtered using an empirical mode decomposition based approach. It is then phase-demodulated by the Hilbert spiral transform, aided by principal component analysis for local fringe orientation estimation. The orientation calculation enables efficient analysis of closed fringes; it can be avoided using an arbitrarily phase-shifted two-shot Gram-Schmidt orthonormalization scheme aided by Hilbert-Huang transform pre-filtering. This two-shot approach is a trade-off between single-frame and temporal phase-shifting demodulation. The robustness of the proposed techniques is corroborated in experimental digital holographic microscopy studies of polystyrene micro-beads and red blood cells. Both algorithms compare favorably with the temporal phase-shifting scheme, which is used as a reference method.
Freitas, Mirlaine R; Matias, Stella V B G; Macedo, Renato L G; Freitas, Matheus P; Venturin, Nelson
2013-09-11
Two major weeds affecting cereal crops worldwide are Avena fatua L. (wild oat) and Lolium rigidum Gaud. (rigid ryegrass). Development of new herbicides against these weeds is therefore required; in line with this, benzoxazinones, their degradation products, and analogues have been shown to be important allelochemicals and natural herbicides. Despite earlier structure-activity studies demonstrating that the hydrophobicity (log P) of aminophenoxazines correlates with phytotoxicity, our findings for a series of benzoxazinone derivatives show no relationship between phytotoxicity and log P, nor with two other common molecular descriptors. On the other hand, a quantitative structure-activity relationship (QSAR) analysis based on molecular graphs representing structural shape, with atomic sizes and colors encoding other atomic properties, performed very accurately in predicting the phytotoxicities of these compounds against wild oat and rigid ryegrass. These QSAR models can therefore be used to estimate the phytotoxicity of new congeners of benzoxazinone herbicides toward A. fatua L. and L. rigidum Gaud.
Sehgal, Muhammad Shoaib B; Gondal, Iqbal; Dooley, Laurence S
2005-05-15
Microarray data are used in a range of application areas in biology, although often it contains considerable numbers of missing values. These missing values can significantly affect subsequent statistical analysis and machine learning algorithms so there is a strong motivation to estimate these values as accurately as possible before using these algorithms. While many imputation algorithms have been proposed, more robust techniques need to be developed so that further analysis of biological data can be accurately undertaken. In this paper, an innovative missing value imputation algorithm called collateral missing value estimation (CMVE) is presented which uses multiple covariance-based imputation matrices for the final prediction of missing values. The matrices are computed and optimized using least square regression and linear programming methods. The new CMVE algorithm has been compared with existing estimation techniques including Bayesian principal component analysis imputation (BPCA), least square impute (LSImpute) and K-nearest neighbour (KNN). All these methods were rigorously tested to estimate missing values in three separate non-time series (ovarian cancer based) and one time series (yeast sporulation) dataset. Each method was quantitatively analyzed using the normalized root mean square (NRMS) error measure, covering a wide range of randomly introduced missing value probabilities from 0.01 to 0.2. Experiments were also undertaken on the yeast dataset, which comprised 1.7% actual missing values, to test the hypothesis that CMVE performed better not only for randomly occurring but also for a real distribution of missing values. The results confirmed that CMVE consistently demonstrated superior and robust estimation capability of missing values compared with other methods for both series types of data, for the same order of computational complexity. A concise theoretical framework has also been formulated to validate the improved performance of the CMVE algorithm. The CMVE software is available upon request from the authors.
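The benchmark metric used throughout that comparison is the normalized root mean square error over the imputed entries. A short Python sketch, normalizing by the standard deviation of the true values (one common convention; the exact normalization in the paper may differ):

    import numpy as np

    def nrms(true_vals, imputed_vals):
        """Normalized RMS error between true and imputed expression values."""
        t = np.asarray(true_vals, float)
        x = np.asarray(imputed_vals, float)
        return np.sqrt(np.mean((t - x) ** 2)) / np.std(t)

    print(nrms([1.0, 2.0, 3.0, 4.0], [1.1, 1.8, 3.2, 3.9]))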
Hild, J; Gertz, C
1980-02-01
For the quantitative determination of preservatives in food, analyses were carried out by means of GLC, HPLC, and TLC according to the TAS method. Using the alkaline extract (for sample preparation see Part I), the preservatives can be analysed as the free acid or the appropriate ester on the same GLC column without any interference from co-extractives. A fast and accurate HPLC determination can be achieved by direct injection of the alkaline extract. All preservatives were well separated and detected at wavelengths of 225 and 232 nm, respectively. As a quick test for qualitative estimation, the TLC (TAS) method is suggested and a suitable solvent system is proposed.
Identification of agricultural crops by computer processing of ERTS MSS data
NASA Technical Reports Server (NTRS)
Bauer, M. E.; Cipra, J. E.
1973-01-01
Quantitative evaluation of computer-processed ERTS MSS data classifications has shown that major crop species (corn and soybeans) can be accurately identified. The classifications of satellite data over a 2000 square mile area not only covered more than 100 times the area previously covered using aircraft, but also yielded improved results through the use of temporal and spatial data in addition to the spectral information. Furthermore, training sets could be extended over far larger areas than was ever possible with aircraft scanner data, and preliminary comparisons of acreage estimates from ERTS data and ground-based systems agreed well. The results demonstrate the potential utility of this technology for obtaining crop production information.
Kinetic characterisation of primer mismatches in allele-specific PCR: a quantitative assessment.
Waterfall, Christy M; Eisenthal, Robert; Cobb, Benjamin D
2002-12-20
A novel method of estimating the kinetic parameters of Taq DNA polymerase during rapid-cycle PCR is presented. A model was constructed using a simplified sigmoid function to represent substrate accumulation during PCR, in combination with the general equation describing high-substrate inhibition for Michaelis-Menten enzymes. The PCR progress curve was viewed as a series of independent reactions in which initial rates were accurately measured for each cycle. Kinetic parameters were obtained for allele-specific PCR (AS-PCR) to examine the effect of primer mismatches on amplification. A high degree of correlation was obtained, providing evidence of substrate inhibition as a major cause of the plateau phase that occurs in the later cycles of PCR.
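The general equation for high-substrate inhibition is the familiar v = Vmax·S / (Km + S + S²/Ki); as substrate accumulates cycle by cycle, the per-cycle rate collapses, producing the plateau. A numeric Python illustration (all parameter values hypothetical):

    def rate(S, Vmax=1.0, Km=1.0, Ki=5.0):
        """Michaelis-Menten velocity with high-substrate inhibition."""
        return Vmax * S / (Km + S + S ** 2 / Ki)

    for S in (0.1, 1.0, 10.0, 100.0):        # accumulating amplicon, arb. units
        print(S, round(rate(S), 3))          # velocity rises, then collapses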
Automated analysis of plethysmograms for functional studies of hemodynamics
NASA Astrophysics Data System (ADS)
Zatrudina, R. Sh.; Isupov, I. B.; Gribkov, V. Yu.
2018-04-01
Impedance plethysmography is the most promising method for the quantitative determination of indicators of cardiovascular tone and of cerebral hemodynamics. Accurate determination of these indicators requires the correct identification of the characteristic points in the thoracic and cranial impedance plethysmograms, respectively. An algorithm for the automatic analysis of these plethysmograms is presented. The algorithm is based on the fixed temporal relationships between the phases of the cardiac cycle and the characteristic points of the plethysmogram. The proposed algorithm does not require estimation of the initial data or selection of processing parameters. Use of the method on healthy subjects showed a very low error in detecting characteristic points.
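The timing-based rule above pairs naturally with simple peak picking constrained to a fraction of the expected cardiac period. A Python sketch on a synthetic pulse waveform; the sampling rate, heart rate, and refractory fraction are assumptions:

    import numpy as np
    from scipy.signal import find_peaks

    fs = 250                                     # Hz, assumed sampling rate
    t = np.arange(0, 10, 1 / fs)
    # Synthetic pulse wave: 1.2 Hz fundamental (72 bpm) plus a small harmonic.
    pleth = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 2.4 * t)
    # Enforce a refractory window of ~60% of the expected cardiac period.
    peaks, _ = find_peaks(pleth, distance=int(0.6 * fs / 1.2))
    print(len(peaks))                            # ~12 systolic maxima in 10 s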
Abortion and mental health: quantitative synthesis and analysis of research published 1995-2009.
Coleman, Priscilla K
2011-09-01
Given the methodological limitations of recently published qualitative reviews of abortion and mental health, a quantitative synthesis was deemed necessary to represent more accurately the published literature and to provide clarity to clinicians. To measure the association between abortion and indicators of adverse mental health, with subgroup effects calculated based on comparison groups (no abortion, unintended pregnancy delivered, pregnancy delivered) and particular outcomes. A secondary objective was to calculate population-attributable risk (PAR) statistics for each outcome. After the application of methodologically based selection criteria and extraction rules to minimise bias, the sample comprised 22 studies, 36 measures of effect and 877 181 participants (163 831 experienced an abortion). Random effects pooled odds ratios were computed using adjusted odds ratios from the original studies and PAR statistics were derived from the pooled odds ratios. Women who had undergone an abortion experienced an 81% increased risk of mental health problems, and nearly 10% of the incidence of mental health problems was shown to be attributable to abortion. The strongest subgroup estimates of increased risk occurred when abortion was compared with term pregnancy and when the outcomes pertained to substance use and suicidal behaviour. This review offers the largest quantitative estimate of mental health risks associated with abortion available in the world literature. Calling into question the conclusions from traditional reviews, the results revealed a moderate to highly increased risk of mental health problems after abortion. Consistent with the tenets of evidence-based medicine, this information should inform the delivery of abortion services.
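The two quantities driving these conclusions, the random-effects pooled odds ratio and the population attributable risk, can be computed as below. This generic DerSimonian-Laird sketch with invented study values illustrates the mechanics only, not the paper's data or exact procedure:

    import numpy as np

    def pooled_or_dl(ors, lo, hi):
        """DerSimonian-Laird random-effects pooling of adjusted odds ratios."""
        y = np.log(ors)
        se = (np.log(hi) - np.log(lo)) / (2 * 1.96)   # from 95% CIs
        w = 1.0 / se ** 2
        ybar = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - ybar) ** 2)
        tau2 = max(0.0, (q - (len(y) - 1)) /
                   (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
        w_star = 1.0 / (se ** 2 + tau2)               # random-effects weights
        return np.exp(np.sum(w_star * y) / np.sum(w_star))

    def par(odds_ratio, p_exposed):
        """Population attributable risk, PAR = p(OR-1) / (1 + p(OR-1))."""
        return p_exposed * (odds_ratio - 1) / (1 + p_exposed * (odds_ratio - 1))

    or_hat = pooled_or_dl(np.array([1.6, 2.1, 1.4]),
                          np.array([1.2, 1.5, 1.0]),
                          np.array([2.1, 2.9, 2.0]))
    print(or_hat, par(or_hat, p_exposed=0.19))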
Discriminative confidence estimation for probabilistic multi-atlas label fusion.
Benkarim, Oualid M; Piella, Gemma; González Ballester, Miguel Angel; Sanroma, Gerard
2017-12-01
Quantitative neuroimaging analyses often rely on the accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and provide scalability to study large databases. Among existing approaches, multi-atlas segmentation has recently shown to yield state-of-the-art performance in automatic segmentation of brain images. It consists in propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation on the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion either rely on local patch similarity, probabilistic statistical frameworks or a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum likelihood atlas confidences are estimated using a supervised approach, explicitly modeling the relationship between local image appearances and segmentation errors produced by each of the atlases. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance features based on atlas labelmaps that are used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches in the majority of the evaluated brain structures and shows more robustness to registration errors. Copyright © 2017 Elsevier B.V. All rights reserved.
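At its simplest, the fusion step reduces to confidence-weighted voting over the warped atlas labelmaps; the learned, voxel-wise confidences of the paper would replace the flat per-atlas weights used in this Python sketch:

    import numpy as np

    def weighted_label_fusion(labelmaps, confidences):
        """Fuse atlas labelmaps by confidence-weighted voting."""
        labels = np.unique(labelmaps)
        votes = np.zeros((len(labels),) + labelmaps.shape[1:])
        for i, lab in enumerate(labels):
            votes[i] = np.sum((labelmaps == lab) * confidences, axis=0)
        return labels[np.argmax(votes, axis=0)]

    maps = np.array([[0, 1, 1, 2],          # atlas 1, four voxels
                     [0, 1, 2, 2],          # atlas 2
                     [0, 0, 1, 2]])         # atlas 3
    conf = np.array([[1.0] * 4, [0.5] * 4, [0.8] * 4])    # per-atlas confidence
    print(weighted_label_fusion(maps, conf))              # [0 1 1 2]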
[A new method of processing quantitative PCR data].
Ke, Bing-Shen; Li, Guang-Yun; Chen, Shi-Min; Huang, Xiang-Yan; Chen, Ying-Jian; Xu, Jun
2003-05-01
Standard PCR can no longer satisfy the needs of biotechnology development and clinical research. Through extensive kinetic studies, PE found a linear relation between the initial template number and the cycle at which the accumulating fluorescent product becomes detectable, and on this basis developed the quantitative PCR technique used in the PE7700 and PE5700. However, the error of this technique is too large for the needs of biotechnology development and clinical research, and a better quantitative PCR technique is needed. The mathematical model presented here builds on related work and is based on the PCR principle and a careful analysis of the molecular relationships among the main components of the PCR reaction system. The model describes the functional relation between product quantity (or fluorescence intensity), the initial template number, and the other reaction conditions, and accurately reflects the accumulation of PCR product. Accurate quantitative PCR analysis can be performed using this relation, and the accumulated PCR product quantity can be obtained from the initial template number. With this model, the error of the result depends only on the accuracy of the fluorescence intensity measurement, i.e., on the instrument used. For example, when the fluorescence intensity is accurate to six digits and the template number is between 100 and 1,000,000, the accuracy of the quantitative result exceeds 99%. Under the same conditions and on the same instrument, different analysis methods yield distinctly different errors; processing data with the model proposed here gives results roughly 80 times more accurate than the CT method.
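The CT method this model is benchmarked against quantifies relative abundance as (1+E)^-ΔΔCt, reducing to the classical 2^-ΔΔCt at perfect efficiency. A short Python sketch (all Ct values invented):

    def fold_change_ddct(ct_tgt_s, ct_ref_s, ct_tgt_c, ct_ref_c, eff=1.0):
        """Relative quantification from threshold cycles; eff = 1 gives 2^-ddCt."""
        ddct = (ct_tgt_s - ct_ref_s) - (ct_tgt_c - ct_ref_c)
        return (1.0 + eff) ** (-ddct)

    # Sample vs. control, target gene normalized to a reference gene:
    print(fold_change_ddct(22.1, 18.0, 24.6, 18.1, eff=0.93))   # ~4.9-fold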
Variation in commercial smoking mixtures containing third-generation synthetic cannabinoids.
Frinculescu, Anca; Lyall, Catherine L; Ramsey, John; Miserez, Bram
2017-02-01
Variation in ingredients (qualitative variation) and in quantity of active compounds (quantitative variation) in herbal smoking mixtures containing synthetic cannabinoids has been shown for older products. This can be dangerous to the user, as accurate and reproducible dosing is impossible. In this study, 69 packages containing third-generation cannabinoids of seven brands on the UK market in 2014 were analyzed both qualitatively and quantitatively for variation. When comparing the labels to actual active ingredients identified in the sample, only one brand was shown to be correctly labelled. The other six brands contained less, more, or ingredients other than those listed on the label. Only two brands were inconsistent, containing different active ingredients in different samples. Quantitative variation was assessed both within one package and between several packages. Within-package variation was within a 10% range for five of the seven brands, but two brands showed larger variation, up to 25% (Relative Standard Deviation). Variation between packages was significantly higher, with variation up to 38% and maximum concentration up to 2.7 times higher than the minimum concentration. Both qualitative and quantitative variation are common in smoking mixtures and endanger the user, as it is impossible to estimate the dose or to know the compound consumed when smoking commercial mixtures. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Shekhar, R.; Cothren, R. M.; Vince, D. G.; Chandra, S.; Thomas, J. D.; Cornhill, J. F.
1999-01-01
Intravascular ultrasound (IVUS) provides exact anatomy of arteries, allowing accurate quantitative analysis. Automated segmentation of IVUS images is a prerequisite for routine quantitative analyses. We present a new three-dimensional (3D) segmentation technique, called active surface segmentation, which detects luminal and adventitial borders in IVUS pullback examinations of coronary arteries. The technique was validated against expert tracings by computing correlation coefficients (range 0.83-0.97) and William's index values (range 0.37-0.66). The technique was statistically accurate, robust to image artifacts, and capable of segmenting a large number of images rapidly. Active surface segmentation enabled geometrically accurate 3D reconstruction and visualization of coronary arteries and volumetric measurements.
NASA Astrophysics Data System (ADS)
Sun, Aihui; Tian, Xiaolin; Kong, Yan; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng
2018-01-01
As a lensfree imaging technique, the ptychographic iterative engine (PIE) method can provide both quantitative sample amplitude and phase distributions while avoiding aberration. However, it requires field-of-view (FoV) scanning, which often relies on mechanical translation; this not only slows down the measurement but also introduces mechanical errors that degrade both the resolution and the accuracy of the retrieved information. To achieve highly accurate quantitative imaging at high speed, a digital micromirror device (DMD) is adopted in PIE, with the large-FoV scanning controlled by coding the on/off states of the DMD. Measurements on biological samples as well as a USAF resolution target demonstrate the high resolution of quantitative imaging with the proposed system. Considering its fast and accurate imaging capability, it is believed the DMD-based PIE technique provides a potential solution for medical observation and measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abeykoon, A. M. Milinda; Hu, Hefei; Wu, Lijun
2015-01-30
Different protocols for calibrating electron pair distribution function (ePDF) measurements are explored and described for quantitative studies on nanomaterials. It is found that the most accurate approach to determine the camera length is to use a standard calibration sample of Au nanoparticles from the National Institute of Standards and Technology. Different protocols for data collection are also explored, as are possible operational errors, to find the best approaches for accurate data collection for quantitative ePDF studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abeykoon, A. M. Milinda; Hu, Hefei; Wu, Lijun
2015-02-01
We explore and describe different protocols for calibrating electron pair distribution function (ePDF) measurements for quantitative studies on nano-materials. We find that the most accurate approach to determining the camera length is to use a standard calibration sample of Au nanoparticles from the National Institute of Standards and Technology. Different protocols for data collection are also explored, as are possible operational errors, to find the best approaches for accurate data collection for quantitative ePDF studies.
Rastogi, L.; Dash, K.; Arunachalam, J.
2013-01-01
The quantitative analysis of glutathione (GSH) is important in different fields like medicine, biology, and biotechnology. Accurate quantitative measurements of this analyte have been hampered by the lack of well characterized reference standards. The proposed procedure is intended to provide an accurate and definitive method for the quantitation of GSH for reference measurements. Measurement of the stoichiometrically existing sulfur content in purified GSH offers an approach for its quantitation; calibration through an appropriately characterized reference material (CRM) for sulfur would then provide a methodology for the certification of GSH quantity that is traceable to the SI (International System of Units). The inductively coupled plasma optical emission spectrometry (ICP-OES) approach negates the need for any sample digestion. The sulfur content of the purified GSH is quantitatively converted into sulfate ions by microwave-assisted UV digestion in the presence of hydrogen peroxide prior to ion chromatography (IC) measurements. The measurement of sulfur by ICP-OES and IC (as sulfate) using the "high performance" methodology could be useful for characterizing primary calibration standards and certified reference materials with low uncertainties. The relative expanded uncertainties (% U) expressed at the 95% confidence interval for ICP-OES analyses varied from 0.1% to 0.3%, while in the case of IC, they were between 0.2% and 1.2%. The described methods are more suitable for characterizing primary calibration standards and certifying reference materials of GSH than for routine measurements. PMID:29403814
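Because each GSH molecule contains exactly one sulfur atom, the sulfur assay maps one-to-one onto the GSH amount. A minimal sketch of the stoichiometric conversion (the molar masses are standard values; the sample figure is illustrative):

```python
M_S = 32.06     # g/mol, sulfur
M_GSH = 307.32  # g/mol, glutathione (C10H17N3O6S), one S atom per molecule

def gsh_mass_fraction(sulfur_mass_fraction):
    """Moles of sulfur measured (by ICP-OES, or by IC as sulfate) equal
    moles of GSH, so the mass fraction scales by the molar-mass ratio."""
    return sulfur_mass_fraction * (M_GSH / M_S)

# Pure GSH is 32.06/307.32 = 10.43% sulfur by mass; a sample assayed at
# 10.38% sulfur therefore corresponds to about 99.5% GSH purity.
print(f"{gsh_mass_fraction(0.1038):.3f}")
```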
Information-Driven Active Audio-Visual Source Localization
Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph
2015-01-01
We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619
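A minimal sketch of the particle-filter idea described above, reduced to bearing-only localization of a static source in the plane (all positions and noise levels are made-up values; the information-gain action selection is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
source = np.array([3.0, 2.0])                    # true source position (unknown)
particles = rng.uniform(-5, 5, size=(5000, 2))   # prior over source position
sigma = 0.2                                      # assumed bearing noise (rad)

# The robot moves between measurements; different viewpoints make the
# bearing-only problem observable and successively shrink the posterior.
for robot in [np.array([0.0, 0.0]), np.array([2.0, -1.0]), np.array([-1.0, 1.5])]:
    d = source - robot
    bearing = np.arctan2(d[1], d[0]) + rng.normal(0, sigma)   # noisy measurement
    predicted = np.arctan2(particles[:, 1] - robot[1], particles[:, 0] - robot[0])
    err = np.angle(np.exp(1j * (predicted - bearing)))        # wrapped angle error
    weights = np.exp(-0.5 * (err / sigma) ** 2)
    weights /= weights.sum()
    # Multinomial resampling plus a little jitter keeps the set focused.
    idx = np.searchsorted(np.cumsum(weights), rng.uniform(0, 1, len(particles)))
    particles = particles[idx] + rng.normal(0, 0.05, particles.shape)

print("estimate:", particles.mean(axis=0))  # converges toward the true source
```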
Lebensohn, Ricardo A.; Zecevic, Miroslav; Knezevic, Marko; ...
2015-12-15
Here, this work presents estimations of average intragranular fluctuations of lattice rotation rates in polycrystalline materials, obtained by means of the viscoplastic self-consistent (VPSC) model. These fluctuations give a tensorial measure of the trend of misorientation developing inside each single crystal grain representing a polycrystalline aggregate. We first report details of the algorithm implemented in the VPSC code to estimate these fluctuations, which are then validated by comparison with corresponding full-field calculations. Next, we present predictions of average intragranular fluctuations of lattice rotation rates for cubic aggregates, which are rationalized by comparison with experimental evidence on annealing textures of fcc and bcc polycrystals deformed in tension and compression, respectively, as well as with measured intragranular misorientation distributions in a Cu polycrystal deformed in tension. The orientation-dependent and micromechanically-based estimations of intragranular misorientations that can be derived from the present implementation are necessary to formulate sound sub-models for the prediction of quantitatively accurate deformation textures, grain fragmentation, and recrystallization textures using the VPSC approach.
Estimation of Land Surface Fluxes and Their Uncertainty via Variational Data Assimilation Approach
NASA Astrophysics Data System (ADS)
Abdolghafoorian, A.; Farhadi, L.
2016-12-01
Accurate estimation of land surface heat and moisture fluxes, as well as root zone soil moisture, is crucial in various hydrological, meteorological, and agricultural applications. "In situ" measurements of these fluxes are costly and cannot be readily scaled to the large areas relevant to weather and climate studies. Therefore, there is a need for techniques to make quantitative estimates of heat and moisture fluxes using land surface state variables. In this work, we applied a novel approach based on the variational data assimilation (VDA) methodology to estimate land surface fluxes and the soil moisture profile from the land surface states. This study accounts for the strong linkage between the terrestrial water and energy cycles by coupling the dual-source energy balance equation with the water balance equation through the mass flux of evapotranspiration (ET). Heat diffusion and moisture diffusion into the soil column are adjoined to the cost function as constraints. This coupling results in more accurate prediction of land surface heat and moisture fluxes, and consequently of soil moisture at multiple depths, with the high temporal frequency required in many hydrological, environmental and agricultural applications. One of the key limitations of the VDA technique is its tendency to be ill-posed, meaning that a continuum of possibilities exists for different parameters that produce essentially identical measurement-model misfit errors. On the other hand, the value of heat and moisture flux estimation to decision-making processes is limited if reasonable estimates of the corresponding uncertainty are not provided. To address these issues, an uncertainty analysis is performed to estimate the uncertainty of the retrieved fluxes and root zone soil moisture. The assimilation algorithm is tested with a series of experiments using a synthetic data set generated by the simultaneous heat and water (SHAW) model. We demonstrate the VDA performance by comparing the (synthetic) true measurements (including profiles of soil moisture and temperature, land surface water and heat fluxes, and root water uptake) with the VDA estimates. In addition, the feasibility of extending the proposed approach to remote sensing observations is tested by limiting the number of LST observations and soil moisture observations.
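As a sketch of the variational formulation described here (generic, not the authors' exact functional; the symbols and weighting are illustrative), the cost adjoins the two diffusion equations to the state misfit via Lagrange multipliers:

```latex
J = \sum_{i}\big(T_i^{\mathrm{obs}} - T_i\big)^{2}
  + \int_{0}^{\tau}\!\!\int_{z} \lambda_{T}\left[\frac{\partial T}{\partial t}
      - \frac{\partial}{\partial z}\!\left(K\,\frac{\partial T}{\partial z}\right)\right] dz\,dt
  + \int_{0}^{\tau}\!\!\int_{z} \lambda_{\theta}\left[\frac{\partial \theta}{\partial t}
      - \frac{\partial}{\partial z}\!\left(D\,\frac{\partial \theta}{\partial z}\right) + S_{\mathrm{ET}}\right] dz\,dt
```

Here T is soil temperature, θ soil moisture, K and D the heat and moisture diffusivities, S_ET the evapotranspiration sink that couples the two balances, and λ_T, λ_θ the adjoint variables; stationarity of J with respect to the states yields the adjoint equations used in the retrieval.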
Rule, Geoffrey S; Clark, Zlatuse D; Yue, Bingfang; Rockwood, Alan L
2013-04-16
Stable isotope-labeled internal standards are of great utility in providing accurate quantitation in mass spectrometry (MS). An implicit assumption has been that there is no "cross talk" between signals of the internal standard and the target analyte. In some cases, however, naturally occurring isotopes of the analyte do contribute to the signal of the internal standard. This phenomenon becomes more pronounced for isotopically rich compounds, such as those containing sulfur, chlorine, or bromine, higher molecular weight compounds, and those at high analyte/internal standard concentration ratio. This can create nonlinear calibration behavior that may bias quantitative results. Here, we propose the use of a nonlinear but more accurate fitting of data for these situations that incorporates one or two constants determined experimentally for each analyte/internal standard combination and an adjustable calibration parameter. This fitting provides more accurate quantitation in MS-based assays where contributions from analyte to stable labeled internal standard signal exist. It can also correct for the reverse situation where an analyte is present in the internal standard as an impurity. The practical utility of this approach is described, and by using experimental data, the approach is compared to alternative fits.
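A minimal sketch of the kind of nonlinear calibration described (the hyperbolic model form and all constants are illustrative assumptions, not the authors' published equation): if a fraction of the analyte signal leaks into the internal-standard channel, the observed area ratio follows R = kx/(1 + βx) rather than a straight line.

```python
import numpy as np
from scipy.optimize import curve_fit

def ratio_model(x, k, beta):
    """Analyte/IS area ratio when the analyte's natural isotopes also
    contribute to the IS channel: R = k*x / (1 + beta*x)."""
    return k * x / (1.0 + beta * x)

# Simulated calibrators (concentration vs measured ratio), assumed values.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
rng = np.random.default_rng(2)
ratio = ratio_model(conc, 0.8, 0.02) * rng.normal(1.0, 0.01, conc.size)

(k_hat, beta_hat), _ = curve_fit(ratio_model, conc, ratio, p0=(1.0, 0.0))

# Back-calculate an unknown by inverting the fitted model: x = R/(k - beta*R).
R_unknown = 0.9
x_unknown = R_unknown / (k_hat - beta_hat * R_unknown)
```

At low analyte/IS ratios the βx term is negligible and the model collapses to the familiar linear calibration, which is why the bias only becomes apparent for isotopically rich or highly concentrated analytes.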
Quantitative Ultrasound: Transition from the Laboratory to the Clinic
NASA Astrophysics Data System (ADS)
Hall, Timothy
2014-03-01
There is a long history of development and testing of quantitative methods in medical ultrasound. From the initial attempts to scan breasts with ultrasound in the early 1950s, there was a simultaneous attempt to classify tissue as benign or malignant based on the appearance of the echo signal on an oscilloscope. Since that time, there has been substantial improvement in the ultrasound systems used, the models to describe wave propagation in random media, the methods of signal detection theory, and the combination of those models and methods into parameter estimation techniques. One particularly useful measure in ultrasonics is the acoustic differential scattering cross section per unit volume in the special case of 180° scattering (as occurs in pulse-echo ultrasound imaging), which is known as the backscatter coefficient. The backscatter coefficient, and parameters derived from it, can be used to objectively measure quantities that are used clinically to subjectively describe ultrasound images. For example, the "echogenicity" (relative ultrasound image brightness) of the renal cortex is commonly compared to that of the liver. When investigating the possibility of liver disease, it is assumed the renal cortex echogenicity is normal; when investigating the kidney, it is assumed the liver echogenicity is normal. Objective measures of backscatter remove these assumptions. There is a 30-year history of accurate estimates of acoustic backscatter coefficients with laboratory systems. Twenty years ago that ability was extended to clinical imaging systems with array transducers. Recent studies involving multiple laboratories and a variety of clinical imaging systems have demonstrated system-independent estimates of acoustic backscatter coefficients in well-characterized media (agreement within about 1.5 dB over about a 1-decade frequency range). Advancements that made this possible, the transition of this and similar capabilities into medical practice, and the prospects for quantitative image-based biomarkers will be discussed. This work was supported, in part, by NIH grants R01CA140271 and R01HD072077.
Reference-free error estimation for multiple measurement methods.
Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga
2018-01-01
We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in true values of measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte-Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in a good agreement with the corresponding least squares regression estimates against a reference.
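A minimal sketch of the generative error model described (simulation only; the bias coefficients and covariance are invented for illustration, and the MCMC inference step is omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_methods = 200, 4
true_vals = rng.uniform(5, 50, n_subjects)       # unknown true lesion loads

# Systematic error of each method: a first-order polynomial bias b0 + b1*x.
b0 = np.array([1.0, -2.0, 0.5, 0.0])
b1 = np.array([1.05, 0.95, 1.10, 1.00])

# Random errors are modeled jointly: methods 0 and 1 share measurement
# principles, so their errors correlate via the off-diagonal covariance.
cov = np.array([[4.0, 2.5, 0.0, 0.0],
                [2.5, 4.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 9.0]])
noise = rng.multivariate_normal(np.zeros(n_methods), cov, size=n_subjects)

measurements = b0 + b1 * true_vals[:, None] + noise   # (subjects, methods)
```

The framework's task is the inverse problem: given only `measurements`, recover the posterior over (b0, b1, cov) and the true values, which the paper does by Markov chain Monte-Carlo sampling.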
Giese, Sven H; Zickmann, Franziska; Renard, Bernhard Y
2014-01-01
Accurate estimation, comparison and evaluation of read mapping error rates is a crucial step in the processing of next-generation sequencing data, as further analysis steps and interpretation assume the correctness of the mapping results. Current approaches are either focused on sensitivity estimation and thereby disregard specificity or are based on read simulations. Although continuously improving, read simulations are still prone to introduce a bias into the mapping error quantitation and cannot capture all characteristics of an individual dataset. We introduce ARDEN (artificial reference driven estimation of false positives in next-generation sequencing data), a novel benchmark method that estimates error rates of read mappers based on real experimental reads, using an additionally generated artificial reference genome. It allows a dataset-specific computation of error rates and the construction of a receiver operating characteristic curve. Thereby, it can be used for optimization of parameters for read mappers, selection of read mappers for a specific problem or for filtering alignments based on quality estimation. The use of ARDEN is demonstrated in a general read mapper comparison, a parameter optimization for one read mapper and an application example in single-nucleotide polymorphism discovery with a significant reduction in the number of false positive identifications. The ARDEN source code is freely available at http://sourceforge.net/projects/arden/.
How Accurate and Robust Are the Phylogenetic Estimates of Austronesian Language Relationships?
Greenhill, Simon J.; Drummond, Alexei J.; Gray, Russell D.
2010-01-01
We recently used computational phylogenetic methods on lexical data to test between two scenarios for the peopling of the Pacific. Our analyses of lexical data supported a pulse-pause scenario of Pacific settlement in which the Austronesian speakers originated in Taiwan around 5,200 years ago and rapidly spread through the Pacific in a series of expansion pulses and settlement pauses. We claimed that there was high congruence between traditional language subgroups and those observed in the language phylogenies, and that the estimated age of the Austronesian expansion at 5,200 years ago was consistent with the archaeological evidence. However, the congruence between the language phylogenies and the evidence from historical linguistics was not quantitatively assessed using tree comparison metrics. The robustness of the divergence time estimates to different calibration points was also not investigated exhaustively. Here we address these limitations by using a systematic tree comparison metric to calculate the similarity between the Bayesian phylogenetic trees and the subgroups proposed by historical linguistics, and by re-estimating the age of the Austronesian expansion using only the most robust calibrations. The results show that the Austronesian language phylogenies are highly congruent with the traditional subgroupings, and the date estimates are robust even when calculated using a restricted set of historical calibrations. PMID:20224774
Pirat, Bahar; Little, Stephen H.; Igo, Stephen R.; McCulloch, Marti; Nosé, Yukihiko; Hartley, Craig J.; Zoghbi, William A.
2012-01-01
Objective The proximal isovelocity surface area (PISA) method is useful in the quantitation of aortic regurgitation (AR). We hypothesized that actual measurement of PISA provided by real-time 3-dimensional (3D) color Doppler yields more accurate regurgitant volumes than those estimated by 2-dimensional (2D) color Doppler PISA. Methods We developed a pulsatile flow model for AR with an imaging chamber in which interchangeable regurgitant orifices with defined shapes and areas were incorporated. An ultrasonic flow meter was used to calculate the reference regurgitant volumes. A total of 29 different flow conditions for 5 orifices with different shapes were tested at a rate of 72 beats/min. 2D PISA was calculated as 2πr², and 3D PISA was measured from 8 equidistant radial planes of the 3D PISA surface. Regurgitant volume was derived as PISA × aliasing velocity × time velocity integral of AR/peak AR velocity. Results Regurgitant volumes by flow meter ranged between 12.6 and 30.6 mL/beat (mean 21.4 ± 5.5 mL/beat). Regurgitant volumes estimated by 2D PISA correlated well with volumes measured by flow meter (r = 0.69); however, a significant underestimation was observed (y = 0.5x + 0.6). Correlation with flow meter volumes was stronger for 3D PISA-derived regurgitant volumes (r = 0.83); significantly less underestimation of regurgitant volumes was seen, with a regression line close to identity (y = 0.9x + 3.9). Conclusion Direct measurement of PISA is feasible, without geometric assumptions, using real-time 3D color Doppler. Calculation of aortic regurgitant volumes with 3D color Doppler using this methodology is more accurate than the conventional 2D method with its hemispheric PISA assumption. PMID:19168322
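The flow-convergence arithmetic stated in the Methods is simple enough to sketch directly (all numbers below are toy values in the range reported, not data from the study):

```python
import math

def regurgitant_volume(pisa_cm2, aliasing_vel_cm_s, vti_cm, peak_vel_cm_s):
    """RVol = PISA x aliasing velocity x (VTI of AR jet / peak AR velocity).
    Units: cm^2 * cm/s * s = cm^3 = mL per beat."""
    return pisa_cm2 * aliasing_vel_cm_s * vti_cm / peak_vel_cm_s

# 2D PISA assumes a hemisphere of radius r; 3D PISA is measured directly.
r_cm = 0.4
pisa_2d = 2.0 * math.pi * r_cm ** 2
print(regurgitant_volume(pisa_2d, 38.0, 250.0, 500.0))  # ~19 mL/beat
```

The study's finding is that replacing the hemispheric 2πr² assumption with the directly measured 3D surface area removes most of the systematic underestimation.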
Blancett, Candace D; Fetterer, David P; Koistinen, Keith A; Morazzani, Elaine M; Monninger, Mitchell K; Piper, Ashley E; Kuehl, Kathleen A; Kearney, Brian J; Norris, Sarah L; Rossi, Cynthia A; Glass, Pamela J; Sun, Mei G
2017-10-01
A method for accurate quantitation of virus particles has long been sought, but a perfect method still eludes the scientific community. Electron Microscopy (EM) quantitation is a valuable technique because it provides direct morphology information and counts of all viral particles, whether or not they are infectious. In the past, EM negative stain quantitation methods have been cited as inaccurate, non-reproducible, and with detection limits that were too high to be useful. To improve accuracy and reproducibility, we have developed a method termed Scanning Transmission Electron Microscopy - Virus Quantitation (STEM-VQ), which simplifies sample preparation and uses a high throughput STEM detector in a Scanning Electron Microscope (SEM) coupled with commercially available software. In this paper, we demonstrate STEM-VQ with an alphavirus stock preparation to present the method's accuracy and reproducibility, including a comparison of STEM-VQ to viral plaque assay and the ViroCyt Virus Counter. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Rigour in quantitative research.
Claydon, Leica Sarah
2015-07-22
This article, which forms part of the research series, addresses scientific rigour in quantitative research. It explores the basis and use of quantitative research and the nature of scientific rigour. It examines how the reader may determine whether quantitative research results are accurate, the questions that should be asked to determine accuracy and the checklists that may be used in this process. Quantitative research has advantages in nursing, since it can provide numerical data to help answer questions encountered in everyday practice.
Distortion Correction of OCT Images of the Crystalline Lens: GRIN Approach
Siedlecki, Damian; de Castro, Alberto; Gambra, Enrique; Ortiz, Sergio; Borja, David; Uhlhorn, Stephen; Manns, Fabrice; Marcos, Susana; Parel, Jean-Marie
2012-01-01
Purpose To propose a method to correct Optical Coherence Tomography (OCT) images of the posterior surface of the crystalline lens incorporating its gradient index (GRIN) distribution, and to explore its possibilities for posterior surface shape reconstruction in comparison to existing methods of correction. Methods 2-D images of 9 human lenses were obtained with a time-domain OCT system. The shape of the posterior lens surface was corrected using the proposed iterative correction method. The parameters defining the GRIN distribution used for the correction were taken from a previous publication. The results of correction were evaluated relative to the nominal surface shape (accessible in vitro) and compared to the performance of two other existing methods (simple division, refraction correction: assuming a homogeneous index). Comparisons were made in terms of posterior surface radius, conic constant, root mean square, peak to valley and lens thickness shifts from the nominal data. Results Differences in the retrieved radius and conic constant were not statistically significant across methods. However, GRIN distortion correction with optimal shape GRIN parameters provided more accurate estimates of the posterior lens surface, in terms of RMS and peak values, with errors less than 6 μm and 13 μm respectively, on average. Thickness was also more accurately estimated with the new method, with a mean discrepancy of 8 μm. Conclusions The posterior surface of the crystalline lens and lens thickness can be accurately reconstructed from OCT images, with the accuracy improving with an accurate model of the GRIN distribution. The algorithm can be used to improve quantitative knowledge of the crystalline lens from OCT imaging in vivo. Although the improvements over other methods are modest in 2-D, it is expected that 3-D imaging will fully exploit the potential of the technique. The method will also benefit from increasing experimental data of GRIN distribution in the lens of larger populations. PMID:22466105
Distortion correction of OCT images of the crystalline lens: gradient index approach.
Siedlecki, Damian; de Castro, Alberto; Gambra, Enrique; Ortiz, Sergio; Borja, David; Uhlhorn, Stephen; Manns, Fabrice; Marcos, Susana; Parel, Jean-Marie
2012-05-01
To propose a method to correct optical coherence tomography (OCT) images of the posterior surface of the crystalline lens incorporating its gradient index (GRIN) distribution and explore its possibilities for posterior surface shape reconstruction in comparison to existing methods of correction. Two-dimensional images of nine human lenses were obtained with a time-domain OCT system. The shape of the posterior lens surface was corrected using the proposed iterative correction method. The parameters defining the GRIN distribution used for the correction were taken from a previous publication. The results of correction were evaluated relative to the nominal surface shape (accessible in vitro) and compared with the performance of two other existing methods (simple division, refraction correction: assuming a homogeneous index). Comparisons were made in terms of posterior surface radius, conic constant, root mean square, peak to valley, and lens thickness shifts from the nominal data. Differences in the retrieved radius and conic constant were not statistically significant across methods. However, GRIN distortion correction with optimal shape GRIN parameters provided more accurate estimates of the posterior lens surface in terms of root mean square and peak values, with errors <6 and 13 μm, respectively, on average. Thickness was also more accurately estimated with the new method, with a mean discrepancy of 8 μm. The posterior surface of the crystalline lens and lens thickness can be accurately reconstructed from OCT images, with the accuracy improving with an accurate model of the GRIN distribution. The algorithm can be used to improve quantitative knowledge of the crystalline lens from OCT imaging in vivo. Although the improvements over other methods are modest in two dimensions, it is expected that three-dimensional imaging will fully exploit the potential of the technique. The method will also benefit from increasing experimental data of GRIN distribution in the lens of larger populations.
Li, Cuixia; Zuo, Jing; Zhang, Li; Chang, Yulei; Zhang, Youlin; Tu, Langping; Liu, Xiaomin; Xue, Bin; Li, Qiqing; Zhao, Huiying; Zhang, Hong; Kong, Xianggui
2016-12-09
Accurate quantitation of intracellular pH (pHi) is of great importance in revealing cellular activities and providing early warning of diseases. A series of fluorescence-based nano-bioprobes composed of different nanoparticles or/and dye pairs have already been developed for pHi sensing. To date, biological auto-fluorescence background upon UV-Vis excitation and severe photo-bleaching of dyes are the two main factors impeding accurate quantitative detection of pHi. Herein, we have developed a self-ratiometric luminescence nanoprobe based on Förster resonant energy transfer (FRET) for probing pHi, in which pH-sensitive fluorescein isothiocyanate (FITC) and upconversion nanoparticles (UCNPs) serve as energy acceptor and donor, respectively. Under 980 nm excitation, the upconversion emission bands at 475 nm and 645 nm of NaYF4:Yb3+, Tm3+ UCNPs were used as the pHi response and self-ratiometric reference signals, respectively. This direct quantitative sensing approach circumvents the traditional software-based post-processing of images, which may lead to relatively large uncertainty in the results. Owing to the efficient FRET and the absence of fluorescence background, highly sensitive and accurate sensing has been achieved, with a sensitivity of 3.56 per unit change in pHi over the range 3.0-7.0 and a deviation of less than 0.43. This approach should facilitate research in pHi-related areas and the development of intracellular drug delivery systems.
NASA Astrophysics Data System (ADS)
Li, Cuixia; Zuo, Jing; Zhang, Li; Chang, Yulei; Zhang, Youlin; Tu, Langping; Liu, Xiaomin; Xue, Bin; Li, Qiqing; Zhao, Huiying; Zhang, Hong; Kong, Xianggui
2016-12-01
Accurate quantitation of intracellular pH (pHi) is of great importance in revealing cellular activities and providing early warning of diseases. A series of fluorescence-based nano-bioprobes composed of different nanoparticles or/and dye pairs have already been developed for pHi sensing. To date, biological auto-fluorescence background upon UV-Vis excitation and severe photo-bleaching of dyes are the two main factors impeding accurate quantitative detection of pHi. Herein, we have developed a self-ratiometric luminescence nanoprobe based on Förster resonant energy transfer (FRET) for probing pHi, in which pH-sensitive fluorescein isothiocyanate (FITC) and upconversion nanoparticles (UCNPs) serve as energy acceptor and donor, respectively. Under 980 nm excitation, the upconversion emission bands at 475 nm and 645 nm of NaYF4:Yb3+, Tm3+ UCNPs were used as the pHi response and self-ratiometric reference signals, respectively. This direct quantitative sensing approach circumvents the traditional software-based post-processing of images, which may lead to relatively large uncertainty in the results. Owing to the efficient FRET and the absence of fluorescence background, highly sensitive and accurate sensing has been achieved, with a sensitivity of 3.56 per unit change in pHi over the range 3.0-7.0 and a deviation of less than 0.43. This approach should facilitate research in pHi-related areas and the development of intracellular drug delivery systems.
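A minimal sketch of the self-ratiometric readout described in both records above (the linear calibration form and intercept are assumptions; only the 3.56-per-pH-unit sensitivity comes from the abstract):

```python
def ph_from_ratio(i_475, i_645, slope=3.56, intercept=0.0):
    """The 475 nm upconversion band is modulated by pH-sensitive FITC,
    while the 645 nm band is the internal reference; a linear calibration
    ratio = slope*pH + intercept maps the ratio back to pHi. The intercept
    must be calibrated for each probe batch (hypothetical value here)."""
    return (i_475 / i_645 - intercept) / slope
```

Because both bands are excited by the same 980 nm source and read out simultaneously, drifts in excitation power cancel in the ratio, which is what makes the probe quantitative without image post-processing.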
Lim, Chee Wei; Tai, Siew Hoon; Lee, Lin Min; Chan, Sheot Harn
2012-07-01
The current food crisis demands unambiguous determination of mycotoxin contamination in staple foods to achieve safer food for consumption. This paper describes the first accurate LC-MS/MS method developed to analyze trichothecenes in grains by applying multiple reaction monitoring (MRM) transition and MS(3) quantitation strategies in tandem. The trichothecenes are nivalenol, deoxynivalenol, deoxynivalenol-3-glucoside, fusarenon X, 3-acetyl-deoxynivalenol, 15-acetyldeoxynivalenol, diacetoxyscirpenol, and HT-2 and T-2 toxins. Acetic acid and ammonium acetate were used to convert the analytes into their respective acetate adducts and ammonium adducts under negative and positive MS polarity conditions, respectively. The mycotoxins were separated by reversed-phase LC in a 13.5-min run, ionized using electrospray ionization, and detected by tandem mass spectrometry. Analyte-specific mass-to-charge (m/z) ratios were used to perform quantitation under MRM transition and MS(3) (linear ion trap) modes. For the recovery studies, three experiments were performed for each quantitation mode and matrix in batches over 6 days. The matrix effect was investigated at concentration levels of 20, 40, 80, 120, 160, and 200 μg kg(-1) (n = 3) in 5 g corn flour and rice flour. Extraction with acetonitrile provided a good overall recovery range of 90-108% (n = 3) at three spiking concentrations of 40, 80, and 120 μg kg(-1). A quantitation limit of 2-6 μg kg(-1) was achieved by applying the MRM transition quantitation strategy. Under MS(3) mode, a quantitation limit of 4-10 μg kg(-1) was achieved. Relative standard deviations of 2-10% and 2-11% were reported for MRM transition and MS(3) quantitation, respectively. The successful utilization of MS(3) enabled accurate analyte fragmentation pattern matching and quantitation, leading to the development of analytical methods in fields that demand both analyte specificity and fragmentation fingerprint-matching capabilities that are unavailable under MRM transition.
Lee, Seung-Jae; Serre, Marc L; van Donkelaar, Aaron; Martin, Randall V; Burnett, Richard T; Jerrett, Michael
2012-12-01
A better understanding of the adverse health effects of chronic exposure to fine particulate matter (PM2.5) requires accurate estimates of PM2.5 variation at fine spatial scales. Remote sensing has emerged as an important means of estimating PM2.5 exposures, but relatively few studies have compared remote-sensing estimates to those derived from monitor-based data. We evaluated and compared the predictive capabilities of remote sensing and geostatistical interpolation. We developed a space-time geostatistical kriging model to predict PM2.5 over the continental United States and compared the resulting predictions to estimates derived from satellite retrievals. The kriging estimate was more accurate for locations within about 100 km of a monitoring station, whereas the remote sensing estimate was more accurate for locations more than 100 km from a monitoring station. Based on this finding, we developed a hybrid map that combines the kriging and satellite-based PM2.5 estimates. We found that for most of the populated areas of the continental United States, geostatistical interpolation produced more accurate estimates than remote sensing. The differences between the estimates resulting from the two methods, however, were relatively small. In areas with extensive monitoring networks, the interpolation may provide more accurate estimates, but in the many areas of the world without such monitoring, remote sensing can provide useful exposure estimates that perform nearly as well.
Estimation of species extinction: what are the consequences when total species number is unknown?
Chen, Youhua
2014-12-01
The species-area relationship (SAR) is known to overestimate species extinction, but the underlying mechanisms remain unclear to a great extent. Here, I show that when the total species number in an area is unknown, the SAR model exaggerates the estimation of species extinction. It is proposed that to accurately estimate species extinction caused by habitat destruction, one of the principal prerequisites is an accurate count of the species present in the whole study area. One can better evaluate and compare alternative theoretical SAR models on the accurate estimation of species loss only when the exact total species number for the whole area is known. This presents an opportunity for ecologists to stimulate more research on accurately estimating Whittaker's gamma diversity for the purpose of better predicting species loss.
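A minimal sketch of the standard SAR extinction estimate the abstract critiques (the constants c and z are illustrative):

```python
def sar_species(area, c=20.0, z=0.25):
    """Power-law species-area relationship S = c * A**z."""
    return c * area ** z

def predicted_extinctions(total_species, area_before, area_after, z=0.25):
    """Classic SAR loss estimate: S_lost = S_total * (1 - (A'/A)**z).
    The estimate scales linearly with the assumed total species number,
    so an overestimated total directly inflates predicted extinctions."""
    return total_species * (1.0 - (area_after / area_before) ** z)

total = sar_species(100.0)                        # species in the intact area
print(predicted_extinctions(total, 100.0, 50.0))  # ~16% loss for 50% habitat loss
```

This makes the abstract's argument concrete: whatever the functional form of the SAR, the predicted number of extinctions is proportional to the assumed gamma diversity of the study area.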
Thierry-Chef, Isabelle; Simon, Steven L.; Weinstock, Robert M.; Kwon, Deukwoo; Linet, Martha S.
2013-01-01
The assessment of potential benefits versus harms from mammographic examinations, as described in the controversial breast cancer screening recommendations of the U.S. Preventive Services Task Force, included limited consideration of the absorbed dose to the fibroglandular tissue of the breast (glandular tissue dose), the tissue at risk for breast cancer. Epidemiological studies on cancer risks associated with diagnostic radiological examinations often lack accurate information on glandular tissue dose, and there is a clear need for better estimates of these doses. Our objective was to develop a quantitative summary of glandular tissue doses from mammography by considering sources of variation over time in key parameters including imaging protocols, x-ray target materials, voltage, filtration, incident air kerma, compressed breast thickness, and breast composition. We estimated the minimum, maximum, and mean values of glandular tissue dose for populations of exposed women within 5-year periods from 1960 to the present, with the minimum-to-maximum range likely including 90% to 95% of the entirety of the dose range from mammography in North America and Europe. Glandular tissue dose from a single view in mammography is presently about 2 mGy, about one-sixth the dose in the 1960s. The ratio of our estimates of maximum to minimum glandular tissue doses for average-size breasts was about 100 in the 1960s compared to a ratio of about 5 in recent years. Findings from our analysis provide quantitative information on glandular tissue doses from mammographic examinations which can be used in epidemiologic studies of breast cancer. PMID:21988547
NASA Astrophysics Data System (ADS)
Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi
2018-06-01
Objective. Motor maps have been widely used as indicators of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on motor thresholds stochastically estimated from motor evoked potentials, using chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. Approach. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted with the recorded data from the marmosets. Further, a computer simulation confirmed the reliability of the algorithm. Main results. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled estimation of the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Significance. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.
Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi
2018-06-01
Motor maps have been widely used as indicators of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on motor thresholds stochastically estimated from motor evoked potentials, using chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted with the recorded data from the marmosets. Further, a computer simulation confirmed the reliability of the algorithm. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled estimation of the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.
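A minimal sketch of maximum-likelihood threshold estimation from binary motor-evoked-potential outcomes, in the spirit of the threshold-hunting approach both records describe (the cumulative-Gaussian response curve and relative spread are common modeling assumptions, not the authors' modified algorithm):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def ml_threshold(intensities, responses, rel_spread=0.07):
    """Find the threshold t maximizing the likelihood of binary outcomes,
    assuming P(MEP | stimulus s) = Phi((s - t) / (rel_spread * t))."""
    s = np.asarray(intensities, float)
    r = np.asarray(responses, bool)

    def neg_log_lik(t):
        p = np.clip(norm.cdf((s - t) / (rel_spread * t)), 1e-9, 1 - 1e-9)
        return -np.sum(np.where(r, np.log(p), np.log(1 - p)))

    return minimize_scalar(neg_log_lik, bounds=(s.min(), s.max()),
                           method="bounded").x

# Stimulus intensities (arbitrary units) and whether an MEP was evoked.
print(ml_threshold([30, 35, 40, 45, 50, 55], [0, 0, 0, 1, 1, 1]))  # ~42-43
```

Estimating the threshold from the full response curve, rather than by visual inspection of evoked movements, is what gives the mapping its quantitative, test-retest-reliable character.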
Optimization of CT image reconstruction algorithms for the lung tissue research consortium (LTRC)
NASA Astrophysics Data System (ADS)
McCollough, Cynthia; Zhang, Jie; Bruesewitz, Michael; Bartholmai, Brian
2006-03-01
To create a repository of clinical data, CT images and tissue samples, and to more clearly understand the pathogenetic features of pulmonary fibrosis and emphysema, the National Heart, Lung, and Blood Institute (NHLBI) launched a cooperative effort known as the Lung Tissue Research Consortium (LTRC). The CT images for the LTRC effort must contain accurate CT numbers in order to characterize tissues, and must have high spatial resolution to show fine anatomic structures. This study was performed to optimize the CT image reconstruction algorithms to achieve these criteria. Quantitative analyses of phantom and clinical images were conducted. The ACR CT accreditation phantom, containing five regions of distinct CT attenuation (CT numbers of approximately -1000 HU, -80 HU, 0 HU, 130 HU and 900 HU) and a high-contrast spatial resolution test pattern, was scanned using CT systems from two manufacturers (General Electric (GE) Healthcare and Siemens Medical Solutions). Phantom images were reconstructed using all relevant reconstruction algorithms. Mean CT numbers and image noise (standard deviation) were measured and compared for the five materials. Clinical high-resolution chest CT images acquired on a GE CT system for a patient with diffuse lung disease were reconstructed using BONE and STANDARD algorithms and evaluated by a thoracic radiologist in terms of image quality and disease extent. The clinical BONE images were processed with a 3 × 3 × 3 median filter to simulate a thicker slice reconstructed with smoother algorithms, which have traditionally been proven to provide an accurate estimation of emphysema extent in the lungs. Using a threshold technique, the volume of emphysema (defined as the percentage of lung voxels having a CT number lower than -950 HU) was computed for the STANDARD, BONE, and filtered BONE images. The CT numbers measured in the ACR CT phantom images were accurate for all reconstruction kernels for both manufacturers. As expected, visual evaluation of the spatial resolution bar patterns demonstrated that the BONE (GE) and B46f (Siemens) algorithms showed higher spatial resolution compared to the STANDARD (GE) or B30f (Siemens) reconstruction algorithms typically used for routine body CT imaging. Only the sharper images were deemed clinically acceptable for the evaluation of diffuse lung disease (e.g., emphysema). Quantitative analyses of the extent of emphysema in patient data showed the percent volumes below the -950 HU threshold as 9.4% for the BONE reconstruction, 5.9% for the STANDARD reconstruction, and 4.7% for the filtered BONE images. Contrary to the practice of using standard-resolution CT images for the quantitation of diffuse lung disease, these data demonstrate that a single sharp reconstruction (BONE/B46f) should be used for both the qualitative and quantitative evaluation of diffuse lung disease. The sharper reconstruction images, which are required for diagnostic interpretation, provide accurate CT numbers over the range of -1000 to +900 HU and preserve the fidelity of small structures in the reconstructed images. A filtered version of the sharper images can be accurately substituted for images reconstructed with smoother kernels for comparison to previously published results.
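A minimal sketch of the emphysema quantitation described above (the toy volume is synthetic; `scipy.ndimage.median_filter` with size=3 applies the 3 × 3 × 3 filter used in the study):

```python
import numpy as np
from scipy.ndimage import median_filter

def emphysema_index(hu_volume, lung_mask, threshold=-950):
    """Percentage of lung voxels with CT number below the HU threshold."""
    return 100.0 * np.mean(hu_volume[lung_mask] < threshold)

def filtered_index(hu_volume, lung_mask, threshold=-950):
    """Median-filter a sharp-kernel (e.g. BONE) reconstruction to mimic the
    noise behaviour of a smoother kernel before thresholding."""
    return emphysema_index(median_filter(hu_volume, size=3), lung_mask, threshold)

# Toy lung: normally distributed HU values; noisier data pushes more voxels
# below -950 HU, which is why the sharp kernel reads higher than STANDARD.
rng = np.random.default_rng(6)
vol = rng.normal(-870.0, 60.0, size=(20, 20, 20))
mask = np.ones(vol.shape, dtype=bool)
print(emphysema_index(vol, mask), filtered_index(vol, mask))
```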
Effects of water on fingernail electron paramagnetic resonance dosimetry.
Zhang, Tengda; Zhao, Zhixin; Zhang, Haiying; Zhai, Hezheng; Ruan, Shuzhou; Jiao, Ling; Zhang, Wenyi
2016-09-01
Electron paramagnetic resonance (EPR) is a promising biodosimetric method, and fingernails are biomaterials sensitive to ionizing radiation. Therefore, the kinetic energy released per unit mass (kerma) can be estimated by measuring the level of free radicals within fingernails using EPR. However, to date this dosimetry has been deficient and insufficiently accurate. In the sampling processes and measurements, water plays a significant role. This paper discusses several effects of water on fingernail EPR dosimetry, including disturbance of EPR measurements and two different effects on the production of free radicals. Water that is unable to contact free radicals can promote the production of free radicals due to indirect ionizing effects. Therefore, varying water content within fingernails can lead to varying growth rates in the free radical concentration after irradiation; these two variables have a linear relationship, with a slope of 1.8143. Thus, EPR dosimetry needs to be adjusted according to the water content of an individual's fingernails. When the free radicals are exposed to water, an eliminating effect appears. Therefore, soaking fingernail pieces in water before irradiation, as many researchers have previously done, can cause estimation errors. In addition, nails need to be dehydrated before making accurate quantitative EPR measurements. © The Author 2016. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
NASA Astrophysics Data System (ADS)
Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.
2017-11-01
Transient hydraulic tomography (THT) is a robust method of aquifer characterization used to estimate the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when the head data used to constrain the inversion are limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective-parameter model, four geology-based zonation models with varying accuracy and resolution, and five geostatistical models with different prior information were calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that the highly parameterized geostatistical models perform best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details of the heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information is incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.
Capillary red blood cell velocimetry by phase-resolved optical coherence tomography
NASA Astrophysics Data System (ADS)
Tang, Jianbo; Erdener, Sefik Evren; Fu, Buyin; Boas, David A.
2018-02-01
Quantitative measurement of blood flow velocity in capillaries is challenging due to their small size (around 5-10 μm) and the discontinuous, single-file flow of RBCs within them. In this work, we present a phase-resolved Optical Coherence Tomography (OCT) method for accurate measurement of red blood cell (RBC) speed in cerebral capillaries. To account for the discontinuity of RBCs flowing in capillaries, we applied an M-mode scanning strategy that repeated A-scans at each scanning position for an extended time. As the capillary size is comparable to the OCT resolution volume (3.5 × 3.5 × 3.5 μm), we applied a high-pass filter to remove the stationary signal component so that the phase information of the dynamic component (i.e., from the moving RBC) could be enhanced to provide an accurate estimate of the RBC axial speed. The phase-resolved OCT method accurately quantifies the axial velocity of RBCs from the phase shift of the dynamic component of the signal. We validated our measurements by RBC passage velocimetry using the signal magnitude of the same OCT time series data. The proposed method proved to be a robust means of mapping capillary RBC speeds across the microvascular network.
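The phase-to-velocity conversion underlying the method is the standard phase-resolved Doppler relation; a minimal sketch with illustrative system parameters (the wavelength, refractive index, and A-scan interval are assumptions, not the authors' values):

```python
import numpy as np

def axial_velocity(dphi_rad, wavelength_m=1.3e-6, n_tissue=1.35, tau_s=1e-4):
    """Phase-resolved Doppler OCT: v_z = lambda0 * dphi / (4*pi*n*tau),
    where dphi is the phase shift between A-scans separated by tau."""
    return wavelength_m * dphi_rad / (4.0 * np.pi * n_tissue * tau_s)

# A 1 rad phase shift at a 10 kHz A-scan repeat rate maps to ~0.77 mm/s,
# well matched to typical capillary RBC speeds of order 1 mm/s.
print(axial_velocity(1.0))
```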
Markerless motion estimation for motion-compensated clinical brain imaging
NASA Astrophysics Data System (ADS)
Kyme, Andre Z.; Se, Stephen; Meikle, Steven R.; Fulton, Roger R.
2018-05-01
Motion-compensated brain imaging can dramatically reduce the artifacts and quantitative degradation associated with voluntary and involuntary subject head motion during positron emission tomography (PET), single photon emission computed tomography (SPECT) and computed tomography (CT). However, motion-compensated imaging protocols are not in widespread clinical use for these modalities. A key reason for this seems to be the lack of a practical motion tracking technology that allows for smooth and reliable integration of motion-compensated imaging protocols in the clinical setting. We seek to address this problem by investigating the feasibility of a highly versatile optical motion tracking method for PET, SPECT and CT geometries. The method requires no attached markers, relying exclusively on the detection and matching of distinctive facial features. We studied the accuracy of this method in 16 volunteers in a mock imaging scenario by comparing the estimated motion with an accurate marker-based method used in applications such as image guided surgery. A range of techniques to optimize performance of the method were also studied. Our results show that the markerless motion tracking method is highly accurate (<2 mm discrepancy against a benchmarking system) on an ethnically diverse range of subjects and, moreover, exhibits lower jitter and estimation of motion over a greater range than some marker-based methods. Our optimization tests indicate that the basic pose estimation algorithm is very robust but generally benefits from rudimentary background masking. Further marginal gains in accuracy can be achieved by accounting for non-rigid motion of features. Efficiency gains can be achieved by capping the number of features used for pose estimation provided that these features adequately sample the range of head motion encountered in the study. These proof-of-principle data suggest that markerless motion tracking is amenable to motion-compensated brain imaging and holds good promise for a practical implementation in clinical PET, SPECT and CT systems.
Patient-individualized boundary conditions for CFD simulations using time-resolved 3D angiography.
Boegel, Marco; Gehrisch, Sonja; Redel, Thomas; Rohkohl, Christopher; Hoelter, Philip; Doerfler, Arnd; Maier, Andreas; Kowarschik, Markus
2016-06-01
Hemodynamic simulations are of increasing interest for the assessment of aneurysmal rupture risk and treatment planning. Achievement of accurate simulation results requires the usage of several patient-individual boundary conditions, such as a geometric model of the vasculature but also individualized inflow conditions. We propose the automatic estimation of various parameters for boundary conditions for computational fluid dynamics (CFD) based on a single 3D rotational angiography scan, also showing contrast agent inflow. First the data are reconstructed, and a patient-specific vessel model can be generated in the usual way. For this work, we optimize the inflow waveform based on two parameters, the mean velocity and pulsatility. We use statistical analysis of the measurable velocity distribution in the vessel segment to estimate the mean velocity. An iterative optimization scheme based on CFD and virtual angiography is utilized to estimate the inflow pulsatility. Furthermore, we present methods to automatically determine the heart rate and synchronize the inflow waveform to the patient's heart beat, based on time-intensity curves extracted from the rotational angiogram. This will result in a patient-individualized inflow velocity curve. The proposed methods were evaluated on two clinical datasets. Based on the vascular geometries, synthetic rotational angiography data was generated to allow a quantitative validation of our approach against ground truth data. We observed an average error of approximately [Formula: see text] for the mean velocity, [Formula: see text] for the pulsatility. The heart rate was estimated very precisely with an average error of about [Formula: see text], which corresponds to about 6 ms error for the duration of one cardiac cycle. Furthermore, a qualitative comparison of measured time-intensity curves from the real data and patient-specific simulated ones shows an excellent match. The presented methods have the potential to accurately estimate patient-specific boundary conditions from a single dedicated rotational scan.
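A minimal sketch of estimating the heart rate from the periodicity of a time-intensity curve (TIC), as the abstract describes (the frame rate and synthetic signal are illustrative):

```python
import numpy as np

def heart_rate_bpm(tic, frame_rate_hz):
    """Return the dominant cardiac-band frequency of a TIC, in beats/min."""
    tic = np.asarray(tic, float) - np.mean(tic)      # remove the DC component
    spectrum = np.abs(np.fft.rfft(tic))
    freqs = np.fft.rfftfreq(tic.size, d=1.0 / frame_rate_hz)
    band = (freqs > 0.5) & (freqs < 3.0)             # 30-180 bpm search band
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic TIC pulsating at 1.2 Hz (72 bpm), sampled at 30 frames/s for 5 s.
t = np.arange(0, 5, 1 / 30)
print(heart_rate_bpm(100 + 10 * np.sin(2 * np.pi * 1.2 * t), 30))
```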
NASA Astrophysics Data System (ADS)
Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.; Le, Hanh N. D.; Kang, Jin U.; Roland, Per E.; Wong, Dean F.; Rahmim, Arman
2017-02-01
Fluorescence molecular tomography (FMT) is a promising tool for real-time in vivo quantification of neurotransmission (NT), as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while the spatial sparsity of the NT effects could be exploited, traditional compressive-sensing methods cannot be directly applied because the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied to the data to reduce matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via l1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1, absorption coefficient: 0.1 cm-1), with tomographic measurements made using pixelated detectors. In different experiments, fluorescent sources of varying size and intensity were simulated. The proposed reconstruction method provided accurate estimates of the fluorescent source intensity, with a 20% lower root mean square error on average compared to the pure-homotopy method for all considered source intensities and sizes. Further, compared with the conventional l2-regularized algorithm, the proposed method overall reconstructed a substantially more accurate fluorescence distribution. The proposed method shows considerable promise and will be tested using more realistic simulations and experimental setups.
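A minimal sketch of the first two stages of the pipeline: truncated-SVD filtering to reduce coherence, followed by an l1-regularized solve. Plain iterative soft-thresholding (ISTA) stands in for the homotopy method of the paper, the MLEM refinement is omitted, and the random system matrix is a stand-in for the true FMT forward model:

```python
import numpy as np

def tsvd_filter(A, y, k):
    """Project the system onto its top-k singular subspace."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return np.diag(s[:k]) @ Vt[:k], U[:, :k].T @ y

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min ||Ax - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L    # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(100, 400))          # stand-in for the FMT system matrix
x_true = np.zeros(400)
x_true[[40, 200]] = [1.0, 0.5]           # two sparse fluorescent sources
y = A @ x_true + rng.normal(0, 0.01, 100)

Ak, yk = tsvd_filter(A, y, k=80)
x_hat = ista(Ak, yk)
```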
Doblas, Ana; Sánchez-Ortiga, Emilio; Martínez-Corral, Manuel; Saavedra, Genaro; Garcia-Sucerquia, Jorge
2014-04-01
The advantages of using a telecentric imaging system in digital holographic microscopy (DHM) to study biological specimens are highlighted. To this end, the performances of nontelecentric DHM and telecentric DHM are evaluated from the quantitative phase imaging (QPI) point of view. The evaluated stability of the microscope allows single-shot QPI in DHM by using telecentric imaging systems. Quantitative phase maps of a section of the head of a Drosophila melanogaster fly and of red blood cells are obtained via single-shot DHM with no numerical postprocessing. With these maps we show that the use of telecentric DHM provides a larger field of view for a given magnification and permits more accurate QPI measurements with fewer computational operations.
Angusti, Tiziana; Pilati, Emanuela; Parente, Antonella; Carignola, Renato; Manfredi, Matteo; Cauda, Simona; Pizzigati, Elena; Dubreuil, Julien; Giammarile, Francesco; Podio, Valerio; Skanjeti, Andrea
2017-09-01
The aim of this study was the assessment of semi-quantitative salivary gland dynamic scintigraphy (SGdS) parameters, independently and in an integrated way, in order to predict primary Sjögren's syndrome (pSS). Forty-six consecutive patients (41 females; age 61 ± 11 years) with sicca syndrome were studied by SGdS after injection of 200 MBq of pertechnetate. In sixteen patients, pSS was diagnosed according to the American-European Consensus Group criteria (AECGc). Semi-quantitative parameters (uptake (UP) and excretion fraction (EF)) were obtained for each gland. ROC curves were used to determine the best cut-off values. The area under the curve (AUC) was used to estimate the accuracy of each semi-quantitative parameter. To assess the correlation between scintigraphic results and disease severity, semi-quantitative parameters were plotted versus the Sjögren's syndrome disease activity index (ESSDAI). A nomogram was built to perform an integrated evaluation of all the scintigraphic semi-quantitative data. Both UP and EF of the salivary glands were significantly lower in pSS patients compared to those in non-pSS patients (p < 0.001). ROC curves showed significantly large AUCs for both parameters (p < 0.05). Parotid UP and submandibular EF, assessed by univariate and multivariate logistic regression, showed a significant and independent correlation with pSS diagnosis (p < 0.05). No correlation was found between SGdS semi-quantitative parameters and ESSDAI. The proposed nomogram's accuracy was 87%. SGdS is an accurate and reproducible tool for the diagnosis of pSS. ESSDAI was not shown to be correlated with SGdS data. SGdS should be the first-line imaging technique in patients with suspected pSS.
Performance of 12 DIR algorithms in low-contrast regions for mass and density conserving deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeo, U. J.; Supple, J. R.; Franich, R. D.
2013-10-15
Purpose: Deformable image registration (DIR) has become a key tool for adaptive radiotherapy to account for inter- and intrafraction organ deformation. Of contemporary interest, the application to deformable dose accumulation requires accurate deformation even in low-contrast regions where dose gradients may exist within near-uniform tissues. One expects high-contrast features to generally be deformed more accurately by DIR algorithms. The authors systematically assess the accuracy of 12 DIR algorithms and quantitatively examine, in particular, low-contrast regions, where accuracy has not previously been established. Methods: This work investigates DIR algorithms in three dimensions using deformable gel (DEFGEL) [U. J. Yeo, M. L. Taylor, L. Dunn, R. L. Smith, T. Kron, and R. D. Franich, "A novel methodology for 3D deformable dosimetry," Med. Phys. 39, 2203–2213 (2012)], for application to mass- and density-conserving deformations. CT images of DEFGEL phantoms with 16 fiducial markers (FMs) implanted were acquired in deformed and undeformed states for three different representative deformation geometries. Nonrigid image registration was performed using 12 common algorithms in the public domain. The optimum parameter setup was identified for each algorithm and each was tested for deformation accuracy in three scenarios: (I) original images of the DEFGEL with 16 FMs; (II) images with eight of the FMs mathematically erased; and (III) images with all FMs mathematically erased. The deformation vector fields obtained for scenarios II and III were then applied to the original images containing all 16 FMs. The locations of the FMs estimated by the algorithms were compared to actual locations determined by CT imaging. The accuracy of the algorithms was assessed by evaluation of three-dimensional vectors between true marker locations and predicted marker locations. Results: The mean magnitude of the 16 error vectors per sample ranged from 0.3 to 3.7, 1.0 to 6.3, and 1.3 to 7.5 mm across algorithms for scenarios I to III, respectively. The greatest accuracy was exhibited by the original Horn and Schunck optical flow algorithm. In this case, for scenario III (erased FMs not contributing to driving the DIR calculation), the mean error was half that of the modified demons algorithm (which exhibited the greatest error), across all deformations. Some algorithms failed to reproduce the geometry at all, while others accurately deformed high-contrast features but not low-contrast regions, indicating poor interpolation between landmarks. Conclusions: The accuracy of DIR algorithms was quantitatively evaluated using a tissue-equivalent, mass- and density-conserving DEFGEL phantom. For the model studied, optical flow algorithms performed better than demons algorithms, with the original Horn and Schunck performing best. The degree of error is influenced more by the magnitude of displacement than by the geometric complexity of the deformation. As might be expected, deformation is estimated less accurately for low-contrast regions than for high-contrast features, and the method presented here allows quantitative analysis of the differences. The evaluation of registration accuracy through observation of the same high-contrast features that drive the DIR calculation is shown to be circular and hence misleading.
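The error metric itself is straightforward; a minimal sketch (illustrative names, not the study's code) of the per-marker 3D error vectors and their mean magnitude:

```python
import numpy as np

def fiducial_errors(true_xyz, predicted_xyz):
    """Per-marker 3D error magnitudes (mm) between CT-determined and
    DIR-predicted fiducial locations, and their mean over the sample.
    Both inputs are (n_markers, 3) coordinate arrays."""
    err = np.linalg.norm(predicted_xyz - true_xyz, axis=1)
    return err, float(err.mean())
```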
Bayesian B-spline mapping for dynamic quantitative traits.
Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong
2012-04-01
Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expression in the RR framework, B-splines have proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms interval mapping based on the maximum likelihood; (2) for a simulated dataset with a complicated growth curve simulated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered; and (3) for a simulated dataset using Legendre polynomials, the Bayesian B-spline mapping can find the same QTLs as those identified by the Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-splines in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.
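For readers unfamiliar with the machinery, a random-regression design matrix of B-spline basis functions can be built as below; this is a generic sketch with illustrative knots and degree, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design_matrix(times, knots, degree=3):
    """Columns are B-spline basis functions evaluated at the
    measurement times; the matrix maps spline coefficients (e.g.
    time-varying QTL effects) to phenotypic trajectories."""
    n_basis = len(knots) - degree - 1
    B = np.empty((len(times), n_basis))
    for j in range(n_basis):
        c = np.zeros(n_basis); c[j] = 1.0
        B[:, j] = BSpline(knots, c, degree)(times)
    return B

knots = np.r_[[0.0] * 4, 0.5, [1.0] * 4]   # clamped cubic, 1 interior knot
B = bspline_design_matrix(np.linspace(0, 1, 11), knots)  # shape (11, 5)
```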
Lim, Eelin L.; Tomita, Aoy V.; Thilly, William G.; Polz, Martin F.
2001-01-01
A novel quantitative PCR (QPCR) approach, which combines competitive PCR with constant-denaturant capillary electrophoresis (CDCE), was adapted for enumerating microbial cells in environmental samples using the marine nanoflagellate Cafeteria roenbergensis as a model organism. Competitive PCR has been used successfully for quantification of DNA in environmental samples. However, this technique is labor intensive, and its accuracy is dependent on an internal competitor, which must possess the same amplification efficiency as the target yet can be easily discriminated from the target DNA. The use of CDCE circumvented these problems, as its high resolution permitted the use of an internal competitor which differed from the target DNA fragment by a single base and thus ensured that both sequences could be amplified with equal efficiency. The sensitivity of CDCE also enabled specific and precise detection of sequences over a broad range of concentrations. The combined competitive QPCR and CDCE approach accurately enumerated C. roenbergensis cells in eutrophic, coastal seawater at abundances ranging from approximately 10 to 10⁴ cells ml⁻¹. The QPCR cell estimates were confirmed by fluorescent in situ hybridization counts, but estimates of samples with <50 cells ml⁻¹ by QPCR were less variable. This novel approach extends the usefulness of competitive QPCR by demonstrating its ability to reliably enumerate microorganisms at a range of environmentally relevant cell concentrations in complex aquatic samples. PMID:11525983
Linking interseismic deformation with coseismic slip using dynamic rupture simulations
NASA Astrophysics Data System (ADS)
Yang, H.; He, B.; Weng, H.
2017-12-01
The largest earthquakes on Earth occur at subduction zones, sometimes accompanied by devastating tsunamis. Reducing losses from megathrust earthquakes and tsunamis demands accurate estimates of rupture scenarios for future earthquakes. Interseismic locking distribution derived from geodetic observations is often used to qualitatively evaluate future earthquake potential. However, how to quantitatively estimate the coseismic slip from the locking distribution remains challenging. Here we derive the coseismic rupture process of the 2012 Mw 7.6 Nicoya, Costa Rica, earthquake from the interseismic locking distribution using spontaneous rupture simulation. We construct a three-dimensional elastic medium with a curved fault, which is governed by the linear slip-weakening law. The initial stress on the fault is set based on the build-up stress inferred from locking and on the dynamic friction coefficient from fast-speed sliding experiments. Our numerical results for the coseismic slip distribution, moment rate function, and final earthquake moment are consistent with those derived from seismic and geodetic observations. Furthermore, we find that epicentral location affects rupture scenarios and may lead to various sizes of earthquakes given the heterogeneous stress distribution. In the Nicoya region, fewer than half of the rupture initiation regions where the locking degree is greater than 0.6 can develop into large earthquakes (Mw > 7.2). The results of location-dependent earthquake magnitudes underscore the necessity of conducting a large number of simulations to quantitatively evaluate seismic hazard from interseismic locking models.
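The linear slip-weakening law named in the abstract has a simple closed form; a sketch with illustrative parameter names:

```python
import numpy as np

def slip_weakening_strength(slip, tau_s, tau_d, d_c):
    """Linear slip-weakening friction: fault strength falls linearly
    from the static level tau_s to the dynamic level tau_d over the
    critical slip distance d_c, and stays at tau_d thereafter."""
    return np.where(slip < d_c,
                    tau_s - (tau_s - tau_d) * slip / d_c,
                    tau_d)
```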
NASA Astrophysics Data System (ADS)
Bravo, Jaime J.; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.
2016-06-01
Quantification of multiple fluorescence markers during neurosurgery has the potential to provide complementary contrast mechanisms between normal and malignant tissues, and one potential combination involves fluorescein sodium (FS) and aminolevulinic acid-induced protoporphyrin IX (PpIX). We focus on the interpretation of reflectance spectra containing contributions from elastically scattered (reflected) photons as well as fluorescence emissions from a strong fluorophore (i.e., FS). A model-based approach to extract μa and μs‧ in the presence of FS emission is validated in optical phantoms constructed with Intralipid (1% to 2% lipid) and whole blood (1% to 3% volume fraction), over a wide range of FS concentrations (0 to 1000 μg/ml). The results show that modeling reflectance as a combination of elastically scattered light and attenuation-corrected FS-based emission yielded more accurate tissue parameter estimates when compared with a nonmodified reflectance model, with reduced maximum errors for blood volume (22% versus 90%), microvascular saturation (21% versus 100%), and μs‧ (13% versus 207%). Additionally, quantitative PpIX fluorescence sampled in the same phantom as FS showed significant differences depending on the reflectance model used to estimate optical properties (i.e., maximum error 29% versus 86%). These data represent a first step toward using quantitative optical spectroscopy to guide surgeries through simultaneous assessment of FS and PpIX.
Harbert, Robert S; Nixon, Kevin C
2015-08-01
• Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate.
• Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterization to estimate the expected climate.
• Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods.
• CRACLE validates long-hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies. © 2015 Botanical Society of America, Inc.
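The core of CRACLE is a maximum joint-likelihood estimate over coexisting species. The sketch below assumes each species' tolerance is summarized by a normal density fit to its specimen climates (the published method uses more flexible tolerance characterizations); all names and numbers are illustrative.

```python
import numpy as np

def cracle_estimate(species_climates, climate_grid):
    """Maximum joint log-likelihood climate estimate: sum each
    coexisting species' log tolerance density over a candidate grid
    and return the best-supported climate value."""
    log_lik = np.zeros_like(climate_grid)
    for clim in species_climates:
        mu, sd = clim.mean(), clim.std(ddof=1)
        log_lik += -0.5 * ((climate_grid - mu) / sd) ** 2 - np.log(sd)
    return climate_grid[np.argmax(log_lik)]

rng = np.random.default_rng(1)
site = [rng.normal(mu, 3.0, 200) for mu in (14.0, 16.0, 15.0)]
print(cracle_estimate(site, np.linspace(0, 30, 601)))  # ~15 (deg C)
```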
Detection of blur artifacts in histopathological whole-slide images of endomyocardial biopsies.
Hang Wu; Phan, John H; Bhatia, Ajay K; Cundiff, Caitlin A; Shehata, Bahig M; Wang, May D
2015-01-01
Histopathological whole-slide images (WSIs) have emerged as an objective and quantitative means for image-based disease diagnosis. However, WSIs may contain acquisition artifacts that affect downstream image feature extraction and quantitative disease diagnosis. We develop a method for detecting blur artifacts in WSIs using distributions of local blur metrics. As features, these distributions enable accurate classification of WSI regions as sharp or blurry. We evaluate our method using over 1000 portions of an endomyocardial biopsy (EMB) WSI. Results indicate that local blur metrics accurately detect blurry image regions.
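A minimal version of the feature construction is sketched below, using variance of the Laplacian as a stand-in for the paper's local blur metrics; tile counts and the histogram range are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import laplace

def blur_histogram(region, n_tiles=8, bins=16, score_range=(0.0, 500.0)):
    """Histogram of a local sharpness score over tiles of a WSI region;
    the distribution serves as the feature vector for classifying the
    region as sharp or blurry."""
    h, w = region.shape
    th, tw = h // n_tiles, w // n_tiles
    scores = [laplace(region[i*th:(i+1)*th, j*tw:(j+1)*tw]).var()
              for i in range(n_tiles) for j in range(n_tiles)]
    hist, _ = np.histogram(scores, bins=bins, range=score_range,
                           density=True)
    return hist
```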
2010-01-01
High-throughput genotype data can be used to identify genes important for local adaptation in wild populations, phenotypes in lab stocks, or disease-related traits in human medicine. Here we advance microarray-based genotyping for population genomics with Restriction Site Tiling Analysis. The approach simultaneously discovers polymorphisms and provides quantitative genotype data at 10,000s of loci. It is highly accurate and free from ascertainment bias. We apply the approach to uncover genomic differentiation in the purple sea urchin. PMID:20403197
Game, Madhuri D.; Gabhane, K. B.; Sakarkar, D. M.
2010-01-01
A simple, accurate and precise spectrophotometric method has been developed for the simultaneous estimation of clopidogrel bisulphate and aspirin by employing a first-order derivative zero-crossing method. The first-order derivative absorption at 232.5 nm (zero-crossing point of aspirin) was used for clopidogrel bisulphate, and at 211.3 nm (zero-crossing point of clopidogrel bisulphate) for aspirin. Both drugs obeyed linearity in the concentration range of 5.0 μg/ml to 25.0 μg/ml (correlation coefficient r² < 1). No interference was found between the two determined constituents or from the matrix. The method was validated statistically, and recovery studies were carried out to confirm its accuracy. PMID:21969765
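The zero-crossing technique reduces to differentiating the spectrum and reading the derivative amplitude where the interferent vanishes; a minimal sketch with illustrative names (calibration omitted):

```python
import numpy as np

def derivative_amplitude(wavelengths_nm, absorbance, read_nm):
    """First-derivative spectrophotometry: differentiate the absorbance
    spectrum and read the amplitude at the interferent's zero-crossing
    wavelength, where only the analyte contributes."""
    dA = np.gradient(absorbance, wavelengths_nm)
    return float(np.interp(read_nm, wavelengths_nm, dA))

# e.g. clopidogrel bisulphate read at 232.5 nm (aspirin's zero cross);
# concentration then follows from a linear calibration over 5-25 ug/ml.
```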
The Missing Response to Selection in the Wild.
Pujol, Benoit; Blanchet, Simon; Charmantier, Anne; Danchin, Etienne; Facon, Benoit; Marrot, Pascal; Roux, Fabrice; Scotti, Ivan; Teplitsky, Céline; Thomson, Caroline E; Winney, Isabel
2018-05-01
Although there are many examples of contemporary directional selection, evidence for responses to selection that match predictions is often missing in quantitative genetic studies of wild populations. This is despite the presence of genetic variation and selection pressures, the theoretical prerequisites for a response to selection. This conundrum can be explained by statistical issues with accurate parameter estimation, and by biological mechanisms that interfere with the response to selection. These biological mechanisms can accelerate or constrain the response. They are generally studied independently but might act simultaneously. We therefore integrated these mechanisms to explore their potential combined effect. This has implications for explaining the apparent evolutionary stasis of wild populations and for the conservation of wildlife. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Results and Validation of MODIS Aerosol Retrievals over Land and Ocean
NASA Technical Reports Server (NTRS)
Remer, L. A.; Kaufman, Y. J.; Tanre, D.; Ichoku, C.; Chu, D. A.; Mattoo, S.; Levy, R.; Martins, J. V.; Li, R.-R.; Einaudi, Franco (Technical Monitor)
2000-01-01
The MODerate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Terra spacecraft has been retrieving aerosol parameters since late February 2000. Initial qualitative checking of the products showed very promising results including matching of land and ocean retrievals at coastlines. Using AERONET ground-based radiometers as our primary validation tool, we have established quantitative validation as well. Our results show that for most aerosol types, the MODIS products fall within the pre-launch estimated uncertainties. Surface reflectance and aerosol model assumptions appear to be sufficiently accurate for the optical thickness retrieval. Dust provides a possible exception, which may be due to non-spherical effects. Over ocean the MODIS products include information on particle size, and these parameters are also validated with AERONET retrievals.
Lambert, Nathaniel D.; Pankratz, V. Shane; Larrabee, Beth R.; Ogee-Nwankwo, Adaeze; Chen, Min-hsin; Icenogle, Joseph P.
2014-01-01
Rubella remains a social and economic burden due to the high incidence of congenital rubella syndrome (CRS) in some countries. For this reason, an accurate and efficient high-throughput measure of antibody response to vaccination is an important tool. In order to measure rubella-specific neutralizing antibodies in a large cohort of vaccinated individuals, a high-throughput immunocolorimetric system was developed. Statistical interpolation models were applied to the resulting titers to refine quantitative estimates of neutralizing antibody titers relative to the assayed neutralizing antibody dilutions. This assay, including the statistical methods developed, can be used to assess the neutralizing humoral immune response to rubella virus and may be adaptable for assessing the response to other viral vaccines and infectious agents. PMID:24391140
Simulation of wave propagation in three-dimensional random media
NASA Technical Reports Server (NTRS)
Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1993-01-01
A quantitative error analysis for the simulation of wave propagation in three-dimensional random media, assuming narrow-angle scattering, is presented for plane-wave and spherical-wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
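The simulations in question follow the standard split-step scheme: paraxial free-space propagation between thin random phase screens. A minimal sketch of one step is shown below (illustrative names; the grid spacing, screen separation, and screen statistics are exactly the error sources the paper quantifies).

```python
import numpy as np

def paraxial_step(field, dx, wavelength, dz):
    """Angular-spectrum propagation of a 2D complex field over dz
    under the narrow-angle (paraxial) approximation."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kxx, kyy = np.meshgrid(kx, kx)
    H = np.exp(-1j * (kxx**2 + kyy**2) * dz / (2 * k))  # paraxial kernel
    return np.fft.ifft2(np.fft.fft2(field) * H)

def apply_screen(field, phase_screen):
    """Thin phase screen holding the integrated refractive-index
    fluctuations of one slab of the random medium."""
    return field * np.exp(1j * phase_screen)
```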
NASA Astrophysics Data System (ADS)
Takadama, Keiki; Hirose, Kazuyuki; Matsushima, Hiroyasu; Hattori, Kiyohiko; Nakajima, Nobuo
This paper proposes a sleep stage estimation method that can provide an accurate estimation for each person without attaching any devices to the body. In particular, our method learns appropriate multiple band-pass filters to extract the specific wave pattern of the heartbeat, which is required to estimate the sleep stage. For an accurate estimation, this paper employs a Learning Classifier System (LCS) as the data-mining technique and extends it to estimate the sleep stage. Extensive experiments on five subjects in varying states of health confirm the following implications: (1) the proposed method provides more accurate sleep stage estimation than the conventional method, and (2) the sleep stage estimation calculated by the proposed method is robust regardless of the physical condition of the subject.
Zhang, Li; Wu, Yuhua; Wu, Gang; Cao, Yinglong; Lu, Changming
2014-10-01
Plasmid calibrators are increasingly applied in polymerase chain reaction (PCR) analysis of genetically modified organisms (GMOs). To evaluate the commutability between plasmid DNA (pDNA) and genomic DNA (gDNA) as calibrators, a plasmid molecule, pBSTopas, was constructed, harboring a Topas 19/2 event-specific sequence and a partial sequence of the rapeseed reference gene CruA. Assays of the pDNA showed limits of detection (five copies for Topas 19/2 and CruA) and quantification (40 copies for Topas 19/2 and 20 for CruA) similar to those for the gDNA. Comparisons of plasmid and genomic standard curves indicated that the slopes, intercepts, and PCR efficiencies for pBSTopas were significantly different from those for CRM Topas 19/2 gDNA in quantitative analysis of GMOs. Three correction methods were used to calibrate the quantitative analysis of control samples using pDNA as the calibrator: model a, coefficient value a (Cva); model b, coefficient value b (Cvb); and the novel model c, coefficient formula (Cf). Cva and Cvb gave similar estimated values for the control samples, and the quantitative bias for the low-concentration sample exceeded the acceptable range of ±25% in two of the four repeats. Using Cfs to normalize the Ct values of test samples, the estimated values were very close to the reference values (bias -13.27 to 13.05%). In the validation with control samples, model c was more appropriate than Cva or Cvb. The application of Cf allowed pBSTopas to substitute for Topas 19/2 gDNA as a calibrator to accurately quantify GMOs.
Low Reynolds number wind tunnel measurements - The importance of being earnest
NASA Technical Reports Server (NTRS)
Mueller, Thomas J.; Batill, Stephen M.; Brendel, Michael; Perry, Mark L.; Bloch, Diane R.
1986-01-01
A method for obtaining two-dimensional aerodynamic force coefficients at low Reynolds numbers using a three-component external platform balance is presented. Regardless of method, however, the importance of understanding the possible influence of the test facility and instrumentation on the final results cannot be overstated. There is an uncertainty in the ability of the facility to simulate a two-dimensional flow environment due to the confinement effect of the wind tunnel and the method used to mount the airfoil. Additionally, the ability of the instrumentation to accurately measure forces and pressures has an associated uncertainty. This paper focuses on efforts taken to understand the errors introduced by the techniques and apparatus used at the University of Notre Dame, and on the importance of making an earnest estimate of the uncertainty. Although quantitative estimates of facility-induced errors are difficult to obtain, the uncertainty in measured results can be handled in a straightforward manner and provide the experimentalist, and others, with a basis to evaluate experimental results.
Stimfit: quantifying electrophysiological data with Python
Guzman, Segundo J.; Schlögl, Alois; Schmidt-Hieber, Christoph
2013-01-01
Intracellular electrophysiological recordings provide crucial insights into elementary neuronal signals such as action potentials and synaptic currents. Analyzing and interpreting these signals is essential for a quantitative understanding of neuronal information processing, and requires both fast data visualization and ready access to complex analysis routines. To achieve this goal, we have developed Stimfit, a free software package for cellular neurophysiology with a Python scripting interface and a built-in Python shell. The program supports most standard file formats for cellular neurophysiology and other biomedical signals through the Biosig library. To quantify and interpret the activity of single neurons and communication between neurons, the program includes algorithms to characterize the kinetics of presynaptic action potentials and postsynaptic currents, estimate latencies between pre- and postsynaptic events, and detect spontaneously occurring events. We validate and benchmark these algorithms, give estimation errors, and provide sample use cases, showing that Stimfit represents an efficient, accessible and extensible way to accurately analyze and interpret neuronal signals. PMID:24600389
Nonlocal maximum likelihood estimation method for denoising multiple-coil magnetic resonance images.
Rajan, Jeny; Veraart, Jelle; Van Audekerke, Johan; Verhoye, Marleen; Sijbers, Jan
2012-12-01
Effective denoising is vital for proper analysis and accurate quantitative measurements from magnetic resonance (MR) images. Although many methods have been proposed to denoise MR images, only a few deal with the estimation of the true signal from MR images acquired with phased-array coils. If the magnitude data from phased-array coils are reconstructed as the root sum of squares, then, in the absence of noise correlations and subsampling, the data are assumed to follow a noncentral-χ distribution. However, when the k-space is subsampled to increase the acquisition speed (as in GRAPPA-like methods), the noise becomes spatially varying. In this note, we propose a method to denoise MR images acquired with multiple coils. Both the noncentral-χ distribution and the spatially varying nature of the noise are taken into account in the proposed method. Experiments were conducted on both simulated and real data sets to validate and demonstrate the effectiveness of the proposed method. Copyright © 2012 Elsevier Inc. All rights reserved.
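The paper's nonlocal ML estimator is beyond a few lines, but the moment relation it builds on is simple: for L-coil root-sum-of-squares data, E[M²] = A² + 2Lσ². A sketch of the corresponding bias correction with a spatially varying noise map follows (an illustration of the underlying relation, not the authors' estimator):

```python
import numpy as np

def nc_chi_bias_correction(magnitude, sigma_map, n_coils):
    """Estimate the underlying signal A from nc-chi magnitude data M
    using E[M^2] = A^2 + 2*L*sigma^2, with a voxelwise noise map to
    accommodate spatially varying (e.g. GRAPPA) noise."""
    a2 = magnitude**2 - 2.0 * n_coils * sigma_map**2
    return np.sqrt(np.maximum(a2, 0.0))
```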
Baston, David S.; Denison, Michael S.
2011-01-01
The chemically activated luciferase expression (CALUX) system is a mechanistically based recombinant luciferase reporter gene cell bioassay used in combination with chemical extraction and clean-up methods for the detection and relative quantitation of 2,3,7,8-tetrachlorodibenzo-p-dioxin and related dioxin-like halogenated aromatic hydrocarbons in a wide variety of sample matrices. While sample extracts containing complex mixtures of chemicals can produce a variety of distinct concentration-dependent luciferase induction responses in CALUX cells, these effects are produced through a common mechanism of action (i.e., the Ah receptor (AhR)), allowing normalization of results and sample potency determination. Here we describe the diversity of CALUX responses to PCDD/Fs from sediment and soil extracts, report the occurrence of superinduction of the CALUX bioassay, and describe a mechanistically based approach for the normalization of superinduction data that results in a more accurate estimation of the relative potency of such sample extracts. PMID:21238730
Occupational exposure decisions: can limited data interpretation training help improve accuracy?
Logan, Perry; Ramachandran, Gurumurthy; Mulhausen, John; Hewett, Paul
2009-06-01
Accurate exposure assessments are critical for ensuring that potentially hazardous exposures are properly identified and controlled. The availability and accuracy of exposure assessments can determine whether resources are appropriately allocated to engineering and administrative controls, medical surveillance, personal protective equipment, and other programs designed to protect workers. A desktop study was performed using videos, task information, and sampling data to evaluate the accuracy and potential bias of participants' exposure judgments. Desktop exposure judgments were obtained from occupational hygienists for material handling jobs with small air sampling data sets (0-8 samples) and without the aid of computers. In addition, data interpretation tests (DITs) were administered to participants, in which they were asked to estimate the 95th percentile of an underlying log-normal exposure distribution from small data sets. Participants then received exposure data interpretation, or 'rule of thumb', training, which included a simple set of rules for estimating 95th percentiles for small data sets from a log-normal population. A DIT was given to each participant before and after the rule of thumb training. Results of each DIT and the qualitative and quantitative exposure judgments were compared with a reference judgment obtained through a Bayesian probabilistic analysis of the sampling data to investigate overall judgment accuracy and bias. There were a total of 4386 participant-task-chemical judgments across all data collections: 552 qualitative judgments made without sampling data and 3834 quantitative judgments with sampling data. The DIT scores and quantitative judgments were significantly better than random chance and much improved by the rule of thumb training. In addition, the rule of thumb training reduced the amount of bias in the DITs and quantitative judgments. The mean DIT percent-correct scores increased from 47 to 64% after the rule of thumb training (P < 0.001). The accuracy of quantitative desktop judgments increased from 43 to 63% correct after the rule of thumb training (P < 0.001). The rule of thumb training did not significantly impact accuracy for qualitative desktop judgments. The finding that even simple statistical rules of thumb significantly improve judgment accuracy suggests that hygienists should routinely use statistical tools when making exposure judgments from monitoring data.
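The abstract does not reproduce the rules of thumb themselves; for orientation, the textbook parametric estimate of the quantity being judged is sketched below (an illustration, not the training material).

```python
import numpy as np

def lognormal_x95(samples):
    """Point estimate of the 95th percentile of a lognormal exposure
    distribution from a small sample: exp(mean + 1.645*sd) of the
    log-transformed measurements."""
    logs = np.log(np.asarray(samples, dtype=float))
    return float(np.exp(logs.mean() + 1.645 * logs.std(ddof=1)))

print(lognormal_x95([0.12, 0.30, 0.22, 0.45]))  # e.g. mg/m^3
```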
Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei
2014-11-01
A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively, and efficiently produce a very appropriate CF estimate for the SinMod method, even when the specified tagging parameters are unknown, owing to the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of the CF estimation. Several other available CF estimation algorithms are included for comparison. Several validation approaches that can work on real data without ground truth are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod and validate the effectiveness of RACE in improving the motion estimation performance of SinMod. Copyright © 2014 Elsevier Inc. All rights reserved.
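As a sketch of the first key technique, 1D mean-shift mode seeking with a Gaussian kernel is shown below (names illustrative; the two-direction-combination step would then combine the CF estimates from the two tag orientations):

```python
import numpy as np

def mean_shift_1d(samples, x0, bandwidth, tol=1e-6, max_iter=200):
    """Gaussian-kernel mean shift in 1D: iterate the weighted mean
    until it settles on a mode of the sample density, used here as a
    center-frequency estimate for one tagging direction."""
    x = float(x0)
    for _ in range(max_iter):
        w = np.exp(-0.5 * ((samples - x) / bandwidth) ** 2)
        x_new = float(np.sum(w * samples) / np.sum(w))
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x
```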
Probability based remaining capacity estimation using data-driven and neural network model
NASA Astrophysics Data System (ADS)
Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai
2016-05-01
Since lithium-ion batteries are assembled into packs in large numbers and are complex electrochemical devices, their monitoring and safety are key issues for applications of battery technology. An accurate estimation of battery remaining capacity is crucial for optimizing vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability-based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an nth-order RC equivalent-circuit model is combined with an electrochemical model to obtain more accurate voltage predictions. For the SOE estimation, a sliding-window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operating current profiles were performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.
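A first-order flavor of the RC equivalent-circuit model is sketched below (the paper uses an nth-order network combined with an electrochemical model; parameter values and names here are illustrative only):

```python
import numpy as np

def simulate_rc_cell(current, dt, capacity_ah, soc0, ocv_fn,
                     r0=2e-3, r1=1e-3, c1=2e3):
    """First-order RC cell model: coulomb counting for SOC plus one
    RC polarization branch for the terminal-voltage prediction.
    current is discharge-positive (A); ocv_fn maps SOC to OCV (V)."""
    soc = np.empty(len(current)); v = np.empty(len(current))
    s, u1 = soc0, 0.0
    a = np.exp(-dt / (r1 * c1))
    for k, i in enumerate(current):
        s -= i * dt / (3600.0 * capacity_ah)   # coulomb counting
        u1 = a * u1 + r1 * (1.0 - a) * i       # RC branch voltage
        soc[k], v[k] = s, ocv_fn(s) - u1 - r0 * i
    return soc, v

# Example with a hypothetical linear OCV curve:
# soc, v = simulate_rc_cell(np.full(3600, 2.0), 1.0, 2.0, 1.0,
#                           lambda s: 3.0 + 1.2 * s)
```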
NASA Astrophysics Data System (ADS)
Giap, Huan Bosco
Accurate calculation of the absorbed dose to target tumors and normal tissues in the body is an important requirement for establishing fundamental dose-response relationships for radioimmunotherapy. Two major obstacles have been the difficulty of obtaining an accurate patient-specific 3-D activity map in vivo and of calculating the resulting absorbed dose. This study investigated a methodology for 3-D internal dosimetry, which integrates the 3-D biodistribution of the radionuclide acquired from SPECT with a dose-point kernel convolution technique to provide the 3-D distribution of absorbed dose. Accurate SPECT images were reconstructed with appropriate methods for noise filtering, attenuation correction, and Compton scatter correction. The SPECT images were converted into activity maps using a calibration phantom. The activity map was convolved with a ¹³¹I dose-point kernel using a 3-D fast Fourier transform to yield a 3-D distribution of absorbed dose. The 3-D absorbed dose map was then processed to provide the absorbed dose distribution in regions of interest. This methodology can provide heterogeneous distributions of absorbed dose in volumes of any size and shape with nonuniform distributions of activity. Comparison of the activities quantitated by our SPECT methodology to the true activities in an Alderson abdominal phantom (with spleen, liver, and spherical tumor) yielded errors of -16.3% to 4.4%. Volume quantitation errors ranged from -4.0 to 5.9% for volumes greater than 88 ml. The percentage differences between the average absorbed dose rates calculated by this methodology and the MIRD S-values were 9.1% for the liver, 13.7% for the spleen, and 0.9% for the tumor. Good agreement (percent differences less than 8%) was found between the absorbed dose due to penetrating radiation calculated from this methodology and TLD measurements. More accurate estimates of the 3-D distribution of absorbed dose can be used as a guide in specifying the minimum activity to be administered to patients to deliver a prescribed absorbed dose to tumor without exceeding the toxicity limits of normal tissues.
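The central computation is a stationary convolution, which is why a 3D FFT applies; a minimal sketch follows (the kernel is assumed centered and padded to the activity grid, and both arrays should be zero-padded to suppress circular wrap-around):

```python
import numpy as np

def dose_rate_map(activity, kernel):
    """Convolve a SPECT-derived 3D activity map with a dose-point
    kernel via FFT to obtain the absorbed dose-rate distribution."""
    A = np.fft.fftn(activity)
    K = np.fft.fftn(np.fft.ifftshift(kernel))  # move kernel center to origin
    return np.real(np.fft.ifftn(A * K))
```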
Accurately estimating PSF with straight lines detected by Hough transform
NASA Astrophysics Data System (ADS)
Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong
2018-04-01
This paper presents an approach to estimating the point spread function (PSF) from low-resolution (LR) images. Existing techniques usually rely on accurate detection of the end points of the profile normal to edges. In practice, however, it is often a great challenge to accurately localize edge profiles in an LR image, which leads to a poor estimate of the PSF of the lens that took the image. To estimate the PSF precisely, this paper proposes first estimating a 1-D PSF kernel from straight lines and then robustly obtaining the 2-D PSF from the 1-D kernel by least-squares techniques and random sample consensus. The Canny operator is applied to the LR image to obtain edges, and the Hough transform is then used to extract straight lines of all orientations. Estimating the 1-D PSF kernel from straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is investigated on both natural and synthetic images. Experimental results show that the proposed method outperforms the state of the art and does not rely on accurate edge detection.
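The front end of the method is standard; a sketch of the line-collection stage with OpenCV (Canny and Hough thresholds are illustrative assumptions):

```python
import cv2
import numpy as np

def collect_lines(gray_u8):
    """Canny edges followed by the standard Hough transform; each
    returned (rho, theta) pair is a straight line whose cross-profiles
    would feed the 1-D PSF kernel estimate. Input is an 8-bit
    grayscale image."""
    edges = cv2.Canny(gray_u8, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```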
Measurement of alpha particle energy using windowless electret ion chambers.
Dua, S K; Kotrappa, P; Srivastava, R; Ebadian, M A; Stieff, L R
2002-10-01
Electret ion chambers are inexpensive, lightweight, robust, commercially available, passive, charge-integrating devices for accurate measurement of different ionizing radiations. In earlier work, a chamber with dimensions larger than the range of alpha particles, fitted with aluminized Mylar windows of different thicknesses, was used for measurement of alpha radiation. A correlation between electret mid-point voltage, alpha particle energy, and response was developed, and it was shown that this chamber could be used for estimating the effective energy of an unknown alpha source. In the present study, the electret ion chamber is used in the windowless mode, so that the alpha particles dissipate their entire energy inside the chamber volume, and the alpha particle energy is determined from first principles. This requires that the alpha disintegration rate be accurately known or measured by an alternate method. The measured energies were within 1 to 4% of the true values for different sources (230Th, 237Np, 239Pu, 241Am, and 244Cm). This method finds application in the quantitative determination of alpha energy absorbed in thin membranes and, hence, the absorbed dose.
Plant Genome Size Research: A Field In Focus
BENNETT, M. D.; LEITCH, I. J.
2005-01-01
This Special Issue contains 18 papers arising from presentations at the Second Plant Genome Size Workshop and Discussion Meeting (hosted by the Royal Botanic Gardens, Kew, 8–12 September, 2003). This preface provides an overview of these papers, setting their key contents in the broad framework of this highly active field. It also highlights a few overarching issues with wide biological impact or interest, including (1) the need to unify terminology relating to C-value and genome size, (2) the ongoing quest for accurate gold standards for plant genome size estimation, (3) how knowledge of species' DNA amounts has increased in recent years, (4) the existence, causes and significance of intraspecific variation, (5) recent progress in understanding the mechanisms and evolutionary patterns of genome size change, and (6) the impact of genome size knowledge on related biological activities such as genetic fingerprinting and quantitative genetics. The paper offers a vision of how increased knowledge and understanding of genome size will contribute to holistic genomic studies in both plants and animals in the next decade. PMID:15596455
da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C
2009-05-30
Synapses can be morphologically identified only by electron microscopy, and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of the synapses in the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), under the strong constraint of doing so in a reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
Modi, Ketan Pravinbhai; Patel, Natvarlal Manilal; Goyal, Ramesh Kishorilal
2008-03-01
A selective, precise, and accurate high-performance thin-layer chromatographic (HPTLC) method has been developed for the analysis of L-dopa in Mucuna pruriens seed extract and its formulations. The method involves densitometric evaluation of L-dopa after resolving it by HPTLC on silica gel plates with n-butanol-acetic acid-water (4.0+1.0+1.0, v/v) as the mobile phase. Densitometric analysis of L-dopa was carried out in the absorbance mode at 280 nm. The relationship between the concentration of L-dopa and the corresponding peak area was linear in the range of 100 to 1200 ng/spot. The method was validated for precision (inter- and intraday), repeatability, and accuracy. The mean recovery was 100.30%. The relative standard deviation (RSD) values for precision were in the range 0.64-1.52%. In conclusion, the proposed TLC method was found to be precise, specific, and accurate, and can be used for the identification and quantitative determination of L-dopa in herbal extracts and their formulations.
A Smoluchowski model of crystallization dynamics of small colloidal clusters
NASA Astrophysics Data System (ADS)
Beltran-Villegas, Daniel J.; Sehgal, Ray M.; Maroudas, Dimitrios; Ford, David M.; Bevan, Michael A.
2011-10-01
We investigate the dynamics of colloidal crystallization in a 32-particle system at a fixed value of interparticle depletion attraction that produces coexisting fluid and solid phases. Free energy landscapes (FELs) and diffusivity landscapes (DLs) are obtained as coefficients of 1D Smoluchowski equations using as order parameters either the radius of gyration or the average crystallinity. FELs and DLs are estimated by fitting the Smoluchowski equations to Brownian dynamics (BD) simulations using either linear fits to locally initiated trajectories or global fits to unbiased trajectories using Bayesian inference. The resulting FELs are compared to Monte Carlo Umbrella Sampling results. The accuracy of the FELs and DLs for modeling colloidal crystallization dynamics is evaluated by comparing mean first-passage times from BD simulations with analytical predictions using the FEL and DL models. While the 1D models accurately capture dynamics near the free energy minimum fluid and crystal configurations, predictions near the transition region are not quantitatively accurate. A preliminary investigation of ensemble averaged 2D order parameter trajectories suggests that 2D models are required to capture crystallization dynamics in the transition region.
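Given fitted FEL and DL arrays, the analytical mean first-passage times used for the comparison follow from the standard double integral for 1D Smoluchowski dynamics; a discretized sketch (F in units of kT; names illustrative):

```python
import numpy as np

def mfpt(x, F, D, i_start, i_absorb):
    """Mean first-passage time from x[i_start] to an absorbing point
    x[i_absorb] (reflecting left edge), on a uniform grid:
    tau = sum_y exp(F(y))/D(y) * [sum_{z<=y} exp(-F(z)) dz] dy."""
    dx = x[1] - x[0]
    inner = np.cumsum(np.exp(-F)) * dx       # inner integral up to y
    integrand = np.exp(F) / D * inner
    return float(np.sum(integrand[i_start:i_absorb]) * dx)
```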
Non-lambertian reflectance modeling and shape recovery of faces using tensor splines.
Kumar, Ritwik; Barmpoutis, Angelos; Banerjee, Arunava; Vemuri, Baba C
2011-03-01
Modeling illumination effects and pose variations of a face is of fundamental importance in the field of facial image analysis. Most of the conventional techniques that simultaneously address both of these problems work with the Lambertian assumption and thus fall short of accurately capturing the complex intensity variation that the facial images exhibit or recovering their 3D shape in the presence of specularities and cast shadows. In this paper, we present a novel Tensor-Spline-based framework for facial image analysis. We show that, using this framework, the facial apparent BRDF field can be accurately estimated while seamlessly accounting for cast shadows and specularities. Further, using local neighborhood information, the same framework can be exploited to recover the 3D shape of the face (to handle pose variation). We quantitatively validate the accuracy of the Tensor Spline model using a more general model based on the mixture of single-lobed spherical functions. We demonstrate the effectiveness of our technique by presenting extensive experimental results for face relighting, 3D shape recovery, and face recognition using the Extended Yale B and CMU PIE benchmark data sets.
A stopping criterion for the iterative solution of partial differential equations
NASA Astrophysics Data System (ADS)
Rao, Kaustubh; Malan, Paul; Perot, J. Blair
2018-01-01
A stopping criterion for iterative solution methods is presented that accurately estimates the solution error with low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating, it reverts to a less accurate but more robust, low-cost singular-value estimate that approximates the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the nonlinear and linear solution error are provided for a number of test cases in incompressible fluid dynamics.
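One common way to realize the idea (the paper adds the singular-value fallback for noisy or stagnating histories): assume roughly linear convergence and estimate the remaining error as the geometric tail of the solution changes. A sketch with illustrative names:

```python
def error_from_changes(deltas):
    """Estimate remaining solution error from successive change norms
    d_k = ||x_{k+1} - x_k||: with contraction rho = d_k / d_{k-1}, the
    remaining error is about d_k * rho / (1 - rho)."""
    d_prev, d_k = deltas[-2], deltas[-1]
    rho = d_k / d_prev
    if not 0.0 < rho < 1.0:
        return None  # noisy/stagnating: fall back to a residual-based bound
    return d_k * rho / (1.0 - rho)
```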
Evaluation of Amino Acid and Energy Utilization in Feedstuff for Swine and Poultry Diets
Kong, C.; Adeola, O.
2014-01-01
An accurate feed formulation is essential for optimizing feed efficiency and minimizing feed cost for swine and poultry production. Because energy and amino acid (AA) account for the major cost of swine and poultry diets, a precise determination of the availability of energy and AA in feedstuffs is essential for accurate diet formulations. Therefore, the methodology for determining the availability of energy and AA should be carefully selected. The total collection and index methods are 2 major procedures for estimating the availability of energy and AA in feedstuffs for swine and poultry diets. The total collection method is based on the laborious production of quantitative records of feed intake and output, whereas the index method can avoid the laborious work, but greatly relies on accurate chemical analysis of index compound. The direct method, in which the test feedstuff in a diet is the sole source of the component of interest, is widely used to determine the digestibility of nutritional components in feedstuffs. In some cases, however, it may be necessary to formulate a basal diet and a test diet in which a portion of the basal diet is replaced by the feed ingredient to be tested because of poor palatability and low level of the interested component in the test ingredients. For the digestibility of AA, due to the confounding effect on AA composition of protein in feces by microorganisms in the hind gut, ileal digestibility rather than fecal digestibility has been preferred as the reliable method for estimating AA digestibility. Depending on the contribution of ileal endogenous AA losses in the ileal digestibility calculation, ileal digestibility estimates can be expressed as apparent, standardized, and true ileal digestibility, and are usually determined using the ileal cannulation method for pigs and the slaughter method for poultry. Among these digestibility estimates, the standardized ileal AA digestibility that corrects apparent ileal digestibility for basal endogenous AA losses, provides appropriate information for the formulation of swine and poultry diets. The total quantity of energy in feedstuffs can be partitioned into different components including gross energy (GE), digestible energy (DE), metabolizable energy (ME), and net energy based on the consideration of sequential energy losses during digestion and metabolism from GE in feeds. For swine, the total collection method is suggested for determining DE and ME in feedstuffs whereas for poultry the classical ME assay and the precision-fed method are applicable. Further investigation for the utilization of ME may be conducted by measuring either heat production or energy retention using indirect calorimetry or comparative slaughter method, respectively. This review provides information on the methodology used to determine accurate estimates of AA and energy availability for formulating swine and poultry diets. PMID:25050031
Using GPS To Teach More Than Accurate Positions.
ERIC Educational Resources Information Center
Johnson, Marie C.; Guth, Peter L.
2002-01-01
Undergraduate science majors need practice in critical thinking, quantitative analysis, and judging whether their calculated answers are physically reasonable. Develops exercises using handheld Global Positioning System (GPS) receivers. Reinforces students' abilities to think quantitatively, make realistic "back of the envelope"…
Quantitative analysis of time-resolved microwave conductivity data
Reid, Obadiah G.; Moore, David T.; Li, Zhen; ...
2017-11-10
Flash-photolysis time-resolved microwave conductivity (fp-TRMC) is a versatile, highly sensitive technique for studying the complex photoconductivity of solution, solid, and gas-phase samples. The purpose of this paper is to provide a standard reference work for experimentalists interested in using microwave conductivity methods to study functional electronic materials, describing how to conduct and calibrate these experiments in order to obtain quantitative results. The main focus of the paper is on calculating the calibration factor, K, which is used to connect the measured change in microwave power absorption to the conductance of the sample. We describe the standard analytical formulae that have been used in the past, and compare them to numerical simulations. This comparison shows that the most widely used analytical analysis of fp-TRMC data systematically underestimates the transient conductivity by ~60%. We suggest a more accurate semi-empirical way of calibrating these experiments. However, we emphasize that the full numerical calculation is necessary to quantify both transient and steady-state conductance for arbitrary sample properties and geometry.
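Once K is known, applying it is a one-liner; a sketch of the conversion (the substance of the paper is computing K correctly, not this step):

```python
import numpy as np

def conductance_transient(dP_over_P, K):
    """Convert the measured fractional microwave power change to a
    photoconductance transient via dP/P = -K * dG, i.e.
    dG = -(1/K) * dP/P."""
    return -np.asarray(dP_over_P, dtype=float) / K
```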
NASA Astrophysics Data System (ADS)
D'Angelo, Paola; Migliorati, Valentina; Mancini, Giordano; Barone, Vincenzo; Chillemi, Giovanni
2008-02-01
The structural and dynamic properties of the solvated Hg2+ ion in aqueous solution have been investigated by a combined experimental-theoretical approach employing x-ray absorption spectroscopy and molecular dynamics (MD) simulations. This method allows one to perform a quantitative analysis of the x-ray absorption near-edge structure (XANES) spectra of ionic solutions using a proper description of the thermal and structural fluctuations. XANES spectra have been computed starting from the MD trajectory, without carrying out any minimization in the structural parameter space. The XANES experimental data are accurately reproduced by a first-shell heptacoordinated cluster only if the second hydration shell is included in the calculations. These results confirm at the same time the existence of a sevenfold first hydration shell for the Hg2+ ion in aqueous solution and the reliability of the potentials used in the MD simulations. The combination of MD and XANES is found to be very helpful to get important new insights into the quantitative estimation of structural properties of disordered systems.
Comparison of five methods for determination of total plasma protein concentration.
Okutucu, Burcu; Dinçer, Ayşe; Habib, Omer; Zihnioglu, Figen
2007-08-01
Quantitation of exact total protein content is often a key step common to many applications in general biochemistry research and routine clinical laboratory practice. Before embarking on any type of protein analysis, particularly comparative techniques, it is important to accurately quantitate the amount of protein in the sample. To assess the quality of total protein estimation results, five methods were tested on the same pooled plasma sample: Bradford (Coomassie Brilliant Blue), Lowry (Folin-Ciocalteau), biuret, Pesce and Strande (Ponceau-S/TCA), and a modified Schaffner-Weismann method (Amido Black 10B). The last two methods employ simultaneous precipitation of proteins with acid-containing dye solutions, followed by dissolution of the precipitate in NaOH solution. Each assay has advantages and disadvantages with respect to sensitivity, ease of performance, acceptance in the literature, accuracy, and reproducibility (coefficient of variation). All of the methods tested showed a CV below 6%. Besides pooled plasma, a known concentration of human serum albumin was also analyzed to standardize the plasma total protein results.
NASA Technical Reports Server (NTRS)
Patel, R. V.; Toda, M.; Sridhar, B.
1977-01-01
Because an exact mathematical representation of a system is rarely available, it is often necessary to investigate the robustness (stability) of a linear quadratic state feedback (LQSF) design in the presence of system uncertainty and to obtain some quantitative measure of the perturbations that such a design can tolerate. This study addresses the problem of expressing the robustness property of an LQSF design quantitatively, in terms of bounds on the perturbations (modeling errors or parameter variations) in the system matrices. Bounds are obtained for the general case of nonlinear, time-varying perturbations. Most of the results are readily applicable to practical situations in which a designer has estimates of the bounds on the system parameter perturbations. Relations are provided to help the designer select appropriate weighting matrices in the quadratic performance index to attain a robust design. The results are applied to the design of an autopilot logic for the flare maneuver of the Augmentor Wing Jet STOL Research Aircraft.
Challenges to quantitative applications of Landsat observations for the urban thermal environment.
Chen, Feng; Yang, Song; Yin, Kai; Chan, Paul
2017-09-01
Since the launch of its first satellite in 1972, the Landsat program has operated continuously for more than forty years. The large data archive collected by the Landsat program significantly benefits both the academic community and society. Thermal imagery from Landsat sensors, provided at relatively high spatial resolution, is suitable for monitoring the urban thermal environment. Growing use of Landsat data in monitoring the urban thermal environment is demonstrated by increasing publications on this subject, especially over the last decade. The urban thermal environment is usually delineated by land surface temperature (LST). However, quantitative and accurate estimation of LST from Landsat data is still a challenge, especially for urban areas. This paper discusses the main challenges for urban LST retrieval, including urban surface emissivity, atmospheric correction, radiometric calibration, and validation. In addition, we discuss general challenges confronting the continuity of quantitative applications of Landsat observations. These challenges arise mainly from the scan line corrector failure of the Landsat 7 ETM+ and channel differences among sensors. Based on these investigations, our aims are to: (1) show general users the limitations and possible uncertainty of urban LST retrieved from the single thermal channel of Landsat sensors; (2) emphasize the efforts that should be made for quantitative applications of Landsat data; and (3) understand the potential challenges for the continuity of Landsat (i.e., thermal infrared) observations for global change monitoring, while several climate data record programs are in progress. Copyright © 2017. Published by Elsevier B.V.
Quantifying Golgi structure using EM: combining volume-SEM and stereology for higher throughput.
Ferguson, Sophie; Steyer, Anna M; Mayhew, Terry M; Schwab, Yannick; Lucocq, John Milton
2017-06-01
Investigating organelles such as the Golgi complex depends increasingly on high-throughput quantitative morphological analyses from multiple experimental or genetic conditions. Light microscopy (LM) has been an effective tool for screening but fails to reveal fine details of Golgi structures such as vesicles, tubules and cisternae. Electron microscopy (EM) has sufficient resolution but traditional transmission EM (TEM) methods are slow and inefficient. Newer volume scanning EM (volume-SEM) methods now have the potential to speed up 3D analysis by automated sectioning and imaging. However, they produce large arrays of sections and/or images, which require labour-intensive 3D reconstruction for quantitation on limited cell numbers. Here, we show that the information storage, digital waste and workload involved in using volume-SEM can be reduced substantially using sampling-based stereology. Using the Golgi as an example, we describe how Golgi populations can be sensed quantitatively using single random slices and how accurate quantitative structural data on Golgi organelles of individual cells can be obtained using only 5-10 sections/images taken from a volume-SEM series (thereby sensing population parameters and cell-cell variability). The approach will be useful in techniques such as correlative LM and EM (CLEM) where small samples of cells are treated and where there may be variable responses. For Golgi study, we outline a series of stereological estimators that are suited to these analyses and suggest workflows, which have the potential to enhance the speed and relevance of data acquisition in volume-SEM.
Xu, Jun; Wang, Jing; Li, Shiying; Cao, Binggang
2016-01-01
Recently, state of energy (SOE) has become one of the most fundamental parameters for battery management systems in electric vehicles. Current information is critical in SOE estimation, and a current sensor is usually utilized to obtain the latest current information. However, if the current sensor fails, the SOE estimation may suffer large errors. This paper therefore attempts to make the following contributions: current sensor fault detection and SOE estimation are realized simultaneously. Using a proportional integral observer (PIO) based method, the current sensor fault can be accurately estimated. By taking advantage of the accurate estimate of the current sensor fault, the influence of the fault can be eliminated and compensated. As a result, the SOE estimation results are influenced little by the fault. In addition, a simulation and experimental workbench was established to verify the proposed method. The results indicate that the current sensor fault can be estimated accurately. Simultaneously, the SOE can also be estimated accurately, and the estimation error is influenced little by the fault. The maximum SOE estimation error is less than 2%, even when a large current error caused by the current sensor fault is present. PMID:27548183
Xu, Jun; Wang, Jing; Li, Shiying; Cao, Binggang
2016-08-19
Recently, state of energy (SOE) has become one of the most fundamental parameters for battery management systems in electric vehicles. Current information is critical in SOE estimation, and a current sensor is usually utilized to obtain the latest current information. However, if the current sensor fails, the SOE estimation may suffer large errors. This paper therefore attempts to make the following contributions: current sensor fault detection and SOE estimation are realized simultaneously. Using a proportional integral observer (PIO) based method, the current sensor fault can be accurately estimated. By taking advantage of the accurate estimate of the current sensor fault, the influence of the fault can be eliminated and compensated. As a result, the SOE estimation results are influenced little by the fault. In addition, a simulation and experimental workbench was established to verify the proposed method. The results indicate that the current sensor fault can be estimated accurately. Simultaneously, the SOE can also be estimated accurately, and the estimation error is influenced little by the fault. The maximum SOE estimation error is less than 2%, even when a large current error caused by the current sensor fault is present.
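As a minimal illustration of the compensation idea (not the authors' PIO design, which is not reproduced here), the sketch below shows an energy-counting SOE update in which the estimated sensor fault is subtracted from the measured current before integration; all names and values are hypothetical.

```python
def update_soe(soe, v, i_meas, i_fault_hat, dt, e_nominal):
    """One energy-counting SOE step with a fault-compensated current.
    i_fault_hat: sensor-fault estimate (e.g. from a PI observer);
    e_nominal: nominal energy capacity of the battery in joules."""
    i_true = i_meas - i_fault_hat      # remove the estimated sensor bias
    return soe - v * i_true * dt / e_nominal

# A biased sensor (+5 A fault) barely affects the compensated update
print(update_soe(0.80, v=350.0, i_meas=105.0, i_fault_hat=5.0,
                 dt=1.0, e_nominal=3.6e7))
```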
Kim, Do-Won; Lee, Seung-Hwan; Shim, Miseon; Im, Chang-Hwan
2017-01-01
Precise diagnosis of psychiatric diseases and a comprehensive assessment of a patient's symptom severity are important in order to establish a successful treatment strategy for each patient. Although great efforts have been devoted to searching for diagnostic biomarkers of schizophrenia over the past several decades, no study has yet investigated how accurately these biomarkers are able to estimate an individual patient's symptom severity. In this study, we applied electrophysiological biomarkers obtained from electroencephalography (EEG) analyses to an estimation of symptom severity scores of patients with schizophrenia. EEG signals were recorded from 23 patients while they performed a facial affect discrimination task. Based on the source current density analysis results, we extracted voxels that showed a strong correlation between source activity and symptom scores. We then built a prediction model to estimate the symptom severity scores of each patient using the source activations of the selected voxels. The symptom scores of the Positive and Negative Syndrome Scale (PANSS) were estimated using the linear prediction model. The results of leave-one-out cross validation (LOOCV) showed that the mean errors of the estimated symptom scores were 3.34 ± 2.40 and 3.90 ± 3.01 for the Positive and Negative PANSS scores, respectively. The current pilot study is the first attempt to estimate symptom severity scores in schizophrenia using quantitative EEG features. It is expected that the present method can be extended to other cognitive paradigms or other psychological illnesses.
A novel approach for estimating ingested dose associated with paracetamol overdose
Zurlinden, Todd J.; Heard, Kennon
2015-01-01
Aim: In cases of paracetamol (acetaminophen, APAP) overdose, an accurate estimate of tissue-specific paracetamol pharmacokinetics (PK) and ingested dose can offer health care providers important information for the individualized treatment and follow-up of affected patients. Here a novel methodology is presented to make such estimates using a standard serum paracetamol measurement and a computational framework. Methods: The core component of the computational framework was a physiologically-based pharmacokinetic (PBPK) model developed and evaluated using an extensive set of human PK data. Bayesian inference was used for parameter and dose estimation, allowing the incorporation of inter-study variability, and facilitating the calculation of uncertainty in model outputs. Results: Simulations of paracetamol time course concentrations in the blood were in close agreement with experimental data under a wide range of dosing conditions. Also, predictions of administered dose showed good agreement with a large collection of clinical and emergency setting PK data over a broad dose range. In addition to dose estimation, the platform was applied for the determination of optimal blood sampling times for dose reconstruction and quantitation of the potential role of paracetamol conjugate measurement on dose estimation. Conclusions: Current therapies for paracetamol overdose rely on a generic methodology involving the use of a clinical nomogram. By using the computational framework developed in this study, serum sample data, and the individual patient's anthropometric and physiological information, personalized serum and liver pharmacokinetic profiles and dose estimates could be generated to help inform an individualized overdose treatment and follow-up plan. PMID:26441245
A novel approach for estimating ingested dose associated with paracetamol overdose.
Zurlinden, Todd J; Heard, Kennon; Reisfeld, Brad
2016-04-01
In cases of paracetamol (acetaminophen, APAP) overdose, an accurate estimate of tissue-specific paracetamol pharmacokinetics (PK) and ingested dose can offer health care providers important information for the individualized treatment and follow-up of affected patients. Here a novel methodology is presented to make such estimates using a standard serum paracetamol measurement and a computational framework. The core component of the computational framework was a physiologically-based pharmacokinetic (PBPK) model developed and evaluated using an extensive set of human PK data. Bayesian inference was used for parameter and dose estimation, allowing the incorporation of inter-study variability, and facilitating the calculation of uncertainty in model outputs. Simulations of paracetamol time course concentrations in the blood were in close agreement with experimental data under a wide range of dosing conditions. Also, predictions of administered dose showed good agreement with a large collection of clinical and emergency setting PK data over a broad dose range. In addition to dose estimation, the platform was applied for the determination of optimal blood sampling times for dose reconstruction and quantitation of the potential role of paracetamol conjugate measurement on dose estimation. Current therapies for paracetamol overdose rely on a generic methodology involving the use of a clinical nomogram. By using the computational framework developed in this study, serum sample data, and the individual patient's anthropometric and physiological information, personalized serum and liver pharmacokinetic profiles and dose estimate could be generated to help inform an individualized overdose treatment and follow-up plan. © 2015 The British Pharmacological Society.
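The sketch below illustrates the Bayesian dose-reconstruction idea with a deliberately simplified one-compartment oral model standing in for the paper's PBPK model; the kinetic constants, the flat prior, and the Gaussian error model are all assumptions for illustration.

```python
import numpy as np

def conc_1cpt(dose_mg, t_h, V=50.0, ka=1.2, ke=0.3):
    """Serum concentration (mg/L) from a one-compartment oral model;
    a stand-in for the PBPK model, with assumed V, ka, ke."""
    return dose_mg / V * ka / (ka - ke) * (np.exp(-ke * t_h) - np.exp(-ka * t_h))

def posterior_dose(c_obs, t_obs, doses, sigma=5.0):
    """Unnormalized posterior over a dose grid from one serum sample,
    assuming a flat prior and Gaussian measurement error (sigma, mg/L)."""
    pred = conc_1cpt(doses, t_obs)
    like = np.exp(-0.5 * ((c_obs - pred) / sigma) ** 2)
    return like / like.sum()

doses = np.linspace(1000, 40000, 400)    # candidate ingested doses, mg
post = posterior_dose(c_obs=150.0, t_obs=4.0, doses=doses)
print(doses[post.argmax()])              # maximum a posteriori dose estimate
```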
Usefulness of telomere length in DNA from human teeth for age estimation.
Márquez-Ruiz, Ana Belén; González-Herrera, Lucas; Valenzuela, Aurora
2018-03-01
Age estimation is widely used to identify individuals in forensic medicine. However, the accuracy of the most commonly used procedures is markedly reduced in adulthood, and these methods cannot be applied in practice when morphological information is limited. Molecular methods for age estimation have been extensively developed in the last few years. The fact that telomeres shorten at each round of cell division has led to the hypothesis that telomere length can be used as a tool to predict age. The present study thus aimed to assess the correlation between telomere length measured in dental DNA and age, and the effect of sex and tooth type on telomere length; a further aim was to propose a statistical regression model to estimate the biological age based on telomere length. DNA was extracted from 91 tooth samples belonging to 77 individuals of both sexes and 15 to 85 years old and was used to determine telomere length by quantitative real-time PCR. Our results suggested that telomere length was not affected by sex and was greater in molar teeth. We found a significant correlation between age and telomere length measured in DNA from teeth. However, the equation proposed to predict age was not accurate enough for forensic age estimation on its own. Age estimation based on telomere length in DNA from tooth samples may be useful as a complementary method which provides an approximate estimate of age, especially when human skeletal remains are the only forensic sample available.
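A hedged sketch of the regression step described above: fit a linear model of age on relative telomere length (the qPCR T/S ratio) and invert it for a new sample. The data points are illustrative values only, not the study's data.

```python
import numpy as np

# Illustrative (T/S ratio, age) pairs only -- not the study's data
ts_ratio = np.array([1.8, 1.5, 1.3, 1.1, 0.9, 0.8])
age      = np.array([18.0, 30.0, 42.0, 55.0, 68.0, 80.0])

# Least-squares fit of age = b0 + b1 * (T/S), as in a simple predictive model
b1, b0 = np.polyfit(ts_ratio, age, 1)
print(b0 + b1 * 1.2)   # age estimate for a new sample with T/S = 1.2
```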
Elschot, Mattijs; Nijsen, Johannes F W; Lam, Marnix G E H; Smits, Maarten L J; Prince, Jip F; Viergever, Max A; van den Bosch, Maurice A A J; Zonnenberg, Bernard A; de Jong, Hugo W A M
2014-10-01
Radiation pneumonitis is a rare but serious complication of radioembolic therapy of liver tumours. Estimation of the mean absorbed dose to the lungs based on pretreatment diagnostic (99m)Tc-macroaggregated albumin ((99m)Tc-MAA) imaging should prevent this, with administered activities adjusted accordingly. The accuracy of (99m)Tc-MAA-based lung absorbed dose estimates was evaluated and compared to absorbed dose estimates based on pretreatment diagnostic (166)Ho-microsphere imaging and to the actual lung absorbed doses after (166)Ho radioembolization. This prospective clinical study included 14 patients with chemorefractory, unresectable liver metastases treated with (166)Ho radioembolization. (99m)Tc-MAA-based and (166)Ho-microsphere-based estimation of lung absorbed doses was performed on pretreatment diagnostic planar scintigraphic and SPECT/CT images. The clinical analysis was preceded by an anthropomorphic torso phantom study with simulated lung shunt fractions of 0 to 30 % to determine the accuracy of the image-based lung absorbed dose estimates after (166)Ho radioembolization. In the phantom study, (166)Ho SPECT/CT-based lung absorbed dose estimates were more accurate (absolute error range 0.1 to -4.4 Gy) than (166)Ho planar scintigraphy-based lung absorbed dose estimates (absolute error range 9.5 to 12.1 Gy). Clinically, the actual median lung absorbed dose was 0.02 Gy (range 0.0 to 0.7 Gy) based on posttreatment (166)Ho-microsphere SPECT/CT imaging. Lung absorbed doses estimated on the basis of pretreatment diagnostic (166)Ho-microsphere SPECT/CT imaging (median 0.02 Gy, range 0.0 to 0.4 Gy) were significantly better predictors of the actual lung absorbed doses than doses estimated on the basis of (166)Ho-microsphere planar scintigraphy (median 10.4 Gy, range 4.0 to 17.3 Gy; p < 0.001), (99m)Tc-MAA SPECT/CT imaging (median 2.5 Gy, range 1.2 to 12.3 Gy; p < 0.001), and (99m)Tc-MAA planar scintigraphy (median 5.5 Gy, range 2.3 to 18.2 Gy; p < 0.001). In clinical practice, lung absorbed doses are significantly overestimated by pretreatment diagnostic (99m)Tc-MAA imaging. Pretreatment diagnostic (166)Ho-microsphere SPECT/CT imaging accurately predicts lung absorbed doses after (166)Ho radioembolization.
Zu Erbach-Schoenberg, Elisabeth; Alegana, Victor A; Sorichetta, Alessandro; Linard, Catherine; Lourenço, Christopher; Ruktanonchai, Nick W; Graupe, Bonita; Bird, Tomas J; Pezzulo, Carla; Wesolowski, Amy; Tatem, Andrew J
2016-01-01
Reliable health metrics are crucial for accurately assessing disease burden and planning interventions. Many health indicators are measured through passive surveillance systems and are reliant on accurate estimates of denominators to transform case counts into incidence measures. These denominator estimates generally come from national censuses and use large area growth rates to estimate annual changes. Typically, they do not account for any seasonal fluctuations and thus assume a static denominator population. Many recent studies have highlighted the dynamic nature of human populations through quantitative analyses of mobile phone call data records and a range of other sources, emphasizing seasonal changes. In this study, we use mobile phone data to capture patterns of short-term human population movement and to map dynamism in population densities. We show how mobile phone data can be used to measure seasonal changes in health district population numbers, which are used as denominators for calculating district-level disease incidence. Using the example of malaria case reporting in Namibia we use 3.5 years of phone data to investigate the spatial and temporal effects of fluctuations in denominators caused by seasonal mobility on malaria incidence estimates. We show that even in a sparsely populated country with large distances between population centers, such as Namibia, populations are highly dynamic throughout the year. We highlight how seasonal mobility affects malaria incidence estimates, leading to differences of up to 30 % compared to estimates created using static population maps. These differences exhibit clear spatial patterns, with likely overestimation of incidence in the high-prevalence zones in the north of Namibia and underestimation in lower-risk areas when compared to using static populations. The results here highlight how health metrics that rely on static estimates of denominators from censuses may differ substantially once mobility and seasonal variations are taken into account. With respect to the setting of malaria in Namibia, the results indicate that Namibia may actually be closer to malaria elimination than previously thought. More broadly, the results highlight how dynamic populations are. In addition to affecting incidence estimates, these changes in population density will also have an impact on allocation of medical resources. Awareness of seasonal movements has the potential to improve the impact of interventions, such as vaccination campaigns or distributions of commodities like bed nets.
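The arithmetic at the core of the argument is simple; the sketch below contrasts incidence computed with a static census denominator against a seasonally varying one. Numbers are hypothetical.

```python
def incidence_per_1000(cases, population):
    return 1000.0 * cases / population

static_pop   = 25000                      # census-style fixed denominator
seasonal_pop = [25000, 31000, 19500]      # phone-data-informed denominators
monthly_cases = [50, 60, 40]

for cases, pop in zip(monthly_cases, seasonal_pop):
    print(incidence_per_1000(cases, static_pop),   # static estimate
          incidence_per_1000(cases, pop))          # mobility-adjusted estimate
```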
The Mapping Model: A Cognitive Theory of Quantitative Estimation
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2008-01-01
How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…
Wang, Peng; Zhang, Cheng; Liu, Hong-Wen; Xiong, Mengyi; Yin, Sheng-Yan; Yang, Yue; Hu, Xiao-Xiao; Yin, Xia; Zhang, Xiao-Bing; Tan, Weihong
2017-12-01
Fluorescence quantitative analyses for vital biomolecules are in great demand in biomedical science owing to their unique detection advantages with rapid, sensitive, non-damaging and specific identification. However, available fluorescence strategies for quantitative detection are usually hard to design and achieve. Inspired by supramolecular chemistry, a two-photon-excited fluorescent supramolecular nanoplatform (TPSNP) was designed for quantitative analysis with three parts: host molecules (β-CD polymers), a guest fluorophore of sensing probes (Np-Ad) and a guest internal reference (NpRh-Ad). In this strategy, the TPSNP possesses the merits of (i) improved water-solubility and biocompatibility; (ii) increased tissue penetration depth for bioimaging by two-photon excitation; (iii) quantitative and tunable assembly of functional guest molecules to obtain optimized detection conditions; (iv) a common approach to avoid the limitation of complicated design by adjustment of sensing probes; and (v) accurate quantitative analysis by virtue of reference molecules. As a proof-of-concept, we utilized the two-photon fluorescent probe NHS-Ad-based TPSNP-1 to realize accurate quantitative analysis of hydrogen sulfide (H2S), with high sensitivity and good selectivity in live cells, deep tissues and ex vivo-dissected organs, suggesting that the TPSNP is an ideal quantitative indicator for clinical samples. What's more, TPSNP will pave the way for designing and preparing advanced supramolecular sensors for biosensing and biomedicine.
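The quantitative step rests on ratioing the probe channel against the internal-reference channel and inverting a calibration curve. The sketch below assumes a linear calibration; all intensities and concentrations are invented for illustration.

```python
import numpy as np

def ratiometric_signal(I_probe, I_ref):
    """Reference-normalized response of a two-channel probe."""
    return I_probe / I_ref

# Hypothetical calibration: ratio vs analyte concentration (e.g. H2S, uM)
conc  = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
ratio = np.array([0.10, 0.32, 0.55, 1.02, 1.95])
slope, intercept = np.polyfit(conc, ratio, 1)

# Invert the calibration for an unknown sample
r_unknown = ratiometric_signal(I_probe=820.0, I_ref=1000.0)
print((r_unknown - intercept) / slope)
```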
Heo, Seo Weon; Kim, Hyungsuk
2010-05-01
An estimation of ultrasound attenuation in soft tissues is critical in quantitative ultrasound analysis, since it is not only related to the estimation of other ultrasound parameters, such as speed of sound, integrated scatterers, or scatterer size, but also provides pathological information about the scanned tissue. However, the performance of ultrasound attenuation estimation is intimately tied to the accurate extraction of spectral information from the backscattered radiofrequency (RF) signals. In this paper, we propose two novel techniques for calculating a block power spectrum from the backscattered ultrasound signals. These are based on phase compensation of each RF segment using the normalized cross-correlation to minimize estimation errors due to phase variations, and on a weighted averaging technique to maximize the signal-to-noise ratio (SNR). The simulation results with uniform numerical phantoms demonstrate that the proposed method estimates local attenuation coefficients within 1.57% of the actual values, while the conventional methods estimate those within 2.96%. The proposed method is especially effective when dealing with signals reflected from deeper depths, where the SNR level is lower, or when the gated window contains a small number of signal samples. Experimental results at 5 MHz, obtained with a one-dimensional 128-element array and tissue-mimicking phantoms, also show that the proposed method provides better estimation results (within 3.04% of the actual value) with smaller estimation variances than the conventional methods (within 5.93%) for all cases considered. Copyright 2009 Elsevier B.V. All rights reserved.
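A compact NumPy sketch of the two ideas named in the abstract: each RF segment is aligned to a reference by the peak of the normalized cross-correlation before its periodogram enters an SNR-weighted average. The exact weighting and windowing in the paper may differ; this is only the general shape of the computation.

```python
import numpy as np

def block_power_spectrum(segments):
    """Phase-compensated, weighted block power spectrum.
    `segments`: list of equal-length 1-D RF arrays."""
    ref = segments[0]
    n = len(ref)
    window = np.hanning(n)
    spectra, weights = [], []
    for seg in segments:
        xc = np.correlate(seg - seg.mean(), ref - ref.mean(), mode="full")
        lag = xc.argmax() - (n - 1)          # delay of this segment vs reference
        aligned = np.roll(seg, -lag)         # crude phase compensation
        spectra.append(np.abs(np.fft.rfft(aligned * window)) ** 2)
        weights.append(np.sum(seg ** 2))     # segment energy as an SNR proxy
    return np.average(spectra, axis=0, weights=weights)
```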
Nayan, Nazrul Anuar; Risman, Nur Sabrina; Jaafar, Rosmina
2016-07-27
Among the vital signs of acutely ill hospital patients, respiratory rate (RR) is a highly accurate predictor of health deterioration. This study proposes a system that consists of a passive and non-invasive single-lead electrocardiogram (ECG) acquisition module and an ECG-derived respiration (EDR) algorithm in a working prototype of a mobile application. Before estimating the RR that produces the EDR rate, ECG signals were evaluated based on a signal quality index (SQI). The SQI algorithm was validated quantitatively using the PhysioNet/Computing in Cardiology Challenge 2011 training data set. The RR extraction algorithm was validated using 40 records from the MIT PhysioNet Multiparameter Intelligent Monitoring in Intensive Care II data set. The estimated RR showed a mean absolute error (MAE) of 1.4 compared with the "gold standard" RR. The proposed system was used to record 20 ECGs of healthy subjects and obtained the estimated RR with an MAE of 0.7 bpm. The results indicate that the proposed hardware and algorithm could replace the manual counting method, the uncomfortable nasal airflow sensor, the chest band, and the impedance pneumotachography often used in hospitals. The system also takes advantage of the prevalence of smartphone usage and increases the monitoring frequency of the current ECG of patients with critical illnesses.
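One common EDR surrogate (which may differ from the authors' algorithm) derives a respiratory signal from the beat-to-beat modulation of R-peak amplitudes and reads the respiratory rate off its dominant spectral peak; a sketch under that assumption:

```python
import numpy as np

def rr_from_rpeaks(r_times, r_amps, fs=4.0):
    """Respiratory rate (breaths/min) from R-peak amplitude modulation.
    r_times: increasing R-peak times (s); r_amps: R-peak amplitudes."""
    t = np.arange(r_times[0], r_times[-1], 1.0 / fs)
    edr = np.interp(t, r_times, r_amps)       # evenly resampled EDR signal
    edr -= edr.mean()
    spec = np.abs(np.fft.rfft(edr)) ** 2
    freqs = np.fft.rfftfreq(len(edr), 1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.5)    # 6-30 breaths/min search band
    return 60.0 * freqs[band][spec[band].argmax()]
```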
NASA Astrophysics Data System (ADS)
Castillo-López, Elena; Dominguez, Jose Antonio; Pereda, Raúl; de Luis, Julio Manuel; Pérez, Ruben; Piña, Felipe
2017-10-01
Accurate determination of water depth is indispensable in multiple aspects of civil engineering (dock construction, dikes, submarine outfalls, trench control, etc.). Different accuracies are required to determine which type of atmospheric correction is most appropriate for depth estimation. The accuracy of bathymetric information is highly dependent on the atmospheric correction applied to the imagery. Reducing effects such as glint and cross-track illumination in homogeneous shallow-water areas improves the results of the depth estimations. The aim of this work is to assess the best atmospheric correction method for the estimation of depth in shallow waters, considering that reflectance values cannot be greater than 1.5% because otherwise the background would not be seen. This paper addresses the use of hyperspectral imagery for quantitative bathymetric mapping and explores one of the most common problems when attempting to extract depth information in conditions of variable water types and bottom reflectances. The current work assesses the accuracy of some classical bathymetric algorithms (Polcyn-Lyzenga, Philpot, Benny-Dawson, Hamilton, principal component analysis) when four different atmospheric correction methods are applied and water depth is derived. No single atmospheric correction is valid for all types of coastal waters, but in heterogeneous shallow water the 6S atmospheric correction model offers good results.
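For the Polcyn-Lyzenga family of algorithms mentioned above, depth is regressed on log-transformed water-leaving reflectances of two (or more) bands. A minimal sketch, assuming reflectances that are already atmospherically corrected and deep-water-subtracted so the logarithms are well defined; training values are invented:

```python
import numpy as np

def fit_lyzenga(R1, R2, depth):
    """Fit z = a0 + a1*ln(R1) + a2*ln(R2) to training pixels of known depth."""
    X = np.column_stack([np.ones_like(R1), np.log(R1), np.log(R2)])
    coef, *_ = np.linalg.lstsq(X, depth, rcond=None)
    return coef

def predict_depth(coef, R1, R2):
    return coef[0] + coef[1] * np.log(R1) + coef[2] * np.log(R2)

# Example with invented training pixels
R1 = np.array([0.08, 0.06, 0.04, 0.03])
R2 = np.array([0.05, 0.04, 0.03, 0.02])
z  = np.array([2.0, 4.0, 7.0, 9.0])
coef = fit_lyzenga(R1, R2, z)
print(predict_depth(coef, 0.05, 0.035))
```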
NASA Astrophysics Data System (ADS)
Murakami, Hiroki; Watanabe, Tsuneo; Fukuoka, Daisuke; Terabayashi, Nobuo; Hara, Takeshi; Muramatsu, Chisako; Fujita, Hiroshi
2016-04-01
The word "Locomotive syndrome" has been proposed to describe the state of requiring care by musculoskeletal disorders and its high-risk condition. Reduction of the knee extension strength is cited as one of the risk factors, and the accurate measurement of the strength is needed for the evaluation. The measurement of knee extension strength using a dynamometer is one of the most direct and quantitative methods. This study aims to develop a system for measuring the knee extension strength using the ultrasound images of the rectus femoris muscles obtained with non-invasive ultrasonic diagnostic equipment. First, we extract the muscle area from the ultrasound images and determine the image features, such as the thickness of the muscle. We combine these features and physical features, such as the patient's height, and build a regression model of the knee extension strength from training data. We have developed a system for estimating the knee extension strength by applying the regression model to the features obtained from test data. Using the test data of 168 cases, correlation coefficient value between the measured values and estimated values was 0.82. This result suggests that this system can estimate knee extension strength with high accuracy.
Earle, Paul S.; Wald, David J.; Jaiswal, Kishor S.; Allen, Trevor I.; Hearne, Michael G.; Marano, Kristin D.; Hotovec, Alicia J.; Fee, Jeremy
2009-01-01
Within minutes of a significant earthquake anywhere on the globe, the U.S. Geological Survey (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system assesses its potential societal impact. PAGER automatically estimates the number of people exposed to severe ground shaking and the shaking intensity at affected cities. Accompanying maps of the epicentral region show the population distribution and estimated ground-shaking intensity. A regionally specific comment describes the inferred vulnerability of the regional building inventory and, when available, lists recent nearby earthquakes and their effects. PAGER's results are posted on the USGS Earthquake Program Web site (http://earthquake.usgs.gov/), consolidated in a concise one-page report, and sent in near real-time to emergency responders, government agencies, and the media. Both rapid and accurate results are obtained through manual and automatic updates of PAGER's content in the hours following significant earthquakes. These updates incorporate the most recent estimates of earthquake location, magnitude, faulting geometry, and first-hand accounts of shaking. PAGER relies on a rich set of earthquake analysis and assessment tools operated by the USGS and contributing Advanced National Seismic System (ANSS) regional networks. A focused research effort is underway to extend PAGER's near real-time capabilities beyond population exposure to quantitative estimates of fatalities, injuries, and displaced population.
Protein Folding Free Energy Landscape along the Committor - the Optimal Folding Coordinate.
Krivov, Sergei V
2018-06-06
Recent advances in simulation and experiment have led to dramatic increases in the quantity and complexity of produced data, which makes the development of automated analysis tools very important. A powerful approach to analyze the dynamics contained in such data sets is to describe/approximate it by diffusion on a free energy landscape - free energy as a function of reaction coordinates (RC). For the description to be quantitatively accurate, RCs should be chosen in an optimal way. Recent theoretical results show that such an optimal RC exists; however, determining it for practical systems is a very difficult unsolved problem. Here we describe a solution to this problem. We describe an adaptive nonparametric approach to accurately determine the optimal RC (the committor) for an equilibrium trajectory of a realistic system. In contrast to alternative approaches, which require a functional form with many parameters to approximate an RC and thus extensive expertise with the system, the suggested approach is nonparametric and can approximate any RC with high accuracy without system-specific information. To avoid overfitting for a realistically sampled system, the approach performs RC optimization in an adaptive manner by focusing optimization on less optimized spatiotemporal regions of the RC. The power of the approach is illustrated on a long equilibrium atomistic folding simulation of the HP35 protein. We have determined the optimal folding RC - the committor - which was confirmed by passing a stringent committor validation test. It allowed us to determine the first quantitatively accurate protein folding free energy landscape. We have confirmed the recent theoretical results that diffusion on such a free energy profile can be used to compute exactly the equilibrium flux, the mean first passage times, and the mean transition path times between any two points on the profile. We have shown that the mean squared displacement along the optimal RC grows linearly with time, as for simple diffusion. The free energy profile allowed us to obtain a direct rigorous estimate of the pre-exponential factor for the folding dynamics.
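Once an optimal RC such as the committor is in hand, the free energy profile follows from Boltzmann inversion of the trajectory's histogram along it. A minimal sketch (kT set to 1; empty bins come out as +inf):

```python
import numpy as np

def free_energy_profile(q, bins=100, kT=1.0):
    """F(q) = -kT ln p(q) from an equilibrium trajectory projected onto a
    reaction coordinate q (e.g. the committor)."""
    p, edges = np.histogram(q, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    with np.errstate(divide="ignore"):
        F = -kT * np.log(p)                  # empty bins give +inf
    return centers, F - F[np.isfinite(F)].min()
```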
NASA Astrophysics Data System (ADS)
Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.
2014-04-01
Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml/(min g), cardiac output = 3, 5, 8 L/min). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5%, and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. This suggests that there is no particular advantage between quantitative estimation methods, nor to performing dose reduction via tube current reduction compared to temporal sampling reduction. These data are important for optimizing implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.
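As a flavor of the quantitative model fitting compared above, the sketch below fits a one-tissue-compartment impulse response convolved with an arterial input function to a noisy tissue curve; the model, input function, and parameter values are simplified assumptions, not the paper's implementations.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0.0, 60.0, 1.0)                 # sample times, s
aif = np.exp(-((t - 12.0) / 4.0) ** 2)        # toy arterial input function

def tissue_tac(t, K1, k2):
    """C(t) = K1 * exp(-k2 t) convolved with the AIF (discrete approximation)."""
    dt = t[1] - t[0]
    irf = K1 * np.exp(-k2 * t)
    return np.convolve(aif, irf)[: len(t)] * dt

truth = tissue_tac(t, 0.02, 0.05)
noisy = truth + np.random.default_rng(0).normal(0.0, 0.005, len(t))
(K1, k2), _ = curve_fit(tissue_tac, t, noisy, p0=[0.01, 0.1])
print(K1, k2)    # K1 tracks flow after unit conversion
```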
NASA Astrophysics Data System (ADS)
Farmann, Alexander; Waag, Wladislaw; Marongiu, Andrea; Sauer, Dirk Uwe
2015-05-01
This work provides an overview of available methods and algorithms for on-board capacity estimation of lithium-ion batteries. Accurate state estimation for battery management systems in electric vehicles and hybrid electric vehicles is becoming more essential due to the increasing attention paid to safety and lifetime issues. Different approaches for the estimation of State-of-Charge, State-of-Health and State-of-Function have been discussed and analyzed by many authors and researchers in the past. On-board estimation of capacity in large lithium-ion battery packs is definitely one of the most crucial challenges of battery monitoring in the aforementioned vehicles. This is mostly due to highly dynamic operation, conditions far from those used in laboratory environments, and the large variation in aging behavior of each cell in the battery pack. Accurate capacity estimation allows an accurate driving range prediction and accurate calculation of a battery's maximum energy storage capability in a vehicle. At the same time, it acts as an indicator for battery State-of-Health and Remaining Useful Lifetime estimation.
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and, with further acceleration and a method to account for multiple scatter, may be useful for practical scatter correction schemes.
Lee, Hyunjong; Kim, Ji Hyun; Kang, Yeon-koo; Moon, Jae Hoon; So, Young; Lee, Won Woo
2016-01-01
Objectives: Technetium pertechnetate (99mTcO4) is a radioactive tracer used to assess thyroid function with a thyroid uptake system (TUS). However, the TUS often fails to deliver accurate measurements of the percent of thyroid uptake (%thyroid uptake) of 99mTcO4. Here, we investigated the usefulness of quantitative single-photon emission computed tomography/computed tomography (SPECT/CT) after injection of 99mTcO4 in detecting thyroid function abnormalities. Materials and methods: We retrospectively reviewed data from 50 patients (male:female = 15:35; age, 46.2 ± 16.3 years; 17 Graves disease, 13 thyroiditis, and 20 euthyroid). All patients underwent 99mTcO4 quantitative SPECT/CT (185 MBq = 5 mCi), which yielded %thyroid uptake and standardized uptake value (SUV). Twenty-one (10 Graves disease and 11 thyroiditis) of the 50 patients also underwent conventional %thyroid uptake measurements using a TUS. Results: Quantitative SPECT/CT parameters (%thyroid uptake, SUVmean, and SUVmax) were the highest in Graves disease, second highest in euthyroid, and lowest in thyroiditis (P < 0.0001, Kruskal–Wallis test). TUS significantly overestimated the %thyroid uptake compared with SPECT/CT (P < 0.0001, paired t test) because other 99mTcO4 sources in addition to the thyroid, such as salivary glands and saliva, contributed to the %thyroid uptake result by TUS, whereas %thyroid uptake, SUVmean and SUVmax from the SPECT/CT were associated with the functional status of the thyroid. Conclusions: Quantitative SPECT/CT is more accurate than conventional TUS for measuring 99mTcO4 %thyroid uptake. Quantitative measurements using SPECT/CT may facilitate more accurate assessment of thyroid tracer uptake. PMID:27399139
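For reference, the two quantitative readouts compared above reduce to simple ratios once the image is calibrated to activity concentration; a sketch with assumed numbers (1 g of tissue taken as 1 ml for the SUV):

```python
def percent_uptake(organ_bq, injected_bq):
    return 100.0 * organ_bq / injected_bq

def suv_mean(conc_bq_per_ml, injected_bq, weight_g):
    """SUV = tissue concentration / (injected activity / body weight)."""
    return conc_bq_per_ml / (injected_bq / weight_g)

print(percent_uptake(3.7e6, 185e6))      # 2.0 (% of a 185 MBq injection)
print(suv_mean(5.0e4, 185e6, 70000))     # ~18.9
```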
Richard-Davis, Gloria; Whittemore, Brianna; Disher, Anthony; Rice, Valerie Montgomery; Lenin, Rathinasamy B; Dollins, Camille; Siegel, Eric R; Eswaran, Hari
2018-01-01
Objective: Increased mammographic breast density is a well-established risk factor for breast cancer development, regardless of age or ethnic background. The current gold standard for categorizing breast density consists of a radiologist estimation of percent density according to the American College of Radiology (ACR) Breast Imaging Reporting and Data System (BI-RADS) criteria. This study compares paired qualitative interpretations of breast density on digital mammograms with quantitative measurement of density using Hologic’s Food and Drug Administration–approved R2 Quantra volumetric breast density assessment tool. Our goal was to find the best cutoff value of Quantra-calculated breast density for stratifying patients accurately into high-risk and low-risk breast density categories. Methods: Screening digital mammograms from 385 subjects, aged 18 to 64 years, were evaluated. These mammograms were interpreted by a radiologist using the ACR’s BI-RADS density method, and had quantitative density measured using the R2 Quantra breast density assessment tool. The appropriate cutoff for breast density–based risk stratification using Quantra software was calculated using manually determined BI-RADS scores as a gold standard, in which scores of D3/D4 denoted high-risk densities and D1/D2 denoted low-risk densities. Results: The best cutoff value for risk stratification using Quantra-calculated breast density was found to be 14.0%, yielding a sensitivity of 65%, specificity of 77%, and positive and negative predictive values of 75% and 69%, respectively. Under bootstrap analysis, the best cutoff value had a mean ± SD of 13.70% ± 0.89%. Conclusions: Our study is the first to publish on a North American population that assesses the accuracy of the R2 Quantra system at breast density stratification. Quantitative breast density measures will improve accuracy and reliability of density determination, assisting future researchers to accurately calculate breast cancer risks associated with density increase. PMID:29511356
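The cutoff evaluation reported above boils down to thresholding the volumetric density and scoring it against BI-RADS-derived labels; a sketch on invented data:

```python
import numpy as np

def cutoff_performance(density_pct, high_risk, cutoff=14.0):
    """Sensitivity and specificity of a density cutoff versus BI-RADS
    labels (high_risk True for D3/D4). Inputs here are hypothetical."""
    pred = np.asarray(density_pct) >= cutoff
    truth = np.asarray(high_risk)
    sens = (pred & truth).sum() / truth.sum()
    spec = (~pred & ~truth).sum() / (~truth).sum()
    return sens, spec

dens = [8.1, 12.5, 14.2, 20.0, 9.9, 16.4, 13.1, 25.3]
lbls = [False, False, True, True, False, True, False, True]
print(cutoff_performance(dens, lbls))
```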
Matenaers, Cyrill; Popper, Bastian; Rieger, Alexandra; Wanke, Rüdiger; Blutke, Andreas
2018-01-01
The accuracy of quantitative stereological analysis tools such as the (physical) disector method substantially depends on the precise determination of the thickness of the analyzed histological sections. One conventional method for measurement of histological section thickness is to re-embed the section of interest vertically to its original section plane. The section thickness is then measured in a subsequently prepared histological section of this orthogonally re-embedded sample. However, the orthogonal re-embedding (ORE) technique is quite work- and time-intensive and may produce inaccurate section thickness measurement values due to unintentional slightly oblique (non-orthogonal) positioning of the re-embedded sample-section. Here, an improved ORE method is presented, allowing for determination of the factual section plane angle of the re-embedded section, and correction of measured section thickness values for oblique (non-orthogonal) sectioning. For this, the analyzed section is mounted flat on a foil of known thickness (calibration foil) and both the section and the calibration foil are then vertically (re-)embedded. The section angle of the re-embedded section is then calculated from the deviation of the measured section thickness of the calibration foil and its factual thickness, using basic geometry. To find a practicable, fast, and accurate alternative to ORE, the suitability of spectral reflectance (SR) measurement for determination of plastic section thicknesses was evaluated. Using a commercially available optical reflectometer (F20, Filmetrics®, USA), the thicknesses of 0.5 μm thick semi-thin Epon (glycid ether) sections and of 1-3 μm thick plastic sections (glycol methacrylate/methyl methacrylate, GMA/MMA), as regularly used in physical disector analyses, could be precisely measured within a few seconds. Compared to the section thicknesses determined by ORE, SR measures displayed less than 1% deviation. Our results prove the applicability of SR to efficiently provide accurate section thickness measurements as a prerequisite for reliable estimates of dependent quantitative stereological parameters.
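The geometric correction described above follows from the fact that an oblique cut inflates an apparent thickness by 1/cos(theta); the calibration foil reveals theta, which then corrects the section measurement. A worked sketch with hypothetical readings:

```python
import math

def section_angle(foil_true_um, foil_measured_um):
    """Oblique-sectioning angle: t_measured = t_true / cos(theta)."""
    return math.acos(foil_true_um / foil_measured_um)

def corrected_thickness(section_measured_um, angle_rad):
    return section_measured_um * math.cos(angle_rad)

theta = section_angle(15.0, 15.6)          # hypothetical foil readings, um
print(math.degrees(theta))                 # ~16 degrees off orthogonal
print(corrected_thickness(1.10, theta))    # ~1.06 um corrected section
```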
Richard-Davis, Gloria; Whittemore, Brianna; Disher, Anthony; Rice, Valerie Montgomery; Lenin, Rathinasamy B; Dollins, Camille; Siegel, Eric R; Eswaran, Hari
2018-01-01
Increased mammographic breast density is a well-established risk factor for breast cancer development, regardless of age or ethnic background. The current gold standard for categorizing breast density consists of a radiologist estimation of percent density according to the American College of Radiology (ACR) Breast Imaging Reporting and Data System (BI-RADS) criteria. This study compares paired qualitative interpretations of breast density on digital mammograms with quantitative measurement of density using Hologic's Food and Drug Administration-approved R2 Quantra volumetric breast density assessment tool. Our goal was to find the best cutoff value of Quantra-calculated breast density for stratifying patients accurately into high-risk and low-risk breast density categories. Screening digital mammograms from 385 subjects, aged 18 to 64 years, were evaluated. These mammograms were interpreted by a radiologist using the ACR's BI-RADS density method, and had quantitative density measured using the R2 Quantra breast density assessment tool. The appropriate cutoff for breast density-based risk stratification using Quantra software was calculated using manually determined BI-RADS scores as a gold standard, in which scores of D3/D4 denoted high-risk densities and D1/D2 denoted low-risk densities. The best cutoff value for risk stratification using Quantra-calculated breast density was found to be 14.0%, yielding a sensitivity of 65%, specificity of 77%, and positive and negative predictive values of 75% and 69%, respectively. Under bootstrap analysis, the best cutoff value had a mean ± SD of 13.70% ± 0.89%. Our study is the first to publish on a North American population that assesses the accuracy of the R2 Quantra system at breast density stratification. Quantitative breast density measures will improve accuracy and reliability of density determination, assisting future researchers to accurately calculate breast cancer risks associated with density increase.
Matenaers, Cyrill; Popper, Bastian; Rieger, Alexandra; Wanke, Rüdiger
2018-01-01
The accuracy of quantitative stereological analysis tools such as the (physical) disector method substantially depends on the precise determination of the thickness of the analyzed histological sections. One conventional method for measurement of histological section thickness is to re-embed the section of interest vertically to its original section plane. The section thickness is then measured in a subsequently prepared histological section of this orthogonally re-embedded sample. However, the orthogonal re-embedding (ORE) technique is quite work- and time-intensive and may produce inaccurate section thickness measurement values due to unintentional slightly oblique (non-orthogonal) positioning of the re-embedded sample-section. Here, an improved ORE method is presented, allowing for determination of the factual section plane angle of the re-embedded section, and correction of measured section thickness values for oblique (non-orthogonal) sectioning. For this, the analyzed section is mounted flat on a foil of known thickness (calibration foil) and both the section and the calibration foil are then vertically (re-)embedded. The section angle of the re-embedded section is then calculated from the deviation of the measured section thickness of the calibration foil and its factual thickness, using basic geometry. To find a practicable, fast, and accurate alternative to ORE, the suitability of spectral reflectance (SR) measurement for determination of plastic section thicknesses was evaluated. Using a commercially available optical reflectometer (F20, Filmetrics®, USA), the thicknesses of 0.5 μm thick semi-thin Epon (glycid ether) sections and of 1–3 μm thick plastic sections (glycol methacrylate/methyl methacrylate, GMA/MMA), as regularly used in physical disector analyses, could be precisely measured within a few seconds. Compared to the section thicknesses determined by ORE, SR measures displayed less than 1% deviation. Our results prove the applicability of SR to efficiently provide accurate section thickness measurements as a prerequisite for reliable estimates of dependent quantitative stereological parameters. PMID:29444158
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardisty, M.; Gordon, L.; Agarwal, P.
2007-08-15
Quantitative assessment of metastatic disease in bone is often considered immeasurable and, as such, patients with skeletal metastases are often excluded from clinical trials. In order to effectively quantify the impact of metastatic tumor involvement in the spine, accurate segmentation of the vertebra is required. Manual segmentation can be accurate but involves extensive and time-consuming user interaction. Potential solutions for automating segmentation of metastatically involved vertebrae are demons deformable image registration and level set methods. The purpose of this study was to develop a semiautomated method to accurately segment tumor-bearing vertebrae using the aforementioned techniques. By maintaining the morphology of an atlas, the demons-level set composite algorithm was able to accurately differentiate between trans-cortical tumors and surrounding soft tissue of identical intensity. The algorithm successfully segmented both the vertebral body and trabecular centrum of tumor-involved and healthy vertebrae. This work validates our approach as equivalent in accuracy to an experienced user.
NASA Astrophysics Data System (ADS)
He, Bin; Frey, Eric C.
2010-06-01
Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is the accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimation were linear in the shift for both the QSPECT and QPlanar methods. QPlanar was less sensitive to object definition perturbations than QSPECT, especially for dilation and erosion cases. Up to 1 voxel of misregistration or misdefinition resulted in up to 8% error in organ activity estimates, with the largest errors for small or low-uptake organs. Both types of VOI definition error produced larger errors in activity estimates for a small, low-uptake organ (i.e. -7.5% to 5.3% for the left kidney) than for a large, high-uptake organ (i.e. -2.9% to 2.1% for the liver). We observed that misregistration generally had larger effects than misdefinition, with errors ranging from -7.2% to 8.4%. The different imaging methods evaluated responded differently to the errors from misregistration and misdefinition. We found that QSPECT was more sensitive to misdefinition errors, but less sensitive to misregistration errors, as compared to the QPlanar method. Thus, sensitivity to VOI definition errors should be an important criterion in evaluating quantitative imaging methods.
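A reduced version of the misregistration experiment: shift a boolean VOI mask by whole voxels (the study used sub-voxel shifts) and report the resulting percent error in the organ activity estimate. Everything here is a simplified assumption for illustration.

```python
import numpy as np

def activity_error_on_shift(image, voi_mask, shift):
    """Percent error in the VOI activity sum when the VOI is misregistered
    by `shift` voxels along each axis."""
    a_true = image[voi_mask].sum()
    shifted = np.roll(voi_mask, shift, axis=(0, 1, 2))
    return 100.0 * (image[shifted].sum() - a_true) / a_true

img = np.ones((64, 64, 64)) * 0.1          # uniform background
voi = np.zeros((64, 64, 64), dtype=bool)
voi[20:30, 20:30, 20:30] = True            # toy organ VOI
img[voi] = 1.0                             # "organ" uptake
print(activity_error_on_shift(img, voi, (1, 0, 0)))   # ~ -9%
```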
Modeling qRT-PCR dynamics with application to cancer biomarker quantification.
Chervoneva, Inna; Freydin, Boris; Hyslop, Terry; Waldman, Scott A
2017-01-01
Quantitative reverse transcription polymerase chain reaction (qRT-PCR) is widely used for molecular diagnostics and evaluating prognosis in cancer. The utility of mRNA expression biomarkers relies heavily on the accuracy and precision of quantification, which is still challenging for low abundance transcripts. The critical step for quantification is accurate estimation of efficiency needed for computing a relative qRT-PCR expression. We propose a new approach to estimating qRT-PCR efficiency based on modeling dynamics of polymerase chain reaction amplification. In contrast, only models for fluorescence intensity as a function of polymerase chain reaction cycle have been used so far for quantification. The dynamics of qRT-PCR efficiency is modeled using an ordinary differential equation model, and the fitted ordinary differential equation model is used to obtain effective polymerase chain reaction efficiency estimates needed for efficiency-adjusted quantification. The proposed new qRT-PCR efficiency estimates were used to quantify GUCY2C (Guanylate Cyclase 2C) mRNA expression in the blood of colorectal cancer patients. Time to recurrence and GUCY2C expression ratios were analyzed in a joint model for survival and longitudinal outcomes. The joint model with GUCY2C quantified using the proposed polymerase chain reaction efficiency estimates provided clinically meaningful results for association between time to recurrence and longitudinal trends in GUCY2C expression.
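However the efficiency is estimated (the ODE route above or a fluorescence-curve fit), it enters quantification through an efficiency-adjusted ratio; a sketch of the standard Pfaffl-style computation with invented Ct values:

```python
def relative_expression(E_t, ct_t_ctrl, ct_t_sample,
                        E_r, ct_r_ctrl, ct_r_sample):
    """Efficiency-adjusted expression ratio of a target gene vs a reference
    gene; E is the per-cycle amplification efficiency (2.0 = doubling)."""
    return (E_t ** (ct_t_ctrl - ct_t_sample)) / (E_r ** (ct_r_ctrl - ct_r_sample))

# Invented Ct values and efficiencies, purely illustrative
print(relative_expression(1.92, 28.4, 26.1, 1.98, 21.0, 20.8))   # ~3.9
```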
NASA Astrophysics Data System (ADS)
Debebe, Senait A.; Franquiz, Juan; McGoron, Anthony J.
2015-03-01
Selective Internal Radiation Therapy (SIRT) is a common way to treat liver cancer that cannot be treated surgically. SIRT involves administration of Yttrium-90 (90Y) microspheres via the hepatic artery after a diagnostic procedure using 99mTc-macroaggregated albumin (MAA) to detect extrahepatic shunting to the lung or the gastrointestinal tract. Accurate quantification of the radionuclide administered to patients and the radiation dose absorbed by different organs is of importance in SIRT. Accurate dosimetry for SIRT allows optimization of dose delivery to the target tumor and may allow for the ability to assess the efficacy of the treatment. In this study, we proposed a method that can efficiently estimate radiation absorbed dose from 90Y bremsstrahlung SPECT/CT images of the liver and the surrounding organs. Bremsstrahlung radiation from 90Y was simulated using the Compton window of 99mTc (78 keV at 57%). 99mTc images acquired at the photopeak energy window were used as a standard to examine the accuracy of dosimetry prediction by the simulated bremsstrahlung images. A Liqui-Phil abdominal phantom with liver, stomach and two tumor inserts was imaged using a Philips SPECT/CT scanner. The Dose Point Kernel convolution method was used to find the radiation absorbed dose at the voxel level for a three-dimensional dose distribution. This method allows for a complete estimate of the distribution of radiation absorbed dose by tumors, liver, stomach and other surrounding organs at the voxel level. The method provides a quantitative predictive method for SIRT treatment outcome and administered dose response for patients who undergo the treatment.
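The dose point kernel step is a 3-D convolution of the activity map with the kernel, which FFTs handle efficiently; a sketch with a toy Gaussian kernel in place of a real 90Y kernel, and placeholder units:

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy isotropic kernel standing in for a true 90Y dose point kernel
r = np.linspace(-2.0, 2.0, 9)
X, Y, Z = np.meshgrid(r, r, r, indexing="ij")
kernel = np.exp(-(X**2 + Y**2 + Z**2))
kernel /= kernel.sum()

activity = np.zeros((32, 32, 32))
activity[16, 16, 16] = 1.0e6               # point source, arbitrary units

dose = fftconvolve(activity, kernel, mode="same")   # voxel-level dose map
print(dose.max())
```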
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
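A minimal sketch of the LN cascade itself (with an assumed alpha-shaped filter and sigmoidal nonlinearity, not the parameter-free forms derived in the paper) looks like this:

```python
import numpy as np

dt = 0.001                                        # s
t = np.arange(0.0, 0.1, dt)
lin_filter = (t / 0.01) * np.exp(1.0 - t / 0.01)  # assumed alpha-shaped filter

def static_nl(u, r_max=50.0, u0=0.0, gain=5.0):
    """Assumed sigmoidal static nonlinearity returning a rate in Hz."""
    return r_max / (1.0 + np.exp(-gain * (u - u0)))

rng = np.random.default_rng(2)
stimulus = rng.normal(0.0, 1.0, 2000)             # time-varying input current
# Linear stage: normalized weighted average of the recent input.
filtered = np.convolve(stimulus, lin_filter)[:stimulus.size] / lin_filter.sum()
rate = static_nl(filtered)                        # instantaneous firing rate
print(f"mean predicted rate: {rate.mean():.1f} Hz")
```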
Uncertainties of Mayak urine data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Guthrie; Vostrotin, Vadim; Vvdensky, Vladimir
2008-01-01
For internal dose calculations for the Mayak worker epidemiological study, quantitative estimates of uncertainty of the urine measurements are necessary. Some of the data consist of measurements of 24h urine excretion on successive days (e.g. 3 or 4 days). In a recent publication, dose calculations were done where the uncertainty of the urine measurements was estimated starting from the statistical standard deviation of these replicate measurements. This approach is straightforward and accurate when the number of replicate measurements is large; however, a Monte Carlo study showed it to be problematic for the actual number of replicate measurements (median from 3 to 4). Also, it is sometimes important to characterize the uncertainty of a single urine measurement. An alternative method has therefore been developed, and a method of parameterizing the uncertainty of Mayak urine bioassay measurements is described. The Poisson lognormal model is assumed and data from 63 cases (1099 urine measurements in all) are used to empirically determine the lognormal normalization uncertainty, given the measurement uncertainties obtained from count quantities. The natural logarithm of the geometric standard deviation of the normalization uncertainty is found to be in the range 0.31 to 0.35, including a measurement component estimated to be 0.2.
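The flavor of the Poisson-lognormal decomposition can be illustrated with a toy simulation: if replicate results vary through Poisson counting error plus a lognormal normalization factor, the excess of the observed log-variance over the counting contribution estimates ln(GSD) of the normalization. All numbers below are invented and this is not the paper's fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
true_excretion, counts_per_unit = 10.0, 50.0
ln_gsd_norm = 0.33                    # value in the reported 0.31-0.35 range

# Simulate 1099 measurements as cases of 3 replicates each.
reps = rng.poisson(true_excretion * counts_per_unit, size=(366, 3)) \
       / counts_per_unit * rng.lognormal(0.0, ln_gsd_norm, size=(366, 3))

log_var = np.log(reps).var(axis=1, ddof=1).mean()
poisson_log_var = 1.0 / (true_excretion * counts_per_unit)  # delta-method term
print(f"estimated ln(GSD) of normalization: "
      f"{np.sqrt(log_var - poisson_log_var):.2f}")
```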
Motion immune diffusion imaging using augmented MUSE (AMUSE) for high-resolution multi-shot EPI
Guhaniyogi, Shayan; Chu, Mei-Lan; Chang, Hing-Chiu; Song, Allen W.; Chen, Nan-kuei
2015-01-01
Purpose: To develop new techniques for reducing the effects of microscopic and macroscopic patient motion in diffusion imaging acquired with high-resolution multi-shot EPI. Theory: The previously reported Multiplexed Sensitivity Encoding (MUSE) algorithm is extended to account for macroscopic pixel misregistrations as well as motion-induced phase errors in a technique called Augmented MUSE (AMUSE). Furthermore, to obtain more accurate quantitative DTI measures in the presence of subject motion, we also account for the altered diffusion encoding among shots arising from macroscopic motion. Methods: MUSE and AMUSE were evaluated on simulated and in vivo motion-corrupted multi-shot diffusion data. Evaluations were made both on the resulting image quality and on the estimated diffusion tensor metrics. Results: AMUSE was found to reduce image blurring resulting from macroscopic subject motion compared to MUSE, but yielded inaccurate tensor estimations when neglecting the altered diffusion encoding. Including the altered diffusion encoding in AMUSE produced better estimations of diffusion tensors. Conclusion: The use of AMUSE allows for improved image quality and diffusion tensor accuracy in the presence of macroscopic subject motion during multi-shot diffusion imaging. These techniques should facilitate future high-resolution diffusion imaging. PMID:25762216
Molteni, Matteo; Magatti, Davide; Cardinali, Barbara; Rocco, Mattia; Ferri, Fabio
2013-01-01
The average pore size ξ0 of filamentous networks assembled from biological macromolecules is one of the most important physical parameters affecting their biological functions. Modern optical methods, such as confocal microscopy, can noninvasively image such networks, but extracting a quantitative estimate of ξ0 is a nontrivial task. We present here a fast and simple method based on a two-dimensional bubble approach, which works by analyzing one by one the (thresholded) images of a series of three-dimensional thin data stacks. No skeletonization or reconstruction of the full geometry of the entire network is required. The method was validated by using many isotropic in silico generated networks of different structures, morphologies, and concentrations. For each type of network, the method provides accurate estimates (a few percent) of the average and the standard deviation of the three-dimensional distribution of the pore sizes, defined as the diameters of the largest spheres that can be fit into the pore zones of the entire gel volume. When applied to the analysis of real confocal microscopy images taken on fibrin gels, the method provides an estimate of ξ0 consistent with results from elastic light scattering data. PMID:23473499
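The core of a bubble-type estimate is easy to sketch on a single thresholded 2D slice: the Euclidean distance transform of the pore space gives, at each pore pixel, the radius of the largest disk centered there, and local maxima of that map approximate the pore radii. The toy network below is an illustration of the idea, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
# Toy "network" slice: True = fiber, False = pore space.
fibers = ndimage.binary_dilation(rng.random((200, 200)) > 0.995, iterations=3)
dist = ndimage.distance_transform_edt(~fibers)  # largest-disk radius per pixel

# Local maxima of the distance map as pore-center candidates.
footprint = np.ones((9, 9), dtype=bool)
is_max = (dist == ndimage.maximum_filter(dist, footprint=footprint)) & (dist > 1)
pore_diameters = 2.0 * dist[is_max]
print(f"mean pore diameter: {pore_diameters.mean():.1f} px "
      f"+/- {pore_diameters.std():.1f} px")
```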
Spibey, C A; Jackson, P; Herick, K
2001-03-01
In recent years the use of fluorescent dyes in biological applications has dramatically increased. The continual improvement in the capabilities of these fluorescent dyes demands increasingly sensitive detection systems that provide accurate quantitation over a wide linear dynamic range. In the field of proteomics, the detection, quantitation and identification of very low abundance proteins are of extreme importance in understanding cellular processes. Therefore, the instrumentation used to acquire an image of such samples, for spot picking and identification by mass spectrometry, must be sensitive enough not only to maximise the sensitivity and dynamic range of the staining dyes but, as importantly, to adapt to the ever-changing portfolio of fluorescent dyes as they become available. Just as the available fluorescent probes are improving and evolving, so are the users' application requirements. Therefore, the instrumentation chosen must be flexible enough to address and adapt to those changing needs. As a result, a highly competitive market has emerged for the supply and production of such dyes and the instrumentation for their detection and quantitation. The instrumentation currently available is based on either laser/photomultiplier tube (PMT) scanning or lamp/charge-coupled device (CCD) based mechanisms. This review briefly discusses the advantages and disadvantages of both system types for fluorescence imaging, gives a technical overview of CCD technology and describes in detail a unique xenon arc lamp CCD-based instrument from PerkinElmer Life Sciences. The Wallac-1442 ARTHUR is unique in its ability to scan large areas at high resolution and to give accurate selectable excitation over the whole of the UV/visible range. It operates by filtering both the excitation and emission wavelengths, providing optimal and accurate measurement and quantitation of virtually any available dye, and allows excellent spectral resolution between different fluorophores. This flexibility and excitation accuracy is key to multicolour applications and to future adaptation of the instrument to address changing application requirements and newly emerging dyes.
Geng, Hua; Todd, Naomi M; Devlin-Mullin, Aine; Poologasundarampillai, Gowsihan; Kim, Taek Bo; Madi, Kamel; Cartmell, Sarah; Mitchell, Christopher A; Jones, Julian R; Lee, Peter D
2016-06-01
A correlative imaging methodology was developed to accurately quantify bone formation in the complex lattice structure of additive manufactured implants. Micro computed tomography (μCT) and histomorphometry were combined, integrating the best features from both, while demonstrating the limitations of each imaging modality. This semi-automatic methodology registered each modality using a coarse-graining technique to speed the registration of 2D histology sections to high resolution 3D μCT datasets. Once registered, histomorphometric qualitative and quantitative bone descriptors were directly correlated to 3D quantitative bone descriptors, such as bone ingrowth and bone contact. The correlative imaging allowed the significant volumetric shrinkage of histology sections to be quantified for the first time (~15%). The technique also demonstrated the importance of the location of the histological section, showing that an offset of up to 30% can be introduced. The results were used to quantitatively demonstrate the effectiveness of 3D printed titanium lattice implants.
Can Value-Added Measures of Teacher Performance Be Trusted?
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Reckase, Mark D.; Wooldridge, Jeffrey M.
2015-01-01
We investigate whether commonly used value-added estimation strategies produce accurate estimates of teacher effects under a variety of scenarios. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. We find that no one method accurately captures…
Michael J. Firko; Jane Leslie Hayes
1990-01-01
Quantitative genetic studies of resistance can provide estimates of genetic parameters not available with other types of genetic analyses. Three methods are discussed for estimating the amount of additive genetic variation in resistance to individual insecticides and subsequent estimation of heritability (h2) of resistance. Sibling analysis and...
Ellis, David I; Broadhurst, David; Kell, Douglas B; Rowland, Jem J; Goodacre, Royston
2002-06-01
Fourier transform infrared (FT-IR) spectroscopy is a rapid, noninvasive technique with considerable potential for application in the food and related industries. We show here that this technique can be used directly on the surface of food to produce biochemically interpretable "fingerprints." Spoilage in meat is the result of decomposition and the formation of metabolites caused by the growth and enzymatic activity of microorganisms. FT-IR was exploited to measure biochemical changes within the meat substrate, enhancing and accelerating the detection of microbial spoilage. Chicken breasts were purchased from a national retailer, comminuted for 10 s, and left to spoil at room temperature for 24 h. Every hour, FT-IR measurements were taken directly from the meat surface using attenuated total reflectance, and the total viable counts were obtained by classical plating methods. Quantitative interpretation of FT-IR spectra was possible using partial least-squares regression and allowed accurate estimates of bacterial loads to be calculated directly from the meat surface in 60 s. Genetic programming was used to derive rules showing that at levels of 10(7) bacteria.g(-1) the main biochemical indicator of spoilage was the onset of proteolysis. Thus, using FT-IR we were able to acquire a metabolic snapshot and quantify, noninvasively, the microbial loads of food samples accurately and rapidly in 60 s, directly from the sample surface. We believe this approach will aid in the Hazard Analysis Critical Control Point process for the assessment of the microbiological safety of food at the production, processing, manufacturing, packaging, and storage levels.
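The chemometric step is standard partial least-squares regression from spectra to log counts; a minimal sketch on synthetic data (the spectra, loads, and latent structure are all invented) might look like this:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_samples, n_wavenumbers = 24, 600            # hourly spectra, ATR region
loads = rng.uniform(3, 9, n_samples)          # "true" log10 CFU/g
basis = rng.normal(size=(2, n_wavenumbers))   # latent spectral components

# Spectra = load-dependent component + nuisance component + noise.
spectra = (np.outer(loads, basis[0])
           + rng.normal(size=(n_samples, 1)) * basis[1]
           + 0.05 * rng.normal(size=(n_samples, n_wavenumbers)))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, loads, random_state=0)
pls = PLSRegression(n_components=2).fit(X_tr, y_tr)
print(f"R^2 on held-out spectra: {pls.score(X_te, y_te):.2f}")
```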
Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien
2018-01-01
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
NASA Astrophysics Data System (ADS)
Robertson, K. M.; Milliken, R. E.; Li, S.
2016-10-01
Quantitative mineral abundances of lab derived clay-gypsum mixtures were estimated using a revised Hapke VIS-NIR and Shkuratov radiative transfer model. Montmorillonite-gypsum mixtures were used to test the effectiveness of the model in distinguishing between subtle differences in minor absorption features that are diagnostic of mineralogy in the presence of strong H2O absorptions that are not always diagnostic of distinct phases or mineral abundance. The optical constants (k-values) for both endmembers were determined from bi-directional reflectance spectra measured in RELAB as well as on an ASD FieldSpec3 in a controlled laboratory setting. Multiple size fractions were measured in order to derive a single k-value from optimization of the optical path length in the radiative transfer models. It is shown that with careful experimental conditions, optical constants can be accurately determined from powdered samples using a field spectrometer, consistent with previous studies. Variability in the montmorillonite hydration level increased the uncertainties in the derived k-values, but estimated modal abundances for the mixtures were still within 5% of the measured values. Results suggest that the Hapke model works well in distinguishing between hydrated phases that have overlapping H2O absorptions and it is able to detect gypsum and montmorillonite in these simple mixtures where they are present at levels of ∼10%. Care must be taken however to derive k-values from a sample with appropriate H2O content relative to the modeled spectra. These initial results are promising for the potential quantitative analysis of orbital remote sensing data of hydrated minerals, including more complex clay and sulfate assemblages such as mudstones examined by the Curiosity rover in Gale crater.
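For intuition, a minimal forward sketch of the Hapke approach for an intimate binary mixture (isotropic scatterers, no opposition effect): single-scattering albedos mix linearly by geometric cross-section, and the mixture reflectance follows from the mixed albedo via the approximate Chandrasekhar H-function. The endmember albedos below are placeholders, not the derived k-values.

```python
import numpy as np

def H(x, w):
    """Approximate Chandrasekhar H-function for single-scattering albedo w."""
    return (1.0 + 2.0 * x) / (1.0 + 2.0 * x * np.sqrt(1.0 - w))

def hapke_reflectance(w, mu0=np.cos(np.radians(30)), mu=1.0):
    """Bidirectional reflectance, incidence 30 deg, nadir viewing."""
    return (w / 4.0) * mu0 / (mu0 + mu) * H(mu0, w) * H(mu, w)

wavelengths = np.linspace(0.4, 2.5, 5)               # microns (placeholder grid)
w_clay = np.array([0.70, 0.80, 0.85, 0.60, 0.50])    # placeholder albedos
w_gypsum = np.array([0.90, 0.92, 0.95, 0.88, 0.75])

f = 0.10                                             # 10% gypsum by cross-section
w_mix = f * w_gypsum + (1.0 - f) * w_clay
print(np.round(hapke_reflectance(w_mix), 3))
```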
Andrews, John T.; Kristjansdottir, Greta B.; Eberl, Dennis D.; Jennings, Anne E.
2013-01-01
This paper re-evaluates how well quantitative x-ray diffraction (qXRD) can be used as an exploratory method for estimating the weight percentage (wt%) of volcaniclastic sediment and for identifying tephra events in marine cores. In the widely used RockJock v6 software programme, qXRD tephra and glass standards include the rhyodacite White River tephra (Alaska), a rhyolitic tephra (Hekla-4) and the basaltic Saksunarvatn tephra. Experiments in which known wt% of tephra were added to felsic bedrock samples indicated that additions ≥10 wt% are accurately detected, but reliable estimates of lesser amounts are masked by amorphous material produced by milling. Volcaniclastic inputs range between 20 and 50 wt%. Primary tephra events are identified as peaks in residual qXRD glass wt% from fourth-order polynomial fits. In cores where tephras have been identified by shard counts in the >150 µm fraction, there is a positive correlation (validation) with peaks in the wt% glass estimated by qXRD. Geochemistry of tephra shards confirms the presence of several Hekla-sourced tephras in cores B997-317PC1 and -319PC2 on the northern Iceland shelf. In core B997-338 (north-west Iceland), there are two rhyolitic tephras separated by ca. 100 cm, with uncorrected radiocarbon dates on articulated shells of around 13 000 yr B.P. These tephras may be correlatives of the Borrobol and Penifiler tephras found in Scotland. The number of Holocene tephra events per 1000 yr was estimated from qXRD on 16 cores and showed a bimodal distribution, with an increased number of events in both the late and early Holocene.
Hysteresis and uncertainty in soil water-retention curve parameters
Likos, William J.; Lu, Ning; Godt, Jonathan W.
2014-01-01
Accurate estimates of soil hydraulic parameters representing wetting and drying paths are required for predicting hydraulic and mechanical responses in a large number of applications. A comprehensive suite of laboratory experiments was conducted to measure hysteretic soil-water characteristic curves (SWCCs) representing a wide range of soil types. Results were used to quantitatively assess differences and uncertainty in three simplifications frequently adopted to estimate wetting-path SWCC parameters from more easily measured drying curves. They are the following: (1) αw=2αd, (2) nw=nd, and (3) θws=θds, where α, n, and θs are fitting parameters entering van Genuchten’s commonly adopted SWCC model, and the superscripts w and d indicate wetting and drying paths, respectively. The average ratio αw/αd for the data set was 2.24±1.25. Nominally cohesive soils had a lower αw/αd ratio (1.73±0.94) than nominally cohesionless soils (3.14±1.27). The average nw/nd ratio was 1.01±0.11 with no significant dependency on soil type, thus confirming the nw=nd simplification for a wider range of soil types than previously available. Water content at zero suction during wetting (θws) was consistently less than during drying (θds) owing to air entrapment. The θws/θds ratio averaged 0.85±0.10 and was comparable for nominally cohesive (0.87±0.11) and cohesionless (0.81±0.08) soils. Regression statistics are provided to quantitatively account for uncertainty in estimating hysteretic retention curves. Practical consequences are demonstrated for two case studies.
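A minimal sketch applying these simplifications in the van Genuchten model (all parameter values invented for illustration; the 0.85 factor reflects the reported average θws/θds ratio):

```python
import numpy as np

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """Volumetric water content at suction psi (kPa), van Genuchten model."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

psi = np.logspace(-1, 3, 5)                                # suctions, kPa
theta_r, theta_s_d, alpha_d, n_d = 0.05, 0.40, 0.10, 1.8   # drying-path fit

# Simplifications (1)-(3): alpha_w = 2*alpha_d, n_w = n_d, theta_s_w = theta_s_d;
# per the data above, theta_s_w ~ 0.85*theta_s_d better captures air entrapment.
drying = van_genuchten(psi, theta_r, theta_s_d, alpha_d, n_d)
wetting = van_genuchten(psi, theta_r, 0.85 * theta_s_d, 2.0 * alpha_d, n_d)
for p, d, w in zip(psi, drying, wetting):
    print(f"psi={p:8.2f} kPa  drying={d:.3f}  wetting={w:.3f}")
```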
Changes in body composition of neonatal piglets during growth
USDA-ARS?s Scientific Manuscript database
During studies of neonatal piglet growth it is important to be able to accurately assess changes in body composition. Previous studies have demonstrated that quantitative magnetic resonance (QMR) provides precise and accurate measurements of total body fat mass, lean mass and total body water in non...
A Method for Estimating Zero-Flow Pressure and Intracranial Pressure
Marzban, Caren; Illian, Paul Raymond; Morison, David; Moore, Anne; Kliot, Michel; Czosnyka, Marek; Mourad, Pierre
2012-01-01
Background: It has been hypothesized that the critical closing pressure of cerebral circulation, or zero-flow pressure (ZFP), can estimate intracranial pressure (ICP). One ZFP estimation method employs extrapolation of arterial blood pressure versus blood-flow velocity. The aim of this study is to improve ICP predictions. Methods: Two revisions are considered: (1) the linear model employed for extrapolation is extended to a nonlinear equation, and (2) the parameters of the model are estimated by an alternative criterion (not least-squares). The method is applied to data on transcranial Doppler measurements of blood-flow velocity, arterial blood pressure, and ICP from 104 patients suffering from closed traumatic brain injury, sampled across the United States and England. Results: The revisions lead to qualitative (e.g., precluding negative ICP) and quantitative improvements in ICP prediction. In going from the original to the revised method, the ±2 standard deviation of error is reduced from 33 to 24 mm Hg; the root-mean-squared error (RMSE) is reduced from 11 to 8.2 mm Hg. The distribution of RMSE is tighter as well; for the revised method the 25th and 75th percentiles are 4.1 and 13.7 mm Hg, respectively, as compared to 5.1 and 18.8 mm Hg for the original method. Conclusions: Proposed alterations to a procedure for estimating ZFP lead to more accurate and more precise estimates of ICP, thereby offering improved means of estimating it noninvasively. The quality of the estimates is inadequate for many applications, but further work is proposed which may lead to clinically useful results. PMID:22824923
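For orientation, the original linear method is easy to sketch: regress arterial blood pressure on blood-flow velocity over a cardiac cycle and take the intercept at zero flow as the ZFP estimate. The revised nonlinear model and alternative fitting criterion are not reproduced here; the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
velocity = np.linspace(20, 80, 50)                  # cm/s over a cycle
abp = 15.0 + 1.1 * velocity + rng.normal(0, 2, 50)  # mmHg, toy data, ZFP = 15

slope, intercept = np.polyfit(velocity, abp, 1)
zfp = max(intercept, 0.0)   # clamping echoes "precluding negative ICP"
print(f"estimated ZFP (~ICP proxy): {zfp:.1f} mmHg")
```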
Predictive value of EEG in postanoxic encephalopathy: A quantitative model-based approach.
Efthymiou, Evdokia; Renzel, Roland; Baumann, Christian R; Poryazova, Rositsa; Imbach, Lukas L
2017-10-01
The majority of comatose patients after cardiac arrest do not regain consciousness due to severe postanoxic encephalopathy. Early and accurate outcome prediction is therefore essential in determining further therapeutic interventions. The electroencephalogram is a standardized and commonly available tool used to estimate prognosis in postanoxic patients. The identification of pathological EEG patterns with poor prognosis relies however primarily on visual EEG scoring by experts. We introduced a model-based approach of EEG analysis (state space model) that allows for an objective and quantitative description of spectral EEG variability. We retrospectively analyzed standard EEG recordings in 83 comatose patients after cardiac arrest between 2005 and 2013 in the intensive care unit of the University Hospital Zürich. Neurological outcome was assessed one month after cardiac arrest using the Cerebral Performance Category. For a dynamic and quantitative EEG analysis, we implemented a model-based approach (state space analysis) to quantify EEG background variability independent from visual scoring of EEG epochs. Spectral variability was compared between groups and correlated with clinical outcome parameters and visual EEG patterns. Quantitative assessment of spectral EEG variability (state space velocity) revealed significant differences between patients with poor and good outcome after cardiac arrest: Lower mean velocity in temporal electrodes (T4 and T5) was significantly associated with poor prognostic outcome (p<0.005) and correlated with independently identified visual EEG patterns such as generalized periodic discharges (p<0.02). Receiver operating characteristic (ROC) analysis confirmed the predictive value of lower state space velocity for poor clinical outcome after cardiac arrest (AUC 80.8, 70% sensitivity, 15% false positive rate). Model-based quantitative EEG analysis (state space analysis) provides a novel, complementary marker for prognosis in postanoxic encephalopathy. Copyright © 2017 Elsevier B.V. All rights reserved.
Quantitative Graphics in Newspapers.
ERIC Educational Resources Information Center
Tankard, James W., Jr.
The use of quantitative graphics in newspapers requires achieving a balance between being accurate and getting the attention of the reader. The statistical representations in newspapers are drawn by graphic designers whose key technique is fusion--the striking combination of two visual images. This technique often results in visual puns,…
Quantitative PCR for Detection and Enumeration of Genetic Markers of Bovine Fecal Pollution
Accurate assessment of health risks associated with bovine (cattle) fecal pollution requires a reliable host-specific genetic marker and a rapid quantification method. We report the development of quantitative PCR assays for the detection of two recently described cow feces-spec...
Influence of nuclear de-excitation on observables relevant for space exploration
NASA Astrophysics Data System (ADS)
Mancusi, Davide; Boudard, Alain; Cugnon, Joseph; David, Jean-Christophe; Leray, Sylvie
The composition of the space radiation environment inside spacecraft is modified by the interaction with shielding material, with equipment and even with the astronauts' bodies. Accurate quantitative estimates of the effects of nuclear reactions are necessary, for example, for dose estimation and prediction of single-event upset rates. To this end, it is necessary to construct predictive models for nuclear reactions, which usually consist of an intranuclear-cascade or quantum-molecular-dynamics stage, followed by a nuclear de-excitation stage. While it is generally acknowledged that it is necessary to accurately simulate the first reaction stage, transport-code users often neglect or underestimate the importance of the choice of the de-excitation code. The purpose of this work is to prove that the de-excitation model is in fact a non-negligible source of uncertainty for the prediction of several observables of crucial importance for space applications. For some particular observables, such as fragmentation cross sections, the systematic uncertainty due to the de-excitation model actually dominates the theoretical error. Our point will be illustrated by making use of calculations performed with several intranuclear-cascade/de-excitation models, such as the Liège Intranuclear Cascade model (INCL) and Isabel (for the cascade part) and ABLA, GEMINI++ and SMM (on the de-excitation side). We will also rely on the results of the recent IAEA intercomparison of spallation models, which can be used as informative groundwork for the evaluation of the global uncertainties involved in nucleon-nucleus reactions.
Skin sensitisation, vehicle effects and the local lymph node assay.
Basketter, D A; Gerberick, G F; Kimber, I
2001-06-01
Accurate risk assessment in allergic contact dermatitis is dependent on the successful prospective identification of chemicals which possess the ability to behave as skin sensitisers, followed by appropriate measurement of the relative ability to cause sensitisation; their potency. Tools for hazard identification have been available for many years; more recently, a novel approach to the quantitative assessment of potency--the derivation of EC3 values in the local lymph node assay (LLNA)--has been described. It must be recognised, however, that these evaluations of chemical sensitisers also may be affected by the vehicle matrix in which skin exposure occurs. In this article, our knowledge of this area is reviewed and potential mechanisms through which vehicle effects may occur are detailed. Using the LLNA as an example, it is demonstrated that the vehicle may have little impact on the accuracy of basic hazard identification; the data also therefore support the view that testing ingredients in specific product formulations is not warranted for hazard identification purposes. However, the effect on potency estimations is of greater significance. Although not all chemical allergens are affected similarly, for certain substances a greater than 10-fold vehicle-dependent change in potency is observed. Such data are vital for accurate risk assessment. Unfortunately, it does not at present appear possible to predict notionally the effect of the vehicle matrix on skin sensitising potency without recourse to direct testing, for example by estimation of LLNA EC3 data, which provides a valuable tool for this purpose.
Bharathi, D Vijaya; Hotha, Kishore Kumar; Jagadeesh, B; Chatki, Pankaj K; Thriveni, K; Mullangi, Ramesh; Naidu, A
2009-07-01
A highly selective, sensitive and accurate HPLC method has been developed and validated for the estimation of four proton-pump inhibitors (PPIs), lansoprazole (LPZ), omeprazole (OPZ), pantoprazole (PPZ) and rabeprazole (RPZ), in 500 microL of human plasma using zonisamide as an internal standard (IS). The sample preparation involved simple liquid-liquid extraction of LPZ, OPZ, PPZ, RPZ and the IS from human plasma with ethyl acetate. Baseline separation of all the peaks was achieved with 0.1% triethylamine (pH 6.0):acetonitrile (72:28, v/v) at a flow rate of 1 mL/min on a Zorbax C(8) column. The total chromatographic run time was 11.0 min, and the elution of IS, OPZ, RPZ, PPZ and LPZ occurred at approximately 2.42, 4.45, 5.02 and 9.37 min, respectively. The method proved accurate and precise over the linearity range of 20.61-1999.79 ng/mL, with a correlation coefficient (r) of ≥0.999. The limit of quantitation for each of the PPIs studied was 20.61 ng/mL. The intra- and inter-day precision and accuracy values were found to be within the assay variability limits as per the FDA guidelines. The developed assay method was applied to a pharmacokinetic study in human volunteers. (c) 2009 John Wiley & Sons, Ltd.
Study on color difference estimation method of medicine biochemical analysis
NASA Astrophysics Data System (ADS)
Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun
2006-01-01
Biochemical analysis is an important clinical inspection and diagnosis method in hospitals, and the biochemical analysis of urine is one important item. Urine test paper shows a corresponding color for each detection item and illness degree, so the color difference between a standard threshold and the color of the test paper can be used to judge the degree of illness and to support further analysis and diagnosis. Color is a three-dimensional physical variable with a psychological component, while reflectance is one-dimensional; an estimation method based on color difference can therefore achieve better precision and convenience in urine testing than the conventional one-dimensional reflectance method, enabling a more accurate diagnosis. A digital camera can easily capture an image of the urine test paper and is a convenient tool for urine biochemical analysis. In the experiment, color images of urine test paper were taken with a popular color digital camera and saved on a computer running simple color-space conversion (RGB -> XYZ -> L*a*b*) and calculation software. Test samples were graded according to intelligent detection of quantitative color. Because the images from every test were saved on the computer, the whole course of an illness can be monitored. The method can also be used in other medical biochemical analyses involving color. Experimental results show that this test method is quick and accurate; it can be used in hospitals, calibration organizations, and homes, so its application prospects are extensive.
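The computational core is the color-space conversion plus a color-difference score; a minimal sketch (with invented calibration pad colors, scoring by the Euclidean Delta E in L*a*b* space) follows:

```python
import numpy as np

def srgb_to_lab(rgb, white=(0.9505, 1.0, 1.089)):
    """sRGB (0-1) -> XYZ (D65) -> CIE L*a*b*."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ lin / white
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    return np.array([116.0 * f[1] - 16.0,        # L*
                     500.0 * (f[0] - f[1]),      # a*
                     200.0 * (f[1] - f[2])])     # b*

grades = {"negative": (0.85, 0.85, 0.60),        # hypothetical pad colors
          "trace": (0.75, 0.80, 0.55),
          "positive": (0.55, 0.70, 0.50)}
sample = srgb_to_lab((0.72, 0.79, 0.54))
best = min(grades, key=lambda g: np.linalg.norm(sample - srgb_to_lab(grades[g])))
print(f"closest calibration grade: {best}")
```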
Systematic Identification of Preferred Orbits for Magnetospheric Missions. 1; Single Satellites
NASA Technical Reports Server (NTRS)
Stern, David P.
2000-01-01
This is a systematic attempt to identify and assess near-equatorial, high-eccentricity orbits best suited for studying the Earth's magnetosphere, in particular its most dynamic part, the plasma sheet of the magnetotail. The study was motivated by the design needs of a multi-spacecraft "constellation" mission, stressing low cost, minimal active control and economic launch strategies, and both quantitative and qualitative aspects were investigated. On one hand, by collecting hourly samples throughout the year, accurate estimates were obtained of the coverage of different regions, and of the frequency and duration of long eclipses. On the other hand, an intuitive understanding was developed of the factors which determine the merits of the mission, including long-range factors due to perturbations by the Moon, Sun and the Earth's equatorial bulge.
Local facet approximation for image stitching
NASA Astrophysics Data System (ADS)
Li, Jing; Lai, Shiming; Liu, Yu; Wang, Zhengming; Zhang, Maojun
2018-01-01
Image stitching aims at eliminating multiview parallax and generating a seamless panorama given a set of input images. This paper proposes a local adaptive stitching method, which could achieve both accurate and robust image alignments across the whole panorama. A transformation estimation model is introduced by approximating the scene as a combination of neighboring facets. Then, the local adaptive stitching field is constructed using a series of linear systems of the facet parameters, which enables the parallax handling in three-dimensional space. We also provide a concise but effective global projectivity preserving technique that smoothly varies the transformations from local adaptive to global planar. The proposed model is capable of stitching both normal images and fisheye images. The efficiency of our method is quantitatively demonstrated in the comparative experiments on several challenging cases.
Marmarelis, Vasilis Z.; Berger, Theodore W.
2009-01-01
Parametric and non-parametric modeling methods are combined to study the short-term plasticity (STP) of synapses in the central nervous system (CNS). The nonlinear dynamics of STP are modeled by means of: (1) previously proposed parametric models based on mechanistic hypotheses and/or specific dynamical processes, and (2) non-parametric models (in the form of Volterra kernels) that transform the presynaptic signals into postsynaptic signals. In order to synergistically use the two approaches, we estimate the Volterra kernels of the parametric models of STP for four types of synapses using synthetic broadband input–output data. Results show that the non-parametric models accurately and efficiently replicate the input–output transformations of the parametric models. Volterra kernels provide a general and quantitative representation of the STP. PMID:18506609
Quantitation of spatially-localized proteins in tissue samples using MALDI-MRM imaging.
Clemis, Elizabeth J; Smith, Derek S; Camenzind, Alexander G; Danell, Ryan M; Parker, Carol E; Borchers, Christoph H
2012-04-17
MALDI imaging allows the creation of a "molecular image" of a tissue slice. This image is reconstructed from the ion abundances in spectra obtained while rastering the laser over the tissue. These images can then be correlated with tissue histology to detect potential biomarkers of, for example, aberrant cell types. MALDI, however, is known to have problems with ion suppression, making it difficult to correlate measured ion abundance with concentration. It would be advantageous to have a method which could provide more accurate protein concentration measurements, particularly for screening applications or for precise comparisons between samples. In this paper, we report the development of a novel MALDI imaging method for the localization and accurate quantitation of proteins in tissues. This method involves optimization of in situ tryptic digestion, followed by reproducible and uniform deposition of an isotopically labeled standard peptide from a target protein onto the tissue, using an aerosol-generating device. Data is acquired by MALDI multiple reaction monitoring (MRM) mass spectrometry (MS), and accurate peptide quantitation is determined from the ratio of MRM transitions for the endogenous unlabeled proteolytic peptides to the corresponding transitions from the applied isotopically labeled standard peptides. In a parallel experiment, the quantity of the labeled peptide applied to the tissue was determined using a standard curve generated from MALDI time-of-flight (TOF) MS data. This external calibration curve was then used to determine the quantity of endogenous peptide in a given area. All standard curves generate by this method had coefficients of determination greater than 0.97. These proof-of-concept experiments using MALDI MRM-based imaging show the feasibility for the precise and accurate quantitation of tissue protein concentrations over 2 orders of magnitude, while maintaining the spatial localization information for the proteins.
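The quantitation arithmetic described reduces to scaling the known amount of applied labeled standard by the ratio of the endogenous to labeled MRM transition areas; all numbers in this sketch are invented for illustration:

```python
# Toy MALDI-MRM quantitation: endogenous amount from the transition-area
# ratio against the isotopically labeled internal standard.
area_endogenous = 8.4e4      # MRM transition area, unlabeled peptide
area_labeled = 2.1e4         # MRM transition area, labeled standard
fmol_labeled_applied = 25.0  # from the TOF-MS external calibration curve

fmol_endogenous = fmol_labeled_applied * area_endogenous / area_labeled
print(f"endogenous peptide: {fmol_endogenous:.0f} fmol in the sampled area")
```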
Shope, Christopher L.; Angeroth, Cory E.
2015-01-01
Effective management of surface waters requires a robust understanding of spatiotemporal constituent loadings from upstream sources and the uncertainty associated with these estimates. We compared the total dissolved solids (TDS) loading into the Great Salt Lake (GSL) for water year 2013 with estimates of previously sampled periods in the early 1960s. We also provide updated results on GSL loading, quantitatively bounded by sampling uncertainties, which are useful for current and future management efforts. Our statistical loading results were more accurate than those from simple regression models. Our results indicate that TDS loading to the GSL in water year 2013 was 14.6 million metric tons, with uncertainty ranging from 2.8 to 46.3 million metric tons, which varies greatly from previous regression estimates for water year 1964 of 2.7 million metric tons. Results also indicate that locations with increased sampling frequency are correlated with decreasing confidence intervals. Because time is incorporated into the LOADEST models, discrepancies are largely expected to be a function of temporally lagged salt storage delivery to the GSL associated with terrestrial and in-stream processes. By incorporating temporally variable estimates and statistically derived uncertainty of these estimates, we have provided quantifiable variability in the annual estimates of dissolved solids loading into the GSL. Further, our results support the need for increased monitoring of dissolved solids loading into saline lakes like the GSL by demonstrating the uncertainty associated with different levels of sampling frequency.
Neeser, Rudolph; Ackermann, Rebecca Rogers; Gain, James
2009-09-01
Various methodological approaches have been used for reconstructing fossil hominin remains in order to increase sample sizes and to better understand morphological variation. Among these, morphometric quantitative techniques for reconstruction are increasingly common. Here we compare the accuracy of three approaches--mean substitution, thin plate splines, and multiple linear regression--for estimating missing landmarks of damaged fossil specimens. Comparisons are made varying the number of missing landmarks, sample sizes, and the reference species of the population used to perform the estimation. The testing is performed on landmark data from individuals of Homo sapiens, Pan troglodytes and Gorilla gorilla, and nine hominin fossil specimens. Results suggest that when a small, same-species fossil reference sample is available to guide reconstructions, thin plate spline approaches perform best. However, if no such sample is available (or if the species of the damaged individual is uncertain), estimates of missing morphology based on a single individual (or even a small sample) of close taxonomic affinity are less accurate than those based on a large sample of individuals drawn from more distantly related extant populations using a technique (such as a regression method) able to leverage the information (e.g., variation/covariation patterning) contained in this large sample. Thin plate splines also show an unexpectedly large amount of error in estimating landmarks, especially over large areas. Recommendations are made for estimating missing landmarks under various scenarios. Copyright 2009 Wiley-Liss, Inc.
Multi-model ensemble estimation of volume transport through the straits of the East/Japan Sea
NASA Astrophysics Data System (ADS)
Han, Sooyeon; Hirose, Naoki; Usui, Norihisa; Miyazawa, Yasumasa
2016-01-01
The volume transports measured at the Korea/Tsushima, Tsugaru, and Soya/La Perouse Straits remain quantitatively inconsistent. However, data assimilation models at least provide a self-consistent budget despite subtle differences among the models. This study examined the seasonal variation of the volume transport using multiple linear regression and ridge regression multi-model ensemble (MME) methods, built on four different data assimilation models, to estimate transport at these straits more accurately. The MME outperformed all of the single models by reducing uncertainties, and the ridge regression in particular mitigated the multicollinearity problem. However, the regression constants turned out to be inconsistent with each other if the MME was applied separately for each strait. The MME was thus performed for a connected system to find common constants for these straits. The resulting estimate was similar to the MME result for sea level difference (SLD). The estimated mean transport (2.43 Sv) was smaller than the measurement data at the Korea/Tsushima Strait, but the calibrated transport of the Tsugaru Strait (1.63 Sv) was larger than the observed data. The MME results for transport and SLD also suggested that the standard deviation (STD) at the Korea/Tsushima Strait is larger than the STD of the observation, whereas the estimated results were almost identical to those observed for the Tsugaru and Soya/La Perouse Straits. The similarity between MME results enhances the reliability of the present MME estimation.
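A ridge-regression MME is straightforward to sketch: several highly correlated model series are combined to best match a reference series, with the L2 penalty damping the multicollinearity that destabilizes ordinary multiple linear regression. All series below are synthetic.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(7)
months = np.arange(120)
truth = 2.4 + 0.6 * np.sin(2 * np.pi * months / 12)  # "observed" transport, Sv

# Four highly correlated model estimates (hence the multicollinearity).
models = np.stack([truth + rng.normal(0.0, 0.3, months.size) + bias
                   for bias in (0.2, -0.1, 0.4, 0.0)], axis=1)

mme = Ridge(alpha=1.0).fit(models, truth)
rmse = np.sqrt(np.mean((mme.predict(models) - truth) ** 2))
print("ensemble weights:", np.round(mme.coef_, 2))
print(f"RMSE: {rmse:.2f} Sv")
```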
Mannetje, Andrea 't; Steenland, Kyle; Checkoway, Harvey; Koskela, Riitta-Sisko; Koponen, Matti; Attfield, Michael; Chen, Jingqiong; Hnizdo, Eva; DeKlerk, Nicholas; Dosemeci, Mustafa
2002-08-01
Comprehensive quantitative silica exposure estimates over time, measured in the same units across a number of cohorts, would make possible a pooled exposure-response analysis for lung cancer. Such an analysis would help clarify the continuing controversy regarding whether silica causes lung cancer. Existing quantitative exposure data for 10 silica-exposed cohorts were retrieved from the original investigators. Occupation- and time-specific exposure estimates were either adopted/adapted or developed for each cohort, and converted to milligram per cubic meter (mg/m(3)) respirable crystalline silica. Quantitative exposure assignments were typically based on a large number (thousands) of raw measurements, or otherwise consisted of exposure estimates by experts (for two cohorts). Median exposure level of the cohorts ranged between 0.04 and 0.59 mg/m(3) respirable crystalline silica. Exposure estimates were partially validated via their successful prediction of silicosis in these cohorts. Existing data were successfully adopted or modified to create comparable quantitative exposure estimates over time for 10 silica-exposed cohorts, permitting a pooled exposure-response analysis. The difficulties encountered in deriving common exposure estimates across cohorts are discussed. Copyright 2002 Wiley-Liss, Inc.
Bromage, Erin S; Vadas, George G; Harvey, Ellen; Unger, Michael A; Kaattari, Stephen L
2007-10-15
Nitroaromatics are common pollutants of soil and groundwater at military installations because of their manufacture, storage, and use at these sites. Long-term monitoring of these pollutants comprises a significant percentage of restoration costs. Further, remediation activities often have to be delayed while the samples are processed via traditional chemical assessment protocols. Here we describe a rapid (<5 min), cost-effective, accurate method using a KinExA Inline Biosensor for monitoring of 2,4,6-trinitrotoluene (TNT) in field water samples. The biosensor, which is based on KinExA technology, accurately estimated the concentration of TNT in double-blind comparisons, with accuracy similar to that of traditional high-performance liquid chromatography (HPLC). In the assessment of field samples, the biosensor accurately predicted the concentration of TNT over the range of 1-30,000 microg/L when compared to either HPLC or quantitative gas chromatography-mass spectrometry (GC-MS). Various pre-assessment techniques were explored to examine whether field samples could be assessed untreated, without the removal of particulates or the use of solvents. In most cases, the KinExA Inline Biosensor gave a uniform assessment of TNT concentration independent of the pretreatment method. This indicates that the sensor holds significant promise for rapid, on-site assessment of TNT pollution in environmental water samples.
Mignon, C.; Tobin, D. J.; Zeitouny, M.; Uzunbajakava, N. E.
2018-01-01
Finding a path towards more accurate prediction of light propagation in human skin remains an aspiration of biomedical scientists working on cutaneous applications, both diagnostic and therapeutic. The objective of this study was to investigate the variability of the optical properties of human skin compartments reported in the literature, to explore the underlying rationale for this variability, to propose a dataset of values that better represents the in vivo case, and to recommend a route towards more accurate prediction of light propagation through cutaneous compartments. To achieve this, we undertook a novel, logical yet simple approach. We first reviewed scientific articles published between 1981 and 2013 that reported on skin optical properties, to reveal the spread in the reported quantitative values. We found variations of up to 100-fold. We then extracted the most trustworthy datasets, guided by the rule that the spectral properties should reflect the specific biochemical composition of each of the skin layers. This narrowed the spread in the calculated photon densities to 6-fold. We conclude with a recommendation to use the most robust datasets identified here when estimating light propagation in human skin using Monte Carlo simulations, or otherwise to follow our proposed strategy to screen any new datasets and determine their biological relevance. PMID:29552418
Cross-Sectional HIV Incidence Estimation in HIV Prevention Research
Brookmeyer, Ron; Laeyendecker, Oliver; Donnell, Deborah; Eshleman, Susan H.
2013-01-01
Accurate methods for estimating HIV incidence from cross-sectional samples would have great utility in prevention research. This report describes recent improvements in cross-sectional methods that significantly improve their accuracy. These improvements are based on the use of multiple biomarkers to identify recent HIV infections. These multi-assay algorithms (MAAs) use assays in a hierarchical approach for testing that minimizes the effort and cost of incidence estimation. These MAAs do not require mathematical adjustments for accurate estimation of the incidence rates in study populations in the year prior to sample collection. MAAs provide a practical, accurate, and cost-effective approach for cross-sectional HIV incidence estimation that can be used for HIV prevention research and global epidemic monitoring. PMID:23764641
Wengert, G J; Helbich, T H; Woitek, R; Kapetas, P; Clauser, P; Baltzer, P A; Vogl, W-D; Weber, M; Meyer-Baese, A; Pinker, Katja
2016-11-01
To evaluate the inter-/intra-observer agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent un-enhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intra-observer agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (k). Inexperienced readers achieved moderate inter-/intra-observer agreement and experienced readers a substantial inter- and perfect intra-observer agreement for subjective visual estimation of FGT. Practice and experience reduced observer-dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (k = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/inter-observer agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation. • Subjective FGT estimation with MRI shows moderate intra-/inter-observer agreement in inexperienced readers. • Inter-observer agreement can be improved by practice and experience. • Automated observer-independent quantitative measurements can provide reliable and standardized assessment of FGT with MRI.
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
ERIC Educational Resources Information Center
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Can Value-Added Measures of Teacher Performance Be Trusted? Working Paper #18
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Reckase, Mark D.; Wooldridge, Jeffrey M.
2012-01-01
We investigate whether commonly used value-added estimation strategies can produce accurate estimates of teacher effects. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. No one method accurately captures true teacher effects in all scenarios,…
A practical model for pressure probe system response estimation (with review of existing models)
NASA Astrophysics Data System (ADS)
Hall, B. F.; Povey, T.
2018-04-01
The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.
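As a point of reference for the scale of such estimates (this is a common low-order approximation, not the model proposed in the paper), treating the probe line plus transducer cavity as a Helmholtz resonator gives f0 = (c/2π)·sqrt(A/(V·L)); with plausible, invented dimensions:

```python
import numpy as np

c = 343.0      # speed of sound in air, m/s
d = 0.5e-3     # line bore diameter, m
L = 0.30       # line length, m
V = 20e-9      # transducer cavity volume, m^3 (20 mm^3)

A = np.pi * (d / 2.0) ** 2                 # line cross-sectional area
f0 = c / (2.0 * np.pi) * np.sqrt(A / (V * L))
print(f"Helmholtz estimate of probe-system natural frequency: {f0:.0f} Hz")
```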
Ramírez, Juan Carlos; Cura, Carolina Inés; Moreira, Otacilio da Cruz; Lages-Silva, Eliane; Juiz, Natalia; Velázquez, Elsa; Ramírez, Juan David; Alberti, Anahí; Pavia, Paula; Flores-Chávez, María Delmans; Muñoz-Calderón, Arturo; Pérez-Morales, Deyanira; Santalla, José; Guedes, Paulo Marcos da Matta; Peneau, Julie; Marcet, Paula; Padilla, Carlos; Cruz-Robles, David; Valencia, Edward; Crisante, Gladys Elena; Greif, Gonzalo; Zulantay, Inés; Costales, Jaime Alfredo; Alvarez-Martínez, Miriam; Martínez, Norma Edith; Villarroel, Rodrigo; Villarroel, Sandro; Sánchez, Zunilda; Bisio, Margarita; Parrado, Rudy; Galvão, Lúcia Maria da Cunha; da Câmara, Antonia Cláudia Jácome; Espinoza, Bertha; de Noya, Belkisyole Alarcón; Puerta, Concepción; Riarte, Adelina; Diosque, Patricio; Sosa-Estani, Sergio; Guhl, Felipe; Ribeiro, Isabela; Aznar, Christine; Britto, Constança; Yadón, Zaida Estela; Schijman, Alejandro G.
2015-01-01
An international study was performed by 26 experienced PCR laboratories from 14 countries to assess the performance of duplex quantitative real-time PCR (qPCR) strategies on the basis of TaqMan probes for detection and quantification of parasitic loads in peripheral blood samples from Chagas disease patients. Two methods were studied: Satellite DNA (SatDNA) qPCR and kinetoplastid DNA (kDNA) qPCR. Both methods included an internal amplification control. Reportable range, analytical sensitivity, limits of detection and quantification, and precision were estimated according to international guidelines. In addition, inclusivity and exclusivity were estimated with DNA from stocks representing the different Trypanosoma cruzi discrete typing units and Trypanosoma rangeli and Leishmania spp. Both methods were challenged against 156 blood samples provided by the participant laboratories, including samples from acute and chronic patients with varied clinical findings, infected by oral route or vectorial transmission. kDNA qPCR showed better analytical sensitivity than SatDNA qPCR with limits of detection of 0.23 and 0.70 parasite equivalents/mL, respectively. Analyses of clinical samples revealed a high concordance in terms of sensitivity and parasitic loads determined by both SatDNA and kDNA qPCRs. This effort is a major step toward international validation of qPCR methods for the quantification of T. cruzi DNA in human blood samples, aiming to provide an accurate surrogate biomarker for diagnosis and treatment monitoring for patients with Chagas disease. PMID:26320872
An Image Analysis Algorithm for Malaria Parasite Stage Classification and Viability Quantification
Moon, Seunghyun; Lee, Sukjun; Kim, Heechang; Freitas-Junior, Lucio H.; Kang, Myungjoo; Ayong, Lawrence; Hansen, Michael A. E.
2013-01-01
With more than 40% of the world’s population at risk, 200–300 million infections each year, and an estimated 1.2 million deaths annually, malaria remains one of the most important public health problems of mankind today. With the propensity of malaria parasites to rapidly develop resistance to newly developed therapies, and the recent failures of artemisinin-based drugs in Southeast Asia, there is an urgent need for new antimalarial compounds with novel mechanisms of action to be developed against multidrug resistant malaria. We present here a novel image analysis algorithm for the quantitative detection and classification of Plasmodium lifecycle stages in culture as well as discriminating between viable and dead parasites in drug-treated samples. This new algorithm reliably estimates the number of red blood cells (isolated or clustered) per fluorescence image field, and accurately identifies parasitized erythrocytes on the basis of high intensity DAPI-stained parasite nuclei spots and Mitotracker-stained mitochondria in viable parasites. We validated the performance of the algorithm by manual counting of the infected and non-infected red blood cells in multiple image fields, and by quantitative analyses of the different parasite stages (early rings, rings, trophozoites, schizonts) at various time-points post-merozoite invasion, in tightly synchronized cultures. Additionally, the developed algorithm provided parasitological effective concentration 50 (EC50) values for both chloroquine and artemisinin that were similar to known growth inhibitory EC50 values for these compounds as determined using conventional SYBR Green I and lactate dehydrogenase-based assays. PMID:23626733
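To make the counting step concrete, the toy sketch below labels connected bright regions in a synthetic nuclear-stain image with scipy.ndimage and rejects sub-resolution specks. The threshold, spot size, and image are placeholders; the published algorithm's segmentation and viability logic is considerably more elaborate.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
dapi = rng.normal(10.0, 2.0, (256, 256))        # synthetic background channel
for y, x in rng.integers(20, 236, (30, 2)):     # 30 synthetic nuclei
    dapi[y - 2:y + 3, x - 2:x + 3] += 50.0

mask = dapi > 30.0                              # illustrative intensity threshold
labels, n_regions = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, range(1, n_regions + 1))
n_nuclei = int(np.sum(sizes >= 9))              # keep spots of at least ~3x3 px
print(f"candidate parasite nuclei: {n_nuclei}")
```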
Global Precipitation Measurement (GPM) Ground Validation: Plans and Preparations
NASA Technical Reports Server (NTRS)
Schwaller, M.; Bidwell, S.; Durning, F. J.; Smith, E.
2004-01-01
The Global Precipitation Measurement (GPM) program is an international partnership led by the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). GPM will improve climate, weather, and hydro-meteorological forecasts through more frequent and more accurate measurement of precipitation across the globe. This paper describes the concept, the planning, and the preparations for Ground Validation within the GPM program. Ground Validation (GV) plays an important role in the program by investigating and quantitatively assessing the errors within the satellite retrievals. These quantitative estimates of retrieval errors will assist the scientific community by bounding the errors within their research products. The two fundamental requirements of the GPM Ground Validation program are: (1) error characterization of the precipitation retrievals and (2) continual improvement of the satellite retrieval algorithms. These two driving requirements determine the measurements, instrumentation, and location for ground observations. This paper outlines GV plans for estimating the systematic and random components of retrieval error and for characterizing the spatial and temporal structure of the error and plans for algorithm improvement in which error models are developed and experimentally explored to uncover the physical causes of errors within the retrievals. This paper discusses NASA locations for GV measurements as well as anticipated locations from international GPM partners. NASA's primary locations for validation measurements are an oceanic site at Kwajalein Atoll in the Republic of the Marshall Islands and a continental site in north-central Oklahoma at the U.S. Department of Energy's Atmospheric Radiation Measurement Program site.
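The bookkeeping behind "systematic and random components of retrieval error" can be illustrated with paired satellite and ground-reference values; the arrays below are synthetic stand-ins for matched retrievals and gauge truth.

```python
import numpy as np

rng = np.random.default_rng(1)
gauge = rng.gamma(2.0, 3.0, 500)                     # ground reference rain rate, mm/h
satellite = 1.1 * gauge + rng.normal(0.0, 1.0, 500)  # retrieval with bias + noise

error = satellite - gauge
print(f"systematic (mean bias): {error.mean():.2f} mm/h")
print(f"random (error std dev): {error.std(ddof=1):.2f} mm/h")
```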
Preparations for Global Precipitation Measurement(GPM)Ground Validation
NASA Technical Reports Server (NTRS)
Bidwell, S. W.; Bibyk, I. K.; Durning, J. F.; Everett, D. F.; Smith, E. A.; Wolff, D. B.
2004-01-01
The Global Precipitation Measurement (GPM) program is an international partnership led by the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). GPM will improve climate, weather, and hydro-meteorological forecasts through more frequent and more accurate measurement of precipitation across the globe. This paper describes the concept and the preparations for Ground Validation within the GPM program. Ground Validation (GV) plays a critical role in the program by investigating and quantitatively assessing the errors within the satellite retrievals. These quantitative estimates of retrieval errors will assist the scientific community by bounding the errors within their research products. The two fundamental requirements of the GPM Ground Validation program are: (1) error characterization of the precipitation retrievals and (2) continual improvement of the satellite retrieval algorithms. These two driving requirements determine the measurements, instrumentation, and location for ground observations. This paper describes GV plans for estimating the systematic and random components of retrieval error and for characterizing the spatial and temporal structure of the error. This paper describes the GPM program for algorithm improvement in which error models are developed and experimentally explored to uncover the physical causes of errors within the retrievals. GPM will ensure that information gained through Ground Validation is applied to future improvements in the spaceborne retrieval algorithms. This paper discusses the potential locations for validation measurement and research, the anticipated contributions of GPM's international partners, and the interaction of Ground Validation with other GPM program elements.
Kan, Hirohito; Arai, Nobuyuki; Takizawa, Masahiro; Omori, Kazuyoshi; Kasai, Harumasa; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta
2018-06-11
We developed a non-regularized, variable kernel, sophisticated harmonic artifact reduction for phase data (NR-VSHARP) method to accurately estimate local tissue fields without regularization for quantitative susceptibility mapping (QSM). We then used a digital brain phantom to evaluate the accuracy of the NR-VSHARP method, and compared it with the VSHARP and iterative spherical mean value (iSMV) methods through in vivo human brain experiments. Our proposed NR-VSHARP method, which uses variable spherical mean value (SMV) kernels, minimizes L2 norms only within the volume of interest to reduce phase errors and preserve cortical information without regularization. In a numerical phantom study, relative local field and susceptibility map errors were determined using NR-VSHARP, VSHARP, and iSMV. Additionally, various background field elimination methods were used to image the human brain. In the numerical phantom study, the use of NR-VSHARP considerably reduced the relative local field and susceptibility map errors throughout a digital whole brain phantom, compared with VSHARP and iSMV. In the in vivo experiment, the NR-VSHARP-estimated local field achieved minimal boundary losses and effective phase error suppression throughout the brain. Moreover, the susceptibility map generated using NR-VSHARP minimized the occurrence of streaking artifacts caused by insufficient background field removal. Our proposed NR-VSHARP method yields minimal boundary losses and highly precise phase data. Our results suggest that this technique may facilitate high-quality QSM. Copyright © 2017. Published by Elsevier Inc.
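The SMV step exploits the mean-value property of harmonic functions: background fields generated by sources outside the brain are harmonic inside it, so they are unchanged by averaging over a sphere, and subtracting the spherical mean suppresses them. A minimal single-kernel sketch on synthetic data follows; NR-VSHARP itself uses variable kernel radii and an L2 minimization restricted to the volume of interest, which this does not reproduce.

```python
import numpy as np
from scipy.signal import fftconvolve

def sphere_kernel(radius):
    """Normalized binary sphere for taking spherical mean values."""
    r = int(radius)
    zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    k = ((zz**2 + yy**2 + xx**2) <= radius**2).astype(float)
    return k / k.sum()

field = np.random.default_rng(2).normal(0.0, 0.01, (64, 64, 64))  # toy total field, rad
smv = fftconvolve(field, sphere_kernel(5), mode="same")  # spherical mean image
local_field = field - smv  # harmonic background suppressed
```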
Paige, Jeremy S.; Bernstein, Gregory S.; Heba, Elhamy; Costa, Eduardo A. C.; Fereirra, Marilia; Wolfson, Tanya; Gamst, Anthony C.; Valasek, Mark A.; Lin, Grace Y.; Han, Aiguo; Erdman, John W.; O’Brien, William D.; Andre, Michael P.; Loomba, Rohit; Sirlin, Claude B.
2017-01-01
OBJECTIVE The purpose of this study is to explore the diagnostic performance of two investigational quantitative ultrasound (QUS) parameters, attenuation coefficient and backscatter coefficient, in comparison with conventional ultrasound (CUS) and MRI-estimated proton density fat fraction (PDFF) for predicting histology-confirmed steatosis grade in adults with nonalcoholic fatty liver disease (NAFLD). SUBJECTS AND METHODS In this prospectively designed pilot study, 61 adults with histology-confirmed NAFLD were enrolled from September 2012 to February 2014. Subjects underwent QUS, CUS, and MRI examinations within 100 days of clinical-care liver biopsy. QUS parameters (attenuation coefficient and backscatter coefficient) were estimated using a reference phantom technique by two analysts independently. Three-point ordinal CUS scores intended to predict steatosis grade (1, 2, or 3) were generated independently by two radiologists on the basis of QUS features. PDFF was estimated using an advanced chemical shift–based MRI technique. Using histologic examination as the reference standard, ROC analysis was performed. Optimal attenuation coefficient, backscatter coefficient, and PDFF cutoff thresholds were identified, and the accuracy of attenuation coefficient, backscatter coefficient, PDFF, and CUS to predict steatosis grade was determined. Interobserver agreement for attenuation coefficient, backscatter coefficient, and CUS was analyzed. RESULTS CUS had 51.7% grading accuracy. The raw and cross-validated steatosis grading accuracies were 61.7% and 55.0%, respectively, for attenuation coefficient, 68.3% and 68.3% for backscatter coefficient, and 76.7% and 71.3% for MRI-estimated PDFF. Interobserver agreements were 53.3% for CUS (κ = 0.61), 90.0% for attenuation coefficient (κ = 0.87), and 71.7% for backscatter coefficient (κ = 0.82) (p < 0.0001 for all). CONCLUSION Preliminary observations suggest that QUS parameters may be more accurate and provide higher interobserver agreement than CUS for predicting hepatic steatosis grade in patients with NAFLD. PMID:28267360
Estimation of methanogen biomass via quantitation of coenzyme M
Elias, Dwayne A.; Krumholz, Lee R.; Tanner, Ralph S.; Suflita, Joseph M.
1999-01-01
Determination of the role of methanogenic bacteria in an anaerobic ecosystem often requires quantitation of the organisms. Because of the extreme oxygen sensitivity of these organisms and the inherent limitations of cultural techniques, an accurate biomass value is very difficult to obtain. We standardized a simple method for estimating methanogen biomass in a variety of environmental matrices. In this procedure we used the thiol biomarker coenzyme M (CoM) (2-mercaptoethanesulfonic acid), which is known to be present in all methanogenic bacteria. A high-performance liquid chromatography-based method for detecting thiols in pore water (A. Vairavamurthy and M. Mopper, Anal. Chim. Acta 78:363–370, 1990) was modified in order to quantify CoM in pure cultures, sediments, and sewage water samples. The identity of the CoM derivative was verified by using liquid chromatography-mass spectroscopy. The assay was linear for CoM amounts ranging from 2 to 2,000 pmol, and the detection limit was 2 pmol of CoM/ml of sample. CoM was not adsorbed to sediments. The methanogens tested contained an average of 19.5 nmol of CoM/mg of protein and 0.39 ± 0.07 fmol of CoM/cell. Environmental samples contained an average of 0.41 ± 0.17 fmol/cell based on most-probable-number estimates. CoM was extracted by using 1% tri-(N)-butylphosphine in isopropanol. More than 90% of the CoM was recovered from pure cultures and environmental samples. We observed no interference from sediments in the CoM recovery process, and the method could be completed aerobically within 3 h. Freezing sediment samples resulted in 46 to 83% decreases in the amounts of detectable CoM, whereas freezing had no effect on the amounts of CoM determined in pure cultures. The method described here provides a quick and relatively simple way to estimate methanogenic biomass.
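The final biomass arithmetic is straightforward: divide the measured CoM content by the per-cell value reported above. A toy conversion using the paper's mean of 0.39 fmol CoM per cell (the measured concentration is invented):

```python
com_pmol_per_ml = 120.0       # hypothetical CoM measured in a sediment extract
com_per_cell_fmol = 0.39      # mean CoM content per methanogen cell (from the study)
cells_per_ml = com_pmol_per_ml * 1e3 / com_per_cell_fmol   # pmol -> fmol
print(f"~{cells_per_ml:.2e} methanogen cells per mL")
```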
Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.
Obuchowski, Nancy A; Bullen, Jennifer
2017-01-01
Introduction Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
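A stripped-down version of such a coverage simulation: generate measurements with known fixed bias and precision, form the no-bias confidence interval for each "new patient", and count how often it covers the truth. The parameter values are illustrative, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(3)
true_value, bias, sd = 100.0, 5.0, 8.0   # measurand, fixed bias, repeatability SD
z, n_trials = 1.96, 20000

y = true_value + bias + rng.normal(0.0, sd, n_trials)   # new-patient measurements
covered = (y - z * sd <= true_value) & (true_value <= y + z * sd)  # no-bias CI
print(f"empirical coverage: {covered.mean():.3f} (nominal 0.95)")
```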
Investigating the Validity of Two Widely Used Quantitative Text Tools
ERIC Educational Resources Information Center
Cunningham, James W.; Hiebert, Elfrieda H.; Mesmer, Heidi Anne
2018-01-01
In recent years, readability formulas have gained new prominence as a basis for selecting texts for learning and assessment. Variables that quantitative tools count (e.g., word frequency, sentence length) provide valid measures of text complexity insofar as they accurately predict representative and high-quality criteria. The longstanding…
Enterococci are frequently monitored in water samples as indicators of fecal pollution. Attention is now shifting from culture-based methods for enumerating these organisms to more rapid molecular methods such as QPCR. Accurate quantitative analyses by this method require highly...
Joint reconstruction of activity and attenuation in Time-of-Flight PET: A Quantitative Analysis.
Rezaei, Ahmadreza; Deroose, Christophe M; Vahle, Thomas; Boada, Fernando; Nuyts, Johan
2018-03-01
Joint activity and attenuation reconstruction methods from time of flight (TOF) positron emission tomography (PET) data provide an effective solution to attenuation correction when no (or incomplete/inaccurate) information on the attenuation is available. One of the main barriers limiting their use in clinical practice is the lack of validation of these methods on a relatively large patient database. In this contribution, we aim at validating the activity reconstructions of the maximum likelihood activity reconstruction and attenuation registration (MLRR) algorithm on a whole-body patient data set. Furthermore, a partial validation (since the scale problem of the algorithm is avoided for now) of the maximum likelihood activity and attenuation reconstruction (MLAA) algorithm is also provided. We present a quantitative comparison of the joint reconstructions to the current clinical gold-standard maximum likelihood expectation maximization (MLEM) reconstruction with CT-based attenuation correction. Methods: The whole-body TOF-PET emission data of each patient data set is processed as a whole to reconstruct an activity volume covering all the acquired bed positions, which helps to reduce the problem of a scale per bed position in MLAA to a global scale for the entire activity volume. Three reconstruction algorithms are used: MLEM, MLRR and MLAA. A maximum likelihood (ML) scaling of the single scatter simulation (SSS) estimate to the emission data is used for scatter correction. The reconstruction results are then analyzed in different regions of interest. Results: The joint reconstructions of the whole-body patient data set provide better quantification in case of PET and CT misalignments caused by patient and organ motion. Our quantitative analysis shows a difference of -4.2% (±2.3%) and -7.5% (±4.6%) between the joint reconstructions of MLRR and MLAA compared to MLEM, respectively, averaged over all regions of interest. Conclusion: Joint activity and attenuation estimation methods provide a useful means to estimate the tracer distribution in cases where CT-based attenuation images are subject to misalignments or are not available. With an accurate estimate of the scatter contribution in the emission measurements, the joint TOF-PET reconstructions are within clinically acceptable accuracy. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
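For orientation, the multiplicative MLEM update underlying all three reconstructions is x ← x · Aᵀ(y / Ax) / Aᵀ1. The toy dense-matrix sketch below shows only that update; clinical TOF-PET reconstruction adds TOF kernels, corrections, and subset acceleration far beyond this.

```python
import numpy as np

rng = np.random.default_rng(4)
A = 2.0 * rng.random((200, 50))       # toy system matrix (sinogram bins x voxels)
x_true = 10.0 * rng.random(50)
y = rng.poisson(A @ x_true)           # Poisson projection data

x = np.ones_like(x_true)              # uniform initial activity
sens = A.T @ np.ones(A.shape[0])      # sensitivity image, A^T 1
for _ in range(200):                  # MLEM iterations
    x *= A.T @ (y / np.maximum(A @ x, 1e-12)) / sens
```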
Zhang, Kun; Niu, Shaofang; Di, Dianping; Shi, Lindan; Liu, Deshui; Cao, Xiuling; Miao, Hongqin; Wang, Xianbing; Han, Chenggui; Yu, Jialin; Li, Dawei; Zhang, Yongliang
2013-10-10
Both genome-wide transcriptomic surveys of mRNA expression profiles and virus-induced gene silencing-based molecular studies of target genes during virus-plant interactions involve precise estimation of transcript abundance. Quantitative real-time PCR (qPCR) is the most widely adopted technique for mRNA quantification. Reliable transcript quantification therefore depends on first identifying the best reference genes. Nevertheless, the stability of internal controls in virus-infected monocots has yet to be fully explored. In this work, the suitability of ten housekeeping genes (ACT, EF1α, FBOX, GAPDH, GTPB, PP2A, SAND, TUBβ, UBC18 and UK) for potential use as reference genes in qPCR was investigated in five different monocot plants (Brachypodium, barley, sorghum, wheat and maize) under infection with different viruses including Barley stripe mosaic virus (BSMV), Brome mosaic virus (BMV), Rice black-streaked dwarf virus (RBSDV) and Sugarcane mosaic virus (SCMV). By using three different algorithms, the most appropriate reference genes or their combinations were identified for different experimental sets, and their effectiveness for the normalisation of expression studies was further validated by quantitative analysis of the well-studied PR-1 gene. These results facilitate the selection of desirable reference genes for more accurate gene expression studies in virus-infected monocots. Copyright © 2013 Elsevier B.V. All rights reserved.
Vilmin, Franck; Dussap, Claude; Coste, Nathalie
2006-06-01
In the tire industry, synthetic styrene-butadiene rubber (SBR), butadiene rubber (BR), and isoprene rubber (IR) elastomers are essential for conferring on the product its grip and rolling-resistance properties. Their physical properties depend on their chemical composition, i.e., their microstructure and styrene content, which must be accurately controlled. This paper describes a fast, robust, and highly reproducible near-infrared analytical method for the quantitative determination of the microstructure and styrene content. The quantitative models are calculated with the help of pure spectral profiles estimated from a partial least squares (PLS) regression, using 13C nuclear magnetic resonance (NMR) as the reference method. This versatile approach allows the models to be applied over a large range of compositions, from a single BR to an SBR-IR blend. The resulting quantitative predictions are independent of the sample path length. As a consequence, the sample preparation is solvent free and simplified to a very fast (five-minute) hot-filming step applied to a bulk polymer piece. No precise thickness control is required. Thus, the operator effect becomes negligible and the method is easily transferable. The root mean square error of prediction, depending on the rubber composition, is between 0.7% and 1.3%. The reproducibility standard error is less than 0.2% in every case.
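As a generic illustration of the calibration idea, the sketch below fits a PLS model mapping synthetic spectra to styrene content with scikit-learn. The actual method additionally extracts pure spectral profiles and is calibrated against 13C NMR reference values, which this placeholder does not attempt.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
n_samples, n_wavelengths = 60, 300
styrene = rng.uniform(10.0, 40.0, n_samples)     # reference styrene content, %
profile = rng.normal(0.0, 1.0, n_wavelengths)    # fake pure-component spectrum
X = np.outer(styrene, profile) + rng.normal(0.0, 0.5, (n_samples, n_wavelengths))

pls = PLSRegression(n_components=3).fit(X, styrene)
print(np.round(pls.predict(X[:3]).ravel(), 1), np.round(styrene[:3], 1))
```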
Briolat, Emmanuelle Sophie; Zagrobelny, Mika; Olsen, Carl Erik; Blount, Jonathan D; Stevens, Martin
2018-05-16
The distinctive black and red wing pattern of six-spot burnet moths (Zygaena filipendulae, L.) is a classic example of aposematism, advertising their potent cyanide-based defences. While such warning signals provide a qualitatively honest signal of unprofitability, the evidence for quantitative honesty, whereby variation in visual traits could provide accurate estimates of individual toxicity, is more equivocal. Combining measures of cyanogenic glucoside content and wing colour from the perspective of avian predators, we investigate the relationship between coloration and defences in Z. filipendulae, to test signal honesty both within and across populations. There were no significant relationships between mean cyanogenic glucoside concentration and metrics of wing coloration across populations in males, yet in females higher cyanogenic glucoside levels were associated with smaller and lighter red forewing markings. Trends within populations were similarly inconsistent with quantitative honesty, and persistent differences between the sexes were apparent: larger females, carrying a greater total cyanogenic glucoside load, displayed larger but less conspicuous markings than smaller males, according to several colour metrics. The overall high aversiveness of cyanogenic glucosides and fluctuations in colour and toxin levels during an individual's lifetime may contribute to these results, highlighting generally important reasons why signal honesty should not always be expected in aposematic species. This article is protected by copyright. All rights reserved.
Taiwo, Oluwadamilola O; Finegan, Donal P; Eastwood, David S; Fife, Julie L; Brown, Leon D; Darr, Jawwad A; Lee, Peter D; Brett, Daniel J L; Shearing, Paul R
2016-09-01
Lithium-ion battery performance is intrinsically linked to electrode microstructure. Quantitative measurement of key structural parameters of lithium-ion battery electrode microstructures will enable optimization as well as motivate systematic numerical studies for the improvement of battery performance. With the rapid development of 3-D imaging techniques, quantitative assessment of 3-D microstructures from 2-D image sections by stereological methods appears outmoded; however, in spite of the proliferation of tomographic imaging techniques, it remains significantly easier to obtain two-dimensional (2-D) data sets. In this study, stereological prediction and three-dimensional (3-D) analysis techniques for quantitative assessment of key geometric parameters for characterizing battery electrode microstructures are examined and compared. Lithium-ion battery electrodes were imaged using synchrotron-based X-ray tomographic microscopy. For each electrode sample investigated, stereological analysis was performed on reconstructed 2-D image sections generated from tomographic imaging, whereas direct 3-D analysis was performed on reconstructed image volumes. The analysis showed that geometric parameter estimation using 2-D image sections is bound to be associated with ambiguity and that volume-based 3-D characterization of nonconvex, irregular and interconnected particles can be used to more accurately quantify spatially-dependent parameters, such as tortuosity and pore-phase connectivity. © 2016 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
Li, Guo-Fu; Yu, Guo; Li, Yanfei; Zheng, Yi; Zheng, Qing-Shan; Derendorf, Hartmut
2018-07-01
Quantitative prediction of unbound drug fraction (fu) is essential for scaling pharmacokinetics through physiologically based approaches. However, few attempts have been made to evaluate the projection of fu values under pathological conditions. The primary objective of this study was to predict fu values (n = 105) of 56 compounds with or without the information of predominant binding protein in patients with varying degrees of hepatic insufficiency by accounting for quantitative changes in molar concentrations of either the major binding protein or albumin plus α1-acid glycoprotein associated with differing levels of hepatic dysfunction. For the purpose of scaling, data pertaining to albumin and α1-acid glycoprotein levels in response to differing degrees of hepatic impairment were systematically collected from 919 adult donors. The results of the present study demonstrate for the first time the feasibility of physiologically based scaling of fu in hepatic dysfunction, verified against experimentally measured data for a wide variety of compounds from individuals with varying degrees of hepatic insufficiency. Furthermore, the high level of predictive accuracy indicates that the modeled inter-relation between the severity of hepatic impairment and these plasma protein levels is physiologically accurate. The present study enhances the confidence in predicting fu in hepatic insufficiency, particularly for albumin-bound drugs. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
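One common way to rescale fu when binding-protein concentrations change (a standard binding-site argument, not necessarily the exact equations of this study) assumes binding capacity is proportional to protein concentration:

```python
def scale_fu(fu_ref, protein_ratio):
    """Rescale unbound fraction for altered binding-protein concentration.
    protein_ratio = patient protein concentration / healthy reference.
    Assumes binding capacity scales linearly with protein concentration."""
    return 1.0 / (1.0 + protein_ratio * (1.0 - fu_ref) / fu_ref)

# Illustrative: albumin-bound drug with fu = 0.05, albumin halved in severe impairment
print(f"predicted fu: {scale_fu(0.05, 0.5):.3f}")   # ~0.095
```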
Wang, Shunhai; Bobst, Cedric E.; Kaltashov, Igor A.
2018-01-01
Transferrin (Tf) is an 80 kDa iron-binding protein which is viewed as a promising drug carrier to target the central nervous system due to its ability to penetrate the blood-brain barrier (BBB). Among the many challenges during the development of Tf-based therapeutics, sensitive and accurate quantitation of the administered Tf in cerebrospinal fluid (CSF) remains particularly difficult due to the presence of abundant endogenous Tf. Herein, we describe the development of a new LC-MS-based method for sensitive and accurate quantitation of exogenous recombinant human Tf in rat CSF. By taking advantage of a His-tag present in recombinant Tf and applying Ni affinity purification, the exogenous hTf can be greatly enriched from rat CSF, despite the presence of the abundant endogenous protein. Additionally, we applied a newly developed 18O-labeling technique that can generate internal standards at the protein level, which greatly improved the accuracy and robustness of quantitation. The developed method was investigated for linearity, accuracy, precision and lower limit of quantitation, all of which met the commonly accepted criteria for bioanalytical method validation. PMID:26307718
Wicke, Jason; Dumas, Genevieve A
2010-02-01
The geometric method combines a volume and a density function to estimate body segment parameters and has the best opportunity for developing the most accurate models. In the trunk, there are many different tissues that greatly differ in density (e.g., bone versus lung). Thus, the density function for the trunk must be particularly sensitive to capture this diversity, such that accurate inertial estimates are possible. Three different models were used to test this hypothesis by estimating trunk inertial parameters of 25 female and 24 male college-aged participants. The outcome of this study indicates that the inertial estimates for the upper and lower trunk are most sensitive to the volume function and not very sensitive to the density function. Although it appears that the uniform density function has a greater influence on inertial estimates in the lower trunk region than in the upper trunk region, this is likely due to the (overestimated) density value used. When geometric models are used to estimate body segment parameters, care must be taken in choosing a model that can accurately estimate segment volumes. Researchers wanting to develop accurate geometric models should focus on the volume function, especially in unique populations (e.g., pregnant or obese individuals).
NASA Astrophysics Data System (ADS)
Cifelli, R.; Chen, H.; Chandrasekar, V.
2017-12-01
A recent study by the State of California's Department of Water Resources has emphasized that the San Francisco Bay Area is at risk of catastrophic flooding. Therefore, accurate quantitative precipitation estimation (QPE) and forecast (QPF) are critical for protecting life and property in this region. Compared to rain gauges and meteorological satellites, ground-based radar has shown great advantages for high-resolution precipitation observations in both space and time. In addition, the polarization diversity shows great potential to characterize precipitation microphysics through identification of different hydrometeor types and their size and shape information. Currently, all the radars comprising the U.S. National Weather Service (NWS) Weather Surveillance Radar-1988 Doppler (WSR-88D) network are operating in dual-polarization mode. Enhancement of QPE is one of the main considerations of the dual-polarization upgrade. The San Francisco Bay Area is covered by two S-band WSR-88D radars, namely, KMUX and KDAX. However, in complex terrain like the Bay Area, it is still challenging to obtain an optimal rainfall algorithm for a given set of dual-polarization measurements. In addition, the accuracy of rain rate estimates is contingent on additional factors such as bright band contamination, vertical profile of reflectivity (VPR) correction, and partial beam blockages. This presentation aims to improve radar QPE for the Bay Area using advanced dual-polarization rainfall methodologies. The benefit brought by the dual-polarization upgrade of the operational radar network is assessed. In addition, a pilot study of gap-fill X-band radar performance is conducted in support of regional QPE system development. This paper also presents a detailed comparison of the dual-polarization radar-derived rainfall products with various operational products including the NSSL's Multi-Radar/Multi-Sensor (MRMS) system. Quantitative evaluation of the various rainfall products is achieved using rainfall measurements from a validation gauge network, which shows that the new dual-polarization methods can produce better QPE, and that the X-band radar has excellent potential to augment the WSR-88D network for rainfall monitoring in this region.
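Operational dual-polarization QPE typically blends power-law estimators such as R(Z) and R(Kdp), switching to the differential-phase relation where Kdp is reliable. The coefficients below are textbook-style placeholders, not the values used in the system described here.

```python
def rain_rate(z_dbz, kdp_deg_km):
    """Blended dual-pol rain-rate estimate in mm/h (illustrative coefficients)."""
    z_lin = 10.0 ** (z_dbz / 10.0)
    r_z = (z_lin / 300.0) ** (1.0 / 1.4)       # from Z = 300 R^1.4
    r_kdp = 44.0 * abs(kdp_deg_km) ** 0.822    # R(Kdp) power law
    return r_kdp if kdp_deg_km > 0.3 else r_z  # trust Kdp only when large enough

print(f"{rain_rate(45.0, 0.8):.1f} mm/h")
```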
Communication—Quantitative Voltammetric Analysis of High Concentration Actinides in Molten Salts
Hoyt, Nathaniel C.; Willit, James L.; Williamson, Mark A.
2017-01-18
Previous electroanalytical studies have shown that cyclic voltammetry can provide accurate quantitative measurements of actinide concentrations at low weight loadings in molten salts. However, above 2 wt%, the techniques were found to underpredict the concentrations of the reactant species. This work demonstrates that much of the discrepancy is caused by uncompensated resistance and cylindrical diffusion. An improved electroanalytical approach has therefore been developed using the results of digital simulations to take these effects into account. This approach allows for accurate electroanalytical predictions across the full range of weight loadings expected to be encountered in operational nuclear fuel processing equipment.
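For context, conventional quantitative voltammetry at low loadings rests on the linear relation between peak current and concentration given by the Randles-Sevcik equation; the snippet below simply inverts that classical relation and does not include the uncompensated-resistance and cylindrical-diffusion corrections developed here. All values are illustrative.

```python
import math

def concentration_from_peak(i_p, n, area, diff, scan_rate, temp_k):
    """Invert the Randles-Sevcik equation for a reversible couple.
    i_p [A], n electrons, area [cm^2], diff [cm^2/s], scan_rate [V/s];
    returns concentration in mol/cm^3."""
    F, R = 96485.0, 8.314
    return i_p / (0.4463 * n * F * area
                  * math.sqrt(n * F * scan_rate * diff / (R * temp_k)))

c = concentration_from_peak(0.020, n=3, area=0.5, diff=1.0e-5,
                            scan_rate=0.1, temp_k=773.0)
print(f"C ~ {c * 1e3:.3f} mol/L")   # mol/cm^3 -> mol/L
```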
Fu, Yong-Bi; Yang, Mo-Hua; Zeng, Fangqin; Biligetu, Bill
2017-01-01
Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but the trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed theoretical reasoning on the effectiveness of these markers in a trait-specific prediction, and verified the reasoning through computer simulation. In the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continued search for better alternatives is encouraged to enhance marker-based predictions for individual quantitative traits in molecular plant breeding. PMID:28729875
NASA Astrophysics Data System (ADS)
Clancy, Michael; Belli, Antonio; Davies, David; Lucas, Samuel J. E.; Su, Zhangjie; Dehghani, Hamid
2015-07-01
The subject of superficial contamination and signal origins remains a widely debated topic in the field of Near Infrared Spectroscopy (NIRS), yet the concept of using the technology to monitor an injured brain, in a clinical setting, poses additional challenges concerning the quantitative accuracy of recovered parameters. Using high-density diffuse optical tomography probes, quantitatively accurate parameters from different layers (skin, bone and brain) can be recovered from subject-specific reconstruction models. This study assesses the use of registered atlas models for situations where subject-specific models are not available. Data simulated from subject-specific models were reconstructed using the eight registered atlas models, implementing a regional (layered) parameter recovery in NIRFAST. A 3-region recovery based on the atlas model yielded recovered brain saturation values that were accurate to within 4.6% (percentage error) of the simulated values, validating the technique. The recovered saturations in the superficial regions were not quantitatively accurate. These findings highlight differences in superficial (skin and bone) layer thickness between the subject and atlas models. This layer-thickness mismatch was propagated through the reconstruction process, decreasing the parameter accuracy.
NASA Astrophysics Data System (ADS)
Yuan, Wu; Kut, Carmen; Liang, Wenxuan; Li, Xingde
2017-03-01
Cancer is known to alter the local optical properties of tissues. The detection of OCT-based optical attenuation provides a quantitative method to efficiently differentiate cancer from non-cancer tissues. In particular, the intraoperative use of quantitative OCT can provide direct visual guidance in real time for accurate identification of cancer tissues, especially those without obvious structural layers, such as brain cancer. However, current methods are suboptimal in providing high-speed and accurate OCT attenuation mapping for intraoperative brain cancer detection. In this paper, we report a novel frequency-domain (FD) algorithm to enable robust and fast characterization of optical attenuation as derived from OCT intensity images. The performance of this FD algorithm was compared with traditional fitting methods by analyzing datasets containing images from freshly resected human brain cancer and from a silica phantom acquired by a 1310 nm swept-source OCT (SS-OCT) system. With graphics processing unit (GPU)-based CUDA C/C++ implementation, this new attenuation mapping algorithm can offer robust and accurate quantitative interpretation of OCT images in real time during brain surgery.
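A widely used baseline for per-pixel attenuation is the depth-resolved estimator μ(z) ≈ I(z) / (2Δz Σ_{z'>z} I(z')); the paper's contribution is a frequency-domain alternative, which is not reproduced here. The sketch below applies the baseline to an idealized synthetic A-scan.

```python
import numpy as np

dz = 0.005                                  # axial pixel size, mm
z = np.arange(0.0, 2.0, dz)
mu_true = np.where(z < 1.0, 2.0, 6.0)       # mm^-1: normal tissue, then lesion-like
intensity = np.exp(-2.0 * np.cumsum(mu_true) * dz)   # idealized OCT A-scan

tail = np.cumsum(intensity[::-1])[::-1] - intensity  # sum of signal below each depth
mu_est = intensity / (2.0 * dz * np.maximum(tail, 1e-12))
print(f"{mu_est[50]:.2f} {mu_est[300]:.2f}")  # ~2 and ~6 mm^-1
```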
Estimation of laceration length by emergency department personnel.
Bourne, Christina L; Jenkins, M Adams; Brewer, Kori L
2014-11-01
Documentation and billing for laceration repair involves a description of wound length. We designed this study to test the hypothesis that emergency department (ED) personnel can accurately estimate wound lengths without the aid of a measuring device. This was a single-center prospective observational study performed in an academic ED. Seven wounds of varying lengths were simulated by creating lacerations on purchased pigs' ears and feet. We asked healthcare providers, defined as nurses and physicians working in the ED, to estimate the length of each wound by visual inspection. Length estimates were given in centimeters (cm) and inches. Estimated lengths were considered correct if the estimate was within 0.5 cm or 0.2 inches of the actual length. We calculated the differences between estimated and actual laceration lengths for each laceration and compared the accuracy of physicians to nurses using an unpaired t-test. Thirty-two physicians (nine faculty and 23 residents) and 16 nurses participated. All subjects tended to overestimate in cm and inches. Physicians were able to estimate laceration length within 0.5 cm 36% of the time and within 0.2 inches 29% of the time. Physicians were more accurate at estimating wound lengths than nurses in both cm and inches. Both physicians and nurses were more accurate at estimating shorter lengths (<5.0 cm) than longer (>5.0 cm). ED personnel are often unable to accurately estimate wound length in either cm or inches and tend to overestimate laceration lengths when based solely on visual inspection.
Estimating bark thicknesses of common Appalachian hardwoods
R. Edward Thomas; Neal D. Bennett
2014-01-01
Knowing the thickness of bark along the stem of a tree is critical to accurately estimate residue and, more importantly, estimate the volume of solid wood available. Determining the volume or weight of bark for a log is important because bark and wood mass are typically separated while processing logs, and accurate determination of volume is problematic. Bark thickness...
Mapping quantitative trait loci for binary trait in the F2:3 design.
Zhu, Chengsong; Zhang, Yuan-Ming; Guo, Zhigang
2008-12-01
In the analysis of inheritance of quantitative traits with low heritability, an F2:3 design that genotypes plants in F2 and phenotypes plants in F2:3 progeny is often used in plant genetics. Although statistical approaches for mapping quantitative trait loci (QTL) in the F2:3 design have been well developed, those for binary traits of biological interest and economic importance are seldom addressed. In this study, an attempt was made to map binary trait loci (BTL) in the F2:3 design. The fundamental idea was: the F2 plants were genotyped, all phenotypic values of each F2:3 progeny were measured for the binary trait, and these binary trait values and the marker genotype information were used to detect BTL under the penetrance and liability models. The proposed method was verified by a series of Monte-Carlo simulation experiments. These results showed that maximum likelihood approaches under the penetrance and liability models provide accurate estimates for the effects and the locations of BTL with high statistical power, even under low heritability. Moreover, the penetrance model is as efficient as the liability model, and the F2:3 design is more efficient than the classical F2 design, even though only a single progeny is collected from each F2:3 family. With the maximum likelihood approaches under the penetrance and the liability models developed in this study, we can map binary traits as we do for quantitative traits in the F2:3 design.
Wignall, Jessica A; Muratov, Eugene; Sedykh, Alexander; Guyton, Kathryn Z; Tropsha, Alexander; Rusyn, Ivan; Chiu, Weihsueh A
2018-05-01
Human health assessments synthesize human, animal, and mechanistic data to produce toxicity values that are key inputs to risk-based decision making. Traditional assessments are data-, time-, and resource-intensive, and they cannot be developed for most environmental chemicals owing to a lack of appropriate data. As recommended by the National Research Council, we propose a solution for predicting toxicity values for data-poor chemicals through development of quantitative structure-activity relationship (QSAR) models. We used a comprehensive database of chemicals with existing regulatory toxicity values from U.S. federal and state agencies to develop quantitative QSAR models. We compared QSAR-based model predictions to those based on high-throughput screening (HTS) assays. QSAR models for noncancer threshold-based values and cancer slope factors had cross-validation-based Q2 of 0.25-0.45, mean model errors of 0.70-1.11 log10 units, and applicability domains covering >80% of environmental chemicals. Toxicity values predicted from QSAR models developed in this study were more accurate and precise than those based on HTS assays or mean-based predictions. A publicly accessible web interface to make predictions for any chemical of interest is available at http://toxvalue.org. An in silico tool that can predict toxicity values with an uncertainty of an order of magnitude or less can be used to quickly and quantitatively assess risks of environmental chemicals when traditional toxicity data or human health assessments are unavailable. This tool can fill a critical gap in the risk assessment and management of data-poor chemicals. https://doi.org/10.1289/EHP2998.
Plainchont, Bertrand; Pitoux, Daisy; Cyrille, Mathieu; Giraud, Nicolas
2018-02-06
We propose an original concept to accurately measure enantiomeric excesses from proton NMR spectra, which combines high-resolution techniques based on a spatial encoding of the sample with the use of optically active weakly orienting solvents. We show that it is possible to accurately simulate dipolar edited spectra of enantiomers dissolved in a chiral liquid crystalline phase, and to use these simulations to calibrate integrations that can be measured on experimental data, in order to perform a quantitative chiral analysis. This approach is demonstrated on a chemical intermediate for which optical purity is an essential criterion. We find that there is a very good correlation between the experimental and calculated integration ratios extracted from G-SERF spectra, which paves the way to a general method for determining enantiomeric excesses based on the observation of 1H nuclei.
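Once the enantiomer signals are resolved and the integrations calibrated, the quantification itself is elementary; with hypothetical integrals i_r and i_s assigned to the two enantiomers:

```python
def enantiomeric_excess(i_r, i_s):
    """ee in % from calibrated peak integrals of the two enantiomers."""
    return 100.0 * (i_r - i_s) / (i_r + i_s)

print(f"ee = {enantiomeric_excess(97.1, 2.9):.1f}%")   # -> 94.2%
```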
NASA Astrophysics Data System (ADS)
Shuxia, ZHAO; Lei, ZHANG; Jiajia, HOU; Yang, ZHAO; Wangbao, YIN; Weiguang, MA; Lei, DONG; Liantuan, XIAO; Suotang, JIA
2018-03-01
The chemical composition of alloys directly determines their mechanical behaviors and application fields. Accurate and rapid analysis of both major and minor elements in alloys plays a key role in metallurgical quality control and material classification processes. A quantitative calibration-free laser-induced breakdown spectroscopy (CF-LIBS) analysis method, which carries out combined correction of plasma temperature and spectral intensity by using a second-order iterative algorithm and two boundary standard samples, is proposed to realize accurate composition measurements. Experimental results show that, compared to conventional CF-LIBS analysis, the relative errors for the major elements Cu and Zn and the minor element Pb in copper-lead alloys have been reduced from 12%, 26% and 32% to 1.8%, 2.7% and 13.4%, respectively. The measurement accuracy for all elements has been improved substantially.
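At the core of CF-LIBS is the Boltzmann plot: under local thermodynamic equilibrium, ln(Iλ/(g_k A_ki)) is linear in the upper-level energy E_k with slope -1/(k_B T). The minimal temperature extraction below uses illustrative line data and omits the second-order intensity correction that is the subject of the paper.

```python
import numpy as np

k_B = 8.617e-5                     # Boltzmann constant, eV/K
# columns: intensity (a.u.), wavelength (nm), g_k, A_ki (1/s), E_k (eV) -- illustrative
lines = np.array([
    [1200.0, 510.6, 4, 2.0e6, 3.82],
    [ 800.0, 515.3, 6, 6.0e7, 6.19],
    [ 300.0, 521.8, 8, 7.5e7, 6.19],
])
I, lam, g, A, E = lines.T
y = np.log(I * lam / (g * A))
slope, _ = np.polyfit(E, y, 1)     # Boltzmann plot: slope = -1/(k_B T)
print(f"T ~ {-1.0 / (k_B * slope):.0f} K")
```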
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blais, AR; Dekaban, M; Lee, T-Y
2014-08-15
Quantitative analysis of dynamic positron emission tomography (PET) data usually involves minimizing a cost function with nonlinear regression, wherein the choice of starting parameter values and the presence of local minima affect the bias and variability of the estimated kinetic parameters. These nonlinear methods can also require lengthy computation time, making them unsuitable for use in clinical settings. Kinetic modeling of PET aims to estimate the rate parameter k3, which is the binding affinity of the tracer to a biological process of interest and is highly susceptible to noise inherent in PET image acquisition. We have developed linearized kinetic models for kinetic analysis of dynamic contrast enhanced computed tomography (DCE-CT)/PET imaging, including a 2-compartment model for DCE-CT and a 3-compartment model for PET. Use of kinetic parameters estimated from DCE-CT can stabilize the kinetic analysis of dynamic PET data, allowing for more robust estimation of k3. Furthermore, these linearized models are solved with a non-negative least squares algorithm and together they provide other advantages including: 1) only one possible solution and they do not require a choice of starting parameter values, 2) parameter estimates are comparable in accuracy to those from nonlinear models, 3) significantly reduced computational time. Our simulated data show that when blood volume and permeability are estimated with DCE-CT, the bias of k3 estimation with our linearized model is 1.97 ± 38.5% for 1,000 runs with a signal-to-noise ratio of 10. In summary, we have developed a computationally efficient technique for accurate estimation of k3 from noisy dynamic PET data.
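The practical appeal of the linearized formulation is that it reduces to non-negative least squares, which has a unique solution and needs no starting values. A generic sketch with scipy.optimize.nnls follows; the exponential basis functions are placeholders, not the actual DCE-CT/PET model.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)
t = np.linspace(0.0, 60.0, 120)                       # minutes
basis = np.column_stack([np.exp(-t / tau) for tau in (2.0, 10.0, 40.0)])
theta_true = np.array([0.0, 3.0, 1.5])                # non-negative weights
tac = basis @ theta_true + rng.normal(0.0, 0.05, t.size)  # noisy tissue curve

theta, _ = nnls(basis, tac)   # unique solution, no initial guess required
print(np.round(theta, 2))
```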
GFR Estimation: From Physiology to Public Health
Levey, Andrew S.; Inker, Lesley A.; Coresh, Josef
2014-01-01
Estimating glomerular filtration rate (GFR) is essential for clinical practice, research, and public health. Appropriate interpretation of estimated GFR (eGFR) requires understanding the principles of physiology, laboratory medicine, epidemiology and biostatistics used in the development and validation of GFR estimating equations. Equations developed in diverse populations are less biased at higher GFR than equations developed in CKD populations and are more appropriate for general use. Equations that include multiple endogenous filtration markers are more precise than equations including a single filtration marker. The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations are the most accurate GFR estimating equations that have been evaluated in large, diverse populations and are applicable for general clinical use. The 2009 CKD-EPI creatinine equation is more accurate in estimating GFR and prognosis than the 2006 Modification of Diet in Renal Disease (MDRD) Study equation and provides lower estimates of prevalence of decreased eGFR. It is useful as a “first” test for decreased eGFR and should replace the MDRD Study equation for routine reporting of serum creatinine–based eGFR by clinical laboratories. The 2012 CKD-EPI cystatin C equation is as accurate as the 2009 CKD-EPI creatinine equation in estimating eGFR, does not require specification of race, and may be more accurate in patients with decreased muscle mass. The 2012 CKD-EPI creatinine–cystatin C equation is more accurate than the 2009 CKD-EPI creatinine and 2012 CKD-EPI cystatin C equations and is useful as a confirmatory test for decreased eGFR as determined by an equation based on serum creatinine. Further improvement in GFR estimating equations will require development in more broadly representative populations, including diverse racial and ethnic groups, use of multiple filtration markers, and evaluation using statistical techniques to compare eGFR to “true GFR”. PMID:24485147
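For reference, the 2009 CKD-EPI creatinine equation has a simple closed form; the implementation below uses the published 2009 coefficients, including the race term that was part of the original equation.

```python
def ckd_epi_2009(scr_mg_dl, age_years, female, black=False):
    """2009 CKD-EPI creatinine eGFR in mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

print(f"{ckd_epi_2009(1.1, 60, female=True):.0f} mL/min/1.73 m^2")
```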
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moirano, J
Purpose: An accurate dose estimate is necessary for effective patient management after a fetal exposure. In the case of a high-dose exposure, it is critical to use all resources available in order to make the most accurate assessment of the fetal dose. This work will demonstrate a methodology for accurate fetal dose estimation using tools that have recently become available in many clinics, and show examples of best practices for collecting data and performing the fetal dose calculation. Methods: A fetal dose estimate calculation was performed using modern data collection tools to determine parameters for the calculation. The reference point air kerma as displayed by the fluoroscopic system was checked for accuracy. A cumulative dose incidence map and DICOM header mining were used to determine the displayed reference point air kerma. Corrections for attenuation caused by the patient table and pad were measured and applied in order to determine the peak skin dose. The position and depth of the fetus was determined by ultrasound imaging and consultation with a radiologist. The data collected was used to determine a normalized uterus dose from Monte Carlo simulation data. Fetal dose values from this process were compared to other accepted calculation methods. Results: An accurate high-dose fetal dose estimate was made. Comparisons to accepted legacy methods were within 35% of estimated values. Conclusion: Modern data collection and reporting methods ease the process of estimating fetal dose from interventional fluoroscopy exposures. Many aspects of the calculation can now be quantified rather than estimated, which should allow for a more accurate estimation of fetal dose.
Tipirneni-Sajja, Aaryani; Krafft, Axel J; McCarville, M Beth; Loeffler, Ralf B; Song, Ruitian; Hankins, Jane S; Hillenbrand, Claudia M
2017-07-01
The objective of this study is to evaluate radial free-breathing (FB) multiecho ultrashort TE (UTE) imaging as an alternative to Cartesian FB multiecho gradient-recalled echo (GRE) imaging for quantitative assessment of hepatic iron content (HIC) in sedated patients and subjects unable to perform breath-hold (BH) maneuvers. FB multiecho GRE imaging and FB multiecho UTE imaging were conducted for 46 test group patients with iron overload who could not complete BH maneuvers (38 patients were sedated, and eight were not sedated) and 16 control patients who could complete BH maneuvers. Control patients also underwent standard BH multiecho GRE imaging. Quantitative R2* maps were calculated, and mean liver R2* values and coefficients of variation (CVs) for different acquisitions and patient groups were compared using statistical analysis. FB multiecho GRE images displayed motion artifacts and significantly lower R2* values, compared with standard BH multiecho GRE images and FB multiecho UTE images in the control cohort and FB multiecho UTE images in the test cohort. In contrast, FB multiecho UTE images produced artifact-free R2* maps, and mean R2* values were not significantly different from those measured by BH multiecho GRE imaging. Motion artifacts on FB multiecho GRE images resulted in an R2* CV that was approximately twofold higher than the R2* CV from BH multiecho GRE imaging and FB multiecho UTE imaging. The R2* CV was relatively constant over the range of R2* values for FB multiecho UTE, but it increased with increases in R2* for FB multiecho GRE imaging, reflecting that motion artifacts had a stronger impact on R2* estimation with increasing iron burden. FB multiecho UTE imaging was less motion sensitive because of radial sampling, produced excellent image quality, and yielded accurate R2* estimates within the same acquisition time used for multiaveraged FB multiecho GRE imaging. Thus, FB multiecho UTE imaging is a viable alternative for accurate HIC assessment in sedated children and patients who cannot complete BH maneuvers.
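In both acquisitions the R2* maps come from fitting a monoexponential decay S(TE) = S0·exp(-R2*·TE) to the multiecho magnitude signal; a per-voxel sketch with synthetic echoes:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(te, s0, r2star):
    return s0 * np.exp(-r2star * te)

te = np.array([0.1, 0.5, 1.0, 1.5, 2.0, 3.0]) * 1e-3   # echo times, s
signal = decay(te, 100.0, 800.0)                       # R2* = 800 1/s (heavy iron)
signal += np.random.default_rng(7).normal(0.0, 1.0, te.size)

params, _ = curve_fit(decay, te, signal, p0=(signal[0], 200.0))
print(f"R2* = {params[1]:.0f} 1/s")
```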
Moroz, Brian E; Beck, Harold L; Bouville, André; Simon, Steven L
2010-08-01
The NOAA Hybrid Single-Particle Lagrangian Integrated Trajectory Model (HYSPLIT) was evaluated as a research tool to simulate the dispersion and deposition of radioactive fallout from nuclear tests. Model-based estimates of fallout can be valuable for use in the reconstruction of past exposures from nuclear testing, particularly where little historical fallout monitoring data are available. The ability to make reliable predictions about fallout deposition could also have significant importance for nuclear events in the future. We evaluated the accuracy of the HYSPLIT-predicted geographic patterns of deposition by comparing those predictions against known deposition patterns following specific nuclear tests with an emphasis on nuclear weapons tests conducted in the Marshall Islands. We evaluated the ability of the computer code to quantitatively predict the proportion of fallout particles of specific sizes deposited at specific locations as well as their time of transport. In our simulations of fallout from past nuclear tests, historical meteorological data were used from a reanalysis conducted jointly by the National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR). We used a systematic approach in testing the HYSPLIT model by simulating the release of a range of particle sizes from a range of altitudes and evaluating the number and location of particles deposited. Our findings suggest that the quantity and quality of meteorological data are the most important factors for accurate fallout predictions and that, when satisfactory meteorological input data are used, HYSPLIT can produce relatively accurate deposition patterns and fallout arrival times. Furthermore, when no other measurement data are available, HYSPLIT can be used to indicate whether or not fallout might have occurred at a given location and provide, at minimum, crude quantitative estimates of the magnitude of the deposited activity. A variety of simulations of the deposition of fallout from atmospheric nuclear tests conducted in the Marshall Islands (mid-Pacific), at the Nevada Test Site (U.S.), and at the Semipalatinsk Nuclear Test Site (Kazakhstan) were performed. The results of the Marshall Islands simulations were used in a limited fashion to support the dose reconstruction described in companion papers within this volume.
Quantitation of small intestinal permeability during normal human drug absorption
2013-01-01
Background Understanding the quantitative relationship between a drug’s physical chemical properties and its rate of intestinal absorption (QSAR) is critical for selecting candidate drugs. Because of limited experimental human small intestinal permeability data, approximate surrogates such as the fraction absorbed or Caco-2 permeability are used, both of which have limitations. Methods Given the blood concentration following an oral and an intravenous dose, the time course of intestinal absorption in humans was determined by deconvolution and related to the intestinal permeability by use of a new three-parameter model function (the “Averaged Model”, AM). The theoretical validity of the AM model was evaluated by comparing it to the standard diffusion-convection model (DC). This analysis was applied to 90 drugs using previously published data. Only drugs that were administered in oral solution form to fasting subjects were considered, so that the rate of gastric emptying was approximately known. All calculations were carried out using the freely available routine PKQuest Java (http://www.pkquest.com), which has a simple, easy-to-use interface. Results Theoretically, the AM permeability provides an accurate estimate of the intestinal DC permeability for solutes whose absorption ranges from 1% to 99%. The experimental human AM permeabilities determined by deconvolution are similar to those determined by direct human jejunal perfusion. The small-intestinal pH varies with position, and the results are interpreted in terms of the pH-dependent octanol partition. The permeability versus partition relations are presented separately for the uncharged, basic, acidic and charged solutes. The small uncharged solutes caffeine, acetaminophen and antipyrine have very high permeabilities (about 20 × 10⁻⁴ cm/sec), corresponding to an unstirred layer of only 45 μm. The weak acid aspirin also has a large AM permeability despite its low octanol partition at pH 7.4, suggesting that it is nearly completely absorbed in the first part of the intestine, where the pH is about 5.4. Conclusions The AM deconvolution method provides an accurate estimate of the human intestinal permeability. The results for these 90 drugs should provide a useful benchmark for evaluating QSAR models. PMID:23800230
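The deconvolution at the heart of this approach can be viewed as a discrete linear inverse problem: the oral concentration curve is the convolution of the unknown absorption-rate input with the unit-dose intravenous response. A minimal Tikhonov-regularized sketch on synthetic curves (this is not PKQuest, and the rate constants are invented):

```python
import numpy as np

# C_oral(t) = sum_k a(t_k) * C_iv_unit(t - t_k) * dt : the oral curve is the
# convolution of the absorption-rate input a with the unit-dose IV response.
dt = 0.1                                  # h
t = np.arange(0.0, 8.0, dt)
c_iv = np.exp(-0.7 * t)                   # invented unit-dose IV response
a_true = 0.5 * np.exp(-1.5 * t)           # invented absorption-rate input
c_oral = np.convolve(a_true, c_iv)[: t.size] * dt

# Lower-triangular convolution matrix; Tikhonov regularization stabilizes
# the otherwise ill-posed deconvolution.
A = np.array([[c_iv[i - j] if i >= j else 0.0 for j in range(t.size)]
              for i in range(t.size)]) * dt
lam = 1e-3
a_est = np.linalg.solve(A.T @ A + lam * np.eye(t.size), A.T @ c_oral)
```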
Loescher, Christine M; Morton, David W; Razic, Slavica; Agatonovic-Kustrin, Snezana
2014-09-01
Chromatography techniques such as HPTLC and HPLC are commonly used to produce a chemical fingerprint of a plant, allowing identification and quantification of the main constituents within the plant. The aims of this study were to compare HPTLC and HPLC for qualitative and quantitative analysis of the major constituents of Calendula officinalis, and to investigate the effect of different extraction techniques on the composition of C. officinalis extracts from different parts of the plant. HPTLC was found to be effective for qualitative analysis; however, HPLC was more accurate for quantitative analysis. A combination of the two methods may be useful in a quality control setting, as it would allow rapid qualitative analysis of herbal material while maintaining accurate quantification of extract composition. Copyright © 2014 Elsevier B.V. All rights reserved.
[Doppler echocardiography of tricuspid insufficiency. Methods of quantification].
Loubeyre, C; Tribouilloy, C; Adam, M C; Mirode, A; Trojette, F; Lesbre, J P
1994-01-01
Evaluation of tricuspid incompetence has benefitted considerably from the development of Doppler ultrasound. In addition to direct analysis of the valves, which provides information about the mechanism involved, this method can provide an accurate evaluation, mainly through use of the Doppler mode. In addition to newer criteria still under evaluation (mainly the convergence zone of the regurgitant jet), several indices are recognised as good quantitative parameters: extension of the regurgitant jet into the right atrium, anterograde tricuspid flow, the laminar nature of the regurgitant flow, and analysis of flow in the supra-hepatic veins. The evaluation remains only semi-quantitative, since calculation of the regurgitant fraction from pulsed Doppler does not appear reliable; an accurate semi-quantitative evaluation is nevertheless possible through careful and consistent use of all the available criteria. The authors discuss the value of the various evaluation criteria reported in the literature and attempt to define a practical approach.
Finding the bottom and using it
Sandoval, Ruben M.; Wang, Exing; Molitoris, Bruce A.
2014-01-01
Maximizing the 2-photon parameters used in acquiring images for quantitative intravital microscopy, especially when high sensitivity is required, remains an open area of investigation. Here we present data on correctly setting the black level of the photomultiplier tube amplifier by adjusting the offset to allow for accurate quantitation of low-intensity processes. When the black level is set too high, some low-intensity pixel values become zero, and a nonlinear degradation in sensitivity occurs, rendering otherwise quantifiable low-intensity values virtually undetectable. Initial studies using a series of increasing offsets for a sequence of concentrations of fluorescent albumin in vitro revealed a loss of sensitivity for higher offsets at lower albumin concentrations. A similar decrease in sensitivity, and therefore in the ability to correctly determine the glomerular permeability coefficient of albumin, occurred in vivo at higher offsets. Finding the offset that yields accurate and linear data is essential for quantitative analysis when high sensitivity is required. PMID:25313346
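The nonlinearity described here is straightforward to reproduce in a toy simulation: subtracting an offset and clipping at zero biases the mean of low-intensity pixels toward zero while leaving bright pixels nearly linear. A sketch with invented noise parameters (not the authors' acquisition settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def measured_mean(intensity, offset, n=100_000):
    """Mean detected value: Gaussian noise, offset subtraction, then
    clipping at zero, as when the PMT black level is set too high."""
    pixels = rng.normal(intensity, 5.0, size=n) - offset
    return np.clip(pixels, 0.0, None).mean()

for offset in (0.0, 10.0):
    response = [measured_mean(i, offset) for i in np.linspace(2, 50, 6)]
    print(offset, np.round(response, 1))   # offset 10 flattens the low end
```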
Quantitative fluorescence tomography using a trimodality system: in vivo validation
Lin, Yuting; Barber, William C.; Iwanczyk, Jan S.; Roeck, Werner W.; Nalcioglu, Orhan; Gulsen, Gultekin
2010-01-01
A fully integrated trimodality fluorescence, diffuse optical, and x-ray computed tomography (FT/DOT/XCT) system for small animal imaging is reported in this work. The main purpose of this system is to obtain quantitatively accurate fluorescence concentration images using a multimodality approach. XCT offers anatomical information, while DOT provides the necessary background optical property map to improve FT image accuracy. The quantitative accuracy of this trimodality system is demonstrated in vivo. In particular, we show that a 2-mm-diam fluorescence inclusion located 8 mm deep in a nude mouse can only be localized when functional a priori information from DOT is available. However, the error in the recovered fluorophore concentration is then nearly 87%. On the other hand, the fluorophore concentration can be accurately recovered within 2% error when both DOT functional and XCT structural a priori information are utilized together to guide and constrain the FT reconstruction algorithm. PMID:20799770
An automated method of tuning an attitude estimator
NASA Technical Reports Server (NTRS)
Mason, Paul A. C.; Mook, D. Joseph
1995-01-01
Attitude determination is a major element of the operation and maintenance of a spacecraft. There are several existing methods of determining the attitude of a spacecraft. One of the most commonly used methods utilizes the Kalman filter to estimate the attitude of the spacecraft. Given an accurate model of a system and adequate observations, a Kalman filter can produce accurate estimates of the attitude. If the system model, filter parameters, or observations are inaccurate, the attitude estimates may be degraded. Therefore, it is advantageous to develop a method of automatically tuning the Kalman filter to produce accurate estimates. In this paper, a three-axis attitude determination Kalman filter, which uses only magnetometer measurements, is developed and tested using real data. The appropriate filter parameters are found via the Process Noise Covariance Estimator (PNCE). The PNCE provides an optimal criterion for determining the best filter parameters.
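The tuning problem can be illustrated with a scalar random-walk Kalman filter, in which the process-noise variance q plays the role of the parameter a PNCE-style tuner would select; too small a q makes the filter sluggish, too large a q lets measurement noise through. A toy sketch, not the three-axis magnetometer filter (all values invented):

```python
import numpy as np

def kalman_1d(z, q, r, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter; q is the process-noise variance
    that an automated tuner (PNCE-like) would try to select."""
    x, p, xs = x0, p0, []
    for zk in z:
        p = p + q                    # predict
        k = p / (p + r)              # Kalman gain
        x = x + k * (zk - x)         # measurement update
        p = (1.0 - k) * p
        xs.append(x)
    return np.array(xs)

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0.0, 0.05, 200))   # slowly drifting angle
z = truth + rng.normal(0.0, 0.5, 200)           # noisy magnetometer-like data
for q in (1e-6, 1e-3, 1e-1):                    # under-, roughly-, over-tuned
    rmse = np.sqrt(np.mean((kalman_1d(z, q, r=0.25) - truth) ** 2))
    print(f"q={q:g}  rmse={rmse:.3f}")
```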
Tao, Qian; Milles, Julien; Zeppenfeld, Katja; Lamb, Hildo J; Bax, Jeroen J; Reiber, Johan H C; van der Geest, Rob J
2010-08-01
Accurate assessment of the size and distribution of a myocardial infarction (MI) from late gadolinium enhancement (LGE) MRI is of significant prognostic value for postinfarction patients. In this paper, an automatic MI identification method combining both intensity and spatial information is presented in a clear framework of (i) initialization, (ii) false acceptance removal, and (iii) false rejection removal. The method was validated on LGE MR images of 20 chronic postinfarction patients, using manually traced MI contours from two independent observers as reference. Good agreement was observed between automatic and manual MI identification. Validation results showed that the average Dice indices, which describe the percentage of overlap between two regions, were 0.83 ± 0.07 and 0.79 ± 0.08 between the automatic identification and the manual tracing from observer 1 and observer 2, and the errors in estimated infarct percentage were 0.0 ± 1.9% and 3.8 ± 4.7% compared with observer 1 and observer 2. The difference between the automatic method and manual tracing is on the order of the interobserver variation. In conclusion, the developed automatic method is accurate and robust in MI delineation, providing an objective tool for quantitative assessment of MI in LGE MR imaging.
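The Dice index used for validation is simply twice the overlap of the two regions divided by the sum of their sizes. A minimal implementation for binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice index: 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

print(dice([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1 / (2+1) ≈ 0.667
```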
A quantitative study of the clustering of polycyclic aromatic hydrocarbons at high temperatures.
Totton, Tim S; Misquitta, Alston J; Kraft, Markus
2012-03-28
The clustering of polycyclic aromatic hydrocarbon (PAH) molecules is investigated in the context of soot particle inception and growth using an isotropic potential developed from the benchmark PAHAP potential. This potential is used to estimate equilibrium constants of dimerisation for five representative PAH molecules based on a statistical mechanics model. Molecular dynamics simulations are also performed to study the clustering of homomolecular systems at a range of temperatures. The results from both sets of calculations demonstrate that at flame temperatures pyrene (C₁₆H₁₀) dimerisation cannot be a key step in soot particle formation and that much larger molecules (e.g. circumcoronene, C₅₄H₁₈) are required to form small clusters at flame temperatures. The importance of using accurate descriptions of the intermolecular interactions is demonstrated by comparing results to those calculated with a popular literature potential, which shows an order-of-magnitude variation in the level of clustering observed. By using an accurate intermolecular potential we are able to show that physical binding of PAH molecules based on van der Waals interactions alone can only be a viable soot inception mechanism if concentrations of large PAH molecules are significantly higher than currently thought.
NASA Astrophysics Data System (ADS)
Brahmi, Djamel; Cassoux, Nathalie; Serruys, Camille; Giron, Alain; Lehoang, Phuc; Fertil, Bernard
1999-05-01
To support ophthalmologists in their daily routine and to enable quantitative assessment of the progression of Cytomegalovirus (CMV) infection as observed on serial retinal angiograms, a methodology allowing accurate comparison of retinal borders has been developed. To evaluate border accuracy, ophthalmologists were asked to repeatedly outline the boundaries between infected and noninfected areas. Drawing accuracy depends on local features such as contrast, image quality, and background, all of which make the boundaries more or less perceptible from one part of an image to another. To estimate the accuracy of a retinal border directly from image analysis, an artificial neural network (a succession of unsupervised and supervised neural networks) was designed to correlate drawing accuracy (as calculated from the ophthalmologists' hand-outlines) with local features of the underlying image. The method has been applied to the quantification of CMV retinitis. It is shown that border accuracy is properly predicted and characterized by a confidence envelope that allows, after a registration phase based on fixed landmarks such as vessel forks, accurate assessment of the evolution of CMV infection.
Torrecilha, Rafaela Beatriz Pintor; Utsunomiya, Yuri Tani; Batista, Luís Fábio da Silva; Bosco, Anelise Maria; Nunes, Cáris Maroni; Ciarlini, Paulo César; Laurenti, Márcia Dalastra
2017-01-30
Quantification of Leishmania infantum load via real-time quantitative polymerase chain reaction (qPCR) in lymph node aspirates is an accurate tool for diagnostics, surveillance and therapeutics follow-up in dogs with leishmaniasis. However, qPCR requires infrastructure and technical training that are not always available commercially or in public services. Here, we used a machine learning technique, namely a radial basis artificial neural network, to assess whether parasite load could be learned from clinical data (serological test, biochemical markers and physical signs). By comparing 18 different combinations of input clinical data, we found that parasite load can be accurately predicted using a relatively small reference set of 35 naturally infected dogs and 20 controls. In the best-case scenario (use of all clinical data), predictions presented no bias or inflation and an accuracy (i.e., correlation between true and predicted values) of 0.869, corresponding to an average error of ±38.2 parasites per unit of volume. We conclude that reasonable estimates of L. infantum load from lymph node aspirates can be obtained from clinical records when qPCR services are not available. Copyright © 2016 Elsevier B.V. All rights reserved.
Calculating forces on thin flat plates with incomplete vorticity-field data
NASA Astrophysics Data System (ADS)
Limacher, Eric; Morton, Chris; Wood, David
2016-11-01
Optical experimental techniques such as particle image velocimetry (PIV) permit detailed quantification of velocities in the wakes of bluff bodies. Patterns in the wake development are significant to force generation, but it is not trivial to quantitatively relate changes in the wake to changes in measured forces. Key difficulties in this regard include: (i) accurate quantification of velocities close to the body, and (ii) the effect of missing velocity or vorticity data in regions where optical access is obscured. In the present work, we consider force formulations based on the vorticity field, wherein mathematical manipulation eliminates the need for accurate near-body velocity information. Attention is restricted to nominally two-dimensional problems, namely (i) a linearly accelerating flat plate, investigated using PIV in a water tunnel, and (ii) a pitching plate in a freestream flow, as investigated numerically by Wang & Eldredge (2013). Missing vorticity data on the pressure side of the plate significantly affects the force calculation for the pitching-plate test case. Fortunately, if the vorticity on the pressure side remains confined to a thin boundary layer, simple corrections can be applied to recover a force estimate.
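One standard vorticity-based force formulation, for a nominally two-dimensional incompressible flow in which all vorticity is captured, expresses the force as the rate of change of hydrodynamic impulse; whether this exact form matches the formulations used by the authors is an assumption:

```latex
\mathbf{F} \;=\; -\rho\,\frac{\mathrm{d}}{\mathrm{d}t}
\int_{A}\mathbf{x}\times\boldsymbol{\omega}\,\mathrm{d}A,
\qquad \boldsymbol{\omega}=\omega\,\hat{\mathbf{z}} .
```

Written this way, vorticity missing from the pressure-side boundary layer biases the integral directly, which is why a thin-boundary-layer correction can recover the force estimate.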
Skeletal assessment with finite element analysis: relevance, pitfalls and interpretation.
Campbell, Graeme Michael; Glüer, Claus-C
2017-07-01
Finite element models simulate the mechanical response of bone under load, enabling noninvasive assessment of strength. Models generated from quantitative computed tomography (QCT) incorporate the geometry and spatial distribution of bone mineral density (BMD) to simulate physiological and traumatic loads as well as orthopaedic implant behaviour. The present review discusses the current strengths and weaknesses of finite element models for application to skeletal biomechanics. In cadaver studies, finite element models provide better estimations of strength than BMD. Data from clinical studies are encouraging; however, the superiority of finite element models over BMD measures for fracture prediction has not been shown conclusively, and may be sex- and site-dependent. Therapeutic effects on bone strength are larger than those on BMD; however, model validation has only been performed on untreated bone. High-resolution modalities and novel image processing methods may enhance the structural representation and predictive ability. Despite extensive use of finite element models to study orthopaedic implant stability, accurate simulation of the bone-implant interface and fracture progression remains a significant challenge. Skeletal finite element models provide noninvasive assessments of strength and implant stability. Improved structural representation and implant surface interaction may enable more accurate models of fragility in the future.
Song, Lei; Gao, Jungang; Wang, Sheng; Hu, Huasi; Guo, Youmin
2017-01-01
Estimation of pleural effusion volume is an important clinical issue. Existing methods cannot assess it accurately when there is a large volume of liquid in the pleural cavity and/or the patient has another disease (e.g., pneumonia). To help solve this issue, the objective of this study is to develop and test a novel algorithm that jointly uses B-splines and a local-clustering level set method, namely BLL. The BLL algorithm was applied to a dataset of 27 pleural effusions detected on chest CT examinations of 18 adult patients with free pleural effusion. Study results showed that the average volumes of pleural effusion computed using the BLL algorithm and assessed manually by the physicians were 586 ± 339 ml and 604 ± 352 ml, respectively. For the same patient, the volume of the pleural effusion segmented semi-automatically was 101.8% ± 4.6% of that segmented manually. Dice similarity was found to be 0.917 ± 0.031. The study demonstrated the feasibility of applying the new BLL algorithm to accurately measure the volume of pleural effusion.
Probabilistic brain tissue segmentation in neonatal magnetic resonance imaging.
Anbeek, Petronella; Vincken, Koen L; Groenendaal, Floris; Koeman, Annemieke; van Osch, Matthias J P; van der Grond, Jeroen
2008-02-01
A fully automated method has been developed for segmentation of four different structures in the neonatal brain: white matter (WM), central gray matter (CEGM), cortical gray matter (COGM), and cerebrospinal fluid (CSF). The segmentation algorithm is based on information from T2-weighted (T2-w) and inversion recovery (IR) scans. The method uses a K nearest neighbor (KNN) classification technique with features derived from spatial information and voxel intensities. Probabilistic segmentations of each tissue type were generated. By applying thresholds on these probability maps, binary segmentations were obtained. These final segmentations were evaluated by comparison with a gold standard. The sensitivity, specificity, and Dice similarity index (SI) were calculated for quantitative validation of the results. High sensitivity and specificity with respect to the gold standard were reached: sensitivity >0.82 and specificity >0.9 for all tissue types. Tissue volumes were calculated from the binary and probabilistic segmentations. The probabilistic segmentation volumes of all tissue types accurately estimated the gold standard volumes. The KNN approach offers valuable ways for neonatal brain segmentation. The probabilistic outcomes provide a useful tool for accurate volume measurements. The described method is based on routine diagnostic magnetic resonance imaging (MRI) and is suitable for large population studies.
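A KNN classifier over intensity and spatial features, returning per-class probabilities that are then thresholded into binary masks, can be sketched as follows (the feature set, label coding, and voxel volume are hypothetical, not the authors' trained model):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical per-voxel features: T2-w intensity, IR intensity, x, y, z.
rng = np.random.default_rng(2)
X_train = rng.normal(size=(5000, 5))
y_train = rng.integers(0, 4, size=5000)        # 0=WM, 1=CEGM, 2=COGM, 3=CSF

knn = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)
proba = knn.predict_proba(rng.normal(size=(100, 5)))  # probabilistic maps
csf_mask = proba[:, 3] > 0.5                   # threshold -> binary mask
csf_volume_ml = csf_mask.sum() * 0.001         # hypothetical voxel volume (ml)
```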
High Resolution, Large Deformation 3D Traction Force Microscopy
López-Fagundo, Cristina; Reichner, Jonathan; Hoffman-Kim, Diane; Franck, Christian
2014-01-01
Traction Force Microscopy (TFM) is a powerful approach for quantifying cell-material interactions that over the last two decades has contributed significantly to our understanding of cellular mechanosensing and mechanotransduction. In addition, recent advances in three-dimensional (3D) imaging and traction force analysis (3D TFM) have highlighted the significance of the third dimension in influencing various cellular processes. Yet irrespective of dimensionality, almost all TFM approaches have relied on a linear elastic theory framework to calculate cell surface tractions. Here we present a new high resolution 3D TFM algorithm which utilizes a large deformation formulation to quantify cellular displacement fields with unprecedented resolution. The results feature some of the first experimental evidence that cells are indeed capable of exerting large material deformations, which require the formulation of a new theoretical TFM framework to accurately calculate the traction forces. Based on our previous 3D TFM technique, we reformulate our approach to accurately account for large material deformation and quantitatively contrast and compare both linear and large deformation frameworks as a function of the applied cell deformation. Particular attention is paid in estimating the accuracy penalty associated with utilizing a traditional linear elastic approach in the presence of large deformation gradients. PMID:24740435
Continent-wide survey reveals massive decline in African savannah elephants.
Chase, Michael J; Schlossberg, Scott; Griffin, Curtice R; Bouché, Philippe J C; Djene, Sintayehu W; Elkan, Paul W; Ferreira, Sam; Grossman, Falk; Kohi, Edward Mtarima; Landen, Kelly; Omondi, Patrick; Peltier, Alexis; Selier, S A Jeanetta; Sutcliffe, Robert
2016-01-01
African elephants (Loxodonta africana) are imperiled by poaching and habitat loss. Despite global attention to the plight of elephants, their population sizes and trends are uncertain or unknown over much of Africa. To conserve this iconic species, conservationists need timely, accurate data on elephant populations. Here, we report the results of the Great Elephant Census (GEC), the first continent-wide, standardized survey of African savannah elephants. We also provide the first quantitative model of elephant population trends across Africa. We estimated a population of 352,271 savannah elephants on study sites in 18 countries, representing approximately 93% of all savannah elephants in those countries. Elephant populations in survey areas with historical data decreased by an estimated 144,000 from 2007 to 2014, and populations are currently shrinking by 8% per year continent-wide, primarily due to poaching. Though 84% of elephants occurred in protected areas, many protected areas had carcass ratios that indicated high levels of elephant mortality. Results of the GEC show the necessity of action to end the African elephants' downward trajectory by preventing poaching and protecting habitat.
Baston, David S; Denison, Michael S
2011-02-15
The chemically activated luciferase expression (CALUX) system is a mechanistically based recombinant luciferase reporter gene cell bioassay used in combination with chemical extraction and clean-up methods for the detection and relative quantitation of 2,3,7,8-tetrachlorodibenzo-p-dioxin and related dioxin-like halogenated aromatic hydrocarbons in a wide variety of sample matrices. While sample extracts containing complex mixtures of chemicals can produce a variety of distinct concentration-dependent luciferase induction responses in CALUX cells, these effects are produced through a common mechanism of action (i.e., the Ah receptor (AhR)), allowing normalization of results and sample potency determination. Here we describe the diversity in CALUX response to PCDD/Fs from sediment and soil extracts, report the occurrence of superinduction of the CALUX bioassay, and describe a mechanistically based approach for normalization of superinduction data that results in a more accurate estimation of the relative potency of such sample extracts. Copyright © 2010 Elsevier B.V. All rights reserved.
Yin, Xinyou
2013-01-01
Background Process-based ecophysiological crop models are pivotal in assessing responses of crop productivity and designing strategies of adaptation to climate change. Most existing crop models generally over-estimate the effect of elevated atmospheric [CO2], despite decades of experimental research on crop growth response to [CO2]. Analysis A review of the literature indicates that the quantitative relationships for a number of traits, once expressed as a function of internal plant nitrogen status, are altered little by elevated [CO2]. A model incorporating these nitrogen-based functional relationships and mechanisms simulated photosynthetic acclimation to elevated [CO2], thereby reducing the chance of over-estimating crop response to [CO2]. Crop models that aim to be robust, to have small parameterization requirements, and yet to generate phenotypic plasticity under changing environmental conditions need to capture the carbon–nitrogen interactions during crop growth. Conclusions The performance of the improved models depends little on the type of experimental facility used to obtain data for parameterization, and allows accurate projections of the impact of elevated [CO2] and other climatic variables on crop productivity. PMID:23388883
USDA-ARS?s Scientific Manuscript database
Quantitative real-time polymerase chain reaction (qRT-PCR) is a commonly used technique for measuring gene expression levels due to its simplicity, specificity, and sensitivity. Reliable reference selection for the accurate quantification of gene expression under various experimental conditions is a...
Winfree, Seth; Dagher, Pierre C; Dunn, Kenneth W; Eadon, Michael T; Ferkowicz, Michael; Barwinska, Daria; Kelly, Katherine J; Sutton, Timothy A; El-Achkar, Tarek M
2018-06-05
Kidney biopsy remains the gold standard for uncovering the pathogenesis of acute and chronic kidney diseases. However, the ability to perform high resolution, quantitative, molecular and cellular interrogation of this precious tissue is still at a developing stage compared to other fields such as oncology. Here, we discuss recent advances in performing large-scale, three-dimensional (3D), multi-fluorescence imaging of kidney biopsies and quantitative analysis referred to as 3D tissue cytometry. This approach allows the accurate measurement of specific cell types and their spatial distribution in a thick section spanning the entire length of the biopsy. By uncovering specific disease signatures, including rare occurrences, and linking them to the biology in situ, this approach will enhance our understanding of disease pathogenesis. Furthermore, by providing accurate quantitation of cellular events, 3D cytometry may improve the accuracy of prognosticating the clinical course and response to therapy. Therefore, large-scale 3D imaging and cytometry of kidney biopsy is poised to become a bridge towards personalized medicine for patients with kidney disease. © 2018 S. Karger AG, Basel.
Wagner, Rebecca; Wetzel, Stephanie J; Kern, John; Kingston, H M Skip
2012-02-01
The employment of chemical weapons by rogue states and/or terrorist organizations is an ongoing concern in the United States. The quantitative analysis of nerve agents must be rapid and reliable for use in the private and public sectors. Current methods describe a tedious and time-consuming derivatization for gas chromatography-mass spectrometry and liquid chromatography in tandem with mass spectrometry. Two solid-phase extraction (SPE) techniques for the analysis of glyphosate and methylphosphonic acid are described with the utilization of isotopically enriched analytes for quantitation via atmospheric pressure chemical ionization-quadrupole time-of-flight mass spectrometry (APCI-Q-TOF-MS) that does not require derivatization. Solid-phase extraction-isotope dilution mass spectrometry (SPE-IDMS) involves pre-equilibration of a naturally occurring sample with an isotopically enriched standard. The second extraction method, i-Spike, involves loading an isotopically enriched standard onto the SPE column before the naturally occurring sample. The sample and the spike are then co-eluted from the column enabling precise and accurate quantitation via IDMS. The SPE methods in conjunction with IDMS eliminate concerns of incomplete elution, matrix and sorbent effects, and MS drift. For accurate quantitation with IDMS, the isotopic contribution of all atoms in the target molecule must be statistically taken into account. This paper describes two newly developed sample preparation techniques for the analysis of nerve agent surrogates in drinking water as well as statistical probability analysis for proper molecular IDMS. The methods described in this paper demonstrate accurate molecular IDMS using APCI-Q-TOF-MS with limits of quantitation as low as 0.400 mg/kg for glyphosate and 0.031 mg/kg for methylphosphonic acid. Copyright © 2012 John Wiley & Sons, Ltd.
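Single-spike IDMS rests on one algebraic relation. Writing a_x1, a_x2 for the abundances of the two measured isotopes (or isotopologues) in the natural sample, a_s1, a_s2 for those in the enriched spike, and R_m for the measured blend ratio of isotope 1 to isotope 2, the amount of analyte n_x follows from the known amount of spike n_s; this is the generic textbook form, which the molecular isotopic-probability corrections discussed in the paper refine:

```latex
n_{x} \;=\; n_{s}\,\frac{a_{s1}-R_{m}\,a_{s2}}{R_{m}\,a_{x2}-a_{x1}}
```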
Quantitative analysis of rib movement based on dynamic chest bone images: preliminary results
NASA Astrophysics Data System (ADS)
Tanaka, R.; Sanada, S.; Oda, M.; Mitsutaka, M.; Suzuki, K.; Sakuta, K.; Kawashima, H.
2014-03-01
Rib movement during respiration is one of the diagnostic criteria in pulmonary impairments. In general, rib movement is assessed by fluoroscopy. However, the shadows of lung vessels and bronchi overlapping the ribs prevent accurate quantitative analysis of rib movement. Recently, an image-processing technique for separating bones from soft tissue in static chest radiographs, called the "bone suppression technique", has been developed. Our purpose in this study was to evaluate the usefulness of dynamic bone images created by the bone suppression technique in quantitative analysis of rib movement. Dynamic chest radiographs of 10 patients were obtained using a dynamic flat-panel detector (FPD). A bone suppression technique based on a massive-training artificial neural network (MTANN) was applied to the dynamic chest images to create bone images. Velocity vectors were measured in local areas on the dynamic bone images, forming velocity maps. The velocity maps obtained with bone and original images for scoliosis and normal cases were compared to assess the advantages of bone images. With dynamic bone images, we were able to quantify and distinguish movements of ribs from those of other lung structures accurately. Limited rib movements of scoliosis patients appeared as reduced rib velocity vectors. Vector maps in all normal cases exhibited left-right symmetric distributions, whereas those in abnormal cases showed nonuniform distributions. In conclusion, dynamic bone images were useful for accurate quantitative analysis of rib movements: limited rib movements appeared as reduced velocity vectors and left-right asymmetric distributions on the vector maps. Thus, dynamic bone images can be a new diagnostic tool for quantitative analysis of rib movements without additional radiation dose.
Robert E. Keane; Laura J. Dickinson
2007-01-01
Fire managers need better estimates of fuel loading so they can more accurately predict the potential fire behavior and effects of alternative fuel and ecosystem restoration treatments. This report presents a new fuel sampling method, called the photoload sampling technique, to quickly and accurately estimate loadings for six common surface fuel components (1 hr, 10 hr...
On sweat analysis for quantitative estimation of dehydration during physical exercise.
Ring, Matthias; Lohmueller, Clemens; Rauh, Manfred; Eskofier, Bjoern M
2015-08-01
Quantitative estimation of water loss during physical exercise is of importance because dehydration can impair both muscular strength and aerobic endurance. A physiological indicator for deficit of total body water (TBW) might be the concentration of electrolytes in sweat. It has been shown that concentrations differ after physical exercise depending on whether water loss was replaced by fluid intake or not. However, to the best of our knowledge, this fact has not been examined for its potential to quantitatively estimate TBW loss. Therefore, we conducted a study in which sweat samples were collected continuously during two hours of physical exercise without fluid intake. A statistical analysis of these sweat samples revealed significant correlations between chloride concentration in sweat and TBW loss (r = 0.41, p < 0.01), and between sweat osmolality and TBW loss (r = 0.43, p < 0.01). A quantitative estimation of TBW loss resulted in a mean absolute error of 0.49 l per estimation. Although the precision has to be improved for practical applications, the present results suggest that TBW loss estimation could be realizable using sweat samples.
Austin, Jehannine C.; Hippman, Catriona; Honer, William G.
2013-01-01
Studies show that individuals with psychotic illnesses and their families want information about psychosis risks for other relatives. However, deriving accurate numeric probabilities for psychosis risk is challenging, and people have difficulty interpreting probabilistic information; thus, some have suggested that clinicians should use risk descriptors, such as ‘moderate’ or ‘quite high’, rather than numbers. Little is known about how individuals with psychosis and their family members use quantitative and qualitative descriptors of risk in the specific context of chance for an individual to develop psychosis. We explored numeric and descriptive estimations of psychosis risk among individuals with psychotic disorders and unaffected first-degree relatives. In an online survey, respondents numerically and descriptively estimated the risk for an individual to develop psychosis in scenarios where they had: A) no affected family members; and B) an affected sibling. 219 affected individuals and 211 first-degree relatives participated. Affected individuals estimated significantly higher risks than relatives. Participants attributed all descriptors between “very low” and “very high” to probabilities of 1%, 10%, 25% and 50%+. For a given numeric probability, different risk descriptors were attributed in different scenarios. Clinically, brief interventions around risk (using either probabilities or descriptors alone) are vulnerable to miscommunication and potentially profoundly negative consequences; interventions around risk are best suited to in-depth discussion. PMID:22421074
Shang, Xiaoyan; Carlson, Michelle C; Tang, Xiaoying
2018-04-30
Total intracranial volume (TIV) is often used as a measure of brain size to correct for individual variability in magnetic resonance imaging (MRI) based morphometric studies. An adjustment of TIV can greatly increase the statistical power of brain morphometry methods. As such, an accurate and precise TIV estimation is of great importance in MRI studies. In this paper, we compared three automated TIV estimation methods (multi-atlas likelihood fusion (MALF), Statistical Parametric Mapping 8 (SPM8) and FreeSurfer (FS)) using longitudinal T1-weighted MR images in a cohort of 70 older participants at elevated sociodemographic risk for Alzheimer's disease. Statistical group comparisons in terms of four different metrics were performed. Furthermore, sex, education level, and intervention status were investigated separately for their impacts on the TIV estimation performance of each method. According to our experimental results, MALF was the least susceptible to atrophy, while SPM8 and FS suffered a loss in precision. In group-wise analysis, MALF was the least sensitive method to group variation, whereas SPM8 was particularly sensitive to sex and FS was unstable with respect to education level. In terms of effectiveness, both MALF and SPM8 delivered a user-friendly performance, while FS was relatively computationally intensive. Copyright © 2018 Elsevier B.V. All rights reserved.
Dehkordi, Parastoo; Garde, Ainara; Karlen, Walter; Wensley, David; Ansermino, J Mark; Dumont, Guy A
2013-01-01
Heart Rate Variability (HRV), the variation of time intervals between heartbeats, is one of the most promising and widely used quantitative markers of autonomic activity. Traditionally, HRV is measured as the series of instantaneous cycle intervals obtained from the electrocardiogram (ECG). In this study, we investigated the estimation of variation in heart rate from a photoplethysmography (PPG) signal, called pulse rate variability (PRV), and assessed its accuracy as an estimate of HRV in children with and without sleep disordered breathing (SDB). We recorded raw PPGs from 72 children using the Phone Oximeter, an oximeter connected to a mobile phone. Full polysomnography including ECG was simultaneously recorded for each subject. We used correlation and Bland-Altman analysis to compare the parameters of HRV and PRV between the two groups of children. Significant correlation (r > 0.90, p < 0.05) and close agreement were found between HRV and PRV for mean intervals, the standard deviation of intervals (SDNN) and the root-mean-square of the difference of successive intervals (RMSSD). However, Bland-Altman analysis showed a large divergence for the LF/HF ratio. In addition, children with SDB had depressed SDNN and RMSSD and elevated LF/HF in comparison to children without SDB. In conclusion, PRV provides an accurate estimate of HRV for time-domain parameters but does not provide precise estimates of frequency-domain parameters.
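The two time-domain parameters compared here are computed directly from the beat-to-beat interval series, whether those intervals come from the ECG (HRV) or the PPG (PRV). A minimal sketch with invented interval values:

```python
import numpy as np

def time_domain_hrv(rr_ms):
    """SDNN and RMSSD from beat-to-beat (RR or pulse-to-pulse) intervals."""
    x = np.asarray(rr_ms, dtype=float)
    sdnn = x.std(ddof=1)                        # overall variability
    rmssd = np.sqrt(np.mean(np.diff(x) ** 2))   # beat-to-beat variability
    return sdnn, rmssd

print(time_domain_hrv([812.0, 790.0, 805.0, 840.0, 825.0, 798.0]))
```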
3D TOCSY-HSQC NMR for metabolic flux analysis using non-uniform sampling
Reardon, Patrick N.; Marean-Reardon, Carrie L.; Bukovec, Melanie A.; ...
2016-02-05
13C-Metabolic Flux Analysis (13C-MFA) is rapidly being recognized as the authoritative method for determining fluxes through metabolic networks. Site-specific 13C enrichment information obtained using NMR spectroscopy is a valuable input for 13C-MFA experiments. Chemical shift overlaps in the 1D or 2D NMR experiments typically used for 13C-MFA frequently hinder assignment and quantitation of site-specific 13C enrichment. Here we propose the use of a 3D TOCSY-HSQC experiment for 13C-MFA. We employ Non-Uniform Sampling (NUS) to reduce the acquisition time of the experiment to a few hours, making it practical for use in 13C-MFA experiments. Our data show that the NUS experiment is linear and quantitative. Identification of metabolites in complex mixtures, such as a biomass hydrolysate, is simplified by virtue of the 13C chemical shift obtained in the experiment. In addition, the experiment reports 13C-labeling information that reveals the position-specific labeling of subsets of isotopomers. As a result, the information provided by this technique will enable more accurate estimation of metabolic fluxes in larger metabolic networks.
Li, Dongmei; Guan, Tian; He, Yonghong; Liu, Fang; Yang, Anping; He, Qinghua; Shen, Zhiyuan; Xin, Meiguo
2018-07-01
A new chiral sensor based on weak measurement has been developed to accurately measure optical rotation (OR) for the estimation of trace amounts of a chiral molecule. With the principle of optical weak measurement in the frequency domain, the central wavelength shift of the output spectra is quantitatively related to the angle of the preselected polarization. Hence, a chiral molecule (e.g., an L- or D-amino acid) can be enantioselectively determined by modifying the preselection angle with the OR, which rotates the polarization plane. The concentration of the chiral sample, corresponding to its optical activity, is quantitatively analyzed via the central wavelength shift of the output spectra, which can be collected in real time. Because it is immune to refractive-index changes, the proposed chiral sensor remains valid in complex measurement conditions. Detection of proline enantiomer concentrations in different solvents was demonstrated. The results showed that weak measurement is a reliable method for chiral recognition of proline enantiomers in diverse conditions, with the merits of high precision and good robustness. In addition, this real-time monitoring approach could play a crucial part in asymmetric synthesis and biological systems. Copyright © 2018. Published by Elsevier B.V.
Nuclear model calculations and their role in space radiation research
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Cucinotta, F. A.; Heilbronn, L. H.
2002-01-01
Proper assessment of spacecraft shielding requirements and concomitant estimates of risk to spacecraft crews from energetic space radiation require accurate, quantitative methods of characterizing the compositional changes in these radiation fields as they pass through thick absorbers. These quantitative methods are also needed for characterizing accelerator beams used in space radiobiology studies. Because of the impracticality/impossibility of measuring these altered radiation fields inside critical internal body organs of biological test specimens and humans, computational methods rather than direct measurements must be used. Since composition changes in the fields arise from nuclear interaction processes (elastic, inelastic and breakup), knowledge of the appropriate cross sections and spectra must be available. Experiments alone cannot provide the necessary cross section and secondary particle (neutron and charged particle) spectral data because of the large number of nuclear species and wide range of energies involved in space radiation research. Hence, nuclear models are needed. In this paper, current methods of predicting total and absorption cross sections and secondary particle (neutron and ion) yields and spectra for space radiation protection analyses are reviewed. Model shortcomings are discussed and future needs presented. © 2002 COSPAR. Published by Elsevier Science Ltd. All rights reserved.
Nondestructive evaluation using dipole model analysis with a scan type magnetic camera
NASA Astrophysics Data System (ADS)
Lee, Jinyi; Hwang, Jiseong
2005-12-01
Large structures such as nuclear power, thermal power, and chemical and petroleum refining plants are drawing interest with regard to the economics of extending component life, given the harsh environment created by high pressure, high temperature and fatigue, the need to secure safety against corrosion, and components exceeding their designated life span. Therefore, technology that accurately calculates and predicts the degradation and defects of aging materials is extremely important. Among the available methods, nondestructive testing using magnetic methods is effective in predicting and evaluating defects on the surface of, or surrounding, ferromagnetic structures. Estimating the distribution of magnetic field intensity is important for applying magnetic methods to industrial nondestructive evaluation. A magnetic camera provides the distribution of a quantitative magnetic field with homogeneous lift-off and spatial resolution. The magnetic field distribution can be interpreted by introducing a dipole model. This study proposed an algorithm for nondestructive evaluation using dipole model analysis with a scan type magnetic camera. Numerical and experimental considerations for the quantitative evaluation of cracks of several sizes and shapes using magnetic field images from the magnetic camera were examined.
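Dipole-model analysis typically starts from the point-dipole field, with the defect's size and shape entering through an effective moment m fitted to the measured field image (treating a crack as one or more point dipoles is the modeling assumption):

```latex
\mathbf{B}(\mathbf{r}) \;=\; \frac{\mu_{0}}{4\pi}\,
\frac{3(\mathbf{m}\cdot\hat{\mathbf{r}})\,\hat{\mathbf{r}}-\mathbf{m}}{r^{3}}
```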
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.
Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe
2015-08-01
The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
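The pseudo-F statistic at the core of PERMANOVA, and its permutation p-value, can be computed directly from a distance matrix. A bare-bones sketch following Anderson's (2001) partitioning (illustrative only, not the authors' R package):

```python
import numpy as np

def pseudo_f(d, groups):
    """PERMANOVA pseudo-F from an n-by-n distance matrix (Anderson 2001)."""
    groups = np.asarray(groups)
    n, labels = len(groups), np.unique(groups)
    iu = np.triu_indices(n, 1)
    ss_total = (d[iu] ** 2).sum() / n
    ss_within = 0.0
    for g in labels:
        idx = np.flatnonzero(groups == g)
        sub = d[np.ix_(idx, idx)]
        ss_within += (sub[np.triu_indices(len(idx), 1)] ** 2).sum() / len(idx)
    a = len(labels)
    return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

def permanova_p(d, groups, n_perm=999, seed=0):
    """Permutation p-value: shuffle group labels and recompute pseudo-F."""
    rng = np.random.default_rng(seed)
    f_obs = pseudo_f(d, groups)
    f_perm = [pseudo_f(d, rng.permutation(groups)) for _ in range(n_perm)]
    return (1 + sum(f >= f_obs for f in f_perm)) / (n_perm + 1)
```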
Evaluating Cardiovascular Health Disparities Using Estimated Race/Ethnicity: A Validation Study.
Bykov, Katsiaryna; Franklin, Jessica M; Toscano, Michele; Rawlins, Wayne; Spettell, Claire M; McMahill-Walraven, Cheryl N; Shrank, William H; Choudhry, Niteesh K
2015-12-01
Methods of estimating race/ethnicity using administrative data are increasingly used to examine and target disparities; however, there has been no validation of these methods using clinically relevant outcomes. To evaluate the validity of the indirect method of race/ethnicity identification based on place of residence and surname for assessing clinically relevant outcomes. A total of 2387 participants in the Post-MI Free Rx Event and Economic Evaluation (MI FREEE) trial who had both self-reported and Bayesian Improved Surname Geocoding method (BISG)-estimated race/ethnicity information available were included. We used tests of interaction to compare differences in the effect of providing full drug coverage for post-MI medications on adherence and rates of major vascular events or revascularization for white and nonwhite patients based upon self-reported and indirect racial/ethnic assignment. The impact of full coverage on clinical events differed substantially when based upon self-identified race (HR=0.97 for whites, HR=0.65 for nonwhites; interaction P-value=0.05); however, it did not differ among race/ethnicity groups classified using indirect methods (HR=0.87 for whites and nonwhites; interaction P-value=0.83). The impact on adherence was the same for self-reported and BISG-estimated race/ethnicity for 2 of the 3 medication classes studied. Quantitatively and qualitatively different results were obtained when indirectly estimated race/ethnicity was used, suggesting that these techniques may not accurately describe aspects of race/ethnicity related to actual health behaviors.
Quantitative prediction of phase transformations in silicon during nanoindentation
NASA Astrophysics Data System (ADS)
Zhang, Liangchi; Basak, Animesh
2013-08-01
This paper establishes the first quantitative relationship between the phases transformed in silicon and the shape characteristics of nanoindentation curves. Based on an integrated analysis using TEM and unit cell properties of phases, the volumes of the phases emerged in a nanoindentation are formulated as a function of pop-out size and depth of nanoindentation impression. This simple formula enables a fast, accurate and quantitative prediction of the phases in a nanoindentation cycle, which has been impossible before.
Heijtel, D F R; Mutsaerts, H J M M; Bakker, E; Schober, P; Stevens, M F; Petersen, E T; van Berckel, B N M; Majoie, C B L M; Booij, J; van Osch, M J P; Vanbavel, E; Boellaard, R; Lammertsma, A A; Nederveen, A J
2014-05-15
Measurements of cerebral blood flow (CBF) and cerebrovascular reactivity (CVR) provide useful information about cerebrovascular condition and regional metabolism. Pseudo-continuous arterial spin labeling (pCASL) is a promising non-invasive MRI technique for quantitatively measuring CBF, and additional hypercapnic pCASL measurements are currently showing great promise for quantitatively assessing CVR. However, the introduction of pCASL at a larger scale awaits further evaluation of its accuracy and precision against the gold standard. ¹⁵O-H₂O positron emission tomography (PET) is currently regarded as the most accurate and precise method for quantitatively measuring both CBF and CVR, though it is also one of the more invasive methods. In this study, we therefore assessed the accuracy and precision of quantitative pCASL-based CBF and CVR measurements by performing a head-to-head comparison with ¹⁵O-H₂O PET, based on quantitative CBF measurements during baseline and hypercapnia. We demonstrate that pCASL CBF imaging is accurate during both baseline and hypercapnia with respect to ¹⁵O-H₂O PET, with comparable precision. These results pave the way for quantitative use of pCASL MRI in both clinical and research settings. Copyright © 2014 Elsevier Inc. All rights reserved.
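For reference, quantitative pCASL typically uses the single-compartment model recommended in the ASL consensus paper (Alsop et al., 2015); whether this exact form was used in the present study is an assumption:

```latex
\mathrm{CBF} \;=\; \frac{6000\,\lambda\,\Delta M\,
e^{\mathrm{PLD}/T_{1,\mathrm{blood}}}}
{2\,\alpha\,T_{1,\mathrm{blood}}\,M_{0}
\left(1-e^{-\tau/T_{1,\mathrm{blood}}}\right)}
\quad\left[\mathrm{mL}/100\,\mathrm{g}/\mathrm{min}\right]
```

where λ is the blood-brain partition coefficient, ΔM the label-control signal difference, PLD the post-labeling delay, α the labeling efficiency, and τ the label duration.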
Estimating 3D tilt from local image cues in natural scenes
Burge, Johannes; McCann, Brian C.; Geisler, Wilson S.
2016-01-01
Estimating three-dimensional (3D) surface orientation (slant and tilt) is an important first step toward estimating 3D shape. Here, we examine how three local image cues from the same location (disparity gradient, luminance gradient, and dominant texture orientation) should be combined to estimate 3D tilt in natural scenes. We collected a database of natural stereoscopic images with precisely co-registered range images that provide the ground-truth distance at each pixel location. We then analyzed the relationship between ground-truth tilt and image cue values. Our analysis is free of assumptions about the joint probability distributions and yields the Bayes optimal estimates of tilt, given the cue values. Rich results emerge: (a) typical tilt estimates are only moderately accurate and strongly influenced by the cardinal bias in the prior probability distribution; (b) when cue values are similar, or when slant is greater than 40°, estimates are substantially more accurate; (c) when luminance and texture cues agree, they often veto the disparity cue, and when they disagree, they have little effect; and (d) the simplifying assumptions common in the cue-combination literature are often justified for estimating tilt in natural scenes. The fact that tilt estimates are typically not very accurate is consistent with subjective impressions from viewing small patches of natural scene. The fact that estimates are substantially more accurate for a subset of image locations is also consistent with subjective impressions and with the hypothesis that perceived surface orientation, at more global scales, is achieved by interpolation or extrapolation from estimates at key locations. PMID:27738702
NASA Astrophysics Data System (ADS)
Velasco-Forero, Carlos A.; Sempere-Torres, Daniel; Cassiraga, Eduardo F.; Jaime Gómez-Hernández, J.
2009-07-01
Quantitative estimation of rainfall fields has been a crucial objective from early studies of the hydrological applications of weather radar. Previous studies have suggested that flow estimations are improved when radar and rain gauge data are combined to estimate input rainfall fields. This paper reports new research carried out in this field. Classical approaches for the selection and fitting of a theoretical correlogram (or semivariogram) model (needed to apply geostatistical estimators) are avoided in this study. Instead, a non-parametric technique based on FFT is used to obtain two-dimensional positive-definite correlograms directly from radar observations, dealing with both the natural anisotropy and the temporal variation of the spatial structure of the rainfall in the estimated fields. Because these correlation maps can be obtained automatically at each time step of a given rainfall event, this technique might easily be used in operational (real-time) applications. This paper describes the development of the non-parametric estimator exploiting the advantages of FFT for the automatic computation of correlograms and provides examples of its application in a case study using six rainfall events. This methodology is applied to three different alternatives for incorporating the radar information (as a secondary variable), and a comparison of performances is provided. In particular, their ability to reproduce in the estimated rainfall fields (i) the rain gauge observations (in a cross-validation analysis) and (ii) the spatial patterns of the radar fields is analyzed. Results indicate that the methodology of kriging with external drift (KED), in combination with the technique of automatically computing 2-D spatial correlograms, provides merged rainfall fields in good agreement with the rain gauge observations and the closest reproduction of the spatial tendencies observed in the radar rainfall fields, compared with the other alternatives analyzed.
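The core trick, estimating a positive-definite 2-D correlogram directly from a radar field via FFT, follows from the Wiener-Khinchin theorem. A minimal sketch, with zero-padding and the biased normalization (which preserves positive semi-definiteness) as assumed implementation choices:

```python
import numpy as np

def correlogram_2d(field):
    """Nonparametric 2-D correlogram of a radar field via FFT (Wiener-Khinchin).
    Zero-padding avoids circular wrap-around; normalizing by the zero-lag value
    gives the biased, positive semi-definite correlogram estimate."""
    f = np.nan_to_num(field - np.nanmean(field))
    ny, nx = f.shape
    F = np.fft.fft2(f, s=(2 * ny, 2 * nx))        # zero-padded transform
    acf = np.fft.ifft2(np.abs(F) ** 2).real       # unnormalized autocovariance
    acf = np.fft.fftshift(acf)
    return acf / acf.max()                        # correlogram, 1 at zero lag
```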
Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan
2015-06-01
Spectroscopy is often applied when a rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise due to non-ideal instrument and sample properties. In this study, to improve quantitative analysis of near-infrared spectra, derivatives of noisy raw spectral data need to be estimated with high accuracy. A new spectral estimator based on a singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, and a stability analysis of the estimator is given. Theoretical analysis and simulation results confirm that the derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated through the analysis of beer spectra. The derivative spectra of the beer and marzipan datasets are used to build calibration models using partial least squares (PLS) modeling. The results show that PLS based on the new estimator achieves better performance than the Savitzky-Golay algorithm and can serve as an alternative for quantitative analytical applications.
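The SPSE itself is specific to the paper, but the Savitzky-Golay baseline it is benchmarked against is standard. A minimal example of a smoothed first-derivative spectrum with SciPy; the synthetic band, window length, and polynomial order are arbitrary illustrative choices.

```python
import numpy as np
from scipy.signal import savgol_filter

# Noisy synthetic "spectrum": one Gaussian band plus white noise
x = np.linspace(0, 1, 500)
y = np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2) + 0.01 * np.random.randn(x.size)

# First-derivative estimate; window length and order are tuning choices
dy = savgol_filter(y, window_length=21, polyorder=3, deriv=1,
                   delta=x[1] - x[0])
```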
Norman, Mark B; Pithers, Sonia M; Teng, Arthur Y; Waters, Karen A; Sullivan, Colin E
2017-03-01
To validate the Sonomat against polysomnography (PSG) metrics in children and to objectively measure snoring and stertor to produce a quantitative indicator of partial upper airway obstruction that accurately reflects the pathology of pediatric sleep-disordered breathing (SDB). Simultaneous PSG and Sonomat recordings were performed in 76 children (46 male; age 5.8 ± 2.8 years; BMI 18.5 ± 3.8 kg/m²). Sleep time, individual respiratory events, and the apnea/hypopnea index (AHI) were compared. Obstructed breathing sounds were measured with the unobtrusive, non-contact experimental device. There was no significant difference in total sleep time (TST), respiratory events, or AHI values, the latter over-estimated by 0.3 events/hr by the Sonomat. Poor signal quality was minimal, and gender, BMI, and body position did not adversely influence event detection. Obstructive and central events were classified correctly. The number of runs and duration of snoring (13,399 events, 20% TST) and stertor (5,748 events, 24% TST) were an order of magnitude greater than respiratory events (1,367 events, 1% TST). Many children defined as normal by PSG had as many or more runs of snoring and stertor as those with mild, moderate, and severe obstructive sleep apnea (OSA). The Sonomat accurately diagnoses SDB in children using current metrics. In addition, it permits quantification of partial airway obstruction that can be used to better describe pediatric SDB. Its non-contact design makes it ideal for use in children. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
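For reference, the AHI compared above is a simple event rate, which makes the reported 0.3 events/hr over-estimation concrete; a trivial sketch with illustrative names:

```python
def ahi(n_apneas, n_hypopneas, total_sleep_time_hr):
    """Apnea/hypopnea index: respiratory events per hour of sleep."""
    return (n_apneas + n_hypopneas) / total_sleep_time_hr
```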
Gomes, Fabio P; Shaw, P Nicholas; Whitfield, Karen; Hewavitharana, Amitha K
2015-09-03
Milk is an important source of nutrients for various risk populations, including infants. The accurate measurement of vitamin D in milk is necessary to provide adequate supplementation advice for risk groups and to monitor regulatory compliance. Currently used liquid chromatography-tandem mass spectrometry (LC-MS/MS) methods are capable of measuring only four analogues of vitamin D in unfortified milk. We report here an accurate quantitative analytical method for eight analogues of vitamin D: vitamin D2 and D3 (D2 and D3), 25-hydroxy D2 and D3, 24,25-dihydroxy D2 and D3, and 1,25-dihydroxy D2 and D3. In this study, we compared saponification and protein precipitation for the extraction of vitamin D from milk and found the latter to be more effective. We also optimised the pre-column derivatisation using 4-phenyl-1,2,4-triazoline-3,5-dione (PTAD) to achieve the highest sensitivity and accuracy for all major vitamin D forms in milk. Chromatography was optimised to reduce matrix effects such as ion suppression, and the matrix effects were eliminated using co-eluting stable isotope labelled internal standards for the calibration of each analogue. The analogues 25-hydroxyD3 (25(OH)D3) and its epimer (3-epi-25(OH)D3) were chromatographically resolved, to prevent over-estimation of 25(OH)D3. The method was validated and subsequently applied to the measurement of total vitamin D levels in human, cow, mare, goat and sheep milk samples. The detection limits, repeatability standard deviations, and recovery ranges were 0.2-0.4 femtomoles, 6.30-13.5%, and 88.2-105%, respectively. Copyright © 2015 Elsevier B.V. All rights reserved.
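Calibration with co-eluting stable-isotope-labelled internal standards boils down to regressing the analyte/IS peak-area ratio against concentration and inverting that line. A minimal sketch; function and variable names are illustrative, not the paper's.

```python
import numpy as np

def quantify(area_analyte, area_istd, slope, intercept):
    """Concentration from the analyte/internal-standard peak-area ratio using
    a linear calibration of ratio vs. concentration (illustrative names)."""
    ratio = np.asarray(area_analyte, dtype=float) / np.asarray(area_istd, dtype=float)
    return (ratio - intercept) / slope
```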
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, H; Zhou, B; Beidokhti, D
Purpose: To investigate the feasibility of accurate quantification of iodine mass thickness in contrast-enhanced spectral mammography. Methods: Experimental phantom studies were performed on a spectral mammography system based on Si strip photon-counting detectors. Dual-energy images were acquired using 40 kVp and a splitting energy of 34 keV with 3 mm Al pre-filtration. The initial calibration was done with glandular and adipose tissue equivalent phantoms of uniform thicknesses and iodine disk phantoms of various concentrations. A secondary calibration was carried out using the iodine signal obtained from the dual-energy decomposed images and the known background phantom thicknesses and densities. The iodine signal quantification method was validated using phantoms composed of a mixture of glandular and adipose materials, for various breast thicknesses and densities. Finally, the traditional dual-energy weighted subtraction method was also studied as a comparison. The measured iodine signal from both methods was compared to the known iodine concentrations of the disk phantoms to characterize the quantification accuracy. Results: There was good agreement between the iodine mass thicknesses measured using the proposed method and the known values. The root-mean-square (RMS) error was estimated to be 0.2 mg/cm2. The traditional weighted subtraction method also predicted a linear correlation between the measured signal and the known iodine mass thickness. However, the correlation slope and offset values were strongly dependent on the total breast thickness and density. Conclusion: The results of the current study suggest that iodine mass thickness can be accurately quantified with contrast-enhanced spectral mammography. The quantitative information can potentially improve the differentiation between benign and malignant lesions. Grant funding from Philips Medical Systems.
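The "traditional weighted subtraction" used as the comparison method is typically a difference of log signals with a weight chosen to cancel the soft-tissue background. A minimal sketch of that baseline only (the paper's two-step calibration is not reproduced here; the weight w would itself be calibrated):

```python
import numpy as np

def weighted_log_subtraction(low, high, w):
    """Dual-energy iodine image: difference of log signals, with weight w
    chosen so the soft-tissue background cancels (illustrative baseline)."""
    return np.log(np.asarray(high, dtype=float)) - w * np.log(np.asarray(low, dtype=float))
```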
Nair, Pradeep K; Carr, Jeffrey G; Bigelow, Brian; Bhatt, Deepak L; Berwick, Zachary C; Adams, George
2018-01-01
Proper vessel sizing during endovascular interventions is crucial to avoid adverse procedural and clinical outcomes. LumenRECON (LR) is a novel, nonimaging, 0.035-inch wire-based technology that uses the physics-based principle of Ohm's law to provide a simple, real-time luminal size while also providing a platform for therapy delivery. This study evaluated the accuracy, reliability, and safety of the LR system in patients presenting for a femoropopliteal artery intervention. This multicenter, prospective pilot study of 24 patients presenting for peripheral intervention compared LR measurements of femoropopliteal artery size to angiographic visual estimation, duplex ultrasound, quantitative angiography, and intravascular ultrasound. The primary effectiveness and safety end points were comparison against core laboratory adjudicated intravascular ultrasound values and major adverse events, respectively. Additional preclinical studies were also performed in vitro and in vivo in swine to determine the accuracy of the LR guidewire system. No intra- or postprocedure device-related adverse events occurred. A balloon or stent was successfully delivered in 12 patients (50%) over the LR wire. The difference in repeatability between successive LR measurements was 2.5±0.40% (R²=0.96) with no significant bias. Differences between LR measurements and the other modalities were 0.5±1.7%, 5.0±1.8%, -1.5±2.0%, and 6.8±3.4% for the intravascular ultrasound core laboratory, quantitative angiography, angiographic estimation, and duplex ultrasound, respectively. This study demonstrates that, through a physics-based principle, LR provides a real-time, safe, reproducible, and accurate vessel size of the femoropopliteal artery during intervention and can additionally serve as a conduit for therapy delivery over its wire-based platform. © 2018 American Heart Association, Inc.
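The abstract states only that sizing rests on Ohm's law. Purely as a hedged illustration of how such a measurement can work, one can infer the cross-sectional area of an assumed cylindrical lumen segment from a measured electrical resistance and a known fluid resistivity; the device's actual calibration is not described here and the names below are hypothetical.

```python
import numpy as np

def lumen_diameter_mm(resistance_ohm, resistivity_ohm_mm, length_mm):
    """Diameter of an assumed cylindrical lumen from measured resistance via
    R = rho * L / A (a minimal sketch, not the device's actual algorithm)."""
    area_mm2 = resistivity_ohm_mm * length_mm / resistance_ohm
    return 2.0 * np.sqrt(area_mm2 / np.pi)
```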
Assessment of a fully 3D Monte Carlo reconstruction method for preclinical PET with iodine-124
NASA Astrophysics Data System (ADS)
Moreau, M.; Buvat, I.; Ammour, L.; Chouin, N.; Kraeber-Bodéré, F.; Chérel, M.; Carlier, T.
2015-03-01
Iodine-124 is a radionuclide well suited to the labeling of intact monoclonal antibodies. Yet, accurate quantification in preclinical imaging with I-124 is challenging due to the large positron range and a complex decay scheme including high-energy gammas. The aim of this work was to assess the quantitative performance of a fully 3D Monte Carlo (MC) reconstruction for preclinical I-124 PET. The high-resolution small animal PET Inveon (Siemens) was simulated using GATE 6.1. Three system matrices (SM) of different complexity were calculated, in addition to a Siddon-based ray tracing approach for comparison purposes. Each system matrix accounted for a more or less complete description of the physics processes both in the scanned object and in the PET scanner. One homogeneous water phantom and three heterogeneous phantoms including water, lungs and bones were simulated, where hot and cold regions were used to assess activity recovery as well as the trade-off between contrast recovery and noise in different regions. The benefit of accounting for scatter, attenuation, positron range and spurious coincidences occurring in the object when calculating the system matrix used to reconstruct I-124 PET images was highlighted. We found that an MC SM including a thorough modelling of the detector response and physical effects in a uniform water-equivalent phantom was sufficient to achieve reasonable quantitative accuracy in both homogeneous and heterogeneous phantoms. Modelling the phantom heterogeneities in the SM did not necessarily yield the most accurate estimate of the activity distribution, due to the high variance affecting many SM elements in the most sophisticated SM.
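A Monte Carlo system matrix plugs into the same iterative update as any other SM. A minimal dense-matrix MLEM sketch of that setting (real system matrices are sparse and far larger, and the paper's OSEM-style details are omitted):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM reconstruction with an explicit system matrix A (n_bins x n_voxels)
    and measured sinogram y; a Monte Carlo-derived A can be swapped in."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity image
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```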
Christensen, Geoff A; Wymore, Ann M; King, Andrew J; Podar, Mircea; Hurt, Richard A; Santillan, Eugenio U; Soren, Ally; Brandt, Craig C; Brown, Steven D; Palumbo, Anthony V; Wall, Judy D; Gilmour, Cynthia C; Elias, Dwayne A
2016-10-01
Two genes, hgcA and hgcB, are essential for microbial mercury (Hg) methylation. Detection and estimation of their abundance, in conjunction with Hg concentration, bioavailability, and biogeochemistry, are critical in determining potential hot spots of methylmercury (MeHg) generation in at-risk environments. We developed broad-range degenerate PCR primers spanning known hgcAB genes to determine the presence of both genes in diverse environments. These primers were tested against an extensive set of pure cultures with published genomes, including 13 Deltaproteobacteria, nine Firmicutes, and nine methanogenic Archaea genomes. A distinct PCR product at the expected size was confirmed for all hgcAB(+) strains tested via Sanger sequencing. Additionally, we developed clade-specific degenerate quantitative PCR (qPCR) primers that targeted hgcA for each of the three dominant Hg-methylating clades. The clade-specific qPCR primers amplified hgcA from 64%, 88%, and 86% of tested pure cultures of Deltaproteobacteria, Firmicutes, and Archaea, respectively, and were highly specific for each clade. Amplification efficiencies and detection limits were quantified for each organism. Primer sensitivity varied among species based on sequence conservation. Finally, to begin to evaluate the utility of our primer sets in nature, we tested hgcA and hgcAB recovery from pure cultures spiked into sand and soil. These novel quantitative molecular tools designed in this study will allow for more accurate identification and quantification of the individual Hg-methylating groups of microorganisms in the environment. The resulting data will be essential in developing accurate and robust predictive models of Hg methylation potential, ideally integrating the geochemistry of Hg methylation with the microbiology and genetics of hgcAB. IMPORTANCE: The neurotoxin methylmercury (MeHg) poses a serious risk to human health. MeHg production in nature is associated with anaerobic microorganisms. The recent discovery of the Hg-methylating gene pair, hgcA and hgcB, has allowed us to design and optimize molecular probes against these genes within the genomic DNA of microorganisms known to methylate Hg. The protocols designed in this study allow for both qualitative and quantitative assessments of pure-culture or environmental samples. With these protocols in hand, we can begin to study the distribution of Hg-methylating organisms in nature via a cultivation-independent strategy. Copyright © 2016 Christensen et al.
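The reported amplification efficiencies follow from the usual dilution-series standard curve: the slope of Cq versus log10(copies) gives E = 10^(-1/slope) - 1, where a slope of about -3.32 corresponds to 100% efficiency. A minimal sketch of that calculation:

```python
import numpy as np

def qpcr_efficiency(log10_copies, cq):
    """Amplification efficiency from a qPCR standard curve.
    Returns (efficiency, slope, intercept); efficiency 1.0 means 100%."""
    slope, intercept = np.polyfit(log10_copies, cq, 1)
    return 10 ** (-1.0 / slope) - 1.0, slope, intercept
```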
NASA Astrophysics Data System (ADS)
Koch, Wolfgang
1996-05-01
Sensor data processing in a dense-target/dense-clutter environment is inevitably confronted with data association conflicts, which give rise to the multiple-hypothesis character of many modern approaches (MHT: multiple hypothesis tracking). In this paper we analyze the efficiency of retrodictive techniques that generalize standard fixed-interval smoothing to MHT applications. 'Delayed estimation' based on retrodiction provides uniquely interpretable and accurate trajectories from ambiguous MHT output if a certain time delay is tolerated. In a Bayesian framework, the theoretical background of retrodiction and its intimate relation to Bayesian MHT is sketched. Using a simulated example with two closely spaced targets, relatively low detection probabilities, and rather high false-return densities, we demonstrate the benefits of retrodiction and quantitatively discuss the achievable track accuracies and the time delays involved for typical radar parameters.
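Retrodiction generalizes fixed-interval smoothing to the multiple-hypothesis case; in the single-hypothesis limit it reduces to the Rauch-Tung-Striebel smoother. A minimal sketch of that limit, assuming the filtered and one-step-predicted moments come from a forward Kalman pass (F is the state-transition matrix):

```python
import numpy as np

def rts_smoother(F, x_filt, P_filt, x_pred, P_pred):
    """Rauch-Tung-Striebel fixed-interval smoother: the single-hypothesis
    special case of retrodiction. x_filt/x_pred: (n, d) arrays of filtered
    and predicted states; P_filt/P_pred: (n, d, d) covariances."""
    n = len(x_filt)
    xs, Ps = x_filt.copy(), P_filt.copy()
    for k in range(n - 2, -1, -1):
        G = P_filt[k] @ F.T @ np.linalg.inv(P_pred[k + 1])   # smoother gain
        xs[k] = x_filt[k] + G @ (xs[k + 1] - x_pred[k + 1])
        Ps[k] = P_filt[k] + G @ (Ps[k + 1] - P_pred[k + 1]) @ G.T
    return xs, Ps
```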
Diagnosis of Fanconi Anemia: Chromosomal Breakage Analysis
Oostra, Anneke B.; Nieuwint, Aggie W. M.; Joenje, Hans; de Winter, Johan P.
2012-01-01
Fanconi anemia (FA) is a rare inherited syndrome with diverse clinical symptoms including developmental defects, short stature, bone marrow failure, and a high risk of malignancies. Fifteen genetic subtypes have been distinguished so far. The mode of inheritance for all subtypes is autosomal recessive, except for FA-B, which is X-linked. Cells derived from FA patients are—by definition—hypersensitive to DNA cross-linking agents, such as mitomycin C, diepoxybutane, or cisplatinum, which becomes manifest as excessive growth inhibition, cell cycle arrest, and chromosomal breakage upon cellular exposure to these drugs. Here we provide a detailed laboratory protocol for the accurate assessment of the FA diagnosis based on mitomycin C-induced chromosomal breakage analysis in whole-blood cultures. The method also enables a quantitative estimate of the degree of mosaicism in the lymphocyte compartment of the patient. PMID:22693659
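The quantitative mosaicism estimate mentioned at the end is, in essence, the fraction of aberration-free (cross-linker-resistant) lymphocytes among the metaphases scored after mitomycin C exposure. A trivial sketch of that readout; the protocol's actual scoring criteria are more involved than this.

```python
def mosaicism_percent(n_aberration_free, n_scored):
    """Percent of aberration-free lymphocytes among scored metaphases,
    a simple readout of mosaicism in the breakage test (illustrative)."""
    return 100.0 * n_aberration_free / n_scored
```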
Sublimation rate of molecular crystals - role of internal degrees of freedom
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maiti, A; Zepeda-Ruiz, L A; Gee, R H
2007-01-19
It is a common practice to estimate the site desorption rate from crystal surfaces with an Arrhenius expression of the form ν_eff·exp(−ΔE/k_B·T), where ΔE is the activation barrier to desorb and ν_eff is an effective vibrational frequency of ~10¹² s⁻¹. However, such a formula can underestimate sublimation rates in molecular crystals by several to many orders of magnitude due to internal degrees of freedom. We carry out a quantitative comparison of two energetic molecular crystals with crystals of smaller entities, like ice and solid argon, and uncover the errors involved as a function of molecule size. In the process, we also develop a formal definition of ν_eff and an accurate working expression for equilibrium vapor pressure.
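Evaluating the conventional expression makes the scale of the claimed error easy to check. A minimal sketch, using the ~10¹² s⁻¹ prefactor the abstract questions:

```python
import numpy as np

K_B_EV = 8.617333262e-5          # Boltzmann constant, eV/K

def desorption_rate(delta_e_ev, temp_k, nu_eff=1e12):
    """Site desorption rate nu_eff * exp(-dE / (kB * T)); nu_eff ~ 1e12 /s
    is the conventional prefactor whose adequacy the abstract disputes."""
    return nu_eff * np.exp(-delta_e_ev / (K_B_EV * temp_k))
```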
NASA Astrophysics Data System (ADS)
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.
2013-09-01
Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS), while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully to the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods was investigated over the ranges of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively.
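Of the four methods, PLS is the most readily sketched in code. A minimal multivariate calibration with scikit-learn; the array shapes and component count are placeholders (in practice the component count is cross-validated):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# X: spectra (n_samples x n_wavelengths); Y: AML, VAL, HCT concentrations.
# Synthetic shapes for illustration only.
X = np.random.rand(30, 200)
Y = np.random.rand(30, 3)

pls = PLSRegression(n_components=5)   # component count would be cross-validated
pls.fit(X, Y)
Y_hat = pls.predict(X)                # predicted concentrations
```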
Rapid measurement of 3J(HN-Hα) and 3J(N-Hβ) coupling constants in polypeptides.
Barnwal, Ravi Pratap; Rout, Ashok K; Chary, Kandala V R; Atreya, Hanudatta S
2007-12-01
We present two NMR experiments, (3,2)D HNHA and (3,2)D HNHB, for rapid and accurate measurement of 3J(HN-Hα) and 3J(N-Hβ) coupling constants in polypeptides, based on the principles of G-matrix Fourier transform NMR spectroscopy and quantitative J-correlation. These experiments, which facilitate fast acquisition of three-dimensional data with high spectral/digital resolution and chemical shift dispersion, provide renewed opportunities for sequence-specific resonance assignment, estimation and characterization of secondary structure with or without prior knowledge of resonance assignments, stereospecific assignment of prochiral groups, and 3D structure determination, refinement, and validation. Taken together, these experiments have a wide range of applications, from structural genomics projects to studying structure and folding in polypeptides.
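In quantitative J-correlation, the coupling is extracted from a cross-to-diagonal peak intensity ratio; for HNHA-type experiments the Vuister-Bax relation |I_cross/I_diag| = tan²(2πJζ) is commonly used. A hedged sketch of that inversion; the delay ζ is experiment-specific and the value below is only a typical choice, not one taken from this paper.

```python
import numpy as np

def j_from_intensity_ratio(i_cross, i_diag, zeta_s=0.01305):
    """3J (Hz) from the cross/diagonal intensity ratio via the Vuister-Bax
    relation |ratio| = tan^2(2*pi*J*zeta); zeta_s is an assumed delay."""
    return np.arctan(np.sqrt(abs(i_cross / i_diag))) / (2 * np.pi * zeta_s)
```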
The costs of introducing new technologies into space systems
NASA Technical Reports Server (NTRS)
Dodson, E. N.; Partma, H.; Ruhland, W.
1992-01-01
A review is conducted of cost-research studies intended to provide guidelines for cost estimates of integrating new technologies into existing satellite systems. Quantitative methods are described for determining the technological state of the art so that proposed programs can be evaluated accurately in terms of their contribution to technological development. The R&D costs associated with the proposed programs are then assessed with attention given to the technological advances. Any reductions in the costs of production, operations, and support afforded by the advanced technologies are also incorporated quantitatively. The proposed model is applied to a satellite sizing and cost study in which a tradeoff between increased R&D costs and reduced production costs is examined. The technology/cost model provides a consistent yardstick for assessing the true relative economic impact of introducing novel techniques and technologies.
Breast ultrasound tomography with two parallel transducer arrays: preliminary clinical results
NASA Astrophysics Data System (ADS)
Huang, Lianjie; Shin, Junseob; Chen, Ting; Lin, Youzuo; Intrator, Miranda; Hanson, Kenneth; Epstein, Katherine; Sandoval, Daniel; Williamson, Michael
2015-03-01
Ultrasound tomography has great potential to provide quantitative estimates of the physical properties of breast tumors for accurate characterization of breast cancer. We designed and manufactured a new synthetic-aperture breast ultrasound tomography system with two parallel transducer arrays. The distance between the two transducer arrays is adjustable for scanning breasts of different sizes. The ultrasound transducer arrays are translated vertically to scan the entire breast slice by slice, acquiring ultrasound transmission and reflection data for whole-breast ultrasound imaging and tomographic reconstruction. We used the system to acquire patient data at the University of New Mexico Hospital for clinical studies. We present preliminary imaging results from in vivo patient ultrasound data. These preliminary clinical results show the promise of our breast ultrasound tomography system with two parallel transducer arrays for breast cancer imaging and characterization.
NASA Astrophysics Data System (ADS)
Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei
2017-03-01
The temporal focusing two-photon microscope (TFM) is developed to perform depth-resolved wide-field fluorescence imaging by capturing frames sequentially. However, due to strong, non-negligible noise and diffraction rings surrounding particles, further analysis is extremely difficult without a precise particle localization technique. In this paper, we developed a fully automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale kernel graph cuts global and local segmentation algorithm, and a kinematic-estimation-based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with removal of unrelated pixels. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.
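The hybrid Kalman denoising step is specific to the paper, but its simplest relative, a scalar random-walk Kalman filter run along each pixel's frame series, conveys the idea. In the sketch below, the process and measurement noise variances q and r are assumed values.

```python
def kalman_denoise_1d(z, q=1e-4, r=1e-2):
    """Scalar random-walk Kalman filter over one pixel's frame series z
    (a minimal stand-in for the hybrid Kalman step; q, r are assumed)."""
    x, p, out = z[0], 1.0, []
    for zk in z:
        p += q                   # predict: random-walk state, variance grows
        k = p / (p + r)          # Kalman gain
        x += k * (zk - x)        # update toward the new measurement
        p *= (1.0 - k)
        out.append(x)
    return out
```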
Ma, Shuguang; Li, Zhiling; Lee, Keun-Joong; Chowdhury, Swapan K
2010-12-20
A simple, reliable, and accurate method was developed for quantitative assessment of metabolite coverage in preclinical safety species by mixing equal volumes of human plasma with blank plasma of animal species and vice versa, followed by analysis using high-resolution full-scan accurate mass spectrometry. This approach provided comparable results (within ±15%) to those obtained from regulated bioanalysis and did not require synthetic standards or radiolabeled compounds. In addition, both qualitative and quantitative data were obtained from a single LC-MS analysis of all metabolites, and therefore the coverage of any metabolite of interest can be obtained.
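With equal-volume matrix mixing, the animal-to-human exposure comparison for each metabolite reduces to a ratio of peak responses measured in the two mixed samples; a trivial sketch with illustrative names:

```python
def coverage_percent(animal_response, human_response):
    """Animal-to-human exposure ratio (%) for one metabolite, from peak
    responses in the mixed-matrix samples (illustrative names)."""
    return 100.0 * animal_response / human_response
```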
Quantitative Doppler Analysis Using Conventional Color Flow Imaging Acquisitions.
Karabiyik, Yucel; Ekroll, Ingvild Kinn; Eik-Nes, Sturla H; Lovstakken, Lasse
2018-05-01
Interleaved acquisitions used in conventional triplex mode result in a tradeoff between the frame rate and the quality of velocity estimates. On the other hand, workflow becomes inefficient when the user has to switch between different modes, and measurement variability is increased. This paper investigates the use of the power spectral Capon estimator in quantitative Doppler analysis using data acquired with conventional color flow imaging (CFI) schemes. To preserve the number of samples used for velocity estimation, only spatial averaging was utilized, and clutter rejection was performed after spectral estimation. The resulting velocity spectra were evaluated in terms of spectral width using a recently proposed spectral envelope estimator. The spectral envelopes were also used for Doppler index calculations using in vivo and string phantom acquisitions. In vivo results demonstrated that the Capon estimator can provide spectral estimates with sufficient quality for quantitative analysis using packet-based CFI acquisitions. The calculated Doppler indices were similar to the values calculated using spectrograms estimated on a commercial ultrasound scanner.
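A minimal Capon (minimum-variance) spectral estimator over one slow-time ensemble, P(f) = 1/(aᴴR⁻¹a). Forward-only subaperture averaging and diagonal loading are assumed implementation choices here, not necessarily those of the paper.

```python
import numpy as np

def capon_spectrum(x, freqs, diag_load=1e-2):
    """Capon power spectrum of a complex slow-time ensemble x of length N,
    evaluated at normalized frequencies freqs (cycles/sample)."""
    x = np.asarray(x)
    N = len(x)
    L = N // 2                                        # subaperture length
    snaps = np.stack([x[i:i + L] for i in range(N - L + 1)])
    R = snaps.conj().T @ snaps / snaps.shape[0]       # covariance estimate
    R += diag_load * np.trace(R).real / L * np.eye(L) # diagonal loading
    Rinv = np.linalg.inv(R)
    P = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * np.arange(L))     # steering vector
        P.append(1.0 / np.real(a.conj() @ Rinv @ a))
    return np.asarray(P)
```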
ERIC Educational Resources Information Center
Pobocik, Tamara J.
2013-01-01
The use of technology and electronic medical records in healthcare has exponentially increased. This quantitative research project used a pretest/posttest design and reviewed how an educational electronic documentation system helped nursing students to identify the accurate "related to" statement of the nursing diagnosis for the patient in the case…
Smile line assessment comparing quantitative measurement and visual estimation.
Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie
2011-02-01
Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
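The reported agreement statistics are kappa values; computing Cohen's kappa for a 3-grade scale is a one-liner with scikit-learn. The rater labels below are illustrative, not data from the study.

```python
from sklearn.metrics import cohen_kappa_score

# Two raters' 3-grade smile-line categories (illustrative labels)
rater1 = ["low", "average", "high", "average", "high"]
rater2 = ["low", "average", "high", "high", "high"]

kappa = cohen_kappa_score(rater1, rater2)   # 1.0 = perfect agreement
```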
DISAGGREGATION OF GOES LAND SURFACE TEMPERATURES USING SURFACE EMISSIVITY
USDA-ARS's Scientific Manuscript database
Accurate temporal and spatial estimation of land surface temperatures (LST) is important for modeling the hydrological cycle at field to global scales because LSTs can improve estimates of soil moisture and evapotranspiration. Using remote sensing satellites, accurate LSTs could be routine, but unfo...