Sample records for obtaining quantitative estimates

  1. Repeatability Assessment by ISO 11843-7 in Quantitative HPLC for Herbal Medicines.

    PubMed

    Chen, Liangmian; Kotani, Akira; Hakamata, Hideki; Tsutsumi, Risa; Hayashi, Yuzuru; Wang, Zhimin; Kusu, Fumiyo

    2015-01-01

    We have proposed an assessment method for estimating the measurement relative standard deviation (RSD) of chromatographic peaks in quantitative HPLC for herbal medicines by the methodology of ISO 11843 Part 7 (ISO 11843-7:2012), which provides detection limits stochastically. In quantitative HPLC with UV detection (HPLC-UV) of Scutellaria Radix for the determination of baicalin, the measurement RSD of baicalin obtained stochastically by ISO 11843-7:2012 fell within the 95% confidence interval of the RSD obtained statistically from repetitive measurements (n = 6). Thus, our findings show that the method is applicable for estimating the repeatability of HPLC-UV determinations of baicalin without repeated measurements. In addition, the allowable limit of the "System repeatability" in "Liquid Chromatography" regulated in a pharmacopoeia can be obtained by the present assessment method. Moreover, the present assessment method was also successfully applied to estimate the measurement RSDs of quantitative three-channel liquid chromatography with electrochemical detection (LC-3ECD) of Chrysanthemi Flos for determining caffeoylquinic acids and flavonoids. By the present repeatability assessment method, a reliable measurement RSD was obtained stochastically, and the experimental time was remarkably reduced.
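
    A minimal sketch of the conventional repetitive-measurement route that the stochastic ISO 11843-7 estimate is validated against: the RSD of n = 6 replicate peak areas and its 95% confidence interval from the chi-squared distribution. The peak-area values are invented for illustration.

      import numpy as np
      from scipy import stats

      # Illustrative replicate peak areas for baicalin (n = 6); not real data.
      areas = np.array([1520.3, 1498.7, 1511.2, 1530.9, 1505.4, 1518.8])

      n = len(areas)
      rsd = 100.0 * areas.std(ddof=1) / areas.mean()  # relative standard deviation, %

      # 95% CI for the RSD via the chi-squared interval for the variance.
      alpha = 0.05
      lo = rsd * np.sqrt((n - 1) / stats.chi2.ppf(1 - alpha / 2, n - 1))
      hi = rsd * np.sqrt((n - 1) / stats.chi2.ppf(alpha / 2, n - 1))
      print(f"RSD = {rsd:.2f}% (95% CI {lo:.2f}% to {hi:.2f}%)")

    The stochastic estimate is considered validated when it falls inside such an interval.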

  2. Uniform gradient estimates on manifolds with a boundary and applications

    NASA Astrophysics Data System (ADS)

    Cheng, Li-Juan; Thalmaier, Anton; Thompson, James

    2018-04-01

    We revisit the problem of obtaining uniform gradient estimates for Dirichlet and Neumann heat semigroups on Riemannian manifolds with boundary. As applications, we obtain isoperimetric inequalities, using Ledoux's argument, and uniform quantitative gradient estimates, firstly for C^2_b functions with boundary conditions and then for the unit spectral projection operators of Dirichlet and Neumann Laplacians.

  3. On differences of linear positive operators

    NASA Astrophysics Data System (ADS)

    Aral, Ali; Inoan, Daniela; Raşa, Ioan

    2018-04-01

    In this paper we consider two different general linear positive operators defined on an unbounded interval and obtain quantitative estimates for the differences of these operators. Our estimates involve an appropriate K-functional and a weighted modulus of smoothness. Similar estimates are obtained for the Chebyshev functional of these operators. All considerations are based on a rearrangement of the remainder in Taylor's formula. The results obtained are applied to some well-known linear positive operators.

  4. Inter-rater reliability of motor unit number estimates and quantitative motor unit analysis in the tibialis anterior muscle.

    PubMed

    Boe, S G; Dalton, B H; Harwood, B; Doherty, T J; Rice, C L

    2009-05-01

    To establish the inter-rater reliability of decomposition-based quantitative electromyography (DQEMG) derived motor unit number estimates (MUNEs) and quantitative motor unit (MU) analysis. Using DQEMG, two examiners independently obtained a sample of needle and surface-detected motor unit potentials (MUPs) from the tibialis anterior muscle from 10 subjects. Coupled with a maximal M wave, surface-detected MUPs were used to derive a MUNE for each subject and each examiner. Additionally, size-related parameters of the individual MUs were obtained following quantitative MUP analysis. Test-retest MUNE values were similar with high reliability observed between examiners (ICC=0.87). Additionally, MUNE variability from test-retest as quantified by a 95% confidence interval was relatively low (+/-28 MUs). Lastly, quantitative data pertaining to MU size, complexity and firing rate were similar between examiners. MUNEs and quantitative MU data can be obtained with high reliability by two independent examiners using DQEMG. Establishing the inter-rater reliability of MUNEs and quantitative MU analysis using DQEMG is central to the clinical applicability of the technique. In addition to assessing response to treatments over time, multiple clinicians may be involved in the longitudinal assessment of the MU pool of individuals with disorders of the central or peripheral nervous system.
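
    Deriving a MUNE from these quantities is, at its core, a simple ratio; the sketch below divides the maximal M-wave size by the mean surface-detected MUP size. The values are illustrative only, and this is not the DQEMG implementation.

      import numpy as np

      # Illustrative amplitudes; DQEMG derives these from decomposed EMG recordings.
      m_wave_uV = 6200.0                                   # maximal M wave
      smup_uV = np.array([38.5, 52.1, 44.0, 61.3, 47.9])   # surface-detected MUPs

      mune = m_wave_uV / smup_uV.mean()                    # motor unit number estimate
      print(f"MUNE ~ {mune:.0f} motor units")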

  5. Measurement of lung expansion with computed tomography and comparison with quantitative histology.

    PubMed

    Coxson, H O; Mayo, J R; Behzad, H; Moore, B J; Verburgt, L M; Staples, C A; Paré, P D; Hogg, J C

    1995-11-01

    The total and regional lung volumes were estimated from computed tomography (CT), and the pleural pressure gradient was determined by using the milliliters of gas per gram of tissue estimated from the X-ray attenuation values and the pressure-volume curve of the lung. The data show that CT accurately estimated the volume of the resected lobe but overestimated its weight by 24 +/- 19%. The volume of gas per gram of tissue was less in the gravity-dependent regions due to a pleural pressure gradient of 0.24 +/- 0.08 cmH2O/cm of descent in the thorax. The proportion of tissue to air obtained with CT was similar to that obtained by quantitative histology. We conclude that the CT scan can be used to estimate total and regional lung volumes and that measurements of the proportions of tissue and air within the thorax by CT can be used in conjunction with quantitative histology to evaluate lung structure.
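
    A minimal sketch of this kind of attenuation-based calculation, assuming the common linear CT calibration in which gas fraction = -HU/1000 for -1000 <= HU <= 0 and an assumed soft-tissue density of 1.065 g/ml; the authors' exact calibration may differ.

      TISSUE_DENSITY_G_PER_ML = 1.065  # assumed soft-tissue density

      def ml_gas_per_g_tissue(hu: float) -> float:
          """Millilitres of gas per gram of tissue in a voxel with attenuation hu."""
          gas_fraction = -hu / 1000.0
          tissue_fraction = 1.0 + hu / 1000.0
          return gas_fraction / (tissue_fraction * TISSUE_DENSITY_G_PER_ML)

      print(ml_gas_per_g_tissue(-850.0))  # ~5.3 ml/g in a well-aerated region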

  6. Voronovskaja's theorem revisited

    NASA Astrophysics Data System (ADS)

    Tachev, Gancho T.

    2008-07-01

    We present a new quantitative variant of Voronovskaja's theorem for the Bernstein operator. This estimate improves the recent quantitative versions of Voronovskaja's theorem for certain Bernstein-type operators obtained by H. Gonska, P. Pitul and I. Rasa in 2006.

  7. Comparison of Dynamic Contrast Enhanced MRI and Quantitative SPECT in a Rat Glioma Model

    PubMed Central

    Skinner, Jack T.; Yankeelov, Thomas E.; Peterson, Todd E.; Does, Mark D.

    2012-01-01

    Pharmacokinetic modeling of dynamic contrast enhanced (DCE)-MRI data provides measures of the extracellular volume fraction (ve) and the volume transfer constant (Ktrans) in a given tissue. These parameter estimates may be biased, however, by confounding issues such as contrast agent and tissue water dynamics, or assumptions of vascularization and perfusion made by the commonly used model. In contrast to MRI, radiotracer imaging with SPECT is insensitive to water dynamics. A quantitative dual-isotope SPECT technique was developed to obtain an estimate of ve in a rat glioma model for comparison to the corresponding estimates obtained using DCE-MRI with a vascular input function (VIF) and reference region model (RR). Both DCE-MRI methods produced consistently larger estimates of ve in comparison to the SPECT estimates, and several experimental sources were postulated to contribute to these differences. PMID:22991315

  8. New semi-quantitative 123I-MIBG estimation method compared with scoring system in follow-up of advanced neuroblastoma: utility of total MIBG retention ratio versus scoring method.

    PubMed

    Sano, Yuko; Okuyama, Chio; Iehara, Tomoko; Matsushima, Shigenori; Yamada, Kei; Hosoi, Hajime; Nishimura, Tsunehiko

    2012-07-01

    The purpose of this study was to evaluate a new semi-quantitative estimation method using the (123)I-MIBG retention ratio to assess response to chemotherapy for advanced neuroblastoma. Thirteen children with advanced neuroblastoma (International Neuroblastoma Risk Group Staging System: stage M) were examined in a total of 51 studies with (123)I-MIBG scintigraphy (before and during chemotherapy). We proposed a new semi-quantitative method using the MIBG retention ratio (count obtained with the delayed image/count obtained with the early image, with decay correction) to estimate MIBG accumulation. We analyzed the total (123)I-MIBG retention ratio (TMRR: total body count obtained with the delayed image/total body count obtained with the early image, with decay correction) and compared it with a scoring method in terms of correlation with tumor markers. TMRR showed significantly higher correlations with urinary catecholamine metabolites before chemotherapy (VMA: r(2) = 0.45, P < 0.05; HVA: r(2) = 0.627, P < 0.01) than the MIBG score (VMA: r(2) = 0.19, P = 0.082; HVA: r(2) = 0.25, P = 0.137). There were relatively good correlations between serial changes of TMRR and those of urinary catecholamine metabolites (VMA: r(2) = 0.274, P < 0.001; HVA: r(2) = 0.448, P < 0.0001) compared with serial changes of the MIBG score and those of tumor markers (VMA: r(2) = 0.01, P = 0.537; HVA: r(2) = 0.084, P = 0.697) during chemotherapy for advanced neuroblastoma. TMRR could be a useful semi-quantitative method for estimating early response to chemotherapy of advanced neuroblastoma because of its high correlation with urinary catecholamine metabolites.
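
    The TMRR itself is a decay-corrected count ratio. A sketch follows, assuming the 13.2 h physical half-life of (123)I; the early/delayed acquisition times are illustrative, as they are not stated in this record.

      I123_HALF_LIFE_H = 13.2  # physical half-life of (123)I, hours

      def tmrr(early_counts: float, delayed_counts: float, delta_t_h: float) -> float:
          """Total MIBG retention ratio: delayed/early total-body counts,
          with the delayed counts corrected for (123)I physical decay."""
          decay_correction = 2.0 ** (delta_t_h / I123_HALF_LIFE_H)
          return delayed_counts * decay_correction / early_counts

      # Illustrative: early image at 4 h and delayed image at 24 h post-injection.
      print(tmrr(early_counts=1.8e6, delayed_counts=5.1e5, delta_t_h=20.0))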

  9. FPGA-based fused smart-sensor for tool-wear area quantitative estimation in CNC machine inserts.

    PubMed

    Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto

    2010-01-01

    Manufacturing processes are of great relevance nowadays, when there is a constant demand for better productivity with high quality at low cost. The contribution of this work is the development of an FPGA-based fused smart-sensor to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier and a 3-axis accelerometer. Experimental results show that fusing the two signals yields three times better accuracy than is obtained from the current or vibration signals used individually.

  10. 76 FR 50904 - Thiamethoxam; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-17

    ... exposure and risk. A separate assessment was done for clothianidin. i. Acute exposure. Quantitative acute... not expected to pose a cancer risk, a quantitative dietary exposure assessment for the purposes of...-dietary sources of post application exposure to obtain an estimate of potential combined exposure. These...

  11. QuantFusion: Novel Unified Methodology for Enhanced Coverage and Precision in Quantifying Global Proteomic Changes in Whole Tissues.

    PubMed

    Gunawardena, Harsha P; O'Brien, Jonathon; Wrobel, John A; Xie, Ling; Davies, Sherri R; Li, Shunqiang; Ellis, Matthew J; Qaqish, Bahjat F; Chen, Xian

    2016-02-01

    Single quantitative platforms such as label-based or label-free quantitation (LFQ) involve compromises in the accuracy, precision, protein sequence coverage, and speed of quantifiable proteomic measurements. To maximize the quantitative precision and the number of quantifiable proteins, i.e., the quantifiable coverage of tissue proteomes, we have developed a unified approach, termed QuantFusion, that combines the quantitative ratios of all peptides measured by both LFQ and label-based methodologies. Here, we demonstrate the use of QuantFusion in determining the proteins differentially expressed in a pair of patient-derived tumor xenografts (PDXs) representing two major breast cancer (BC) subtypes, basal and luminal. Label-based in-spectra quantitative peptides derived from amino acid-coded tagging (AACT, also known as SILAC) of a non-malignant mammary cell line were uniformly added to each xenograft at a constant predefined ratio, from which Ratio-of-Ratio estimates were obtained for the label-free peptides paired with AACT peptides in each PDX tumor. A mixed-model statistical analysis was used to determine global differential protein expression by combining the complementary quantifiable peptide ratios measured by LFQ and Ratio-of-Ratios, respectively. With the minimum number of replicates required for obtaining statistically significant ratios, QuantFusion uses distinct mechanisms to "rescue" the missing data inherent to both LFQ and label-based quantitation. Combining quantifiable peptide data from both quantitative schemes increased the overall number of peptide-level measurements and protein-level estimates. In our analysis of the PDX tumor proteomes, QuantFusion increased the number of distinct peptide ratios by 65%, representing differentially expressed proteins between the BC subtypes. This improvement in quantifiable coverage, in turn, not only increased the number of measurable protein fold-changes by 8% but also increased the average precision of the quantitative estimates by 181%, so that some BC subtype-specific proteins were rescued by QuantFusion. Thus, incorporating data from multiple quantitative approaches while accounting for measurement variability at both the peptide and global protein levels makes QuantFusion unique for obtaining increased coverage and quantitative precision for tissue proteomes. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  12. FPGA-Based Fused Smart-Sensor for Tool-Wear Area Quantitative Estimation in CNC Machine Inserts

    PubMed Central

    Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto

    2010-01-01

    Manufacturing processes are of great relevance nowadays, when there is a constant demand for better productivity with high quality at low cost. The contribution of this work is the development of an FPGA-based fused smart-sensor to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier and a 3-axis accelerometer. Experimental results show that fusing the two signals yields three times better accuracy than is obtained from the current or vibration signals used individually. PMID:22319304

  13. Quantification of Microbial Phenotypes

    PubMed Central

    Martínez, Verónica S.; Krömer, Jens O.

    2016-01-01

    Metabolite profiling technologies have improved to generate close to quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and the current challenges to generate fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. Here we explain how to estimate Gibbs energy under physiological conditions, including examples of the estimations, and the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) these two methods are complementary to each other; and (2) there is a need to take into account Gibbs energy errors. Better estimations of metabolic phenotypes will be obtained when further constraints are included in the analysis. PMID:27941694
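
    For the Gibbs-energy step, the standard transformed relation dG' = dG'0 + RT ln Q can be sketched as follows; the reaction and the dG'0 value are assumed for illustration and are not taken from the review.

      import math

      R_KJ = 8.314462618e-3  # gas constant, kJ/(mol*K)

      def reaction_gibbs_energy(dg0_prime_kj: float, conc_M: dict, stoich: dict,
                                temp_k: float = 310.15) -> float:
          """dG' = dG'0 + RT ln Q, with Q built from metabolite concentrations
          (in M) and signed stoichiometric coefficients."""
          ln_q = sum(nu * math.log(conc_M[m]) for m, nu in stoich.items())
          return dg0_prime_kj + R_KJ * temp_k * ln_q

      # Illustrative isomerization with an assumed dG'0 of +2.5 kJ/mol.
      print(reaction_gibbs_energy(2.5, {"g6p": 5e-3, "f6p": 1e-3},
                                  {"g6p": -1, "f6p": +1}))  # ~ -1.6 kJ/mol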

  14. Comparative Application of PLS and PCR Methods to Simultaneous Quantitative Estimation and Simultaneous Dissolution Test of Zidovudine - Lamivudine Tablets.

    PubMed

    Üstündağ, Özgür; Dinç, Erdal; Özdemir, Nurten; Tilkan, M Günseli

    2015-01-01

    In the development strategies for new drug products and generic drug products, the simultaneous in-vitro dissolution behavior of oral dosage formulations is the most important indicator for the quantitative estimation of the efficiency and biopharmaceutical characteristics of drug substances. This compels scientists in the field to develop powerful analytical methods that give more reliable, precise and accurate results in the quantitative analysis and dissolution testing of drug formulations. In this context, two chemometric tools, partial least squares (PLS) and principal component regression (PCR), were developed for the simultaneous quantitative estimation and dissolution testing of zidovudine (ZID) and lamivudine (LAM) in a tablet dosage form. The results obtained in this study strongly encourage their use for quality control, routine analysis and dissolution testing of marketed tablets containing the ZID and LAM drugs.
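
    As an illustration of the two chemometric calibrations, the sketch below uses scikit-learn on synthetic spectra; the authors' own implementation and data are not described in this record, so every value here is a stand-in.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      # X: spectra of calibration mixtures; Y: known ZID/LAM concentrations.
      # Synthetic stand-ins: mixture spectra = concentrations x pure profiles.
      rng = np.random.default_rng(0)
      Y = rng.uniform(5, 25, size=(20, 2))               # [ZID, LAM], ug/mL
      S = rng.random((2, 100))                           # pure-component profiles
      X = Y @ S + 0.01 * rng.standard_normal((20, 100))  # noisy mixture spectra

      pls = PLSRegression(n_components=2).fit(X, Y)
      pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, Y)

      unknown = X[:1]  # spectrum of an "unknown" dissolution sample
      print(pls.predict(unknown), pcr.predict(unknown))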

  15. Improved quantitative analysis of spectra using a new method of obtaining derivative spectra based on a singular perturbation technique.

    PubMed

    Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan

    2015-06-01

    Spectroscopy is often applied when a rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise from non-ideal instrument and sample properties. In this study, to improve the quantitative analysis of near-infrared spectra, derivatives of noisy raw spectral data need to be estimated with high accuracy. A new spectral estimator based on a singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, and a stability analysis of the estimator is given. Theoretical analysis and simulation results confirm that derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated using the analysis of beer spectra. The derivative spectra of the beer and marzipan datasets are used to build calibration models using partial least squares (PLS) modeling. The results show that PLS based on the new estimator can achieve better performance than the Savitzky-Golay algorithm and can serve as an alternative choice for quantitative analytical applications.
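
    For orientation, the Savitzky-Golay baseline that the SPSE is benchmarked against takes a few lines with scipy; the spectrum below is synthetic, and the SPSE itself is not reproduced here.

      import numpy as np
      from scipy.signal import savgol_filter

      # Synthetic "spectrum": two overlapping Gaussian bands plus noise.
      x = np.linspace(0.0, 10.0, 500)
      y = np.exp(-(x - 4.0) ** 2 / 0.5) + 0.6 * np.exp(-(x - 5.2) ** 2 / 0.3)
      y_noisy = y + 0.01 * np.random.default_rng(1).standard_normal(x.size)

      # First-derivative spectrum via Savitzky-Golay (21-point window, cubic fit).
      dy = savgol_filter(y_noisy, window_length=21, polyorder=3, deriv=1,
                         delta=x[1] - x[0])
      # dy would then feed the PLS calibration, as in the study's comparison.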

  16. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-04-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  17. Benzene exposure in the petroleum distribution industry associated with leukemia in the United Kingdom: overview of the methodology of a case-control study.

    PubMed Central

    Rushton, L

    1996-01-01

    This paper describes the basic principles underlying the methodology for obtaining quantitative estimates of benzene exposure in the petroleum marketing and distribution industry. Work histories for 91 cases of leukemia and 364 matched controls (4 per case), identified from a cohort of oil distribution workers followed to the end of 1992, were obtained, primarily from personnel records. Information on the distribution sites, more than 90% of which had closed by the time of data collection, was obtained from site visits and archive material. Industrial hygiene measurements made under known conditions were assembled for different tasks. Where measured data were not available, exposures were adjusted using variables known to influence exposure, such as temperature, technology, percentage of benzene in the fuel handled, products handled, number of loads, and job activity. Quantitative estimates of dermal contact and peak exposure were also made. PMID:9118922

  18. Genomic Quantitative Genetics to Study Evolution in the Wild.

    PubMed

    Gienapp, Phillip; Fior, Simone; Guillaume, Frédéric; Lasky, Jesse R; Sork, Victoria L; Csilléry, Katalin

    2017-12-01

    Quantitative genetic theory provides a means of estimating the evolutionary potential of natural populations. However, this approach was previously only feasible in systems where the genetic relatedness between individuals could be inferred from pedigrees or experimental crosses. The genomic revolution opened up the possibility of obtaining the realized proportion of the genome shared among individuals in natural populations of virtually any species, promising (more) accurate estimates of quantitative genetic parameters. Such a 'genomic' quantitative genetics approach relies on fewer assumptions, offers greater methodological flexibility, and is thus expected to greatly enhance our understanding of evolution in natural populations, for example, in the context of adaptation to environmental change, eco-evolutionary dynamics, and biodiversity conservation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Food Consumption and Handling Survey for Quantitative Microbiological Consumer Phase Risk Assessments.

    PubMed

    Chardon, Jurgen; Swart, Arno

    2016-07-01

    In the consumer phase of a typical quantitative microbiological risk assessment (QMRA), the mathematical models identify data gaps. To acquire useful data we designed a food consumption and food handling survey (2,226 respondents) for QMRA applications, aimed especially at obtaining quantitative data. For a broad spectrum of food products, the survey covered the following topics: processing status at retail, consumer storage, preparation, and consumption. Questions were designed to facilitate distribution fitting. In the statistical analysis, special attention was given to selecting the most adequate distribution to describe the data. Bootstrap procedures were used to describe uncertainty. The final result was a coherent quantitative consumer-phase food survey and parameter estimates for food handling and consumption practices in The Netherlands, including variation over individuals and uncertainty estimates.
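
    A minimal sketch of the distribution-fitting and bootstrap workflow described, with invented survey responses and an assumed lognormal candidate distribution.

      import numpy as np
      from scipy import stats

      # Invented survey responses: reported home-storage times of a product (days).
      storage_days = np.array([1, 2, 2, 3, 3, 3, 4, 5, 5, 7, 8, 10, 14], float)

      # Fit a candidate distribution for the QMRA consumer-phase model.
      shape, loc, scale = stats.lognorm.fit(storage_days, floc=0)

      # Bootstrap the fit to describe parameter uncertainty.
      rng = np.random.default_rng(2)
      boot = [stats.lognorm.fit(rng.choice(storage_days, storage_days.size), floc=0)
              for _ in range(1000)]
      scales = np.array([b[2] for b in boot])
      print(np.percentile(scales, [2.5, 97.5]))  # 95% uncertainty interval, scale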

  1. Comparison Study of MS-HRM and Pyrosequencing Techniques for Quantification of APC and CDKN2A Gene Methylation

    PubMed Central

    Migheli, Francesca; Stoccoro, Andrea; Coppedè, Fabio; Wan Omar, Wan Adnan; Failli, Alessandra; Consolini, Rita; Seccia, Massimo; Spisni, Roberto; Miccoli, Paolo; Mathers, John C.; Migliore, Lucia

    2013-01-01

    There is increasing interest in the development of cost-effective techniques for the quantification of DNA methylation biomarkers. We analyzed 90 samples of surgically resected colorectal cancer tissues for APC and CDKN2A promoter methylation using methylation-sensitive high-resolution melting (MS-HRM) and pyrosequencing. MS-HRM is a less expensive technique than pyrosequencing but is usually more limited because it gives a range of methylation estimates rather than a single value. Here, we developed a method for deriving single estimates, rather than a range, of methylation using MS-HRM and compared the values obtained in this way with those obtained using the gold-standard quantitative method of pyrosequencing. We derived an interpolation curve using standards of known methylated/unmethylated ratio (0%, 12.5%, 25%, 50%, 75%, and 100% methylation) to obtain the best estimate of the extent of methylation for each of our samples. We observed similar profiles of methylation and a high correlation coefficient between the two techniques. Overall, our new approach allows MS-HRM to be used as a quantitative assay which provides results comparable with those obtained by pyrosequencing. PMID:23326336
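
    The interpolation-curve step can be sketched directly: a sample's melting-derived signal is mapped onto standards of known methylation. The signal values below are assumed placeholders, not the authors' data.

      import numpy as np

      # Standards of known methylation and an assumed melting-signal readout each.
      std_methylation = np.array([0.0, 12.5, 25.0, 50.0, 75.0, 100.0])  # %
      std_signal = np.array([0.02, 0.11, 0.23, 0.48, 0.74, 1.00])       # assumed

      def estimate_methylation(sample_signal: float) -> float:
          """Single methylation estimate via the standards' interpolation curve."""
          return float(np.interp(sample_signal, std_signal, std_methylation))

      print(estimate_methylation(0.35))  # ~37% methylation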

  2. Quantitative estimation of film forming polymer-plasticizer interactions by the Lorentz-Lorenz Law.

    PubMed

    Dredán, J; Zelkó, R; Dávid, A Z; Antal, I

    2006-03-09

    Molar refraction, like the refractive index, has many uses. Beyond confirming the identity and purity of a compound and supporting the determination of molecular structure and molecular weight, molar refraction is also used in other estimation schemes, such as for critical properties, surface tension, solubility parameters, molecular polarizability and dipole moments. In the present study, molar refraction values of polymer dispersions were determined for the quantitative estimation of film-forming polymer-plasticizer interactions. The extent of interaction between the polymer and the plasticizer can be inferred from the calculated molar refraction values of film-forming polymer dispersions containing plasticizer.
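
    The underlying Lorentz-Lorenz relation is R = (n^2 - 1)/(n^2 + 2) * M/rho; a minimal sketch, checked against the familiar value for water.

      def molar_refraction(n: float, molar_mass: float, density: float) -> float:
          """Lorentz-Lorenz: R = (n^2 - 1)/(n^2 + 2) * M/rho, in cm^3/mol
          (M in g/mol, rho in g/cm^3, n the refractive index)."""
          return (n ** 2 - 1.0) / (n ** 2 + 2.0) * molar_mass / density

      # Sanity check with water: n ~ 1.333, M = 18.02 g/mol, rho = 0.997 g/cm^3.
      print(molar_refraction(1.333, 18.02, 0.997))  # ~3.7 cm^3/mol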

  3. The effect of respiratory induced density variations on non-TOF PET quantitation in the lung.

    PubMed

    Holman, Beverley F; Cuplov, Vesna; Hutton, Brian F; Groves, Ashley M; Thielemans, Kris

    2016-04-21

    Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant (18)F-FDG and (18)F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.

  4. The effect of respiratory induced density variations on non-TOF PET quantitation in the lung

    NASA Astrophysics Data System (ADS)

    Holman, Beverley F.; Cuplov, Vesna; Hutton, Brian F.; Groves, Ashley M.; Thielemans, Kris

    2016-04-01

    Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant 18F-FDG and 18F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.

  5. APPLICATION OF RADIOISOTOPES TO THE QUANTITATIVE CHROMATOGRAPHY OF FATTY ACIDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budzynski, A.Z.; Zubrzycki, Z.J.; Campbell, I.G.

    1959-10-31

    The paper reports work done on the use of (131)I, (65)Zn, (90)Sr, (95)Zr and (144)Ce for the quantitative estimation of fatty acids on paper chromatograms, and for determination of the degree of unsaturation of the components of resolved fatty acid mixtures. (131)I is used to iodinate unsaturated fatty acids, and the amount of such acids is determined from the radiochromatogram. The degree of unsaturation of fatty acids is determined by estimation of the specific activity of spots. The other isotopes have been examined from the point of view of their suitability for estimation of total amounts of fatty acids by formation of insoluble radioactive soaps held on the chromatogram. In particular, work is reported on the quantitative estimation of saturated fatty acids by measurement of the activity of their insoluble soaps with radioactive metals. Various quantitative relationships are described between the amount of fatty acid in a spot and such parameters as the radiometrically estimated spot length, width, maximum intensity, and integrated spot activity. A convenient detection apparatus for taking radiochromatograms is also described. In conjunction with conventional chromatographic methods for resolving fatty acids, the method permits estimation of the composition of fatty acid mixtures obtained from biological material. (auth)

  6. Application of real-time PCR for total airborne bacterial assessment: Comparison with epifluorescence microscopy and culture-dependent methods

    NASA Astrophysics Data System (ADS)

    Rinsoz, Thomas; Duquenne, Philippe; Greff-Mirguet, Guylaine; Oppliger, Anne

    Traditional culture-dependent methods to quantify and identify airborne microorganisms are limited by factors such as short-duration sampling times and an inability to count non-culturable or non-viable bacteria. Consequently, the quantitative assessment of bioaerosols is often underestimated. Use of the real-time quantitative polymerase chain reaction (Q-PCR) to quantify bacteria in environmental samples presents an alternative method which should overcome this problem. The aim of this study was to evaluate the performance of a real-time Q-PCR assay as a simple and reliable way to quantify the airborne bacterial load within poultry houses and sewage treatment plants, in comparison with epifluorescence microscopy and culture-dependent methods. The estimates of bacterial load that we obtained from real-time PCR and epifluorescence methods are comparable; however, our analysis of sewage treatment plants indicates these methods give values 270-290-fold greater than those obtained by the "impaction on nutrient agar" method. The culture-dependent method of air impaction on nutrient agar was also inadequate in poultry houses, as was the impinger-culture method, which gave a bacterial load estimate 32-fold lower than that obtained by Q-PCR. Real-time quantitative PCR thus proves to be a reliable, discerning, and simple method that could be used to estimate airborne bacterial loads in a broad variety of other environments expected to carry high numbers of airborne bacteria.
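
    Quantification by Q-PCR of this kind rests on a standard curve relating threshold cycle (Ct) to known copy numbers; the sketch below is generic, with an invented dilution series rather than the study's data.

      import numpy as np

      # Standard curve: Ct measured for serial dilutions of known copy number.
      log10_copies = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
      ct = np.array([30.1, 26.8, 23.4, 20.0, 16.7])       # invented values

      slope, intercept = np.polyfit(log10_copies, ct, 1)  # Ct = slope*log10(N) + b
      efficiency = 10.0 ** (-1.0 / slope) - 1.0           # ~1.0 means 100% efficient

      def copies_from_ct(sample_ct: float) -> float:
          return 10.0 ** ((sample_ct - intercept) / slope)

      print(f"E = {efficiency:.2f}, copies = {copies_from_ct(22.0):.3g}")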

  7. Decay of Correlations, Quantitative Recurrence and Logarithm Law for Contracting Lorenz Attractors

    NASA Astrophysics Data System (ADS)

    Galatolo, Stefano; Nisoli, Isaia; Pacifico, Maria Jose

    2018-03-01

    In this paper we prove that a class of skew product maps with a non-uniformly hyperbolic base has exponential decay of correlations. We apply this to obtain a logarithm law for the hitting time associated with a contracting Lorenz attractor at all points having a well-defined local dimension, and a quantitative recurrence estimate.

  8. Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory

    NASA Astrophysics Data System (ADS)

    Deyi, Feng; Ichikawa, M.

    1989-11-01

    In this paper, various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of earthquake hazard on the basis of seismicity data. By using methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, similarities in the temporal variation of seismic activity and seismic gaps can be examined and, on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and the other is based on the fuzzy equivalent relation. (2) Quantitative estimation of earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards over different terms can be estimated. This paper mainly deals with medium- and short-term precursors observed in Japan and China.

  9. Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model

    NASA Astrophysics Data System (ADS)

    Yeh, M.-Y.; Lee, T.-H.; Yang, S.-T.; Kuo, H.-H.; Chyi, T.-K.; Liu, H.-L.

    2009-05-01

    Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with the commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Dynamic susceptibility contrast imaging was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar imaging sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting 7 s prior to contrast injection (1.2 ml/kg), at four different time points. For quantitative analysis, CBF was calculated by deconvolution with an AIF obtained from the 10 voxels showing the greatest contrast enhancement. For semi-quantitative analysis, relative CBF was estimated as the integral divided by the first moment of the relaxivity time curve. We observed that if the AIFs obtained in three different ROIs (whole brain, hemisphere without lesion and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) from the quantitative and semi-quantitative analyses showed a similar trend across operative time points; if the AIFs differed, the CBF ratios could differ as well. We concluded that, using local maxima, one can define a proper AIF without knowing the anatomical location of the arteries in a stroke rat model.
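
    The semi-quantitative index described, the integral of the relaxivity-time curve divided by its first moment, can be sketched in a few lines; the curve here is a synthetic bolus shape, and "first moment" is read as the area-normalized time centroid.

      import numpy as np
      from scipy.integrate import trapezoid

      def relative_cbf(curve: np.ndarray, t: np.ndarray) -> float:
          """Semi-quantitative relative CBF: area under the relaxivity-time
          curve divided by its first moment (area-normalized time centroid)."""
          area = trapezoid(curve, t)
          first_moment = trapezoid(t * curve, t) / area
          return area / first_moment

      # Synthetic bolus-passage curve sampled every 0.7 s (the TR used above).
      t = np.arange(0.0, 60.0, 0.7)
      curve = (t / 10.0) ** 2 * np.exp(-t / 5.0)
      print(relative_cbf(curve, t))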

  10. Estimating cellular parameters through optimization procedures: elementary principles and applications.

    PubMed

    Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki

    2015-01-01

    Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
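
    A concrete instance of the SSE-minimization idea: the sketch below fits a toy exponential-decay model with scipy's gradient-based least_squares; a stochastic or multi-start extension, as the article discusses, would guard against local minima. All data are synthetic.

      import numpy as np
      from scipy.optimize import least_squares

      # Toy quantified data: exponential decay y = a * exp(-k * t) plus noise.
      t = np.linspace(0.0, 10.0, 30)
      rng = np.random.default_rng(3)
      y_obs = 2.0 * np.exp(-0.5 * t) + 0.05 * rng.standard_normal(t.size)

      def residuals(params):
          a, k = params
          return a * np.exp(-k * t) - y_obs  # least_squares minimizes sum of squares

      fit = least_squares(residuals, x0=[1.0, 1.0])  # gradient-based local search
      print(fit.x)  # ~[2.0, 0.5]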

  11. Quantitative water content mapping at clinically relevant field strengths: a comparative study at 1.5 T and 3 T.

    PubMed

    Abbas, Zaheer; Gras, Vincent; Möllenhoff, Klaus; Oros-Peusquens, Ana-Maria; Shah, Nadim Joni

    2015-02-01

    Quantitative water content mapping in vivo using MRI is a very valuable technique to detect, monitor and understand diseases of the brain. At 1.5 T, this technology has already been successfully used, but it has only recently been applied at 3T because of significantly increased RF field inhomogeneity at the higher field strength. To validate the technology at 3T, we estimate and compare in vivo quantitative water content maps at 1.5 T and 3T obtained with a protocol proposed recently for 3T MRI. The proposed MRI protocol was applied on twenty healthy subjects at 1.5 T and 3T; the same post-processing algorithms were used to estimate the water content maps. The 1.5 T and 3T maps were subsequently aligned and compared on a voxel-by-voxel basis. Statistical analysis was performed to detect possible differences between the estimated 1.5 T and 3T water maps. Our analysis indicates that the water content values obtained at 1.5 T and 3T did not show significant systematic differences. On average the difference did not exceed the standard deviation of the water content at 1.5 T. Furthermore, the contrast-to-noise ratio (CNR) of the estimated water content map was increased at 3T by a factor of at least 1.5. Vulnerability to RF inhomogeneity increases dramatically with the increasing static magnetic field strength. However, using advanced corrections for the sensitivity profile of the MR coils, it is possible to preserve quantitative accuracy while benefiting from the increased CNR at the higher field strength. Indeed, there was no significant difference in the water content values obtained in the brain at 1.5 T and 3T. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Quantitative sonoelastography for the in vivo assessment of skeletal muscle viscoelasticity

    NASA Astrophysics Data System (ADS)

    Hoyt, Kenneth; Kneezel, Timothy; Castaneda, Benjamin; Parker, Kevin J.

    2008-08-01

    A novel quantitative sonoelastography technique for assessing the viscoelastic properties of skeletal muscle tissue was developed. Slowly propagating shear wave interference patterns (termed crawling waves) were generated using a two-source configuration vibrating normal to the surface. Theoretical models predict crawling wave displacement fields, which were validated through phantom studies. In experiments, a viscoelastic model was fit to dispersive shear wave speed sonoelastographic data using nonlinear least-squares techniques to determine frequency-independent shear modulus and viscosity estimates. Shear modulus estimates derived using the viscoelastic model were in agreement with that obtained by mechanical testing on phantom samples. Preliminary sonoelastographic data acquired in healthy human skeletal muscles confirm that high-quality quantitative elasticity data can be acquired in vivo. Studies on relaxed muscle indicate discernible differences in both shear modulus and viscosity estimates between different skeletal muscle groups. Investigations into the dynamic viscoelastic properties of (healthy) human skeletal muscles revealed that voluntarily contracted muscles exhibit considerable increases in both shear modulus and viscosity estimates as compared to the relaxed state. Overall, preliminary results are encouraging and quantitative sonoelastography may prove clinically feasible for in vivo characterization of the dynamic viscoelastic properties of human skeletal muscle.
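
    A sketch of such a dispersion fit, assuming the Kelvin-Voigt shear-wave speed form commonly used with crawling-wave data, c(w) = sqrt(2(mu^2 + w^2*eta^2) / (rho*(mu + sqrt(mu^2 + w^2*eta^2)))); the speed estimates below are invented, and this is not the authors' code.

      import numpy as np
      from scipy.optimize import curve_fit

      RHO = 1000.0  # assumed tissue density, kg/m^3

      def voigt_speed(omega, mu, eta):
          """Kelvin-Voigt shear-wave speed dispersion; mu in Pa, eta in Pa*s."""
          s = np.sqrt(mu ** 2 + (omega * eta) ** 2)
          return np.sqrt(2.0 * s ** 2 / (RHO * (mu + s)))

      # Invented dispersive speed estimates at the vibration frequencies used.
      freq_hz = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
      speed_ms = np.array([2.1, 2.3, 2.5, 2.7, 2.9])

      (mu_fit, eta_fit), _ = curve_fit(voigt_speed, 2 * np.pi * freq_hz, speed_ms,
                                       p0=[4e3, 2.0])
      print(f"shear modulus ~ {mu_fit:.0f} Pa, viscosity ~ {eta_fit:.1f} Pa*s")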

  13. 48 CFR 15.404-1 - Proposal analysis techniques.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... are: I Price Analysis, II Quantitative Techniques for Contract Pricing, III Cost Analysis, IV Advanced... estimates. (vi) Comparison of proposed prices with prices obtained through market research for the same or...

  14. 48 CFR 15.404-1 - Proposal analysis techniques.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... are: I Price Analysis, II Quantitative Techniques for Contract Pricing, III Cost Analysis, IV Advanced... estimates. (vi) Comparison of proposed prices with prices obtained through market research for the same or...

  15. 48 CFR 15.404-1 - Proposal analysis techniques.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... are: I Price Analysis, II Quantitative Techniques for Contract Pricing, III Cost Analysis, IV Advanced... estimates. (vi) Comparison of proposed prices with prices obtained through market research for the same or...

  16. Approximation by the iterates of Bernstein operator

    NASA Astrophysics Data System (ADS)

    Zapryanova, Teodora; Tachev, Gancho

    2012-11-01

    We study the degree of pointwise approximation of the iterated Bernstein operators to their limiting operator. We obtain quantitative estimates related to the conjecture of Gonska and Raşa from 2006.

  17. A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data

    NASA Astrophysics Data System (ADS)

    Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.

    2006-06-01

    Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method, before applying the method on actual patient study cases to compare with the conventional fitting approach and other established renal indices. The various parameter estimates obtained using the proposed method were found to be consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.

  18. Estimating weak ratiometric signals in imaging data. II. Meta-analysis with multiple, dual-channel datasets.

    PubMed

    Sornborger, Andrew; Broder, Josef; Majumder, Anirban; Srinivasamoorthy, Ganesh; Porter, Erika; Reagin, Sean S; Keith, Charles; Lauderdale, James D

    2008-09-01

    Ratiometric fluorescent indicators are used for making quantitative measurements of a variety of physiological variables. Their utility is often limited by noise. This is the second in a series of papers describing statistical methods for denoising ratiometric data with the aim of obtaining improved quantitative estimates of variables of interest. Here, we outline a statistical optimization method that is designed for the analysis of ratiometric imaging data in which multiple measurements have been taken of systems responding to the same stimulation protocol. This method takes advantage of correlated information across multiple datasets for objectively detecting and estimating ratiometric signals. We demonstrate our method by showing results of its application on multiple, ratiometric calcium imaging experiments.

  19. The ACCE method: an approach for obtaining quantitative or qualitative estimates of residual confounding that includes unmeasured confounding

    PubMed Central

    Smith, Eric G.

    2015-01-01

    Background: Nonrandomized studies typically cannot account for confounding from unmeasured factors. Method: A method is presented that exploits the recently-identified phenomenon of "confounding amplification" to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors. Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure. Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results: Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method's requirements and assumptions are met. Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations: Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method's estimates (although bootstrapping is one plausible approach). Conclusions: To this author's knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is straightforward. The method's routine usefulness, however, has not yet been established, nor has the method been fully validated. Rapid further investigation of this novel method is clearly indicated, given the potential value of its quantitative or qualitative output. PMID:25580226
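
    On the reading given in the abstract, the core arithmetic reduces to a rescaling of the between-model change in the treatment-effect estimate. The sketch below is a hypothetical illustration only: variable names and numbers are invented, and the adjustment for the introduced variable's association with outcome is collapsed into a single subtraction.

      def residual_confounding(effect_base: float, effect_amplified: float,
                               amp_factor: float,
                               outcome_adjustment: float = 0.0) -> float:
          """Change in treatment-effect estimate between the nested propensity
          models, adjusted and divided by the estimated increase in confounding
          amplification (amp_factor - 1)."""
          return (effect_amplified - effect_base - outcome_adjustment) / (amp_factor - 1.0)

      # Hypothetical: base-model log-OR 0.40, amplified-model log-OR 0.52, and an
      # estimated 1.5x amplification of residual confounding.
      print(residual_confounding(0.40, 0.52, amp_factor=1.5))  # ~0.24 residual bias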

  20. Conventional liquid chromatography/triple quadrupole mass spectrometer-based metabolite identification and semi-quantitative estimation approach in the investigation of dabigatran etexilate in vitro metabolism

    PubMed Central

    Hu, Zhe-Yi; Parker, Robert B.; Herring, Vanessa L.; Laizure, S. Casey

    2012-01-01

    Dabigatran etexilate (DABE) is an oral prodrug that is rapidly converted by esterases to dabigatran (DAB), a direct inhibitor of thrombin. To elucidate the esterase-mediated metabolic pathway of DABE, a high-performance liquid chromatography/tandem mass spectrometry (LC-MS/MS)-based metabolite identification and semi-quantitative estimation approach was developed. To overcome the poor full-scan sensitivity of conventional triple quadrupole mass spectrometry, precursor-product ion pairs were predicted to search for potential in vitro metabolites. The detected metabolites were confirmed by product ion scans. A dilution method was introduced to evaluate the matrix effects of tentatively identified metabolites without chemical standards. Quantitative information on the detected metabolites was obtained using 'metabolite standards' generated from incubation samples containing a high concentration of metabolite, in combination with a correction factor for mass spectrometry response. Two in vitro metabolites of DABE (M1 and M2) were identified and quantified by the semi-quantitative estimation approach. It is noteworthy that CES1 converts DABE to M1 while CES2 mediates the conversion of DABE to M2; M1 (or M2) is further metabolized to DAB by CES2 (or CES1). The approach presented here provides a solution to a bioanalytical need for fast identification and semi-quantitative estimation of CES metabolites in preclinical samples. PMID:23239178

  1. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise-bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and with in vivo measurements of the anterior and posterior eye segments as well as skin imaging. The new estimator shows superior performance and clearer image contrast.

  2. A quantitative estimate of schema abnormality in socially anxious and non-anxious individuals.

    PubMed

    Wenzel, Amy; Brendle, Jennifer R; Kerr, Patrick L; Purath, Donna; Ferraro, F Richard

    2007-01-01

    Although cognitive theories of anxiety suggest that anxious individuals are characterized by abnormal threat-relevant schemas, few empirical studies have estimated the nature of these cognitive structures using quantitative methods that lend themselves to inferential statistical analysis. In the present study, socially anxious (n = 55) and non-anxious (n = 62) participants completed 3 Q-Sort tasks to assess their knowledge of events that commonly occur in social or evaluative scenarios. Participants either sorted events according to how commonly they personally believe the events occur (i.e. "self" condition), or to how commonly they estimate that most people believe they occur (i.e. "other" condition). Participants' individual Q-Sorts were correlated with mean sorts obtained from a normative sample to obtain an estimate of schema abnormality, with lower correlations representing greater levels of abnormality. Relative to non-anxious participants, socially anxious participants' sorts were less strongly associated with sorts of the normative sample, particularly in the "self" condition, although secondary analyses suggest that some significant results might be explained, in part, by depression and experience with the scenarios. These results provide empirical support for the theoretical notion that threat-relevant self-schemas of anxious individuals are characterized by some degree of abnormality.

  3. Estimation of the genome sizes of the chigger mites Leptotrombidium pallidum and Leptotrombidium scutellare based on quantitative PCR and k-mer analysis

    PubMed Central

    2014-01-01

    Background: Leptotrombidium pallidum and Leptotrombidium scutellare are the major vector mites for Orientia tsutsugamushi, the causative agent of scrub typhus. Before these organisms can be subjected to whole-genome sequencing, it is necessary to estimate their genome sizes to obtain basic information for establishing the strategies that should be used for genome sequencing and assembly. Method: The genome sizes of L. pallidum and L. scutellare were estimated by a method based on quantitative real-time PCR. In addition, a k-mer analysis of the whole-genome sequences obtained through Illumina sequencing was conducted to verify the mutual compatibility and reliability of the results. Results: The genome sizes estimated using qPCR were 191 ± 7 Mb for L. pallidum and 262 ± 13 Mb for L. scutellare. The k-mer analysis-based genome lengths were estimated to be 175 Mb for L. pallidum and 286 Mb for L. scutellare. The estimates from these two independent methods were mutually complementary and within a similar range to those of other Acariform mites. Conclusions: The estimation method based on qPCR appears to be a useful alternative when the standard methods, such as flow cytometry, are impractical. The relatively small estimated genome sizes should facilitate whole-genome analysis, which could contribute to our understanding of Arachnida genome evolution and provide key information for scrub typhus prevention and mite vector competence. PMID:24947244
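
    The k-mer half of such an estimate reduces to dividing the total number of counted k-mers by the depth of the main coverage peak; the sketch below uses an invented depth histogram (a real one would come from a k-mer counter such as Jellyfish run on the Illumina reads).

      from collections import Counter

      def genome_size_from_kmers(depth_hist: Counter, min_depth: int = 3) -> float:
          """Total counted k-mers divided by the main (homozygous) peak depth;
          very low depths are dropped as likely sequencing errors."""
          hist = {d: n for d, n in depth_hist.items() if d >= min_depth}
          total_kmers = sum(d * n for d, n in hist.items())
          peak_depth = max(hist, key=hist.get)  # depth with most distinct k-mers
          return total_kmers / peak_depth

      # Invented histogram {depth: number of distinct k-mers at that depth}.
      hist = Counter({1: 9e6, 2: 1e6, 20: 5e5, 21: 8e5, 22: 6e5, 23: 2e5})
      print(f"{genome_size_from_kmers(hist) / 1e6:.1f} Mb")  # ~2.1 Mb, toy numbers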

  4. Estimations of BCR-ABL/ABL transcripts by quantitative PCR in chronic myeloid leukaemia after allogeneic bone marrow transplantation and donor lymphocyte infusion.

    PubMed

    Otazú, Ivone B; Tavares, Rita de Cassia B; Hassan, Rocío; Zalcberg, Ilana; Tabak, Daniel G; Seuánez, Héctor N

    2002-02-01

    Serial assays of qualitative (multiplex and nested) and quantitative PCR were carried out for detecting and estimating the level of BCR-ABL transcripts in 39 CML patients following bone marrow transplantation. Seven of these patients, who received donor lymphocyte infusions (DLIs) following relapse, were also monitored. Quantitative estimates of BCR-ABL transcripts were obtained by co-amplification with a competitor sequence. Estimates of ABL transcripts were used as an internal control, and the BCR-ABL/ABL ratio was thus estimated for evaluating the kinetics of residual clones. Twenty-four patients were followed shortly after BMT; two of these patients were in cytogenetic relapse coexisting with very high BCR-ABL levels, while the other 22 were in clinical, haematologic and cytogenetic remission 2-42 months after BMT. In this latter group, seven patients showed a favourable clinical-haematological progression in association with molecular remission, while in 14 patients quantitative PCR assays indicated molecular relapse that was not associated with an early cytogenetic-haematologic relapse. BCR-ABL/ABL levels could not be correlated with the presence of GVHD in 24 patients after BMT. In all seven patients treated with DLI, high levels of transcripts were detected at least 4 months before the appearance of clinical haematological relapse. Following DLI, five of these patients showed transcript levels decreasing by 2 to 5 logs between 4 and 12 months. In eight other patients studied long after BMT, five showed molecular relapse up to 117 months post-BMT and only one showed cytogenetic relapse. Our findings indicated that quantitative estimates of BCR-ABL transcripts were valuable for monitoring minimal residual disease in each patient.

  5. Development and evaluation of a model-based downscatter compensation method for quantitative I-131 SPECT

    PubMed Central

    Song, Na; Du, Yong; He, Bin; Frey, Eric C.

    2011-01-01

    Purpose: The radionuclide 131I has found widespread use in targeted radionuclide therapy (TRT), partly due to the fact that it emits photons that can be imaged to perform treatment planning or posttherapy dose verification as well as beta rays that are suitable for therapy. In both the treatment planning and dose verification applications, it is necessary to estimate the activity distribution in organs or tumors at several time points. In vivo estimates of the 131I activity distribution at each time point can be obtained from quantitative single-photon emission computed tomography (QSPECT) images and organ activity estimates can be obtained either from QSPECT images or quantification of planar projection data. However, in addition to the photon used for imaging, 131I decay results in emission of a number of other higher-energy photons with significant abundances. These higher-energy photons can scatter in the body, collimator, or detector and be counted in the 364 keV photopeak energy window, resulting in reduced image contrast and degraded quantitative accuracy; these photons are referred to as downscatter. The goal of this study was to develop and evaluate a model-based downscatter compensation method specifically designed for the compensation of high-energy photons emitted by 131I and detected in the imaging energy window. Methods: In the evaluation study, we used a Monte Carlo simulation (MCS) code that had previously been validated for other radionuclides. Thus, in preparation for the evaluation study, we first validated the code for 131I imaging simulation by comparison with experimental data. Next, we assessed the accuracy of the downscatter model by comparing downscatter estimates with MCS results. Finally, we combined the downscatter model with iterative reconstruction-based compensation for attenuation (A) and scatter (S) and the full (D) collimator-detector response of the 364 keV photons to form a comprehensive compensation method. We evaluated this combined method in terms of quantitative accuracy using the realistic 3D NCAT phantom and an activity distribution obtained from patient studies. We compared the accuracy of organ activity estimates in images reconstructed with and without addition of downscatter compensation from projections with and without downscatter contamination. Results: We observed that the proposed method provided substantial improvements in accuracy compared to no downscatter compensation and had accuracies comparable to reconstructions from projections without downscatter contamination. Conclusions: The results demonstrate that the proposed model-based downscatter compensation method is effective and may have a role in quantitative 131I imaging. PMID:21815394
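
    The way a model-based downscatter estimate enters iterative reconstruction can be pictured with a toy MLEM loop in Python, where the estimate is added to the forward projection rather than subtracted from the data. The system matrix, activity, and downscatter values below are illustrative stand-ins, not the paper's simulation.

        import numpy as np

        # Minimal MLEM sketch with an additive downscatter term in the forward model.
        rng = np.random.default_rng(1)
        n_pix, n_bins = 32, 48
        A = rng.uniform(0.0, 1.0, size=(n_bins, n_pix))     # toy system matrix
        x_true = rng.uniform(0.5, 2.0, size=n_pix)
        s_down = 0.2 * np.ones(n_bins)                      # downscatter estimate
        y = rng.poisson(A @ x_true + s_down).astype(float)  # contaminated data

        x = np.ones(n_pix)
        sens = A.sum(axis=0)                                # A^T 1
        for _ in range(100):
            proj = A @ x + s_down                           # include downscatter in model
            x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens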

  6. Quantification of measles, mumps and rubella viruses using real-time quantitative TaqMan-based RT-PCR assay.

    PubMed

    Ammour, Y; Faizuloev, E; Borisova, T; Nikonova, A; Dmitriev, G; Lobodanov, S; Zverev, V

    2013-01-01

    In this study, a rapid quantitative method using TaqMan-based real-time reverse transcription-polymerase chain reaction (qPCR-RT) has been developed for estimating the titers of measles, mumps and rubella (MMR) viruses in infected cell culture supernatants. The qPCR-RT assay was demonstrated to be a specific, sensitive, efficient and reproducible method. For MMR viral samples obtained during MMR viral propagations in Vero cells at different multiplicities of infection, titers determined by the qPCR-RT assay were compared, in paired samples, with estimates of infectious virus obtained by a traditional, commonly used method for MMR viruses, the 50% cell culture infective dose (CCID(50)) assay. Pearson analysis revealed a significant correlation between the two methods for a certain period after viral inoculation. Furthermore, the established qPCR-RT assay was faster and less laborious. The developed method could be used as an alternative method or a supplementary tool for routine titer estimation during MMR vaccine production. Copyright © 2012 Elsevier B.V. All rights reserved.
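
    As a hedged illustration of how qPCR Ct values map to copy numbers, the sketch below fits a standard curve and inverts it; the dilution series and Ct numbers are invented, and the assay above additionally calibrates against CCID(50) titers.

        import numpy as np

        # Standard-curve quantification: Ct = slope*log10(copies) + intercept.
        log10_copies = np.array([7, 6, 5, 4, 3], dtype=float)   # standards
        ct = np.array([14.2, 17.6, 21.1, 24.5, 27.9])           # measured Ct values

        slope, intercept = np.polyfit(log10_copies, ct, 1)
        efficiency = 10 ** (-1.0 / slope) - 1.0                 # ~1.0 for a perfect assay

        def quantify(ct_sample):
            """Interpolate an unknown sample's copy number from its Ct."""
            return 10 ** ((ct_sample - intercept) / slope)

        print(f"E = {efficiency:.2f}, copies = {quantify(19.0):.3g}")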

  7. Brain Tissue Compartment Density Estimated Using Diffusion-Weighted MRI Yields Tissue Parameters Consistent With Histology

    PubMed Central

    Sepehrband, Farshid; Clark, Kristi A.; Ullmann, Jeremy F.P.; Kurniawan, Nyoman D.; Leanage, Gayeshika; Reutens, David C.; Yang, Zhengyi

    2015-01-01

    We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intra-cellular and intra-neurite density values obtained from diffusion MRI. The NODDI (neurite orientation dispersion and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different sub-regions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (diffusion MRI: 42±6%, 36±4% and 43±5%; electron microscopy: 41±10%, 36±8% and 44±12% in the genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers. PMID:26096639

  8. Nonparametric modeling of longitudinal covariance structure in functional mapping of quantitative trait loci.

    PubMed

    Yap, John Stephen; Fan, Jianqing; Wu, Rongling

    2009-12-01

    Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L2 penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.
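
    A minimal Python sketch of the modified-Cholesky idea follows: each time point is regressed on its predecessors with an L2 (ridge) penalty, and the covariance is rebuilt from the regression coefficients and residual variances. The penalty value and toy data are illustrative; the paper embeds this estimator in a mixture-likelihood framework.

        import numpy as np

        # Modified Cholesky: Sigma = T^{-1} D T^{-T}, where T is unit lower
        # triangular holding the negated regression coefficients and D holds
        # the residual variances of the sequential regressions.
        def cholesky_cov(X, lam=0.1):
            n, T = X.shape                      # n subjects, T time points
            Tmat = np.eye(T)
            d = np.empty(T)
            d[0] = X[:, 0].var()
            for t in range(1, T):
                Z, y = X[:, :t], X[:, t]
                phi = np.linalg.solve(Z.T @ Z + lam * np.eye(t), Z.T @ y)
                Tmat[t, :t] = -phi
                d[t] = (y - Z @ phi).var()
            Tinv = np.linalg.inv(Tmat)
            return Tinv @ np.diag(d) @ Tinv.T

        rng = np.random.default_rng(2)
        X = rng.standard_normal((200, 8)).cumsum(axis=1)   # toy longitudinal trait data
        print(np.round(cholesky_cov(X), 2))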

  9. Estimating unknown parameters in haemophilia using expert judgement elicitation.

    PubMed

    Fischer, K; Lewandowski, D; Janssen, M P

    2013-09-01

    The increasing attention to healthcare costs and treatment efficiency has led to an increasing demand for quantitative data concerning patient and treatment characteristics in haemophilia. However, most of these data are difficult to obtain. The aim of this study was to use expert judgement elicitation (EJE) to estimate currently unavailable key parameters for treatment models in severe haemophilia A. Using a formal expert elicitation procedure, 19 international experts provided information on (i) natural bleeding frequency according to age and onset of bleeding, (ii) treatment of bleeds, (iii) time needed to control bleeding after starting secondary prophylaxis, (iv) dose requirements for secondary prophylaxis according to onset of bleeding, and (v) life expectancy. For each parameter, experts provided their quantitative estimates (median, P10, P90), which were combined using a graphical method. In addition, information was obtained concerning key decision parameters of haemophilia treatment. There was most agreement between experts regarding bleeding frequencies for patients treated on demand with an average onset of joint bleeding (1.7 years): median 12 joint bleeds per year (95% confidence interval 0.9-36) for patients ≤ 18 years, and 11 (0.8-61) for adult patients. Less agreement was observed concerning the estimated effective dose for secondary prophylaxis in adults: median 2000 IU every other day. The majority (63%) of experts expected that a single minor joint bleed could cause irreversible damage, and would accept up to three minor joint bleeds or one trauma-related joint bleed annually on prophylaxis. Expert judgement elicitation allowed structured capturing of quantitative expert estimates. It generated novel data to be used in computer modelling, clinical care, and trial design. © 2013 John Wiley & Sons Ltd.

  10. A method for energy window optimization for quantitative tasks that includes the effects of model-mismatch on bias: application to Y-90 bremsstrahlung SPECT imaging.

    PubMed

    Rong, Xing; Du, Yong; Frey, Eric C

    2012-06-21

    Quantitative Yttrium-90 ((90)Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging has shown great potential to provide reliable estimates of the (90)Y activity distribution for targeted radionuclide therapy dosimetry applications. One factor that potentially affects the reliability of the activity estimates is the choice of the acquisition energy window. In contrast to imaging conventional gamma photon emitters, where the acquisition energy windows are usually placed around photopeaks, there has been great variation in the choice of the acquisition energy window for (90)Y imaging due to the continuous and broad energy distribution of the bremsstrahlung photons. In quantitative imaging of conventional gamma photon emitters, previous methods for optimizing the acquisition energy window assumed unbiased estimators and used the variance of the estimates as a figure of merit (FOM). However, in situations such as (90)Y imaging, where there are errors in the modeling of the image formation process used in the reconstruction, there will be bias in the activity estimates. In (90)Y bremsstrahlung imaging this is especially important due to the high levels of scatter, multiple scatter, and collimator septal penetration and scatter. Variance alone is therefore not a complete measure of the reliability of the estimates and thus not a complete FOM. To address this, we first aimed to develop a new method to optimize the energy window that accounts for both the bias due to model-mismatch and the variance of the activity estimates. We applied this method to optimize the acquisition energy window for quantitative (90)Y bremsstrahlung SPECT imaging in microsphere brachytherapy. Since absorbed dose is defined as the absorbed energy from the radiation per unit mass of tissue, in this new method we proposed a mass-weighted root mean squared error of the volume of interest (VOI) activity estimates as the FOM. To calculate this FOM, two analytical expressions were derived for calculating the bias due to model-mismatch and the variance of the VOI activity estimates, respectively. To obtain the optimal acquisition energy window for general situations of interest in clinical (90)Y microsphere imaging, we generated phantoms with multiple tumors of various sizes and various tumor-to-normal activity concentration ratios using a digital phantom that realistically simulates human anatomy, simulated (90)Y microsphere imaging with a clinical SPECT system and typical imaging parameters using a previously validated Monte Carlo simulation code, and used a previously proposed method for modeling the image-degrading effects in quantitative SPECT reconstruction. The obtained optimal acquisition energy window was 100-160 keV. The values of the proposed FOM were much larger than those of an FOM taking into account only the variance of the activity estimates, demonstrating in our experiment that the bias of the activity estimates due to model-mismatch was a more important factor than the variance in limiting the reliability of activity estimates.
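
    The proposed figure of merit is easy to state in code: a mass-weighted RMSE that folds the model-mismatch bias and the variance of the VOI activity estimates together. The VOI masses, biases, and variances below are invented numbers for illustration only.

        import numpy as np

        # Mass-weighted RMSE over VOIs, combining bias and variance.
        mass = np.array([120.0, 35.0, 8.0])        # VOI masses (g)
        bias = np.array([0.05, -0.12, 0.20])       # bias of activity estimates (MBq)
        var = np.array([0.01, 0.02, 0.09])         # variance of estimates (MBq^2)

        fom = np.sqrt(np.sum(mass * (bias**2 + var)) / np.sum(mass))
        print(f"mass-weighted RMSE = {fom:.3f} MBq")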

  11. Insights into Spray Development from Metered-Dose Inhalers Through Quantitative X-ray Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mason-Smith, Nicholas; Duke, Daniel J.; Kastengren, Alan L.

    Typical methods to study pMDI sprays employ particle sizing or visible light diagnostics, which suffer in regions of high spray density. X-ray techniques can be applied to pharmaceutical sprays to obtain information unattainable by conventional particle sizing and light-based techniques. We present a technique for obtaining quantitative measurements of spray density in pMDI sprays. A monochromatic focused X-ray beam was used to perform quantitative radiography measurements in the near-nozzle region and plume of HFA-propelled sprays. Measurements were obtained with a temporal resolution of 0.184 ms and a spatial resolution of 5 µm. Steady flow conditions were reached after around 30 ms for the formulations examined with the spray device used. Spray evolution was affected by the inclusion of ethanol in the formulation and unaffected by the inclusion of 0.1% drug by weight. Estimation of the nozzle exit density showed that vapour is likely to dominate the flow leaving the inhaler nozzle during steady flow. Quantitative measurements in pMDI sprays allow the determination of nozzle exit conditions that are difficult to obtain experimentally by other means. Measurements of these nozzle exit conditions can improve understanding of the atomization mechanisms responsible for pMDI spray droplet and particle formation.

  12. Rapid Quantitation of Ascorbic and Folic Acids in SRM 3280 Multivitamin/Multielement Tablets using Flow-Injection Tandem Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhandari, Deepak; Kertesz, Vilmos; Van Berkel, Gary J

    RATIONALE: Ascorbic acid (AA) and folic acid (FA) are water-soluble vitamins and are usually fortified in food and dietary supplements. For the safety of human health, proper intake of these vitamins is recommended. Improvement in the analysis time required for the quantitative determination of these vitamins in food and nutritional formulations is desired. METHODS: A simple and fast (~5 min) in-tube sample preparation was performed, independently for FA and AA, by mixing extraction solvent with a powdered sample aliquot followed by agitation, centrifugation, and filtration to recover an extract for analysis. Quantitative detection was achieved by flow-injection (1 µL injection volume) electrospray ionization tandem mass spectrometry (ESI-MS/MS) in negative ion mode using the method of standard addition. RESULTS: The method of standard addition was employed for the quantitative estimation of each vitamin in a sample extract. At least two spiked and one non-spiked sample extracts were injected in triplicate for each quantitative analysis. Given an injection-to-injection interval of approximately 2 min, about 18 min was required to complete the quantitative estimation of each vitamin. The concentration values obtained for the respective vitamins in the standard reference material (SRM) 3280 using this approach were within the statistical range of the certified values provided in the NIST Certificate of Analysis. The estimated limits of detection of FA and AA were 13 and 5.9 ng/g, respectively. CONCLUSIONS: Flow-injection ESI-MS/MS was successfully applied for the rapid quantitation of FA and AA in SRM 3280 multivitamin/multielement tablets.
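
    A short sketch of quantitation by standard addition, the calibration scheme named above: fit the response against the spiked amount and take the magnitude of the x-intercept as the native concentration. The spike levels and responses below are invented numbers.

        import numpy as np

        # Standard addition: the unknown concentration is the x-intercept magnitude.
        added = np.array([0.0, 50.0, 100.0])        # spiked concentration (ng/g)
        signal = np.array([1.20e4, 2.05e4, 2.93e4]) # MS/MS peak response

        slope, intercept = np.polyfit(added, signal, 1)
        conc = intercept / slope                    # |x-intercept|
        print(f"estimated concentration = {conc:.1f} ng/g")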

  13. A comparison of different category scales for estimating disease severity

    USDA-ARS?s Scientific Manuscript database

    Plant pathologists most often obtain quantitative information on disease severity using visual assessments. Category scales are widely used for assessing disease severity, including for screening germplasm. The most widely used category scale is the Horsfall-Barratt (H-B) scale, but reports show tha...

  14. On A Problem Of Propagation Of Shock Waves Generated By Explosive Volcanic Eruptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gusev, V. A.; Sobissevitch, A. L.

    2008-06-24

    Interdisciplinary study of flows of matter and energy in geospheres has become one of the most significant advances in Earth sciences. It is carried out by means of direct quantitative estimations based on detailed analysis of geological and geophysical observations and experimental data. The present contribution is an interdisciplinary study of nonlinear acoustics and physical volcanology dedicated to shock wave propagation in a viscous and inhomogeneous medium. The equations governing the evolution of shock waves with an arbitrary initial profile and an arbitrary beam cross-section are obtained. For the case of a low-viscosity medium, an asymptotic solution meant to calculate the profile of a shock wave at an arbitrary point has been derived. The analytical solution of the problem of propagation of shock pulses from the atmosphere into a two-phase fluid-saturated geophysical medium is analysed. Quantitative estimations were carried out with respect to experimental results obtained in the course of real explosive volcanic eruptions.

  15. Development of an agricultural job-exposure matrix for British Columbia, Canada.

    PubMed

    Wood, David; Astrakianakis, George; Lang, Barbara; Le, Nhu; Bert, Joel

    2002-09-01

    Farmers in British Columbia (BC), Canada, have been shown to have unexplained elevated proportional mortality rates for several cancers. Because agricultural exposures have never been documented systematically in BC, a quantitative agricultural job-exposure matrix (JEM) was developed containing exposure assessments from 1950 to 1998. This JEM was developed to document historical exposures and to facilitate future epidemiological studies. Available information regarding BC farming practices was compiled and checklists of potential exposures were produced for each crop. Exposures identified included chemical, biological, and physical agents. Interviews with farmers and agricultural experts were conducted using the checklists as a starting point. This allowed the creation of an initial or 'potential' JEM based on three axes: exposure agent, 'type of work' and time. The 'type of work' axis was determined by combining several variables: region, crop, job title and task. This allowed for a complete description of exposures. Exposure assessments were made quantitatively, where data allowed, or by a dichotomous variable (exposed/unexposed). Quantitative calculations were divided into re-entry and application scenarios. 'Re-entry' exposures were quantified using a standard exposure model with some modification, while application exposure estimates were derived using data from the North American Pesticide Handlers Exposure Database (PHED). As expected, exposures differed between crops and job titles both quantitatively and qualitatively. Of the 290 agents included in the exposure axis, 180 were pesticides. Over 3000 estimates of exposure were conducted; 50% of these were quantitative. Each quantitative estimate was at the daily absorbed dose level. Exposure estimates were then rated as high, medium, or low by comparing them with their respective oral chemical reference dose (RfD) or acceptable daily intake (ADI). These data were mainly obtained from the US Environmental Protection Agency (EPA) Integrated Risk Information System database. Of the quantitative estimates, 74% were rated as low (<100%) and only 10% were rated as high (>500%). The JEM resulting from this study fills a void concerning exposures for BC farmers and farm workers. While only limited validation of assessments was possible, this JEM can serve as a benchmark for future studies. Preliminary analysis at the BC Cancer Agency (BCCA) using the JEM with prostate cancer records from a large cancer and occupation study/survey has already shown promising results. Development of this JEM provides a useful model for developing historical quantitative exposure estimates where very little documented information is available.

  16. Fission products and nuclear fuel behaviour under severe accident conditions part 3: Speciation of fission products in the VERDON-1 sample

    NASA Astrophysics Data System (ADS)

    Le Gall, C.; Geiger, E.; Gallais-During, A.; Pontillon, Y.; Lamontagne, J.; Hanus, E.; Ducros, G.

    2017-11-01

    Qualitative and quantitative analyses on the VERDON-1 sample made it possible to obtain valuable information on fission product behaviour in the fuel during the test. A promising methodology based on the quantitative results of post-test characterisations has been implemented to assess the release fraction of non γ-emitter fission products. The order of magnitude of the estimated release fractions for each fission product was consistent with their class of volatility.

  17. Determination of Detection Limits and Quantitation Limits for Compounds in a Database of GC/MS by FUMI Theory

    PubMed Central

    Nakashima, Shinya; Hayashi, Yuzuru

    2016-01-01

    The aim of this paper is to propose a stochastic method for estimating the detection limits (DLs) and quantitation limits (QLs) of compounds registered in a database of a GC/MS system and to prove its validity with experiments. The approach described in ISO 11843 Part 7 is adopted here as an estimation means for DL and QL, and decafluorotriphenylphosphine (DFTPP) tuning and retention time locking are carried out for adjusting the system. Coupled with the data obtained from the system adjustment experiments, the information (noise and signal of chromatograms and calibration curves) stored in the database is used for the stochastic estimation, dispensing with repeated measurements. For sixty-six pesticides, the DL values obtained by the ISO method were compared with those from the statistical approach, and the correlation between them was observed to be excellent with a correlation coefficient of 0.865. The accuracy of the proposed method was also examined and concluded to be satisfactory as well. The samples used are commercial products of pesticide mixtures, and the uncertainty from sample preparation processes is not taken into account. PMID:27162706

  18. Bayesian assessment of overtriage and undertriage at a level I trauma centre.

    PubMed

    DiDomenico, Paul B; Pietzsch, Jan B; Paté-Cornell, M Elisabeth

    2008-07-13

    We analysed the trauma triage system at a specific level I trauma centre to assess rates of over- and undertriage and to support recommendations for system improvements. The triage process is designed to estimate the severity of patient injury and allocate resources accordingly, with potential errors of overestimation (overtriage) consuming excess resources and underestimation (undertriage) potentially leading to medical errors. We first modelled the overall trauma system using risk analysis methods to understand interdependencies among the actions of the participants. We interviewed six experienced trauma surgeons to obtain their expert opinion of the over- and undertriage rates occurring in the trauma centre. We then assessed actual over- and undertriage rates in a random sample of 86 trauma cases collected over a six-week period at the same centre. We employed Bayesian analysis to quantitatively combine the data with the prior probabilities derived from expert opinion in order to obtain posterior distributions. The results were estimates of overtriage and undertriage in 16.1% and 4.9% of patients, respectively. This Bayesian approach, which provides a quantitative assessment of the error rates using both case data and expert opinion, provides a rational means of obtaining a best estimate of the system's performance. The overall approach that we describe in this paper can be employed more widely to analyse complex health care delivery systems, with the objective of reducing errors, patient risk and excess costs.
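
    A minimal sketch of the Bayesian combination described above, assuming a conjugate Beta prior elicited from the surgeons and a binomial likelihood for the reviewed cases; the prior parameters and counts below are illustrative, not the study's actual numbers.

        from scipy import stats

        # Beta-binomial update: expert-opinion prior + observed triage outcomes.
        a_prior, b_prior = 3.0, 12.0      # Beta prior encoding expert opinion (~20%)
        n_cases, n_over = 86, 14          # reviewed cases and observed overtriage

        posterior = stats.beta(a_prior + n_over, b_prior + n_cases - n_over)
        lo, hi = posterior.interval(0.95)
        print(f"posterior mean = {posterior.mean():.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")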

  19. Reproducibility and Accuracy of Quantitative Myocardial Blood Flow Using 82Rb-PET: Comparison with 13N-Ammonia

    PubMed Central

    Fakhri, Georges El

    2011-01-01

    82Rb cardiac PET allows the assessment of myocardial perfusion using a column generator in clinics that lack a cyclotron. We and others have previously shown that quantitation of myocardial blood flow (MBF) and coronary flow reserve (CFR) is feasible using dynamic 82Rb PET and factor and compartment analyses. The aim of the present work was to determine the intra- and inter-observer variability of MBF estimation using 82Rb PET as well as the reproducibility of our generalized factor + compartment analysis methodology for estimating MBF, and to assess its accuracy by comparing, in the same subjects, 82Rb estimates of MBF to those obtained using 13N-ammonia. Methods Twenty-two subjects were included in the reproducibility study and twenty subjects in the validation study. Patients were injected with 60±5 mCi of 82Rb and imaged dynamically for 6 minutes at rest and during dipyridamole stress. Left and right ventricular (LV+RV) time-activity curves were estimated by GFADS and used as input to a 2-compartment kinetic analysis that estimates parametric maps of myocardial tissue extraction (K1) and egress (k2), as well as LV+RV contributions (fv, rv). Results Our results show excellent reproducibility of the quantitative dynamic approach itself, with coefficients of repeatability of 1.7% for estimation of MBF at rest, 1.4% for MBF at peak stress and 2.8% for CFR estimation. The inter-observer reproducibility between the four observers that participated in this study was also very good, with correlation coefficients greater than 0.87 between any two given observers when estimating coronary flow reserve. The reproducibility of MBF in repeated 82Rb studies was good at rest and excellent at peak stress (r2=0.835). Furthermore, the slope of the correlation line was very close to 1 when estimating stress MBF and CFR in repeated 82Rb studies. The correlation between myocardial flow estimates obtained at rest and during peak stress in 82Rb and 13N-ammonia studies was very good at rest (r2=0.843) and stress (r2=0.761). The Bland-Altman plots show no significant presence of proportional error at rest or stress, nor a dependence of the variations on the amplitude of the myocardial blood flow at rest or stress. A small systematic overestimation of 13N-ammonia MBF was observed with 82Rb at rest (0.129 ml/g/min) and the opposite, i.e., underestimation, at stress (0.22 ml/g/min). Conclusions Our results show that absolute quantitation of myocardial blood flow is reproducible and accurate with 82Rb dynamic cardiac PET as compared to 13N-ammonia. The reproducibility of the quantitation approach itself was very good, as was the inter-observer reproducibility. PMID:19525467
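
    For illustration, a simplified one-tissue-compartment fit is sketched below (the paper couples factor analysis with a two-compartment model and spillover terms): the tissue curve is the input function convolved with K1·exp(−k2·t). The input function, noise level, and parameter values are synthetic.

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.linspace(0.0, 6.0, 121)                    # minutes
        dt = t[1] - t[0]
        Ca = 100.0 * t * np.exp(-2.0 * t)                 # toy arterial input (LV curve)

        def tissue(t, K1, k2):
            # C_t(t) = K1 * exp(-k2 t) convolved with Ca(t), Riemann-approximated.
            return K1 * dt * np.convolve(np.exp(-k2 * t), Ca)[: t.size]

        rng = np.random.default_rng(3)
        y = tissue(t, 0.8, 0.15) + rng.normal(0.0, 0.5, t.size)   # simulated myocardium

        (K1, k2), _ = curve_fit(tissue, t, y, p0=(0.5, 0.1))
        print(f"K1 = {K1:.2f} mL/min/g, k2 = {k2:.2f} 1/min (toy units)")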

  20. Analysis of the optimal laminated target made up of discrete set of materials

    NASA Technical Reports Server (NTRS)

    Aptukov, Valery N.; Belousov, Valentin L.

    1991-01-01

    A new class of problems was analyzed to estimate the optimal structure of laminated targets fabricated from a specified set of homogeneous materials. An approximate description of the perforation process is based on the model of radial hole extension. The problem is solved by using the needle-type variation technique. The desired optimization conditions and quantitative/qualitative estimates of optimal targets were obtained and are discussed using specific examples.

  1. Toward quantitative forecasts of volcanic ash dispersal: Using satellite retrievals for optimal estimation of source terms

    NASA Astrophysics Data System (ADS)

    Zidikheri, Meelis J.; Lucas, Christopher; Potts, Rodney J.

    2017-08-01

    Airborne volcanic ash is a hazard to aviation. There is an increasing demand for quantitative forecasts of ash properties such as ash mass load to allow airline operators to better manage the risks of flying through airspace likely to be contaminated by ash. In this paper we show how satellite-derived mass load information from times prior to the issuance of the latest forecast can be used to estimate various model parameters that are not easily obtained by other means, such as the distribution of mass in the ash column at the volcano. This in turn leads to better forecasts of ash mass load. We demonstrate the efficacy of this approach using several case studies.

  2. Correlations between quantitative fat–water magnetic resonance imaging and computed tomography in human subcutaneous white adipose tissue

    PubMed Central

    Gifford, Aliya; Walker, Ronald C.; Towse, Theodore F.; Brian Welch, E.

    2015-01-01

    Beyond estimation of depot volumes, quantitative analysis of adipose tissue properties could improve understanding of how adipose tissue correlates with metabolic risk factors. We investigated whether the fat signal fraction (FSF) derived from quantitative fat–water magnetic resonance imaging (MRI) scans at 3.0 T correlates with the CT Hounsfield units (HU) of the same tissue. These measures were acquired in the subcutaneous white adipose tissue (WAT) at the umbilical level of 21 healthy adult subjects. A moderate correlation exists between MRI- and CT-derived WAT values for all subjects (R2=0.54, p<0.0001) with a slope of −2.6 (95% CI [−3.3, −1.8]), indicating that a decrease of 1 HU corresponds to a mean increase of 0.38% FSF. We demonstrate that FSF estimates obtained using quantitative fat–water MRI techniques correlate with CT HU values in subcutaneous WAT, and therefore MRI-based FSF could be used as an alternative to CT HU for assessing metabolic risk factors. PMID:26702407

  3. MRI-assisted PET motion correction for neurologic studies in an integrated MR-PET scanner.

    PubMed

    Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B; Michel, Christian J; El Fakhri, Georges; Schmand, Matthias; Sorensen, A Gregory

    2011-01-01

    Head motion is difficult to avoid in long PET studies, degrading the image quality and offsetting the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MRI data can be used for motion tracking. In this work, a novel algorithm for data processing and rigid-body motion correction (MC) for the MRI-compatible BrainPET prototype scanner is described, and proof-of-principle phantom and human studies are presented. To account for motion, the PET prompt and random coincidences and sensitivity data for postnormalization were processed in the line-of-response (LOR) space according to the MRI-derived motion estimates. The processing time on the standard BrainPET workstation is approximately 16 s for each motion estimate. After rebinning in the sinogram space, the motion corrected data were summed, and the PET volume was reconstructed using the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed, and motion estimates were obtained using 2 high-temporal-resolution MRI-based motion-tracking techniques. After accounting for the misalignment between the 2 scanners, perfectly coregistered MRI and PET volumes were reproducibly obtained. The MRI output gates inserted into the PET list-mode allow the temporal correlation of the 2 datasets within 0.2 ms. The Hoffman phantom volume reconstructed by processing the PET data in the LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in the image space, demonstrating the quantitative accuracy of the procedure. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 s and 20 ms, respectively. Motion-deblurred PET images, with excellent delineation of specific brain structures, were obtained using these 2 MRI-based estimates. An MRI-based MC algorithm was implemented for an integrated MR-PET scanner. High-temporal-resolution MRI-derived motion estimates (obtained while simultaneously acquiring anatomic or functional MRI data) can be used for PET MC. An MRI-based MC method has the potential to improve PET image quality, increasing its reliability, reproducibility, and quantitative accuracy, and to benefit many neurologic applications.
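
    The LOR-space correction can be pictured with a small Python sketch: both endpoints of each line of response are mapped back to the reference head position using the MRI-derived rigid-body pose. The pose and endpoint arrays below are illustrative stand-ins, not the scanner's geometry.

        import numpy as np

        def correct_lors(p1, p2, R, t):
            """p1, p2: (N, 3) LOR endpoints; R: 3x3 rotation; t: (3,) translation.
            If x_meas = R @ x_ref + t, then x_ref = R.T @ (x_meas - t); in
            row-vector form that is (x_meas - t) @ R."""
            return (p1 - t) @ R, (p2 - t) @ R

        theta = np.deg2rad(5.0)                # example: 5 degree head rotation about z
        R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0,            0.0,           1.0]])
        t = np.array([2.0, 0.0, 1.0])          # mm
        rng = np.random.default_rng(4)
        p1 = rng.uniform(-100, 100, (5, 3))
        p2 = rng.uniform(-100, 100, (5, 3))
        q1, q2 = correct_lors(p1, p2, R, t)
        print(q1[0], q2[0])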

  4. Predictive Heterosis in Multibreed Evaluations Using Quantitative and Molecular Approaches

    USDA-ARS?s Scientific Manuscript database

    Heterosis is the extra genetic boost in performance obtained by crossing two cattle breeds. It is an important tool for increasing the efficiency of beef production. It is also important to adjust data used to calculate genetic evaluations for differences in heterosis. Good estimates of heterosis...

  5. Dual respiratory and cardiac motion estimation in PET imaging: Methods design and quantitative evaluation.

    PubMed

    Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W

    2018-04-01

    The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory-gated-only and cardiac-gated-only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation, while the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVFs using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets at two further noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images, and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for noisy cases in terms of quantitative accuracy of the estimated MVF. Methods 4 and 3 showed comparable results and achieved RMSEs up to 35% lower than Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates that separate R&C estimation with modeling of RM before CM estimation (Method 3) is the best option for accurate estimation of dual R&C motion in clinical situations. © 2018 American Association of Physicists in Medicine.

  6. Unbiased estimation of oceanic mean rainfall from satellite borne radiometer measurements

    NASA Technical Reports Server (NTRS)

    Mittal, M. C.

    1981-01-01

    The statistical properties of the radar-derived rainfall obtained during the GARP Atlantic Tropical Experiment (GATE) are used to derive quantitative estimates of the spatial and temporal sampling errors associated with estimating rainfall from brightness temperature measurements such as would be obtained from a satellite-borne microwave radiometer employing a practical-size antenna aperture. A basis for a method of correcting the so-called beam-filling problem, i.e., the effect of nonuniformity of rainfall over the radiometer beamwidth, is provided. The method presented employs the statistical properties of the observations themselves without need for physical assumptions beyond those associated with the radiative transfer model. The simulation results presented offer a validation of the estimated accuracy that can be achieved, and the graphs included permit evaluation of the effect of the antenna resolution on both the temporal and spatial sampling errors.

  7. Measurement of regional cerebral blood flow with copper-62-PTSM and a three-compartment model.

    PubMed

    Okazawa, H; Yonekura, Y; Fujibayashi, Y; Mukai, T; Nishizawa, S; Magata, Y; Ishizu, K; Tamaki, N; Konishi, J

    1996-07-01

    We evaluated quantitatively 62Cu-labeled pyruvaldehyde bis(N4-methylthiosemicarbazone) copper II (62Cu-PTSM) as a brain perfusion tracer for positron emission tomography (PET). For quantitative measurement, the octanol extraction method is needed to correct for arterial radioactivity in estimating the lipophilic input function, but the procedure is not practical for clinical studies. To measure regional cerebral blood flow (rCBF) by 62Cu-PTSM with simple arterial blood sampling, a standard curve of the octanol extraction ratio and a three-compartment model were applied. We performed both 15O-labeled water PET and 62Cu-PTSM PET with dynamic data acquisition and arterial sampling in six subjects. Data obtained in 10 subjects studied previously were used for the standard octanol extraction curve. Arterial activity was measured and corrected to obtain the true input function using the standard curve. Graphical analysis (Gjedde-Patlak plot), with the data for each subject fitted by a straight regression line, suggested that 62Cu-PTSM can be analyzed by the three-compartment model with negligible K4. Using this model, K1-K3 were estimated from curve fitting of the cerebral time-activity curve and the corrected input function. The fractional uptake of 62Cu-PTSM was corrected to rCBF with the individual extraction at steady state calculated from K1-K3. The influx rates (Ki) obtained from the three-compartment model and graphical analyses were compared for validation of the model. A comparison of rCBF values obtained from 62Cu-PTSM and 15O-water studies demonstrated excellent correlation. The results suggest the potential feasibility of quantitation of cerebral perfusion with 62Cu-PTSM accompanied by dynamic PET and simple arterial sampling.
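
    A hedged sketch of the Gjedde-Patlak graphical analysis mentioned above: for a tracer with negligible efflux from the trapped compartment, Ct/Cp plotted against ∫Cp/Cp becomes linear, and the slope estimates the influx rate Ki. All curves below are synthetic.

        import numpy as np

        t = np.linspace(0.1, 10.0, 100)                  # minutes
        Cp = 50.0 * np.exp(-0.8 * t) + 5.0               # toy plasma input
        int_Cp = np.cumsum(Cp) * (t[1] - t[0])           # running integral of Cp
        Ki_true, V0 = 0.12, 0.3
        Ct = Ki_true * int_Cp + V0 * Cp                  # irreversible-uptake tissue curve

        x = int_Cp / Cp                                  # "Patlak time"
        y = Ct / Cp
        start = np.searchsorted(x, 2.0)                  # fit the late, linear portion
        Ki, intercept = np.polyfit(x[start:], y[start:], 1)
        print(f"Ki = {Ki:.3f} /min (true {Ki_true})")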

  8. A comparison of manual and quantitative elbow strength testing.

    PubMed

    Shahgholi, Leili; Bengtson, Keith A; Bishop, Allen T; Shin, Alexander Y; Spinner, Robert J; Basford, Jeffrey R; Kaufman, Kenton R

    2012-10-01

    The aim of this study was to compare the clinical ratings of elbow strength obtained by skilled clinicians with objective strength measurement obtained through quantitative testing. A retrospective comparison of subject clinical records with quantitative strength testing results in a motion analysis laboratory was conducted. A total of 110 individuals between the ages of 8 and 65 yrs with traumatic brachial plexus injuries were identified. Patients underwent manual muscle strength testing as assessed on the 5-point British Medical Research Council Scale (5/5, normal; 0/5, absent) and quantitative elbow flexion and extension strength measurements. A total of 92 subjects had elbow flexion testing. Half of the subjects clinically assessed as having normal (5/5) elbow flexion strength on manual muscle testing exhibited less than 42% of their age-expected strength on quantitative testing. Eighty-four subjects had elbow extension strength testing. Similarly, half of those displaying normal elbow extension strength on manual muscle testing were found to have less than 62% of their age-expected values on quantitative testing. Significant differences between manual muscle testing and quantitative findings were not detected for the lesser (0-4) strength grades. Manual muscle testing, even when performed by experienced clinicians, may be more misleading than expected for subjects graded as having normal (5/5) strength. Manual muscle testing estimates for the lesser strength grades (1-4/5) seem reasonably accurate.

  9. Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays.

    PubMed

    Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick

    2013-01-01

    Ecotoxicological bioassays, e.g. based on Danio rerio teratogenicity (DarT) or the acute luminescence inhibition with Vibrio fischeri, could potentially lead to significant benefits for detecting on-site contamination on a qualitative or semi-quantitative basis. The aim was to use the observed effects of two ecotoxicological assays for estimating the extent of a Benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with Benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, though the global pattern is close to it. This suggests that MaxEnt is a valuable method for building a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand. Copyright © 2012 Elsevier Ltd. All rights reserved.
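
    One common computational route to a maximum-entropy-style bivariate table with fixed marginals is iterative proportional fitting, which yields the minimum-KL (entropy-maximizing relative to the seed) table; the seed counts and marginals below are invented, and the paper's actual MaxEnt formulation may differ in its constraints.

        import numpy as np

        # Iterative proportional fitting of a toxicity x concentration table.
        seed = np.array([[8.0, 2.0, 0.0],          # co-observations: toxicity class
                         [3.0, 6.0, 2.0],          # (rows) vs. Benzene class (cols)
                         [0.0, 1.0, 7.0]]) + 0.1   # small offset avoids frozen zeros
        row_target = np.array([0.4, 0.35, 0.25])   # known toxicity marginal
        col_target = np.array([0.5, 0.3, 0.2])     # known concentration marginal

        P = seed / seed.sum()
        for _ in range(200):
            P *= (row_target / P.sum(axis=1))[:, None]
            P *= (col_target / P.sum(axis=0))[None, :]
        print(np.round(P, 3))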

  10. Measurement of the Dynamic Shear Modulus of Mouse Brain Tissue In Vivo By Magnetic Resonance Elastography

    PubMed Central

    Atay, Stefan M.; Kroenke, Christopher D.; Sabet, Arash; Bayly, Philip V.

    2008-01-01

    In this study, the magnetic resonance elastography (MRE) technique was used to estimate the dynamic shear modulus of mouse brain tissue in vivo. The technique allows visualization and measurement of mechanical shear waves excited by lateral vibration of the skull. Quantitative measurements of displacement in three dimensions (3-D) during vibration at 1200 Hz were obtained by applying oscillatory magnetic field gradients at the same frequency during an MR imaging sequence. Contrast in the resulting phase images of the mouse brain is proportional to displacement. To obtain estimates of shear modulus, measured displacement fields were fitted to the shear wave equation. Validation of the procedure was performed on gel characterized by independent rheometry tests and on data from finite element simulations. Brain tissue is, in reality, viscoelastic and nonlinear. The current estimates of dynamic shear modulus are strictly relevant only to small oscillations at a specific frequency, but these estimates may be obtained at high frequencies (and thus high deformation rates), non-invasively throughout the brain. These data complement measurements of nonlinear viscoelastic properties obtained by others at slower rates, either ex vivo or invasively. PMID:18412500

  11. Performance prediction of electrohydrodynamic thrusters by the perturbation method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shibata, H., E-mail: shibata@daedalus.k.u-tokyo.ac.jp; Watanabe, Y.; Suzuki, K.

    2016-05-15

    In this paper, we present a novel method for analyzing electrohydrodynamic (EHD) thrusters. The method is based on a perturbation technique applied to a set of drift-diffusion equations, similar to the one introduced in our previous study on estimating breakdown voltage. The thrust-to-current ratio is generalized to represent the performance of EHD thrusters. We have compared the thrust-to-current ratio obtained theoretically with that obtained from the proposed method under atmospheric air conditions, and we have obtained good quantitative agreement. Also, we have conducted a numerical simulation in more complex thruster geometries, such as the dual-stage thruster developed by Masuyama and Barrett [Proc. R. Soc. A 469, 20120623 (2013)]. We quantitatively clarify the fact that if the magnitude of a third electrode voltage is low, the effective gap distance shortens, whereas if the magnitude of the third electrode voltage is sufficiently high, the effective gap distance lengthens.

  12. Contact inspection of Si nanowire with SEM voltage contrast

    NASA Astrophysics Data System (ADS)

    Ohashi, Takeyoshi; Yamaguchi, Atsuko; Hasumi, Kazuhisa; Ikota, Masami; Lorusso, Gian; Horiguchi, Naoto

    2018-03-01

    A methodology to evaluate the electrical contact between the nanowire (NW) and source/drain (SD) in NW FETs was investigated with SEM voltage contrast (VC). Electrical defects were robustly detected by VC, and the validity of the inspection result was verified by TEM physical observations. Moreover, estimation of the parasitic resistance and capacitance was achieved from quantitative analysis of VC images acquired with different scan conditions of the electron beam (EB). A model considering the dynamics of EB-induced charging was proposed to calculate the VC. The resistance and capacitance can be determined by comparing the model-based VC with the experimentally obtained VC. Quantitative estimation of resistance and capacitance would be valuable not only for more accurate inspection, but also for identification of the defect point.

  13. Quantitative Comparison of PET and Bremsstrahlung SPECT for Imaging the In Vivo Yttrium-90 Microsphere Distribution after Liver Radioembolization

    PubMed Central

    Elschot, Mattijs; Vermolen, Bart J.; Lam, Marnix G. E. H.; de Keizer, Bart; van den Bosch, Maurice A. A. J.; de Jong, Hugo W. A. M.

    2013-01-01

    Background After yttrium-90 (90Y) microsphere radioembolization (RE), evaluation of extrahepatic activity and liver dosimetry is typically performed on 90Y Bremsstrahlung SPECT images. Since these images demonstrate a low quantitative accuracy, 90Y PET has been suggested as an alternative. The aim of this study is to quantitatively compare SPECT and state-of-the-art PET on the ability to detect small accumulations of 90Y and on the accuracy of liver dosimetry. Methodology/Principal Findings SPECT/CT and PET/CT phantom data were acquired using several acquisition and reconstruction protocols, including resolution recovery and Time-Of-Flight (TOF) PET. Image contrast and noise were compared using a torso-shaped phantom containing six hot spheres of various sizes. The ability to detect extra- and intrahepatic accumulations of activity was tested by quantitative evaluation of the visibility and unique detectability of the phantom hot spheres. Image-based dose estimates of the phantom were compared to the true dose. For clinical illustration, the SPECT and PET-based estimated liver dose distributions of five RE patients were compared. At equal noise level, PET showed higher contrast recovery coefficients than SPECT. The highest contrast recovery coefficients were obtained with TOF PET reconstruction including resolution recovery. All six spheres were consistently visible on SPECT and PET images, but PET was able to uniquely detect smaller spheres than SPECT. TOF PET-based estimates of the dose in the phantom spheres were more accurate than SPECT-based dose estimates, with underestimations ranging from 45% (10-mm sphere) to 11% (37-mm sphere) for PET, and 75% to 58% for SPECT, respectively. The differences between TOF PET and SPECT dose-estimates were supported by the patient data. Conclusions/Significance In this study we quantitatively demonstrated that the image quality of state-of-the-art PET is superior over Bremsstrahlung SPECT for the assessment of the 90Y microsphere distribution after radioembolization. PMID:23405207

  14. A comparative quantitative analysis of the IDEAL (iterative decomposition of water and fat with echo asymmetry and least-squares estimation) and the CHESS (chemical shift selection suppression) techniques in 3.0 T L-spine MRI

    NASA Astrophysics Data System (ADS)

    Kim, Eng-Chan; Cho, Jae-Hwan; Kim, Min-Hye; Kim, Ki-Hong; Choi, Cheon-Woong; Seok, Jong-min; Na, Kil-Ju; Han, Man-Seok

    2013-03-01

    This study was conducted on 20 patients who had undergone pedicle screw fixation between March and December 2010 to quantitatively compare a conventional fat suppression technique, CHESS (chemical shift selection suppression), and a new technique, IDEAL (iterative decomposition of water and fat with echo asymmetry and least squares estimation). The general efficacy and usefulness of the IDEAL technique was also evaluated. Fat-suppressed transverse-relaxation-weighted images and longitudinal-relaxation-weighted images were obtained before and after contrast injection by using these two techniques with a 1.5T MR (magnetic resonance) scanner. The obtained images were analyzed for image distortion, susceptibility artifacts and homogeneous fat removal in the target region. The results showed that image distortion due to the susceptibility artifacts caused by implanted metal was lower in the images obtained using the IDEAL technique compared to those obtained using the CHESS technique. The results of a qualitative analysis also showed that, compared to the CHESS technique, fewer susceptibility artifacts and more homogeneous fat removal were found in the images obtained using the IDEAL technique in a comparative image evaluation of the axial plane images before and after contrast injection. In summary, compared to the CHESS technique, the IDEAL technique showed a lower occurrence of susceptibility artifacts caused by metal and lower image distortion. In addition, more homogeneous fat removal was shown by the IDEAL technique.

  15. Real-Time Estimation of Amplitude and Group Delay Distortion in a PSK Line-of-Sight Communications Channel.

    DTIC Science & Technology

    1984-06-01

    appears to have a progressively more definitive concave minimum as the amount of distortion in the channel increases. These measurements illustrate...apparent nonlinear behavior in this relationship, it might not be possible to obtain a useful quantitative characterization. The next logical step in

  16. Evaluation of macrozone dimensions by ultrasound and EBSD techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreau, Andre, E-mail: Andre.Moreau@cnrc-nrc.gc.ca; Toubal, Lotfi; Ecole de technologie superieure, 1100, rue Notre-Dame Ouest, Montreal, QC, Canada H3C 1K3

    2013-01-15

    Titanium alloys are known to have texture heterogeneities, i.e. regions much larger than the grain dimensions, where the local orientation distribution of the grains differs from one region to the next. The electron backscattering diffraction (EBSD) technique is the method of choice to characterize these macro regions, which are called macrozones. Qualitatively, the images obtained by EBSD show that these macrozones may be larger or smaller, elongated or equiaxed. However, often no well-defined boundaries are observed between the macrozones and it is very hard to obtain objective and quantitative estimates of the macrozone dimensions from these data. In the present work, we present a novel, non-destructive ultrasonic technique that provides objective and quantitative characteristic dimensions of the macrozones. The obtained dimensions are based on the spatial autocorrelation function of fluctuations in the sound velocity. Thus, a pragmatic definition of macrozone dimensions naturally arises from the ultrasonic measurement. This paper has three objectives: 1) to disclose the novel, non-destructive ultrasonic technique to measure macrozone dimensions, 2) to propose a quantitative and objective definition of macrozone dimensions adapted to and arising from the ultrasonic measurement, and which is also applicable to the orientation data obtained by EBSD, and 3) to compare the macrozone dimensions obtained using the two techniques on two samples of the near-alpha titanium alloy IMI834. In addition, it was observed that macrozones may present a semi-periodical arrangement. Highlights: • Discloses a novel, ultrasonic NDT technique to measure macrozone dimensions. • Proposes a quantitative and objective definition of macrozone dimensions. • Compares macrozone dimensions obtained using EBSD and ultrasonics on two Ti samples. • Observes that macrozones may have a semi-periodical arrangement.

  17. MR-assisted PET Motion Correction for Neurological Studies in an Integrated MR-PET Scanner

    PubMed Central

    Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B.; Michel, Christian J.; El Fakhri, Georges; Schmand, Matthias; Sorensen, A. Gregory

    2011-01-01

    Head motion is difficult to avoid in long PET studies, degrading the image quality and offsetting the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MR data can be used for motion tracking. In this work, a novel data processing and rigid-body motion correction (MC) algorithm for the MR-compatible BrainPET prototype scanner is described, and proof-of-principle phantom and human studies are presented. Methods To account for motion, the PET prompt and random coincidences as well as the sensitivity data are processed in the line-of-response (LOR) space according to the MR-derived motion estimates. After sinogram-space rebinning, the corrected data are summed, and the motion-corrected PET volume is reconstructed from these sinograms and the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed and motion estimates were obtained using two high-temporal-resolution MR-based motion-tracking techniques. Results After accounting for the physical mismatch between the two scanners, perfectly co-registered MR and PET volumes are reproducibly obtained. The MR output gates inserted into the PET list-mode data allow the temporal correlation of the two data sets within 0.2 s. The Hoffman phantom volume reconstructed by processing the PET data in the LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in the image space, demonstrating the quantitative accuracy of the novel MC algorithm. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 seconds and 20 ms, respectively. Substantially improved PET images with excellent delineation of specific brain structures were obtained after applying the MC using these MR-based estimates. Conclusion A novel MR-based MC algorithm was developed for the integrated MR-PET scanner. High-temporal-resolution MR-derived motion estimates (obtained while simultaneously acquiring anatomical or functional MR data) can be used for PET MC. An MR-based MC has the potential to improve PET as a quantitative method, increasing its reliability and reproducibility, which could benefit a large number of neurological applications. PMID:21189415

  18. Quantitative Functional Imaging Using Dynamic Positron Computed Tomography and Rapid Parameter Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Koeppe, Robert Allen

    Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations were compared to those predicted from the expired air and venous blood samples. The glucose analog (18)F-3-deoxy-3-fluoro-D-glucose (3-FDG) was used for quantitating the membrane transport rate of glucose. The measured data indicated that the phosphorylation rate of 3-FDG was low enough to allow accurate estimation of the transport rate using a two compartment model.
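
    The pixel-wise estimation can be illustrated with a one-tissue (two-compartment) model, dC/dt = K1·Ca(t) − k2·C(t). The sketch below is an assumed standard form, not the dissertation's exact estimator; it fits K1 and k2 to a single pixel's time-activity curve given a sampled arterial input Ca on a uniform time grid:

```python
import numpy as np
from scipy.optimize import curve_fit

def tissue_curve(t, K1, k2, Ca):
    """One-tissue (two-compartment) model, dC/dt = K1*Ca(t) - k2*C(t),
    solved as C(t) = K1 * (Ca convolved with exp(-k2*t))."""
    dt = t[1] - t[0]
    return K1 * np.convolve(Ca, np.exp(-k2 * t))[: len(t)] * dt

def fit_pixel(t, Ca, C_meas):
    """Fit K1 (flow-related) and k2 (clearance) for one pixel's curve;
    K1/k2 approximates the tissue:blood partition coefficient."""
    model = lambda t, K1, k2: tissue_curve(t, K1, k2, Ca)
    (K1, k2), _ = curve_fit(model, t, C_meas, p0=[0.5, 0.1],
                            bounds=(0, np.inf))
    return K1, k2
```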

  19. Comparison of quantitative myocardial perfusion imaging CT to fluorescent microsphere-based flow from high-resolution cryo-images

    NASA Astrophysics Data System (ADS)

    Eck, Brendan L.; Fahmi, Rachid; Levi, Jacob; Fares, Anas; Wu, Hao; Li, Yuemeng; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

    Myocardial perfusion imaging using CT (MPI-CT) has the potential to provide quantitative measures of myocardial blood flow (MBF) which can aid the diagnosis of coronary artery disease. We evaluated the quantitative accuracy of MPI-CT in a porcine model of balloon-induced LAD coronary artery ischemia guided by fractional flow reserve (FFR). We quantified MBF at baseline (FFR=1.0) and under moderate ischemia (FFR=0.7) using MPI-CT and compared to fluorescent microsphere-based MBF from high-resolution cryo-images. Dynamic, contrast-enhanced CT images were obtained using a spectral detector CT (Philips Healthcare). Projection-based mono-energetic images were reconstructed and processed to obtain MBF. Three MBF quantification approaches were evaluated: singular value decomposition (SVD) with fixed Tikhonov regularization (ThSVD), SVD with regularization determined by the L-Curve criterion (LSVD), and Johnson-Wilson parameter estimation (JW). The three approaches over-estimated MBF compared to cryo-images. JW produced the most accurate MBF, with average error 33.3 ± 19.2 mL/min/100g, whereas LSVD and ThSVD had greater over-estimation, 59.5 ± 28.3 mL/min/100g and 78.3 ± 25.6 mL/min/100g, respectively. Relative blood flow as assessed by a flow ratio of LAD-to-remote myocardium was strongly correlated between JW and cryo-imaging, with R² = 0.97, compared to R² = 0.88 and 0.78 for LSVD and ThSVD, respectively. We assessed tissue impulse response functions (IRFs) from each approach for sources of error. While JW was constrained to physiologic solutions, both LSVD and ThSVD produced IRFs with non-physiologic properties due to noise. The L-curve provided noise-adaptive regularization but did not eliminate non-physiologic IRF properties or optimize for MBF accuracy. These findings suggest that model-based MPI-CT approaches may be more appropriate for quantitative MBF estimation and that cryo-imaging can support the development of MPI-CT by providing spatial distributions of MBF.
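
    A hedged sketch of the SVD deconvolution family evaluated here (the exact ThSVD/LSVD settings are not given above): build the convolution matrix from the arterial input function, damp small singular values with a Tikhonov filter, and take the peak of the recovered impulse response as the flow estimate. The regularization weight below is a placeholder.

```python
import numpy as np

def svd_mbf(aif, tissue, dt, lam=0.2):
    """Tikhonov-regularized SVD deconvolution (sketch). A is the lower-
    triangular convolution matrix built from the arterial input function;
    the recovered impulse response peaks at a value proportional to MBF."""
    n = len(aif)
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    filt = s / (s**2 + (lam * s.max())**2)   # damps noise-dominated components
    irf = Vt.T @ (filt * (U.T @ tissue))
    return irf.max()
```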

  20. Light scattering application for quantitative estimation of apoptosis

    NASA Astrophysics Data System (ADS)

    Bilyy, Rostyslav O.; Stoika, Rostyslav S.; Getman, Vasyl B.; Bilyi, Olexander I.

    2004-05-01

    Estimation of cell proliferation and apoptosis is a focus of instrumental methods used in modern biomedical sciences. The present study concerns monitoring of the functional state of cells, specifically the development of their programmed death, or apoptosis. Available methods for this purpose are either very expensive or require time-consuming operations, and their specificity and sensitivity are frequently insufficient for conclusions usable in diagnostics or treatment monitoring. We propose a novel method for apoptosis measurement based on quantitative determination of the cellular functional state, taking the physical characteristics of the cells into account. The method uses a patented device, the laser microparticle analyser PRM-6, to analyze light scattering by microparticles, including cells. It offers quick, quantitative, simple (no complicated preliminary cell processing), and relatively cheap measurement of apoptosis in a cell population. The method was used to study apoptosis in murine leukemia cells of the L1210 line and human lymphoblastic leukemia cells of the K562 line. The results permitted measuring the cell number in a tested sample and detecting and quantitatively characterizing the functional state of the cells, particularly the ratio of apoptotic cells in suspension.

  1. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
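
    The covariance decomposition described above can be written compactly: for a balance B = cᵀx, with coefficient signs c (+1 for intake terms, −1 for losses) and term covariance matrix Σ, Var(B) = cᵀΣc, and the covariance share is what remains after the diagonal (pure-variance) part. A minimal sketch, with variable names as illustrative assumptions:

```python
import numpy as np

def balance_error(coeffs, cov):
    """Variance of a balance B = c^T x given the covariance matrix of the
    measured terms. Returns the total variance and the fraction contributed
    by covariance (interaction) terms."""
    c = np.asarray(coeffs, float)
    total = c @ cov @ c
    diagonal = np.sum(c**2 * np.diag(cov))
    return total, 1.0 - diagonal / total
```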

  2. Estimation of brittleness indices for pay zone determination in a shale-gas reservoir by using elastic properties obtained from micromechanics

    NASA Astrophysics Data System (ADS)

    Lizcano-Hernández, Edgar G.; Nicolás-López, Rubén; Valdiviezo-Mijangos, Oscar C.; Meléndez-Martínez, Jaime

    2018-04-01

    The brittleness indices (BI) of gas-shales are computed by using their effective mechanical properties obtained from micromechanical self-consistent modeling with the purpose of assisting in the identification of the more-brittle regions in shale-gas reservoirs, i.e., the so-called ‘pay zone’. The obtained BI are plotted in lambda-rho versus mu-rho (λρ vs. μρ) and Young’s modulus versus Poisson’s ratio (E vs. ν) ternary diagrams along with the estimated elastic properties from log data of three productive shale-gas wells where the pay zone is already known. A quantitative comparison between the obtained BI and the well log data allows for the delimitation of regions where BI values could indicate the best reservoir target in regions with the highest shale-gas exploitation potential. Therefore, a range of values for elastic properties and brittleness indices that can be used as a data source to support the well placement procedure is obtained.

  3. Secular trends of infectious disease mortality in The Netherlands, 1911-1978: quantitative estimates of changes coinciding with the introduction of antibiotics.

    PubMed

    Mackenbach, J P; Looman, C W

    1988-09-01

    Secular trends of mortality from 21 infectious diseases in the Netherlands were studied by inspection of age/sex-standardized mortality curves and by log-linear regression analysis. An attempt was made to obtain quantitative estimates for changes coinciding with the introduction of antibiotics. Two possible types of effect were considered: a sharp reduction of mortality at the moment of the introduction of antibiotics, and a longer lasting (acceleration of) mortality decline after the introduction. Changes resembling the first type of effect were possibly present for many infectious diseases, but were difficult to measure exactly, due to late effects on mortality of World War II. Changes resembling the second type of effect were present in 16 infectious diseases and were sometimes quite large. For example, estimated differences in per cent per annum mortality change were 10% or larger for puerperal fever, scarlet fever, rheumatic fever, erysipelas, otitis media, tuberculosis, and bacillary dysentery. No acceleration of mortality decline after the introduction of antibiotics was present in mortality from 'all other diseases'. Although the exact contribution of antibiotics to the observed changes cannot be inferred from this time trend analysis, the quantitative estimates of the changes show that even a partial contribution would represent a substantial effect of antibiotics on mortality from infectious diseases in the Netherlands.
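
    A sketch of the log-linear measure used here: fitting log(rate) against year gives a per cent per annum change of (e^b − 1)·100, and the difference of this quantity across a cut-off year quantifies the acceleration of decline. The cut-off year in the usage comment is hypothetical, not taken from the paper.

```python
import numpy as np

def percent_per_annum(years, rates):
    """Per cent per annum change from a log-linear fit log(rate) = a + b*year."""
    b = np.polyfit(years, np.log(rates), 1)[0]
    return (np.exp(b) - 1.0) * 100.0

def decline_difference(years, rates, cutoff):
    """Change in annual mortality decline across a cut-off year
    (e.g. the year antibiotics were introduced; disease-specific)."""
    years, rates = np.asarray(years), np.asarray(rates, float)
    pre = percent_per_annum(years[years < cutoff], rates[years < cutoff])
    post = percent_per_annum(years[years >= cutoff], rates[years >= cutoff])
    return post - pre
```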

  4. The effects of dominance, regular inbreeding and sampling design on Q(ST), an estimator of population differentiation for quantitative traits.

    PubMed

    Goudet, Jérôme; Büchi, Lucie

    2006-02-01

    To test whether quantitative traits are under directional or homogenizing selection, it is common practice to compare population differentiation estimates at molecular markers (F(ST)) and quantitative traits (Q(ST)). If the trait is neutral and its determinism is additive, then theory predicts that Q(ST) = F(ST), while Q(ST) > F(ST) is predicted under directional selection for different local optima, and Q(ST) < F(ST) is predicted under homogenizing selection. However, nonadditive effects can alter these predictions. Here, we investigate the influence of dominance on the relation between Q(ST) and F(ST) for neutral traits. Using analytical results and computer simulations, we show that dominance generally deflates Q(ST) relative to F(ST). Under inbreeding, the effect of dominance vanishes, and we show that for selfing species, a better estimate of Q(ST) is obtained from selfed families than from half-sib families. We also compare several sampling designs and find that it is always best to sample many populations (>20) with few families (five) rather than few populations with many families. Provided that estimates of Q(ST) are derived from individuals originating from many populations, we conclude that the pattern Q(ST) > F(ST), and hence the inference of directional selection for different local optima, is robust to the effect of nonadditive gene actions.
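
    For reference, the standard estimator compared against F(ST) in such studies is Q(ST) = σ²b / (σ²b + 2σ²w), with σ²b the between-population and σ²w the within-population additive genetic variance; a one-line sketch:

```python
def q_st(sigma2_between, sigma2_within):
    """Q_ST for an outcrossing species; compare with marker-based F_ST.
    Q_ST > F_ST suggests directional selection for different local optima,
    Q_ST < F_ST homogenizing selection (neutral additive expectation: equal)."""
    return sigma2_between / (sigma2_between + 2.0 * sigma2_within)
```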

  5. Methodology for estimating dietary data from the semi-quantitative food frequency questionnaire of the Mexican National Health and Nutrition Survey 2012.

    PubMed

    Ramírez-Silva, Ivonne; Jiménez-Aguilar, Alejandra; Valenzuela-Bravo, Danae; Martinez-Tapia, Brenda; Rodríguez-Ramírez, Sonia; Gaona-Pineda, Elsa Berenice; Angulo-Estrada, Salomón; Shamah-Levy, Teresa

    2016-01-01

    To describe the methodology used to clean up and estimate dietary intake (DI) data from the Semi-Quantitative Food Frequency Questionnaire (SFFQ) of the Mexican National Health and Nutrition Survey 2012. DI was collected through a short-term SFFQ regarding 140 foods (from October 2011 to May 2012). Energy and nutrient intake was calculated according to a nutrient database constructed specifically for the SFFQ. A total of 133 nutrients including energy and fiber were generated from SFFQ data. Between 4.8 and 9.6% of the survey sample was excluded as a result of the cleaning process. Valid DI data were obtained regarding energy and nutrients consumed by 1 212 pre-school children, 1 323 school children, 1 961 adolescents, 2 027 adults and 526 older adults. We documented the methodology used to clean up and estimate DI from the SFFQ used in national dietary assessments in Mexico.
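
    Conceptually, SFFQ-based intake reduces to a weighted sum over the food list; a minimal sketch (the field layout is an illustrative assumption, not the survey's actual database schema):

```python
def daily_intake(food_items):
    """Nutrient intake from an SFFQ (sketch): each item carries the daily
    consumption frequency, portion size in grams, and nutrient content per
    gram from the questionnaire-specific nutrient database."""
    return sum(freq_per_day * portion_g * nutrient_per_g
               for freq_per_day, portion_g, nutrient_per_g in food_items)

# e.g. energy (kcal/day) from two foods, eaten 0.5 and 2 times per day
energy = daily_intake([(0.5, 150.0, 2.1), (2.0, 60.0, 0.9)])
```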

  6. Quantitative assessment of medical waste generation in the capital city of Bangladesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patwary, Masum A.; O'Hare, William Thomas; Street, Graham

    2009-08-15

    There is a concern that mismanagement of medical waste in developing countries may be a significant risk factor for disease transmission. Quantitative estimation of medical waste generation is needed to estimate the potential risk and as a basis for any waste management plan. Dhaka City, the capital of Bangladesh, is an example of a major city in a developing country where there has been no rigorous estimation of medical waste generation based upon a thorough scientific study. This study used a statistically designed sampling of waste generation in a broad range of Health Care Establishments (HCEs) to indicate that the amount of waste produced in Dhaka can be estimated to be 37 ± 5 tons per day. These estimates were obtained by stringent weighing of waste in a carefully chosen, representative sample of HCEs, including non-residential diagnostic centres. The proportion of this waste that would be classified as hazardous waste by World Health Organisation (WHO) guidelines was found to be approximately 21%. The amount of waste, and the proportion of hazardous waste, was found to vary significantly with the size and type of HCE.

  7. Quantitative Susceptibility Mapping by Inversion of a Perturbation Field Model: Correlation with Brain Iron in Normal Aging

    PubMed Central

    Poynton, Clare; Jenkinson, Mark; Adalsteinsson, Elfar; Sullivan, Edith V.; Pfefferbaum, Adolf; Wells, William

    2015-01-01

    There is increasing evidence that iron deposition occurs in specific regions of the brain in normal aging and neurodegenerative disorders such as Parkinson's, Huntington's, and Alzheimer's disease. Iron deposition changes the magnetic susceptibility of tissue, which alters the MR signal phase, and allows estimation of susceptibility differences using quantitative susceptibility mapping (QSM). We present a method for quantifying susceptibility by inversion of a perturbation model, or ‘QSIP’. The perturbation model relates phase to susceptibility using a kernel calculated in the spatial domain, in contrast to previous Fourier-based techniques. A tissue/air susceptibility atlas is used to estimate B0 inhomogeneity. QSIP estimates in young and elderly subjects are compared to postmortem iron estimates, maps of the Field-Dependent Relaxation Rate Increase (FDRI), and the L1-QSM method. Results for both groups showed excellent agreement with published postmortem data and in-vivo FDRI: statistically significant Spearman correlations ranging from Rho = 0.905 to Rho = 1.00 were obtained. QSIP also showed improvement over FDRI and L1-QSM: reduced variance in susceptibility estimates and statistically significant group differences were detected in striatal and brainstem nuclei, consistent with age-dependent iron accumulation in these regions. PMID:25248179

  8. Principles, performance, and applications of spectral reconstitution (SR) in quantitative analysis of oils by Fourier transform infrared spectroscopy (FT-IR).

    PubMed

    García-González, Diego L; Sedman, Jacqueline; van de Voort, Frederik R

    2013-04-01

    Spectral reconstitution (SR) is a dilution technique developed to facilitate the rapid, automated, and quantitative analysis of viscous oil samples by Fourier transform infrared spectroscopy (FT-IR). This technique involves determining the dilution factor through measurement of an absorption band of a suitable spectral marker added to the diluent, and then spectrally removing the diluent from the sample and multiplying the resulting spectrum to compensate for the effect of dilution on the band intensities. The facsimile spectrum of the neat oil thus obtained can then be qualitatively or quantitatively analyzed for the parameter(s) of interest. The quantitative performance of the SR technique was examined with two transition-metal carbonyl complexes as spectral markers, chromium hexacarbonyl and methylcyclopentadienyl manganese tricarbonyl. The estimation of the volume fraction (VF) of the diluent in a model system, consisting of canola oil diluted to various extents with odorless mineral spirits, served as the basis for assessment of these markers. The relationship between the VF estimates and the true volume fraction (VF(t)) was found to be strongly dependent on the dilution ratio and also depended, to a lesser extent, on the spectral resolution. These dependences are attributable to the effect of changes in matrix polarity on the bandwidth of the ν(CO) marker bands. Excellent VF(t) estimates were obtained by making a polarity correction devised with a variance-spectrum-delineated correction equation. In the absence of such a correction, SR was shown to introduce only a minor and constant bias, provided that polarity differences among all the diluted samples analyzed were minimal. This bias can be built into the calibration of a quantitative FT-IR analytical method by subjecting appropriate calibration standards to the same SR procedure as the samples to be analyzed. The primary purpose of the SR technique is to simplify preparation of diluted samples such that only approximate proportions need to be adhered to, rather than using exact weights or volumes, the marker accounting for minor variations. Additional applications discussed include the use of the SR technique in extraction-based, quantitative, automated FT-IR methods for the determination of moisture, acid number, and base number in lubricating oils, as well as of moisture content in edible oils.
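
    The reconstitution arithmetic can be sketched as follows, assuming additive absorbances and a marker band at which the oil itself does not absorb (both assumptions implied by, though not spelled out in, the description above):

```python
import numpy as np

def reconstitute(s_mix, s_diluent, idx_marker):
    """Spectral reconstitution sketch: the marker band (absent from the oil)
    gives the diluent volume fraction VF by Beer's law, and the facsimile
    neat-oil spectrum is (S_mix - VF*S_diluent) / (1 - VF)."""
    vf = s_mix[idx_marker] / s_diluent[idx_marker]   # dilution factor estimate
    return (s_mix - vf * s_diluent) / (1.0 - vf)     # rescaled neat-oil spectrum
```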

  9. Scatter and veiling glare corrections for quantitative digital subtraction angiography

    NASA Astrophysics Data System (ADS)

    Ersahin, Atila; Molloi, Sabee Y.; Qian, Yao-Jin

    1994-05-01

    In order to quantitate anatomical and physiological parameters such as vessel dimensions and volumetric blood flow, it is necessary to make corrections for scatter and veiling glare (SVG), which are the major sources of nonlinearities in videodensitometric digital subtraction angiography (DSA). A convolution filtering technique has been investigated to estimate SVG distribution in DSA images without the need to sample the SVG for each patient. This technique utilizes exposure parameters and image gray levels to estimate SVG intensity by predicting the total thickness for every pixel in the image. At this point, corrections were also made for variation of SVG fraction with beam energy and field size. To test its ability to estimate SVG intensity, the correction technique was applied to images of a Lucite step phantom, anthropomorphic chest phantom, head phantom, and animal models at different thicknesses, projections, and beam energies. The root-mean-square (rms) percentage errors of these estimates were obtained by comparison with direct SVG measurements made behind a lead strip. The average rms percentage errors in the SVG estimate for the 25 phantom studies and for the 17 animal studies were 6.22% and 7.96%, respectively. These results indicate that the SVG intensity can be estimated for a wide range of thicknesses, projections, and beam energies.
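
    A loose sketch of the convolution-filtering idea (the published technique's kernel and its thickness-dependent scaling are more elaborate than this): model the SVG field as a broad low-pass blur of the detected image, scaled by an SVG fraction, and subtract it. The kernel width and fraction below are placeholder values, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def svg_subtract(image, svg_fraction=0.3, sigma=40.0):
    """Convolution-filtering sketch: estimate the SVG field as a broad blur
    of the detected image scaled by an SVG fraction (in the published method
    the fraction varies with beam energy, field size and thickness)."""
    svg = svg_fraction * gaussian_filter(image, sigma)
    return image - svg   # approximate primary (SVG-corrected) image
```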

  10. Accuracy in the estimation of quantitative minimal area from the diversity/area curve.

    PubMed

    Vives, Sergi; Salicrú, Miquel

    2005-05-01

    The problem of representativity is fundamental in ecological studies. A qualitative minimal area that gives a good representation of species pool [C.M. Bouderesque, Methodes d'etude qualitative et quantitative du benthos (en particulier du phytobenthos), Tethys 3(1) (1971) 79] can be discerned from a quantitative minimal area which reflects the structural complexity of community [F.X. Niell, Sobre la biologia de Ascophyllum nosodum (L.) Le Jolis en Galicia, Invest. Pesq. 43 (1979) 501]. This suggests that the populational diversity can be considered as the value of the horizontal asymptote corresponding to the curve sample diversity/biomass [F.X. Niell, Les applications de l'index de Shannon a l'etude de la vegetation interdidale, Soc. Phycol. Fr. Bull. 19 (1974) 238]. In this study we develop an expression to determine minimal areas and use it to obtain certain information about the community structure based on diversity/area curve graphs. This expression is based on the functional relationship between the expected value of the diversity and the sample size used to estimate it. In order to establish the quality of the estimation process, we obtained the confidence intervals as a particularization of the functional (h-phi)-entropies proposed in [M. Salicru, M.L. Menendez, D. Morales, L. Pardo, Asymptotic distribution of (h,phi)-entropies, Commun. Stat. (Theory Methods) 22 (7) (1993) 2015]. As an example used to demonstrate the possibilities of this method, and only for illustrative purposes, data about a study on the rocky intertidal seaweed populations in the Ria of Vigo (N.W. Spain) are analyzed [F.X. Niell, Estudios sobre la estructura, dinamica y produccion del Fitobentos intermareal (Facies rocosa) de la Ria de Vigo. Ph.D. Mem. University of Barcelona, Barcelona, 1979].
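
    A hedged sketch of the minimal-area idea using a generic saturating diversity/area curve (the paper derives its own expression from (h,phi)-entropy theory; the hyperbolic form here is only illustrative): fit diversity against sampled area, then report the area at which diversity reaches a chosen fraction of the asymptote.

```python
import numpy as np
from scipy.optimize import curve_fit

def minimal_area(areas, diversities, frac=0.95):
    """Fit a saturating diversity/area curve H(A) = Hmax*A/(k+A) and return
    the area at which diversity reaches `frac` of the asymptote Hmax
    (closed form: A* = frac*k/(1-frac))."""
    f = lambda A, Hmax, k: Hmax * A / (k + A)
    (Hmax, k), _ = curve_fit(f, areas, diversities,
                             p0=[max(diversities), np.median(areas)])
    return frac * k / (1.0 - frac), Hmax
```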

  11. Practical technique to quantify small, dense low-density lipoprotein cholesterol using dynamic light scattering

    NASA Astrophysics Data System (ADS)

    Trirongjitmoah, Suchin; Iinaga, Kazuya; Sakurai, Toshihiro; Chiba, Hitoshi; Sriyudthsak, Mana; Shimizu, Koichi

    2016-04-01

    Quantification of small, dense low-density lipoprotein (sdLDL) cholesterol is clinically significant. We propose a practical technique to estimate the amount of sdLDL cholesterol using dynamic light scattering (DLS). An analytical solution in a closed form has newly been obtained to estimate the weight fraction of one species of scatterers in the DLS measurement of two species of scatterers. Using this solution, we can quantify the sdLDL cholesterol amount from the amounts of the low-density lipoprotein cholesterol and the high-density lipoprotein (HDL) cholesterol, which are commonly obtained through clinical tests. The accuracy of the proposed technique was confirmed experimentally using latex spheres with known size distributions. The applicability of the proposed technique was examined using samples of human blood serum. The possibility of estimating the sdLDL amount using the HDL data was demonstrated. These results suggest that the quantitative estimation of sdLDL amounts using DLS is feasible for point-of-care testing in clinical practice.

  12. A quantitative analysis of TIMS data obtained on the Learjet 23 at various altitudes

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1992-01-01

    A series of Thermal Infrared Multispectral Scanner (TIMS) data acquisition flights were conducted on the NASA Learjet 23 at different altitudes over a test site. The objective was to monitor the performance of the TIMS (its estimation of the brightness temperatures of the ground scene) with increasing altitude. The results do not show any significant correlation between the brightness temperatures and the altitude. The analysis indicates that the estimation of the temperatures is a function of the accuracy of the atmospheric correction used for each altitude.

  13. Radar QPE for hydrological design: Intensity-Duration-Frequency curves

    NASA Astrophysics Data System (ADS)

    Marra, Francesco; Morin, Efrat

    2015-04-01

    Intensity-duration-frequency (IDF) curves are widely used in flood risk management since they provide an easy link between the characteristics of a rainfall event and the probability of its occurrence. They are estimated by analyzing the extreme values of rainfall records, usually based on rain gauge data. This point-based approach raises two issues: first, hydrological design applications generally need IDF information for the entire catchment rather than a point; second, the representativeness of point measurements decreases with the distance from the measurement location, especially in regions characterized by steep climatological gradients. Weather radar, providing high resolution distributed rainfall estimates over wide areas, has the potential to overcome these issues. Two objections usually restrain this approach: (i) the short length of data records and (ii) the reliability of quantitative precipitation estimation (QPE) of the extremes. This work explores the potential use of weather radar estimates for the identification of IDF curves by means of a long radar archive and a combined physical and quantitative adjustment of radar estimates. Shacham weather radar, located in the eastern Mediterranean area (Tel Aviv, Israel), has archived data since 1990, providing rainfall estimates for 23 years over a region characterized by strong climatological gradients. Radar QPE is obtained by correcting the effects of pointing errors, ground echoes, beam blockage, attenuation and vertical variations of reflectivity. Quantitative accuracy is then ensured with a range-dependent bias adjustment technique and reliability of radar QPE is assessed by comparison with gauge measurements. IDF curves are derived from the radar data using the annual extremes method and compared with gauge-based curves. Results from 14 study cases will be presented focusing on the effects of record length and QPE accuracy, exploring the potential application of radar IDF curves for ungauged locations and providing insights on the use of radar QPE for hydrological design studies.
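
    The annual-extremes step can be sketched with a method-of-moments Gumbel fit per duration; the T-year intensity is then the Gumbel quantile. This is a standard formulation, assumed rather than taken from the abstract:

```python
import numpy as np

def gumbel_quantile(annual_maxima, return_period):
    """Method-of-moments Gumbel fit to the annual maximum intensities of one
    duration; returns the T-year intensity for the IDF curve."""
    x = np.asarray(annual_maxima, float)
    beta = np.sqrt(6.0) * x.std(ddof=1) / np.pi   # scale parameter
    mu = x.mean() - 0.5772 * beta                 # location parameter
    p = 1.0 - 1.0 / return_period                 # non-exceedance probability
    return mu - beta * np.log(-np.log(p))
```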

  14. A collimator optimization method for quantitative imaging: application to Y-90 bremsstrahlung SPECT.

    PubMed

    Rong, Xing; Frey, Eric C

    2013-08-01

    Post-therapy quantitative 90Y bremsstrahlung single photon emission computed tomography (SPECT) has shown great potential to provide reliable activity estimates, which are essential for dose verification. Typically 90Y imaging is performed with high- or medium-energy collimators. However, the energy spectrum of 90Y bremsstrahlung photons is substantially different than typical for these collimators. In addition, dosimetry requires quantitative images, and collimators are not typically optimized for such tasks. Optimizing a collimator for 90Y imaging is both novel and potentially important. Conventional optimization methods are not appropriate for 90Y bremsstrahlung photons, which have a continuous and broad energy distribution. In this work, the authors developed a parallel-hole collimator optimization method for quantitative tasks that is particularly applicable to radionuclides with complex emission energy spectra. The authors applied the proposed method to develop an optimal collimator for quantitative 90Y bremsstrahlung SPECT in the context of microsphere radioembolization. To account for the effects of the collimator on both the bias and the variance of the activity estimates, the authors used the root mean squared error (RMSE) of the volume of interest activity estimates as the figure of merit (FOM). In the FOM, the bias due to the null space of the image formation process was taken into account. The RMSE was weighted by the inverse mass to reflect the application to dosimetry; for a different application, more relevant weighting could easily be adopted. The authors proposed a parameterization for the collimator that facilitates the incorporation of the important factors (geometric sensitivity, geometric resolution, and septal penetration fraction) determining collimator performance, while keeping the number of free parameters describing the collimator small (i.e., two parameters). To make the optimization results for quantitative 90Y bremsstrahlung SPECT more general, the authors simulated multiple tumors of various sizes in the liver. The authors realistically simulated human anatomy using a digital phantom and the image formation process using a previously validated and computationally efficient method for modeling the image-degrading effects including object scatter, attenuation, and the full collimator-detector response (CDR). The scatter kernels and CDR function tables used in the modeling method were generated using a previously validated Monte Carlo simulation code. The hole length, hole diameter, and septal thickness of the obtained optimal collimator were 84, 3.5, and 1.4 mm, respectively. Compared to a commercial high-energy general-purpose collimator, the optimal collimator improved the resolution and FOM by 27% and 18%, respectively. The proposed collimator optimization method may be useful for improving quantitative SPECT imaging for radionuclides with complex energy spectra. The obtained optimal collimator provided a substantial improvement in quantitative performance for the microsphere radioembolization task considered.

  15. Influence of entanglements on glass transition temperature of polystyrene

    NASA Astrophysics Data System (ADS)

    Ougizawa, Toshiaki; Kinugasa, Yoshinori

    2013-03-01

    Chain entanglement is an essential feature of polymeric molecules and appears to affect many physical properties, not only the melt viscosity but also the glass transition temperature (Tg). Quantitative estimation has not been attained, however, because the entanglement density is considered an intrinsic property of the polymer melt that depends on the chemical structure. Freeze-drying is known as one of the few ways to prepare samples with different entanglement densities from dilute solution. In this study, the influence of entanglements on the Tg of polystyrene obtained by freeze-drying was estimated quantitatively. The freeze-dried samples showed a Tg depression with decreasing concentration of the precursor solution, due to the lower entanglement density, and the depressed Tg saturated when almost no intermolecular entanglements were formed. The molecular weight dependence of the maximum Tg depression is discussed.

  16. Cumulative radiation exposure and associated cancer risk estimates for scoliosis patients: Impact of repetitive full spine radiography.

    PubMed

    Law, Martin; Ma, Wang-Kei; Lau, Damian; Chan, Eva; Yip, Lawrance; Lam, Wendy

    2016-03-01

    To quantitatively evaluate the cumulative effective dose and associated cancer risk for scoliotic patients undergoing repetitive full spine radiography during their diagnosis and follow-up periods. Organ absorbed doses of full spine exposed scoliotic patients at different ages were computer-simulated with the PCXMC software. Gender-specific effective dose was then calculated with the ICRP-103 approach. Values of lifetime attributable cancer risk for patients exposed at different ages were calculated for both patient genders and for Asian and Western populations. Mathematical fitting for effective dose and for lifetime attributable cancer risk, as a function of exposure age, was analytically obtained to quantitatively estimate patient cumulated effective dose and cancer risk. The cumulative effective dose of full spine radiography with posteroanterior and lateral projection for patients exposed annually at ages between 5 and 30 years using a digital radiography system was calculated as 15 mSv. The corresponding cumulative lifetime attributable cancer risk for Asian and Western populations was calculated as 0.08-0.17%. Female scoliotic patients would be at a statistically significantly higher cumulated cancer risk than male patients under the same full spine radiography protocol. We demonstrate the use of computer simulation and analytic formulas to quantitatively obtain the cumulated effective dose and cancer risk at any age of exposure, both of which are valuable information for medical personnel and patients' parents concerned about radiation safety in repetitive full spine radiography. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
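
    A sketch of how a fitted per-examination dose curve yields the cumulative dose for annual exposures between ages 5 and 30. The polynomial form and the dose values used to build it below are hypothetical placeholders, not the paper's fit:

```python
import numpy as np

def cumulative_dose(per_exam_dose, ages=range(5, 31)):
    """Cumulative effective dose for one full-spine examination per year;
    per_exam_dose is a fitted dose-versus-exposure-age function."""
    return sum(per_exam_dose(a) for a in ages)

# Hypothetical fitted per-exam doses (mSv) versus exposure age:
fit = np.poly1d(np.polyfit([5, 10, 20, 30], [0.8, 0.65, 0.5, 0.45], 2))
total_mSv = cumulative_dose(fit)
```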

  17. Stroke onset time estimation from multispectral quantitative magnetic resonance imaging in a rat model of focal permanent cerebral ischemia.

    PubMed

    McGarry, Bryony L; Rogers, Harriet J; Knight, Michael J; Jokivarsi, Kimmo T; Sierra, Alejandra; Gröhn, Olli Hj; Kauppinen, Risto A

    2016-08-01

    Quantitative T2 relaxation magnetic resonance imaging allows estimation of stroke onset time. We aimed to examine the accuracy of quantitative T1 and quantitative T2 relaxation times alone and in combination to provide estimates of stroke onset time in a rat model of permanent focal cerebral ischemia and map the spatial distribution of elevated quantitative T1 and quantitative T2 to assess tissue status. Permanent middle cerebral artery occlusion was induced in Wistar rats. Animals were scanned at 9.4 T for quantitative T1, quantitative T2, and Trace of Diffusion Tensor (Dav) up to 4 h post-middle cerebral artery occlusion. Time courses of differentials of quantitative T1 and quantitative T2 in ischemic and non-ischemic contralateral brain tissue (ΔT1, ΔT2) and volumes of tissue with elevated T1 and T2 relaxation times (f1, f2) were determined. TTC staining was used to highlight permanent ischemic damage. ΔT1, ΔT2, f1, f2, and the volume of tissue with both elevated quantitative T1 and quantitative T2 (V(Overlap)) increased with time post-middle cerebral artery occlusion allowing stroke onset time to be estimated. V(Overlap) provided the most accurate estimate with an uncertainty of ±25 min. At all time-points, regions with elevated relaxation times were smaller than areas with Dav-defined ischemia. Stroke onset time can be determined by quantitative T1 and quantitative T2 relaxation times and tissue volumes. Combining quantitative T1 and quantitative T2 provides the most accurate estimate and potentially identifies irreversibly damaged brain tissue. © 2016 World Stroke Organization.
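
    The onset-time logic can be sketched as inverting a calibration of ΔT2 against time post-occlusion. The linear form and the training numbers below are hypothetical, for illustration only:

```python
import numpy as np

def onset_time(delta_t2, slope, intercept):
    """Invert a calibration dT2(t) = slope*t + intercept to estimate the
    time elapsed since occlusion from a measured T2 elevation."""
    return (delta_t2 - intercept) / slope

# Hypothetical training data: minutes post-occlusion vs dT2 (ms)
t_train = np.array([60., 120., 180., 240.])
dT2_train = np.array([2.1, 4.0, 5.8, 8.1])
slope, intercept = np.polyfit(t_train, dT2_train, 1)
estimate = onset_time(5.0, slope, intercept)   # minutes since onset
```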

  18. Quantitative estimation of landslide risk from rapid debris slides on natural slopes in the Nilgiri hills, India

    NASA Astrophysics Data System (ADS)

    Jaiswal, P.; van Westen, C. J.; Jetten, V.

    2011-06-01

    A quantitative procedure for estimating landslide risk to life and property is presented and applied in a mountainous area in the Nilgiri hills of southern India. Risk is estimated for elements at risk located in both initiation zones and run-out paths of potential landslides. Loss of life is expressed as individual risk and as societal risk using F-N curves, whereas the direct loss of properties is expressed in monetary terms. An inventory of 1084 landslides was prepared from historical records available for the period between 1987 and 2009. A substantially complete inventory was obtained for landslides on cut slopes (1042 landslides), while for natural slopes information on only 42 landslides was available. Most landslides were shallow translational debris slides and debris flowslides triggered by rainfall. On natural slopes most landslides occurred as first-time failures. For landslide hazard assessment the following information was derived: (1) landslides on natural slopes grouped into three landslide magnitude classes, based on landslide volumes, (2) the number of future landslides on natural slopes, obtained by establishing a relationship between the number of landslides on natural slopes and cut slopes for different return periods using a Gumbel distribution model, (3) landslide susceptible zones, obtained using a logistic regression model, and (4) distribution of landslides in the susceptible zones, obtained from the model fitting performance (success rate curve). The run-out distance of landslides was assessed empirically using landslide volumes, and the vulnerability of elements at risk was subjectively assessed based on limited historic incidents. Direct specific risk was estimated individually for tea/coffee and horticulture plantations, transport infrastructures, buildings, and people both in initiation and run-out areas. Risks were calculated by considering the minimum, average, and maximum landslide volumes in each magnitude class and the corresponding minimum, average, and maximum run-out distances and vulnerability values, thus obtaining a range of risk values per return period. The results indicate that the total annual minimum, average, and maximum losses are about US$ 44 000, US$ 136 000, and US$ 268 000, respectively. The maximum risk to population varies from 2.1 × 10⁻¹ yr⁻¹ for one or more lives lost to 6.0 × 10⁻² yr⁻¹ for 100 or more lives lost. The obtained results will provide a basis for planning risk reduction strategies in the Nilgiri area.

  19. SERS quantitative urine creatinine measurement of human subject

    NASA Astrophysics Data System (ADS)

    Wang, Tsuei Lian; Chiang, Hui-hua K.; Lu, Hui-hsin; Hung, Yung-da

    2005-03-01

    The SERS method for biomolecular analysis has several potential advantages over traditional biochemical approaches, including less specimen contact, non-destructiveness to the specimen, and multiple-component analysis. Urine is an easily available body fluid for monitoring the metabolites and renal function of the human body. We developed a surface-enhanced Raman scattering (SERS) technique using 50 nm gold colloidal particles for quantitative human urine creatinine measurement. This paper shows that the SERS shifts of creatinine (104 mg/dl) in artificial urine lie between 1400 cm⁻¹ and 1500 cm⁻¹; this region was analyzed for quantitative creatinine measurement. Ten human urine samples were obtained from ten healthy persons and analyzed by the SERS technique. The partial least squares cross-validation (PLSCV) method was utilized to obtain the estimated creatinine concentration in the clinically relevant (55.9 mg/dl to 208 mg/dl) concentration range. The root-mean-square error of cross-validation (RMSECV) is 26.1 mg/dl. This research demonstrates the feasibility of using SERS for human subject urine creatinine detection, and establishes the SERS platform technique for bodily fluids measurement.
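
    The PLS cross-validation step can be sketched with scikit-learn; the leave-one-out scheme and the component count are assumptions, not necessarily the authors' settings:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def rmsecv(spectra, conc, n_components=3):
    """Leave-one-out PLS cross-validation: RMSECV for predicting creatinine
    concentration from SERS spectra (2D array, one row per sample)."""
    sq_errors = []
    for train, test in LeaveOneOut().split(spectra):
        pls = PLSRegression(n_components=n_components)
        pls.fit(spectra[train], conc[train])
        pred = pls.predict(spectra[test]).ravel()
        sq_errors.append((pred - conc[test]) ** 2)
    return float(np.sqrt(np.mean(sq_errors)))
```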

  20. Optimization of metabolite basis sets prior to quantitation in magnetic resonance spectroscopy: an approach based on quantum mechanics

    NASA Astrophysics Data System (ADS)

    Lazariev, A.; Allouche, A.-R.; Aubert-Frécon, M.; Fauvelle, F.; Piotto, M.; Elbayed, K.; Namer, I.-J.; van Ormondt, D.; Graveron-Demilly, D.

    2011-11-01

    High-resolution magic angle spinning (HRMAS) nuclear magnetic resonance (NMR) is playing an increasingly important role for diagnosis. This technique enables setting up metabolite profiles of ex vivo pathological and healthy tissue. The need to monitor diseases and pharmaceutical follow-up requires an automatic quantitation of HRMAS 1H signals. However, for several metabolites, the values of chemical shifts of proton groups may slightly differ according to the micro-environment in the tissue or cells, in particular to its pH. This hampers the accurate estimation of the metabolite concentrations mainly when using quantitation algorithms based on a metabolite basis set: the metabolite fingerprints are not correct anymore. In this work, we propose an accurate method coupling quantum mechanical simulations and quantitation algorithms to handle basis-set changes. The proposed algorithm automatically corrects mismatches between the signals of the simulated basis set and the signal under analysis by maximizing the normalized cross-correlation between the mentioned signals. Optimized chemical shift values of the metabolites are obtained. This method, QM-QUEST, provides more robust fitting while limiting user involvement and respects the correct fingerprints of metabolites. Its efficiency is demonstrated by accurately quantitating 33 signals from tissue samples of human brains with oligodendroglioma, obtained at 11.7 tesla. The corresponding chemical shift changes of several metabolites within the series are also analyzed.
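
    The mismatch-correction step can be sketched as a search for the shift of a simulated metabolite signal that maximizes its normalized cross-correlation with the measured signal. This sketch uses integer-point circular shifts; the actual QM-QUEST optimization of chemical shift values is continuous and quantum-mechanically constrained:

```python
import numpy as np

def best_shift(simulated, measured, max_shift=50):
    """Shift (in data points) of a simulated metabolite signal that maximizes
    the normalized cross-correlation with the measured signal."""
    m = (measured - measured.mean()) / measured.std()
    best, best_corr = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(simulated, s)
        sn = (shifted - shifted.mean()) / shifted.std()
        corr = np.mean(sn * m)
        if corr > best_corr:
            best, best_corr = s, corr
    return best
```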

  1. Myocardial blood flow estimates from dynamic contrast-enhanced magnetic resonance imaging: three quantitative methods

    NASA Astrophysics Data System (ADS)

    Borrazzo, Cristian; Galea, Nicola; Pacilio, Massimiliano; Altabella, Luisa; Preziosi, Enrico; Carnì, Marco; Ciolina, Federica; Vullo, Francesco; Francone, Marco; Catalano, Carlo; Carbone, Iacopo

    2018-02-01

    Dynamic contrast-enhanced cardiovascular magnetic resonance imaging can be used to quantitatively assess the myocardial blood flow (MBF), recovering the tissue impulse response function for the transit of a gadolinium bolus through the myocardium. Several deconvolution techniques are available, using various models for the impulse response. The method of choice may influence the results, producing differences that have not been deeply investigated yet. Three methods for quantifying myocardial perfusion have been compared: Fermi function modelling (FFM), the Tofts model (TM) and the gamma function model (GF), with the latter traditionally used in brain perfusion MRI. Thirty human subjects were studied at rest as well as under cold pressor test stress (submerging hands in ice-cold water), and a single gadolinium bolus of 0.1 ± 0.05 mmol kg⁻¹ was injected. Perfusion estimate differences between the methods were analysed by paired comparisons with Student’s t-test, linear regression analysis, and Bland-Altman plots, as well as also using the two-way ANOVA, considering the MBF values of all patients grouped according to two categories: calculation method and rest/stress conditions. Perfusion estimates obtained by various methods in both rest and stress conditions were not significantly different, and were in good agreement with the literature. The results obtained during the first-pass transit time (20 s) yielded p-values in the range 0.20-0.28 for Student’s t-test, linear regression analysis slopes between 0.98-1.03, and R values between 0.92-1.01. From the Bland-Altman plots, the paired comparisons yielded a bias (and a 95% CI), expressed as ml/min/g, for FFM versus TM of -0.01 (-0.20, 0.17) or 0.02 (-0.49, 0.52) at rest or under stress respectively, for FFM versus GF of -0.05 (-0.29, 0.20) or -0.07 (-0.55, 0.41) at rest or under stress, and for TM versus GF of -0.03 (-0.30, 0.24) or -0.09 (-0.43, 0.26) at rest or under stress. With the two-way ANOVA, the results were p = 0.20 for the method effect (not significant), p < 0.0001 for the rest/stress condition effect (highly significant, as expected), whereas no interaction resulted between the rest/stress condition and method (p = 0.70, not significant). Considering a wider time-frame (60 s), the estimates for both rest and stress conditions were 25%-30% higher (p in the range 0.016-0.025) than those obtained in the 20 s time-frame. MBF estimates obtained by various methods under rest/stress conditions were not significantly different in the first-pass transit time, encouraging quantitative perfusion estimates in DCE-CMRI with the used methods.
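
    For reference, the Bland-Altman quantities reported above (bias and 95% limits of agreement between two paired estimates) are computed as in this minimal sketch:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired MBF estimates."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```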

  2. Quantitative estimation of cholinesterase-specific drug metabolism of carbamate inhibitors provided by the analysis of the area under the inhibition-time curve.

    PubMed

    Zhou, Huimin; Xiao, Qiaoling; Tan, Wen; Zhan, Yiyi; Pistolozzi, Marco

    2017-09-10

    Several molecules containing carbamate groups are metabolized by cholinesterases. This metabolism includes a time-dependent catalytic step which temporarily inhibits the enzymes. In this paper we demonstrate that the analysis of the area under the inhibition versus time curve (AUIC) can be used to obtain a quantitative estimation of the amount of carbamate metabolized by the enzyme. (R)-bambuterol monocarbamate and plasma butyrylcholinesterase were used as a model carbamate-cholinesterase system. The inhibition of different concentrations of the enzyme was monitored for 5 h upon incubation with different concentrations of carbamate and the resulting AUICs were analyzed. The amount of carbamate metabolized could be estimated with <15% accuracy (RE%) and ≤23% precision (RSD%). Since the knowledge of the inhibition kinetics is not required for the analysis, this approach could be used to determine the amount of drug metabolized by cholinesterases in a selected compartment in which the cholinesterase is confined (e.g. in vitro solutions, tissues or body fluids), either in vitro or in vivo. Copyright © 2017 Elsevier B.V. All rights reserved.
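
    The AUIC itself is a simple numerical integral; a sketch (the calibration constant linking area to amount metabolized must come from experiments like those described above):

```python
import numpy as np

def auic(t, inhibition):
    """Area under the inhibition-versus-time curve (trapezoidal rule).
    Per the paper's premise, the amount of carbamate metabolized scales
    with this area; the scale factor is calibrated experimentally."""
    return np.trapz(inhibition, t)
```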

  3. Quantitative microscopy of the lung: a problem-based approach. Part 2: stereological parameters and study designs in various diseases of the respiratory tract.

    PubMed

    Mühlfeld, Christian; Ochs, Matthias

    2013-08-01

    Design-based stereology provides efficient methods to obtain valuable quantitative information of the respiratory tract in various diseases. However, the choice of the most relevant parameters in a specific disease setting has to be deduced from the present pathobiological knowledge. Often it is difficult to express the pathological alterations by interpretable parameters in terms of volume, surface area, length, or number. In the second part of this companion review article, we analyze the present pathophysiological knowledge about acute lung injury, diffuse parenchymal lung diseases, emphysema, pulmonary hypertension, and asthma to come up with recommendations for the disease-specific application of stereological principles for obtaining relevant parameters. Worked examples with illustrative images are used to demonstrate the work flow, estimation procedure, and calculation and to facilitate the practical performance of equivalent analyses.

  4. Pentacam Scheimpflug quantitative imaging of the crystalline lens and intraocular lens.

    PubMed

    Rosales, Patricia; Marcos, Susana

    2009-05-01

    To implement geometrical and optical distortion correction methods for anterior segment Scheimpflug images obtained with a commercially available system (Pentacam, Oculus Optikgeräte GmbH). Ray tracing algorithms were implemented to obtain corrected ocular surface geometry from the original images captured by the Pentacam's CCD camera. As details of the optical layout were not fully provided by the manufacturer, an iterative procedure (based on imaging of calibrated spheres) was developed to estimate the camera lens specifications. The correction procedure was tested on Scheimpflug images of a physical water cell model eye (with polymethylmethacrylate cornea and a commercial IOL of known dimensions) and of a normal human eye previously measured with a corrected optical and geometrical distortion Scheimpflug camera (Topcon SL-45 [Topcon Medical Systems Inc] from the Vrije University, Amsterdam, Holland). Uncorrected Scheimpflug images show flatter surfaces and thinner lenses than in reality. The application of geometrical and optical distortion correction algorithms improves the accuracy of the estimated anterior lens radii of curvature by 30% to 40% and of the estimated posterior lens radii by 50% to 100%. The average error in the retrieved radii was 0.37 and 0.46 mm for the anterior and posterior lens radii of curvature, respectively, and 0.048 mm for lens thickness. The Pentacam Scheimpflug system can be used to obtain quantitative information on the geometry of the crystalline lens, provided that geometrical and optical distortion correction algorithms are applied, within the accuracy of state-of-the-art phakometry and biometry. The techniques could improve with exact knowledge of the technical specifications of the instrument, improved edge detection algorithms, consideration of aspheric and non-rotationally symmetrical surfaces, and introduction of a crystalline gradient index.

  5. Exposure assessment of tetrafluoroethylene and ammonium perfluorooctanoate 1951-2002.

    PubMed

    Sleeuwenhoek, Anne; Cherrie, John W

    2012-03-01

    To develop a method to reconstruct exposure to tetrafluoroethylene (TFE) and ammonium perfluorooctanoate (APFO) in plants producing polytetrafluoroethylene (PTFE) in the absence of suitable objective measurements. These data were used to inform an epidemiological study being carried out to investigate possible risks in workers employed in the manufacture of PTFE and to study trends in exposure over time. For each plant, detailed descriptions of all occupational titles, including tasks and changes over time, were obtained during semi-structured interviews with key plant personnel. A semi-quantitative assessment method was used to assess inhalation exposure to TFE and inhalation plus dermal exposure to APFO. Temporal trends in exposure to TFE and APFO were investigated. In each plant the highest exposures for both TFE and APFO occurred in the polymerisation area. Due to the introduction of control measures, increasing process automation and other improvements, exposures generally decreased over time. In the polymerisation area, the annual decline in exposure to TFE varied by plant from 3.8 to 5.7% and for APFO from 2.2 to 5.5%. A simple method for assessing exposure was developed which used detailed process information and job descriptions to estimate average annual TFE and APFO exposure on an arbitrary semi-quantitative scale. These semi-quantitative estimates are sufficient to identify relative differences in exposure for the epidemiological study and should good data become available, they could be used to provide quantitative estimates for all plants across the whole period of operation. This journal is © The Royal Society of Chemistry 2012

  6. Properties of young massive clusters obtained with different massive-star evolutionary models

    NASA Astrophysics Data System (ADS)

    Wofford, Aida; Charlot, Stéphane

    We undertake a comprehensive comparative test of seven widely-used spectral synthesis models using multi-band HST photometry of a sample of eight YMCs in two galaxies. We provide a first quantitative estimate of the accuracies and uncertainties of new models, show the good progress of models in fitting high-quality observations, and highlight the need of further comprehensive comparative tests.

  7. Power Analysis of Artificial Selection Experiments Using Efficient Whole Genome Simulation of Quantitative Traits

    PubMed Central

    Kessner, Darren; Novembre, John

    2015-01-01

    Evolve and resequence studies combine artificial selection experiments with massively parallel sequencing technology to study the genetic basis for complex traits. In these experiments, individuals are selected for extreme values of a trait, causing alleles at quantitative trait loci (QTL) to increase or decrease in frequency in the experimental population. We present a new analysis of the power of artificial selection experiments to detect and localize quantitative trait loci. This analysis uses a simulation framework that explicitly models whole genomes of individuals, quantitative traits, and selection based on individual trait values. We find that explicitly modeling QTL provides qualitatively different insights than considering independent loci with constant selection coefficients. Specifically, we observe how interference between QTL under selection affects the trajectories and lengthens the fixation times of selected alleles. We also show that a substantial portion of the genetic variance of the trait (50–100%) can be explained by detected QTL in as little as 20 generations of selection, depending on the trait architecture and experimental design. Furthermore, we show that power depends crucially on the opportunity for recombination during the experiment. Finally, we show that an increase in power is obtained by leveraging founder haplotype information to obtain allele frequency estimates. PMID:25672748

  8. Characterization of Dynamics in Complex Lyophilized Formulations: I. Comparison of Relaxation Times Measured by Isothermal Calorimetry with Data Estimated from the Width of the Glass Transition Temperature Region

    PubMed Central

    Chieng, Norman; Mizuno, Masayasu; Pikal, Michael

    2013-01-01

    The purposes of this study are to characterize the relaxation dynamics in complex freeze dried formulations and to investigate the quantitative relationship between the structural relaxation time as measured by thermal activity monitor (TAM) and that estimated from the width of the glass transition temperature (ΔTg). The latter method has advantages over TAM because it is simple and quick. As part of this objective, we evaluate the accuracy in estimating relaxation time data at higher temperatures (50°C and 60°C) from TAM data at lower temperature (40°C) and glass transition region width (ΔTg) data obtained by differential scanning calorimetry. Formulations studied here were hydroxyethyl starch (HES)-disaccharide, HES-polyol and HES-disaccharide-polyol at various ratios. We also re-examine, using TAM-derived relaxation times, the correlation between protein stability (human growth hormone, hGH) and relaxation times explored in a previous report, which employed relaxation time data obtained from ΔTg. Results show that most of the freeze dried formulations exist in a single amorphous phase, and structural relaxation times were successfully measured for these systems. We find a reasonably good correlation between TAM measured relaxation times and corresponding data obtained from estimates based on ΔTg, but the agreement is only qualitative. The comparison plot showed that TAM data is directly proportional to the 1/3 power of ΔTg data, after correcting for an offset. Nevertheless, the correlation between hGH stability and relaxation time remained qualitatively the same as found using ΔTg-derived relaxation data, and it was found that the modest extrapolation of TAM data to higher temperatures using the ΔTg method and TAM data at 40°C resulted in quantitative agreement with TAM measurements made at 50°C and 60°C, provided the TAM experiment temperature is well below the Tg of the sample. PMID:23608636

  9. Estimation of Tegaserod Maleate by Differential Pulse Polarography

    PubMed Central

    Rajput, S. J.; Raj, H. A.

    2009-01-01

    A highly sensitive differential pulse polarographic method has been developed for the estimation of tegaserod maleate after treating it with hydrogen peroxide solution. The oxidation of tegaserod maleate is a reversible process, as the oxidized product could be reduced at a hanging mercury drop electrode in a quantitative manner using the differential pulse polarography mode. The limit of quantification was 0.1 ng/ml. The voltammetric peak was obtained at -1.05 V in the presence of 0.1 M potassium chloride as the supporting electrolyte. The technique could be used successfully to analyze tegaserod maleate in its tablet formulation. PMID:20177456

  10. Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com; Y

    Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or post-reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average mu-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of brain and chest were used to evaluate TOF-MLAA with various initial estimates which include the object filled with water uniformly (conventional initial estimate), bone uniformly, the average μ-value uniformly (IAM magnitude initialization method), and the perfect spatial μ-distribution but with a wrong magnitude (initialization in terms of distribution). 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR derived μ-maps was also evaluated using computed tomography μ-maps as the gold-standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases. Both 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce quantitative μ-map/emission when the corrections for physical effects such as scatter and randoms were included. The average μ-value obtained from the MR-derived μ-map was accurate within 5% with corrections for bone, fat, and uniform lungs. Conclusions: The proposed IAM-TOF-MLAA can produce quantitative μ-map without any calibration provided that there are sufficient counts in the measured data. For low count data, noise reduction and additional regularization/rescaling techniques need to be applied and investigated. The average μ-value within the object is prior information which can be extracted from MR and patient database, and it is feasible to obtain accurate average μ-value using MR derived μ-map with corrections as demonstrated in this work.

  11. The linearized multistage model and the future of quantitative risk assessment.

    PubMed

    Crump, K S

    1996-10-01

    The linearized multistage (LMS) model has for over 15 years been the default dose-response model used by the U.S. Environmental Protection Agency (USEPA) and other federal and state regulatory agencies in the United States for calculating quantitative estimates of low-dose carcinogenic risks from animal data. The LMS model is in essence a flexible statistical model that can describe both linear and non-linear dose-response patterns, and that produces an upper confidence bound on the linear low-dose slope of the dose-response curve. Unlike its namesake, the Armitage-Doll multistage model, the parameters of the LMS do not correspond to actual physiological phenomena. Thus the LMS is 'biological' only to the extent that the true biological dose response is linear at low dose and that low-dose slope is reflected in the experimental data. If the true dose response is non-linear the LMS upper bound may overestimate the true risk by many orders of magnitude. However, competing low-dose extrapolation models, including those derived from 'biologically-based models' that are capable of incorporating additional biological information, have not shown evidence to date of being able to produce quantitative estimates of low-dose risks that are any more accurate than those obtained from the LMS model. Further, even if these attempts were successful, the extent to which more accurate estimates of low-dose risks in a test animal species would translate into improved estimates of human risk is questionable. Thus, it does not appear possible at present to develop a quantitative approach that would be generally applicable and that would offer significant improvements upon the crude bounding estimates of the type provided by the LMS model. Draft USEPA guidelines for cancer risk assessment incorporate an approach similar to the LMS for carcinogens having a linear mode of action. However, under these guidelines quantitative estimates of low-dose risks would not be developed for carcinogens having a non-linear mode of action; instead dose-response modelling would be used in the experimental range to calculate an LED10* (a statistical lower bound on the dose corresponding to a 10% increase in risk), and safety factors would be applied to the LED10* to determine acceptable exposure levels for humans. This approach is very similar to the one presently used by USEPA for non-carcinogens. Rather than using one approach for carcinogens believed to have a linear mode of action and a different approach for all other health effects, it is suggested herein that it would be more appropriate to use an approach conceptually similar to the 'LED10*-safety factor' approach for all health effects, and not to routinely develop quantitative risk estimates from animal data.

  12. Estimation of trace amounts of benzene in solvent-extracted vegetable oils and oil seed cakes.

    PubMed

    Masohan, A; Parsad, G; Khanna, M K; Chopra, S K; Rawat, B S; Garg, M O

    2000-09-01

    A new method is presented for the qualitative and quantitative estimation of trace amounts (up to 0.15 ppm) of benzene in crude as well as refined vegetable oils obtained by extraction with food grade hexane (FGH), and in the oil seed cakes left after extraction. The method involves the selection of two solvents: cyclohexanol, for thinning of the viscous vegetable oil, and heptane, for azeotroping out trace benzene as a concentrate from the resulting mixture. Benzene is then estimated in the resulting azeotrope either by UV spectroscopy or by GC-MS, subject to the availability and cost effectiveness of the latter. Repeatability and reproducibility of the method are within 1-3% error. This method is suitable for estimating benzene in vegetable oils and oil seed cakes.

  13. Electrostatic Estimation of Intercalant Jump-Diffusion Barriers Using Finite-Size Ion Models.

    PubMed

    Zimmermann, Nils E R; Hannah, Daniel C; Rong, Ziqin; Liu, Miao; Ceder, Gerbrand; Haranczyk, Maciej; Persson, Kristin A

    2018-02-01

    We report on a scheme for estimating intercalant jump-diffusion barriers that are typically obtained from demanding density functional theory-nudged elastic band calculations. The key idea is to relax a chain of states in the field of the electrostatic potential that is averaged over a spherical volume using different finite-size ion models. For magnesium migrating in typical intercalation materials such as transition-metal oxides, we find that the optimal model is a relatively large shell. This data-driven result parallels typical assumptions made in models based on Onsager's reaction field theory to quantitatively estimate electrostatic solvent effects. Because of its efficiency, our potential of electrostatics-finite ion size (PfEFIS) barrier estimation scheme will enable rapid identification of materials with good ionic mobility.
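
    A hedged sketch of the core idea under stated assumptions: the real PfEFIS scheme relaxes a chain of states in the averaged potential, while this simplified version only samples a straight-line path between two sites and omits the ion-charge scaling from potential to energy. The names `potential`, `sphere_average`, and `pfefis_barrier` are illustrative, not from the paper.

```python
import numpy as np

def sphere_average(potential, center, radius, n=500, seed=0):
    """Average a potential field over a sphere of given radius around `center`.
    `potential` is any callable mapping an (x, y, z) point to a scalar value."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)        # random directions
    r = radius * rng.random(n) ** (1.0 / 3.0)            # uniform radii in a ball
    pts = np.asarray(center) + v * r[:, None]
    return float(np.mean([potential(*p) for p in pts]))

def pfefis_barrier(potential, start, end, ion_radius, n_images=11):
    """Barrier estimate: peak of the finite-size-ion-averaged potential profile
    along a straight-line path between the start and end intercalation sites."""
    path = np.linspace(np.asarray(start), np.asarray(end), n_images)
    profile = np.array([sphere_average(potential, p, ion_radius) for p in path])
    return profile.max() - profile[0], profile
```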

  14. Robust Tracking of Small Displacements with a Bayesian Estimator

    PubMed Central

    Dumont, Douglas M.; Byram, Brett C.

    2016-01-01

    Radiation-force-based elasticity imaging describes a group of techniques that use acoustic radiation force (ARF) to displace tissue in order to obtain qualitative or quantitative measurements of tissue properties. Because ARF-induced displacements are on the order of micrometers, tracking these displacements in vivo can be challenging. Previously, it has been shown that Bayesian-based estimation can overcome some of the limitations of a traditional displacement estimator like normalized cross-correlation (NCC). In this work, we describe a Bayesian framework that combines a generalized Gaussian-Markov random field (GGMRF) prior with an automated method for selecting the prior’s width. We then evaluate its performance in the context of tracking the micrometer-order displacements encountered in an ARF-based method like acoustic radiation force impulse (ARFI) imaging. The results show that bias, variance, and mean-square error performance vary with prior shape and width, and that an almost one order-of-magnitude reduction in mean-square error can be achieved by the estimator at the automatically-selected prior width. Lesion simulations show that the proposed estimator has a higher contrast-to-noise ratio but lower contrast than NCC, median-filtered NCC, and the previous Bayesian estimator, with a non-Gaussian prior shape having better lesion-edge resolution than a Gaussian prior. In vivo results from a cardiac, radiofrequency ablation ARFI imaging dataset show quantitative improvements in lesion contrast-to-noise ratio over NCC as well as the previous Bayesian estimator. PMID:26529761
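
    For context, a minimal sketch of the NCC baseline the Bayesian estimator is compared against, assuming 1-D RF lines; it returns integer-sample lags only, whereas ARFI tracking of micrometer displacements needs subsample interpolation on top of this.

```python
import numpy as np

def ncc_displacement(ref, tracked, max_lag=10):
    """Estimate the axial shift (in samples) between two 1-D RF lines by
    maximizing the normalized cross-correlation over a small lag window."""
    core = slice(max_lag, len(ref) - max_lag)
    r = ref[core]
    r = (r - r.mean()) / r.std()
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        t = tracked[core.start + lag:core.stop + lag]
        t = (t - t.mean()) / t.std()
        score = float(np.mean(r * t))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag, best_score
```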

  15. A quantitative approach to assessing the efficacy of occupant protection programs: A case study from Montana.

    PubMed

    Manlove, Kezia; Stanley, Laura; Peck, Alyssa

    2015-10-01

    Quantitative evaluation of vehicle occupant protection programs is critical for ensuring efficient government resource allocation, but few methods exist for conducting evaluation across multiple programs simultaneously. Here we present an analysis of occupant protection efficacy in the state of Montana. This approach relies on seat belt compliance rates as measured by the National Occupant Protection Usage Survey (NOPUS). A hierarchical logistic regression model is used to estimate the impacts of four Montana Department of Transportation (MDT)-funded occupant protection programs used in the state of Montana, following adjustment for a suite of potential confounders. Activity from two programs, Buckle Up coalitions and media campaigns, are associated with increased seat belt use in Montana, whereas the impact of another program, Selective Traffic Enforcement, is potentially masked by other program activity. A final program, Driver's Education, is not associated with any shift in seat belt use. This method allows for a preliminary quantitative estimation of program impacts without requiring states to obtain any new seat belt use data. This approach provides states a preliminary look at program impacts, and a means for carefully planning future program allocation and investigation. Copyright © 2015 Elsevier Ltd. All rights reserved.
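
    The study fits a hierarchical logistic regression adjusted for confounders; as a hedged sketch, here is a simplified non-hierarchical analogue in Python, with every column name and all synthetic data invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
# Hypothetical frame: one row per observed occupant, with local program
# activity and a confounder (all variable names are invented, not NOPUS fields).
df = pd.DataFrame({
    "belted": rng.binomial(1, 0.85, n),
    "media_activity": rng.gamma(2.0, 1.0, n),   # media campaign intensity
    "coalition": rng.binomial(1, 0.5, n),       # Buckle Up coalition present
    "urban": rng.binomial(1, 0.4, n),           # confounder
})

model = smf.logit("belted ~ media_activity + coalition + urban", data=df).fit()
print(np.exp(model.params))  # odds ratios for each program and the confounder
```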

  16. A mixed model for the relationship between climate and human cranial form.

    PubMed

    Katz, David C; Grote, Mark N; Weaver, Timothy D

    2016-08-01

    We expand upon a multivariate mixed model from quantitative genetics in order to estimate the magnitude of climate effects in a global sample of recent human crania. In humans, genetic distances are correlated with distances based on cranial form, suggesting that population structure influences both genetic and quantitative trait variation. Studies controlling for this structure have demonstrated significant underlying associations of cranial distances with ecological distances derived from climate variables. However, to assess the biological importance of an ecological predictor, estimates of effect size and uncertainty in the original units of measurement are clearly preferable to significance claims based on units of distance. Unfortunately, the magnitudes of ecological effects are difficult to obtain with distance-based methods, while models that produce estimates of effect size generally do not scale to high-dimensional data like cranial shape and form. Using recent innovations that extend quantitative genetics mixed models to highly multivariate observations, we estimate morphological effects associated with a climate predictor for a subset of the Howells craniometric dataset. Several measurements, particularly those associated with cranial vault breadth, show a substantial linear association with climate, and the multivariate model incorporating a climate predictor is preferred in model comparison. Previous studies demonstrated the existence of a relationship between climate and cranial form. The mixed model quantifies this relationship concretely. Evolutionary questions that require population structure and phylogeny to be disentangled from potential drivers of selection may be particularly well addressed by mixed models. Am J Phys Anthropol 160:593-603, 2016. © 2015 Wiley Periodicals, Inc.

  17. Qualitative human body composition analysis assessed with bioelectrical impedance.

    PubMed

    Talluri, T

    1998-12-01

    Body composition analysis generally aims at quantitative estimates of fat mass, which are inadequate for assessing nutritional states; these are better defined by the proportion of extracellular mass to body cell mass (ECM/BCM). Direct measurements performed with phase-sensitive bioelectrical impedance analyzers can be used to define the current distribution in normal and abnormal populations. The phase angle and reactance nomogram directly reflects the ECM/BCM proportions, and bioelectrical impedance analysis (BIA) is also validated for estimating the individual content of body cell mass (BCM). A new body cell mass index (BCMI), obtained by dividing the weight of BCM in kilograms by the body surface area in square meters, is compared with the scatterplot distribution of phase angle and reactance values obtained from controls and patients, and is proposed as a qualitative approach to identifying abnormal ECM/BCM ratios and nutritional states.
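
    A minimal sketch of the index as defined in the abstract (BCM in kilograms divided by body surface area in square meters); the Du Bois surface-area formula below is one common choice and not necessarily the one used in the study.

```python
def du_bois_bsa(weight_kg: float, height_cm: float) -> float:
    """Du Bois estimate of body surface area in m^2 (one common formula)."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def body_cell_mass_index(bcm_kg: float, bsa_m2: float) -> float:
    """BCMI = body cell mass (kg) / body surface area (m^2)."""
    return bcm_kg / bsa_m2

bsa = du_bois_bsa(75.0, 178.0)            # ~1.93 m^2
print(body_cell_mass_index(30.0, bsa))    # ~15.5 kg/m^2
```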

  18. Determination of the mechanical parameters of rock mass based on a GSI system and displacement back analysis

    NASA Astrophysics Data System (ADS)

    Kang, Kwang-Song; Hu, Nai-Lian; Sin, Chung-Sik; Rim, Song-Ho; Han, Eun-Cheol; Kim, Chol-Nam

    2017-08-01

    It is very important to obtain the mechanical parameters of rock mass for excavation design, support design, slope design and stability analysis of underground structures. In order to estimate the mechanical parameters of rock mass accurately, a new method combining a geological strength index (GSI) system with intelligent displacement back analysis is proposed in this paper. Firstly, the average spacing of joints (d), rock mass block rating (RBR, a new quantitative factor), surface condition rating (SCR) and joint condition factor (Jc) are obtained for in situ rock masses using the scanline method, and the GSI values of rock masses are obtained from a new quantitative GSI chart. A correction method for the GSI value is newly introduced by considering the influence of joint orientation and groundwater on rock mass mechanical properties, and then value ranges of rock mass mechanical parameters are chosen by the Hoek-Brown failure criterion. Secondly, on the basis of the measured vault settlements and horizontal convergence displacements of an in situ tunnel, optimal parameters are estimated by a combination of a genetic algorithm (GA) and numerical simulation analysis using FLAC3D. This method has been applied in a lead-zinc mine. By utilizing the improved GSI quantization, the correction method and displacement back analysis, the mechanical parameters of the ore body, hanging wall and footwall rock mass were determined, so that reliable foundations were provided for mining design and stability analysis.
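
    A hedged sketch of the displacement back analysis: a toy closed-form forward model stands in for the FLAC3D simulation, a simple genetic algorithm searches parameter bounds of the kind the GSI chart would supply, and every number below is invented for illustration.

```python
import numpy as np

def forward_model(params):
    """Toy stand-in for the FLAC3D run: trial rock-mass parameters
    (E in GPa, c in MPa, phi in degrees) -> predicted displacements (mm)."""
    E, c, phi = params
    return np.array([12.0 / E + 0.10 * c, 15.0 / E + 0.05 * phi])

measured = np.array([1.8, 2.6])   # vault settlement, horizontal convergence (mm)
bounds = np.array([[5.0, 40.0], [0.5, 5.0], [20.0, 45.0]])  # E, c, phi ranges

def misfit(p):
    return float(np.sum((forward_model(p) - measured) ** 2))

rng = np.random.default_rng(0)
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 3))
for _ in range(100):  # elitist GA: keep best half, blend + mutate for the rest
    pop = pop[np.argsort([misfit(p) for p in pop])]
    parents, children = pop[:20], []
    for _ in range(20):
        a, b = parents[rng.integers(0, 20, 2)]
        child = 0.5 * (a + b) + rng.normal(0, 0.05, 3) * (bounds[:, 1] - bounds[:, 0])
        children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.vstack([parents, children])

best = min(pop, key=misfit)
print("back-analysed (E, c, phi):", best)
```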

  19. Reliability based fatigue design and maintenance procedures

    NASA Technical Reports Server (NTRS)

    Hanagud, S.

    1977-01-01

    A stochastic model has been developed to describe the probability of the fatigue process by assuming a varying hazard rate. This stochastic model can be used to obtain the desired probability of a crack of a certain length at a given location after a certain number of cycles or a certain time. Quantitative estimation with the developed model was also discussed. Application of the model to develop a procedure for reliability-based cost-effective fail-safe structural design is presented. This design procedure includes the reliability improvement due to inspection and repair. Methods of obtaining optimum inspection and maintenance schemes are treated.
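
    The abstract does not give the specific form of the varying hazard rate; as one standard concrete instance, a Weibull hazard h(t) yields a closed form for the probability that a crack has appeared by time t:

```latex
F(t) = 1 - \exp\!\Big(-\int_0^t h(\tau)\,d\tau\Big),
\qquad
h(t) = \frac{\beta}{\eta}\Big(\frac{t}{\eta}\Big)^{\beta-1}
\;\Rightarrow\;
F(t) = 1 - \exp\!\big[-(t/\eta)^{\beta}\big].
```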

  20. Metallurgical features of the formation of a solid-phase metal joint upon electric-circuit heating

    NASA Astrophysics Data System (ADS)

    Latypov, R. A.; Bulychev, V. V.; Zybin, I. N.

    2017-06-01

    The thermodynamic conditions of formation of a joint between metals using the solid-phase methods of powder metallurgy, welding, and deposition of functional coatings upon electric-current heating of the surfaces to be joined are studied. Relations are obtained to quantitatively estimate the critical sizes of the circular and linear active centers that result in the formation of stable bonding zones.

  1. [Cardiac Synchronization Function Estimation Based on ASM Level Set Segmentation Method].

    PubMed

    Zhang, Yaonan; Gao, Yuan; Tang, Liang; He, Ying; Zhang, Huie

    At present, there are no accurate, quantitative methods for determining cardiac mechanical synchronism, and quantitative determination of the synchronization function of the four cardiac cavities from medical images has great clinical value. This paper uses whole-heart ultrasound image sequences and segments the left and right atria and the left and right ventricles in each frame. After the segmentation, the number of pixels in each cavity in each frame is recorded, and the areas of the four cavities across the image sequence are thereby obtained. The area change curves of the four cavities are then extracted, and the synchronization information of the four cavities is obtained. Because of the low SNR of ultrasound images, the boundaries of the cardiac cavities are vague, so the extraction of cardiac contours remains a challenging problem. Therefore, ASM model information is added to the traditional level set method to guide the curve evolution. According to the experimental results, the improved method improves the accuracy of the segmentation. Furthermore, based on the ventricular segmentation, the right and left ventricular systolic functions are evaluated, mainly according to the area changes. The synchronization of the four cavities of the heart is estimated based on the area changes and the volume changes.
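
    A minimal sketch of the quantification step described above: per-frame pixel counts give the cavity area curves, and one plausible synchrony metric (an assumption, not stated in the abstract) is the cross-correlation lag between two curves.

```python
import numpy as np

def cavity_area_curves(masks, pixel_area=1.0):
    """masks: boolean array (n_frames, n_cavities, H, W) of segmentations;
    returns area curves of shape (n_frames, n_cavities) in physical units."""
    return masks.sum(axis=(2, 3)) * pixel_area

def synchrony_lag(curve_a, curve_b):
    """Delay (in frames) between two cavities, taken as the lag that
    maximizes the cross-correlation of the zero-mean area curves."""
    a = curve_a - curve_a.mean()
    b = curve_b - curve_b.mean()
    xc = np.correlate(a, b, mode="full")
    return int(np.argmax(xc)) - (len(a) - 1)
```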

  2. A quantitative analysis of the local connectivity between pyramidal neurons in layers 2/3 of the rat visual cortex.

    PubMed

    Hellwig, B

    2000-02-01

    This study provides a detailed quantitative estimate for local synaptic connectivity between neocortical pyramidal neurons. A new way of obtaining such an estimate is presented. In acute slices of the rat visual cortex, four layer 2 and four layer 3 pyramidal neurons were intracellularly injected with biocytin. Axonal and dendritic arborizations were three-dimensionally reconstructed with the aid of a computer-based camera lucida system. In a computer experiment, pairs of pre- and postsynaptic neurons were formed and potential synaptic contacts were calculated. For each pair, the calculations were carried out for a whole range of distances (0 to 500 microm) between the presynaptic and the postsynaptic neuron, in order to estimate cortical connectivity as a function of the spatial separation of neurons. The analysis also differentiated whether neurons were situated in the same or in different cortical layers. The data thus obtained were used to compute connection probabilities, the average number of contacts between neurons, the frequency of specific numbers of contacts and the total number of contacts a dendritic tree receives from the surrounding cortical volume. Connection probabilities ranged from 50% to 80% for directly adjacent neurons and from 0% to 15% for neurons 500 microm apart. In many cases, connections were mediated by one contact only. However, close neighbors made on average up to 3 contacts with each other. The question as to whether the method employed in this study yields a realistic estimate of synaptic connectivity is discussed. It is argued that the results can be used as a detailed blueprint for building artificial neural networks with a cortex-like architecture.

  3. Optical scattering coefficient estimated by optical coherence tomography correlates with collagen content in ovarian tissue

    NASA Astrophysics Data System (ADS)

    Yang, Yi; Wang, Tianheng; Biswal, Nrusingh C.; Wang, Xiaohong; Sanders, Melinda; Brewer, Molly; Zhu, Quing

    2011-09-01

    The optical scattering coefficient of ex vivo unfixed normal and malignant ovarian tissue was quantitatively extracted by fitting optical coherence tomography (OCT) A-line signals to a single scattering model. A total of 1097 average A-line measurements at a wavelength of 1310 nm were performed at 108 sites obtained from 18 ovaries. The average scattering coefficient obtained from the normal tissue group, consisting of 833 measurements from 88 sites, was 2.41 mm-1 (+/-0.59), while the average coefficient obtained from the malignant tissue group, consisting of 264 measurements from 20 sites, was 1.55 mm-1 (+/-0.46). The malignant ovarian tissue showed significantly lower scattering than the normal group (p < 0.001). The amount of collagen within the OCT imaging depth was analyzed from the tissue histological section stained with Sirius Red. The average collagen area fraction (CAF) obtained from the normal tissue group was 48.4% (+/-12.3%), while the average CAF obtained from the malignant tissue group was 11.4% (+/-4.7%). A statistical significance of the collagen content was found between the two groups (p < 0.001). These results demonstrated that quantitative measurements of the optical scattering coefficient from OCT images could be a potentially powerful method for ovarian cancer detection.
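
    A hedged sketch of the extraction step on synthetic data: fitting the single-scattering decay model I(z) = I0·exp(-2·μs·z) to an averaged A-line with scipy; the noise model and numbers are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_scatter(z, i0, mu_s):
    """OCT A-line model under the single-scattering assumption: intensity
    decays as exp(-2 * mu_s * z) with depth z (factor 2 for the round trip)."""
    return i0 * np.exp(-2.0 * mu_s * z)

rng = np.random.default_rng(0)
z = np.linspace(0.05, 1.0, 200)                      # depth, mm
a_line = single_scatter(z, 1.0, 2.4) * rng.lognormal(0.0, 0.05, z.size)

(i0_hat, mu_s_hat), _ = curve_fit(single_scatter, z, a_line, p0=[1.0, 1.0])
print(f"estimated mu_s = {mu_s_hat:.2f} mm^-1")      # ~2.4, like normal ovary here
```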

  4. Lightning charge moment changes estimated by high speed photometric observations from ISS

    NASA Astrophysics Data System (ADS)

    Hobara, Y.; Kono, S.; Suzuki, K.; Sato, M.; Takahashi, Y.; Adachi, T.; Ushio, T.; Suzuki, M.

    2017-12-01

    Optical observations by CCD cameras on orbiting satellites are generally used to derive the spatio-temporal global distributions of CGs and ICs. However, electrical properties of lightning such as peak current and lightning charge are difficult to obtain from space. In particular, CGs with considerably large lightning charge moment changes (CMC) and peak currents are the crucial parameters for generating red sprites and elves, respectively, so it would be useful to obtain these parameters from space. In this paper, we obtained lightning optical signatures using high-speed photometric observations from the International Space Station GLIMS (Global Lightning and sprIte MeasurementS on JEM-EF) mission. These optical signatures were compared quantitatively with radio signatures, taken as ground truth, derived from ELF electromagnetic wave observations on the ground in order to verify the accuracy of the optically derived values. High correlation (R > 0.9) was obtained between lightning optical irradiance and current moment, and a quantitative relational expression between these two parameters was derived. A rather high correlation (R > 0.7) was also obtained between the integrated irradiance and the lightning CMC. Our results indicate the possibility of deriving lightning electrical properties (current moment and CMC) from optical measurements from space. Moreover, we hope that these results will also contribute to the forthcoming French microsatellite mission TARANIS.

  5. Computational design of a Diels-Alderase from a thermophilic esterase: the importance of dynamics

    NASA Astrophysics Data System (ADS)

    Linder, Mats; Johansson, Adam Johannes; Olsson, Tjelvar S. G.; Liebeschuetz, John; Brinck, Tore

    2012-09-01

    A novel computational Diels-Alderase design, based on a relatively rare form of carboxylesterase from Geobacillus stearothermophilus, is presented and theoretically evaluated. The structure was found by mining the PDB for a suitable oxyanion hole-containing structure, followed by a combinatorial approach to find suitable substrates and rational mutations. Four lead designs were selected and thoroughly modeled to obtain realistic estimates of substrate binding and prearrangement. Molecular dynamics simulations and DFT calculations were used to optimize and estimate binding affinity and activation energies. A large quantum chemical model was used to capture the salient interactions in the crucial transition state (TS). Our quantitative estimation of kinetic parameters was validated against four experimentally characterized Diels-Alderases with good results. The final designs in this work are predicted to have rate enhancements of ≈10(3)-10(6) and high predicted proficiencies. This work emphasizes the importance of considering protein dynamics in the design approach, and provides a quantitative estimate of how the TS stabilization observed in most de novo and redesigned enzymes is decreased compared to a minimal, 'ideal' model. The presented design is highly interesting for further optimization and applications since it is based on a thermophilic enzyme (Topt = 70 °C).

  6. Fully Automated Quantitative Estimation of Volumetric Breast Density from Digital Breast Tomosynthesis Images: Preliminary Results and Comparison with Digital Mammography and MR Imaging.

    PubMed

    Pertuz, Said; McDonald, Elizabeth S; Weinstein, Susan P; Conant, Emily F; Kontos, Despina

    2016-04-01

    To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board-approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration-cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging-based estimates with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VDB estimates from DBT and MR imaging were not significant (P = .26). Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment.

  7. Tmax Determined Using a Bayesian Estimation Deconvolution Algorithm Applied to Bolus Tracking Perfusion Imaging: A Digital Phantom Validation Study.

    PubMed

    Uwano, Ikuko; Sasaki, Makoto; Kudo, Kohsuke; Boutelier, Timothé; Kameda, Hiroyuki; Mori, Futoshi; Yamashita, Fumio

    2017-01-10

    The Bayesian estimation algorithm improves the precision of bolus tracking perfusion imaging. However, this algorithm cannot directly calculate Tmax, the time scale widely used to identify ischemic penumbra, because Tmax is a non-physiological, artificial index that reflects the tracer arrival delay (TD) and other parameters. We calculated Tmax from the TD and mean transit time (MTT) obtained by the Bayesian algorithm and determined its accuracy in comparison with Tmax obtained by singular value decomposition (SVD) algorithms. The TD and MTT maps were generated by the Bayesian algorithm applied to digital phantoms with time-concentration curves that reflected a range of values for various perfusion metrics using a global arterial input function. Tmax was calculated from the TD and MTT using constants obtained by a linear least-squares fit to Tmax obtained from the two SVD algorithms that showed the best benchmarks in a previous study. Correlations between the Tmax values obtained by the Bayesian and SVD methods were examined. The Bayesian algorithm yielded accurate TD and MTT values relative to the true values of the digital phantom. Tmax calculated from the TD and MTT values with the least-squares fit constants showed excellent correlation (Pearson's correlation coefficient = 0.99) and agreement (intraclass correlation coefficient = 0.99) with Tmax obtained from SVD algorithms. Quantitative analyses of Tmax values calculated from Bayesian-estimation algorithm-derived TD and MTT from a digital phantom correlated and agreed well with Tmax values determined using SVD algorithms.
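
    A sketch of the calibration step described: fitting Tmax ≈ a·TD + b·MTT + c by ordinary least squares against SVD-derived Tmax values; all numbers are invented for illustration.

```python
import numpy as np

# Hypothetical per-region values: Bayesian tracer delay TD and mean transit
# time MTT (seconds), plus the SVD-derived Tmax used as the fitting target.
td = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
mtt = np.array([4.0, 5.0, 6.0, 8.0, 10.0])
tmax_svd = np.array([2.4, 3.4, 4.9, 6.9, 8.9])

A = np.column_stack([td, mtt, np.ones_like(td)])
(a, b, c), *_ = np.linalg.lstsq(A, tmax_svd, rcond=None)
tmax_bayes = a * td + b * mtt + c            # Tmax from Bayesian TD and MTT
print(f"Tmax ≈ {a:.2f}*TD + {b:.2f}*MTT + {c:.2f}")
```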

  8. Partitioning of organophosphorus pesticides into phosphatidylcholine small unilamellar vesicles studied by second-derivative spectrophotometry

    NASA Astrophysics Data System (ADS)

    Takegami, Shigehiko; Kitamura, Keisuke; Ohsugi, Mayuko; Ito, Aya; Kitade, Tatsuya

    2015-06-01

    In order to quantitatively examine the lipophilicity of the widely used organophosphorus pesticides (OPs) chlorfenvinphos (CFVP), chlorpyrifos-methyl (CPFM), diazinon (DZN), fenitrothion (FNT), fenthion (FT), isofenphos (IFP), profenofos (PFF) and pyraclofos (PCF), their partition coefficient (Kp) values between phosphatidylcholine (PC) small unilamellar vesicles (SUVs) and water (liposome-water system) were determined by second-derivative spectrophotometry. The second-derivative spectra of these OPs in the presence of PC SUV showed a bathochromic shift with increasing PC concentration and distinct derivative isosbestic points, demonstrating the complete elimination of the residual background signal effects that were observed in the absorption spectra. The Kp values were calculated from the second-derivative intensity change induced by the addition of PC SUV and were obtained with a good precision (R.S.D. below 10%). The Kp values were in the order CPFM > FT > PFF > PCF > IFP > CFVP > FNT ⩾ DZN and did not show a linear correlation with the reported partition coefficients obtained using an n-octanol-water system (R2 = 0.530). The results also quantitatively clarified the effect of chemical-group substitution in OPs on their lipophilicity. Since the partition coefficient for the liposome-water system is more effective for modeling the quantitative structure-activity relationship than that for the n-octanol-water system, the obtained results are toxicologically important for estimating the accumulation of these OPs in human cell membranes.

  9. Racial Differences in Quantitative Measures of Area and Volumetric Breast Density

    PubMed Central

    McCarthy, Anne Marie; Keller, Brad M.; Pantalone, Lauren M.; Hsieh, Meng-Kang; Synnestvedt, Marie; Conant, Emily F.; Armstrong, Katrina; Kontos, Despina

    2016-01-01

    Abstract Background: Increased breast density is a strong risk factor for breast cancer and also decreases the sensitivity of mammographic screening. The purpose of our study was to compare breast density for black and white women using quantitative measures. Methods: Breast density was assessed among 5282 black and 4216 white women screened using digital mammography. Breast Imaging-Reporting and Data System (BI-RADS) density was obtained from radiologists’ reports. Quantitative measures for dense area, area percent density (PD), dense volume, and volume percent density were estimated using validated, automated software. Breast density was categorized as dense or nondense based on BI-RADS categories or based on values above and below the median for quantitative measures. Logistic regression was used to estimate the odds of having dense breasts by race, adjusted for age, body mass index (BMI), age at menarche, menopause status, family history of breast or ovarian cancer, parity and age at first birth, and current hormone replacement therapy (HRT) use. All statistical tests were two-sided. Results: There was a statistically significant interaction of race and BMI on breast density. After accounting for age, BMI, and breast cancer risk factors, black women had statistically significantly greater odds of high breast density across all quantitative measures (eg, PD nonobese odds ratio [OR] = 1.18, 95% confidence interval [CI] = 1.02 to 1.37, P = .03, PD obese OR = 1.26, 95% CI = 1.04 to 1.53, P = .02). There was no statistically significant difference in BI-RADS density by race. Conclusions: After accounting for age, BMI, and other risk factors, black women had higher breast density than white women across all quantitative measures previously associated with breast cancer risk. These results may have implications for risk assessment and screening. PMID:27130893

  10. Quantitative PCR for HTLV-1 provirus in adult T-cell leukemia/lymphoma using paraffin tumor sections.

    PubMed

    Kato, Junki; Masaki, Ayako; Fujii, Keiichiro; Takino, Hisashi; Murase, Takayuki; Yonekura, Kentaro; Utsunomiya, Atae; Ishida, Takashi; Iida, Shinsuke; Inagaki, Hiroshi

    2016-11-01

    Detection of HTLV-1 provirus using paraffin tumor sections may assist the diagnosis of adult T-cell leukemia/lymphoma (ATLL). For the detection, non-quantitative PCR assay has been reported, but its usefulness and limitations remain unclear. To our knowledge, quantitative PCR assay using paraffin tumor sections has not been reported. Using paraffin sections from ATLLs and non-ATLL T-cell lymphomas, we first performed non-quantitative PCR for HTLV-1 provirus. Next, we determined tumor ratios and carried out quantitative PCR to obtain provirus copy numbers. The results were analyzed with a simple regression model and a novel criterion, cut-off using 95 % rejection limits. Our quantitative PCR assay showed an excellent association between tumor ratios and the copy numbers (r = 0.89, P < 0.0001). The 95 % rejection limits provided a statistical basis for the range for the determination of HTLV-1 involvement. Its application suggested that results of non-quantitative PCR assay should be interpreted very carefully and that our quantitative PCR assay is useful to estimate the status of HTLV-1 involvement in the tumor cases. In conclusion, our quantitative PCR assay using paraffin tumor sections may be useful for the screening of ATLL cases, especially in HTLV-1 non-endemic areas where easy access to serological testing for HTLV-1 infection is limited. © 2016 Japanese Society of Pathology and John Wiley & Sons Australia, Ltd.
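
    A hedged sketch of the statistical criterion: a simple linear regression of copy number on tumor ratio with 95% prediction limits for a new case, which is one natural reading of the "95% rejection limits"; the calibration data are invented.

```python
import numpy as np
from scipy import stats

x = np.array([10, 20, 30, 40, 50, 60, 70, 80], float)            # tumor ratio, %
y = np.array([0.11, 0.19, 0.33, 0.38, 0.52, 0.61, 0.68, 0.83])   # copies/cell

res = stats.linregress(x, y)
n = len(x)
s = np.sqrt(np.sum((y - (res.intercept + res.slope * x)) ** 2) / (n - 2))
t_crit = stats.t.ppf(0.975, n - 2)

def rejection_limits(x0):
    """95% prediction band at tumor ratio x0; a new case whose measured copy
    number falls outside this band is flagged as inconsistent with the line."""
    half = t_crit * s * np.sqrt(1 + 1/n + (x0 - x.mean())**2 / np.sum((x - x.mean())**2))
    center = res.intercept + res.slope * x0
    return center - half, center + half

print(rejection_limits(45.0))
```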

  11. Absolute activity quantitation from projections using an analytical approach: comparison with iterative methods in Tc-99m and I-123 brain SPECT

    NASA Astrophysics Data System (ADS)

    Fakhri, G. El; Kijewski, M. F.; Moore, S. C.

    2001-06-01

    Estimates of SPECT activity within certain deep brain structures could be useful for clinical tasks such as early prediction of Alzheimer's disease with Tc-99m or Parkinson's disease with I-123; however, such estimates are biased by poor spatial resolution and inaccurate scatter and attenuation corrections. We compared an analytical approach (AA) of more accurate quantitation to a slower iterative approach (IA). Monte Carlo simulated projections of 12 normal and 12 pathologic Tc-99m perfusion studies, as well as 12 normal and 12 pathologic I-123 neurotransmission studies, were generated using a digital brain phantom and corrected for scatter by a multispectral fitting procedure. The AA included attenuation correction by a modified Metz-Fan algorithm and activity estimation by a technique that incorporated Metz filtering to compensate for variable collimator response (VCR); the IA modeled attenuation and VCR in the projector/backprojector of an ordered subsets-expectation maximization (OSEM) algorithm. Bias and standard deviation over the 12 normal and 12 pathologic patients were calculated with respect to the reference values in the corpus callosum, caudate nucleus, and putamen. The IA and AA yielded similar quantitation results in both Tc-99m and I-123 studies in all brain structures considered in both normal and pathologic patients. The bias with respect to the reference activity distributions was less than 7% for Tc-99m studies, but greater than 30% for I-123 studies, due to the partial volume effect in the striata. Our results were validated using I-123 physical acquisitions of an anthropomorphic brain phantom. The AA yielded quantitation accuracy comparable to that obtained with the IA, while requiring much less processing time. However, in most conditions, the IA yielded lower noise for the same bias than did the AA.

  12. Quantitative aspects of radon daughter exposure and lung cancer in underground miners.

    PubMed Central

    Edling, C; Axelson, O

    1983-01-01

    Epidemiological studies have shown an excessive incidence of lung cancer in miners with exposure to radon daughters. The various risk estimates have ranged from six to 47 excess cases per 10(6) person years and working level month, but the effect of smoking has not been fully evaluated. The present study, among a group of iron ore miners, is an attempt to obtain quantitative information about the risk of lung cancer due to radon and its daughters among smoking and non-smoking miners. The results show a considerable risk for miners to develop lung cancer; even non-smoking miners seem to be at a rather high risk. An additive effect of smoking and exposure to radon daughters is indicated and an estimate of about 30-40 excess cases per 10(6) person years and working level month seems to apply on a life time basis to both smoking and non-smoking miners aged over 50. PMID:6830715

  13. Project risk management in the construction of high-rise buildings

    NASA Astrophysics Data System (ADS)

    Titarenko, Boris; Hasnaoui, Amir; Titarenko, Roman; Buzuk, Liliya

    2018-03-01

    This paper presents project risk management methods which allow risks in the construction of high-rise buildings to be better identified and managed throughout the life cycle of the project. One of the project risk management processes is a quantitative analysis of risks. The quantitative analysis usually includes the assessment of the potential impact of project risks and their probabilities. This paper reviews the most popular methods of risk probability assessment and indicates the advantages of the robust approach over traditional methods. Within the framework of the project risk management model, the robust approach of P. Huber is applied and extended to the regression analysis of project data. The suggested algorithms for estimating the parameters of statistical models yield reliable estimates. A review of the theoretical problems of developing robust models built on the methodology of minimax estimates was carried out, and an algorithm for the situation of asymmetric "contamination" was developed.
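
    A minimal sketch of Huber-type robust regression with statsmodels on invented project data; the final observation is deliberately "contaminated" to show why the robust slope is preferred over ordinary least squares here.

```python
import numpy as np
import statsmodels.api as sm

planned = np.array([10, 20, 30, 40, 50, 60, 70, 80], float)   # planned task cost
actual = np.array([12, 22, 33, 41, 55, 63, 74, 160], float)   # last point contaminated

X = sm.add_constant(planned)
ols = sm.OLS(actual, X).fit()
huber = sm.RLM(actual, X, M=sm.robust.norms.HuberT()).fit()

print("OLS slope:  ", ols.params[1])    # dragged upward by the outlier
print("Huber slope:", huber.params[1])  # stays close to the bulk of the data
```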

  14. A pre-edge analysis of Mn K-edge XANES spectra to help determine the speciation of manganese in minerals and glasses

    NASA Astrophysics Data System (ADS)

    Chalmin, E.; Farges, F.; Brown, G. E.

    2009-01-01

    High-resolution manganese K-edge X-ray absorption near-edge structure spectra were collected on a set of 40 Mn-bearing minerals. The pre-edge feature information (position, area) was investigated to extract as much quantitative valence and symmetry information as possible for manganese in various “test” and “unknown” minerals and glasses. The samples present a range of manganese symmetry environments (tetrahedral, square planar, octahedral, and cubic) and valences (II to VII). The extraction of the pre-edge information is based on previous multiple scattering and multiplet calculations for model compounds. Using the method described in this study, a robust estimation of the manganese valence could be obtained from the pre-edge region at the 5% accuracy level. This method, applied to 20 “test” compounds (such as hausmannite and rancieite) and to 15 “unknown” compounds (such as axinite and birnessite), provides a quantitative estimate of the average valence of manganese in complex minerals and silicate glasses.

  15. FDG-PET Response Prediction in Pediatric Hodgkin's Lymphoma: Impact of Metabolically Defined Tumor Volumes and Individualized SUV Measurements on the Positive Predictive Value.

    PubMed

    Hussien, Amr Elsayed M; Furth, Christian; Schönberger, Stefan; Hundsdoerfer, Patrick; Steffen, Ingo G; Amthauer, Holger; Müller, Hans-Wilhelm; Hautzel, Hubertus

    2015-01-28

    In pediatric Hodgkin's lymphoma (pHL), early response-to-therapy prediction is metabolically assessed by (18)F-FDG PET, which carries an excellent negative predictive value (NPV) but an impaired positive predictive value (PPV). The aim of this study was to improve the PPV while keeping the optimal NPV. A comparison of different PET data analyses was performed applying individualized standardized uptake values (SUV), PET-derived metabolic tumor volume (MTV), and the product of both parameters, termed total lesion glycolysis (TLG). One-hundred-eight PET datasets (PET1, n = 54; PET2, n = 54) of 54 children were analysed by visual and semi-quantitative means. SUVmax, SUVmean, MTV and TLG were obtained; the results of both PETs and the relative change from PET1 to PET2 (Δ in %) were compared for their capability of identifying responders and non-responders using receiver operating characteristic (ROC) curves. In consideration of individual variations in noise and contrast levels, all parameters were additionally obtained after threshold correction to lean body mass and background. All semi-quantitative SUV estimates obtained at PET2 were significantly superior to the visual PET2 analysis. However, ΔSUVmax revealed the best results (area under the curve, 0.92; p < 0.001; sensitivity 100%; specificity 85.4%; PPV 46.2%; NPV 100%; accuracy, 87.0%) but was not significantly superior to SUVmax estimation at PET2 and ΔTLGmax. Likewise, the lean body mass and background individualization of the datasets did not improve the results of the ROC analyses. Sophisticated semi-quantitative PET measures in early response assessment of pHL patients do not perform significantly better than the previously proposed ΔSUVmax. All analytical strategies failed to improve the impaired PPV to a clinically acceptable level while preserving the excellent NPV.
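
    For reference, the conventional definition of the relative change used for ΔSUVmax; the abstract does not spell the formula out, so the sign convention here is an assumption.

```python
def delta_suv_percent(suv_pet1: float, suv_pet2: float) -> float:
    """Relative SUV change from PET1 to PET2 in percent (positive = decline)."""
    return (suv_pet1 - suv_pet2) / suv_pet1 * 100.0

print(delta_suv_percent(8.0, 1.2))  # e.g. an 85% decline after early chemotherapy
```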

  16. Frameshifted prion proteins as pathological agents: quantitative considerations.

    PubMed

    Wills, Peter R

    2013-05-21

    A quantitatively consistent explanation for the titres of infectivity found in a variety of prion-containing preparations is provided on the basis that the ætiological agents of transmissible spongiform encephalopathy comprise a very small population fraction of prion protein (PrP) variants, which contain frameshifted elements in their N-terminal octapeptide-repeat regions. A mechanism for the replication of frameshifted prions is described and calculations are performed to obtain estimates of the concentration of these PrP variants in normal and infected brain, as well as their enrichment in products of protein misfolding cyclic amplification. These calculations resolve the lack of proper quantitative correlation between measures of infectivity and the presence of conformationally-altered, protease-resistant variants of PrP. Experiments, which could confirm or eventually exclude the role of frameshifted variants in the ætiology of prion disease, are suggested. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Analytical scanning evanescent microwave microscope and control stage

    DOEpatents

    Xiang, Xiao-Dong; Gao, Chen; Duewer, Fred; Yang, Hai Tao; Lu, Yalin

    2013-01-22

    A scanning evanescent microwave microscope (SEMM) that uses near-field evanescent electromagnetic waves to probe sample properties is disclosed. The SEMM is capable of high resolution imaging and quantitative measurements of the electrical properties of the sample. The SEMM has the ability to map dielectric constant, loss tangent, conductivity, electrical impedance, and other electrical parameters of materials. Such properties are then used to provide distance control over a wide range, from microns to nanometers, over dielectric and conductive samples for a scanned evanescent microwave probe, which enables quantitative non-contact, submicron-spatial-resolution topographic and electrical impedance profiling of dielectric, nonlinear dielectric and conductive materials. The invention also allows quantitative estimation of microwave impedance using signals obtained by the scanned evanescent microwave probe and quasistatic approximation modeling. The SEMM can be used to measure electrical properties of both dielectric and electrically conducting materials.

  18. Analytical scanning evanescent microwave microscope and control stage

    DOEpatents

    Xiang, Xiao-Dong; Gao, Chen; Duewer, Fred; Yang, Hai Tao; Lu, Yalin

    2009-06-23

    A scanning evanescent microwave microscope (SEMM) that uses near-field evanescent electromagnetic waves to probe sample properties is disclosed. The SEMM is capable of high resolution imaging and quantitative measurements of the electrical properties of the sample. The SEMM has the ability to map dielectric constant, loss tangent, conductivity, electrical impedance, and other electrical parameters of materials. Such properties are then used to provide distance control over a wide range, from microns to nanometers, over dielectric and conductive samples for a scanned evanescent microwave probe, which enables quantitative non-contact, submicron-spatial-resolution topographic and electrical impedance profiling of dielectric, nonlinear dielectric and conductive materials. The invention also allows quantitative estimation of microwave impedance using signals obtained by the scanned evanescent microwave probe and quasistatic approximation modeling. The SEMM can be used to measure electrical properties of both dielectric and electrically conducting materials.

  19. Quantitative analysis of aircraft multispectral-scanner data and mapping of water-quality parameters in the James River in Virginia

    NASA Technical Reports Server (NTRS)

    Johnson, R. W.; Bahn, G. S.

    1977-01-01

    Statistical analysis techniques were applied to develop quantitative relationships between in situ river measurements and the remotely sensed data that were obtained over the James River in Virginia on 28 May 1974. The remotely sensed data were collected with a multispectral scanner and with photographs taken from an aircraft platform. Concentration differences among water quality parameters such as suspended sediment, chlorophyll a, and nutrients indicated significant spectral variations. Calibrated equations from the multiple regression analysis were used to develop maps that indicated the quantitative distributions of water quality parameters and the dispersion characteristics of a pollutant plume entering the turbid river system. Results from further analyses that use only three preselected multispectral scanner bands of data indicated that regression coefficients and standard errors of estimate were not appreciably degraded compared with results from the 10-band analysis.

  20. Quantitative crystalline silica exposure assessment for a historical cohort epidemiologic study in the German porcelain industry.

    PubMed

    Birk, Thomas; Guldner, Karlheinz; Mundt, Kenneth A; Dahmann, Dirk; Adams, Robert C; Parsons, William

    2010-09-01

    A time-dependent quantitative exposure assessment of silica exposure among nearly 18,000 German porcelain workers was conducted. Results will be used to evaluate exposure-response disease risks. Over 8000 historical industrial hygiene (IH) measurements with original sampling and analysis protocols from 1954-2006 were obtained from the German Berufsgenossenschaft der keramischen- und Glas-Industrie (BGGK) and used to construct a job exposure matrix (JEM). Early measurements from different devices were converted to modern gravimetric equivalent values. Conversion factors were derived from parallel historical measurements and new side-by-side measurements using historical and modern devices in laboratory dust tunnels and active workplace locations. Exposure values were summarized and smoothed using LOESS regression; estimates for early years were derived using backward extrapolation techniques. Employee work histories were merged with JEM values to determine cumulative crystalline silica exposures for cohort members. Average silica concentrations were derived for six primary similar exposure groups (SEGs) for 1938-2006. Over 40% of the cohort accumulated <0.5 mg/m(3)-years; just over one-third accumulated >1 mg/m(3)-years. Nearly 5000 workers had cumulative crystalline silica estimates >1.5 mg/m(3)-years. Similar numbers of men and women fell into each cumulative exposure category, except for 1113 women and 1567 men in the highest category. Over half of those hired before 1960 accumulated >3 mg/m(3)-years crystalline silica compared with 4.9% of those hired after 1960. Among those ever working in the materials preparation area, half accumulated >3 mg/m(3)-years compared with 12% of those never working in this area. Quantitative respirable silica exposures were estimated for each member of this cohort, including employment periods for which sampling used now obsolete technologies. Although individual cumulative exposure estimates ranged from background to about 40 mg/m(3)-years, many of these estimates reflect long-term exposures near modern exposure limit values, allowing direct evaluation of lung cancer and silicosis risks near these limits without extrapolation. This quantitative exposure assessment is the largest to date in the porcelain industry.

  1. Shear-induced aggregation dynamics in a polymer microrod suspension

    NASA Astrophysics Data System (ADS)

    Kumar, Pramukta S.

    A non-Brownian suspension of micron-scale rods is found to exhibit reversible shear-driven formation of disordered aggregates, resulting in dramatic viscosity enhancement at low shear rates. Aggregate formation is imaged at low magnification using a combined rheometer and fluorescence microscope system. The size and structure of these aggregates are found to depend on shear rate and concentration, with larger aggregates present at lower shear rates and higher concentrations. Quantitative measurements of the early-stage aggregation process are modeled by collision-driven growth of porous structures, which shows that the aggregate density increases with shear rate. A Krieger-Dougherty type constitutive relation and steady-state viscosity measurements are used to estimate the intrinsic viscosity of the complex structures developed under shear. Higher magnification images are collected and used to validate the aggregate size versus density relationship, as well as to obtain particle flow fields via PIV. The flow fields provide a tantalizing view of fluctuations involved in the aggregation process. Interaction strength is estimated via contact force measurements and JKR theory and found to be extremely strong in comparison to the shear forces present in the system, estimated using hydrodynamic arguments. All of the results are then combined to produce a consistent conceptual model of aggregation in the system that features testable consequences. These results represent a direct, quantitative, experimental study of aggregation and viscosity enhancement in a rod suspension, and demonstrate a strategy for inferring inaccessible microscopic geometric properties of a dynamic system through the combination of quantitative imaging and rheology.
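
    For reference, the standard form of the Krieger-Dougherty constitutive relation invoked above (the study's exact parameterization may differ), linking suspension viscosity to the particle volume fraction φ, the maximum packing fraction φmax, and the intrinsic viscosity [η] being estimated:

```latex
\eta(\phi) = \eta_0 \left(1 - \frac{\phi}{\phi_{\max}}\right)^{-[\eta]\,\phi_{\max}}
```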

  2. Occupational exposure decisions: can limited data interpretation training help improve accuracy?

    PubMed

    Logan, Perry; Ramachandran, Gurumurthy; Mulhausen, John; Hewett, Paul

    2009-06-01

    Accurate exposure assessments are critical for ensuring that potentially hazardous exposures are properly identified and controlled. The availability and accuracy of exposure assessments can determine whether resources are appropriately allocated to engineering and administrative controls, medical surveillance, personal protective equipment and other programs designed to protect workers. A desktop study was performed using videos, task information and sampling data to evaluate the accuracy and potential bias of participants' exposure judgments. Desktop exposure judgments were obtained from occupational hygienists for material handling jobs with small air sampling data sets (0-8 samples) and without the aid of computers. In addition, data interpretation tests (DITs) were administered to participants where they were asked to estimate the 95th percentile of an underlying log-normal exposure distribution from small data sets. Participants were presented with an exposure data interpretation or rule of thumb training which included a simple set of rules for estimating 95th percentiles for small data sets from a log-normal population. DIT was given to each participant before and after the rule of thumb training. Results of each DIT and qualitative and quantitative exposure judgments were compared with a reference judgment obtained through a Bayesian probabilistic analysis of the sampling data to investigate overall judgment accuracy and bias. There were a total of 4386 participant-task-chemical judgments for all data collections: 552 qualitative judgments made without sampling data and 3834 quantitative judgments with sampling data. The DITs and quantitative judgments were significantly better than random chance and much improved by the rule of thumb training. In addition, the rule of thumb training reduced the amount of bias in the DITs and quantitative judgments. The mean DIT % correct scores increased from 47 to 64% after the rule of thumb training (P < 0.001). The accuracy for quantitative desktop judgments increased from 43 to 63% correct after the rule of thumb training (P < 0.001). The rule of thumb training did not significantly impact accuracy for qualitative desktop judgments. The finding that even some simple statistical rules of thumb improve judgment accuracy significantly suggests that hygienists need to routinely use statistical tools while making exposure judgments using monitoring data.
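
    The abstract does not reproduce the rule of thumb itself; the standard parametric quantity the DIT asks for is the 95th percentile of a log-normal exposure distribution, estimated from a small sample as exp(mean(ln x) + 1.645·sd(ln x)). A minimal sketch with hypothetical exposure values:

```python
import numpy as np

def lognormal_x95(samples):
    """Point estimate of the 95th percentile of the underlying log-normal
    exposure distribution from a small sample (z for the 95th pct is 1.645)."""
    logs = np.log(np.asarray(samples, float))
    return float(np.exp(logs.mean() + 1.645 * logs.std(ddof=1)))

exposures = [0.12, 0.31, 0.08, 0.22, 0.45]   # mg/m^3, hypothetical
print(f"estimated X95: {lognormal_x95(exposures):.2f} mg/m^3")
```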

  3. Usefulness of the automatic quantitative estimation tool for cerebral blood flow: clinical assessment of the application software tool AQCEL.

    PubMed

    Momose, Mitsuhiro; Takaki, Akihiro; Matsushita, Tsuyoshi; Yanagisawa, Shin; Yano, Kesato; Miyasaka, Tadashi; Ogura, Yuka; Kadoya, Masumi

    2011-01-01

    AQCEL enables automatic reconstruction of single-photon emission computed tomograms (SPECT) without image degradation and quantitative analysis of cerebral blood flow (CBF) after the input of simple parameters. We ascertained the usefulness and quality of images obtained by the application software AQCEL in clinical practice. Twelve patients underwent brain perfusion SPECT using technetium-99m ethyl cysteinate dimer at rest and after acetazolamide (ACZ) loading. Images reconstructed using AQCEL were compared with those reconstructed using the conventional filtered back projection (FBP) method for qualitative estimation. Two experienced nuclear medicine physicians interpreted the image quality using the following visual scores: 0, same; 1, slightly superior; 2, superior. For quantitative estimation, the mean CBF values of the normal hemisphere of the 12 patients using ACZ calculated by the AQCEL method were compared with those calculated by the conventional method. The CBF values of the 24 regions of the 3-dimensional stereotaxic region of interest template (3DSRT) calculated by the AQCEL method at rest and after ACZ loading were compared to those calculated by the conventional method. No significant qualitative difference was observed between the AQCEL and conventional FBP methods in the rest study. The average score by the AQCEL method was 0.25 ± 0.45 and that by the conventional method was 0.17 ± 0.39 (P = 0.34). There was a significant qualitative difference between the AQCEL and conventional methods in the ACZ loading study. The average score for AQCEL was 0.83 ± 0.58 and that for the conventional method was 0.08 ± 0.29 (P = 0.003). During quantitative estimation using ACZ, the mean CBF values of 12 patients calculated by the AQCEL method were 3-8% higher than those calculated by the conventional method. The square of the correlation coefficient between these methods was 0.995. While comparing the 24 3DSRT regions of 12 patients, the squares of the correlation coefficient between the AQCEL and conventional methods were 0.973 and 0.986 for the normal and affected sides at rest, respectively, and 0.977 and 0.984 for the normal and affected sides after ACZ loading, respectively. The quality of images reconstructed using the application software AQCEL was superior to that obtained using the conventional method after ACZ loading, and high quantitative correlations were observed at rest and after ACZ loading. This software can be applied to clinical practice and is a useful tool for improvement of reproducibility and throughput.

  4. Epithelium percentage estimation facilitates epithelial quantitative protein measurement in tissue specimens.

    PubMed

    Chen, Jing; Toghi Eshghi, Shadi; Bova, George Steven; Li, Qing Kay; Li, Xingde; Zhang, Hui

    2013-12-01

    The rapid advancement of high-throughput tools for quantitative measurement of proteins has demonstrated the potential for the identification of proteins associated with cancer. However, the quantitative results on cancer tissue specimens are usually confounded by tissue heterogeneity, e.g. regions with cancer usually have significantly higher epithelium content yet lower stromal content. It is therefore necessary to develop a tool to facilitate the interpretation of the results of protein measurements in tissue specimens. Epithelial cell adhesion molecule (EpCAM) and cathepsin L (CTSL) are two epithelial proteins whose expressions in normal and tumorous prostate tissues were confirmed by measuring staining intensity with immunohistochemical staining (IHC). The expressions of these proteins were measured by ELISA in protein extracts from OCT embedded frozen prostate tissues. To eliminate the influence of tissue heterogeneity on epithelial protein quantification measured by ELISA, a color-based segmentation method was developed in-house for estimation of epithelium content using H&E histology slides from the same prostate tissues, and the estimated epithelium percentage was used to normalize the ELISA results. The epithelium contents of the same slides were also estimated by a pathologist and used to normalize the ELISA results. The computer-based results were compared with the pathologist's reading. We found that both EpCAM and CTSL levels, measured by the ELISA assay itself, were greatly affected by epithelium content in the tissue specimens. Without adjusting for epithelium percentage, both EpCAM and CTSL levels appeared significantly higher in tumor tissues than normal tissues with a p value less than 0.001. However, after normalization by the epithelium percentage, ELISA measurements of both EpCAM and CTSL were in agreement with IHC staining results, showing a significant increase only in EpCAM with no difference in CTSL expression in cancer tissues. These results were obtained with normalization by both the computer-estimated and pathologist-estimated epithelium percentage. Our results show that estimation of tissue epithelium percentage using our color-based segmentation method correlates well with pathologists' estimation of tissue epithelium percentages. The epithelium contents estimated by color-based segmentation may be useful in immuno-based analysis or clinical proteomic analysis of tumor proteins. The codes used for epithelium estimation as well as the micrographs with estimated epithelium content are available online.
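
    A minimal sketch of the normalization, assuming it is a simple division of the bulk-tissue ELISA value by the estimated epithelium fraction (the abstract says the percentage "was used to normalize" without giving the exact form); the numbers are hypothetical.

```python
def normalize_by_epithelium(elisa_value: float, epithelium_fraction: float) -> float:
    """Scale a bulk-tissue ELISA measurement to epithelial content so that
    specimens with different epithelium percentages become comparable."""
    if not 0.0 < epithelium_fraction <= 1.0:
        raise ValueError("epithelium fraction must be in (0, 1]")
    return elisa_value / epithelium_fraction

# A tumor specimen (70% epithelium) vs. a normal one (30% epithelium):
print(normalize_by_epithelium(14.0, 0.7))  # 20.0 per unit of epithelium
print(normalize_by_epithelium(6.0, 0.3))   # 20.0 -> apparent difference vanishes
```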

  5. Systematic feasibility analysis of a quantitative elasticity estimation for breast anatomy using supine/prone patient postures.

    PubMed

    Hasse, Katelyn; Neylon, John; Sheng, Ke; Santhanam, Anand P

    2016-03-01

    Breast elastography is a critical tool for improving the targeted radiotherapy treatment of breast tumors. Current breast radiotherapy imaging protocols involve only prone and supine CT scans, and there is a lack of knowledge on the quantitative accuracy with which breast elasticity can be systematically measured using only prone and supine CT datasets. The purpose of this paper is to describe a quantitative elasticity estimation technique for breast anatomy using only these supine/prone patient postures. Using biomechanical, high-resolution breast geometry obtained from CT scans, a systematic assessment was performed to determine the feasibility of this methodology for clinically relevant elasticity distributions. A model-guided inverse analysis approach is presented. A graphics processing unit (GPU)-based linear elastic biomechanical model was employed as the forward model for the inverse analysis, with the breast geometry in a prone position. The elasticity estimation was performed using a gradient-based iterative optimization scheme and a fast simulated annealing (FSA) algorithm. Numerical studies were conducted to systematically analyze the feasibility of elasticity estimation. For simulating gravity-induced breast deformation, the breast geometry was anchored at its base, resembling the chest-wall/breast tissue interface. Ground-truth elasticity distributions representing tumor presence within breast tissue were assigned to the model. Model geometry resolution was varied to estimate its influence on convergence of the system. A priori information was approximated and utilized to record its effect on the time and accuracy of convergence, and the role of the FSA process was also recorded. A novel error metric combining elasticity and displacement error was used to quantify the systematic feasibility study. For the authors' purposes, convergence was defined as every voxel of tissue lying within 1 mm of the ground-truth deformation. The authors' analyses showed that ∼97% model convergence was systematically observed with no a priori information. Varying the model geometry resolution showed no significant accuracy improvements. The GPU-based forward model enabled the inverse analysis to be completed within 10-70 min. When a priori information about the underlying anatomy was used, the computation time decreased by as much as 50%, while accuracy improved from 96.81% to 98.26%. The use of FSA was observed to allow the iterative estimation methodology to converge more precisely. By utilizing a forward iterative approach to solve the inverse elasticity problem, this work indicates the feasibility and potential of fast reconstruction of breast tissue elasticity using supine/prone patient postures.
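
    The inverse analysis couples a forward biomechanical solver with a fast simulated annealing (FSA) search over candidate elasticity maps. A toy sketch of that loop, with a scalar stand-in for the GPU-based linear elastic forward model and a displacement-mismatch objective (all values illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def forward_model(elasticity):
        """Stand-in for the GPU-based linear elastic solver: maps a per-region
        elasticity vector to gravity-induced displacements (toy relation)."""
        return 1.0 / elasticity  # softer tissue deforms more

    # "Ground truth": a stiff, tumor-like inclusion in softer tissue
    e_true = np.array([1.0, 1.0, 5.0, 1.0])
    d_obs = forward_model(e_true)

    # Fast simulated annealing over candidate elasticity maps
    e = np.ones(4)  # initial guess
    err = np.sum((forward_model(e) - d_obs) ** 2)
    for k in range(1, 20001):
        temp = 1.0 / k  # fast (1/k) cooling schedule
        cand = np.clip(e + rng.normal(0, 0.1, size=4), 0.1, 10.0)
        cand_err = np.sum((forward_model(cand) - d_obs) ** 2)
        # Accept improvements always, worse moves with Boltzmann probability
        if cand_err < err or rng.random() < np.exp((err - cand_err) / temp):
            e, err = cand, cand_err

    print("estimated elasticity:", np.round(e, 2))
    ```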

  6. Validation of Bayesian analysis of compartmental kinetic models in medical imaging.

    PubMed

    Sitek, Arkadiusz; Li, Quanzheng; El Fakhri, Georges; Alpert, Nathaniel M

    2016-10-01

    Kinetic compartmental analysis is frequently used to compute physiologically relevant quantitative values from time series of images. In this paper, a new approach based on Bayesian analysis to obtain information about these parameters is presented and validated. The closed form of the posterior distribution of kinetic parameters is derived with a hierarchical prior to model the standard deviation of normally distributed noise. Markov chain Monte Carlo methods are used for numerical estimation of the posterior distribution. Computer simulations of the kinetics of F18-fluorodeoxyglucose (FDG) are used to demonstrate drawing statistical inferences about kinetic parameters and to validate the theory and implementation. Additionally, point estimates of kinetic parameters and the covariance of those estimates are determined using the classical non-linear least squares approach. Posteriors obtained using the methods proposed in this work are accurate, as no significant deviation from the expected shape of the posterior was found (one-sided P>0.08). It is demonstrated that the results obtained by the standard non-linear least squares methods fail to provide accurate estimation of uncertainty for the same data set (P<0.0001). The results of this work validate the new methods using computer simulations of FDG kinetics, and show that in situations where the classical approach fails to estimate uncertainty accurately, Bayesian estimation provides accurate information about the uncertainties in the parameters. Although a particular example of FDG kinetics was used in the paper, the methods can be extended to different pharmaceuticals and imaging modalities. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
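
    The paper derives a closed-form posterior with a hierarchical prior on the noise; a much-reduced sketch of the same idea, random-walk Metropolis sampling of a one-tissue compartment model with a fixed noise level and flat positive priors, looks like this (input function, parameters and noise are all synthetic):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0, 60, 61)            # minutes
    cp = t * np.exp(-t / 20.0)            # toy plasma input function

    def tissue_curve(k1, k2):
        """One-tissue compartment model, dC/dt = K1*Cp - k2*C, Euler-integrated."""
        c = np.zeros_like(t)
        for i in range(1, t.size):
            c[i] = c[i - 1] + (t[i] - t[i - 1]) * (k1 * cp[i - 1] - k2 * c[i - 1])
        return c

    # Synthetic noisy data with known parameters K1 = 0.1, k2 = 0.05
    data = tissue_curve(0.1, 0.05) + rng.normal(0, 0.2, t.size)

    def log_post(k1, k2, sigma=0.2):
        """Log-posterior: Gaussian likelihood, flat priors on k1, k2 > 0."""
        if k1 <= 0 or k2 <= 0:
            return -np.inf
        return -0.5 * np.sum(((data - tissue_curve(k1, k2)) / sigma) ** 2)

    # Random-walk Metropolis sampling of the posterior
    theta, lp = np.array([0.2, 0.1]), log_post(0.2, 0.1)
    samples = []
    for _ in range(5000):
        prop = theta + rng.normal(0, 0.005, size=2)
        lp_prop = log_post(*prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())

    samples = np.array(samples[1000:])    # discard burn-in
    print("posterior mean (K1, k2):", samples.mean(axis=0).round(3))
    print("posterior sd:", samples.std(axis=0).round(4))
    ```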

  7. FACTORS AFFECTING THE UPTAKE OF LISSAMINE GREEN BY SERUM PROTEINS

    PubMed Central

    Brackenridge, C. J.

    1960-01-01

    Eight physicochemical factors which affect the uptake of lissamine green on filter paper impregnated with serum proteins have been examined, and their relevance to the staining of electrophoretically separated protein fractions is discussed. It is shown that grade of paper, weight of protein applied, separate and combined denaturation and staining time, temperature and concentration of staining solution, concentration of denaturant, and type of protein all influence the weight of dye absorbed per unit weight of applied protein, and must be rigidly standardized if valid quantitative results are to be obtained. Five sets of conditions are obtained for optimal staining and it is found that separation of denaturant from dye yields the best procedure. It is concluded that lissamine green is an excellent dye for the staining and quantitative estimation of separated protein fractions in paper electrophoresis, and that conditions can usually be arranged to produce a linear relation between dye uptake and protein concentration in an experimentally efficient manner. PMID:13803681

  8. Quantitative analysis of Bordeaux red wine precipitates by solid-state NMR: Role of tartrates and polyphenols.

    PubMed

    Prakash, Shipra; Iturmendi, Nerea; Grelard, Axelle; Moine, Virginie; Dufourc, Erick

    2016-05-15

    The stability of wines is of great importance in oenology. Quantitative estimation of the dark red precipitates formed in Merlot and Cabernet Sauvignon wines from the Bordeaux region, vintages 2012 and 2013, was performed during the oak barrel ageing process. Precipitates were obtained by placing wine at -4°C or 4°C for 2-6 days and monitored by periodic sampling over a one-year period. Spectroscopic identification of the main families of components present in the precipitate powder was performed with (13)C solid-state CPMAS NMR and 1D and 2D solution NMR of partially water-resolubilized precipitates. The study revealed that the amount of precipitate obtained depends on vintage, temperature and grape variety. Major components identified include potassium bitartrate, polyphenols, polysaccharides, organic acids and free amino acids. No evidence was found for the presence of proteins. The influence of the main compounds found in the precipitates is discussed in relation to wine stability. Copyright © 2016. Published by Elsevier Ltd.

  9. Quantitative Characterization of Tissue Microstructure with Temporal Diffusion Spectroscopy

    PubMed Central

    Xu, Junzhong; Does, Mark D.; Gore, John C.

    2009-01-01

    The signals recorded by diffusion-weighted magnetic resonance imaging (DWI) are dependent on the micro-structural properties of biological tissues, so it is possible to obtain quantitative structural information non-invasively from such measurements. Oscillating gradient spin echo (OGSE) methods have the ability to probe the behavior of water diffusion over different time scales and the potential to detect variations in intracellular structure. To assist in the interpretation of OGSE data, analytical expressions have been derived for diffusion-weighted signals with OGSE methods for restricted diffusion in some typical structures, including parallel planes, cylinders and spheres, using the theory of temporal diffusion spectroscopy. These analytical predictions have been confirmed with computer simulations. These expressions suggest how OGSE signals from biological tissues should be analyzed to characterize tissue microstructure, including how to estimate cell nuclear sizes. This approach provides a model to interpret diffusion data obtained from OGSE measurements that can be used for applications such as monitoring tumor response to treatment in vivo. PMID:19616979

  10. Quantitative determination of amorphous cyclosporine in crystalline cyclosporine samples by Fourier transform infrared spectroscopy.

    PubMed

    Bertacche, Vittorio; Pini, Elena; Stradi, Riccardo; Stratta, Fabio

    2006-01-01

    The purpose of this study is the development of a quantification method to detect the amount of amorphous cyclosporine using Fourier transform infrared (FTIR) spectroscopy. The mixing of different percentages of crystalline cyclosporine with amorphous cyclosporine was used to obtain a set of standards, composed of cyclosporine samples characterized by different percentages of amorphous cyclosporine. Using a wavelength range of 450-4,000 cm(-1), FTIR spectra were obtained from samples in potassium bromide pellets and then a partial least squares (PLS) model was exploited to correlate the features of the FTIR spectra with the percentage of amorphous cyclosporine in the samples. This model gave a standard error of estimate (SEE) of 0.3562, with an r value of 0.9971 and a standard error of prediction (SEP) of 0.4168, which derives from the cross validation function used to check the precision of the model. Statistical values reveal the applicability of the method to the quantitative determination of amorphous cyclosporine in crystalline cyclosporine samples.
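
    A calibration of this kind can be sketched with scikit-learn's PLS implementation. Everything below (band shapes, noise level, number of latent variables) is invented for illustration; only the workflow of fitting standards of known amorphous content and reporting a cross-validated error mirrors the abstract:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(2)

    # Invented standards: 20 samples x 500 wavenumber channels, mixtures of a
    # "crystalline" and an "amorphous" band shape plus noise
    wn = np.linspace(450, 4000, 500)
    band_cryst = np.exp(-((wn - 1650) / 40) ** 2)
    band_amorp = np.exp(-((wn - 1630) / 80) ** 2)
    pct_amorphous = np.linspace(0, 20, 20)
    X = np.array([(100 - p) * band_cryst + p * band_amorp for p in pct_amorphous])
    X += rng.normal(0, 0.5, X.shape)

    pls = PLSRegression(n_components=3).fit(X, pct_amorphous)

    # Cross-validated predictions give a standard error of prediction (SEP)
    y_cv = cross_val_predict(PLSRegression(n_components=3), X, pct_amorphous, cv=5)
    sep = np.sqrt(np.mean((y_cv.ravel() - pct_amorphous) ** 2))
    print(f"SEP = {sep:.3f} % amorphous")
    ```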

  11. Evaluation of spatial filtering on the accuracy of wheat area estimate

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Moreira, M. A.; Chen, S. C.; Delima, A. M.

    1982-01-01

    A 3 x 3 pixel spatial filter for postclassification was applied to wheat classification to evaluate the effects of this procedure on the accuracy of area estimation using LANDSAT digital data obtained from a single pass. Quantitative analyses were carried out in five test sites (approx 40 sq km each), and t tests showed that filtering with threshold values significantly decreased errors of commission and omission. In area estimation, filtering reduced the overestimate from 4.5% to 2.7%, and the root-mean-square error decreased from 126.18 ha to 107.02 ha. Extrapolating the same procedure of automatic classification with spatial filtering for postclassification to the whole study area, the overestimate in area was reduced from 10.9% to 9.7%. It is concluded that when single-pass LANDSAT data are used for crop identification and area estimation, the postclassification procedure using a spatial filter provides a more accurate area estimate by reducing classification errors.
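
    A 3 x 3 postclassification filter of this kind is commonly implemented as a majority (mode) filter over class labels; the study's version additionally applies threshold values, which this minimal scipy sketch omits:

    ```python
    import numpy as np
    from scipy.ndimage import generic_filter

    def majority_vote(window):
        """Most frequent class label in the 3 x 3 window."""
        values, counts = np.unique(window.astype(int), return_counts=True)
        return values[np.argmax(counts)]

    # Toy classification map: 1 = wheat, 0 = other, with isolated errors
    classified = np.zeros((8, 8), dtype=int)
    classified[2:6, 2:6] = 1    # a wheat field
    classified[0, 7] = 1        # isolated commission error
    classified[3, 3] = 0        # isolated omission error inside the field

    filtered = generic_filter(classified, majority_vote, size=3)
    print(filtered)             # both isolated errors are removed
    ```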

  12. Reference-free error estimation for multiple measurement methods.

    PubMed

    Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga

    2018-01-01

    We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in true values of measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte-Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in a good agreement with the corresponding least squares regression estimates against a reference.
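
    The error model is the core of the framework: each method reports the true value distorted by a polynomial bias plus random errors that are jointly Gaussian across methods, so methods sharing measurement principles can have correlated errors. A sketch of that generative model with invented numbers (the actual framework then infers these parameters by MCMC):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 200
    x_true = rng.uniform(0, 50, n)            # unknown true lesion loads

    # Per-method systematic error (bias) as a polynomial in the true value
    bias_coef = np.array([[0.5, 1.05, 0.0],   # method 1: offset + slight gain
                          [2.0, 0.90, 0.001], # method 2: offset, gain, curvature
                          [-1.0, 1.10, 0.0]]) # method 3
    # Random errors are jointly Gaussian: methods 1 and 2 share measurement
    # principles, hence their errors are correlated
    cov = np.array([[1.0, 0.6, 0.1],
                    [0.6, 1.0, 0.1],
                    [0.1, 0.1, 2.0]])
    eps = rng.multivariate_normal(np.zeros(3), cov, size=n)

    Y = np.stack([b0 + b1 * x_true + b2 * x_true**2
                  for b0, b1, b2 in bias_coef], axis=1) + eps

    print(Y.shape)                      # (200, 3): n samples x 3 methods
    print(np.corrcoef(eps.T).round(2))  # correlated random-error structure
    ```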

  13. A hydro-mechanical framework for early warning of rainfall-induced landslides (Invited)

    NASA Astrophysics Data System (ADS)

    Godt, J.; Lu, N.; Baum, R. L.

    2013-12-01

    Landslide early warning requires an estimate of the location, timing, and magnitude of initial movement, and the change in volume and momentum of material as it travels down a slope or channel. In many locations advance assessment of landslide location, volume, and momentum is possible, but prediction of landslide timing entails understanding the evolution of rainfall and soil-water conditions, and consequent effects on slope stability in real time. Existing schemes for landslide prediction generally rely on empirical relations between landslide occurrence and rainfall amount and duration; however, these relations account neither for temporally variable rainfall nor for the variably saturated processes that control the hydro-mechanical response of hillside materials to rainfall. Although limited by the resolution and accuracy of rainfall forecasts and now-casts in complex terrain and by the inherent difficulty of adequately characterizing subsurface materials, physics-based models provide a general means to quantitatively link rainfall and landslide occurrence. Obtaining quantitative estimates of landslide potential from physics-based models using observed or forecasted rainfall requires explicit consideration of the changes in effective stress that result from changes in soil moisture and pore-water pressures. The physics that control soil-water conditions are transient, nonlinear, hysteretic, and dependent on material composition and history. In order to examine the physical processes that control infiltration and effective stress in variably saturated materials, we present field and laboratory results describing intrinsic relations among soil-water and mechanical properties of hillside materials. At the REV (representative elementary volume) scale, the interaction between pore fluids and solid grains can be effectively described by the relation between soil suction, soil water content, hydraulic conductivity, and suction stress. We show that these relations can be obtained independently from outflow, shear strength, and deformation tests for a wide range of earth materials. We then compare laboratory results with measurements of pore pressure and moisture content from landslide-prone settings and demonstrate that laboratory results obtained for hillside materials are representative of field conditions. These fundamental relations provide a basis for combining observed or forecasted rainfall with in-situ measurements of soil-water conditions using hydro-mechanical models that simulate transient variably saturated flow and slope stability. We conclude that early warning using an approach in which in-situ observations establish initial conditions for hydro-mechanical models is feasible in areas of high landslide risk where laboratory characterization of materials is practical and accurate rainfall information can be obtained. Analogous to weather and climate forecasting, such models could then be applied in an ensemble fashion to obtain quantitative estimates of landslide probability and error. Application to broader regions likely awaits breakthroughs in the development of remotely sensed proxies of soil properties and subsurface moisture conditions.

  14. Empirical expression for DC magnetization curve of immobilized magnetic nanoparticles for use in biomedical applications

    NASA Astrophysics Data System (ADS)

    Elrefai, Ahmed L.; Sasayama, Teruyoshi; Yoshida, Takashi; Enpuku, Keiji

    2018-05-01

    We studied the magnetization (M-H) curve of immobilized magnetic nanoparticles (MNPs) used for biomedical applications. First, we performed numerical simulations of the DC M-H curve over a wide range of MNP parameters. Based on the simulation results, we obtained an empirical expression for the DC M-H curve. The empirical expression was compared with the measured M-H curves of various MNP samples, and quantitative agreement was obtained between them. The basic parameters of the MNPs can also be estimated from the comparison. Therefore, the empirical expression is useful for analyzing the M-H curve of immobilized MNPs in specific biomedical applications.
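
    The empirical expression itself is not reproduced in the abstract. As a hedged illustration, the classical Langevin curve that such expressions typically extend can be fitted to a measured DC M-H curve to recover basic particle parameters:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langevin(h, m_s, a):
        """Classical Langevin magnetization: M = Ms * (coth(H/a) - a/H)."""
        x = h / a
        return m_s * (1.0 / np.tanh(x) - 1.0 / x)

    # Synthetic "measured" DC M-H data (arbitrary units), not from the paper
    h = np.linspace(1, 100, 50)        # avoid H = 0, where the form is singular
    m = langevin(h, 1.0, 15.0) + np.random.default_rng(3).normal(0, 0.01, 50)

    (m_s, a), _ = curve_fit(langevin, h, m, p0=[1.0, 10.0])
    print(f"saturation Ms = {m_s:.3f}, shape parameter a = {a:.2f}")
    ```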

  15. Using CTX Image Features to Predict HiRISE-Equivalent Rock Density

    NASA Technical Reports Server (NTRS)

    Serrano, Navid; Huertas, Andres; McGuire, Patrick; Mayer, David; Ardvidson, Raymond

    2010-01-01

    Methods have been developed to quantitatively assess rock hazards at candidate landing sites with the aid of images from the HiRISE camera onboard NASA's Mars Reconnaissance Orbiter. HiRISE is able to resolve rocks as small as 1 m in diameter. Some sites of interest do not have adequate coverage by the highest-resolution sensors, and there is a need to infer relevant information (like site safety or underlying geomorphology). The proposed approach would make it possible to obtain rock density estimates at a level close or equal to those obtained from high-resolution sensors in which individual rocks are discernible.

  16. Estimating ankle rotational constraints from anatomic structure

    NASA Astrophysics Data System (ADS)

    Baker, H. H.; Bruckner, Janice S.; Langdon, John H.

    1992-09-01

    Three-dimensional biomedical data obtained through tomography provide exceptional views of biological anatomy. While visualization is one of the primary purposes for obtaining these data, other more quantitative and analytic uses are possible. These include modeling of tissue properties and interrelationships, simulation of physical processes, interactive surgical investigation, and analysis of kinematics and dynamics. As an application of our research in modeling tissue structure and function, we have been working to develop interactive and automated tools for studying joint geometry and kinematics. We focus here on discrimination of morphological variations in the foot and determining the implications of these on both hominid bipedal evolution and physical therapy treatment for foot disorders.

  17. A model of the near-earth plasma environment and application to the ISEE-A and -B orbit

    NASA Technical Reports Server (NTRS)

    Chan, K. W.; Sawyer, K. W.; Vette, J. I.

    1977-01-01

    A model of the near-earth environment was developed to obtain a best estimate of the average flux of protons and electrons in the energy range from 0.1 to 100 keV for the International Sun-Earth Explorer (ISEE)-A and -B spacecraft. The possible radiation damage to the thermal coating on these spinning spacecraft is also studied. Applications of the model to other high-altitude satellites can be obtained with the appropriate orbit averaging. This study is the first attempt to synthesize an overall quantitative environment of low-energy particles for high-altitude spacecraft, using data from in situ measurements.

  18. Adsorption of organic compounds onto activated carbons from recycled vegetables biomass.

    PubMed

    Mameli, Anna; Cincotti, Alberto; Lai, Nicola; Crisafulli, Carmelo; Sciré, Salvatore; Cao, Giacomo

    2004-01-01

    The removal of organic species from aqueous solution by activated carbons is investigated. The carbons are prepared from olive husks and almond shells, and a wide range of surface area values are obtained by varying the temperature and duration of both the carbonization and activation steps. The adsorption isotherms of phenol, catechol and 2,6-dichlorophenol on the prepared activated carbons are obtained at 25 degrees C. The corresponding behavior is quantitatively correlated using classical isotherms, whose parameters are estimated by fitting the equilibrium data. A two-component isotherm (phenol/2,6-dichlorophenol) is also determined in order to test activated carbon behavior during competitive adsorption.
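
    The abstract does not name the isotherm used; the Langmuir form is one classical choice for phenol on activated carbon, and fitting its parameters to equilibrium data takes a few lines (all values invented):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(c_eq, q_max, b):
        """Langmuir isotherm: q = q_max * b * c / (1 + b * c)."""
        return q_max * b * c_eq / (1.0 + b * c_eq)

    # Invented equilibrium data for phenol on activated carbon (mg/L, mg/g)
    c_eq = np.array([5.0, 10, 25, 50, 100, 200])
    q = np.array([30.0, 52, 90, 120, 145, 160])

    (q_max, b), _ = curve_fit(langmuir, c_eq, q, p0=[150.0, 0.01])
    print(f"q_max = {q_max:.1f} mg/g, b = {b:.4f} L/mg")
    ```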

  19. Selective DNA Pooling for Determination of Linkage between a Molecular Marker and a Quantitative Trait Locus

    PubMed Central

    Darvasi, A.; Soller, M.

    1994-01-01

    Selective genotyping is a method to reduce costs in marker-quantitative trait locus (QTL) linkage determination by genotyping only those individuals with extreme, and hence most informative, quantitative trait values. The DNA pooling strategy (termed "selective DNA pooling") takes this one step further by pooling DNA from the selected individuals at each of the two phenotypic extremes, and basing the test for linkage on marker allele frequencies as estimated from the pooled samples only. This can reduce the genotyping costs of marker-QTL linkage determination by up to two orders of magnitude. Theoretical analysis of selective DNA pooling shows that for experiments involving backcross, F(2) and half-sib designs, the power of selective DNA pooling for detecting genes with large effect can be the same as that obtained by individual selective genotyping. Power for detecting genes with small effect, however, was found to decrease strongly with increase in the technical error of estimating allele frequencies in the pooled samples. The effect of technical error, however, can be markedly reduced by replication of technical procedures. It is also shown that a proportion selected of 0.1 at each tail will be appropriate for a wide range of experimental conditions. PMID:7896115

  20. Comparison of detection limits in environmental analysis--is it possible? An approach on quality assurance in the lower working range by verification.

    PubMed

    Geiss, S; Einax, J W

    2001-07-01

    Detection limit, reporting limit and limit of quantitation are analytical parameters that describe the power of analytical methods. These parameters are used internally for quality assurance and externally for comparison, especially in the case of trace analysis in environmental compartments. The wide variety of possibilities for computing or obtaining these measures in the literature and in legislative rules makes any comparison difficult. Additionally, a host of terms has been used within the analytical community to describe detection and quantitation capabilities. Without trying to impose an order on this variety of terms, this paper aims to provide a practical proposal for answering the analyst's main questions concerning the quality measures above. These main questions and the related parameters are explained and demonstrated graphically. Estimation and verification of these parameters are the two steps needed to obtain realistic measures. A rule for practical verification is given in a table from which the analyst can read what to measure, what to estimate and which criteria have to be fulfilled. Verified in this manner, the parameters detection limit, reporting limit and limit of quantitation become comparable, and the analyst is responsible for the unambiguity and reliability of these measures.
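
    The paper's point is that such limits are comparable only once the estimation rule is fixed. One widely used convention (ICH-style) estimates them from a calibration line in the lower working range; a sketch with invented data:

    ```python
    import numpy as np

    # Calibration line near the lower working range (invented data)
    conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])        # concentration
    signal = np.array([0.02, 0.55, 1.03, 2.10, 3.95, 8.05])

    slope, intercept = np.polyfit(conc, signal, 1)
    resid = signal - (slope * conc + intercept)
    s_res = np.sqrt(np.sum(resid ** 2) / (conc.size - 2))  # residual SD

    lod = 3.3 * s_res / slope    # detection limit (ICH-style factor)
    loq = 10.0 * s_res / slope   # limit of quantitation
    print(f"LOD = {lod:.3f}, LOQ = {loq:.3f} (concentration units)")
    ```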

  1. On the accuracy of analytical models of impurity segregation during directional melt crystallization and their applicability for quantitative calculations

    NASA Astrophysics Data System (ADS)

    Voloshin, A. E.; Prostomolotov, A. I.; Verezub, N. A.

    2016-11-01

    The paper deals with the analysis of the accuracy of some one-dimensional (1D) analytical models of the axial distribution of impurities in a crystal grown from a melt. The models proposed by Burton-Prim-Slichter, Ostrogorsky-Muller and Garandet with co-authors are considered and compared to the results of a two-dimensional (2D) numerical simulation. Both stationary solutions and solutions for the initial transient regime obtained using these models are considered. The sources of errors are analyzed, and a conclusion is drawn about the applicability of 1D analytical models for quantitative estimates of impurity incorporation into the crystal sample as well as for the solution of inverse problems.
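
    Of the models compared, the Burton-Prim-Slichter model has the simplest closed form for the effective segregation coefficient, k_eff = k0 / (k0 + (1 - k0) * exp(-V * delta / D)), with growth rate V, diffusion boundary-layer thickness delta and impurity diffusivity D in the melt. A quick numerical check under assumed parameter values:

    ```python
    import numpy as np

    def k_eff_bps(k0, growth_rate, delta, diffusivity):
        """Burton-Prim-Slichter effective segregation coefficient."""
        return k0 / (k0 + (1.0 - k0) * np.exp(-growth_rate * delta / diffusivity))

    # Assumed values: k0 = 0.1, V = 5 um/s, delta = 100 um, D = 1e-9 m^2/s
    print(k_eff_bps(0.1, 5e-6, 1e-4, 1e-9))   # ~0.15
    print(k_eff_bps(0.1, 50e-6, 1e-4, 1e-9))  # faster growth: ~0.94, tending to 1
    ```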

  2. Airborne radar and radiometer experiment for quantitative remote measurements of rain

    NASA Technical Reports Server (NTRS)

    Kozu, Toshiaki; Meneghini, Robert; Boncyk, Wayne; Wilheit, Thomas T.; Nakamura, Kenji

    1989-01-01

    An aircraft experiment has been conducted with a dual-frequency (10 GHz and 35 GHz) radar/radiometer system and an 18-GHz radiometer to test various rain-rate retrieval algorithms from space. In the experiment, which took place in the fall of 1988 at the NASA Wallops Flight Facility, VA, both stratiform and convective storms were observed. A ground-based radar and rain gauges were also used to obtain truth data. An external radar calibration is made with rain gauge data, thereby enabling quantitative reflectivity measurements. Comparisons between path attenuations derived from the surface return and from the radar reflectivity profile are made to test the feasibility of a technique to estimate the raindrop size distribution from simultaneous radar and path-attenuation measurements.

  3. Nonequilibrium fluctuations in metaphase spindles: polarized light microscopy, image registration, and correlation functions

    NASA Astrophysics Data System (ADS)

    Brugués, Jan; Needleman, Daniel J.

    2010-02-01

    Metaphase spindles are highly dynamic, nonequilibrium, steady-state structures. We study the internal fluctuations of spindles by computing spatio-temporal correlation functions of movies obtained from quantitative polarized light microscopy. These correlation functions are only physically meaningful if corrections are made for the net motion of the spindle. We describe our image registration algorithm in detail and we explore its robustness. Finally, we discuss the expression used for the estimation of the correlation function in terms of the nematic order of the microtubules which make up the spindle. Ultimately, studying the form of these correlation functions will provide a quantitative test of the validity of coarse-grained models of spindle structure inspired from liquid crystal physics.

  4. Estimation of Vulnerability Functions for Debris Flows Using Different Intensity Parameters

    NASA Astrophysics Data System (ADS)

    Akbas, S. O.; Blahut, J.; Luna, B. Q.; Sterlacchini, S.

    2009-04-01

    In landslide risk research, the majority of past studies have focused on hazard analysis, with only a few targeting the concept of vulnerability. When debris flows are considered, there is no consensus or even modest agreement on a generalized methodology to estimate the physical vulnerability of affected buildings. Very few quantitative relationships between intensities and vulnerability values have been proposed, and in most of the existing relationships, information on process intensity is missing or described only semi-quantitatively. However, robust assessment of vulnerabilities, along with the associated uncertainties, is of utmost importance from a quantitative risk analysis point of view. On the morning of 13th July 2008, after more than two days of intense rainfall, several debris and mud flows were released in the central part of Valtellina, an Italian alpine valley in the Lombardy Region. One of the largest muddy-debris flows occurred in Selvetta, a hamlet of the Colorina municipality. The result was the complete destruction of two buildings and damage at varying severity levels to eight others. The authors had the chance to gather detailed information about the event by conducting extensive field work and interviews with local inhabitants, civil protection teams, and officials. In addition to the data gathered from the field studies, the main characteristics of the debris flow have been estimated using numerical and empirical approaches. The extensive data obtained from the Selvetta event gave an opportunity to develop three separate empirical vulnerability curves, which are functions of deposition height, debris flow velocity, and pressure, respectively. Deposition heights were directly obtained from field surveys, whereas the velocity and pressure values were back-calculated using the finite difference program FLO2D. The vulnerability was defined as the ratio between the monetary loss and the reconstruction value. The monetary losses were obtained from official RASDA documents, which were compiled for claim purposes. For each building, the approximate reconstruction value was calculated according to the building type and size, using the official data given in the Housing Prices Index prepared by the Engineers and Architects of Milan. The resulting vulnerability curves were compared to those in the literature and among themselves. Specific recommendations were given regarding the most suitable parameter for characterizing the intensity of debris flows within the context of physical vulnerability.

  5. Rapid estimation of nutritional elements on citrus leaves by near infrared reflectance spectroscopy.

    PubMed

    Galvez-Sola, Luis; García-Sánchez, Francisco; Pérez-Pérez, Juan G; Gimeno, Vicente; Navarro, Josefa M; Moral, Raul; Martínez-Nicolás, Juan J; Nieves, Manuel

    2015-01-01

    Sufficient nutrient application is one of the most important factors in producing quality citrus fruits. One of the main guides in planning citrus fertilizer programs is direct monitoring of plant nutrient content. However, this requires analysis of a large number of leaf samples using expensive and time-consuming chemical techniques. Over the last 5 years, it has been demonstrated that certain nutritional elements in citrus leaves can be estimated quantitatively from spectral reflectance values obtained by near infrared reflectance spectroscopy (NIRS). This technique is rapid, non-destructive, cost-effective and environmentally friendly. The estimation of macro- and micronutrients in citrus leaves by this method would therefore be beneficial in identifying the mineral status of the trees. However, to be used effectively, NIRS must be evaluated against the standard techniques across different cultivars. In this study, NIRS spectral analysis and subsequent nutrient estimations for N, K, Ca, Mg, B, Fe, Cu, Mn, and Zn concentrations were performed using 217 leaf samples from different citrus tree species. Partial least squares regression and different pre-processing signal treatments were used to generate the best estimation against the current best-practice techniques. High proficiency was verified in the estimation of N (Rv = 0.99) and Ca (Rv = 0.98), and acceptable estimates were achieved for K, Mg, Fe, and Zn. However, no successful calibrations were obtained for the estimation of B, Cu, and Mn.

  7. Spectrochemical analysis of powdered biological samples using transversely excited atmospheric carbon dioxide laser plasma excitation

    NASA Astrophysics Data System (ADS)

    Zivkovic, Sanja; Momcilovic, Milos; Staicu, Angela; Mutic, Jelena; Trtica, Milan; Savovic, Jelena

    2017-02-01

    The aim of this study was to develop a simple laser-induced breakdown spectroscopy (LIBS) method for quantitative elemental analysis of powdered biological materials based on laboratory-prepared calibration samples. The analysis was done using ungated single-pulse LIBS in ambient air at atmospheric pressure, with a transversely excited atmospheric pressure (TEA) CO2 laser as the energy source for plasma generation on the samples. The material used for the analysis was the blue-green alga Spirulina, widely used in the food and pharmaceutical industries and also in a few biotechnological applications. To demonstrate the analytical potential of this particular LIBS system, the obtained spectra were compared with spectra obtained using a commercial LIBS system based on a pulsed Nd:YAG laser. A single sample of known concentration was used to estimate detection limits for Ba, Ca, Fe, Mg, Mn, Si and Sr and to compare the detection power of the two LIBS systems. The TEA CO2 laser-based LIBS was also applied to quantitative analysis of these elements in powdered Spirulina samples. Analytical curves for Ba, Fe, Mg, Mn and Sr were constructed using laboratory-produced matrix-matched calibration samples. Inductively coupled plasma optical emission spectroscopy (ICP-OES) was used as the reference technique for elemental quantification, and reasonably good agreement between the ICP and LIBS data was obtained. The results confirm that, with respect to its sensitivity and precision, TEA CO2 laser-based LIBS can be successfully applied to quantitative analysis of macro- and micro-elements in algal samples. The fact that nearly all classes of materials can be prepared as powders implies that the proposed method could easily be extended to the quantitative analysis of different kinds of materials, whether organic, biological or inorganic.

  8. The Mapping Model: A Cognitive Theory of Quantitative Estimation

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2008-01-01

    How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…

  9. Modified microplate method for rapid and efficient estimation of siderophore produced by bacteria.

    PubMed

    Arora, Naveen Kumar; Verma, Maya

    2017-12-01

    In this study, siderophore production by various plant-growth-promoting rhizobacteria was quantified by a rapid and efficient method. In total, 23 siderophore-producing bacterial isolates/strains were taken to estimate their siderophore-producing ability by the standard method (chrome azurol sulphonate assay) as well as a 96-well microplate method. Siderophore production was estimated in percent siderophore units by both methods. The data obtained by the two methods correlated positively with each other, confirming the validity of the microplate method. With the modified microplate method, siderophore production by several bacterial strains can be estimated both qualitatively and quantitatively in one go, saving time and chemicals and making the assay less tedious and cheaper than the method currently in use. The modified microtiter plate method as proposed here makes it far easier to screen plant-associated bacteria for this plant-growth-promoting character.
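
    Percent siderophore units in the CAS assay are conventionally computed from the absorbances of the CAS reference and of the sample supernatant, and a microplate reader feeds the same formula well by well. A sketch with invented readings:

    ```python
    def percent_siderophore_unit(a_reference, a_sample):
        """CAS assay: psu = (Ar - As) / Ar * 100, with Ar the absorbance of
        the CAS reagent reference and As that of the sample supernatant."""
        return (a_reference - a_sample) / a_reference * 100.0

    # Invented 96-well absorbance readings (~630 nm), not study data
    a_ref = 0.82
    for strain, a_s in [("isolate-1", 0.31), ("isolate-2", 0.55)]:
        print(strain, f"{percent_siderophore_unit(a_ref, a_s):.1f} psu")
    ```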

  10. Estimating Photosynthetically Available Radiation (PAR) at the Earth's surface from satellite observations

    NASA Technical Reports Server (NTRS)

    Frouin, Robert

    1993-01-01

    Current satellite algorithms to estimate photosynthetically available radiation (PAR) at the earth's surface are reviewed. PAR is deduced either from an insolation estimate or obtained directly from top-of-atmosphere solar radiances. The characteristics of both approaches are contrasted and typical results are presented. The reported inaccuracies, about 10 percent and 6 percent on daily and monthly time scales, respectively, are adequate for modeling oceanic and terrestrial primary productivity. At those time scales, cloud-driven variability in the ratio of PAR to insolation is reduced, making it possible to deduce PAR directly from insolation climatologies (satellite or other) that are currently available or being produced. Improvements, however, are needed in conditions of broken cloudiness and over ice/snow. If not addressed properly, calibration/validation issues may prevent quantitative use of the PAR estimates in studies of climatic change. The prospects are good for an accurate, long-term climatology of PAR over the globe.

  11. Quantitative FLASH MRI at 3T using a rational approximation of the Ernst equation.

    PubMed

    Helms, Gunther; Dathe, Henning; Dechent, Peter

    2008-03-01

    From the half-angle substitution of trigonometric terms in the Ernst equation, rational approximations of the flip angle dependence of the FLASH signal can be derived. Even the rational function of the lowest order was in good agreement with the experiment for flip angles up to 20 degrees. Three-dimensional maps of the signal amplitude and longitudinal relaxation rates in human brain were obtained from eight subjects by dual-angle measurements at 3T (nonselective 3D-FLASH, 7 degrees and 20 degrees flip angle, TR = 30 ms, isotropic resolution of 0.95 mm, each 7:09 min). The corresponding estimates of T1 and signal amplitude are simple algebraic expressions and deviated about 1% from the exact solution. They are ill-conditioned to estimate the local flip angle deviation but can be corrected post hoc by division by squared RF maps obtained from independent measurements. Local deviations from the nominal flip angles strongly affected the relaxation estimates and caused considerable blurring of the T1 histograms. (c) 2008 Wiley-Liss, Inc.
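
    Solving the lowest-order rational approximation, S = A*alpha*r/(alpha^2/2 + r) with r = TR/T1, for two flip angles gives the closed-form estimates mentioned above. A sketch that simulates signals from the exact Ernst equation and recovers T1 with roughly the quoted 1% deviation (protocol values taken from the abstract, noiseless for clarity):

    ```python
    import numpy as np

    def dual_angle_t1(s1, a1, s2, a2, tr):
        """T1 and amplitude from the lowest-order rational approximation
        S = A*a*r / (a**2/2 + r), r = TR/T1; flip angles in radians."""
        t1 = 2.0 * tr * (s1 / a1 - s2 / a2) / (s2 * a2 - s1 * a1)
        r = tr / t1
        amp = s1 * (a1 ** 2 / 2.0 + r) / (a1 * r)
        return t1, amp

    # Simulate signals from the exact Ernst equation (T1 = 1000 ms, TR = 30 ms)
    tr, t1_true = 30.0, 1000.0
    e1 = np.exp(-tr / t1_true)
    ernst = lambda a: np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))

    a1, a2 = np.deg2rad(7.0), np.deg2rad(20.0)
    t1_est, amp_est = dual_angle_t1(ernst(a1), a1, ernst(a2), a2, tr)
    print(f"T1 ~ {t1_est:.0f} ms (true 1000), amplitude ~ {amp_est:.3f} (true 1)")
    ```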

  12. Characterization of dynamics in complex lyophilized formulations: I. Comparison of relaxation times measured by isothermal calorimetry with data estimated from the width of the glass transition temperature region.

    PubMed

    Chieng, Norman; Mizuno, Masayasu; Pikal, Michael

    2013-10-01

    The purposes of this study are to characterize the relaxation dynamics in complex freeze-dried formulations and to investigate the quantitative relationship between the structural relaxation time as measured by a thermal activity monitor (TAM) and that estimated from the width of the glass transition temperature region (ΔT(g)). The latter method has advantages over TAM because it is simple and quick. As part of this objective, we evaluate the accuracy of estimating relaxation time data at higher temperatures (50 °C and 60 °C) from TAM data at a lower temperature (40 °C) and glass transition region width (ΔT(g)) data obtained by differential scanning calorimetry. The formulations studied here were hydroxyethyl starch (HES)-disaccharide, HES-polyol, and HES-disaccharide-polyol at various ratios. We also re-examine, using TAM-derived relaxation times, the correlation between protein stability (human growth hormone, hGH) and relaxation times explored in a previous report, which employed relaxation time data obtained from ΔT(g). Results show that most of the freeze-dried formulations exist in a single amorphous phase, and structural relaxation times were successfully measured for these systems. We find a reasonably good correlation between TAM-measured relaxation times and the corresponding estimates based on ΔT(g), but the agreement is only qualitative. The comparison plot showed that the TAM data are directly proportional to the 1/3 power of the ΔT(g) data, after correcting for an offset. Nevertheless, the correlation between hGH stability and relaxation time remained qualitatively the same as that found using ΔT(g)-derived relaxation data, and modest extrapolation of TAM data to higher temperatures using the ΔT(g) method and TAM data at 40 °C resulted in quantitative agreement with TAM measurements made at 50 °C and 60 °C, provided the TAM experiment temperature is well below the Tg of the sample. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Quantitative estimation of carbonation and chloride penetration in reinforced concrete by laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Eto, Shuzo; Matsuo, Toyofumi; Matsumura, Takuro; Fujii, Takashi; Tanaka, Masayoshi Y.

    2014-11-01

    The penetration profile of chlorine in a reinforced concrete (RC) specimen was determined by laser-induced breakdown spectroscopy (LIBS). The concrete core was prepared from RC beams with cracking damage induced by bending load and salt water spraying. LIBS was performed using a specimen that was obtained by splitting the concrete core, and the line scan of laser pulses gave the two-dimensional emission intensity profiles of 100 × 80 mm2 within one hour. The two-dimensional profile of the emission intensity suggests that the presence of the crack had less effect on the emission intensity when the measurement interval was larger than the crack width. The chlorine emission spectrum was measured without using the buffer gas, which is usually used for chlorine measurement, by collinear double-pulse LIBS. The apparent diffusion coefficient, which is one of the most important parameters for chloride penetration in concrete, was estimated using the depth profile of chlorine emission intensity and Fick's law. The carbonation depth was estimated on the basis of the relationship between carbon and calcium emission intensities. When the carbon emission intensity was statistically higher than the calcium emission intensity at the measurement point, we determined that the point was carbonated. The estimation results were consistent with the spraying test results using phenolphthalein solution. These results suggest that the quantitative estimation by LIBS of carbonation depth and chloride penetration can be performed simultaneously.
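
    Given a chlorine depth profile, the apparent diffusion coefficient follows from fitting the erfc solution of Fick's second law, C(x, t) = Cs * erfc(x / (2*sqrt(D*t))). A sketch with invented intensities and an assumed one-year exposure time:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    def profile(x_mm, c_s, d_app, t_yr=1.0):
        """Fick's second law solution: C(x,t) = Cs * erfc(x / (2*sqrt(D*t)))."""
        return c_s * erfc(x_mm / (2.0 * np.sqrt(d_app * t_yr)))

    # Invented chlorine emission-intensity depth profile (normalized)
    depth = np.array([0.0, 5, 10, 15, 20, 30, 40])           # mm
    intensity = np.array([1.00, 0.78, 0.55, 0.37, 0.22, 0.07, 0.02])

    # With p0 of length 2, only c_s and d_app are fitted; t_yr keeps its default
    (c_s, d_app), _ = curve_fit(profile, depth, intensity, p0=[1.0, 100.0])
    print(f"apparent diffusion coefficient ~ {d_app:.0f} mm^2/yr")
    ```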

  14. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)-1, cardiac output = 3, 5, 8 L min-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time-attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5%, while the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared with the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. This suggests that there is no particular advantage among the quantitative estimation methods, nor any advantage to performing dose reduction via tube current reduction rather than temporal sampling reduction. These data are important for optimizing the implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.

  15. A systematic investigation of aluminium ion speciation at high temperature. Part 1. Solution studies.

    PubMed

    Shafran, Kirill L; Perry, Carole C

    2005-06-21

    Speciation diagrams of aluminium ions in aqueous solution (0.2 M) at high temperature (90 degrees C) have been obtained from 48 h time-resolved multi-batch titration experiments monitored by 27Al NMR spectroscopy, potentiometry and dynamic light scattering. The quantitative speciation patterns and kinetic data obtained offer a dynamic picture of the distribution of soluble and insoluble Al species as a function of the hydrolysis ratio h (h = [OH-]/[Al3+]) over a very broad range of conditions (-1.0 ≤ h ≤ 4.0). Monomeric, small oligomeric, tridecameric (the 'Al13-mer') and the recently characterised 30-meric aluminium species (the 'Al30-mer'), as well as aluminium hydroxide, have been identified and quantified. The Al13-mer species dominates over a relatively broad range of hydrolysis ratios (1.5 ≤ h ≤ 2.7) during the first 6 h of the experiment, but is gradually replaced by Al30-mers at longer reaction times. Kinetic profiles indicate that the formation of the Al30-mer is limited by the disappearance of the Al13 species under mildly acidic conditions. The estimated rate constants of both hydrolytic processes show good internal correlation at h ≥ 1.5. The effect of local perturbations leading to the formation of aluminium hydroxide below the electroneutrality point (h = 3.0) has been estimated quantitatively.

  16. Fuel Consumption and Fire Emissions Estimates in Siberia: Impact of Vegetation Types, Meteorological Conditions, Forestry Practices and Fire Regimes

    NASA Astrophysics Data System (ADS)

    Kukavskaya, Elena; Conard, Susan; Ivanova, Galina; Buryak, Ludmila; Soja, Amber; Zhila, Sergey

    2015-04-01

    Boreal forests play a crucial role in carbon budgets with Siberian carbon fluxes and pools making a major contribution to the regional and global carbon cycle. Wildfire is the main ecological disturbance in Siberia that leads to changes in forest species composition and structure and in carbon storage, as well as direct emissions of greenhouse gases and aerosols to the atmosphere. At present, the global scientific community is highly interested in quantitative and accurate estimates of fire emissions. Little research on wildland fuel consumption and carbon emission estimates has been carried out in Russia until recently. From 2000 to 2007 we conducted a series of experimental fires of varying fireline intensity in light-coniferous forest of central Siberia to obtain quantitative and qualitative data on fire behavior and carbon emissions due to fires of known behavior. From 2009 to 2013 we examined a number of burned logged areas to assess the potential impact of forest practices on fire emissions. In 2013-2014 burned areas in dark-coniferous and deciduous forests were examined to determine fuel consumption and carbon emissions. We have combined and analyzed the scarce data available in the literature with data obtained in the course of our long-term research to determine the impact of various factors on fuel consumption and to develop models of carbon emissions for different ecosystems of Siberia. Carbon emissions varied drastically (from 0.5 to 40.9 tC/ha) as a function of vegetation type, weather conditions, anthropogenic effects and fire behavior characteristics and periodicity. Our study provides a basis for better understanding of the feedbacks between wildland fire emissions and changing anthropogenic disturbance patterns and climate. The data obtained could be used by air quality agencies to calculate local emissions and by managers to develop strategies to mitigate negative smoke impacts on the environment and human health.

  17. Micro-CT based finite element models for elastic properties of glass-ceramic scaffolds.

    PubMed

    Tagliabue, Stefano; Rossi, Erica; Baino, Francesco; Vitale-Brovarone, Chiara; Gastaldi, Dario; Vena, Pasquale

    2017-01-01

    In this study, the mechanical properties of porous glass-ceramic scaffolds are investigated by means of three-dimensional finite element models based on micro-computed tomography (micro-CT) scan data. In particular, the quantitative relationship between the morpho-architectural features of the obtained scaffolds, such as macroscopic porosity and strut thickness, and elastic properties, is sought. The macroscopic elastic properties of the scaffolds have been obtained through numerical homogenization approaches using the mechanical characteristics of the solid walls of the scaffolds (assessed through nanoindentation) as input parameters for the numerical simulations. Anisotropic mechanical properties of the produced scaffolds have also been investigated by defining a suitable anisotropy index. A comparison with morphological data obtained through the micro-CT scans is also presented. The proposed study shows that the produced glass-ceramic scaffolds exhibited a macroscopic porosity ranging between 29% and 97% which corresponds to an average stiffness ranging between 42.4 GPa and 36 MPa. A quantitative estimation of the isotropy of the macroscopic elastic properties has been performed showing that the samples with higher solid fractions were those closest to an isotropic material. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Mapping Bone Mineral Density Obtained by Quantitative Computed Tomography to Bone Volume Fraction

    NASA Technical Reports Server (NTRS)

    Pennline, James A.; Mulugeta, Lealem

    2017-01-01

    Methods for relating or mapping estimates of volumetric Bone Mineral Density (vBMD) obtained by Quantitative Computed Tomography to Bone Volume Fraction (BVF) are outlined mathematically. The methods are based on definitions of bone properties, cited experimental studies, and regression relations derived from them for trabecular bone in the proximal femur. Using an experimental range of values in the intertrochanteric region obtained from male and female human subjects aged 18 to 49, the BVF values calculated from four different methods were compared with the experimental average and numerical range. The BVF values computed from the conversion method used data from two sources: one provided pre-bed-rest vBMD values in the intertrochanteric region from 24 bed rest subjects who participated in a 70-day study, and the other contained preflight vBMD values from 18 astronauts who spent 4 to 6 months on the ISS. To aid the use of a mapping from BMD to BVF, the discussion includes how to formulate the mappings for purposes of computational modeling. One application of the conversions is to aid in modeling time-varying changes in vBMD as they relate to changes in BVF via bone remodeling and/or modeling.

  19. Deaths averted by influenza vaccination in the U.S. during the seasons 2005/06 through 2013/14.

    PubMed

    Foppa, Ivo M; Cheng, Po-Yung; Reynolds, Sue B; Shay, David K; Carias, Cristina; Bresee, Joseph S; Kim, Inkyu K; Gambhir, Manoj; Fry, Alicia M

    2015-06-12

    Excess mortality due to seasonal influenza is substantial, yet quantitative estimates of the benefit of annual vaccination programs on influenza-associated mortality are lacking. We estimated the numbers of deaths averted by vaccination in four age groups (0.5 to 4, 5 to 19, 20 to 64 and ≥65 yrs.) for the nine influenza seasons from 2005/6 through 2013/14. These estimates were obtained using a Monte Carlo approach applied to weekly U.S. age group-specific estimates of influenza-associated excess mortality, monthly vaccination coverage estimates and summary seasonal influenza vaccine effectiveness estimates. The estimates are conservative as they do not include indirect vaccination effects. From August, 2005 through June, 2014, we estimated that 40,127 (95% confidence interval [CI] 25,694 to 59,210) deaths were averted by influenza vaccination. We found that of all studied seasons the most deaths were averted by influenza vaccination during the 2012/13 season (9,398; 95% CI 2,386 to 19,897) and the fewest during the 2009/10 pandemic (222; 95% CI 79 to 347). Of all influenza-associated deaths averted, 88.9% (95% CI 83 to 92.5%) were in people ≥65 yrs. old. The estimated number of deaths averted by the US annual influenza vaccination program is considerable, especially among elderly adults and even when vaccine effectiveness is modest, such as in the 2012/13 season. As indirect effects ("herd immunity") of vaccination are ignored, these estimates represent lower bounds and are thus conservative given valid excess mortality estimates. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
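
    The full method works from weekly excess mortality, monthly coverage and seasonal effectiveness estimates; a heavily simplified, season-level sketch of the Monte Carlo logic uses the relation averted = D_obs * p*VE / (1 - p*VE), which assumes uniform risk, with invented inputs:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 100_000

    # Season-level inputs with uncertainty (illustrative, not the study's values)
    excess_deaths = rng.normal(6000, 800, n)   # influenza-associated excess deaths
    coverage = rng.normal(0.45, 0.03, n)       # vaccination coverage p
    ve = rng.normal(0.45, 0.10, n)             # vaccine effectiveness VE

    # Under uniform risk, observed deaths D_obs = D_novax * (1 - p*VE), so
    # averted = D_novax - D_obs = D_obs * p*VE / (1 - p*VE)
    averted = excess_deaths * coverage * ve / (1.0 - coverage * ve)

    lo, med, hi = np.percentile(averted, [2.5, 50, 97.5])
    print(f"deaths averted: {med:.0f} (95% interval {lo:.0f} to {hi:.0f})")
    ```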

  20. Modified HS-SPME for determination of quantitative relations between low-molecular oxygen compounds in various matrices.

    PubMed

    Dawidowicz, Andrzej L; Szewczyk, Joanna; Dybowski, Michal P

    2016-09-07

    Quantitative relations between the individual constituents of a liquid sample similar to those established by its direct injection can be obtained by applying a polydimethylsiloxane (PDMS) fiber in a headspace solid-phase microextraction (HS-SPME) system containing the examined sample suspended in methyl silica oil. This paper shows that the analogous system, composed of a sample suspension/emulsion in polyethylene glycol (PEG) and a Carbowax fiber, yields quantitative relations between the components of the mixture similar to those established by direct analysis, but only for polar constituents. This is demonstrated for the essential oil (EO) components of savory, sage, mint and thyme, and for an artificial liquid mixture of polar constituents. The observed differences between the quantitative relations for polar constituents estimated by the two procedures are insignificant (Fexp < Fcrit). The presented results indicate that the wider applicability of a system composed of a sample suspended in an oil of the same physicochemical character as the SPME fiber coating strongly depends on the character of the interactions between the analytes and the suspending liquid and between the analytes and the fiber coating. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Partitioning of organophosphorus pesticides into phosphatidylcholine small unilamellar vesicles studied by second-derivative spectrophotometry.

    PubMed

    Takegami, Shigehiko; Kitamura, Keisuke; Ohsugi, Mayuko; Ito, Aya; Kitade, Tatsuya

    2015-06-15

    In order to quantitatively examine the lipophilicity of the widely used organophosphorus pesticides (OPs) chlorfenvinphos (CFVP), chlorpyrifos-methyl (CPFM), diazinon (DZN), fenitrothion (FNT), fenthion (FT), isofenphos (IFP), profenofos (PFF) and pyraclofos (PCF), their partition coefficient (Kp) values between phosphatidylcholine (PC) small unilamellar vesicles (SUVs) and water (liposome-water system) were determined by second-derivative spectrophotometry. The second-derivative spectra of these OPs in the presence of PC SUV showed a bathochromic shift with increasing PC concentration, and distinct derivative isosbestic points, demonstrating the complete elimination of the residual background signal effects observed in the absorption spectra. The Kp values were calculated from the second-derivative intensity change induced by the addition of PC SUV and were obtained with good precision (R.S.D. below 10%). The Kp values were in the order CPFM>FT>PFF>PCF>IFP>CFVP>FNT⩾DZN and did not show a linear correlation with the reported partition coefficients obtained using an n-octanol-water system (R(2)=0.530). The results also quantitatively clarified the effect of chemical-group substitution in OPs on their lipophilicity. Since the partition coefficient for the liposome-water system is more effective for modeling quantitative structure-activity relationships than that for the n-octanol-water system, the obtained results are toxicologically important for estimating the accumulation of these OPs in human cell membranes. Copyright © 2015 Elsevier B.V. All rights reserved.
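    A sketch of the kind of nonlinear fit commonly used to extract Kp from derivative-intensity changes, assuming the frequently used saturation model ΔD = ΔDmax·Kp·[L]/([W] + Kp·[L]), where [W] ≈ 55.3 M is the molar concentration of water; the model form and all numbers are illustrative rather than taken from the paper.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    W = 55.3  # molar concentration of water, mol/L

    def delta_d(lipid_conc, kp, dd_max):
        """Saturation model for the second-derivative intensity change."""
        return dd_max * kp * lipid_conc / (W + kp * lipid_conc)

    # Invented data: PC lipid concentration (mol/L) vs. derivative change
    lipid = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 4.0]) * 1e-3
    dd = np.array([0.08, 0.14, 0.26, 0.37, 0.46, 0.52])

    (kp, dd_max), cov = curve_fit(delta_d, lipid, dd, p0=(1e5, 0.6))
    kp_sd = np.sqrt(np.diag(cov))[0]
    print(f"Kp = {kp:.3g} +/- {kp_sd:.2g}")
    ```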

  2. Two-way and three-way approaches to ultra high performance liquid chromatography-photodiode array dataset for the quantitative resolution of a two-component mixture containing ciprofloxacin and ornidazole.

    PubMed

    Dinç, Erdal; Ertekin, Zehra Ceren; Büker, Eda

    2016-09-01

    Two-way and three-way calibration models were applied to ultra high performance liquid chromatography with photodiode array data with coeluted peaks in the same wavelength and time regions for the simultaneous quantitation of ciprofloxacin and ornidazole in tablets. The chromatographic data cube (tensor) was obtained by recording chromatographic spectra of the standard and sample solutions containing ciprofloxacin and ornidazole with sulfadiazine as an internal standard as a function of time and wavelength. Parallel factor analysis and trilinear partial least squares were used as three-way calibrations for the decomposition of the tensor, whereas three-way unfolded partial least squares was applied as a two-way calibration to the unfolded dataset obtained from the data array of ultra high performance liquid chromatography with photodiode array detection. The validity and ability of two-way and three-way analysis methods were tested by analyzing validation samples: synthetic mixture, interday and intraday samples, and standard addition samples. Results obtained from two-way and three-way calibrations were compared to those provided by traditional ultra high performance liquid chromatography. The proposed methods, parallel factor analysis, trilinear partial least squares, unfolded partial least squares, and traditional ultra high performance liquid chromatography were successfully applied to the quantitative estimation of the solid dosage form containing ciprofloxacin and ornidazole. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
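    A minimal sketch of a three-way PARAFAC decomposition of such a (sample × elution time × wavelength) data cube, using the open-source tensorly library; the rank, array shapes and random data are illustrative, and the paper's own implementation is not specified here.

    ```python
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    # Stand-in data cube: 10 solutions x 300 elution times x 120 wavelengths
    tensor = tl.tensor(np.random.rand(10, 300, 120))

    # Rank 3: one component each for ciprofloxacin, ornidazole and the
    # internal standard (sulfadiazine).
    weights, factors = parafac(tensor, rank=3)
    sample_mode, time_mode, spectral_mode = factors

    # sample_mode (10 x 3) carries the relative concentrations; regressing
    # its columns against the known standards gives the calibration curves.
    print(sample_mode.shape, time_mode.shape, spectral_mode.shape)
    ```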

  3. Quantitative genetic tools for insecticide resistance risk assessment: estimating the heritability of resistance

    Treesearch

    Michael J. Firko; Jane Leslie Hayes

    1990-01-01

    Quantitative genetic studies of resistance can provide estimates of genetic parameters not available with other types of genetic analyses. Three methods are discussed for estimating the amount of additive genetic variation in resistance to individual insecticides and subsequent estimation of heritability (h2) of resistance. Sibling analysis and...
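    For reference, the standard quantitative-genetics definition underlying such heritability estimates (a general relation, not a result of this report): narrow-sense heritability is the ratio of additive genetic variance to total phenotypic variance,

    ```latex
    h^2 = \frac{V_A}{V_P} = \frac{V_A}{V_A + V_D + V_I + V_E}
    ```

    where V_A is the additive genetic variance and V_D, V_I and V_E are the dominance, epistatic and environmental components of the phenotypic variance V_P.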

  4. Evaluation of micron size carbon fibers released from burning graphite composites

    NASA Technical Reports Server (NTRS)

    Sussholz, B.

    1980-01-01

    Quantitative estimates were developed of the micron-sized carbon fibers released during the burning of graphite composites. Evidence was found of fibrillated particles, which were the predominant source of the micron fiber data obtained from large pool fire tests. The fibrillation phenomena were attributed to fiber oxidation effects caused by the fire environment. Analysis of propane burn test records indicated that wind sources can cause considerable carbon fiber oxidation. Criteria estimates were determined for the number of micron-sized carbon fibers released during an aircraft accident. An extreme-case analysis indicated that the upper limit of the micron carbon fiber concentration level was only about half the permissible asbestos ceiling concentration level.

  5. Structure and interactions of fully hydrated dioleoylphosphatidylcholine bilayers.

    PubMed Central

    Tristram-Nagle, S; Petrache, H I; Nagle, J F

    1998-01-01

    This study focuses on dioleoylphosphatidylcholine (DOPC) bilayers near full hydration. Volumetric data and high-resolution synchrotron x-ray data are used in a method that compares DOPC with well determined gel phase dipalmitoylphosphatidylcholine (DPPC). The key structural quantity obtained is fully hydrated area/lipid A0 = 72.2 +/- 1.1 A2 at 30 degrees C, from which other quantities such as thickness of the bilayer are obtained. Data for samples over osmotic pressures from 0 to 56 atmospheres give an estimate for the area compressibility of KA = 188 dyn/cm. Obtaining the continuous scattering transform and electron density profiles requires correction for liquid crystal fluctuations. Quantitation of these fluctuations opens an experimental window on the fluctuation pressure, the primary repulsive interaction near full hydration. The fluctuation pressure decays exponentially with water spacing, in agreement with analytical results for soft confinement. However, the ratio of decay length lambda(fl) = 5.8 A to hydration pressure decay length lambda = 2.2 A is significantly larger than the value of 2 predicted by analytical theory and close to the ratio obtained in recent simulations. We also obtain the traditional osmotic pressure versus water spacing data. Our analysis of these data shows that estimates of the Hamaker parameter H and the bending modulus Kc are strongly coupled. PMID:9675192

  6. A NOVEL TECHNIQUE FOR QUANTITATIVE ESTIMATION OF UPTAKE OF DIESEL EXHAUST PARTICLES BY LUNG CELLS

    EPA Science Inventory

    While airborne particulates like diesel exhaust particulates (DEP) exert significant toxicological effects on lungs, quantitative estimation of accumulation of DEP inside lung cells has not been reported due to a lack of an accurate and quantitative technique for this purpose. I...

  7. Blood flow estimation in gastroscopic true-color images

    NASA Astrophysics Data System (ADS)

    Jacoby, Raffael S.; Herpers, Rainer; Zwiebel, Franz M.; Englmeier, Karl-Hans

    1995-05-01

    The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. The quantity of blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by approximately calculating the hemoglobin concentration based on a spectrophotometric analysis of endoscopic true-color images, which are recorded during routine examinations. A system model based on the Kubelka-Munk law of reflectance spectroscopy is derived, which enables estimation of the hemoglobin concentration from the color values of the images. Additionally, a transformation of the color values is developed in order to improve luminance independence. By applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed. The obtained results are largely independent of luminance. An initial validation of the presented method is performed by a quantitative estimation of its reproducibility.
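    A per-pixel sketch of the Kubelka-Munk remission function on which such models are built, with an illustrative hemoglobin index formed from the red and green channels; the index and its calibration are assumptions for illustration, not the paper's exact system model.

    ```python
    import numpy as np

    def kubelka_munk(reflectance):
        """Kubelka-Munk remission function F(R) = (1 - R)^2 / (2R),
        proportional to the absorption/scattering ratio K/S."""
        r = np.clip(reflectance, 1e-6, 1.0)   # guard against division by zero
        return (1.0 - r) ** 2 / (2.0 * r)

    def hemoglobin_index(red, green):
        """Illustrative per-pixel hemoglobin index: hemoglobin absorbs
        strongly in green and weakly in red light, so the difference of the
        two K/S values tracks its concentration."""
        return kubelka_munk(green) - kubelka_munk(red)

    frame = np.random.rand(480, 640, 3)       # stand-in RGB endoscopic frame
    ihb_map = hemoglobin_index(frame[..., 0], frame[..., 1])
    ```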

  8. Methods to estimate the transfer of contaminants into recycling products - A case study from Austria.

    PubMed

    Knapp, Julika; Allesch, Astrid; Müller, Wolfgang; Bockreis, Anke

    2017-11-01

    Recycling of waste materials is desirable to reduce the consumption of limited primary resources, but also includes the risk of recycling unwanted, hazardous substances. In Austria, the legal framework demands secondary products must not present a higher risk than comparable products derived from primary resources. However, the act provides no definition on how to assess this risk potential. This paper describes the development of different quantitative and qualitative methods to estimate the transfer of contaminants in recycling processes. The quantitative methods comprise the comparison of concentrations of harmful substances in recycling products to corresponding primary products and to existing limit values. The developed evaluation matrix, which considers further aspects, allows for the assessment of the qualitative risk potential. The results show that, depending on the assessed waste fraction, particular contaminants can be critical. Their concentrations were higher than in comparable primary materials and did not comply with existing limit values. On the other hand, the results show that a long-term, well-established quality control system can assure compliance with the limit values. The results of the qualitative assessment obtained with the evaluation matrix support the results of the quantitative assessment. Therefore, the evaluation matrix can be suitable to quickly screen waste streams used for recycling to estimate their potential environmental and health risks. To prevent the transfer of contaminants into product cycles, improved data of relevant substances in secondary resources are necessary. In addition, regulations for material recycling are required to assure adequate quality control measures, including limit values. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Development of quantitative exposure data for a pooled exposure-response analysis of 10 silica cohorts.

    PubMed

    Mannetje, Andrea 't; Steenland, Kyle; Checkoway, Harvey; Koskela, Riitta-Sisko; Koponen, Matti; Attfield, Michael; Chen, Jingqiong; Hnizdo, Eva; DeKlerk, Nicholas; Dosemeci, Mustafa

    2002-08-01

    Comprehensive quantitative silica exposure estimates over time, measured in the same units across a number of cohorts, would make possible a pooled exposure-response analysis for lung cancer. Such an analysis would help clarify the continuing controversy regarding whether silica causes lung cancer. Existing quantitative exposure data for 10 silica-exposed cohorts were retrieved from the original investigators. Occupation- and time-specific exposure estimates were either adopted/adapted or developed for each cohort, and converted to milligram per cubic meter (mg/m(3)) respirable crystalline silica. Quantitative exposure assignments were typically based on a large number (thousands) of raw measurements, or otherwise consisted of exposure estimates by experts (for two cohorts). Median exposure level of the cohorts ranged between 0.04 and 0.59 mg/m(3) respirable crystalline silica. Exposure estimates were partially validated via their successful prediction of silicosis in these cohorts. Existing data were successfully adopted or modified to create comparable quantitative exposure estimates over time for 10 silica-exposed cohorts, permitting a pooled exposure-response analysis. The difficulties encountered in deriving common exposure estimates across cohorts are discussed. Copyright 2002 Wiley-Liss, Inc.

  10. Inter- and intra-observer agreement of BI-RADS-based subjective visual estimation of amount of fibroglandular breast tissue with magnetic resonance imaging: comparison to automated quantitative assessment.

    PubMed

    Wengert, G J; Helbich, T H; Woitek, R; Kapetas, P; Clauser, P; Baltzer, P A; Vogl, W-D; Weber, M; Meyer-Baese, A; Pinker, Katja

    2016-11-01

    To evaluate the inter-/intra-observer agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent un-enhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intra-observer agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (k). Inexperienced readers achieved moderate inter-/intra-observer agreement and experienced readers a substantial inter- and perfect intra-observer agreement for subjective visual estimation of FGT. Practice and experience reduced observer-dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (k = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/inter-observer agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation. • Subjective FGT estimation with MRI shows moderate intra-/inter-observer agreement in inexperienced readers. • Inter-observer agreement can be improved by practice and experience. • Automated observer-independent quantitative measurements can provide reliable and standardized assessment of FGT with MRI.
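    Inter-observer agreement of the kind reported can be computed with Cohen's kappa; a minimal sketch using scikit-learn, with invented BI-RADS FGT categories (a-d coded 1-4) for two readers.

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Invented BI-RADS FGT categories assigned by two readers to ten women
    reader_1 = [1, 2, 2, 3, 4, 3, 2, 1, 3, 4]
    reader_2 = [1, 2, 3, 3, 4, 2, 2, 1, 3, 3]

    # Unweighted kappa; weights="quadratic" is often preferred for ordered
    # categories such as these.
    kappa = cohen_kappa_score(reader_1, reader_2)
    print(f"inter-observer kappa = {kappa:.3f}")
    ```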

  11. Development of combination tapered fiber-optic biosensor dip probe for quantitative estimation of interleukin-6 in serum samples

    NASA Astrophysics Data System (ADS)

    Wang, Chun Wei; Manne, Upender; Reddy, Vishnu B.; Oelschlager, Denise K.; Katkoori, Venkat R.; Grizzle, William E.; Kapoor, Rakesh

    2010-11-01

    A combination tapered fiber-optic biosensor (CTFOB) dip probe for rapid and cost-effective quantification of proteins in serum samples has been developed. The device relies on diode laser excitation and a charge-coupled device spectrometer, and is based on a sandwich immunoassay technique. As a proof of principle, the technique was applied to the quantitative estimation of interleukin-6 (IL-6). The probes detected IL-6 at picomolar levels in serum samples obtained from a patient with lupus, an autoimmune disease, and a patient with lymphoma. The estimated concentration of IL-6 in the lupus sample was 5.9 +/- 0.6 pM, and in the lymphoma sample it was below the detection limit. These concentrations were verified by a procedure involving bead-based xMAP technology, which showed a similar trend. The specificity of the CTFOB dip probes was assessed by receiver operating characteristic analysis. This analysis suggests that the dip probes can detect 5-pM or higher concentrations of IL-6 in these samples with specificities of 100%. The results provide information for guiding further studies in the utilization of these probes to quantify other analytes in body fluids with high specificity and sensitivity.

  12. Quantitative phase analysis and microstructure characterization of magnetite nanocrystals obtained by microwave assisted non-hydrolytic sol–gel synthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sciancalepore, Corrado, E-mail: corrado.sciancalepore@unimore.it; Bondioli, Federica; INSTM Consortium, Via G. Giusti 9, 51121 Firenze

    2015-02-15

    An innovative preparation procedure, based on microwave-assisted non-hydrolytic sol–gel synthesis, for obtaining spherical magnetite nanoparticles is reported, together with a detailed quantitative phase analysis and microstructure characterization of the synthetic products. Nanoparticle growth was analyzed as a function of synthesis time and described in terms of crystallization degree, employing the Rietveld method on the magnetic nanostructured system to determine the amorphous content, with hematite as internal standard. Product crystallinity increases with longer microwave thermal treatment and reaches very high percentages for synthesis times longer than 1 h. The microstructural evolution of the nanocrystals was followed by integral breadth methods to obtain information on the crystallite size-strain distribution. The results of diffraction line profile analysis were compared with the nanoparticle grain distribution estimated by dimensional analysis of transmission electron microscopy (TEM) images. A variation both in the average grain size and in the distribution of the coherently diffracting domains is evidenced, suggesting a relationship between the two quantities. The traditional integral breadth methods proved valid for a rapid assessment of diffraction line broadening effects in such nanostructured systems, and the basic assumptions for the correct use of these methods are discussed as well. - Highlights: • Fe3O4 nanocrystals were obtained by MW-assisted non-hydrolytic sol–gel synthesis. • Quantitative phase analysis revealed that crystallinity up to 95% was reached. • The strategy of Rietveld refinements is discussed in detail. • Dimensional analysis showed nanoparticles ranging from 4 to 8 nm. • Results of integral breadth methods were compared with microscopic analysis.

  13. Functional stability of cerebral circulatory system

    NASA Technical Reports Server (NTRS)

    Moskalenko, Y. Y.

    1980-01-01

    The functional stability of the cerebral circulation system appears to be based both on active mechanisms and on those stemming from the specifics of the biophysical structure of the system under study. The latter offers some relevant criteria for its quantitative estimation. The data obtained suggest that an essential part of the mechanism for active responses of cerebral vessels, which maintains the functional stability of this portion of the vascular system, consists of a neurogenic component involving central nervous structures localized, for instance, in the medulla oblongata.

  14. Sources and concentrations of aldehydes and ketones in indoor environments in the UK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crump, D.R.; Gardiner, D.

    1989-01-01

    Individual aldehydes and ketones can be separated, identified and quantitatively estimated by trapping them as their 2,4-dinitrophenylhydrazine (DNPH) derivatives and analyzing these by HPLC. Appropriate methods and detection limits are reported. Many sources of formaldehyde have been identified by this means, and some are found to emit other aldehydes and ketones as well. The application of this method to determine the concentrations of these compounds in the atmospheres of buildings is described, and the results are compared with those obtained using chromotropic acid or MBTH.

  15. Fully Automated Quantitative Estimation of Volumetric Breast Density from Digital Breast Tomosynthesis Images: Preliminary Results and Comparison with Digital Mammography and MR Imaging

    PubMed Central

    Pertuz, Said; McDonald, Elizabeth S.; Weinstein, Susan P.; Conant, Emily F.

    2016-01-01

    Purpose: To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Materials and Methods: Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board–approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration–cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Results: Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging–based estimates with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VBD estimates from DBT and MR imaging were not significant (P = .26). Conclusion: Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment. © RSNA, 2015 Online supplemental material is available for this article. PMID:26491909

  16. Estimation of the Scatterer Distribution of the Cirrhotic Liver using Ultrasonic Image

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Tadashi; Hachiya, Hiroyuki

    1998-05-01

    In B-mode images of the liver obtained by an ultrasonic imaging system, the speckle pattern changes with the progression of diseases such as liver cirrhosis. In this paper we present the statistical characteristics of the echo envelope of the liver, and a technique to extract information on the scatterer distribution from normal and cirrhotic liver images using constant false alarm rate (CFAR) processing. We analyze the relationship between the extracted scatterer distribution and the stage of liver cirrhosis. The ratio of the area in which the amplitude of the processed signal exceeds the threshold to the entire processed image area is related quantitatively to the stage of liver cirrhosis. It is found that the proposed technique is valid for the quantitative diagnosis of liver cirrhosis.
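    A minimal 1-D cell-averaging CFAR sketch of the general kind referred to; the window sizes and threshold factor are illustrative, and the paper's exact CFAR variant is not specified here.

    ```python
    import numpy as np

    def ca_cfar(signal, n_train=16, n_guard=4, scale=3.0):
        """1-D cell-averaging CFAR: flag a cell when its amplitude exceeds
        `scale` times the mean of the surrounding training cells (guard
        cells around the cell under test are excluded)."""
        signal = np.asarray(signal, float)
        flags = np.zeros(len(signal), dtype=bool)
        half = n_train // 2 + n_guard
        for i in range(half, len(signal) - half):
            train = np.r_[signal[i - half:i - n_guard],
                          signal[i + n_guard + 1:i + half + 1]]
            flags[i] = signal[i] > scale * train.mean()
        return flags

    # The fraction of the echo envelope exceeding the adaptive threshold is
    # the kind of area ratio the abstract relates to the stage of cirrhosis.
    envelope = np.abs(np.random.randn(2048))
    area_ratio = ca_cfar(envelope).mean()
    ```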

  17. Heritability and quantitative genetic divergence of serotiny, a fire-persistence plant trait

    PubMed Central

    Hernández-Serrano, Ana; Verdú, Miguel; Santos-del-Blanco, Luís; Climent, José; González-Martínez, Santiago C.; Pausas, Juli G.

    2014-01-01

    Background and Aims Although it is well known that fire acts as a selective pressure shaping plant phenotypes, there are no quantitative estimates of the heritability of any trait related to plant persistence under recurrent fires, such as serotiny. In this study, the heritability of serotiny in Pinus halepensis is calculated, and an evaluation is made as to whether fire has left a selection signature on the level of serotiny among populations by comparing the genetic divergence of serotiny with the expected divergence of neutral molecular markers (QST–FST comparison). Methods A common garden of P. halepensis was used, located in inland Spain and composed of 145 open-pollinated families from 29 provenances covering the entire natural range of P. halepensis in the Iberian Peninsula and Balearic Islands. Narrow-sense heritability (h2) and quantitative genetic differentiation among populations for serotiny (QST) were estimated by means of an ‘animal model’ fitted by Bayesian inference. In order to determine whether genetic differentiation for serotiny is the result of differential natural selection, QST estimates for serotiny were compared with FST estimates obtained from allozyme data. Finally, a test was made of whether levels of serotiny in the different provenances were related to different fire regimes, using summer rainfall as a proxy for fire regime in each provenance. Key Results Serotiny showed a significant narrow-sense heritability (h2) of 0·20 (credible interval 0·09–0·40). Quantitative genetic differentiation among provenances for serotiny (QST = 0·44) was significantly higher than expected under a neutral process (FST = 0·12), suggesting adaptive differentiation. A significant negative relationship was found between the serotiny level of trees in the common garden and summer rainfall of their provenance sites. Conclusions Serotiny is a heritable trait in P. halepensis, and selection acts on it, giving rise to contrasting serotiny levels among populations depending on the fire regime, and supporting the role of fire in generating genetic divergence for adaptive traits. PMID:25008363

  18. Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.

    PubMed

    Obuchowski, Nancy A; Bullen, Jennifer

    2017-01-01

    Introduction: Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods: A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results: Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion: Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
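    A sketch of the confidence-interval construction described, under the no-bias assumption for a single measurement and the usual variance doubling for a test-retest difference; the function names and numbers are illustrative, not the paper's code.

    ```python
    import numpy as np

    def qib_confidence_interval(y, wsd, bias=0.0, z=1.96):
        """95% CI for a patient's true biomarker value, given a measurement y,
        the within-subject SD (precision) and, if known, a fixed bias."""
        center = y - bias
        return center - z * wsd, center + z * wsd

    def change_confidence_interval(y1, y2, wsd, z=1.96):
        """95% CI for true change over time: the variance of the difference of
        two independent measurements is 2 * wsd**2 (assumes linearity and a
        regression slope close to 1)."""
        diff = y2 - y1
        half = z * np.sqrt(2.0) * wsd
        return diff - half, diff + half

    print(qib_confidence_interval(y=34.0, wsd=2.5))        # e.g. volume in mL
    print(change_confidence_interval(34.0, 29.0, wsd=2.5))
    ```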

  19. Fitness to work of astronauts in conditions of action of the extreme emotional factors

    NASA Astrophysics Data System (ADS)

    Prisniakova, L. M.

    2004-01-01

    A theoretical model for the quantitative determination of the influence of the level of emotional exertion on the success of human activity is presented. The learning curves of memorized words in groups with different levels of emotional exertion are analyzed. The magnitudes obtained for the time constant T, which depend on the type of emotional exertion, provide a quantitative measure of that exertion. The time constants could also be used to predict an astronaut's fitness to work under extreme conditions. An inversion of the sign of the influence on the efficiency of human activity is detected. The paper offers a mathematical model of the relation between successful activity and motivation or emotional exertion (the Yerkes-Dodson law). The proposed models can serve as a theoretical basis for the quantitative estimation of astronaut performance under emotional factors at the selection phase.

  20. Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images

    PubMed Central

    Frey, Eric C.; Humm, John L.; Ljungberg, Michael

    2012-01-01

    The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429

  1. Fitness to work of astronauts in conditions of action of the extreme emotional factors.

    PubMed

    Prisniakova, L M

    2004-01-01

    A theoretical model for the quantitative determination of the influence of the level of emotional exertion on the success of human activity is presented. The learning curves of memorized words in groups with different levels of emotional exertion are analyzed. The magnitudes obtained for the time constant T, which depend on the type of emotional exertion, provide a quantitative measure of that exertion. The time constants could also be used to predict an astronaut's fitness to work under extreme conditions. An inversion of the sign of the influence on the efficiency of human activity is detected. The paper offers a mathematical model of the relation between successful activity and motivation or emotional exertion (the Yerkes-Dodson law). The proposed models can serve as a theoretical basis for the quantitative estimation of astronaut performance under emotional factors at the selection phase. Published by Elsevier Ltd on behalf of COSPAR.

  2. Robust human machine interface based on head movements applied to assistive robotics.

    PubMed

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for the assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed to objectively evaluate its performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.

  3. Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics

    PubMed Central

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for the assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed to objectively evaluate its performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877

  4. Kinetic characterisation of primer mismatches in allele-specific PCR: a quantitative assessment.

    PubMed

    Waterfall, Christy M; Eisenthal, Robert; Cobb, Benjamin D

    2002-12-20

    A novel method of estimating the kinetic parameters of Taq DNA polymerase during rapid cycle PCR is presented. A model was constructed using a simplified sigmoid function to represent substrate accumulation during PCR in combination with the general equation describing high substrate inhibition for Michaelis-Menten enzymes. The PCR progress curve was viewed as a series of independent reactions where initial rates were accurately measured for each cycle. Kinetic parameters were obtained for allele-specific PCR (AS-PCR) amplification to examine the effect of mismatches on amplification. A high degree of correlation was obtained providing evidence of substrate inhibition as a major cause of the plateau phase that occurs in the later cycles of PCR.
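    For reference, the standard Michaelis-Menten rate law with high-substrate inhibition, the general form of the equation referred to (Ki is the substrate-inhibition constant):

    ```latex
    v = \frac{V_{\max}\,[S]}{K_m + [S] + [S]^2 / K_i}
    ```

    At low [S] this reduces to ordinary Michaelis-Menten kinetics, while at high [S] the [S]^2/K_i term drives the rate down, consistent with the plateau behavior described above.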

  5. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    NASA Astrophysics Data System (ADS)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.
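    A sketch of the voxel-S-kernel convolution and DVH step described, using an FFT-based 3-D convolution; the activity map, kernel and organ mask are random stand-ins, not data from the study.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    # Stand-ins: a reconstructed QSPECT activity map (Bq per voxel) and a
    # voxel S kernel (dose rate per voxel per unit source activity).
    activity = np.random.rand(64, 64, 64)
    s_kernel = np.random.rand(9, 9, 9)
    s_kernel /= s_kernel.sum()

    # Convolving activity with the voxel S kernel yields the dose-rate map.
    dose_rate = fftconvolve(activity, s_kernel, mode="same")

    # Cumulative DVH over a stand-in kidney mask: fraction of the organ
    # receiving at least each dose-rate level.
    mask = np.zeros(activity.shape, dtype=bool)
    mask[20:40, 20:40, 20:40] = True
    levels = np.linspace(0.0, dose_rate[mask].max(), 100)
    dvh = [(dose_rate[mask] >= lv).mean() for lv in levels]
    ```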

  6. Dynamics and estimates of growth and loss rates of bacterioplankton in a temperate freshwater system.

    PubMed

    Jugnia, Louis-B; Sime-Ngando, Télesphore; Gilbert, Daniel

    2006-10-01

    The growth rate and losses of bacterioplankton in the epilimnion of an oligo-mesotrophic reservoir were simultaneously estimated using three different methods for each process. Bacterial production was determined by means of the tritiated thymidine incorporation method, the dialysis bag method and the dilution method, while bacterial mortality was assessed with the dilution method, the disappearance of thymidine-labeled natural cells, and the ingestion of fluorescent bacterial tracers by heterotrophic flagellates. The different methods used to estimate bacterial growth rates yielded similar results. On the other hand, the mortality rates obtained with the dilution method were significantly lower than those obtained with thymidine-labeled natural cells. The bacterial ingestion rate by flagellates accounted on average for 39% of total bacterial mortality estimated by the dilution method, but this value fell to 5% when total mortality was measured by the thymidine-labeling method. Bacterial abundance and production varied in opposite phase to flagellate abundance and the various bacterial mortality rates. All this points to the critical importance of methodological aspects in the elaboration of quantitative models of matter and energy flows over time through microbial trophic networks in aquatic systems, and highlights the role of bacterioplankton as a source of carbon for higher trophic levels in the studied system.

  7. Development of estimation system of knee extension strength using image features in ultrasound images of rectus femoris

    NASA Astrophysics Data System (ADS)

    Murakami, Hiroki; Watanabe, Tsuneo; Fukuoka, Daisuke; Terabayashi, Nobuo; Hara, Takeshi; Muramatsu, Chisako; Fujita, Hiroshi

    2016-04-01

    The word "Locomotive syndrome" has been proposed to describe the state of requiring care by musculoskeletal disorders and its high-risk condition. Reduction of the knee extension strength is cited as one of the risk factors, and the accurate measurement of the strength is needed for the evaluation. The measurement of knee extension strength using a dynamometer is one of the most direct and quantitative methods. This study aims to develop a system for measuring the knee extension strength using the ultrasound images of the rectus femoris muscles obtained with non-invasive ultrasonic diagnostic equipment. First, we extract the muscle area from the ultrasound images and determine the image features, such as the thickness of the muscle. We combine these features and physical features, such as the patient's height, and build a regression model of the knee extension strength from training data. We have developed a system for estimating the knee extension strength by applying the regression model to the features obtained from test data. Using the test data of 168 cases, correlation coefficient value between the measured values and estimated values was 0.82. This result suggests that this system can estimate knee extension strength with high accuracy.

  8. 3D-quantitative structure-activity relationship studies on benzothiadiazepine hydroxamates as inhibitors of tumor necrosis factor-alpha converting enzyme.

    PubMed

    Murumkar, Prashant R; Giridhar, Rajani; Yadav, Mange Ram

    2008-04-01

    A set of 29 benzothiadiazepine hydroxamates having selective tumor necrosis factor-alpha converting enzyme inhibitory activity were used to compare the quality and predictive power of 3D-quantitative structure-activity relationship, comparative molecular field analysis, and comparative molecular similarity indices models for the atom-based, centroid/atom-based, database, and docked conformer-based alignments. Removal of two outliers from the initial training set of molecules improved the predictivity of the models. Among the 3D-quantitative structure-activity relationship models developed using the above four alignments, the database alignment provided the optimal predictive comparative molecular field analysis model for the training set with cross-validated r(2) (q(2)) = 0.510, non-cross-validated r(2) = 0.972, standard error of estimates (s) = 0.098, and F = 215.44, and the optimal comparative molecular similarity indices model with cross-validated r(2) (q(2)) = 0.556, non-cross-validated r(2) = 0.946, standard error of estimates (s) = 0.163, and F = 99.785. These models also showed the best test set prediction for six compounds, with predictive r(2) values of 0.460 and 0.535, respectively. The contour maps obtained from the 3D-quantitative structure-activity relationship studies were appraised for activity trends for the molecules analyzed. The comparative molecular similarity indices models exhibited good external predictivity as compared with that of the comparative molecular field analysis models. The data generated from the present study helped us to further design and report some novel and potent tumor necrosis factor-alpha converting enzyme inhibitors.

  9. Development and validation of open-source software for DNA mixture interpretation based on a quantitative continuous model

    PubMed Central

    Manabe, Sho; Morimoto, Chie; Hamano, Yuya; Fujimoto, Shuntaro

    2017-01-01

    In criminal investigations, forensic scientists need to evaluate DNA mixtures. The estimation of the number of contributors and evaluation of the contribution of a person of interest (POI) from these samples are challenging. In this study, we developed new open-source software, “Kongoh”, for interpreting DNA mixtures based on a quantitative continuous model. The model uses quantitative information of peak heights in the DNA profile and considers the effect of artifacts and allelic drop-out. By using this software, the likelihoods of 1–4 persons’ contributions are calculated, and the optimal number of contributors is automatically determined; this differs from other open-source software. Therefore, we can eliminate the need to manually determine the number of contributors before the analysis. Kongoh also considers allele- or locus-specific effects of biological parameters based on the experimental data. We then validated Kongoh by calculating the likelihood ratio (LR) of a POI’s contribution in true contributors and non-contributors by using 2–4 person mixtures analyzed through a 15 short tandem repeat typing system. Most LR values obtained from Kongoh during true-contributor testing strongly supported the POI’s contribution even for small amounts or degraded DNA samples. Kongoh correctly rejected a false hypothesis in the non-contributor testing, generated reproducible LR values, and demonstrated higher accuracy of the estimated number of contributors than another software package based on the quantitative continuous model. Therefore, Kongoh is useful in accurately interpreting DNA evidence such as mixtures and small amounts of degraded DNA. PMID:29149210

  10. Development and validation of open-source software for DNA mixture interpretation based on a quantitative continuous model.

    PubMed

    Manabe, Sho; Morimoto, Chie; Hamano, Yuya; Fujimoto, Shuntaro; Tamaki, Keiji

    2017-01-01

    In criminal investigations, forensic scientists need to evaluate DNA mixtures. The estimation of the number of contributors and evaluation of the contribution of a person of interest (POI) from these samples are challenging. In this study, we developed new open-source software, "Kongoh", for interpreting DNA mixtures based on a quantitative continuous model. The model uses quantitative information of peak heights in the DNA profile and considers the effect of artifacts and allelic drop-out. By using this software, the likelihoods of 1-4 persons' contributions are calculated, and the optimal number of contributors is automatically determined; this differs from other open-source software. Therefore, we can eliminate the need to manually determine the number of contributors before the analysis. Kongoh also considers allele- or locus-specific effects of biological parameters based on the experimental data. We then validated Kongoh by calculating the likelihood ratio (LR) of a POI's contribution in true contributors and non-contributors by using 2-4 person mixtures analyzed through a 15 short tandem repeat typing system. Most LR values obtained from Kongoh during true-contributor testing strongly supported the POI's contribution even for small amounts or degraded DNA samples. Kongoh correctly rejected a false hypothesis in the non-contributor testing, generated reproducible LR values, and demonstrated higher accuracy of the estimated number of contributors than another software package based on the quantitative continuous model. Therefore, Kongoh is useful in accurately interpreting DNA evidence such as mixtures and small amounts of degraded DNA.

  11. Overestimation of the Legionella spp. load in environmental samples by quantitative real-time PCR: pretreatment with propidium monoazide as a tool for the assessment of an association between Legionella concentration and sanitary risk.

    PubMed

    Ditommaso, Savina; Ricciardi, Elisa; Giacomuzzi, Monica; Arauco Rivera, Susan R; Ceccarelli, Adriano; Zotti, Carla M

    2014-12-01

    Quantitative polymerase chain reaction (qPCR) offers rapid, sensitive, and specific detection of Legionella in environmental water samples. In this study, qPCR and qPCR combined with propidium monoazide (PMA-qPCR) were both applied to hot-water system samples and compared to traditional culture techniques. In addition, we evaluated the ability of PMA-qPCR to monitor the efficacy of different disinfection strategies. Comparison between the quantification obtained by culture and by qPCR or PMA-qPCR on environmental water samples confirms that the concentration of Legionella estimated in GU/L is generally higher than that estimated in CFU/L. Our results on 57 hot-water-system samples collected from 3 different sites show that: i) qPCR results were on average 178-fold higher than the culture results (Δ log10=2.25), ii) PMA-qPCR results were on average 27-fold higher than the culture results (Δ log10=1.43), iii) the propidium monoazide-induced signal reduction in qPCR was nearly 10-fold (Δ log10=0.95), and iv) the different degrees of correlation between the 3 methods might be explained by different matrix properties, but also by different disinfection methods affecting the cultivability of Legionella. In our study, we calculated the logarithmic differences between the results obtained by PMA-qPCR and those obtained by culture, and we suggest an algorithm for the interpretation of PMA-qPCR results for the routine monitoring of healthcare water systems using a commercial qPCR system (iQ-check real-time PCR kit; Bio-Rad, Marnes-la-Coquette, France). Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Quantitative Estimation of Seismic Velocity Changes Using Time-Lapse Seismic Data and Elastic-Wave Sensitivity Approach

    NASA Astrophysics Data System (ADS)

    Denli, H.; Huang, L.

    2008-12-01

    Quantitative monitoring of reservoir property changes is essential for safe geologic carbon sequestration. Time-lapse seismic surveys have the potential to effectively monitor fluid migration in the reservoir that causes geophysical property changes such as density and P- and S-wave velocities. We introduce a novel method for quantitative estimation of seismic velocity changes using time-lapse seismic data. The method employs elastic sensitivity wavefields, which are the derivatives of the elastic wavefield with respect to the density and the P- and S-wave velocities of a target region. We derive the elastic sensitivity equations from analytical differentiation of the elastic-wave equations with respect to the seismic-wave velocities. The sensitivity equations are coupled with the wave equations in such a way that elastic waves arriving in the target reservoir act as a secondary source for the sensitivity fields. We use a staggered-grid finite-difference scheme with perfectly-matched-layer absorbing boundary conditions to simultaneously solve the elastic-wave equations and the elastic sensitivity equations. From the elastic-wave sensitivities, a linear relationship between relative seismic velocity changes in the reservoir and time-lapse seismic data at the receiver locations can be derived, leading to an over-determined system of equations. We solve this system of equations using a least-squares method for each receiver to obtain the P- and S-wave velocity changes. We validate the method using both surface and VSP synthetic time-lapse seismic data for a multi-layered model and the elastic Marmousi model. We then apply it to the time-lapse field VSP data acquired at the Aneth oil field in Utah. A total of 10.5K tons of CO2 was injected into the oil reservoir between the two VSP surveys for enhanced oil recovery. The synthetic and field data studies show that our new method can quantitatively estimate changes in seismic velocities within a reservoir due to CO2 injection/migration.
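    A sketch of the per-receiver least-squares step, solving the over-determined linear system that relates time-lapse waveform differences to relative velocity changes through the sensitivity traces; the arrays are random stand-ins for simulated sensitivities and data.

    ```python
    import numpy as np

    # Stand-ins for one receiver: n_t time samples of the time-lapse data
    # residual d, and sensitivity traces G (n_t x 2) with respect to the
    # relative Vp and Vs changes in the target reservoir.
    n_t = 500
    G = np.random.randn(n_t, 2)
    true_changes = np.array([0.03, -0.01])            # +3% Vp, -1% Vs
    d = G @ true_changes + 0.001 * np.random.randn(n_t)

    # Least-squares solution of the over-determined system G m = d
    m, residuals, rank, sv = np.linalg.lstsq(G, d, rcond=None)
    dvp_rel, dvs_rel = m
    print(f"dVp/Vp = {dvp_rel:+.4f}, dVs/Vs = {dvs_rel:+.4f}")
    ```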

  13. SU-G-IeP1-06: Estimating Relative Tissue Density From Quantitative MR Images: A Novel Perspective for MRI-Only Heterogeneity Corrected Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soliman, A; Hashemi, M; Safigholi, H

    Purpose: To explore the feasibility of extracting relative density from quantitative MRI measurements, and to estimate the correlation between the extracted measures and CT Hounsfield units. Methods: MRI can separate water and fat signals, producing two separate images for each component. By performing appropriate corrections on the separated images, quantitative measurements of water and fat mass density can be estimated. This work tests this hypothesis at 1.5 T. Peanut oil was used as the fat representative and agar as the water representative. Gadolinium chloride III and sodium chloride were added to the agar solution to adjust the relaxation times and the medium conductivity, respectively. Peanut oil was added to the agar solution at different percentages: 0%, 3%, 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90% and 100%. The phantom was scanned on a 1.5 T GE Optima 450W with the body coil using a multi-gradient-echo sequence. Water/fat separation was performed while correcting for main field (B0) inhomogeneity and T2* relaxation; B1+ inhomogeneities were ignored. The phantom was subsequently scanned on a Philips Brilliance CT Big Bore. The MR-corrected fat signals from all vials were normalized to the 100% fat signal. CT Hounsfield values were then compared to those obtained from the normalized MR-corrected fat values, as well as to the prepared phantom fractions for validation. Results: Good agreement was found between CT HU and the MR-extracted fat values (R² = 0.98). CT HU also showed excellent agreement with the prepared fat fractions (R² = 0.99). Vials with 70%, 80%, and 90% fat showed inhomogeneous distributions; however, their results were included for completeness. Conclusion: Quantitative MRI water/fat imaging can potentially be used to extract relative tissue density. Further in-vivo validation is required.

  14. Population growth rates of reef sharks with and without fishing on the great barrier reef: robust estimation with multiple models.

    PubMed

    Hisano, Mizue; Connolly, Sean R; Robbins, William D

    2011-01-01

    Overfishing of sharks is a global concern, with increasing numbers of species threatened by overfishing. For many sharks, both catch rates and underwater visual surveys have been criticized as indices of abundance. In this context, estimation of population trends using individual demographic rates provides an important alternative means of assessing population status. However, such estimates involve uncertainties that must be appropriately characterized to credibly and effectively inform conservation efforts and management. Incorporating uncertainties into population assessment is especially important when key demographic rates are obtained via indirect methods, as is often the case for mortality rates of marine organisms subject to fishing. Here, focusing on two reef shark species on the Great Barrier Reef, Australia, we estimated natural and total mortality rates using several indirect methods, and determined the population growth rates resulting from each. We used bootstrapping to quantify the uncertainty associated with each estimate, and to evaluate the extent of agreement between estimates. Multiple models produced highly concordant natural and total mortality rates, and associated population growth rates, once the uncertainties associated with the individual estimates were taken into account. Consensus estimates of natural and total population growth across multiple models support the hypothesis that these species are declining rapidly due to fishing, in contrast to conclusions previously drawn from catch rate trends. Moreover, quantitative projections of abundance differences on fished versus unfished reefs, based on the population growth rate estimates, are comparable to those found in previous studies using underwater visual surveys. These findings appear to justify management actions to substantially reduce the fishing mortality of reef sharks. They also highlight the potential utility of rigorously characterizing uncertainty, and applying multiple assessment methods, to obtain robust estimates of population trends in species threatened by overfishing.

  15. Population Growth Rates of Reef Sharks with and without Fishing on the Great Barrier Reef: Robust Estimation with Multiple Models

    PubMed Central

    Hisano, Mizue; Connolly, Sean R.; Robbins, William D.

    2011-01-01

    Overfishing of sharks is a global concern, with increasing numbers of species threatened by overfishing. For many sharks, both catch rates and underwater visual surveys have been criticized as indices of abundance. In this context, estimation of population trends using individual demographic rates provides an important alternative means of assessing population status. However, such estimates involve uncertainties that must be appropriately characterized to credibly and effectively inform conservation efforts and management. Incorporating uncertainties into population assessment is especially important when key demographic rates are obtained via indirect methods, as is often the case for mortality rates of marine organisms subject to fishing. Here, focusing on two reef shark species on the Great Barrier Reef, Australia, we estimated natural and total mortality rates using several indirect methods, and determined the population growth rates resulting from each. We used bootstrapping to quantify the uncertainty associated with each estimate, and to evaluate the extent of agreement between estimates. Multiple models produced highly concordant natural and total mortality rates, and associated population growth rates, once the uncertainties associated with the individual estimates were taken into account. Consensus estimates of natural and total population growth across multiple models support the hypothesis that these species are declining rapidly due to fishing, in contrast to conclusions previously drawn from catch rate trends. Moreover, quantitative projections of abundance differences on fished versus unfished reefs, based on the population growth rate estimates, are comparable to those found in previous studies using underwater visual surveys. These findings appear to justify management actions to substantially reduce the fishing mortality of reef sharks. They also highlight the potential utility of rigorously characterizing uncertainty, and applying multiple assessment methods, to obtain robust estimates of population trends in species threatened by overfishing. PMID:21966402

  16. TOXNET: Toxicology Data Network

    MedlinePlus

    [TOXNET record navigation: Quantitative Estimate of Carcinogenic Risk from Oral Exposure; Quantitative Estimate of Carcinogenic Risk from Inhalation Exposure; Supporting Data for Carcinogenicity; Statements of Confidence.]

  17. Analysis of Ingredient Lists to Quantitatively Characterize ...

    EPA Pesticide Factsheets

    The EPA’s ExpoCast program is developing high throughput (HT) approaches to generate the needed exposure estimates to compare against HT bioactivity data generated from the US inter-agency Tox21 and the US EPA ToxCast programs. Assessing such exposures for the thousands of chemicals in consumer products requires data on product composition. This is a challenge since quantitative product composition data are rarely available. We developed methods to predict the weight fractions of chemicals in consumer products from weight fraction-ordered chemical ingredient lists, and curated a library of such lists from online manufacturer and retailer sites. The probabilistic model predicts weight fraction as a function of the total number of reported ingredients, the rank of the ingredient in the list, the minimum weight fraction for which ingredients were reported, and the total weight fraction of unreported ingredients. Weight fractions predicted by the model compared very well to available quantitative weight fraction data obtained from Material Safety Data Sheets for products with 3-8 ingredients. Lists were located from the online sources for 5148 products containing 8422 unique ingredient names. A total of 1100 of these names could be located in EPA’s HT chemical database (DSSTox), and linked to 864 unique Chemical Abstract Service Registration Numbers (392 of which were in the Tox21 chemical library). Weight fractions were estimated for these 864 CASRN. Using a
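
    A minimal sketch of how weight fractions might be predicted from a rank-ordered ingredient list, assuming only the constraints named in the abstract: fractions are non-increasing with list rank, each reported fraction meets a minimum reporting threshold, and the unreported remainder is fixed. The Dirichlet sampling scheme and all parameter values are illustrative assumptions, not the EPA model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_weight_fractions(n_ingredients, w_min=0.01, w_unreported=0.05,
                             n_samples=50_000):
    """Monte-Carlo sketch: sample weight-fraction vectors consistent
    with a rank-ordered ingredient list and average them."""
    total = 1.0 - w_unreported
    # Dirichlet draws give random compositions summing to 1; scaling by
    # `total` and sorting descending imposes the list's rank ordering.
    draws = rng.dirichlet(np.ones(n_ingredients), size=n_samples) * total
    draws = np.sort(draws, axis=1)[:, ::-1]
    # Keep only draws where the least-abundant reported ingredient
    # still meets the reporting threshold.
    ok = draws[:, -1] >= w_min
    return draws[ok].mean(axis=0)

print(predict_weight_fractions(5))  # expected weight fraction by list rank
```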

  18. A software package to improve image quality and isolation of objects of interest for quantitative stereology studies of rat hepatocarcinogenesis.

    PubMed

    Xu, Yihua; Pitot, Henry C

    2006-03-01

    In the studies of quantitative stereology of rat hepatocarcinogenesis, we have used image analysis technology (automatic particle analysis) to obtain data such as liver tissue area, size and location of altered hepatic focal lesions (AHF), and nuclei counts. These data are then used for three-dimensional estimation of AHF occurrence and nuclear labeling index analysis. These are important parameters for quantitative studies of carcinogenesis, for screening and classifying carcinogens, and for risk estimation. To take such measurements, structures or cells of interest should be separated from the other components based on the difference of color and density. Common background problems seen on the captured sample image such as uneven light illumination or color shading can cause severe problems in the measurement. Two application programs (BK_Correction and Pixel_Separator) have been developed to solve these problems. With BK_Correction, common background problems such as incorrect color temperature setting, color shading, and uneven light illumination background, can be corrected. With Pixel_Separator different types of objects can be separated from each other in relation to their color, such as seen with different colors in immunohistochemically stained slides. The resultant images of such objects separated from other components are then ready for particle analysis. Objects that have the same darkness but different colors can be accurately differentiated in a grayscale image analysis system after application of these programs.

  19. Evaluating variability and uncertainty separately in microbial quantitative risk assessment using two R packages.

    PubMed

    Pouillot, Régis; Delignette-Muller, Marie Laure

    2010-09-01

    Quantitative risk assessment has emerged as a valuable tool to enhance the scientific basis of regulatory decisions in the food safety domain. This article introduces the use of two new computing resources (R packages) specifically developed to help risk assessors in their projects. The first package, "fitdistrplus", gathers tools for choosing and fitting a parametric univariate distribution to a given dataset. The data may be continuous or discrete. Continuous data may be right-, left- or interval-censored as is frequently obtained with analytical methods, with the possibility of various censoring thresholds within the dataset. Bootstrap procedures then allow the assessor to evaluate and model the uncertainty around the parameters and to transfer this information into a quantitative risk assessment model. The second package, "mc2d", helps to build and study two dimensional (or second-order) Monte-Carlo simulations in which the estimation of variability and uncertainty in the risk estimates is separated. This package easily allows the transfer of separated variability and uncertainty along a chain of conditional mathematical and probabilistic models. The usefulness of these packages is illustrated through a risk assessment of hemolytic and uremic syndrome in children linked to the presence of Escherichia coli O157:H7 in ground beef. These R packages are freely available at the Comprehensive R Archive Network (cran.r-project.org). Copyright 2010 Elsevier B.V. All rights reserved.
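
    The separation of variability and uncertainty in a two-dimensional Monte-Carlo simulation can be sketched in a few lines: an outer loop draws uncertain parameter values, an inner loop draws the variable quantity, and the spread across outer iterations expresses uncertainty about any summary of variability. The sketch below is a Python analogue of what the "mc2d" R package automates; the lognormal exposure model, dose-response curve, and all numbers are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Uncertainty dimension: suppose bootstrap fitting (as with
# "fitdistrplus") gave a sample of plausible (mu, sigma) pairs for a
# lognormal contamination distribution.
n_unc = 500                      # uncertainty iterations (outer loop)
mu_hat, sigma_hat = 1.0, 0.6     # hypothetical point estimates
mu_draws = rng.normal(mu_hat, 0.05, n_unc)        # assumed parameter
sigma_draws = rng.normal(sigma_hat, 0.03, n_unc)  # uncertainty

# Variability dimension: serving-to-serving contamination varies.
n_var = 10_000                   # variability iterations (inner loop)

def risk(dose):
    """Toy exponential dose-response model (assumed, for illustration)."""
    return 1.0 - np.exp(-0.001 * dose)

mean_risk = np.empty(n_unc)
for i in range(n_unc):
    dose = rng.lognormal(mu_draws[i], sigma_draws[i], n_var)
    mean_risk[i] = risk(dose).mean()  # risk averaged over variability

# The spread across outer iterations reflects uncertainty in mean risk.
print(f"mean risk: median {np.median(mean_risk):.2e}, "
      f"95% uncertainty interval "
      f"[{np.percentile(mean_risk, 2.5):.2e}, "
      f"{np.percentile(mean_risk, 97.5):.2e}]")
```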

  20. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    PubMed

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.

  1. Effect of bone chip orientation on quantitative estimates of changes in bone mass using digital subtraction radiography.

    PubMed

    Mol, André; Dunn, Stanley M

    2003-06-01

    To assess the effect of the orientation of arbitrarily shaped bone chips on the correlation between radiographic estimates of bone loss and true mineral loss using digital subtraction radiography. Twenty arbitrarily shaped bone chips (dry weight 1-10 mg) were placed individually on the superior lingual aspect of the interdental alveolar bone of a dry dentate hemi-mandible. After acquiring the first baseline image, each chip was rotated 90 degrees and a second radiograph was captured. Follow-up images were created without the bone chips and after rotating the mandible 0, 1, 2, 4, and 6 degrees around a vertical axis. Aluminum step tablet intensities were used to normalize image intensities for each image pair. Follow-up images were registered and geometrically standardized using projective standardization. Bone chips were dry ashed and analyzed for calcium content using atomic absorption. No significant difference was found between the radiographic estimates of bone loss from the different bone chip orientations (Wilcoxon: P > 0.05). The correlation between the two series of estimates for all rotations was 0.93 (Spearman: P < 0.05). Linear regression analysis indicated that the two series of estimates did not differ appreciably. It is concluded that the spatial orientation of arbitrarily shaped bone chips does not have a significant impact on quantitative estimates of changes in bone mass in digital subtraction radiography. These results were obtained in the presence of irreversible projection errors of up to six degrees and after application of projective standardization for image reconstruction and image registration.

  2. Genomic scan as a tool for assessing the genetic component of phenotypic variance in wild populations.

    PubMed

    Herrera, Carlos M

    2012-01-01

    Methods for estimating quantitative trait heritability in wild populations have been developed in recent years which take advantage of the increased availability of genetic markers to reconstruct pedigrees or estimate relatedness between individuals, but their application to real-world data is not exempt from difficulties. This chapter describes a recent marker-based technique which, by adopting a genomic scan approach and focusing on the relationship between phenotypes and genotypes at the individual level, avoids the problems inherent to marker-based estimators of relatedness. This method allows the quantification of the genetic component of phenotypic variance ("degree of genetic determination" or "heritability in the broad sense") in wild populations and is applicable whenever phenotypic trait values and multilocus data for a large number of genetic markers (e.g., amplified fragment length polymorphisms, AFLPs) are simultaneously available for a sample of individuals from the same population. The method proceeds by first identifying those markers whose variation across individuals is significantly correlated with individual phenotypic differences ("adaptive loci"). The proportion of phenotypic variance in the sample that is statistically accounted for by individual differences in adaptive loci is then estimated by fitting a linear model to the data, with trait value as the dependent variable and scores of adaptive loci as independent ones. The method can be easily extended to accommodate quantitative or qualitative information on biologically relevant features of the environment experienced by each sampled individual, in which case estimates of the environmental and genotype × environment components of phenotypic variance can also be obtained.
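
    A compact sketch of the two-step procedure just described, on simulated stand-in data: screen markers for significant marker-phenotype correlation, then regress the phenotype on the retained "adaptive loci" and read off the variance explained. Effect sizes, sample sizes, and the significance threshold below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)

# Simulated stand-in data: 200 individuals x 300 dominant markers (0/1)
# and a phenotype influenced by ten of them plus environmental noise.
n_ind, n_loci = 200, 300
markers = rng.integers(0, 2, size=(n_ind, n_loci)).astype(float)
causal = rng.choice(n_loci, size=10, replace=False)
phenotype = markers[:, causal].sum(axis=1) + rng.normal(0, 2.0, n_ind)

# Step 1: identify "adaptive loci", i.e. markers whose variation is
# significantly correlated with individual phenotypic differences.
adaptive = [j for j in range(n_loci)
            if pearsonr(markers[:, j], phenotype)[1] < 0.05]

# Step 2: regress the phenotype on the adaptive-locus scores; R^2 is
# the proportion of phenotypic variance statistically accounted for by
# genotype (a broad-sense heritability estimate). The screening step
# inflates R^2 somewhat; cross-validation would temper this in practice.
X = np.column_stack([np.ones(n_ind), markers[:, adaptive]])
beta, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
resid = phenotype - X @ beta
r2 = 1.0 - resid.var() / phenotype.var()
print(f"{len(adaptive)} adaptive loci; variance explained = {r2:.2f}")
```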

  3. Low Reynolds number wind tunnel measurements - The importance of being earnest

    NASA Technical Reports Server (NTRS)

    Mueller, Thomas J.; Batill, Stephen M.; Brendel, Michael; Perry, Mark L.; Bloch, Diane R.

    1986-01-01

    A method for obtaining two-dimensional aerodynamic force coefficients at low Reynolds numbers using a three-component external platform balance is presented. Regardless of method, however, the importance of understanding the possible influence of the test facility and instrumentation on the final results cannot be overstated. There is an uncertainty in the ability of the facility to simulate a two-dimensional flow environment due to the confinement effect of the wind tunnel and the method used to mount the airfoil. Additionally, the ability of the instrumentation to accurately measure forces and pressures has an associated uncertainty. This paper focuses on efforts taken to understand the errors introduced by the techniques and apparatus used at the University of Notre Dame, and the importance of making an earnest estimate of the uncertainty. Although quantitative estimates of facility induced errors are difficult to obtain, the uncertainty in measured results can be handled in a straightforward manner and provide the experimentalist, and others, with a basis to evaluate experimental results.

  4. Epidemiological studies on radiation carcinogenesis in human populations following acute exposure: nuclear explosions and medical radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fabrikant, J.I.

    1982-08-01

    The present review provides an understanding of our current knowledge of the carcinogenic effect of low-dose radiation in man, and surveys the epidemiological studies of human populations exposed to nuclear explosions and medical radiation. Discussion centers on the contributions of quantitative epidemiology to present knowledge, the reliability of the dose-incidence data, and those relevant epidemiological studies that provide the most useful information for risk estimation of cancer induction in man. Reference is made to dose-incidence relationships from laboratory animal experiments where they may obtain, for problems and difficulties in extrapolation from data obtained at high doses to low doses, and from animal data to the human situation. The paper describes the methods of application of such epidemiological data for estimation of excess risk of radiation-induced cancer in exposed human populations, and discusses the strengths and limitations of epidemiology in guiding radiation protection philosophy and public health policy.

  5. An overview of particulate emissions from residential biomass combustion

    NASA Astrophysics Data System (ADS)

    Vicente, E. D.; Alves, C. A.

    2018-01-01

    Residential biomass burning has been pointed out as one of the largest sources of fine particles in the global troposphere with serious impacts on air quality, climate and human health. Quantitative estimations of the contribution of this source to the atmospheric particulate matter levels are hard to obtain, because emission factors vary greatly with wood type, combustion equipment and operating conditions. Updated information should improve not only regional and global biomass burning emission inventories, but also the input for atmospheric models. In this work, an extensive tabulation of particulate matter emission factors obtained worldwide is presented and critically evaluated. Existing quantifications and the suitability of specific organic markers to assign the input of residential biomass combustion to the ambient carbonaceous aerosol are also discussed. Based on these organic markers or other tracers, estimates of the contribution of this sector to observed particulate levels by receptor models for different regions around the world are compiled. Key areas requiring future research are highlighted and briefly discussed.

  6. Epidemiological studies on radiation carcinogenesis in human populations following acute exposure: nuclear explosions and medical radiation.

    PubMed Central

    Fabrikant, J. I.

    1981-01-01

    The present review provides an understanding of our current knowledge of the carcinogenic effect of low-dose radiation in man, and surveys the epidemiological studies of human populations exposed to nuclear explosions and medical radiation. Discussion centers on the contributions of quantitative epidemiology to present knowledge, the reliability of the dose-incidence data, and those relevant epidemiological studies that provide the most useful information for risk estimation of cancer induction in man. Reference is made to dose-incidence relationships from laboratory animal experiments where they may obtain, for problems and difficulties in extrapolation from data obtained at high doses to low doses, and from animal data to the human situation. The paper describes the methods of application of such epidemiological data for estimation of excess risk of radiation-induced cancer in exposed human populations and discusses the strengths and limitations of epidemiology in guiding radiation protection philosophy and public health policy. PMID:7043913

  7. Capillary waves' dynamics at the nanoscale

    NASA Astrophysics Data System (ADS)

    Delgado-Buscalioni, Rafael; Chacón, Enrique; Tarazona, Pedro

    2008-12-01

    We study the dynamics of thermally excited capillary waves (CW) at molecular scales, using molecular dynamics simulations of simple liquid slabs. The analysis is based on the Fourier modes of the liquid surface, constructed via the intrinsic sampling method (Chacón and Tarazona 2003 Phys. Rev. Lett. 91 166103). We obtain the time autocorrelation of the Fourier modes to get the frequency and damping rate Γ(q) of each mode, with wavenumber q. Continuum hydrodynamics predicts Γ(q) ∝ q γ(q) and thus provides a dynamic measure of the q-dependent surface tension, γ_d(q). The dynamical estimation is much more robust than the structural prediction based on the amplitude of the Fourier mode, γ_s(q). Using the optimal estimation of the intrinsic surface, we obtain quantitative agreement between the structural and dynamic pictures. Quite surprisingly, the hydrodynamic prediction for CW remains valid up to wavelengths of about four molecular diameters. Surface tension hydrodynamics break down at shorter scales, whereby a transition to a molecular diffusion regime is observed.

  8. Robustness of linear quadratic state feedback designs in the presence of system uncertainty. [applied to STOL autopilot design

    NASA Technical Reports Server (NTRS)

    Patel, R. V.; Toda, M.; Sridhar, B.

    1977-01-01

    In connection with difficulties concerning an accurate mathematical representation of a linear quadratic state feedback (LQSF) system, it is often necessary to investigate the robustness (stability) of an LQSF design in the presence of system uncertainty and obtain some quantitative measure of the perturbations which such a design can tolerate. A study is conducted concerning the problem of expressing the robustness property of an LQSF design quantitatively in terms of bounds on the perturbations (modeling errors or parameter variations) in the system matrices. Bounds are obtained for the general case of nonlinear, time-varying perturbations. It is pointed out that most of the presented results are readily applicable to practical situations for which a designer has estimates of the bounds on the system parameter perturbations. Relations are provided which help the designer to select appropriate weighting matrices in the quadratic performance index to attain a robust design. The developed results are employed in the design of an autopilot logic for the flare maneuver of the Augmentor Wing Jet STOL Research Aircraft.

  9. Asbestos exposure--quantitative assessment of risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, J.M.; Weill, H.

    Methods for deriving quantitative estimates of asbestos-associated health risks are reviewed and their numerous assumptions and uncertainties described. These methods involve extrapolation of risks observed at past relatively high asbestos concentration levels down to usually much lower concentration levels of interest today--in some cases, orders of magnitude lower. These models are used to calculate estimates of the potential risk to workers manufacturing asbestos products and to students enrolled in schools containing asbestos products. The potential risk to workers exposed for 40 yr to 0.5 fibers per milliliter (f/ml) of mixed asbestos fiber type (a permissible workplace exposure limit under consideration by the Occupational Safety and Health Administration (OSHA)) is estimated as 82 lifetime excess cancers per 10,000 exposed. The risk to students exposed to an average asbestos concentration of 0.001 f/ml of mixed asbestos fiber types for an average enrollment period of 6 school years is estimated as 5 lifetime excess cancers per one million exposed. If the school exposure is to chrysotile asbestos only, then the estimated risk is 1.5 lifetime excess cancers per million. Risks from other causes are presented for comparison; e.g., annual rates (per million) of 10 deaths from high school football, 14 from bicycling (10-14 yr of age), 5 to 20 for whooping cough vaccination. Decisions concerning asbestos products require participation of all parties involved and should only be made after a scientifically defensible estimate of the associated risk has been obtained. In many cases to date, such decisions have been made without adequate consideration of the level of risk or the cost-effectiveness of attempts to lower the potential risk. 73 references.
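
    The linear low-dose extrapolation underlying such estimates can be made concrete with a toy calculation in which lifetime excess risk scales with cumulative exposure (fiber-years). The unit-risk value below is back-calculated from the worker figure quoted above and is illustrative only; the published models also adjust for age at exposure, exposure pattern, and fiber type, so the school-scenario output will not match the quoted 5-per-million figure exactly.

```python
# Simplified sketch of linear-in-cumulative-exposure risk extrapolation,
# the core idea behind the models described above. All numbers are
# illustrative assumptions, not the reviewed models themselves.

def lifetime_excess_cancers_per_million(conc_f_per_ml, years,
                                        unit_risk=410.0):
    """Linear model: risk proportional to cumulative exposure.

    unit_risk: assumed lifetime excess cancers per million people per
    (fiber/ml x year), anchored to the worker estimate quoted above
    (8,200 per million for 20 f/ml-years of exposure).
    """
    cumulative = conc_f_per_ml * years  # f/ml-years
    return unit_risk * cumulative

print(lifetime_excess_cancers_per_million(0.5, 40))    # worker scenario
print(lifetime_excess_cancers_per_million(0.001, 6))   # school scenario
```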

  10. Using Extended Genealogy to Estimate Components of Heritability for 23 Quantitative and Dichotomous Traits

    PubMed Central

    Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L.

    2013-01-01

    Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays. PMID:23737753

  11. Using extended genealogy to estimate components of heritability for 23 quantitative and dichotomous traits.

    PubMed

    Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L

    2013-05-01

    Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays.

  12. SU-E-I-65: Estimation of Tagging Efficiency in Pseudo-Continuous Arterial Spin Labeling (pCASL) MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jen, M; Yan, F; Tseng, Y

    2015-06-15

    Purpose: pCASL was recommended as a potent approach for absolute cerebral blood flow (CBF) quantification in clinical practice. However, uncertainties of tagging efficiency in pCASL remain an issue. This study aimed to estimate tagging efficiency by using a short quantitative pulsed ASL scan (FAIR-QUIPSSII) and compare resultant CBF values with those calibrated by using 2D Phase Contrast (PC) MRI. Methods: Fourteen normal volunteers participated in this study. All images, including whole brain (WB) pCASL, WB FAIR-QUIPSSII and single-slice 2D PC, were collected on a 3T clinical MRI scanner with an 8-channel head coil. The deltaM map was calculated by averaging the subtraction of tag/control pairs in pCASL and FAIR-QUIPSSII images and used for CBF calculation. Tagging efficiency was then calculated by the ratio of mean gray matter CBF obtained from pCASL and FAIR-QUIPSSII. For comparison, tagging efficiency was also estimated with 2D PC, a previously established method, by contrasting WB CBF in pCASL and 2D PC. Feasibility of estimation from a short FAIR-QUIPSSII scan was evaluated by the number of averages required to obtain a stable deltaM value. Taking the deltaM calculated with the maximum number of averages (50 pairs) as reference, stable results were defined as within ±10% variation. Results: Tagging efficiencies obtained by 2D PC MRI (0.732±0.092) were significantly lower than those obtained by FAIR-QUIPSSII (0.846±0.097) (P<0.05). Feasibility results revealed that four pairs of images in the FAIR-QUIPSSII scan were sufficient to obtain a robust calibration with less than 10% difference from using 50 pairs. Conclusion: This study found that reliable estimation of tagging efficiency could be obtained from a few pairs of FAIR-QUIPSSII images, which suggests that a calibration scan of short duration (within 30 s) is feasible. Considering recent reports concerning variability of PC MRI-based calibration, this study proposed an effective alternative for CBF quantification with pCASL.

  13. Quantitative Analysis of the Cervical Texture by Ultrasound and Correlation with Gestational Age.

    PubMed

    Baños, Núria; Perez-Moreno, Alvaro; Migliorelli, Federico; Triginer, Laura; Cobo, Teresa; Bonet-Carne, Elisenda; Gratacos, Eduard; Palacio, Montse

    2017-01-01

    Quantitative texture analysis has been proposed to extract robust features from the ultrasound image to detect subtle changes in the textures of the images. The aim of this study was to evaluate the feasibility of quantitative cervical texture analysis to assess cervical tissue changes throughout pregnancy. This was a cross-sectional study including singleton pregnancies between 20.0 and 41.6 weeks of gestation from women who delivered at term. Cervical length was measured, and a selected region of interest in the cervix was delineated. A model to predict gestational age based on features extracted from cervical images was developed following three steps: data splitting, feature transformation, and regression model computation. Seven hundred images, 30 per gestational week, were included for analysis. There was a strong correlation between the gestational age at which the images were obtained and the estimated gestational age by quantitative analysis of the cervical texture (R = 0.88). This study provides evidence that quantitative analysis of cervical texture can extract features from cervical ultrasound images which correlate with gestational age. Further research is needed to evaluate its applicability as a biomarker of the risk of spontaneous preterm birth, as well as its role in cervical assessment in other clinical situations in which cervical evaluation might be relevant. © 2016 S. Karger AG, Basel.

  14. Multiparametric evaluation of hindlimb ischemia using time-series indocyanine green fluorescence imaging.

    PubMed

    Guang, Huizhi; Cai, Chuangjian; Zuo, Simin; Cai, Wenjuan; Zhang, Jiulou; Luo, Jianwen

    2017-03-01

    Peripheral arterial disease (PAD) can further cause lower limb ischemia. Quantitative evaluation of the vascular perfusion in the ischemic limb contributes to the diagnosis of PAD and to the preclinical development of new drugs. In vivo time-series indocyanine green (ICG) fluorescence imaging can noninvasively monitor blood flow and has a deep tissue penetration. The perfusion rate estimated from the time-series ICG images is not sufficient on its own for the evaluation of hindlimb ischemia. Information relevant to the vascular density is also important, because angiogenesis is an essential mechanism for post-ischemic recovery. In this paper, a multiparametric evaluation method is proposed for simultaneous estimation of multiple vascular perfusion parameters, including not only the perfusion rate but also the vascular perfusion density and the time-varying ICG concentration in veins. The method is based on a mathematical model of ICG pharmacokinetics in the mouse hindlimb. The regression analysis was performed on the time-series ICG images obtained from a dynamic reflectance fluorescence imaging system. The results demonstrate that the estimated parameters are effective for quantitatively evaluating vascular perfusion and distinguishing hypo-perfused tissues from well-perfused tissues in the mouse hindlimb. The proposed multiparametric evaluation method could be useful for PAD diagnosis. [Graphical abstract: estimated perfusion rate and vascular perfusion density maps (left) and the time-varying ICG concentration in veins of the ankle region (right) of the normal and ischemic hindlimbs.] © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Company-level, semi-quantitative assessment of occupational styrene exposure when individual data are not available.

    PubMed

    Kolstad, Henrik A; Sønderskov, Jette; Burstyn, Igor

    2005-03-01

    In epidemiological research, self-reported information about determinants and levels of occupational exposures is difficult to obtain, especially if the disease under study has a high mortality rate or follow-up has exceeded several years. In this paper, we present a semi-quantitative exposure assessment strategy for nested case-control studies of styrene exposure among workers of the Danish reinforced plastics industry when no information on job title, task or other indicators of individual exposure was readily available from cases and controls. The strategy takes advantage of the variability in styrene exposure level and styrene exposure probability across companies. The study comprised 1522 cases of selected malignancies and neurodegenerative diseases and controls employed in 230 reinforced plastics companies and other related industries. Between 1960 and 1996, 3057 measurements of styrene exposure level, obtained from 191 companies, were identified. Mixed effects models were used to estimate expected styrene exposure levels by production characteristics for all companies. Styrene exposure probability within each company was estimated for all but three cases and controls from the fraction of laminators, which was reported by a sample of 945 living colleagues of the cases and controls and by employers and dealers of plastic raw materials. The estimates were validated on a subset of 427 living cases and controls who reported their own work as laminators in the industry. We computed styrene exposure scores that integrated estimated styrene exposure level and styrene exposure probability. Product (boats), process (hand and spray lamination) and calendar year period were the major determinants of styrene exposure level. Within-company styrene exposure variability increased by calendar year and was accounted for when computing the styrene exposure scores. Exposure probability estimates based on colleagues' reports showed the highest predictive values in the validation test, which also indicated that up to 67% of the workers were correctly classified into a styrene-exposed job. Styrene exposure scores declined about 10-fold from the 1960s to the 1990s. This exposure assessment approach may be justified in other industries, and especially in industries dominated by small companies with simple exposure conditions.

  16. Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches

    PubMed Central

    Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe

    2016-01-01

    Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of thiM operon and transcription and translation of thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by A. thaliana riboswitch is governed by mass-action law, whereas it is of kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiency of kinetic and thermodynamic regulations, which shows that kinetically regulated riboswitches react more sharply to concentration variation of their ligand than thermodynamically regulated riboswitches. This rationalizes the interest of kinetic regulation and confirms empirical observations that were obtained by numerical simulations. PMID:26932506

  17. Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches.

    PubMed

    Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe

    2016-01-01

    Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of thiM operon and transcription and translation of thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by A. thaliana riboswitch is governed by mass-action law, whereas it is of kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiency of kinetic and thermodynamic regulations, which shows that kinetically regulated riboswitches react more sharply to concentration variation of their ligand than thermodynamically regulated riboswitches. This rationalizes the interest of kinetic regulation and confirms empirical observations that were obtained by numerical simulations.

  18. Estimation of soil hydraulic properties with microwave techniques

    NASA Technical Reports Server (NTRS)

    Oneill, P. E.; Gurney, R. J.; Camillo, P. J.

    1985-01-01

    Useful quantitative information about soil properties may be obtained by calibrating energy and moisture balance models with remotely sensed data. A soil physics model solves heat and moisture flux equations in the soil profile and is driven by the surface energy balance. Model generated surface temperature and soil moisture and temperature profiles are then used in a microwave emission model to predict the soil brightness temperature. The model hydraulic parameters are varied until the predicted temperatures agree with the remotely sensed values. This method is used to estimate values for saturated hydraulic conductivity, saturated matrix potential, and a soil texture parameter. The conductivity agreed well with a value measured with an infiltration ring and the other parameters agreed with values in the literature.
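
    The calibration loop described above, in which hydraulic parameters are varied until modeled brightness temperatures agree with the remotely sensed ones, can be sketched as a generic least-squares inversion. The forward model below is a deliberately simple stand-in for the coupled soil-physics and microwave emission models; all functional forms and parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def forward_model(params, t):
    """Stand-in for the coupled soil-physics / microwave emission model:
    maps (k_sat, psi_sat, texture) to predicted brightness temperatures
    over a drydown. A real implementation would solve the heat and
    moisture flux equations driven by the surface energy balance."""
    k_sat, psi_sat, texture = params
    return 280.0 - 10.0 * np.exp(-k_sat * t) - 0.1 * psi_sat + texture * t

t = np.linspace(0.0, 1.0, 24)                 # e.g. normalized drydown time
true_params = np.array([2.0, 30.0, 4.0])      # assumed "truth"
observed_tb = forward_model(true_params, t) + rng.normal(0, 0.3, t.size)

# Vary the hydraulic parameters until predicted brightness temperatures
# agree with the remotely sensed values (least-squares misfit).
cost = lambda p: np.sum((forward_model(p, t) - observed_tb) ** 2)
fit = minimize(cost, x0=np.array([1.0, 20.0, 2.0]), method="Nelder-Mead")
print("estimated (k_sat, psi_sat, texture):", np.round(fit.x, 2))
```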

  19. A method for estimating 2D Wrinkle Ridge Strain from application of fault displacement scaling to the Yakima Folds, Washington

    NASA Astrophysics Data System (ADS)

    Mège, Daniel; Reidel, Stephen P.

    The Yakima folds on the central Columbia Plateau are a succession of thrusted anticlines thought to be analogs of planetary wrinkle ridges. They provide a unique opportunity to understand wrinkle ridge structure. Field data and length-displacement scaling are used to demonstrate a method for estimating two-dimensional horizontal contractional strain at wrinkle ridges. Strain is given as a function of ridge length, and depends on other parameters that can be inferred from the Yakima folds and fault population displacement studies. Because ridge length can be readily obtained from orbital imagery, the method can be applied to any wrinkle ridge population, and helps constrain quantitative tectonic models on other planets.
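
    The strain estimate can be sketched from the scaling relation itself: assume thrust displacement scales linearly with ridge length (D = γL), convert displacement to horizontal shortening through the fault dip, and sum across the ridge population. The scaling coefficient γ, the dip, and the ridge lengths below are illustrative assumptions, not values derived from the Yakima data.

```python
import math

def wrinkle_ridge_strain(ridge_lengths_km, region_width_km,
                         gamma=0.01, dip_deg=25.0):
    """Toy 2D contractional strain from displacement-length scaling:
    each ridge's underlying thrust displacement is taken as
    D = gamma * L, its horizontal (heave) component as D * cos(dip),
    and strain as the summed heave divided by the region width
    measured normal to the ridge trends."""
    heave_km = sum(gamma * length * math.cos(math.radians(dip_deg))
                   for length in ridge_lengths_km)
    return heave_km / region_width_km

# Illustrative ridge lengths (km) across a 300 km wide region.
print(f"strain = {wrinkle_ridge_strain([40, 65, 120, 90], 300):.4f}")
```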

  20. Heavy baryons in the large N_c limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albertus, C.; Ruiz Arriola, Enrique; Fernando, Ishara P.

    It is shown that in the large N_c limit heavy baryon masses can be estimated quantitatively in a 1/N_c expansion using the Hartree approximation. The results are compared with available lattice calculations for different values of the ratio between the square root of the string tension and the heavy quark mass, with the string tension taken to be independent of N_c. Using a potential adjusted to agree with the one obtained in lattice QCD, a variational analysis of the ground state spin averaged baryon mass is performed using Gaussian Hartree wave functions. Relativistic corrections through the quark kinetic energy are included. Lastly, the results provide good estimates for the first sub-leading corrections in 1/N_c.

  1. Heavy baryons in the large N_c limit

    DOE PAGES

    Albertus, C.; Ruiz Arriola, Enrique; Fernando, Ishara P.; ...

    2015-09-16

    It is shown that in the large N_c limit heavy baryon masses can be estimated quantitatively in a 1/N_c expansion using the Hartree approximation. The results are compared with available lattice calculations for different values of the ratio between the square root of the string tension and the heavy quark mass, with the string tension taken to be independent of N_c. Using a potential adjusted to agree with the one obtained in lattice QCD, a variational analysis of the ground state spin averaged baryon mass is performed using Gaussian Hartree wave functions. Relativistic corrections through the quark kinetic energy are included. Lastly, the results provide good estimates for the first sub-leading corrections in 1/N_c.

  2. Crop identification technology assessment for remote sensing (CITARS). Volume 10: Interpretation of results

    NASA Technical Reports Server (NTRS)

    Bizzell, R. M.; Feiveson, A. H.; Hall, F. G.; Bauer, M. E.; Davis, B. J.; Malila, W. A.; Rice, D. P.

    1975-01-01

    The CITARS was an experiment designed to quantitatively evaluate crop identification performance for corn and soybeans in various environments using a well-defined set of automatic data processing (ADP) techniques. Each technique was applied to data acquired to recognize and estimate proportions of corn and soybeans. The CITARS documentation summarizes, interprets, and discusses the crop identification performances obtained using (1) different ADP procedures; (2) a linear versus a quadratic classifier; (3) prior probability information derived from historic data; (4) local versus nonlocal recognition training statistics and the associated use of preprocessing; (5) multitemporal data; (6) classification bias and mixed pixels in proportion estimation; and (7) data with different site characteristics, including crop, soil, atmospheric effects, and stages of crop maturity.

  3. On the scaling of the distribution of daily price fluctuations in the Mexican financial market index

    NASA Astrophysics Data System (ADS)

    Alfonso, Léster; Mansilla, Ricardo; Terrero-Escalante, César A.

    2012-05-01

    In this paper, a statistical analysis of log-return fluctuations of the IPC, the Mexican Stock Market Index, is presented. A sample of daily data covering the period from 04/09/2000 to 04/09/2010 was analyzed and fitted to different distributions. Tests of the goodness of fit were performed in order to quantitatively assess the quality of the estimation. Special attention was paid to the impact of the size of the sample on the estimated decay of the distribution's tail. In this study a forceful rejection of normality was obtained. On the other hand, the null hypothesis that the log-fluctuations follow an α-stable Lévy distribution cannot be rejected at the 5% significance level.
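
    A sketch of this kind of goodness-of-fit workflow on simulated heavy-tailed "log-returns": standardize the sample, test normality with a Kolmogorov-Smirnov test, then fit and test a heavy-tailed alternative. Student's t is used here as the heavy-tailed candidate because SciPy's α-stable fitting is costly; the data are simulated stand-ins, not the IPC series.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated heavy-tailed daily log-returns (stand-in for the IPC data).
log_returns = stats.t.rvs(df=3, scale=0.01, size=2500, random_state=rng)

# Goodness-of-fit against the normal distribution: a forceful rejection,
# as reported above, shows up as a tiny p-value.
z = (log_returns - log_returns.mean()) / log_returns.std(ddof=1)
ks_norm = stats.kstest(z, "norm")
print(f"KS vs normal:   stat={ks_norm.statistic:.3f}, p={ks_norm.pvalue:.1e}")

# Fit a heavy-tailed candidate and test it the same way. Note the KS
# p-value is only approximate when parameters are estimated from the
# same data; a parametric bootstrap would correct this in full analyses.
df_, loc_, scale_ = stats.t.fit(log_returns)
ks_t = stats.kstest(log_returns, "t", args=(df_, loc_, scale_))
print(f"KS vs fitted t: stat={ks_t.statistic:.3f}, p={ks_t.pvalue:.2f}")
```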

  4. A systematic review of quantitative burn wound microbiology in the management of burns patients.

    PubMed

    Halstead, Fenella D; Lee, Kwang Chear; Kwei, Johnny; Dretzke, Janine; Oppenheim, Beryl A; Moiemen, Naiem S

    2018-02-01

    The early diagnosis of infection or sepsis in burns is important for patient care. Globally, a large number of burn centres advocate quantitative cultures of wound biopsies for patient management, since there is assumed to be a direct link between the bioburden of a burn wound and the risk of microbial invasion. Given the conflicting study findings in this area, a systematic review was warranted. Bibliographic databases were searched with no language restrictions to August 2015. Study selection, data extraction and risk of bias assessment were performed in duplicate using pre-defined criteria. Substantial heterogeneity precluded quantitative synthesis, and findings were described narratively, sub-grouped by clinical question. Twenty-six laboratory and/or clinical studies were included. Substantial heterogeneity hampered comparisons across studies and interpretation of findings. Limited evidence suggests that (i) more than one quantitative microbiology sample is required to obtain reliable estimates of bacterial load; (ii) biopsies are more sensitive than swabs in diagnosing or predicting sepsis; (iii) high bacterial loads may predict worse clinical outcomes, and (iv) both quantitative and semi-quantitative culture reports need to be interpreted with caution and in the context of other clinical risk factors. The evidence base for the utility and reliability of quantitative microbiology for diagnosing or predicting clinical outcomes in burns patients is limited and often poorly reported. Consequently future research is warranted. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  5. On sweat analysis for quantitative estimation of dehydration during physical exercise.

    PubMed

    Ring, Matthias; Lohmueller, Clemens; Rauh, Manfred; Eskofier, Bjoern M

    2015-08-01

    Quantitative estimation of water loss during physical exercise is of importance because dehydration can impair both muscular strength and aerobic endurance. A physiological indicator for deficit of total body water (TBW) might be the concentration of electrolytes in sweat. It has been shown that concentrations differ after physical exercise depending on whether water loss was replaced by fluid intake or not. However, to the best of our knowledge, this fact has not been examined for its potential to quantitatively estimate TBW loss. Therefore, we conducted a study in which sweat samples were collected continuously during two hours of physical exercise without fluid intake. A statistical analysis of these sweat samples revealed significant correlations between chloride concentration in sweat and TBW loss (r = 0.41, p < 0.01), and between sweat osmolality and TBW loss (r = 0.43, p < 0.01). A quantitative estimation of TBW loss resulted in a mean absolute error of 0.49 l per estimation. Although the precision has to be improved for practical applications, the present results suggest that TBW loss estimation could be realizable using sweat samples.
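
    The estimation step can be sketched as a simple linear calibration from sweat chloride concentration to total-body-water loss. The calibration data below are simulated stand-ins with an assumed slope and noise level; only the overall regression-then-predict structure mirrors the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical calibration data: sweat chloride concentration (mmol/l)
# versus measured total-body-water loss (litres) for several subjects.
chloride = rng.uniform(20, 60, 30)
tbw_loss = 0.03 * chloride + rng.normal(0, 0.4, 30)  # assumed relation

# Fit the linear predictor corresponding to the reported correlation.
fit = stats.linregress(chloride, tbw_loss)
print(f"r = {fit.rvalue:.2f}")

def estimate_tbw_loss(chloride_mmol_l):
    """Predict water deficit (litres) from a sweat chloride reading."""
    return fit.slope * chloride_mmol_l + fit.intercept

print(f"estimated loss at 45 mmol/l: {estimate_tbw_loss(45):.2f} l")
```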

  6. Estimation of Temporal Gait Parameters Using a Human Body Electrostatic Sensing-Based Method.

    PubMed

    Li, Mengxuan; Li, Pengfei; Tian, Shanshan; Tang, Kai; Chen, Xi

    2018-05-28

    Accurate estimation of gait parameters is essential for obtaining quantitative information on motor deficits in Parkinson's disease and other neurodegenerative diseases, which helps determine disease progression and therapeutic interventions. Due to the demand for high accuracy, unobtrusive measurement methods such as optical motion capture systems, foot pressure plates, and other systems have been commonly used in clinical environments. However, the high cost of existing lab-based methods greatly hinders their wider usage, especially in developing countries. In this study, we present a low-cost, noncontact, and accurate method for estimating temporal gait parameters by sensing and analyzing the electrostatic field generated by human foot stepping. The proposed method achieved an average 97% accuracy on gait phase detection and was further validated by comparison to a foot pressure system in 10 healthy subjects. The two sets of results were compared using the Pearson coefficient r and showed excellent consistency (r = 0.99, p < 0.05). The repeatability of the proposed method was assessed between days using intraclass correlation coefficients (ICC), showing good test-retest reliability (ICC = 0.87, p < 0.01). The proposed method could be an affordable and accurate tool to measure temporal gait parameters in hospital laboratories and in patients' home environments.

  7. MFI ratio estimation of ZAP-70 in B-CLL by flow cytometry can be improved by considering the isotype-matched antibody signal.

    PubMed

    Marquez, M-E; Deglesne, P-A; Suarez, G; Romano, E

    2011-04-01

    The IgV(H) mutational status of B-cell chronic lymphocytic leukemia (B-CLL) is of prognostic value. Expression of ZAP-70 in B-CLL is a surrogate marker for IgV(H) unmutated (UM). As determination of IgV(H) mutational status involves a methodology currently unavailable for most clinical laboratories, it is important to have available a reliable technique for ZAP-70 estimation in B-CLL. Flow cytometry (FC) is a convenient technique for this purpose. However, there is still no adequate way for data analysis, which would prevent the assignment of false positive or negative expression. We have modified the currently most accepted technique, which uses the ratio of the mean fluorescent index (MFI) of B-CLL to T cells. The MFI for parallel antibody isotype staining is subtracted from the ZAP-70 MFI of both B-CLL and T cells. We validated this technique comparing the results obtained for ZAP-70 expression by FC with those obtained with quantitative PCR for the same patients. We applied the technique in a series of 53 patients. With this modification, a better correlation between ZAP-70 expression and IgV(H) UM was obtained. Thus, the MFI ratio B-CLL/T cell corrected by isotype is a reliable analysis technique to estimate ZAP-70 expression in B-CLL. © 2010 Blackwell Publishing Ltd.
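
    The modified ratio reduces to one line of arithmetic: subtract each population's isotype-control MFI before forming the B-CLL/T-cell ratio. A minimal sketch, with illustrative fluorescence values:

```python
def zap70_mfi_ratio(mfi_bcll, mfi_bcll_isotype, mfi_t, mfi_t_isotype):
    """Isotype-corrected ZAP-70 ratio as described above: subtract the
    matched isotype-control MFI from both the B-CLL and T-cell signals
    before taking the B-CLL/T-cell ratio."""
    corrected_b = mfi_bcll - mfi_bcll_isotype
    corrected_t = mfi_t - mfi_t_isotype
    if corrected_t <= 0:
        raise ValueError("T-cell signal does not exceed isotype control")
    return corrected_b / corrected_t

# Illustrative values only (arbitrary fluorescence units).
print(f"{zap70_mfi_ratio(120.0, 40.0, 300.0, 50.0):.2f}")
```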

  8. Subsurface water parameters: optimization approach to their determination from remotely sensed water color data.

    PubMed

    Jain, S C; Miller, J R

    1976-04-01

    A method, using an optimization scheme, has been developed for the interpretation of spectral albedo (or spectral reflectance) curves obtained from remotely sensed water color data. This method uses a two-flow model of the radiation flow and solves for the albedo. Optimization fitting of predicted to observed reflectance data is performed by a quadratic interpolation method for the variables chlorophyll concentration and scattering coefficient. The technique is applied to airborne water color data obtained from Kawartha Lakes, the Sargasso Sea, and the Nova Scotia coast. The modeled spectral albedo curves are compared to those obtained experimentally, and the computed optimum water parameters are compared to ground truth values. It is shown that the backscattered spectral signal contains information that can be interpreted to give quantitative estimates of the chlorophyll concentration and turbidity in the waters studied.

  9. Legionella in water samples: how can you interpret the results obtained by quantitative PCR?

    PubMed

    Ditommaso, Savina; Ricciardi, Elisa; Giacomuzzi, Monica; Arauco Rivera, Susan R; Zotti, Carla M

    2015-02-01

    Evaluation of the potential risk associated with Legionella has traditionally been determined from culture-based methods. Quantitative polymerase chain reaction (qPCR) is an alternative tool that offers rapid, sensitive and specific detection of Legionella in environmental water samples. In this study we compare the results obtained by conventional qPCR (iQ-Check™ Quanti Legionella spp.; Bio-Rad) and by culture method on artificial samples prepared in Page's saline by addition of Legionella pneumophila serogroup 1 (ATCC 33152), and we analyse the selective quantification of viable Legionella cells by the qPCR-PMA method. The amount of Legionella DNA (GU) determined by qPCR was 28-fold higher than the load detected by culture (CFU). Applying qPCR combined with PMA treatment, we obtained a reduction of 98.5% of the qPCR signal from dead cells. We observed a dissimilarity in the ability of PMA to suppress the PCR signal in samples with different amounts of bacteria: the effective elimination of detection signals by PMA depended on the concentration of GU, and increasing amounts of cells resulted in higher values of reduction. Using the results from this study we created an algorithm to facilitate the interpretation of viable cell level estimation with qPCR-PMA. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Image-derived arterial input function for quantitative fluorescence imaging of receptor-drug binding in vivo

    PubMed Central

    Elliott, Jonathan T.; Samkoe, Kimberley S.; Davis, Scott C.; Gunn, Jason R.; Paulsen, Keith D.; Roberts, David W.; Pogue, Brian W.

    2017-01-01

    Receptor concentration imaging (RCI) with targeted-untargeted optical dye pairs has enabled in vivo immunohistochemistry analysis in preclinical subcutaneous tumors. Successful application of RCI to fluorescence guided resection (FGR), so that quantitative molecular imaging of tumor-specific receptors could be performed in situ, would have a high impact. However, assumptions of pharmacokinetics, permeability and retention, as well as the lack of a suitable reference region limit the potential for RCI in human neurosurgery. In this study, an arterial input graphic analysis (AIGA) method is presented which is enabled by independent component analysis (ICA). The percent difference in arterial concentration between the image-derived arterial input function (AIF_ICA) and that obtained by an invasive method (ICA_CAR) was 2.0 ± 2.7% during the first hour of circulation of a targeted-untargeted dye pair in mice. Estimates of distribution volume and receptor concentration in tumor-bearing mice (n = 5) recovered using the AIGA technique did not differ significantly from values obtained using invasive AIF measurements (p = 0.12). The AIGA method, enabled by the subject-specific AIF_ICA, was also applied in a rat orthotopic model of U-251 glioblastoma to obtain the first reported receptor concentration and distribution volume maps during open craniotomy. PMID:26349671

  11. An approach to standardization of urine sediment analysis via suggestion of a common manual protocol.

    PubMed

    Ko, Dae-Hyun; Ji, Misuk; Kim, Sollip; Cho, Eun-Jung; Lee, Woochang; Yun, Yeo-Min; Chun, Sail; Min, Won-Ki

    2016-01-01

    The results of urine sediment analysis have been reported semiquantitatively. However, as recent guidelines recommend quantitative reporting of urine sediment, and with the development of automated urine sediment analyzers, there is an increasing need for quantitative analysis of urine sediment. Here, we developed a protocol for urine sediment analysis and quantified the results. Based on questionnaires, various reports, guidelines, and experimental results, we developed a protocol for urine sediment analysis. The results of this new protocol were compared with those obtained with a standardized chamber and an automated sediment analyzer. Reference intervals were also estimated using the new protocol. We developed a protocol with centrifugation at 400 g for 5 min, with an average concentration factor of 30. The correlations between the quantitative results of urine sediment analysis, the standardized chamber, and the automated sediment analyzer were generally good. The conversion factor derived from the new protocol showed a better fit with the results of manual counting than the default conversion factor in the automated sediment analyzer. We developed a protocol for manual urine sediment analysis to report the results quantitatively. This protocol may provide a means for standardization of urine sediment analysis.

  12. Remote Determination of Auroral Energy Characteristics During Substorm Activity

    NASA Technical Reports Server (NTRS)

    Germany, G. A.; Parks, G. K.; Brittnacher, M. J.; Cumnock, J.; Lummerzheim, D.; Spann, J. F., Jr.

    1997-01-01

    Ultraviolet auroral images from the Ultraviolet Imager onboard the POLAR satellite can be used as quantitative remote diagnostics of the auroral regions, yielding estimates of incident energy characteristics, compositional changes, and other higher-order data products. In particular, images of long- and short-wavelength N2 Lyman-Birge-Hopfield (LBH) emissions can be modeled to obtain functions of energy flux and average energy that are largely insensitive to seasonal and solar activity changes. This technique is used in this study to estimate incident electron energy flux and average energy during substorm activity occurring on May 19, 1996. This event was simultaneously observed by the WIND, GEOTAIL, INTERBALL, DMSP, and NOAA spacecraft as well as by POLAR. Here, incident energy estimates derived from the Ultraviolet Imager (UVI) are compared with in situ measurements of the same parameters from an overflight by the DMSP F12 satellite coincident with the UVI image times.

  13. Determination of mean rainfall from the Special Sensor Microwave/Imager (SSM/I) using a mixed lognormal distribution

    NASA Technical Reports Server (NTRS)

    Berg, Wesley; Chase, Robert

    1992-01-01

    Global estimates of monthly, seasonal, and annual oceanic rainfall are computed for a period of one year using data from the Special Sensor Microwave/Imager (SSM/I). Instantaneous rainfall estimates are derived from brightness temperature values obtained from the satellite data using the Hughes D-matrix algorithm. The instantaneous rainfall estimates are stored in 1 deg square bins over the global oceans for each month. A mixed probability distribution, combining a lognormal distribution describing the positive rainfall values with a spike at zero describing the observations indicating no rainfall, is used to compute mean values. The resulting data for the period of interest are fitted to a lognormal distribution using a maximum-likelihood method. Mean values are computed for the mixed distribution, and qualitative comparisons with published historical results as well as quantitative comparisons with corresponding in situ raingage data are performed.
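
    The mean of such a mixed (point mass at zero plus lognormal) distribution has a simple closed form, E[R] = p * exp(mu + sigma^2/2), where p is the fraction of raining observations and (mu, sigma) are the maximum-likelihood lognormal parameters of the positive values. A minimal sketch of that computation:

        import numpy as np

        def mixed_lognormal_mean(rain):
            # fraction of raining (positive) observations
            rain = np.asarray(rain, dtype=float)
            positive = rain[rain > 0]
            p = positive.size / rain.size
            # maximum-likelihood lognormal parameters of the positive part
            mu = np.log(positive).mean()
            sigma = np.log(positive).std()
            # mean of the mixed distribution: p * E[lognormal]
            return p * np.exp(mu + 0.5 * sigma**2)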

  14. RAD-ADAPT: Software for modelling clonogenic assay data in radiation biology.

    PubMed

    Zhang, Yaping; Hu, Kaiqiang; Beumer, Jan H; Bakkenist, Christopher J; D'Argenio, David Z

    2017-04-01

    We present a comprehensive software program, RAD-ADAPT, for the quantitative analysis of clonogenic assays in radiation biology. Two commonly used models for clonogenic assay analysis, the linear-quadratic model and the single-hit multi-target model, are included in the software. RAD-ADAPT uses the maximum likelihood estimation method to obtain parameter estimates, under the assumption that cell colony count data follow a Poisson distribution. The program has an intuitive interface, generates model prediction plots, tabulates model parameter estimates, and allows automatic statistical comparison of parameters between different groups. The RAD-ADAPT interface is written using the statistical software R, and the underlying computations are accomplished by the ADAPT software system for pharmacokinetic/pharmacodynamic systems analysis. The use of RAD-ADAPT is demonstrated with an example that examines the impact of pharmacologic ATM and ATR kinase inhibition on the human lung cancer cell line A549 after ionizing radiation. Copyright © 2017 Elsevier B.V. All rights reserved.
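
    A minimal sketch of the same estimation idea, not RAD-ADAPT itself: fitting the linear-quadratic survival model by Poisson maximum likelihood, with illustrative dose and colony-count data.

        import numpy as np
        from scipy.optimize import minimize

        doses = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])   # Gy (illustrative)
        counts = np.array([220, 180, 130, 60, 18, 5])      # colonies (illustrative)

        def neg_log_likelihood(params):
            # expected colonies: n0 * exp(-(alpha*D + beta*D^2));
            # counts assumed Poisson (constant terms dropped)
            n0, alpha, beta = params
            lam = n0 * np.exp(-(alpha * doses + beta * doses**2))
            return np.sum(lam - counts * np.log(lam))

        fit = minimize(neg_log_likelihood, x0=[200.0, 0.2, 0.02],
                       method="L-BFGS-B",
                       bounds=[(1e-6, None), (0.0, None), (0.0, None)])
        n0_hat, alpha_hat, beta_hat = fit.x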

  15. Assessment of Thematic Mapper Band-to-band Registration by the Block Correlation Method

    NASA Technical Reports Server (NTRS)

    Card, D. H.; Wrigley, R. C.; Mertz, F. C.; Hall, J. R.

    1984-01-01

    The design of the Thematic Mapper (TM) multispectral radiometer makes it susceptible to band-to-band misregistration. To estimate band-to-band misregistration, a block correlation method is employed. This method was chosen over other possible techniques (band differencing and flickering) because it produces quantitative results. The method correlates rectangular blocks of pixels from one band against blocks centered on identical pixels from a second band. The block pairs are shifted in pixel increments both vertically and horizontally with respect to each other, and the correlation coefficient for each shift position is computed. The displacement corresponding to the maximum correlation is taken as the best estimate of registration error for each block pair. Subpixel shifts are estimated by a bi-quadratic interpolation of the correlation values surrounding the maximum correlation. To obtain statistical summaries for each band combination, post-processing of the block correlation results was performed. The method yields estimates of registration error that are consistent with expectations.
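
    A sketch of the block correlation idea under stated simplifications: integer shifts of one normalized block against another, with a separable parabolic refinement of the correlation peak standing in for the paper's bi-quadratic interpolation.

        import numpy as np

        def block_offset(a, b, m=3):
            # estimate the (dy, dx) misregistration of block `a` relative to
            # block `b` from the peak of their shift-correlation surface
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            core = b[m:b.shape[0] - m, m:b.shape[1] - m]
            corr = np.empty((2 * m + 1, 2 * m + 1))
            for dy in range(-m, m + 1):
                for dx in range(-m, m + 1):
                    shifted = a[m + dy:a.shape[0] - m + dy,
                                m + dx:a.shape[1] - m + dx]
                    corr[dy + m, dx + m] = (shifted * core).mean()
            iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

            def parabolic(c_minus, c_0, c_plus):
                # vertex of the parabola through three correlation samples
                return 0.5 * (c_minus - c_plus) / (c_minus - 2 * c_0 + c_plus)

            dy, dx = float(iy - m), float(ix - m)
            if 0 < iy < 2 * m and 0 < ix < 2 * m:   # subpixel refinement
                dy += parabolic(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
                dx += parabolic(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
            return dy, dx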

  16. A Comprehensive Estimation of the Economic Effects of Meteorological Services Based on the Input-Output Method

    PubMed Central

    Wu, Xianhua; Yang, Lingjuan; Guo, Ji; Lu, Huaguo; Chen, Yunfeng; Sun, Jian

    2014-01-01

    Concentrating on the consuming coefficient, partition coefficient, and Leontief inverse matrix, relevant concepts and algorithms are developed for estimating the impact of meteorological services, including the associated (indirect, complete) economic effects. Quantitative estimates are then obtained for the meteorological services in Jiangxi province by utilizing the input-output method. It is found that economic losses are noticeably reduced by preventive strategies that draw on both meteorological information and the internal relevance (interdependency) of the industrial economic system. Another finding is that the ratio of input to the complete economic effect of meteorological services is about 1:108.27 to 1:183.06, remarkably different from a previous estimate based on the Delphi method (1:30 to 1:51). In particular, the economic effects of meteorological services are higher for nontraditional users (manufacturing, wholesale and retail trades, the services sector, tourism and culture, and art) and lower for traditional users (agriculture, forestry, livestock, fishery, and construction industries). PMID:24578666
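
    The computational core of such an input-output analysis is the Leontief inverse, which turns direct (first-round) effects into complete (direct plus indirect) effects. A minimal sketch with a hypothetical three-sector coefficient matrix:

        import numpy as np

        # hypothetical 3-sector consuming (technical) coefficient matrix:
        # A[i, j] = input from sector i required per unit output of sector j
        A = np.array([[0.10, 0.30, 0.05],
                      [0.20, 0.10, 0.20],
                      [0.05, 0.15, 0.10]])

        leontief = np.linalg.inv(np.eye(3) - A)        # (I - A)^-1

        direct_effect = np.array([100.0, 50.0, 80.0])  # hypothetical, by sector
        complete_effect = leontief @ direct_effect     # direct + indirect

    Column sums of the Leontief inverse act as output multipliers; this is how a direct effect attributed to meteorological information is scaled up to a complete economic effect across interdependent sectors.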

  17. Direct Regularized Estimation of Retinal Vascular Oxygen Tension Based on an Experimental Model

    PubMed Central

    Yildirim, Isa; Ansari, Rashid; Yetik, I. Samil; Shahidi, Mahnaz

    2014-01-01

    Phosphorescence lifetime imaging is commonly used to generate oxygen tension maps of retinal blood vessels with the classical least squares (LS) estimation method. A spatial regularization method was later proposed and provided improved results. However, both methods obtain oxygen tension values from estimates of intermediate variables and do not yield an optimum estimate of oxygen tension, due to its nonlinear dependence on the ratio of the intermediate variables. In this paper, we provide an improved solution by devising a regularized direct least squares (RDLS) method that exploits knowledge available from studies that provide models of oxygen tension in retinal arteries and veins, unlike the earlier regularized LS approach, where knowledge about the intermediate variables is limited. The performance of the proposed RDLS method is evaluated by investigating and comparing the bias, variance, oxygen tension maps, 1-D profiles of arterial oxygen tension, and mean absolute error with those of the earlier methods, and its superior performance, both quantitative and qualitative, is demonstrated. PMID:23732915

  18. A comprehensive estimation of the economic effects of meteorological services based on the input-output method.

    PubMed

    Wu, Xianhua; Wei, Guo; Yang, Lingjuan; Guo, Ji; Lu, Huaguo; Chen, Yunfeng; Sun, Jian

    2014-01-01

    Concentrating on the consuming coefficient, partition coefficient, and Leontief inverse matrix, relevant concepts and algorithms are developed for estimating the impact of meteorological services, including the associated (indirect, complete) economic effects. Quantitative estimates are then obtained for the meteorological services in Jiangxi province by utilizing the input-output method. It is found that economic losses are noticeably reduced by preventive strategies that draw on both meteorological information and the internal relevance (interdependency) of the industrial economic system. Another finding is that the ratio of input to the complete economic effect of meteorological services is about 1:108.27 to 1:183.06, remarkably different from a previous estimate based on the Delphi method (1:30 to 1:51). In particular, the economic effects of meteorological services are higher for nontraditional users (manufacturing, wholesale and retail trades, the services sector, tourism and culture, and art) and lower for traditional users (agriculture, forestry, livestock, fishery, and construction industries).

  19. Quantification of idiopathic pulmonary fibrosis using computed tomography and histology.

    PubMed

    Coxson, H O; Hogg, J C; Mayo, J R; Behzad, H; Whittall, K P; Schwartz, D A; Hartley, P G; Galvin, J R; Wilson, J S; Hunninghake, G W

    1997-05-01

    We used computed tomography (CT) and histologic analysis to quantify lung structure in idiopathic pulmonary fibrosis (IPF). CT scans were obtained from IPF and control patients, and lung volumes were estimated from measurements of voxel size and the X-ray attenuation value of each voxel. Quantitative estimates of lung structure were obtained from biopsies taken from diseased and normal CT regions using stereologic methods. CT density was used to calculate the proportions of tissue and air, and these values were used to correct the biopsy specimens to the level of inflation during the CT scan. The data show that IPF is associated with a reduction in airspace volume with no change in tissue volume or weight compared with control lungs. Lung surface area decreased by two-thirds (p < 0.001) and mean parenchymal thickness increased tenfold (p < 0.001). An exudate of fluid and cells was present in the airspace of the diseased lung regions, and the numbers of inflammatory cells, collagen, and proteoglycans per 100 g of tissue were increased in IPF. We conclude that IPF reorganizes lung tissue, causing a loss of airspace and surface area without increasing the total lung tissue.
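
    The air/tissue proportions follow from linear mixing of CT attenuation between air (about -1000 HU) and soft tissue (about 0 HU); a minimal sketch of this standard densitometric step (the exact calibration used in the paper is not given in the record):

        import numpy as np

        def tissue_fraction(hu):
            # tissue (non-air) volume fraction of a voxel, by linear mixing
            # between air at -1000 HU and soft tissue at ~0 HU
            return np.clip(1.0 + np.asarray(hu, dtype=float) / 1000.0, 0.0, 1.0)

        # e.g. a voxel measuring -700 HU is ~30% tissue and ~70% air
        print(tissue_fraction(-700))   # 0.3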

  20. Impact of TRMM and SSM/I Rainfall Assimilation on Global Analysis and QPF

    NASA Technical Reports Server (NTRS)

    Hou, Arthur; Zhang, Sara; Reale, Oreste

    2002-01-01

    Evaluation of QPF skills requires quantitatively accurate precipitation analyses. We show that assimilation of surface rain rates derived from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager and the Special Sensor Microwave/Imager (SSM/I) improves quantitative precipitation estimates (QPE) and many aspects of global analyses. Short-range forecasts initialized with analyses that include satellite rainfall data generally yield significantly higher QPF threat scores and better storm track predictions. These results were obtained using a variational procedure that minimizes the difference between the observed and model rain rates by correcting the moist physics tendency of the forecast model over a 6-h assimilation window. In two case studies of Hurricanes Bonnie and Floyd, synoptic analysis shows that this procedure produces initial conditions with better-defined tropical storm features and stronger precipitation intensity associated with the storm.

  1. Quantifying and predicting Drosophila larvae crawling phenotypes

    NASA Astrophysics Data System (ADS)

    Günther, Maximilian N.; Nettesheim, Guilherme; Shubeita, George T.

    2016-06-01

    The fruit fly Drosophila melanogaster is a widely used model for cell biology, development, disease, and neuroscience. The fly’s power as a genetic model for disease and neuroscience can be augmented by a quantitative description of its behavior. Here we show that we can accurately account for the complex and unique crawling patterns exhibited by individual Drosophila larvae using a small set of four parameters obtained from the trajectories of a few crawling larvae. The values of these parameters change for larvae from different genetic mutants, as we demonstrate for fly models of Alzheimer’s disease and the Fragile X syndrome, allowing applications such as genetic or drug screens. Using the quantitative model of larval crawling developed here we use the mutant-specific parameters to robustly simulate larval crawling, which allows estimating the feasibility of laborious experimental assays and aids in their design.

  2. Reducing misfocus-related motion artefacts in laser speckle contrast imaging.

    PubMed

    Ringuette, Dene; Sigal, Iliya; Gad, Raanan; Levi, Ofer

    2015-01-01

    Laser speckle contrast imaging (LSCI) is a flexible, easy-to-implement technique for measuring blood flow speeds in vivo. In order to obtain reliable quantitative data from LSCI, the object must remain in the focal plane of the imaging system for the duration of the measurement session. However, since LSCI suffers from inherent frame-to-frame noise, it often requires a moving-average filter to produce quantitative results. This frame-to-frame noise also makes the implementation of a rapid autofocus system challenging. In this work, we demonstrate an autofocus method and system based on a novel measure of misfocus which serves as an accurate and noise-robust feedback mechanism. This measure of misfocus is shown to enable localization of best focus with sub-depth-of-field sensitivity, yielding more accurate estimates of blood flow speeds and blood vessel diameters.
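
    For context, the quantity LSCI maps is the local speckle contrast K = sigma/mean computed over a small sliding window of the raw image; lower K corresponds to more blurring within the exposure and hence faster flow. A minimal sketch (the window size is an arbitrary choice):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def speckle_contrast(raw, win=7):
            # local mean and variance over a win x win sliding window
            raw = raw.astype(float)
            mean = uniform_filter(raw, size=win)
            mean_sq = uniform_filter(raw**2, size=win)
            var = np.maximum(mean_sq - mean**2, 0.0)
            # speckle contrast K = sigma / mean
            return np.sqrt(var) / np.maximum(mean, 1e-12)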

  3. Quantitative Doppler Analysis Using Conventional Color Flow Imaging Acquisitions.

    PubMed

    Karabiyik, Yucel; Ekroll, Ingvild Kinn; Eik-Nes, Sturla H; Lovstakken, Lasse

    2018-05-01

    Interleaved acquisitions used in conventional triplex mode result in a tradeoff between the frame rate and the quality of velocity estimates. On the other hand, workflow becomes inefficient when the user has to switch between different modes, and measurement variability is increased. This paper investigates the use of the power spectral Capon estimator for quantitative Doppler analysis using data acquired with conventional color flow imaging (CFI) schemes. To preserve the number of samples used for velocity estimation, only spatial averaging was utilized, and clutter rejection was performed after spectral estimation. The resulting velocity spectra were evaluated in terms of spectral width using a recently proposed spectral envelope estimator. The spectral envelopes were also used for Doppler index calculations using in vivo and string phantom acquisitions. In vivo results demonstrated that the Capon estimator can provide spectral estimates of sufficient quality for quantitative analysis using packet-based CFI acquisitions. The calculated Doppler indices were similar to the values calculated using spectrograms estimated on a commercial ultrasound scanner.

  4. Smile line assessment comparing quantitative measurement and visual estimation.

    PubMed

    Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie

    2011-02-01

    Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  5. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  6. Conclusions on measurement uncertainty in microbiology.

    PubMed

    Forster, Lynne I

    2009-01-01

    Since its first issue in 1999, testing laboratories wishing to comply with all the requirements of ISO/IEC 17025 have been collecting data for estimating uncertainty of measurement for quantitative determinations. In the microbiological field of testing, some debate has arisen as to whether uncertainty needs to be estimated for each method performed in the laboratory for each type of sample matrix tested. Queries also arise concerning the estimation of uncertainty when plate/membrane filter colony counts are below recommended method counting range limits. A selection of water samples (with low to high contamination) was tested in replicate, with the associated uncertainty of measurement being estimated from the analytical results obtained. The analyses performed on the water samples included total coliforms, fecal coliforms, fecal streptococci by membrane filtration, and heterotrophic plate counts by the pour plate technique. For those samples where plate/membrane filter colony counts were ≥20, uncertainty estimates at a 95% confidence level were very similar for the methods, being estimated as 0.13, 0.14, 0.14, and 0.12, respectively. For those samples where plate/membrane filter colony counts were <20, estimated uncertainty values for each sample showed close agreement with published confidence limits established using a Poisson distribution approach.

  7. Quantitative assessment of direct and indirect landslide risk along transportation lines in southern India

    NASA Astrophysics Data System (ADS)

    Jaiswal, P.; van Westen, C. J.; Jetten, V.

    2010-06-01

    A quantitative approach for landslide risk assessment along transportation lines is presented and applied to a road and a railway alignment in the Nilgiri hills in southern India. The method allows estimation of the direct risk affecting the alignments, vehicles and people, and of the indirect risk resulting from the disruption of economic activities. The data required for the risk estimation were obtained from historical records. A total of 901 landslides initiating from cut slopes along the railway and road alignments were catalogued. The landslides were grouped into three magnitude classes based on the landslide type, volume, scar depth, run-out distance, etc., and their probability of occurrence was obtained using a frequency-volume distribution. Hazard, for a given return period, expressed as the number of landslides of a given magnitude class per kilometre of cut slopes, was obtained using a Gumbel distribution and the probability of landslide magnitude. In total, 18 specific hazard scenarios were generated using the three magnitude classes and six return periods (1, 3, 5, 15, 25, and 50 years). The assessment of the vulnerability of the road and railway line was based on damage records, whereas the vulnerability of different types of vehicles and people was assessed subjectively based on limited historical incidents. Direct specific loss for the alignments (railway line and road) and vehicles (train, bus, lorry, car and motorbike) was expressed in monetary value (US$), and direct specific loss of life of commuters was expressed as an annual probability of death. Indirect specific loss (US$) derived from the traffic interruption was evaluated considering alternative driving routes, and includes losses resulting from additional fuel consumption, additional travel cost, loss of income to local business, and loss of revenue to the railway department. The results indicate that the total loss, including both direct and indirect loss, for return periods from 1 to 50 years varies from US$ 90 840 to US$ 779 500, and the average annual total loss was estimated as US$ 35 000. The annual probability of death for the person most at risk travelling in a bus, lorry, car, motorbike or train is less than 10^-4 per annum for all the time periods considered. The detailed estimation of direct and indirect risk will facilitate the development of landslide risk mitigation and management strategies for transportation lines in the study area.
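
    A sketch of the Gumbel step under stated assumptions: fitting annual landslide counts (the numbers below are hypothetical) and reading off the count not exceeded at a given return period T, i.e. the (1 - 1/T) quantile.

        import numpy as np
        from scipy.stats import gumbel_r

        # hypothetical annual landslide counts per km of cut slope
        annual_counts = np.array([2, 5, 1, 7, 3, 4, 9, 2, 6, 3, 8, 4])

        loc, scale = gumbel_r.fit(annual_counts)    # maximum-likelihood fit

        # the 1-yr case is degenerate (quantile at probability 0), so skip it
        for T in (3, 5, 15, 25, 50):                # return periods in years
            x_T = gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
            print(f"{T:>2}-yr return level: {x_T:.1f} landslides/km")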

  8. Fatigue Management Strategies for the Stratospheric Observatory for Infrared Astronomy

    NASA Technical Reports Server (NTRS)

    Bendrick, Gregg

    2012-01-01

    Operation of the Stratospheric Observatory for Infrared Astronomy entails a great deal of night-time work, with the potential for both acute and chronic sleep loss, as well as circadian rhythm desynchrony. Such fatigue can result in performance decrements, with an increased risk of operator error. The NASA Dryden Flight Research Center manages this fatigue risk by means of a layered approach that includes: 1) education and training, 2) work schedule scoring, 3) obtained-sleep metrics, 4) workplace and operational mitigations, and 5) incident or accident investigation. Specifically, quantitative estimation of the work schedule score, as well as the obtained-sleep metric, allows supervisors and managers to better manage the risk of fatigue within the context of mission requirements.

  9. A method for modeling bias in a person's estimates of likelihoods of events

    NASA Technical Reports Server (NTRS)

    Nygren, Thomas E.; Morera, Osvaldo

    1988-01-01

    It is of practical importance in decision situations involving risk to train individuals to transform uncertainties into subjective probability estimates that are both accurate and unbiased. We have found that in decision situations involving risk, people often introduce subjective bias into their estimation of the likelihoods of events, depending on whether the possible outcomes are perceived as being good or bad. Until now, however, the successful measurement of individual differences in the magnitude of such biases has not been attempted. In this paper we illustrate a modification of a procedure originally outlined by Davidson, Suppes, and Siegel (3) that allows for a quantitative methodology for simultaneously estimating an individual's subjective utility and subjective probability functions. The procedure is now an interactive computer-based algorithm, DSS, that allows for the measurement of biases in probability estimation by obtaining independent measures of two subjective probability functions (S+ and S-), for winning (i.e., good outcomes) and losing (i.e., bad outcomes) respectively, for each individual and for different experimental conditions within individuals. The algorithm and some recent empirical data are described.

  10. Reliability of environmental sampling culture results using the negative binomial intraclass correlation coefficient.

    PubMed

    Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming

    2014-01-01

    The intraclass correlation coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM)-based ICC can be estimated. A common transformation used is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports on the negative binomial ICC estimate, which includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the LMM-based ICC, even when the negative binomial data were log- or square-root-transformed. A second comparison targeting a wider range of ICC values showed that the mean of the estimated ICCs closely approximated the true ICC.
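
    For orientation, the classical route the paper benchmarks against reduces, for a balanced one-way design, to the ANOVA estimator ICC = (MSB - MSW) / (MSB + (k - 1) * MSW). The sketch below implements that baseline on transformed counts; it is not the paper's negative binomial GLMM estimator.

        import numpy as np

        def icc_oneway(groups):
            # classical one-way ANOVA ICC(1) for equal group size k;
            # `groups` is a list of 1-D arrays, one per pen/subject,
            # e.g. log-transformed culture counts
            k = len(groups[0])
            n = len(groups)
            grand = np.mean(np.concatenate(groups))
            msb = k * sum((np.mean(g) - grand)**2 for g in groups) / (n - 1)
            msw = sum(np.sum((g - np.mean(g))**2) for g in groups) / (n * (k - 1))
            return (msb - msw) / (msb + (k - 1) * msw)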

  11. A non-parametric automatic blending methodology to estimate rainfall fields from rain gauge and radar data

    NASA Astrophysics Data System (ADS)

    Velasco-Forero, Carlos A.; Sempere-Torres, Daniel; Cassiraga, Eduardo F.; Jaime Gómez-Hernández, J.

    2009-07-01

    Quantitative estimation of rainfall fields has been a crucial objective since early studies of the hydrological applications of weather radar. Previous studies have suggested that flow estimates are improved when radar and rain gauge data are combined to estimate the input rainfall fields. This paper reports new research carried out in this field. Classical approaches for the selection and fitting of a theoretical correlogram (or semivariogram) model (needed to apply geostatistical estimators) are avoided in this study. Instead, a non-parametric technique based on the FFT is used to obtain two-dimensional positive-definite correlograms directly from radar observations, dealing with both the natural anisotropy and the temporal variation of the spatial structure of the rainfall in the estimated fields. Because these correlation maps can be obtained automatically at each time step of a given rainfall event, the technique might easily be used in operational (real-time) applications. This paper describes the development of the non-parametric estimator, exploiting the advantages of the FFT for the automatic computation of correlograms, and provides examples of its application in a case study using six rainfall events. The methodology is applied to three different alternatives for incorporating the radar information (as a secondary variable), and a comparison of their performance is provided. In particular, their ability to reproduce in the estimated rainfall fields (i) the rain gauge observations (in a cross-validation analysis) and (ii) the spatial patterns of the radar fields is analyzed. The results indicate that kriging with external drift (KED), in combination with the technique of automatically computing 2-D spatial correlograms, provides merged rainfall fields that agree well with the rain gauge observations and reproduce the spatial tendencies observed in the radar rainfall fields most accurately among the alternatives analyzed.
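
    The FFT shortcut rests on the Wiener-Khinchin theorem: the inverse FFT of the power spectrum of the mean-removed field gives its (circular) autocovariance, which is positive-definite by construction. A minimal sketch:

        import numpy as np

        def correlogram_2d(field):
            # sample 2-D correlogram of a radar rainfall field via FFT:
            # inverse FFT of the power spectrum of the mean-removed field,
            # normalized to 1 at zero lag (assumes periodic boundaries)
            f = field - field.mean()
            power = np.abs(np.fft.fft2(f))**2
            acov = np.real(np.fft.ifft2(power)) / f.size
            rho = acov / acov[0, 0]
            return np.fft.fftshift(rho)   # put zero lag at the array centre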

  12. Errors in retarding potential analyzers caused by nonuniformity of the grid-plane potential.

    NASA Technical Reports Server (NTRS)

    Hanson, W. B.; Frame, D. R.; Midgley, J. E.

    1972-01-01

    One aspect of the degradation in performance of retarding potential analyzers caused by potential depressions in the retarding grid is quantitatively estimated from laboratory measurements and theoretical calculations. A simple expression is obtained that permits the use of laboratory measurements of grid properties to make first-order corrections to flight data. Systematic positive errors in ion temperature of approximately 16% for the Ogo 4 instrument and 3% for the Ogo 6 instrument are deduced. The effects of the transverse electric fields arising from the grid potential depressions are not treated.

  13. Interatomic potential at small internuclear distances. A simple formula for the screening constant

    NASA Astrophysics Data System (ADS)

    Zinoviev, A. N.

    2017-09-01

    A simple formula for estimating the screening constant has been proposed. The formula fits experimental data on interaction potentials well. A quantitative description of the experimentally observed effect of electronic screening on the fusion reaction cross-section for the D+-D system has been obtained. It is concluded that the differences between measured cross-sections and theoretically predicted values, which occur in more complicated fusion reactions, are not caused by uncertainties in the knowledge of the interaction potentials.

  14. Remote measurements of the atmosphere using Raman scattering.

    NASA Technical Reports Server (NTRS)

    Melfi, S. H.

    1972-01-01

    Raman optical radar measurements of the atmosphere demonstrate that the technique may be used to obtain quantitative measurements of the spatial distribution of individual atmospheric molecular trace constituents (in particular water vapor) and of the major constituents. It is shown that monitoring Raman signals from atmospheric nitrogen aids in interpreting elastic scattering measurements by eliminating attenuation effects. In general, the experimental results show good agreement with independent meteorological measurements. Finally, experimental data are utilized to estimate the Raman backscatter cross section for water vapor excited at 3471.5 A.

  15. Evaluation of the performance of microprocessor-based colorimeter

    PubMed Central

    Randhawa, S. S.; Gupta, R. C.; Bhandari, A. K.; Malhotra, P. S.

    1992-01-01

    Colorimetric estimations have an important role in quantitative studies. An inexpensive and portable microprocessor-based colorimeter developed by the authors is described in this paper. The colorimeter uses a light-emitting diode as the light source, a PIN photodiode as the detector, and an 8085A microprocessor. Blood urea, glucose, total protein, albumin and bilirubin from patient blood samples were analysed with the instrument, and the results obtained were compared with assays of the same blood using a Spectronic 21. A good correlation was found between the results from the two instruments. PMID:18924952

  16. Evaluation of the performance of microprocessor-based colorimeter.

    PubMed

    Randhawa, S S; Gupta, R C; Bhandari, A K; Malhotra, P S

    1992-01-01

    Colorimetric estimations have an important role in quantitative studies. An inexpensive and portable microprocessor-based colorimeter developed by the authors is described in this paper. The colorimeter uses a light-emitting diode as the light source, a PIN photodiode as the detector, and an 8085A microprocessor. Blood urea, glucose, total protein, albumin and bilirubin from patient blood samples were analysed with the instrument, and the results obtained were compared with assays of the same blood using a Spectronic 21. A good correlation was found between the results from the two instruments.

  17. Protein retention assessment of four levels of poultry by-product substitution of fishmeal in rainbow trout (Oncorhynchus mykiss) diets using stable isotopes of nitrogen (δ15N) as natural tracers.

    PubMed

    Badillo, Daniel; Herzka, Sharon Z; Viana, Maria Teresa

    2014-01-01

    This is the second part of an experiment in which the nitrogen retention of poultry by-product meal (PBM) relative to fishmeal (FM) was evaluated using traditional indices. Here, a quantitative method using stable isotope ratios of nitrogen (δ15N values) as natural tracers of nitrogen incorporation into fish biomass is assessed. Juvenile rainbow trout (Oncorhynchus mykiss) were fed for 80 days on isotopically distinct diets in which 0, 33, 66 and 100% of FM as the main protein source was replaced by PBM. The diets were isonitrogenous, isolipidic and similar in gross energy content. Fish in all treatments reached isotopic equilibrium by the end of the experiment. Two-source isotope mixing models, incorporating the isotopic composition of FM and PBM as well as that of the formulated feeds, empirically derived trophic discrimination factors, and the isotopic composition of fish that had reached isotopic equilibrium with their diets, were used to obtain a quantitative estimate of the retention of each source of nitrogen. Fish fed the diets with 33 and 66% replacement of FM by PBM retained poultry by-product meal roughly in proportion to its level of inclusion in the diets, whereas no differences were detected in the protein efficiency ratio. Coupled with the similar biomass gain of fish fed the different diets, our results support the inclusion of PBM as a replacement for fishmeal in aquaculture feeds. A re-feeding experiment in which all fish were fed a diet of 100% FM for 28 days indicated that isotopic turnover occurred very quickly, providing further support for the potential of isotopic ratios as tracers of the retention of specific protein sources in fish tissues. Stable isotope analysis is a useful tool for studies that seek to obtain quantitative estimates of the retention of different protein sources.
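
    The two-source mixing model at the heart of the method solves one linear equation: the tissue δ15N is a weighted average of the two source values, each shifted by the trophic discrimination factor. A sketch with hypothetical δ15N values:

        def fishmeal_fraction(delta_tissue, delta_fm, delta_pbm, tdf):
            # fraction of tissue nitrogen derived from fishmeal, solving
            # delta_tissue = f*(delta_fm + tdf) + (1 - f)*(delta_pbm + tdf)
            return (delta_tissue - (delta_pbm + tdf)) / (delta_fm - delta_pbm)

        # hypothetical values (permil): tissue 12.0, FM 9.5, PBM 4.0, TDF 3.0
        f_fm = fishmeal_fraction(12.0, 9.5, 4.0, 3.0)   # ~0.91 from fishmeal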

  18. Protein Retention Assessment of Four Levels of Poultry By-Product Substitution of Fishmeal in Rainbow Trout (Oncorhynchus mykiss) Diets Using Stable Isotopes of Nitrogen (δ15N) as Natural Tracers

    PubMed Central

    Badillo, Daniel; Herzka, Sharon Z.; Viana, Maria Teresa

    2014-01-01

    This is the second part of an experiment in which the nitrogen retention of poultry by-product meal (PBM) relative to fishmeal (FM) was evaluated using traditional indices. Here, a quantitative method using stable isotope ratios of nitrogen (δ15N values) as natural tracers of nitrogen incorporation into fish biomass is assessed. Juvenile rainbow trout (Oncorhynchus mykiss) were fed for 80 days on isotopically distinct diets in which 0, 33, 66 and 100% of FM as the main protein source was replaced by PBM. The diets were isonitrogenous, isolipidic and similar in gross energy content. Fish in all treatments reached isotopic equilibrium by the end of the experiment. Two-source isotope mixing models, incorporating the isotopic composition of FM and PBM as well as that of the formulated feeds, empirically derived trophic discrimination factors, and the isotopic composition of fish that had reached isotopic equilibrium with their diets, were used to obtain a quantitative estimate of the retention of each source of nitrogen. Fish fed the diets with 33 and 66% replacement of FM by PBM retained poultry by-product meal roughly in proportion to its level of inclusion in the diets, whereas no differences were detected in the protein efficiency ratio. Coupled with the similar biomass gain of fish fed the different diets, our results support the inclusion of PBM as a replacement for fishmeal in aquaculture feeds. A re-feeding experiment in which all fish were fed a diet of 100% FM for 28 days indicated that isotopic turnover occurred very quickly, providing further support for the potential of isotopic ratios as tracers of the retention of specific protein sources in fish tissues. Stable isotope analysis is a useful tool for studies that seek to obtain quantitative estimates of the retention of different protein sources. PMID:25226392

  19. Using Dynamic Contrast Enhanced MRI to Quantitatively Characterize Maternal Vascular Organization in the Primate Placenta

    PubMed Central

    Frias, A.E.; Schabel, M.C.; Roberts, V.H.J.; Tudorica, A.; Grigsby, P.L.; Oh, K.Y.; Kroenke, C. D.

    2015-01-01

    Purpose: The maternal microvasculature of the primate placenta is organized into 10-20 perfusion domains that are functionally optimized to facilitate nutrient exchange to support fetal growth. This study describes a dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) method for identifying vascular domains and quantifying maternal blood flow within them. Methods: A rhesus macaque on the 133rd day of pregnancy (G133, term = 165 days) underwent Doppler ultrasound (US) procedures, DCE-MRI, and Cesarean-section delivery. Serial T1-weighted images acquired throughout intravenous injection of a contrast reagent (CR) bolus were analyzed to obtain CR arrival time maps of the placenta. Results: Watershed segmentation of the arrival time map identified 16 perfusion domains. The number and location of these domains corresponded to the anatomical cotyledonary units observed following delivery. Analysis of the CR wave front through each perfusion domain enabled determination of volumetric flow, which ranged from 9.03 to 44.9 mL/sec (mean 25.2 ± 10.3 mL/sec). These estimates are supported by the Doppler US results. Conclusions: The DCE-MRI analysis described here provides quantitative estimates of the number of maternal perfusion domains in a primate placenta and estimates the flow within each domain. Anticipated extensions of this technique are to the study of placental function in nonhuman primate models of obstetric complications. PMID:24753177

  20. Simultaneous Estimation of Withaferin A and Z-Guggulsterone in Marketed Formulation by RP-HPLC.

    PubMed

    Agrawal, Poonam; Vegda, Rashmi; Laddha, Kirti

    2015-07-01

    A simple, rapid, precise and accurate high-performance liquid chromatography (HPLC) method was developed for the simultaneous estimation of withaferin A and Z-guggulsterone in a polyherbal formulation containing Withania somnifera and Commiphora wightii. Chromatographic separation was achieved on a Purosphere RP-18 column (particle size 5 µm) with a mobile phase consisting of solvent A (acetonitrile) and solvent B (water) with the following gradient: 0-7 min, 50% A in B; 7-9 min, 50-80% A in B; 9-20 min, 80% A in B, at a flow rate of 1 mL/min and detection at 235 nm. The marker compounds were well separated on the chromatogram within 20 min. The results obtained indicate the accuracy and reliability of the developed simultaneous HPLC method for the quantification of withaferin A and Z-guggulsterone. The proposed method was found to be reproducible, specific, precise and accurate for the simultaneous estimation of these marker compounds in a combined dosage form. The two markers are well resolved, enabling efficient quantitative analysis, and the method can be successfully applied to the quantitative analysis of these two marker constituents in a marketed polyherbal formulation. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Caught in the middle with multiple displacement amplification: the myth of pooling for avoiding multiple displacement amplification bias in a metagenome.

    PubMed

    Marine, Rachel; McCarren, Coleen; Vorrasane, Vansay; Nasko, Dan; Crowgey, Erin; Polson, Shawn W; Wommack, K Eric

    2014-01-30

    Shotgun metagenomics has become an important tool for investigating the ecology of microorganisms. Underlying these investigations is the assumption that metagenome sequence data accurately estimates the census of microbial populations. Multiple displacement amplification (MDA) of microbial community DNA is often used in cases where it is difficult to obtain enough DNA for sequencing; however, MDA can result in amplification biases that may impact subsequent estimates of population census from metagenome data. Some have posited that pooling replicate MDA reactions negates these biases and restores the accuracy of population analyses. This assumption has not been empirically tested. Using mock viral communities, we examined the influence of pooling on population-scale analyses. In pooled and single reaction MDA treatments, sequence coverage of viral populations was highly variable and coverage patterns across viral genomes were nearly identical, indicating that initial priming biases were reproducible and that pooling did not alleviate biases. In contrast, control unamplified sequence libraries showed relatively even coverage across phage genomes. MDA should be avoided for metagenomic investigations that require quantitative estimates of microbial taxa and gene functional groups. While MDA is an indispensable technique in applications such as single-cell genomics, amplification biases cannot be overcome by combining replicate MDA reactions. Alternative library preparation techniques should be utilized for quantitative microbial ecology studies utilizing metagenomic sequencing approaches.

  2. High-throughput process development: determination of dynamic binding capacity using microtiter filter plates filled with chromatography resin.

    PubMed

    Bergander, Tryggve; Nilsson-Välimaa, Kristina; Oberg, Katarina; Lacki, Karol M

    2008-01-01

    Steadily increasing demand for more efficient and more affordable biomolecule-based therapies puts a significant burden on biopharma companies to reduce the cost of the R&D activities associated with introducing a new drug to the market. Reducing the time required to develop a purification process is one option for addressing the high-cost issue. The reduction in time can be accomplished if more efficient methods and tools, including high-throughput techniques, are available for process development work. This paper addresses the transition from traditional column-based process development to a modern high-throughput approach utilizing microtiter filter plates filled with a well-defined volume of chromatography resin. The approach is based on implementing the well-known batch uptake principle in microtiter plate geometry. Two variants of the proposed approach, allowing for either qualitative or quantitative estimation of dynamic binding capacity as a function of residence time, are described. Examples are given of quantitative estimation of the dynamic binding capacity of human polyclonal IgG on MabSelect SuRe and of qualitative estimation of the dynamic binding capacity of amyloglucosidase on a prototype of the Capto DEAE weak ion exchanger. The proposed high-throughput method for determination of dynamic binding capacity significantly reduces time and sample consumption compared to a traditional method utilizing packed chromatography columns, without sacrificing the accuracy of the data obtained.

  3. Patient-specific lean body mass can be estimated from limited-coverage computed tomography images.

    PubMed

    Devriese, Joke; Beels, Laurence; Maes, Alex; van de Wiele, Christophe; Pottel, Hans

    2018-06-01

    In PET/CT, quantitative evaluation of tumour metabolic activity is possible through standardized uptake values, usually normalized for body weight (BW) or lean body mass (LBM). Patient-specific LBM can be estimated from whole-body (WB) CT images. As most clinical indications only warrant PET/CT examinations covering head to mid-thigh, the aim of this study was to develop a simple and reliable method to estimate LBM from limited-coverage (LC) CT images and to test its validity. Head-to-toe PET/CT examinations were retrospectively retrieved and semiautomatically segmented into tissue types based on thresholding of CT Hounsfield units. LC was obtained by omitting image slices. Image segmentation was validated on the WB CT examinations by comparing CT-estimated BW with actual BW, and LBM estimated from LC images was compared with LBM estimated from WB images. A direct method and an indirect method were developed and validated on an independent data set. Comparing LBM estimated from LC examinations with estimates from WB examinations (LBM_WB) showed a significant but limited bias of 1.2 kg (direct method) and a nonsignificant bias of 0.05 kg (indirect method). This study demonstrates that LBM can be estimated from LC CT images with no significant difference from LBM_WB.

  4. Estimation of infection prevalence and sensitivity in a stratified two-stage sampling design employing highly specific diagnostic tests when there is no gold standard.

    PubMed

    Miller, Ezer; Huppert, Amit; Novikov, Ilya; Warburg, Alon; Hailu, Asrat; Abbasi, Ibrahim; Freedman, Laurence S

    2015-11-10

    In this work, we describe a two-stage sampling design to estimate the infection prevalence in a population. In the first stage, an imperfect diagnostic test was performed on a random sample of the population. In the second stage, a different imperfect test was performed on a stratified random sample of the first sample. To estimate the infection prevalence, we assumed conditional independence between the diagnostic tests and developed method-of-moments estimators based on the expected proportions of people with positive and negative results on both tests, which are functions of the tests' sensitivities, specificities, and the infection prevalence. A closed-form solution of the estimating equations was obtained by assuming a specificity of 100% for both tests. We applied our method to estimate the infection prevalence of visceral leishmaniasis according to two quantitative polymerase chain reaction tests performed on blood samples taken from 4756 patients in northern Ethiopia. The sensitivities of the tests were also estimated, as were the standard errors of all estimates, using a parametric bootstrap. We also examined the impact of departures from our assumptions of 100% specificity and conditional independence on the estimated prevalence. Copyright © 2015 John Wiley & Sons, Ltd.
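
    With 100% specificity and conditional independence, the moment equations have a particularly clean closed form: P(T1+) = s1*pi, P(T2+) = s2*pi and P(both+) = s1*s2*pi, so pi = P1*P2/P12. The sketch below covers the simplified case of a single random subsample; the paper's stratified second stage adds weighting not reproduced here.

        def prevalence_mom(p1, p2, p12):
            # method-of-moments estimates assuming perfect specificity and
            # conditionally independent sensitivities s1, s2:
            #   p1 = s1*pi, p2 = s2*pi, p12 = s1*s2*pi
            # returns (prevalence, sensitivity_1, sensitivity_2)
            return p1 * p2 / p12, p12 / p2, p12 / p1

        # e.g. 12% positive on test 1, 10% on test 2, 6% positive on both
        pi, s1, s2 = prevalence_mom(0.12, 0.10, 0.06)   # -> 0.20, 0.60, 0.50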

  5. Radar-derived quantitative precipitation estimation in complex terrain over the eastern Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Gou, Yabin; Ma, Yingzhao; Chen, Haonan; Wen, Yixin

    2018-05-01

    Quantitative precipitation estimation (QPE) is one of the important applications of weather radar. However, in complex terrain such as the Tibetan Plateau, it is a challenging task to obtain an optimal Z-R relation due to the complex spatial and temporal variability of precipitation microphysics. This paper develops two radar QPE schemes, based respectively on a reflectivity threshold (RT) and on the Storm Cell Identification and Tracking (SCIT) algorithm, using observations from 11 Doppler weather radars and 3264 rain gauges over the eastern Tibetan Plateau (ETP). The two QPE methodologies are evaluated extensively using four precipitation events characterized by different meteorological features. Precipitation characteristics of the independent storm cells associated with these four events, as well as the storm-scale differences, are investigated using short-term vertical profile of reflectivity (VPR) clusters. Evaluation results show that the SCIT-based rainfall approach performs better than the simple RT-based method for all precipitation events in terms of skill scores computed against validation gauge measurements. It is also found that the SCIT-based approach can effectively mitigate local radar QPE errors and better represent the spatiotemporal variability of precipitation than the RT-based scheme.
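
    For reference, a Z-R relation is the power law Z = a * R^b between radar reflectivity Z and rain rate R. A minimal inversion is sketched below with one common coefficient pair (a = 300, b = 1.4, a widely used convective default), precisely the kind of fixed choice the paper argues is suboptimal in complex terrain.

        import numpy as np

        def rain_rate_from_dbz(dbz, a=300.0, b=1.4):
            # invert Z = a * R**b for rain rate R (mm/h) from reflectivity
            # in dBZ; coefficients are illustrative, not the paper's values
            z = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)  # dBZ -> linear Z
            return (z / a) ** (1.0 / b)

        print(rain_rate_from_dbz(40.0))   # ~11.5 mm/h for 40 dBZ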

  6. Inertial Sensor-Based Motion Analysis of Lower Limbs for Rehabilitation Treatments

    PubMed Central

    Sun, Tongyang; Duan, Lihong; Wang, Yulong

    2017-01-01

    Diagnosis of the hemiplegic rehabilitation state by therapists can be biased by their subjective experience, which may degrade the rehabilitation outcome. In order to improve this situation, a quantitative evaluation is proposed. Though many motion analysis systems are available, they are too complicated for practical application by therapists. In this paper, a method for detecting the motion of the human lower limbs, including all degrees of freedom (DOFs), via inertial sensors is proposed, which permits analysis of the patient's motion ability. The method is applicable to arbitrary walking directions and tracks of the persons under study, and its results are unbiased, in contrast to therapists' qualitative estimations. Using a simplified mathematical model of the human body, the rotation angles of each lower-limb joint are calculated from the input signals acquired by the inertial sensors. Finally, rotation angle versus joint displacement curves are constructed, and estimated values of joint motion angle and motion ability are obtained. Experimental verification of the proposed motion detection and analysis method was performed, showing that it can efficiently detect the differences between the motion behaviors of disabled and healthy persons and provide a reliable quantitative evaluation of the rehabilitation state. PMID:29065575

  7. Achieving across-laboratory replicability in psychophysical scaling

    PubMed Central

    Ward, Lawrence M.; Baumann, Michael; Moffat, Graeme; Roberts, Larry E.; Mori, Shuji; Rutledge-Taylor, Matthew; West, Robert L.

    2015-01-01

    It is well known that, although psychophysical scaling produces good qualitative agreement between experiments, precise quantitative agreement between experimental results, such as that routinely achieved in physics or biology, is rarely or never attained. A particularly galling example of this is the fact that power function exponents for the same psychological continuum, measured in different laboratories but ostensibly using the same scaling method, magnitude estimation, can vary by a factor of three. Constrained scaling (CS), in which observers first learn a standardized meaning for a set of numerical responses relative to a standard sensory continuum and then make magnitude judgments of other sensations using the learned response scale, has produced excellent quantitative agreement between individual observers’ psychophysical functions. Theoretically it could do the same for across-laboratory comparisons, although this needs to be tested directly. We compared nine different experiments from four different laboratories as an example of the level of across experiment and across-laboratory agreement achievable using CS. In general, we found across experiment and across-laboratory agreement using CS to be significantly superior to that typically obtained with conventional magnitude estimation techniques, although some of its potential remains to be realized. PMID:26191019

  8. Predicting low-temperature free energy landscapes with flat-histogram Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Mahynski, Nathan A.; Blanco, Marco A.; Errington, Jeffrey R.; Shen, Vincent K.

    2017-02-01

    We present a method for predicting the free energy landscape of fluids at low temperatures from flat-histogram grand canonical Monte Carlo simulations performed at higher ones. We illustrate our approach for both pure and multicomponent systems using two different sampling methods as a demonstration. This allows us to predict the thermodynamic behavior of systems which undergo both first order and continuous phase transitions upon cooling using simulations performed only at higher temperatures. After surveying a variety of different systems, we identify a range of temperature differences over which the extrapolation of high temperature simulations tends to quantitatively predict the thermodynamic properties of fluids at lower ones. Beyond this range, extrapolation still provides a reasonably well-informed estimate of the free energy landscape; this prediction then requires less computational effort to refine with an additional simulation at the desired temperature than reconstruction of the surface without any initial estimate. In either case, this method significantly increases the computational efficiency of these flat-histogram methods when investigating thermodynamic properties of fluids over a wide range of temperatures. For example, we demonstrate how a binary fluid phase diagram may be quantitatively predicted for many temperatures using only information obtained from a single supercritical state.
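
    A related, simpler member of this reweighting family is single-histogram reweighting of the grand canonical macrostate distribution in chemical potential, ln Pi(N; mu') = ln Pi(N; mu) + beta*(mu' - mu)*N up to normalization; the paper's temperature extrapolation uses higher moments, which are not reproduced here.

        import numpy as np

        def reweight_lnpi(lnpi, n, beta, mu_old, mu_new):
            # shift a grand canonical macrostate distribution ln Pi(N) from
            # chemical potential mu_old to mu_new, then renormalize so that
            # sum_N Pi(N) = 1 (log-sum-exp for numerical stability)
            shifted = lnpi + beta * (mu_new - mu_old) * n
            log_norm = shifted.max() + np.log(np.sum(np.exp(shifted - shifted.max())))
            return shifted - log_norm

        # n = np.arange(lnpi.size) would be the particle-number grid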

  9. A mathematical function for the description of nutrient-response curve

    PubMed Central

    Ahmadi, Hamed

    2017-01-01

    Several mathematical equations have been proposed for modeling the nutrient-response curve in animals and humans, justified by goodness of fit and/or by biological mechanism. In this paper, the functional form of a generalized quantitative model based on the Rayleigh distribution principle for describing nutrient-response phenomena is derived. The three parameters governing the curve a) have biological interpretations, b) may be used to calculate reliable estimates of nutrient-response relationships, and c) provide the basis for deriving relationships between nutrient and physiological responses. The new function was successfully applied to fit nutritional data obtained from six experiments covering a wide range of nutrients and responses. An evaluation and comparison based on simulated data sets were also carried out to check the suitability of the new model and the four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple and flexible model when applied as a quantitative approach to characterizing the nutrient-response curve. This new mathematical way of describing nutrient-response data, with some useful biological interpretations, has the potential to be used as an alternative approach in modeling nutrient-response curves to estimate nutrient efficiency and requirements. PMID:29161271

  10. 75 FR 35990 - Endangered and Threatened Wildlife and Plants; Listing the Flying Earwig Hawaiian Damselfly and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-24

    ... this location in 2008. No quantitative estimate of the size of this remaining population is available... observed in 1998. No quantitative estimates of the size of the extant populations are available. Howarth...

  11. Predictive performance for population models using stochastic differential equations applied on data from an oral glucose tolerance test.

    PubMed

    Møller, Jonas B; Overgaard, Rune V; Madsen, Henrik; Hansen, Torben; Pedersen, Oluf; Ingwersen, Steen H

    2010-02-01

    Several articles have investigated stochastic differential equations (SDEs) in PK/PD models, but few have quantitatively investigated the benefits to predictive performance of models based on real data. Estimation of first-phase insulin secretion, which reflects beta-cell function, using models of the OGTT is a difficult problem in need of further investigation. The present work aimed at investigating the power of SDEs to predict the first-phase insulin secretion (AIR(0-8)) in the IVGTT based on parameters obtained from the minimal model of the OGTT, published by Breda et al. (Diabetes 50(1):150-158, 2001). In total, 174 subjects underwent both an OGTT and a tolbutamide-modified IVGTT. Estimation of the parameters in the oral minimal model (OMM) was performed using the FOCE method in NONMEM VI on insulin and C-peptide measurements. The suggested SDE models were based on a continuous AR(1) process, i.e., the Ornstein-Uhlenbeck process, and the extended Kalman filter was implemented in order to estimate the parameters of the models. Inclusion of the Ornstein-Uhlenbeck (OU) process improved the description of the variation in the data, as measured by the autocorrelation function (ACF) of the one-step prediction errors. A main result was that application of the SDE models improved the correlation between the individual first-phase indexes obtained from the OGTT and AIR(0-8) (r = 0.36 to r = 0.49 and r = 0.32 to r = 0.47 with C-peptide and insulin measurements, respectively). In addition to the increased correlation, the indexes obtained using the SDE models also more correctly reflected the properties of the first-phase indexes obtained from the IVGTT. In general, it is concluded that the presented SDE approach not only decreased the autocorrelation of the errors but also improved the estimation of clinical measures obtained from the glucose tolerance tests. Since the estimation time of the extended models was not heavily increased compared to the basic models, the applied method is concluded to have high relevance not only in theory but also in practice.
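
    For readers unfamiliar with the stochastic term, the OU process is the continuous-time analogue of an AR(1) series, dX = theta*(mu - X) dt + sigma dW. A minimal Euler-Maruyama simulation is sketched below; the parameter values are arbitrary, not estimates from the study.

        import numpy as np

        def simulate_ou(theta=0.1, mu=0.0, sigma=0.5, x0=0.0,
                        t_end=180.0, dt=0.5, seed=0):
            # Euler-Maruyama scheme for dX = theta*(mu - X) dt + sigma dW
            rng = np.random.default_rng(seed)
            n_steps = int(t_end / dt)
            x = np.empty(n_steps + 1)
            x[0] = x0
            for i in range(n_steps):
                dw = np.sqrt(dt) * rng.standard_normal()
                x[i + 1] = x[i] + theta * (mu - x[i]) * dt + sigma * dw
            return x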

  12. Proficiency testing as a basis for estimating uncertainty of measurement: application to forensic alcohol and toxicology quantitations.

    PubMed

    Wallace, Jack

    2010-05-01

    While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
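
    Approach (ii) above amounts to pooling a laboratory's deviations from the proficiency-test consensus means; a minimal sketch, with placeholder numbers rather than real proficiency data, computes the standard uncertainty as the root mean square of those deviations:

    ```python
    # Minimal sketch of approach (ii): blood-alcohol uncertainty from a
    # laboratory's history of proficiency tests. Values are placeholders.
    import numpy as np

    lab_results = np.array([0.081, 0.152, 0.099, 0.249, 0.122])  # g/dL
    consensus   = np.array([0.080, 0.150, 0.101, 0.245, 0.120])  # participant means

    deviations = lab_results - consensus
    u = np.sqrt(np.mean(deviations**2))      # standard uncertainty (RMS deviation)
    print(f"standard uncertainty: {u:.4f} g/dL; expanded (k=2): {2 * u:.4f} g/dL")
    ```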

  13. Monochloramine Disinfection Kinetics of Nitrosomonas europaea by Propidium Monoazide Quantitative PCR and Live/Dead BacLight Methods

    PubMed Central

    Wahman, David G.; Wulfeck-Kleier, Karen A.; Pressman, Jonathan G.

    2009-01-01

    Monochloramine disinfection kinetics were determined for the pure-culture ammonia-oxidizing bacterium Nitrosomonas europaea (ATCC 19718) by two culture-independent methods, namely, Live/Dead BacLight (LD) and propidium monoazide quantitative PCR (PMA-qPCR). Both methods were first verified with mixtures of heat-killed (nonviable) and non-heat-killed (viable) cells before a series of batch disinfection experiments with stationary-phase cultures (batch grown for 7 days) at pH 8.0, 25°C, and 5, 10, and 20 mg Cl2/liter monochloramine. Two data sets were generated based on the viability method used, either (i) LD or (ii) PMA-qPCR. These two data sets were used to estimate kinetic parameters for the delayed Chick-Watson disinfection model through a Bayesian analysis implemented in WinBUGS. This analysis provided parameter estimates of 490 mg Cl2-min/liter for the lag coefficient (b) and 1.6 × 10−3 to 4.0 × 10−3 liter/mg Cl2-min for the Chick-Watson disinfection rate constant (k). While estimates of b were similar for both data sets, the LD data set resulted in a greater k estimate than that obtained with the PMA-qPCR data set, implying that the PMA-qPCR viability measure was more conservative than LD. For N. europaea, the lag phase was not previously reported for culture-independent methods and may have implications for nitrification in drinking water distribution systems. This is the first published application of a PMA-qPCR method for disinfection kinetic model parameter estimation as well as its application to N. europaea or monochloramine. Ultimately, this PMA-qPCR method will allow evaluation of monochloramine disinfection kinetics for mixed-culture bacteria in drinking water distribution systems. PMID:19561179
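
    For reference, the delayed Chick-Watson model fitted above is commonly written with a lag in the disinfectant exposure Ct (mg Cl2-min/liter), using the lag coefficient b and rate constant k reported in the abstract:

    ```latex
    \ln\frac{N}{N_0} =
    \begin{cases}
    0, & Ct \le b,\\
    -k\,(Ct - b), & Ct > b.
    \end{cases}
    ```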

  14. Security Events and Vulnerability Data for Cybersecurity Risk Estimation.

    PubMed

    Allodi, Luca; Massacci, Fabio

    2017-08-01

    Current industry standards for estimating cybersecurity risk are based on qualitative risk matrices as opposed to quantitative risk estimates. In contrast, risk assessment in most other industry sectors aims at deriving quantitative risk estimations (e.g., Basel II in Finance). This article presents a model and methodology to leverage the large amount of data available from the IT infrastructure of an organization's security operation center to quantitatively estimate the probability of attack. Our methodology specifically addresses untargeted attacks delivered by automatic tools that make up the vast majority of attacks in the wild against users and organizations. We consider two-stage attacks whereby the attacker first breaches an Internet-facing system, and then escalates the attack to internal systems by exploiting local vulnerabilities in the target. Our methodology factors in the power of the attacker as the number of "weaponized" vulnerabilities he/she can exploit, and can be adjusted to match the risk appetite of the organization. We illustrate our methodology by using data from a large financial institution, and discuss the significant mismatch between traditional qualitative risk assessments and our quantitative approach. © 2017 Society for Risk Analysis.

  15. An Extensive Unified Thermo-Electric Module Characterization Method

    PubMed Central

    Attivissimo, Filippo; Guarnieri Calò Carducci, Carlo; Lanzolla, Anna Maria Lucia; Spadavecchia, Maurizio

    2016-01-01

    Thermo-Electric Modules (TEMs) are being increasingly used in power generation as a valid alternative to batteries, providing autonomy to sensor nodes or entire Wireless Sensor Networks, especially in energy harvesting applications. Manufacturers often provide some essential parameters under specific conditions, such as the maximum temperature difference between the surfaces of the TEM or the maximum heat absorption, but in many cases a TEM-based system operates under the best conditions only for a fraction of the time; thus, when dynamic working conditions occur, estimating the performance of TEMs is crucial to determine their actual efficiency. The focus of this work is on a novel procedure to estimate the parameters of both the electrical and thermal equivalent models and to investigate their relationship with the operating temperature and the temperature gradient. The novelty of the method consists in the use of a simple test configuration to stimulate the modules and simultaneously acquire electrical and thermal data, so that all parameters are obtained in a single test. Two different current profiles are proposed as possible stimuli, whose use depends on the available test instrumentation, and their relative performances are compared both quantitatively and qualitatively, in terms of standard deviation and estimation uncertainty. The obtained results, besides agreeing with both the technical literature and a further estimation method based on module specifications, also provide the designer with a detailed description of the module behavior, useful for simulating its performance in different scenarios. PMID:27983575

  16. Automated estimation of choroidal thickness distribution and volume based on OCT images of posterior visual section.

    PubMed

    Vupparaboina, Kiran Kumar; Nizampatnam, Srinath; Chhablani, Jay; Richhariya, Ashutosh; Jana, Soumya

    2015-12-01

    A variety of vision ailments are indicated by anomalies in the choroid layer of the posterior visual section. Consequently, choroidal thickness and volume measurements, usually performed by experts based on optical coherence tomography (OCT) images, have assumed diagnostic significance. Now, to save precious expert time, it has become imperative to develop automated methods. To this end, one requires choroid outer boundary (COB) detection as a crucial step, where difficulty arises as the COB divides the choroidal granularity and the scleral uniformity only notionally, without marked brightness variation. In this backdrop, we measure the structural dissimilarity between choroid and sclera by structural similarity (SSIM) index, and hence estimate the COB by thresholding. Subsequently, smooth COB estimates, mimicking manual delineation, are obtained using tensor voting. On five datasets, each consisting of 97 adult OCT B-scans, automated and manual segmentation results agree visually. We also demonstrate close statistical match (greater than 99.6% correlation) between choroidal thickness distributions obtained algorithmically and manually. Further, quantitative superiority of our method is established over existing results by respective factors of 27.67% and 76.04% in two quotient measures defined relative to observer repeatability. Finally, automated choroidal volume estimation, being attempted for the first time, also yields results in close agreement with that of manual methods. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. An Extensive Unified Thermo-Electric Module Characterization Method.

    PubMed

    Attivissimo, Filippo; Guarnieri Calò Carducci, Carlo; Lanzolla, Anna Maria Lucia; Spadavecchia, Maurizio

    2016-12-13

    Thermo-Electric Modules (TEMs) are being increasingly used in power generation as a valid alternative to batteries, providing autonomy to sensor nodes or entire Wireless Sensor Networks, especially in energy harvesting applications. Manufacturers often provide some essential parameters under specific conditions, such as the maximum temperature difference between the surfaces of the TEM or the maximum heat absorption, but in many cases a TEM-based system operates under the best conditions only for a fraction of the time; thus, when dynamic working conditions occur, estimating the performance of TEMs is crucial to determine their actual efficiency. The focus of this work is on a novel procedure to estimate the parameters of both the electrical and thermal equivalent models and to investigate their relationship with the operating temperature and the temperature gradient. The novelty of the method consists in the use of a simple test configuration to stimulate the modules and simultaneously acquire electrical and thermal data, so that all parameters are obtained in a single test. Two different current profiles are proposed as possible stimuli, whose use depends on the available test instrumentation, and their relative performances are compared both quantitatively and qualitatively, in terms of standard deviation and estimation uncertainty. The obtained results, besides agreeing with both the technical literature and a further estimation method based on module specifications, also provide the designer with a detailed description of the module behavior, useful for simulating its performance in different scenarios.

  18. Confidence estimation for quantitative photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena

    2018-02-01

    Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.

  19. Estimation of soil clay and organic matter using two quantitative methods (PLSR and MARS) based on reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Nawar, Said; Buddenbaum, Henning; Hill, Joachim

    2014-05-01

    A rapid and inexpensive soil analytical technique is needed for soil quality assessment and accurate mapping. This study investigated a method for improved estimation of soil clay (SC) and organic matter (OM) using reflectance spectroscopy. Seventy soil samples were collected from the Sinai Peninsula in Egypt to relate soil clay and organic matter content to the soil spectra. Soil samples were scanned with an Analytical Spectral Devices (ASD) spectrometer (350-2500 nm). Three spectral formats were used in the calibration models derived from the spectra and the soil properties: (1) original reflectance spectra (OR), (2) first-derivative spectra smoothed using the Savitzky-Golay technique (FD-SG) and (3) continuum-removed reflectance (CR). Partial least-squares regression (PLSR) models using the CR of the 400-2500 nm spectral region resulted in R2 = 0.76 and 0.57, and RPD = 2.1 and 1.5 for estimating SC and OM, respectively, indicating better performance than that obtained using OR and FD-SG. The multivariate adaptive regression splines (MARS) calibration model with the CR spectra resulted in an improved performance (R2 = 0.89 and 0.83, RPD = 3.1 and 2.4) for estimating SC and OM, respectively. The results show that the MARS models have great potential for estimating SC and OM compared with PLSR models. The results obtained in this study have potential value in the field of soil spectroscopy because they can be applied directly to the mapping of soil properties using remote sensing imagery under arid environmental conditions. Key Words: soil clay, organic matter, PLSR, MARS, reflectance spectroscopy.
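
    As an illustration of the PLSR side of the comparison, the following sketch runs a cross-validated calibration with scikit-learn and computes R2 and RPD (here taken, as is common, as the standard deviation of the reference values over the RMSE of prediction). The arrays are random placeholders standing in for the continuum-removed spectra and the measured soil properties.

    ```python
    # Minimal sketch: cross-validated PLSR calibration with R2 and RPD.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)
    X = rng.random((70, 2151))   # placeholder spectra (70 samples, 350-2500 nm)
    y = rng.random(70)           # placeholder property, e.g. soil clay content

    pls = PLSRegression(n_components=10)
    y_hat = cross_val_predict(pls, X, y, cv=10).ravel()

    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    rpd = np.std(y) / np.sqrt(np.mean((y - y_hat) ** 2))
    print(f"R2={r2:.2f} RPD={rpd:.2f}")
    ```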

  20. Quantitative morphology of the vascularisation of organs: A stereological approach illustrated using the cardiac circulation.

    PubMed

    Mühlfeld, Christian

    2014-01-01

    The vasculature of the heart is able to adapt to various physiological and pathological stimuli and its failure to do so is well-reflected by the great impact of ischaemic heart disease on personal morbidity and mortality and on the health care systems of industrial countries. Studies on physiological or genetic interventions as well as therapeutic angiogenesis rely on quantitative data to characterize the effects in a statistically robust way. The gold standard for obtaining quantitative morphological data is design-based stereology which allows the estimation of volume, surface area, length and number of blood vessels as well as their thickness, diameter or wall composition. Unfortunately, the use of stereological methods for this purpose is still rare. One of the reasons for this is the fact that the transfer of the theoretical foundations into laboratory practice requires a remarkable amount of considerations before touching the first piece of tissue. These considerations, however, are often based on already acquired experience and are usually not dealt with in stereological review articles. The present article therefore delineates the procedures for estimating the most important characteristics of the cardiac vasculature and highlights potential problems and their practical solutions. Worked examples are used to illustrate the methods and provide examples of the calculations. Hopefully, the considerations and examples contained herein will provide researchers in this field with the necessary equipment to add stereological methods to their study designs. Copyright © 2012 Elsevier GmbH. All rights reserved.

  1. Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Osler, John C

    2010-12-01

    This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.

  2. Estimation of Metabolism Characteristics for Heat-Injured Bacteria Using Dielectrophoretic Impedance Measurement Method

    NASA Astrophysics Data System (ADS)

    Amako, Eri; Enjoji, Takaharu; Uchida, Satoshi; Tochikubo, Fumiyoshi

    Constant monitoring and immediate control of fermentation processes are required for advanced quality preservation in the food industry. In the present work, simple estimation of the metabolic states of heat-injured Escherichia coli (E. coli) in a micro-cell was investigated using the dielectrophoretic impedance measurement (DEPIM) method. The temporal change in the conductance across the micro-gap (ΔG) was measured for various heat treatment temperatures. In addition, the dependence of enzyme activity, growth capacity and membrane condition of E. coli on heat treatment temperature was analyzed with conventional biological methods. Consequently, a quantitative correlation between ΔG and those biological properties was obtained. This result suggests that the DEPIM method can serve as an effective technique for monitoring complex changes in the various biological states of microorganisms.

  3. IB-LBM simulation of the haemocyte dynamics in a stenotic capillary.

    PubMed

    Yuan-Qing, Xu; Xiao-Ying, Tang; Fang-Bao, Tian; Yu-Hua, Peng; Yong, Xu; Yan-Jun, Zeng

    2014-01-01

    To study the behaviour of a haemocyte crossing a stenotic capillary, the immersed boundary-lattice Boltzmann method was used to establish a quantitative analysis model. The haemocyte was assumed to be spherical with an elastic cell membrane, which can be driven by blood flow to become highly deformable. In the stenotic capillary, the spherical blood cell was stressed both by the flow and by the wall geometry, and the cell was forced to stretch to cross the stenosis. Our simulation investigated the crossing process in detail. The velocity and pressure fields were analysed to obtain information on how blood flows through the capillary and to estimate the degree of cell damage caused by excessive pressure. Quantitative velocity analysis demonstrated that a large haemocyte crossing a small stenosis has a noticeable effect on blood flow, while quantitative pressure distribution analysis indicated that the crossing process produces a distinctive pressure distribution in the cell interior and, to some extent, a sudden change between the cell interior and the surrounding plasma.

  4. Analysis of ribosomal RNA stability in dead cells of wine yeast by quantitative PCR.

    PubMed

    Sunyer-Figueres, Merce; Wang, Chunxiao; Mas, Albert

    2018-04-02

    During wine production, some yeasts enter a Viable But Not Culturable (VBNC) state, which may influence the quality and stability of the final wine through remnant metabolic activity or by resuscitation. Culture-independent techniques are used for obtaining an accurate estimation of the number of live cells, and quantitative PCR could be the most accurate technique. As a marker of cell viability, rRNA was evaluated by analyzing its stability in dead cells. The species-specific stability of rRNA was tested in Saccharomyces cerevisiae, as well as in three species of non-Saccharomyces yeast (Hanseniaspora uvarum, Torulaspora delbrueckii and Starmerella bacillaris). High temperature and antimicrobial dimethyl dicarbonate (DMDC) treatments were efficient in lysing the yeast cells. rRNA gene and rRNA (as cDNA) were analyzed over 48 h after cell lysis by quantitative PCR. The results confirmed the stability of rRNA for 48 h after the cell lysis treatments. To sum up, rRNA may not be a good marker of cell viability in the wine yeasts that were tested. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Estimation of diastolic intraventricular pressure gradients by Doppler M-mode echocardiography

    NASA Technical Reports Server (NTRS)

    Greenberg, N. L.; Vandervoort, P. M.; Firstenberg, M. S.; Garcia, M. J.; Thomas, J. D.

    2001-01-01

    Previous studies have shown that small intraventricular pressure gradients (IVPG) are important for efficient filling of the left ventricle (LV) and as a sensitive marker for ischemia. Unfortunately, there has previously been no way of measuring these noninvasively, severely limiting their research and clinical utility. Color Doppler M-mode (CMM) echocardiography provides a spatiotemporal velocity distribution along the inflow tract throughout diastole, which we hypothesized would allow direct estimation of IVPG by using the Euler equation. Digital CMM images, obtained simultaneously with intracardiac pressure waveforms in six dogs, were processed by numerical differentiation for the Euler equation, then integrated to estimate IVPG and the total (left atrial to left ventricular apex) pressure drop. CMM-derived estimates agreed well with invasive measurements (IVPG: y = 0.87x + 0.22, r = 0.96, P < 0.001, standard error of the estimate = 0.35 mmHg). Quantitative processing of CMM data allows accurate estimation of IVPG and tracking of changes induced by beta-adrenergic stimulation. This novel approach provides unique information on LV filling dynamics in an entirely noninvasive way that has previously not been available for assessment of diastolic filling and function.
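
    The Euler equation referred to above, written along the scanline coordinate s with velocity field v(s, t) and blood density ρ, gives the pressure gradient directly from the CMM velocity distribution; integrating from the left atrium to the apex yields the total pressure drop:

    ```latex
    \frac{\partial p}{\partial s} = -\rho\left(\frac{\partial v}{\partial t} + v\,\frac{\partial v}{\partial s}\right),
    \qquad
    \Delta p(t) = -\rho \int_{\mathrm{LA}}^{\mathrm{apex}} \left(\frac{\partial v}{\partial t} + v\,\frac{\partial v}{\partial s}\right)\mathrm{d}s .
    ```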

  6. Frequency characteristics of vibration generated by dual acoustic radiation force for estimating viscoelastic properties of biological tissues

    NASA Astrophysics Data System (ADS)

    Watanabe, Ryoichi; Arakawa, Mototaka; Kanai, Hiroshi

    2018-07-01

    We proposed a new method for estimating the viscoelastic property of the local region of a sample. The viscoelastic parameters of the phantoms simulating the biological tissues were quantitatively estimated by analyzing the frequency characteristics of displacement generated by acoustic excitation. The samples were locally strained by irradiating them with the ultrasound simultaneously generated from two point-focusing transducers by applying the sum of two signals with slightly different frequencies of approximately 1 MHz. The surface of a phantom was excited in the frequency range of 20–2,000 Hz, and its displacement was measured. The frequency dependence of the acceleration provided by the acoustic radiation force was also measured. From these results, we determined the frequency characteristics of the transfer function from the stress to the strain and estimated the ratio of the elastic modulus to the viscosity modulus (K/η) by fitting the data to the Maxwell model. Moreover, the elastic modulus K was separately estimated from the measured sound velocity and density of the phantom, and the viscosity modulus η was evaluated by substituting the estimated elastic modulus into the obtained K/η ratio.
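
    For reference, a Maxwell element places a spring and a dashpot in series; with K the elastic modulus and η the viscosity modulus as in the abstract, the standard constitutive relation is

    ```latex
    \dot{\varepsilon}(t) = \frac{\dot{\sigma}(t)}{K} + \frac{\sigma(t)}{\eta} .
    ```

    The corner frequency of the resulting stress-to-strain transfer function is set by the ratio K/η, which is why K/η is estimated from the frequency fit while K itself is obtained separately, here from the measured sound velocity and density.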

  7. The Interaction Affinity between Vascular Cell Adhesion Molecule-1 (VCAM-1) and Very Late Antigen-4 (VLA-4) Analyzed by Quantitative FRET

    PubMed Central

    Wu, Shu-Han; Karmenyan, Artashes; Chiou, Arthur

    2015-01-01

    Very late antigen-4 (VLA-4), a member of the integrin superfamily, interacts with its major counter-ligand vascular cell adhesion molecule-1 (VCAM-1) and plays an important role in leukocyte adhesion to vascular endothelium and immunological synapse formation. However, irregular expression of these proteins may also lead to several autoimmune diseases and metastatic cancer. Thus, quantifying the affinity of the VCAM-1/VLA-4 interaction is of fundamental importance in further understanding the nature of this interaction and in drug discovery. In this study, we report an ‘in solution’ steady-state organic-fluorophore-based quantitative fluorescence resonance energy transfer (FRET) assay to quantify this interaction in terms of the dissociation constant (Kd). In our FRET assay, we used the Alexa Fluor 488-VLA-4 conjugate as the donor and Alexa Fluor 546-VCAM-1 as the acceptor. From the FRET signal analysis, the Kd of this interaction was determined to be 41.82 ± 2.36 nM. To further confirm our estimation, we employed the surface plasmon resonance (SPR) technique to obtain Kd = 39.60 ± 1.78 nM, in good agreement with the result obtained by FRET. This is the first reported work that applies a simple ‘in solution’ organic-fluorophore-based quantitative FRET assay to obtain the dissociation constant of the VCAM-1/VLA-4 interaction, and it is also the first quantification of this interaction. Moreover, the value of Kd can serve as an indicator of abnormal protein-protein interactions; hence, this assay can potentially be developed further into a drug screening platform for VLA-4/VCAM-1 as well as other protein-ligand interactions. PMID:25793408
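
    For context, the dissociation constant quoted above is defined by the binding equilibrium; a lower Kd means a higher affinity, so values near 40 nM indicate a high-affinity interaction:

    ```latex
    \mathrm{VLA\text{-}4} + \mathrm{VCAM\text{-}1} \rightleftharpoons \mathrm{complex},
    \qquad
    K_d = \frac{[\mathrm{VLA\text{-}4}]\,[\mathrm{VCAM\text{-}1}]}{[\mathrm{complex}]} .
    ```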

  8. Allelic-based gene-gene interaction associated with quantitative traits.

    PubMed

    Jung, Jeesun; Sun, Bin; Kwon, Deukwoo; Koller, Daniel L; Foroud, Tatiana M

    2009-05-01

    Recent studies have shown that quantitative phenotypes may be influenced not only by multiple single nucleotide polymorphisms (SNPs) within a gene but also by the interaction between SNPs at unlinked genes. We propose a new statistical approach that can detect gene-gene interactions at the allelic level which contribute to the phenotypic variation in a quantitative trait. By testing for the association of allelic combinations at multiple unlinked loci with a quantitative trait, we can detect the SNP allelic interaction whether or not it can be detected as a main effect. Our proposed method assigns a score to unrelated subjects according to their allelic combination inferred from observed genotypes at two or more unlinked SNPs, and then tests for the association of the allelic score with a quantitative trait. To investigate the statistical properties of the proposed method, we performed a simulation study to estimate type I error rates and power and demonstrated that this allelic approach achieves greater power than the more commonly used genotypic approach to test for gene-gene interaction. As an example, the proposed method was applied to data obtained as part of a candidate gene study of sodium retention by the kidney. We found that this method detects an interaction between the calcium-sensing receptor gene (CaSR), the chloride channel gene (CLCNKB) and the Na, K, 2Cl cotransporter gene (CLC12A1) that contributes to variation in diastolic blood pressure.

  9. Comparison of Maximum Likelihood Estimation Approach and Regression Approach in Detecting Quantitative Trait Loci Using RAPD Markers

    Treesearch

    Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine

    1999-01-01

    Single marker regression and single marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine using a ((longleaf pine x slash pine) x slash pine) BC1 population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could...

  10. Estimating exposures in the asphalt industry for an international epidemiological cohort study of cancer risk.

    PubMed

    Burstyn, Igor; Boffetta, Paolo; Kauppinen, Timo; Heikkilä, Pirjo; Svane, Ole; Partanen, Timo; Stücker, Isabelle; Frentzel-Beyme, Rainer; Ahrens, Wolfgang; Merzenich, Hiltrud; Heederik, Dick; Hooiveld, Mariëtte; Langård, Sverre; Randem, Britt G; Järvholm, Bengt; Bergdahl, Ingvar; Shaham, Judith; Ribak, Joseph; Kromhout, Hans

    2003-01-01

    An exposure matrix (EM) for known and suspected carcinogens was required for a multicenter international cohort study of cancer risk and bitumen among asphalt workers. Production characteristics of the companies enrolled in the study were ascertained through a company questionnaire (CQ). Exposures to coal tar, bitumen fume, organic vapor, polycyclic aromatic hydrocarbons, diesel fume, silica, and asbestos were assessed semi-quantitatively using information from CQs, expert judgment, and statistical models. Exposures of road paving workers to bitumen fume, organic vapor, and benzo(a)pyrene were estimated quantitatively by applying regression models, based on monitoring data, to exposure scenarios identified by the CQs. Exposure estimates were derived for the 217 companies enrolled in the cohort, plus the Swedish asphalt paving industry in general. Most companies were engaged in road paving and asphalt mixing, but some also participated in general construction and roofing. Coal tar use was most common in Denmark and The Netherlands, but the practice is now obsolete. Quantitative estimates of exposure to bitumen fume, organic vapor, and benzo(a)pyrene for pavers and semi-quantitative estimates of exposure to these agents among all subjects were strongly correlated. Semi-quantitative estimates of exposure to bitumen fume and coal tar were only moderately correlated. The EM captured a non-monotonic historical decrease in exposure to all agents assessed except silica and diesel exhaust. We produced a data-driven EM using a methodology that can be adapted for other multicenter studies. Copyright 2003 Wiley-Liss, Inc.

  11. Comparison of LCModel and SAGE in Analysis of Brain Metabolite Concentrations-A study of Patients with Mild Cognitive Impairment.

    PubMed

    Shih, Chiu-Ming; Lai, Jui-Jen; Chang, Chin-Ching; Chen, Cheng-Sheng; Yeh, Yi-Chun; Jaw, Twei-Shiun; Hsu, Jui-Sheng; Li, Chun-Wei

    2017-03-15

    The purpose of this study was to compare brain metabolite concentration ratios determined by the LCModel and Spectroscopy Analysis by General Electric (SAGE) quantitative methods to elucidate the advantages and disadvantages of each method. A total of 10 healthy volunteers and 10 patients with mild cognitive impairment (MCI) were recruited in this study. A point-resolved spectroscopy (PRESS) sequence was used to obtain the brain magnetic resonance spectroscopy (MRS) spectra of the volunteers and patients, as well as the General Electric (GE) MRS-HD-sphere phantom. The brain metabolite concentration ratios were estimated based on the peak area obtained from both LCModel and SAGE software. Three brain regions were sampled for each volunteer or patient, and 20 replicates were acquired at different times for the phantom analysis. The metabolite ratios of the GE phantom were estimated to be myo-inositol (mI)/creatine (Cr): 0.70 ± 0.01, choline (Cho)/Cr: 0.37 ± 0.00, N-acetylaspartate (NAA)/Cr: 1.26 ± 0.02, and NAA/mI: 1.81 ± 0.04 by LCModel, and mI/Cr: 0.88 ± 0.15, Cho/Cr: 0.35 ± 0.01, NAA/Cr: 1.33 ± 0.03, and NAA/mI: 1.55 ± 0.26 by SAGE. In the healthy volunteers and MCI patients, the ratios of mI/Cr and Cho/Cr estimated by LCModel were higher than those estimated by SAGE. In contrast, the ratio of NAA/Cr estimated by LCModel was lower than that estimated by SAGE. Both methods were acceptable in estimating brain metabolite concentration ratios. However, LCModel was marginally more accurate than SAGE because of its full automation, basis set, and user independence.

  12. Landslide Risk: Economic Valuation in The North-Eastern Zone of Medellin City

    NASA Astrophysics Data System (ADS)

    Vega, Johnny Alexander; Hidalgo, César Augusto; Johana Marín, Nini

    2017-10-01

    Natural disasters of a geodynamic nature can cause enormous economic and human losses. The economic costs of a landslide disaster include the relocation of communities and the physical repair of urban infrastructure. However, quantitative risk analyses generally do not take into account the indirect economic consequences of such an event. A probabilistic methodology is proposed that considers several hazard and vulnerability scenarios to measure the magnitude of the landslide and to quantify the economic costs. With this approach, it is possible to carry out a quantitative evaluation of landslide risk, allowing the economic losses from a potential disaster to be calculated in an objective, standardized and reproducible way that accounts for the uncertainty of building costs in the study zone. The possibility of comparing different scenarios facilitates the urban planning process, the optimization of interventions to reduce risk to acceptable levels, and the assessment of economic losses according to the magnitude of the damage. To develop and explain the proposed methodology, a simple case study is presented, located in the north-eastern zone of the city of Medellín. This area has particular geomorphological characteristics and is also characterized by the presence of several buildings in poor structural condition. The proposed methodology makes it possible to obtain an estimate of the probable economic losses from earthquake-induced landslides, taking into account the uncertainty of building costs in the study zone. The estimate obtained shows that structural intervention in the buildings produces a reduction on the order of 21% in the total landslide risk.

  13. A neural networks application for the study of the influence of transport conditions on the working performance

    NASA Astrophysics Data System (ADS)

    Anghel, D.-C.; Ene, A.; Ştirbu, C.; Sicoe, G.

    2017-10-01

    This paper presents a study of the factors that influence the working performance of workers in the automotive industry. These factors mainly concern transportation conditions, given that a large number of workers live far away from the enterprise. The quantitative data obtained from this study are generalized by means of a software-simulated neural network, which is able to estimate worker performance even for combinations of input factors that were not recorded in the study. The experimental data are divided into two classes: the first, containing approximately 80% of the data, is used by the Java software to train the neural network, with the weights resulting from the training process saved to a text file; the second, containing the remaining 20%, is used to validate the network. Training and validation are performed in Java (the TrainAndValidate class), and a further class, Test.java, is used with new input data for new situations. This application is useful for the human resources department of an enterprise. The output results are not quantitative but qualitative (from low performance to high performance, divided into five classes).

  14. Stereological assessment of mouse lung parenchyma via nondestructive, multiscale micro-CT imaging validated by light microscopic histology

    PubMed Central

    Vasilescu, Dragoş M.; Klinge, Christine; Knudsen, Lars; Yin, Leilei; Wang, Ge; Weibel, Ewald R.; Ochs, Matthias

    2013-01-01

    Quantitative assessment of the lung microstructure using standard stereological methods, such as volume fractions of tissue, alveolar surface area, or number of alveoli, is essential for understanding the state of normal and diseased lung. These measures are traditionally obtained from histological sections of the lung tissue, a process that ultimately destroys the three-dimensional (3-D) anatomy of the tissue. In comparison, a novel X-ray-based imaging method that allows nondestructive sectioning and imaging of fixed lungs at multiple resolutions can overcome this limitation. Scanning of the whole lung at high resolution and subsequent regional sampling at ultrahigh resolution without physically dissecting the organ allow the application of design-based stereology for assessment of the whole lung structure. Here we validate multiple stereological estimates performed on micro-computed tomography (μCT) images by comparing them with those obtained via conventional histology on the same mouse lungs. We explore and discuss the potentials and limitations of the two approaches. Histological examination offers higher resolution and the qualitative differentiation of tissues by staining, but ultimately loses 3-D tissue relationships, whereas μCT allows for the integration of morphometric data with the spatial complexity of lung structure. However, μCT has limited resolution, satisfactory for the stereological estimates presented in this study but not for the differentiation of tissues. We conclude that introducing stereological methods in μCT studies adds value by providing quantitative information on internal structures while not curtailing more complex approaches to the study of lung architecture in the context of physiological or pathological studies. PMID:23264542

  15. Reactivity of propene, n-butene, and isobutene in the hydrogen transfer steps of n-hexane cracking over zeolites of different structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lukyanov, D.B.

    The reaction of n-hexane cracking over HZSM-5, HY zeolite and mordenite (HM) was studied in accordance with the procedure of the β-test recently proposed for quantitative characterization of zeolite hydrogen transfer activity. It is shown that this procedure allows one to obtain quantitative data on propene, n-butene, and isobutene reactivities in the hydrogen transfer steps of the reaction. The results demonstrate that in the absence of steric constraints (large pore HY and HM zeolites) isobutene is approximately 5 times more reactive in hydrogen transfer than n-butene. The latter, in turn, is about 1.3 times more reactive than propene. With medium pore HZSM-5, steric inhibition of the hydrogen transfer between n-hexane and isobutene is observed. This results in a sharp decrease in the isobutene reactivity: over HZSM-5 zeolites isobutene is only 1.2 times more reactive in hydrogen transfer than n-butene. On the basis of these data it is concluded that the β-test measures the "real" hydrogen transfer activity of zeolites, i.e., the activity that summarizes the effects of the acidic and structural properties of zeolites. An attempt is made to estimate the "ideal" zeolite hydrogen transfer activity, i.e., the activity determined by the zeolite acidic properties only. The estimations obtained show that this activity is approximately 1.8 and 1.6 times higher for HM zeolite in comparison with HZSM-5 and HY zeolites, respectively. 16 refs., 4 figs., 2 tabs.

  16. Quantitative, Comparable Coherent Anti-Stokes Raman Scattering (CARS) Spectroscopy: Correcting Errors in Phase Retrieval

    PubMed Central

    Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.

    2017-01-01

    Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335
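
    In outline, and with generic notation (sign and normalization conventions vary, and this is not necessarily the paper's exact formulation), Kramers-Kronig phase retrieval recovers the phase of the third-order susceptibility from its measured magnitude via a principal-value (Hilbert-type) integral:

    ```latex
    \phi(\nu) = -\frac{1}{\pi}\,\mathcal{P}\!\int_{-\infty}^{\infty}
    \frac{\ln\lvert\chi^{(3)}(\nu')\rvert}{\nu' - \nu}\,\mathrm{d}\nu' .
    ```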

  17. Natural extension of fast-slow decomposition for dynamical systems

    NASA Astrophysics Data System (ADS)

    Rubin, J. E.; Krauskopf, B.; Osinga, H. M.

    2018-01-01

    Modeling and parameter estimation to capture the dynamics of physical systems are often challenging because many parameters can range over orders of magnitude and are difficult to measure experimentally. Moreover, selecting a suitable model complexity requires a sufficient understanding of the model's potential use, such as highlighting essential mechanisms underlying qualitative behavior or precisely quantifying realistic dynamics. We present an approach that can guide model development and tuning to achieve desired qualitative and quantitative solution properties. It relies on the presence of disparate time scales and employs techniques of separating the dynamics of fast and slow variables, which are well known in the analysis of qualitative solution features. We build on these methods to show how it is also possible to obtain quantitative solution features by imposing designed dynamics for the slow variables in the form of specified two-dimensional paths in a bifurcation-parameter landscape.

  18. Can genetics help psychometrics? Improving dimensionality assessment through genetic factor modeling.

    PubMed

    Franić, Sanja; Dolan, Conor V; Borsboom, Denny; Hudziak, James J; van Beijsterveldt, Catherina E M; Boomsma, Dorret I

    2013-09-01

    In the present article, we discuss the role that quantitative genetic methodology may play in assessing and understanding the dimensionality of psychological (psychometric) instruments. Specifically, we study the relationship between the observed covariance structures, on the one hand, and the underlying genetic and environmental influences giving rise to such structures, on the other. We note that this relationship may be such that it hampers obtaining a clear estimate of dimensionality using standard tools for dimensionality assessment alone. One situation in which dimensionality assessment may be impeded is that in which genetic and environmental influences, of which the observed covariance structure is a function, differ from each other in structure and dimensionality. We demonstrate that in such situations settling dimensionality issues may be problematic, and propose using quantitative genetic modeling to uncover the (possibly different) dimensionalities of the underlying genetic and environmental structures. We illustrate using simulations and an empirical example on childhood internalizing problems.

  19. Quantitative evaluation of refolding conditions for a disulfide-bond-containing protein using a concise 18O-labeling technique

    PubMed Central

    Uchimura, Hiromasa; Kim, Yusam; Mizuguchi, Takaaki; Kiso, Yoshiaki; Saito, Kazuki

    2011-01-01

    A concise method was developed for quantifying native disulfide-bond formation in proteins using isotopically labeled internal standards, which were easily prepared with proteolytic 18O-labeling. As the method has much higher throughput to estimate the amounts of fragments possessing native disulfide arrangements by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) than the conventional high performance liquid chromatography (HPLC) analyses, it allows many different experimental conditions to be assessed in a short time. The method was applied to refolding experiments of a recombinant neuregulin 1-β1 EGF-like motif (NRG1-β1), and the optimum conditions for preparing native NRG1-β1 were obtained by quantitative comparisons. Protein disulfide isomerase (PDI) was most effective at the reduced/oxidized glutathione ratio of 2:1 for refolding the denatured sample NRG1-β1 with the native disulfide bonds. PMID:21500299

  20. Application of TSL Underwater Robots (AUV) for Investigation of Benthic Ecosystems and Quantification of Benthic Invertebrate Reserves

    NASA Astrophysics Data System (ADS)

    Golikov, S. Yu; Dulepov, V. I.; Maiorov, I. S.

    2017-11-01

    Issues in the application of autonomous underwater vehicles for assessing the abundance, biomass, distribution and reserves of invertebrates in marine benthic ecosystems, and for environmental monitoring, are discussed. An example of the application of the methodology to assess some quantitative characteristics of macrobenthos is provided, based on information obtained with the TSL AUV in Peter the Great Gulf (Sea of Japan), in Paris Bay and in the Eastern Bosphorus Strait within the area of the bridge leading to Russky Island. For the quantitative determination of benthic invertebrate reserves, the biomass densities of specific species are determined. Based on direct measurements and weighings, equations relating animal weight to size are estimated for the studied species; these are well described by a power-law dependence.

  1. Speckle noise reduction in quantitative optical metrology techniques by application of the discrete wavelet transformation

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2002-06-01

    Effective suppression of speckle noise content in interferometric data images can help in improving accuracy and resolution of the results obtained with interferometric optical metrology techniques. In this paper, novel speckle noise reduction algorithms based on the discrete wavelet transformation are presented. The algorithms proceed by: (a) estimating the noise level contained in the interferograms of interest, (b) selecting wavelet families, (c) applying the wavelet transformation using the selected families, (d) wavelet thresholding, and (e) applying the inverse wavelet transformation, producing denoised interferograms. The algorithms are applied to the different stages of the processing procedures utilized for generation of quantitative speckle correlation interferometry data of fiber-optic based opto-electronic holography (FOBOEH) techniques, allowing identification of optimal processing conditions. It is shown that wavelet algorithms are effective for speckle noise reduction while preserving image features otherwise faded with other algorithms.
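
    Steps (a)-(e) above can be sketched with PyWavelets; the wavelet family and threshold rule below are illustrative choices (a Daubechies-4 wavelet with the Donoho-Johnstone universal threshold), not necessarily those selected in the paper:

    ```python
    # Minimal sketch of steps (a)-(e): estimate noise from the finest detail
    # band, soft-threshold all detail bands, and invert the transform.
    import numpy as np
    import pywt

    def wavelet_denoise(image, wavelet="db4", level=3):
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        # (a) robust noise estimate from the finest diagonal detail band
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(image.size))   # universal threshold
        # (d) soft-threshold every detail band, keep the approximation
        denoised = [coeffs[0]] + [
            tuple(pywt.threshold(band, thr, mode="soft") for band in detail)
            for detail in coeffs[1:]
        ]
        # (e) inverse transform yields the denoised interferogram
        return pywt.waverec2(denoised, wavelet)

    speckled = np.random.rand(128, 128)   # placeholder interferogram
    clean = wavelet_denoise(speckled)
    ```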

  2. Variation in the human ribs geometrical properties and mechanical response based on X-ray computed tomography images resolution.

    PubMed

    Perz, Rafał; Toczyski, Jacek; Subit, Damien

    2015-01-01

    Computational models of the human body are commonly used for injury prediction in automobile safety research. To create these models, the geometry of the human body is typically obtained from segmentation of medical images such as computed tomography (CT) images, which have a resolution between 0.2 and 1 mm/pixel. While the accuracy of the geometrical and structural information obtained from these images depends greatly on their resolution, the effect of image resolution on the estimation of the ribs' geometrical properties has yet to be established. To do so, each of the thirty-four sections of ribs obtained from a Post Mortem Human Surrogate (PMHS) was imaged using three different CT modalities: standard clinical CT (clinCT), high resolution clinical CT (HRclinCT), and microCT. The images were processed to estimate the rib cross-section geometry and mechanical properties, and the results were compared to those obtained from the microCT images by computing the 'deviation factor', a metric that quantifies the relative difference between results obtained from clinCT or HRclinCT and those obtained from microCT. Overall, clinCT images gave a deviation greater than 100%, and were therefore deemed inadequate for the purpose of this study. HRclinCT overestimated the rib cross-sectional area by 7.6%, the moments of inertia by about 50%, and the cortical shell area by 40.2%, while underestimating the trabecular area by 14.7%. Next, a parametric analysis was performed to quantify how the variations in the estimates of the geometrical properties affected the predicted mechanical response of the rib under antero-posterior loading. A variation of up to 45% in the predicted peak force and up to 50% in the predicted stiffness was observed. These results provide a quantitative estimate of the sensitivity of the response of the FE model to the resolution of the images used to generate it. They also suggest that a correction factor could be derived from the comparison between microCT and HRclinCT images to improve the response of a model developed from HRclinCT images. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. 3D TOCSY-HSQC NMR for metabolic flux analysis using non-uniform sampling

    DOE PAGES

    Reardon, Patrick N.; Marean-Reardon, Carrie L.; Bukovec, Melanie A.; ...

    2016-02-05

    13C-Metabolic Flux Analysis (13C-MFA) is rapidly being recognized as the authoritative method for determining fluxes through metabolic networks. Site-specific 13C enrichment information obtained using NMR spectroscopy is a valuable input for 13C-MFA experiments. Chemical shift overlaps in the 1D or 2D NMR experiments typically used for 13C-MFA frequently hinder assignment and quantitation of site-specific 13C enrichment. Here we propose the use of a 3D TOCSY-HSQC experiment for 13C-MFA. We employ Non-Uniform Sampling (NUS) to reduce the acquisition time of the experiment to a few hours, making it practical for use in 13C-MFA experiments. Our data show that the NUS experiment is linear and quantitative. Identification of metabolites in complex mixtures, such as a biomass hydrolysate, is simplified by virtue of the 13C chemical shift obtained in the experiment. In addition, the experiment reports 13C-labeling information that reveals the position-specific labeling of subsets of isotopomers. As a result, the information provided by this technique will enable more accurate estimation of metabolic fluxes in larger metabolic networks.

  4. Contributed Review: Nuclear magnetic resonance core analysis at 0.3 T

    NASA Astrophysics Data System (ADS)

    Mitchell, Jonathan; Fordham, Edmund J.

    2014-11-01

    Nuclear magnetic resonance (NMR) provides a powerful toolbox for petrophysical characterization of reservoir core plugs and fluids in the laboratory. Previously, there has been considerable focus on low field magnet technology for well log calibration. Now there is renewed interest in the study of reservoir samples using stronger magnets to complement these standard NMR measurements. Here, the capabilities of an imaging magnet with a field strength of 0.3 T (corresponding to 12.9 MHz for proton) are reviewed in the context of reservoir core analysis. Quantitative estimates of porosity (saturation) and pore size distributions are obtained under favorable conditions (e.g., in carbonates), with the added advantage of multidimensional imaging, detection of lower gyromagnetic ratio nuclei, and short probe recovery times that make the system suitable for shale studies. Intermediate field instruments provide quantitative porosity maps of rock plugs that cannot be obtained using high field medical scanners due to the field-dependent susceptibility contrast in the porous medium. Example data are presented that highlight the potential applications of an intermediate field imaging instrument as a complement to low field instruments in core analysis and for materials science studies in general.

  5. A novel AIF tracking method and comparison of DCE-MRI parameters using individual and population-based AIFs in human breast cancer

    NASA Astrophysics Data System (ADS)

    Li, Xia; Welch, E. Brian; Arlinghaus, Lori R.; Bapsi Chakravarthy, A.; Xu, Lei; Farley, Jaime; Loveless, Mary E.; Mayer, Ingrid A.; Kelley, Mark C.; Meszoely, Ingrid M.; Means-Powell, Julie A.; Abramson, Vandana G.; Grau, Ana M.; Gore, John C.; Yankeelov, Thomas E.

    2011-09-01

    Quantitative analysis of dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) data requires the accurate determination of the arterial input function (AIF). A novel method for obtaining the AIF is presented here and pharmacokinetic parameters derived from individual and population-based AIFs are then compared. A Philips 3.0 T Achieva MR scanner was used to obtain 20 DCE-MRI data sets from ten breast cancer patients prior to and after one cycle of chemotherapy. Using a semi-automated method to estimate the AIF from the axillary artery, we obtain the AIF for each patient, AIFind, and compute a population-averaged AIF, AIFpop. The extended standard model is used to estimate the physiological parameters using the two types of AIFs. The mean concordance correlation coefficient (CCC) for the AIFs segmented manually and by the proposed AIF tracking approach is 0.96, indicating accurate and automatic tracking of an AIF in DCE-MRI data of the breast is possible. Regarding the kinetic parameters, the CCC values for Ktrans, vp and ve as estimated by AIFind and AIFpop are 0.65, 0.74 and 0.31, respectively, based on the region of interest analysis. The average CCC values for the voxel-by-voxel analysis are 0.76, 0.84 and 0.68 for Ktrans, vp and ve, respectively. This work indicates that Ktrans and vp show good agreement between AIFpop and AIFind while there is a weak agreement on ve.
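
    For reference, the extended standard (extended Tofts-Kety) model used above relates the tissue concentration Ct(t) to the AIF Cp(t) through Ktrans, ve and vp:

    ```latex
    C_t(t) = K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{ep}(t-\tau)}\,\mathrm{d}\tau + v_p\, C_p(t),
    \qquad
    k_{ep} = \frac{K^{\mathrm{trans}}}{v_e} .
    ```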

  6. Quantitative Assessment of Cancer Risk from Exposure to Diesel Engine Emissions

    EPA Science Inventory

    Quantitative estimates of lung cancer risk from exposure to diesel engine emissions were developed using data from three chronic bioassays with Fischer 344 rats. uman target organ dose was estimated with the aid of a comprehensive dosimetry model. This model accounted for rat-hum...

  7. A QUANTITATIVE APPROACH FOR ESTIMATING EXPOSURE TO PESTICIDES IN THE AGRICULTURAL HEALTH STUDY

    EPA Science Inventory

    We developed a quantitative method to estimate chemical-specific pesticide exposures in a large prospective cohort study of over 58,000 pesticide applicators in North Carolina and Iowa. An enrollment questionnaire was administered to applicators to collect basic time- and inten...

  8. Low-dose CT for quantitative analysis in acute respiratory distress syndrome

    PubMed Central

    2013-01-01

    Introduction: The clinical use of serial quantitative computed tomography (CT) to characterize lung disease and guide the optimization of mechanical ventilation in patients with acute respiratory distress syndrome (ARDS) is limited by the risk of cumulative radiation exposure and by the difficulties and risks related to transferring patients to the CT room. We evaluated the effects of tube current-time product (mAs) variations on quantitative results in healthy lungs and in experimental ARDS in order to support the use of low-dose CT for quantitative analysis. Methods: In 14 sheep chest CT was performed at baseline and after the induction of ARDS via intravenous oleic acid injection. For each CT session, two consecutive scans were obtained applying two different mAs: 60 mAs was paired with 140, 15 or 7.5 mAs. All other CT parameters were kept unaltered (tube voltage 120 kVp, collimation 32 × 0.5 mm, pitch 0.85, matrix 512 × 512, pixel size 0.625 × 0.625 mm). Quantitative results obtained at different mAs were compared via Bland-Altman analysis. Results: Good agreement was observed between 60 mAs and 140 mAs and between 60 mAs and 15 mAs (all biases less than 1%). A further reduction of mAs to 7.5 mAs caused an increase in the bias of poorly aerated and nonaerated tissue (-2.9% and 2.4%, respectively) and determined a significant widening of the limits of agreement for the same compartments (-10.5% to 4.8% for poorly aerated tissue and -5.9% to 10.8% for nonaerated tissue). Estimated mean effective dose at 140, 60, 15 and 7.5 mAs corresponded to 17.8, 7.4, 2.0 and 0.9 mSv, respectively. Image noise of scans performed at 140, 60, 15 and 7.5 mAs corresponded to 10, 16, 38 and 74 Hounsfield units, respectively. Conclusions: A reduction of effective dose up to 70% has been achieved with minimal effects on lung quantitative results. Low-dose computed tomography provides accurate quantitative results and could be used to characterize lung compartment distribution and possibly monitor the time course of ARDS with a lower risk of exposure to ionizing radiation. A further radiation dose reduction is associated with lower accuracy in quantitative results. PMID:24004842
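
    A Bland-Altman comparison of the kind used above reduces to the bias and limits of agreement of paired measurements; a minimal sketch, with placeholder values rather than the study's data:

    ```python
    # Minimal sketch: Bland-Altman bias and limits of agreement for paired
    # quantitative CT results (e.g., % nonaerated tissue at 60 vs. 15 mAs).
    import numpy as np

    a = np.array([12.1, 35.4, 48.0, 22.7, 9.9])    # placeholder, 60 mAs
    b = np.array([12.8, 34.1, 49.2, 23.5, 10.4])   # placeholder, 15 mAs

    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                  # 95% limits of agreement
    print(f"bias={bias:.2f}%  LoA=({bias - loa:.2f}%, {bias + loa:.2f}%)")
    ```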

  9. The finite body triangulation: algorithms, subgraphs, homogeneity estimation and application.

    PubMed

    Carson, Cantwell G; Levine, Jonathan S

    2016-09-01

    The concept of a finite body Dirichlet tessellation has been extended to that of a finite body Delaunay 'triangulation' to provide a more meaningful description of the spatial distribution of nonspherical secondary phase bodies in 2- and 3-dimensional images. A finite body triangulation (FBT) consists of a network of minimum edge-to-edge distances between adjacent objects in a microstructure. From this is also obtained the characteristic object chords formed by the intersection of the object boundary with the finite body tessellation. These two sets of distances form the basis of a parsimonious homogeneity estimation. The characteristics of the spatial distribution are then evaluated with respect to the distances between objects and the distances within them. Quantitative analysis shows that more physically representative distributions can be obtained by selecting subgraphs, such as the relative neighbourhood graph and the minimum spanning tree, from the finite body tessellation. To demonstrate their potential, we apply these methods to 3-dimensional X-ray computed tomographic images of foamed cement and their 2-dimensional cross sections. The Python computer code used to estimate the FBT is made available. Other applications for the algorithm - such as porous media transport and crack-tip propagation - are also discussed. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
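
    The subgraph-selection step can be reproduced with standard tools; the published Python code should be consulted for the actual FBT implementation. As a sketch under assumed inputs, the minimum spanning tree can be extracted from a made-up matrix of minimum edge-to-edge distances:

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree

        # hypothetical symmetric matrix of minimum edge-to-edge distances
        # between 4 secondary-phase bodies (units: pixels)
        d = np.array([[0., 5., 9., 4.],
                      [5., 0., 3., 7.],
                      [9., 3., 0., 8.],
                      [4., 7., 8., 0.]])

        mst = minimum_spanning_tree(d).toarray()
        edges = np.transpose(np.nonzero(mst))
        for i, j in edges:
            print(f"body {i} -- body {j}: {mst[i, j]} px")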

  10. Method for Estimating the Charge Density Distribution on a Dielectric Surface.

    PubMed

    Nakashima, Takuya; Suhara, Hiroyuki; Murata, Hidekazu; Shimoyama, Hiroshi

    2017-06-01

    High-quality color output from digital photocopiers and laser printers is in strong demand, motivating attempts to achieve fine dot reproducibility and stability. The resolution of a digital photocopier depends on the charge density distribution on the organic photoconductor surface; however, directly measuring the charge density distribution is impossible. In this study, we propose a new electron optical instrument that can rapidly measure the electrostatic latent image on an organic photoconductor surface, which is a dielectric surface, as well as a novel method to quantitatively estimate the charge density distribution on a dielectric surface by combining experimental data obtained from the apparatus via a computer simulation. In the computer simulation, an improved three-dimensional boundary charge density method (BCM) is used for electric field analysis in the vicinity of the dielectric material with a charge density distribution. This method enables us to estimate the profile and quantity of the charge density distribution on a dielectric surface with a resolution of the order of microns. Furthermore, the surface potential on the dielectric surface can be immediately calculated using the obtained charge density. This method enables the relation between the charge pattern on the organic photoconductor surface and toner particle behavior to be studied; an understanding regarding the same may lead to the development of a new generation of higher resolution photocopiers.

  11. A quantitative characterization of the yeast heterotrimeric G protein cycle

    PubMed Central

    Yi, Tau-Mu; Kitano, Hiroaki; Simon, Melvin I.

    2003-01-01

    The yeast mating response is one of the best understood heterotrimeric G protein signaling pathways. Yet, most descriptions of this system have been qualitative. We have quantitatively characterized the heterotrimeric G protein cycle in yeast based on direct in vivo measurements. We used fluorescence resonance energy transfer to monitor the association state of cyan fluorescent protein (CFP)-Gα and Gβγ-yellow fluorescent protein (YFP), and we found that receptor-mediated G protein activation produced a loss of fluorescence resonance energy transfer. Quantitative time course and dose–response data were obtained for both wild-type and mutant cells possessing an altered pheromone response. These results paint a quantitative portrait of how regulators such as Sst2p and the C-terminal tail of α-factor receptor modulate the kinetics and sensitivity of G protein signaling. We have explored critical features of the dynamics including the rapid rise and subsequent decline of active G proteins during the early response, and the relationship between the G protein activation dose–response curve and the downstream dose–response curves for cell-cycle arrest and transcriptional induction. Fitting the data to a mathematical model produced estimates of the in vivo rates of heterotrimeric G protein activation and deactivation in yeast. PMID:12960402

  12. Multi-exponential analysis of magnitude MR images using a quantitative multispectral edge-preserving filter.

    PubMed

    Bonny, Jean Marie; Boespflug-Tanguly, Odile; Zanca, Michel; Renou, Jean Pierre

    2003-03-01

    A solution for discrete multi-exponential analysis of T(2) relaxation decay curves obtained in current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the signal-to-noise ratio threshold above which a high percentage of true multi-exponential components is detected. It consists of a multispectral nonlinear edge-preserving filter that takes into account the signal-dependent Rician distribution of noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a non-linear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.
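
    As an illustration of the decomposition step, a two-component T(2) decay can be fitted by non-linear least squares once starting values are available. This sketch uses synthetic data and naive starting values in place of the total least-squares linear-prediction estimates used by the authors:

        import numpy as np
        from scipy.optimize import curve_fit

        def biexp(te, a1, t21, a2, t22):
            """Discrete two-component T2 decay model."""
            return a1 * np.exp(-te / t21) + a2 * np.exp(-te / t22)

        # hypothetical multi-echo decay: echo times (ms) and noisy magnitudes
        te = np.arange(10, 330, 10, dtype=float)
        rng = np.random.default_rng(0)
        y = biexp(te, 0.7, 40.0, 0.3, 120.0) + rng.normal(0, 0.005, te.size)

        # non-linear least squares; p0 stands in for the linear-prediction
        # initial estimates used in the paper
        popt, _ = curve_fit(biexp, te, y, p0=[0.5, 30.0, 0.5, 100.0])
        print(popt)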

  13. Radial widths, optical depths, and eccentricities of the Uranian rings

    NASA Technical Reports Server (NTRS)

    Nicholson, P. D.; Matthews, K.; Goldreich, P.

    1982-01-01

    Observations of the stellar occultation by the Uranian rings of 15/16 August 1980 are used to estimate radial widths and normal optical depths for segments of rings 6, 5, 4, alpha, beta, eta, gamma, and delta. Synthetic occultation profiles are generated to match the observed light curves. A review of published data confirms the existence of width-radius relations for rings alpha and beta, and indicates that the optical depths of these two rings vary inversely with their radial widths. Masses are obtained for rings alpha and beta, on the assumption that differential precession is prevented by their self-gravity. A quantitative comparison of seven epsilon-ring occultation profiles obtained over a period of 3.4 yr reveals a consistent structure, which may reflect the presence of unresolved gaps and subrings.

  14. Soil salt content estimation in the Yellow River delta with satellite hyperspectral data

    USGS Publications Warehouse

    Weng, Yongling; Gong, Peng; Zhu, Zhi-Liang

    2008-01-01

    Soil salinization is one of the most common land degradation processes and is a severe environmental hazard. The primary objective of this study is to investigate the potential of predicting salt content in soils with hyperspectral data acquired with EO-1 Hyperion. Both partial least-squares regression (PLSR) and conventional multiple linear regression (MLR), such as stepwise regression (SWR), were tested as the prediction model. PLSR is commonly used to overcome the problem caused by high-dimensional and correlated predictors. Chemical analysis of 95 samples collected from the top layer of soils in the Yellow River delta area shows that salt content was high on average, and the dominant chemicals in the saline soil were NaCl and MgCl2. Multivariate models were established between soil contents and hyperspectral data. Our results indicate that the PLSR technique with laboratory spectral data has a strong prediction capacity. Spectral bands at 1487-1527, 1971-1991, 2032-2092, and 2163-2355 nm possessed large absolute values of regression coefficients, with the largest coefficient at 2203 nm. We obtained a root mean squared error (RMSE) for calibration (with 61 samples) of RMSEC = 0.753 (R2 = 0.893) and a root mean squared error for validation (with 30 samples) of RMSEV = 0.574. The prediction model was applied on a pixel-by-pixel basis to a Hyperion reflectance image to yield a quantitative surface distribution map of soil salt content. The result was validated successfully from 38 sampling points. We obtained an RMSE estimate of 1.037 (R2 = 0.784) for the soil salt content map derived by the PLSR model. The salinity map derived from the SWR model shows that the predicted value is higher than the true value. These results demonstrate that the PLSR method is a more suitable technique than stepwise regression for quantitative estimation of soil salt content in a large area. © 2008 CASI.
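
    For readers unfamiliar with PLSR, the calibration step amounts to regressing salt content on many correlated spectral bands through a few latent components. A minimal sketch with synthetic stand-in data (not the Hyperion spectra):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(1)
        # hypothetical stand-in data: 61 calibration spectra x 150 bands,
        # salt content driven by a few correlated bands plus noise
        X = rng.normal(size=(61, 150))
        y = X[:, 40] * 0.8 + X[:, 41] * 0.7 + rng.normal(0, 0.3, 61)

        pls = PLSRegression(n_components=5).fit(X, y)
        rmsec = mean_squared_error(y, pls.predict(X).ravel()) ** 0.5
        print(f"RMSEC = {rmsec:.3f}")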

  15. Dynamic soft tissue deformation estimation based on energy analysis

    NASA Astrophysics Data System (ADS)

    Gao, Dedong; Lei, Yong; Yao, Bin

    2016-10-01

    Needle placement accuracy at the millimeter level is required in many needle-based surgeries. Tissue deformation, especially that occurring on the surface of organ tissue, affects the needle-targeting accuracy of both manual and robotic needle insertions, so it is necessary to understand the mechanism of tissue deformation during needle insertion into soft tissue. In this paper, soft tissue surface deformation is investigated on the basis of continuum mechanics: an energy-based method is applied to the dynamic process of needle insertion, and the volume of a cone is used as a geometric model to quantitatively approximate the deformation on the tissue surface. The external work is converted into potential, kinetic, dissipated, and strain energies during the dynamic rigid needle-tissue interaction. A needle insertion experimental setup, consisting of a linear actuator, force sensor, needle, tissue container, and a light, is constructed, and an image-based method for measuring the depth and radius of the soft tissue surface deformations is introduced to obtain the experimental data. The relationship between the changed volume of tissue deformation and the insertion parameters is derived from the law of conservation of energy, with the volume of tissue deformation obtained from the image-based measurements. Experiments are performed on phantom specimens, and an energy-based analytical fitted model is presented to estimate the volume of tissue deformation. The experimental results show that this model can predict the volume of soft tissue deformation, with root mean squared errors between the fitted model and the experimental data of 0.61 and 0.25 at velocities of 2.50 mm/s and 5.00 mm/s, respectively. The estimated parameters of the soft tissue surface deformations prove useful for compensating the needle-targeting error in rigid needle insertion procedures, especially for percutaneous needle insertion into organs.

  16. BeerOz, a set of Matlab routines for the quantitative interpretation of spectrophotometric measurements of metal speciation in solution

    NASA Astrophysics Data System (ADS)

    Brugger, Joël

    2007-02-01

    The modelling of the speciation and mobility of metals under surface and hydrothermal conditions relies on the availability of accurate thermodynamic properties for all relevant minerals, aqueous species, gases and surface species. Spectroscopic techniques obeying the Beer-Lambert law can be used to obtain thermodynamic properties for reactions among aqueous species (e.g., ligand substitution; protonation). BeerOz is a set of Matlab routines designed to perform both qualitative and quantitative analysis of spectroscopic data following the Beer-Lambert law. BeerOz is modular and can be customised for particular experimental strategies or for simultaneous refinement of several datasets obtained using different techniques. Distribution of species calculations are performed using an implementation of the EQBRM code, which allows for customised activity coefficient calculations. BeerOz also contains routines to study the n-dimensional solution space, in order to provide realistic estimates of errors and test for the existence of multiple local minima and correlation between the different refined variables. The paper reviews the physical principles underlying the qualitative and quantitative analysis of spectroscopic data collected on aqueous speciation, in particular for studying successive ligand replacement reactions, and presents the non-linear least-squares algorithm implemented in BeerOz. The discussion is illustrated using UV-Vis spectra collected on acidic Fe(III) solutions containing varying LiCl concentrations, and showing the change from the hexaaquo Fe(H2O)6(3+) complex to the tetrahedral FeCl4(-) complex.
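
    The quantitative core of such an analysis is the Beer-Lambert law, A(λ) = Σ εi(λ)·ci·l, which for known component spectra reduces to linear least squares. A minimal sketch with synthetic two-component spectra standing in for the Fe(III) chloro complexes (BeerOz itself additionally refines equilibrium constants):

        import numpy as np

        # Beer-Lambert: A(lambda) = sum_i eps_i(lambda) * c_i * l  (l = 1 cm here)
        # hypothetical absorptivity spectra for two Fe(III) chloro complexes
        wl = np.linspace(400, 800, 81)                     # nm
        eps = np.column_stack([np.exp(-((wl - 500) / 40) ** 2),
                               np.exp(-((wl - 620) / 50) ** 2)])
        c_true = np.array([0.8, 0.3])                      # mmol/L
        A = eps @ c_true + np.random.default_rng(2).normal(0, 0.002, wl.size)

        # recover concentrations by linear least squares
        c_est, *_ = np.linalg.lstsq(eps, A, rcond=None)
        print(c_est)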

  17. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…

  18. Quantitative endoscopy: initial accuracy measurements.

    PubMed

    Truitt, T O; Adelman, R A; Kelly, D H; Willging, J P

    2000-02-01

    The geometric optics of an endoscope can be used to determine the absolute size of an object in an endoscopic field without knowing the actual distance from the object. This study explores the accuracy of a technique that estimates absolute object size from endoscopic images. Quantitative endoscopy involves calibrating a rigid endoscope to produce size estimates from 2 images taken with a known traveled distance between the images. The heights of 12 samples, ranging in size from 0.78 to 11.80 mm, were estimated with this calibrated endoscope. Backup distances of 5 mm and 10 mm were used for comparison. The mean percent error for all estimated measurements when compared with the actual object sizes was 1.12%. The mean errors for 5-mm and 10-mm backup distances were 0.76% and 1.65%, respectively. The mean errors for objects <2 mm and > or =2 mm were 0.94% and 1.18%, respectively. Quantitative endoscopy estimates endoscopic image size to within 5% of the actual object size. This method remains promising for quantitatively evaluating object size from endoscopic images. It does not require knowledge of the absolute distance of the endoscope from the object, but only the distance traveled by the endoscope between images.
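
    Under a pinhole-camera assumption, the two-image geometry has a closed form: with image heights h1 and h2 and backup distance d, the object height is H = d / (f·(1/h2 - 1/h1)), where f is a calibrated focal scale. A hypothetical worked example (not the authors' calibration procedure):

        def object_height(h1_px, h2_px, backup_mm, f_px):
            """Estimate absolute object height from two endoscopic images.

            Assumes a calibrated pinhole model: image height h = f*H/z, so a
            known backup distance d gives H = d / (f * (1/h2 - 1/h1)).
            h1_px, h2_px: image heights before/after backup (pixels)
            f_px: calibrated focal length in pixels (hypothetical value)
            """
            return backup_mm / (f_px * (1.0 / h2_px - 1.0 / h1_px))

        # hypothetical numbers: 10 mm backup, calibrated f = 800 px -> 6.0 mm
        print(object_height(h1_px=160.0, h2_px=120.0, backup_mm=10.0, f_px=800.0))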

  19. Potency factors for risk assessment at Libby, Montana.

    PubMed

    Moolgavkar, Suresh H; Turim, Jay; Alexander, Dominik D; Lau, Edmund C; Cushing, Colleen A

    2010-08-01

    We reanalyzed the Libby vermiculite miners' cohort assembled by Sullivan to estimate potency factors for lung cancer, mesothelioma, nonmalignant respiratory disease (NMRD), and all-cause mortality associated with exposure to Libby fibers. Our principal statistical tool for analyses of lung cancer, NMRD, and total mortality in the cohort was the time-dependent proportional hazards model. For mesothelioma, we used an extension of the Peto formula. For a cumulative exposure to Libby fiber of 100 f/mL-yr, our estimates of relative risk (RR) are as follows: lung cancer, RR = 1.12, 95% confidence interval (CI) = [1.06, 1.17]; NMRD, RR = 1.14, 95% CI = [1.09, 1.18]; total mortality, RR = 1.06, 95% CI = [1.04, 1.08]. These estimates were virtually identical when analyses were restricted to the subcohort of workers who were employed for at least one year. For mesothelioma, our estimate of potency is K_M = 0.5 × 10^-8, 95% CI = [0.3 × 10^-8, 0.8 × 10^-8]. Finally, we estimated the mortality ratios standardized against the U.S. population for lung cancer, NMRD, and total mortality and obtained estimates that were in good agreement with those reported by Sullivan. The estimated potency factors form the basis for a quantitative risk assessment at Libby.

  20. Task-oriented comparison of power spectral density estimation methods for quantifying acoustic attenuation in diagnostic ultrasound using a reference phantom method.

    PubMed

    Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A

    2013-07-01

    Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0 f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions, as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the parameter estimation region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with those using the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
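
    Two of the three estimators are easy to sketch. Below, Welch's periodogram and a bare-bones Thomson multitaper estimate built from DPSS tapers are applied to a synthetic toy signal; the sampling rate, tone, and taper parameters are arbitrary assumptions:

        import numpy as np
        from scipy.signal import welch
        from scipy.signal.windows import dpss

        fs = 40e6                                  # 40 MHz sampling, hypothetical
        t = np.arange(4096) / fs
        rng = np.random.default_rng(3)
        x = np.sin(2 * np.pi * 5e6 * t) + rng.normal(0, 1, t.size)  # toy RF echo

        # Welch's periodogram
        f_w, psd_w = welch(x, fs=fs, nperseg=512)

        # simple Thomson multitaper: average periodograms over DPSS tapers
        tapers = dpss(x.size, NW=4, Kmax=7)
        psd_mt = np.mean([np.abs(np.fft.rfft(tap * x)) ** 2 for tap in tapers], axis=0)
        psd_mt /= fs
        f_mt = np.fft.rfftfreq(x.size, 1 / fs)
        print(f_w[np.argmax(psd_w)], f_mt[np.argmax(psd_mt)])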

  1. Modeling the Residual Strength of a Fibrous Composite Using the Residual Daniels Function

    NASA Astrophysics Data System (ADS)

    Paramonov, Yu.; Cimanis, V.; Varickis, S.; Kleinhofs, M.

    2016-09-01

    The concept of a residual Daniels function (RDF) is introduced. Together with the concept of a Daniels sequence, the RDF is used for estimating the residual (after some preliminary fatigue loading) static strength of a unidirectional fibrous composite (UFC) and its S-N curve on the basis of test data. Usually, the residual strength is analyzed on the basis of a known S-N curve. In our work, an inverse approach is used: the S-N curve is derived from an analysis of the residual strength. This approach gives a good qualitative description of the process of decreasing residual strength and explains the existence of the fatigue limit. The estimates of parameters of the corresponding regression model can be interpreted as estimates of parameters of the local strength of components of the UFC. In order to approach the quantitative experimental estimates of the fatigue life, some ideas based on the mathematics of semi-Markovian processes are employed. Satisfactory results in processing experimental data on the fatigue life and residual strength of glass/epoxy laminates are obtained.

  2. Estimation of number of fatalities caused by toxic gases due to fire in road tunnels.

    PubMed

    Qu, Xiaobo; Meng, Qiang; Liu, Zhiyuan

    2013-01-01

    The quantitative risk assessment (QRA) is one of the explicit requirements under the European Union (EU) Directive (2004/54/EC). As part of this, it is essential to be able to estimate the number of fatalities in different accident scenarios. In this paper, a tangible methodology is developed to estimate the number of fatalities caused by toxic gases due to fire in road tunnels by incorporating traffic flow and the spread of fire in tunnels. First, a deterministic queuing model is proposed to calculate the number of people at risk, by taking into account tunnel geometry, traffic flow patterns, and incident response plans for road tunnels. Second, the Fire Dynamics Simulator (FDS) is used to obtain the temperature and concentrations of CO, CO(2), and O(2). By taking advantage of the additivity of the fractional effective dose (FED) method, fatality rates for different locations in given time periods can be estimated. An illustrative case study is carried out to demonstrate the applicability of the proposed methodology. Copyright © 2012 Elsevier Ltd. All rights reserved.
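
    The additivity that the methodology exploits means the FED is a running sum of concentration-time increments measured against a lethal exposure dose. A deliberately simplified single-toxicant sketch (hypothetical CO series and threshold; operational FED formulations treat each gas and endpoint separately):

        import numpy as np

        def fed_fraction(conc_ppm, dt_min, ct_lethal_ppm_min):
            """Accumulate fractional effective dose for one toxicant.

            FED is additive: each time step contributes C*dt / (C*t)_lethal,
            and a summed FED >= 1 is taken as a fatality threshold.
            """
            return np.sum(np.asarray(conc_ppm) * dt_min) / ct_lethal_ppm_min

        # hypothetical CO time series at one tunnel location (ppm, 0.5-min steps)
        co = [200, 800, 2500, 4000, 3500, 1500]
        fed = fed_fraction(co, dt_min=0.5, ct_lethal_ppm_min=35000.0)
        print(f"FED = {fed:.2f} -> {'fatal' if fed >= 1 else 'non-fatal'}")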

  3. Vapor space characterization of Waste Tank 241-TY-104 (in situ): Results from samples collected on 8/5/94

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ligotke, M.W.; Pool, K.H.; Lucke, R.B.

    1995-10-01

    This report describes inorganic and organic analyses results from in situ samples obtained from the headspace of the Hanford waste storage Tank 241-TY-104 (referred to as Tank TY-104). The results described here were obtained to support safety and toxicological evaluations. A summary of the results for inorganic and organic analytes is listed in Table 1. Detailed descriptions of the results appear in the text. Quantitative results were obtained for the inorganic compounds ammonia (NH{sub 3}), nitrogen dioxide (NO{sub 2}), nitric oxide (NO), and water (H{sub 2}O). Sampling for hydrogen cyanide (HCN) and sulfur oxides (SO{sub x}) was not performed. In addition, the authors looked for the 39 TO-14 compounds plus an additional 14 analytes. Of these, eight were observed above the 5-ppbv reporting cutoff. Twenty-four organic tentatively identified compounds (TICs) were observed above the reporting cutoff of (ca.) 10 ppbv and are reported with concentrations that are semiquantitative estimates based on internal standard response factors. The 10 organic analytes with the highest estimated concentrations are listed in Table 1 and account for approximately 86% of the total organic components in Tank TY-104. Tank TY-104 is on the Ferrocyanide Watch List.

  4. Eggshells as an index of aedine mosquito production. 2: Relationship of Aedes taeniorhynchus eggshell density to larval production.

    PubMed

    Addison, D S; Ritchie, S A; Webber, L A; Van Essen, F

    1992-03-01

    To test if eggshell density could be used as an index of aedine mosquito production, we compared eggshell density with the larval production of Aedes taeniorhynchus in Florida mangrove basin forests. Quantitative (n = 7) and categorical (n = 34) estimates of annual larval production were regressed against the number of eggshells per cc of soil. Significant regressions were obtained in both instances. Larval production was concentrated in zones with the highest eggshell density. We suggest that eggshell density and distribution can be used to identify oviposition sites and the sequence of larval appearance.

  5. Three-dimensional real-time imaging of bi-phasic flow through porous media

    NASA Astrophysics Data System (ADS)

    Sharma, Prerna; Aswathi, P.; Sane, Anit; Ghosh, Shankar; Bhattacharya, S.

    2011-11-01

    We present a scanning laser-sheet video imaging technique to image bi-phasic flow in three-dimensional porous media in real time with pore-scale spatial resolution, i.e., 35 μm and 500 μm for directions parallel and perpendicular to the flow, respectively. The technique is illustrated for the case of viscous fingering. Using suitable image processing protocols, both the morphology and the movement of the two-fluid interface were quantitatively estimated. Furthermore, a macroscopic parameter such as the displacement efficiency obtained from a microscopic (pore-scale) analysis demonstrates the versatility and usefulness of the method.

  6. Downwind hazard calculations for space shuttle launches at Kennedy Space Center and Vandenberg Air Force Base

    NASA Technical Reports Server (NTRS)

    Susko, M.; Hill, C. K.; Kaufman, J. W.

    1974-01-01

    Quantitative estimates are presented of the pollutant concentrations associated with the emission of the major combustion products (HCl, CO, and Al2O3) to the lower atmosphere during normal launches of the space shuttle. The NASA/MSFC Multilayer Diffusion Model was used to obtain these calculations. Results are presented for nine sets of typical meteorological conditions at Kennedy Space Center, including fall, spring, and a sea-breeze condition, and six sets at Vandenberg AFB. In none of the selected typical meteorological regimes was a 10-min limit of 4 ppm exceeded.

  7. Quantitative coronary angiography using image recovery techniques for background estimation in unsubtracted images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Jerry T.; Kamyar, Farzad; Molloi, Sabee

    2007-10-15

    Densitometry measurements have been performed previously using subtracted images. However, digital subtraction angiography (DSA) in coronary angiography is highly susceptible to misregistration artifacts due to the temporal separation of background and target images. Misregistration artifacts due to respiration and patient motion occur frequently, and organ motion is unavoidable. Quantitative densitometric techniques would be more clinically feasible if they could be implemented using unsubtracted images. The goal of this study is to evaluate image recovery techniques for densitometry measurements using unsubtracted images. A humanoid phantom and eight swine (25-35 kg) were used to evaluate the accuracy and precision of the following image recovery techniques: local averaging (LA), morphological filtering (MF), linear interpolation (LI), and curvature-driven diffusion image inpainting (CDD). Images of iodinated vessel phantoms placed over the heart of the humanoid phantom or swine were acquired. In addition, coronary angiograms were obtained after power injections of a nonionic iodinated contrast solution in an in vivo swine study. Background signals were estimated and removed with LA, MF, LI, and CDD. Iodine masses in the vessel phantoms were quantified and compared to known amounts. Moreover, the total iodine in left anterior descending arteries was measured and compared with DSA measurements. In the humanoid phantom study, the average root mean square errors associated with quantifying iodine mass using LA and MF were approximately 6% and 9%, respectively. The corresponding average root mean square errors associated with quantifying iodine mass using LI and CDD were both approximately 3%. In the in vivo swine study, the root mean square errors associated with quantifying iodine in the vessel phantoms with LA and MF were approximately 5% and 12%, respectively. The corresponding average root mean square errors using LI and CDD were both 3%. The standard deviations in the differences between measured iodine mass in left anterior descending arteries using DSA and LA, MF, LI, or CDD were calculated. The standard deviations in the DSA-LA and DSA-MF differences (both approximately 21 mg) were approximately a factor of 3 greater than those of the DSA-LI and DSA-CDD differences (both approximately 7 mg). Local averaging and morphological filtering were considered inadequate for use in quantitative densitometry. Linear interpolation and curvature-driven diffusion image inpainting were found to be effective techniques for use with densitometry in quantifying iodine mass in vitro and in vivo. They can be used with unsubtracted images to estimate background anatomical signals and obtain accurate densitometry results. The high level of accuracy and precision in quantification associated with using LI and CDD suggests the potential of these techniques in applications where background mask images are difficult to obtain, such as lumen volume and blood flow quantification using coronary arteriography.
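
    Of the four techniques, linear interpolation is the simplest to illustrate: the background under the vessel is estimated from the nearest unmasked samples on either side. A one-dimensional sketch with made-up numbers:

        import numpy as np

        def li_background(profile, vessel_mask):
            """Estimate background under a vessel by linear interpolation (LI).

            profile: 1-D intensity profile across the vessel
            vessel_mask: boolean array, True inside the vessel region
            The masked samples are replaced by a straight line between the
            nearest background samples on either side.
            """
            x = np.arange(profile.size)
            bg = profile.astype(float).copy()
            bg[vessel_mask] = np.interp(x[vessel_mask], x[~vessel_mask],
                                        profile[~vessel_mask])
            return bg

        # hypothetical profile: smooth anatomy plus an iodinated vessel signal
        profile = np.linspace(100, 110, 21)
        profile[8:13] += 40.0                     # vessel signal
        mask = np.zeros(21, bool)
        mask[8:13] = True
        iodine_signal = profile - li_background(profile, mask)
        print(iodine_signal[8:13])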

  8. Application of common y-intercept regression parameters for log Kp vs 1/T for predicting gas-particle partitioning in the urban environment

    NASA Astrophysics Data System (ADS)

    Pankow, James F.

    Gas-particle partitioning is examined using a partitioning constant Kp = (F/TSP)/A, where F (ng m^-3) and A (ng m^-3) are the particulate-associated and gas-phase concentrations, respectively, and TSP is the total suspended particulate matter level (μg m^-3). Compound-dependent values of Kp depend on temperature (T) according to Kp = mp/T + bp. Limitations in data quality can cause errors in estimates of mp and bp obtained by simple linear regression (SLR). However, within a group of similar compounds, the bp values will be similar. By pooling data, an improved set of mp and a single bp can be obtained by common y-intercept regression (CYIR). SLR estimates for mp and bp for polycyclic aromatic hydrocarbons (PAHs) sorbing to urban Osaka particulate matter are available (Yamasaki et al., 1982, Envir. Sci. Technol. 16, 189-194), as are CYIR estimates for the same particulate matter (Pankow, 1991, Atmospheric Environment 25A, 2229-2239). In this work, a comparison was conducted of the ability of these two sets of mp and bp to predict A/F ratios for PAHs based on measured T and TSP values for data obtained in other urban locations, specifically: (1) in and near the Baltimore Harbor Tunnel by Benner (1988, Ph.D. thesis, University of Maryland) and Benner et al. (1989, Envir. Sci. Technol. 23, 1269-1278); and (2) in Chicago by Cotham (1990, Ph.D. thesis, University of South Carolina). In general, the CYIR estimates for mp and bp obtained for Osaka particulate matter were found to be at least as reliable as, and for some compounds more reliable than, their SLR counterparts in predicting gas-particle ratios for PAHs. This result provides further evidence of the utility of the CYIR approach in quantitating the dependence of log Kp values on 1/T.
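
    CYIR itself is an ordinary least-squares problem with a block design matrix: one slope column per compound plus a single shared intercept column. A minimal sketch with fabricated log Kp vs 1/T data for two compounds:

        import numpy as np

        def cyir(x_groups, y_groups):
            """Common y-intercept regression: fit y = m_i * x + b with one
            slope m_i per compound and a single shared intercept b."""
            n = sum(len(x) for x in x_groups)
            k = len(x_groups)
            X = np.zeros((n, k + 1))
            y = np.concatenate(y_groups)
            row = 0
            for i, x in enumerate(x_groups):
                X[row:row + len(x), i] = x      # slope column for compound i
                row += len(x)
            X[:, k] = 1.0                       # shared intercept column
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return beta[:k], beta[k]            # (m_1..m_k, b)

        # fabricated log Kp vs 1/T data for two PAHs with similar intercepts
        invT = np.array([1 / 280.0, 1 / 290.0, 1 / 300.0])
        slopes, b = cyir([invT, invT],
                         [4500 * invT - 18.9, 5200 * invT - 19.1])
        print(slopes, b)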

  9. Decoding 2D-PAGE complex maps: relevance to proteomics.

    PubMed

    Pietrogrande, Maria Chiara; Marchetti, Nicola; Dondi, Francesco; Righetti, Pier Giorgio

    2006-03-20

    This review describes two mathematical approaches useful for decoding the complex signal of 2D-PAGE maps of protein mixtures. These methods are helpful for interpreting the large amount of data in each 2D-PAGE map by extracting the analytical information hidden in it by spot overlapping. Here the basic theory and application to 2D-PAGE maps are reviewed: the means for extracting information from the experimental data and their relevance to proteomics are discussed. One method is based on the quantitative theory of the statistical model of peak overlapping (SMO), using the spot experimental data (intensity and spatial coordinates). The second method is based on the study of the 2D-autocovariance function (2D-ACVF) computed on the experimental digitised map. They are two independent methods that are able to extract equal and complementary information from the 2D-PAGE map. Both methods make it possible to obtain fundamental information on the sample complexity and the separation performance and to single out ordered patterns present in spot positions: the availability of two independent procedures to compute the same separation parameters is a powerful tool for estimating the reliability of the obtained results. The SMO procedure is a unique tool to quantitatively estimate the degree of spot overlapping present in the map, while the 2D-ACVF method is particularly powerful in singling out the presence of order in the spot positions (i.e., spot trains) from the complexity of the whole 2D map. The procedures were validated by extensive numerical computation on computer-generated maps describing experimental 2D-PAGE gels of protein mixtures. Their applicability to real samples was tested on reference maps obtained from literature sources. The review describes the most relevant information for proteomics: sample complexity, separation performance, overlapping extent, and identification of spot trains related to post-translational modifications (PTMs).

  10. [Detecting the moisture content of forest surface soil based on microwave remote sensing technology].

    PubMed

    Li, Ming Ze; Gao, Yuan Ke; Di, Xue Ying; Fan, Wen Yi

    2016-03-01

    The moisture content of forest surface soil is an important parameter in forest ecosystems, and rapid, accurate estimation of it by microwave remote sensing technology is of practical significance for forest ecosystem research. With the aid of a TDR-300 soil moisture measuring instrument, the moisture contents of forest surface soils in 120 sample plots at the Tahe Forestry Bureau of the Daxing'anling region in Heilongjiang Province were measured. Taking the moisture content of forest surface soil as the dependent variable and the polarization decomposition parameters of C-band Quad-pol SAR data as independent variables, two types of quantitative estimation models (a multilinear regression model and a BP-neural network model) for predicting the moisture content of forest surface soils were developed. The spatial distribution of the moisture content of forest surface soil on the regional scale was then derived by model inversion. Results showed that the model precision was 86.0% and 89.4%, with RMSE of 3.0% and 2.7%, for the multilinear regression model and the BP-neural network model, respectively. This indicated that the BP-neural network model performed better than the multilinear regression model in quantitative estimation of the moisture content of forest surface soil. The spatial distribution of forest surface soil moisture content in the study area was then obtained using the BP-neural network model with the Quad-pol SAR data.

  11. Model Mismatch Paradigm for Probe based Nanoscale Imaging

    NASA Astrophysics Data System (ADS)

    Agarwal, Pranav

    Scanning Probe Microscopes (SPMs) are widely used for investigation of material properties and manipulation of matter at the nanoscale. These instruments are considered critical enablers of nanotechnology by providing the only technique for direct observation of dynamics at the nanoscale and affecting it with sub Angstrom resolution. Current SPMs are limited by low throughput and lack of quantitative measurements of material properties. Various applications like the high density data storage, sub-20 nm lithography, fault detection and functional probing of semiconductor circuits, direct observation of dynamical processes involved in biological samples viz. motor proteins and transport phenomena in various materials demand high throughput operation. Researchers involved in material characterization at nanoscale are interested in getting quantitative measurements of stiffness and dissipative properties of various materials in a least invasive manner. In this thesis, system theoretic concepts are used to address these limitations. The central tenet of the thesis is to model, the known information about the system and then focus on perturbations of these known dynamics or model, to sense the effects due to changes in the environment such as changes in material properties or surface topography. Thus a model mismatch paradigm for probe based nanoscale imaging is developed. The topic is developed by presenting physics based modeling of a particular mode of operation of SPMs called the dynamic mode operation. This mode is modeled as a forced Lure system where a linear time invariant system is in feedback with an unknown static memoryless nonlinearity. Tools from averaging theory are used to tame this complex nonlinear system by approximating it as a linear system with time varying parameters. Material properties are thus transformed from being parameters of unknown nonlinear functions to being unknown coefficients of a linear plant. The first contribution of this thesis deals with real time detection and reduction of spurious areas in the image which are also known as probe-loss areas. These areas become severely detrimental during high speed operations. The detection strategy is based on thresholding of a distance measure, which captures the difference between sensor models in absence and presence of probe-loss. A switching gain control strategy based on the output of a Kalman Filter is used to reduce probe-loss areas in real time. The efficacy of this technique is demonstrated through experimental results showing increased image fidelity at scan rates that are 10 times faster than conventional scan rates. The second contribution of this thesis deals with developing multi-frequency input excitation strategy and deriving a bias compensated adaptive parameter estimation strategy to determine the instantaneous equivalent cantilever model. This is used to address the challenge of quantitative imaging at high bandwidth operation by relating the estimated plant coefficients to conservative and dissipative components of tip-sample interaction. The efficacy of the technique is demonstrated for quantitative material characterization of a polymer sample, resulting in material information not previously obtainable during dynamic mode operation. This information is obtained at speeds which are two orders faster than existing techniques. Quantitative verification strategies for the accuracy of estimated parameters are presented. 
The third contribution of this thesis deals with developing real time tractable models and characterization methodology for an electrostatically actuated MEMS cantilever with an integrated solid state thermal sensor. Appropriate modeling assumptions are made to delineate various nonlinear forces on the cantilever viz. electrostatic force, tip-sample interaction force and capacitive coupling. Experimental strategy is presented to measure the thermal sensing transfer function from DC-100kHz. A quantitative match between experimental and simulated data is obtained for the large range nonlinearities and small signal dynamics.

  12. RIVER LEVEL ESTIMATION USING ARTIFICIAL NEURAL NETWORK FOR URBAN SMALL RIVER IN TIDAL REACH

    NASA Astrophysics Data System (ADS)

    Takasaki, Tadakatsu; Kawamura, Akira; Amaguchi, Hideo

    Prediction of water levels in small rivers is of great interest for flood control in urban areas located near the river mouth. Tidal river water levels are affected not only by flood discharge but also by tide, atmospheric pressure, and wind direction and speed. We propose a method of estimating the river water level considering these factors, using an artificial neural network model for the Kanda River located in the center of Tokyo. The effects of these factors are quantitatively investigated. Regarding atmospheric pressure, the river water level rises about 7 cm per 5 hPa increase in pressure, regardless of river discharge, under conditions of 1 m/s wind speed and a north wind direction. An accurate rating curve for the tidal river is finally obtained.

  13. Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors

    PubMed Central

    Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin

    2018-01-01

    Disparity calculation is crucial for binocular sensor ranging. The disparity estimation based on edges is an important branch in the research of sparse stereo matching and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on the semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which can improve the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, some dense stereo matching methods, and the advanced edge-based method respectively. Experiments show that our method can provide superior performance on the above comparison. PMID:29614028

  14. Time-of-flight PET time calibration using data consistency

    NASA Astrophysics Data System (ADS)

    Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan

    2018-05-01

    This paper presents new data driven methods for the time of flight (TOF) calibration of positron emission tomography (PET) scanners. These methods are derived from the consistency condition for TOF PET; they can be applied to data measured with an arbitrary tracer distribution and are numerically efficient because they do not require a preliminary image reconstruction from the non-TOF data. Two-dimensional simulations are presented for one of the methods, which involves only the first two moments of the data with respect to the TOF variable. The numerical results show that this method estimates the detector timing offsets with errors that are larger than those obtained via an initial non-TOF reconstruction, but that remain smaller than the TOF resolution and thereby have a limited impact on the quantitative accuracy of the activity image estimated with standard maximum likelihood reconstruction algorithms.

  15. Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors.

    PubMed

    Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin

    2018-04-03

    Disparity calculation is crucial for binocular sensor ranging. The disparity estimation based on edges is an important branch in the research of sparse stereo matching and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on the semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which can improve the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, some dense stereo matching methods, and the advanced edge-based method respectively. Experiments show that our method can provide superior performance on the above comparison.

  16. Uncertainty of quantitative microbiological methods of pharmaceutical analysis.

    PubMed

    Gunar, O V; Sakhno, N G

    2015-12-30

    The total uncertainty of quantitative microbiological methods used in pharmaceutical analysis consists of several components. Analysis of the most important sources of variability in quantitative microbiological methods demonstrated no effect of culture media or plate-count techniques on the estimation of microbial count, while a highly significant effect of other factors (type of microorganism, pharmaceutical product, and individual reading and interpreting errors) was established. The most appropriate method of statistical analysis of such data was ANOVA, which enabled not only the effects of individual factors but also their interactions to be estimated. Considering all the elements of uncertainty and combining them mathematically, the combined relative uncertainty of the test results was estimated both for the method of quantitative examination of non-sterile pharmaceuticals and for the microbial count technique without any product. These values did not exceed 35%, appropriate for traditional plate-count methods. Copyright © 2015 Elsevier B.V. All rights reserved.
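
    Combining the elements of uncertainty "mathematically" is conventionally done in quadrature, i.e., as the root sum of squares of the relative components. A sketch with hypothetical component values:

        import numpy as np

        # combined relative uncertainty from independent components (%),
        # hypothetical values for a plate-count method
        components = {"microorganism type": 20.0,
                      "pharmaceutical product": 15.0,
                      "reading/interpreting": 18.0}
        u_combined = np.sqrt(sum(u ** 2 for u in components.values()))
        print(f"combined relative uncertainty = {u_combined:.1f}%")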

  17. Semi-quantitative estimation of cellular SiO2 nanoparticles using flow cytometry combined with X-ray fluorescence measurements.

    PubMed

    Choi, Seo Yeon; Yang, Nuri; Jeon, Soo Kyung; Yoon, Tae Hyun

    2014-09-01

    In this study, we have demonstrated the feasibility of a semi-quantitative approach for the estimation of cellular SiO2 nanoparticles (NPs), based on flow cytometry measurements of their normalized side scattering intensity. In order to improve our understanding of the quantitative aspects of cell-nanoparticle interactions, flow cytometry, transmission electron microscopy, and X-ray fluorescence experiments were carefully performed on HeLa cells exposed to SiO2 NPs with different core diameters, hydrodynamic sizes, and surface charges. Based on the observed relationships among the experimental data, a method for semi-quantitative estimation of cellular SiO2 NPs from their normalized side scattering and core diameters was proposed, which can be applied for the determination of cellular SiO2 NPs within their size-dependent linear ranges. © 2014 International Society for Advancement of Cytometry.

  18. Detection of sea otters in boat-based surveys of Prince William Sound, Alaska

    USGS Publications Warehouse

    Udevitz, Mark S.; Bodkin, James L.; Costa, Daniel P.

    1995-01-01

    Boat-based surveys have been commonly used to monitor sea otter populations, but there has been little quantitative work to evaluate detection biases that may affect these surveys. We used ground-based observers to investigate sea otter detection probabilities in a boat-based survey of Prince William Sound, Alaska. We estimated that 30% of the otters present on surveyed transects were not detected by boat crews. Approximately half (53%) of the undetected otters were missed because the otters left the transects, apparently in response to the approaching boat. Unbiased estimates of detection probabilities will be required for obtaining unbiased population estimates from boat-based surveys of sea otters. Therefore, boat-based surveys should include methods to estimate sea otter detection probabilities under the conditions specific to each survey. Unbiased estimation of detection probabilities with ground-based observers requires either that the ground crews detect all of the otters in observed subunits, or that there are no errors in determining which crews saw each detected otter. Ground-based observer methods may be appropriate in areas where nearly all of the sea otter habitat is potentially visible from ground-based vantage points.

  19. Acute Gastrointestinal Illness Risks in North Carolina Community Water Systems: A Methodological Comparison.

    PubMed

    DeFelice, Nicholas B; Johnston, Jill E; Gibson, Jacqueline MacDonald

    2015-08-18

    The magnitude and spatial variability of acute gastrointestinal illness (AGI) cases attributable to microbial contamination of U.S. community drinking water systems are not well characterized. We compared three approaches (drinking water attributable risk, quantitative microbial risk assessment, and population intervention model) to estimate the annual number of emergency department visits for AGI attributable to microorganisms in North Carolina community water systems. All three methods used 2007-2013 water monitoring and emergency department data obtained from state agencies. The drinking water attributable risk method, which was the basis for previous U.S. Environmental Protection Agency national risk assessments, estimated that 7.9% of annual emergency department visits for AGI are attributable to microbial contamination of community water systems. However, the other methods' estimates were more than 2 orders of magnitude lower, each attributing 0.047% of annual emergency department visits for AGI to community water system contamination. The differences in results between the drinking water attributable risk method, which has been the main basis for previous national risk estimates, and the other two approaches highlight the need to improve methods for estimating endemic waterborne disease risks, in order to prioritize investments to improve community drinking water systems.

  20. The Effect of Pickling on Blue Borscht Gelatin and Other Interesting Diffusive Phenomena.

    ERIC Educational Resources Information Center

    Davis, Lawrence C.; Chou, Nancy C.

    1998-01-01

    Presents some simple demonstrations that students can construct for themselves in class to learn the difference between diffusion and convection rates. Uses cabbage leaves and gelatin and focuses on diffusion in ungelified media, a quantitative diffusion estimate with hydroxyl ions, and a quantitative diffusion estimate with photons. (DDR)

  1. Accuracy of commercially available c-reactive protein rapid tests in the context of undifferentiated fevers in rural Laos.

    PubMed

    Phommasone, Koukeo; Althaus, Thomas; Souvanthong, Phonesavanh; Phakhounthong, Khansoudaphone; Soyvienvong, Laxoy; Malapheth, Phatthaphone; Mayxay, Mayfong; Pavlicek, Rebecca L; Paris, Daniel H; Dance, David; Newton, Paul; Lubell, Yoel

    2016-02-04

    C-Reactive Protein (CRP) has been shown to be an accurate biomarker for discriminating bacterial from viral infections in febrile patients in Southeast Asia. Here we investigate the accuracy of existing rapid qualitative and semi-quantitative tests as compared with a quantitative reference test, to assess their potential for use in remote tropical settings. Blood samples were obtained from consecutive patients recruited to a prospective fever study at three sites in rural Laos. At each site, one of three rapid qualitative or semi-quantitative tests was performed, as well as a corresponding quantitative NycoCard Reader II as a reference test. We estimated the sensitivity and specificity of the three tests against a threshold of 10 mg/L, and kappa values for the agreement of the two semi-quantitative tests with the results of the reference test. All three tests showed high sensitivity, specificity, and kappa values as compared with the NycoCard Reader II. With a threshold of 10 mg/L, the sensitivity of the tests ranged from 87% to 98% and the specificity from 91% to 98%. The weighted kappa values for the semi-quantitative tests were 0.7 and 0.8. The use of CRP rapid tests could offer an inexpensive and effective approach to improving the targeting of antibiotics in remote settings where health facilities are basic and laboratories are absent. This study demonstrates that accurate CRP rapid tests are commercially available; evaluations of their clinical impact and cost-effectiveness at the point of care are warranted.
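
    Estimating sensitivity and specificity against a quantitative reference dichotomized at 10 mg/L is a direct two-by-two computation. A sketch with invented paired results (not the Laos study data):

        import numpy as np

        def sens_spec(rapid_pos, reference_mg_l, threshold=10.0):
            """Sensitivity/specificity of a rapid test against a quantitative
            reference dichotomized at `threshold` (mg/L)."""
            truth = np.asarray(reference_mg_l) >= threshold
            test = np.asarray(rapid_pos, bool)
            sens = (test & truth).sum() / truth.sum()
            spec = (~test & ~truth).sum() / (~truth).sum()
            return sens, spec

        # hypothetical paired results: rapid test calls and reference values
        rapid = [1, 1, 0, 1, 0, 0, 1, 0]
        ref = [25.0, 14.2, 8.1, 11.5, 3.0, 9.9, 40.0, 6.5]
        print(sens_spec(rapid, ref))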

  2. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    PubMed

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method, derived from the item count technique, that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, had not been studied before. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
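
    The IST point estimator is simple to state: one random subsample reports the sum over a long list (control items plus the sensitive quantitative item), the other over a short list (controls only), and the difference of mean reported sums estimates the sensitive mean. A sketch under these textbook assumptions:

        import numpy as np

        def ist_mean(long_list_sums, short_list_sums):
            """Item sum technique point estimate and standard error.

            One subsample reports the sum of control items plus the sensitive
            item (long list); the other reports control items only (short
            list). The difference of means estimates the sensitive mean.
            """
            a = np.asarray(long_list_sums, float)
            b = np.asarray(short_list_sums, float)
            est = a.mean() - b.mean()
            se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
            return est, se

        # hypothetical reported sums for the two subsamples
        rng = np.random.default_rng(4)
        long_sums = rng.poisson(12, 300)   # controls + sensitive item
        short_sums = rng.poisson(9, 300)   # controls only
        print(ist_mean(long_sums, short_sums))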

  3. Electromagnetic Model Reliably Predicts Radar Scattering Characteristics of Airborne Organisms

    NASA Astrophysics Data System (ADS)

    Mirkovic, Djordje; Stepanian, Phillip M.; Kelly, Jeffrey F.; Chilson, Phillip B.

    2016-10-01

    The radar scattering characteristics of aerial animals are typically obtained from controlled laboratory measurements of a freshly harvested specimen. These measurements are tedious to perform, difficult to replicate, and typically yield only a small subset of the full azimuthal, elevational, and polarimetric radio scattering data. As an alternative, biological applications of radar often assume that the radar cross sections of flying animals are isotropic, since sophisticated computer models are required to estimate the 3D scattering properties of objects having complex shapes. Using the method of moments implemented in the WIPL-D software package, we show for the first time that such electromagnetic modeling techniques (typically applied to man-made objects) can accurately predict organismal radio scattering characteristics from an anatomical model: here the Brazilian free-tailed bat (Tadarida brasiliensis). The simulated scattering properties of the bat agree with controlled measurements and radar observations made during a field study of bats in flight. This numerical technique can produce the full angular set of quantitative polarimetric scattering characteristics, while eliminating many practical difficulties associated with physical measurements. Such a modeling framework can be applied for bird, bat, and insect species, and will help drive a shift in radar biology from a largely qualitative and phenomenological science toward quantitative estimation of animal densities and taxonomic identification.

  4. Electromagnetic Model Reliably Predicts Radar Scattering Characteristics of Airborne Organisms

    PubMed Central

    Mirkovic, Djordje; Stepanian, Phillip M.; Kelly, Jeffrey F.; Chilson, Phillip B.

    2016-01-01

    The radar scattering characteristics of aerial animals are typically obtained from controlled laboratory measurements of a freshly harvested specimen. These measurements are tedious to perform, difficult to replicate, and typically yield only a small subset of the full azimuthal, elevational, and polarimetric radio scattering data. As an alternative, biological applications of radar often assume that the radar cross sections of flying animals are isotropic, since sophisticated computer models are required to estimate the 3D scattering properties of objects having complex shapes. Using the method of moments implemented in the WIPL-D software package, we show for the first time that such electromagnetic modeling techniques (typically applied to man-made objects) can accurately predict organismal radio scattering characteristics from an anatomical model: here the Brazilian free-tailed bat (Tadarida brasiliensis). The simulated scattering properties of the bat agree with controlled measurements and radar observations made during a field study of bats in flight. This numerical technique can produce the full angular set of quantitative polarimetric scattering characteristics, while eliminating many practical difficulties associated with physical measurements. Such a modeling framework can be applied for bird, bat, and insect species, and will help drive a shift in radar biology from a largely qualitative and phenomenological science toward quantitative estimation of animal densities and taxonomic identification. PMID:27762292

  5. Quantitative assessment of cervical vertebral maturation using cone beam computed tomography in Korean girls.

    PubMed

    Byun, Bo-Ram; Kim, Yong-Il; Yamaguchi, Tetsutaro; Maki, Koutaro; Son, Woo-Sung

    2015-01-01

    This study aimed to examine the correlation between skeletal maturation status and parameters from the odontoid process/body of the second cervical vertebra and the bodies of the third and fourth cervical vertebrae, and to build multiple regression models for estimating skeletal maturation status in Korean girls. Hand-wrist radiographs and cone beam computed tomography (CBCT) images were obtained from 74 Korean girls (6-18 years of age). CBCT-generated cervical vertebral maturation (CVM) was used to demarcate the odontoid process and the body of the second cervical vertebra, based on the dentocentral synchondrosis. Correlation coefficient analysis and multiple linear regression analysis were used for each parameter of the cervical vertebrae (P < 0.05). Forty-seven of 64 parameters from CBCT-generated CVM (independent variables) exhibited statistically significant correlations (P < 0.05). The multiple regression model with the greatest R2 had six parameters (PH2/W2, UW2/W2, (OH+AH2)/LW2, UW3/LW3, D3, and H4/W4) as independent variables, with a variance inflation factor (VIF) of <2. CBCT-generated CVM made it possible to include parameters from both the second cervical vertebral body and the odontoid process in the multiple regression models. This suggests that quantitative analysis might be used to estimate skeletal maturation status.

  6. Electromagnetic Model Reliably Predicts Radar Scattering Characteristics of Airborne Organisms.

    PubMed

    Mirkovic, Djordje; Stepanian, Phillip M; Kelly, Jeffrey F; Chilson, Phillip B

    2016-10-20

    The radar scattering characteristics of aerial animals are typically obtained from controlled laboratory measurements of a freshly harvested specimen. These measurements are tedious to perform, difficult to replicate, and typically yield only a small subset of the full azimuthal, elevational, and polarimetric radio scattering data. As an alternative, biological applications of radar often assume that the radar cross sections of flying animals are isotropic, since sophisticated computer models are required to estimate the 3D scattering properties of objects having complex shapes. Using the method of moments implemented in the WIPL-D software package, we show for the first time that such electromagnetic modeling techniques (typically applied to man-made objects) can accurately predict organismal radio scattering characteristics from an anatomical model: here the Brazilian free-tailed bat (Tadarida brasiliensis). The simulated scattering properties of the bat agree with controlled measurements and radar observations made during a field study of bats in flight. This numerical technique can produce the full angular set of quantitative polarimetric scattering characteristics, while eliminating many practical difficulties associated with physical measurements. Such a modeling framework can be applied for bird, bat, and insect species, and will help drive a shift in radar biology from a largely qualitative and phenomenological science toward quantitative estimation of animal densities and taxonomic identification.

  7. Radar-derived Quantitative Precipitation Estimation in Complex Terrain over the Eastern Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Gou, Y.

    2017-12-01

    Quantitative Precipitation Estimation (QPE) is one of the important applications of weather radar. However, in complex terrain such as the Tibetan Plateau, it is challenging to obtain an optimal Z-R relation due to the complex space-time variability of precipitation microphysics. This paper develops two radar QPE schemes, based respectively on Reflectivity Threshold (RT) and Storm Cell Identification and Tracking (SCIT) algorithms, using observations from 11 Doppler weather radars and 3294 rain gauges over the Eastern Tibetan Plateau (ETP). These two QPE methodologies are evaluated extensively using four precipitation events that are characterized by different meteorological features. Precipitation characteristics of the independent storm cells associated with these four events, as well as the storm-scale differences, are investigated using short-term vertical profiles of reflectivity clusters. Evaluation results show that the SCIT-based rainfall approach performs better than the simple RT-based method in all precipitation events when scored against validation gauge measurements, showing higher correlation in 75.74%, lower mean absolute error in 82.38%, and lower root-mean-square error in 89.04% of all the comparative frames. It is also found that the SCIT-based approach can effectively mitigate local radar QPE errors and represents the spatiotemporal variability of precipitation better than the RT-based scheme.
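
    Radar QPE ultimately rests on inverting a Z-R power law, Z = aR^b. A minimal sketch follows; the coefficients a = 300, b = 1.4 are a common convective default, not the relations tuned for the ETP study.

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=300.0, b=1.4):
    """Invert Z = a * R**b; Z in mm^6/m^3 (from dBZ), R in mm/h."""
    z_linear = 10.0 ** (dbz / 10.0)   # dBZ -> linear reflectivity factor
    return (z_linear / a) ** (1.0 / b)

print(rain_rate_from_dbz(np.array([20.0, 35.0, 50.0])))  # light -> heavy rain
```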

  8. Influence of echo time in quantitative proton MR spectroscopy using LCModel.

    PubMed

    Yamamoto, Tetsuya; Isobe, Tomonori; Akutsu, Hiroyoshi; Masumoto, Tomohiko; Ando, Hiroki; Sato, Eisuke; Takada, Kenta; Anno, Izumi; Matsumura, Akira

    2015-06-01

    The objective of this study was to elucidate the influence on quantitative analysis using LCModel when the echo time (TE) is longer than the values recommended in the spectrum acquisition specifications. A 3T magnetic resonance system was used to perform proton magnetic resonance spectroscopy. The participants were 5 healthy volunteers and 11 patients with glioma. Data were collected at TEs of 72, 144 and 288 ms. LCModel was used to quantify several metabolites (N-acetylaspartate, creatine and phosphocreatine, and choline-containing compounds). The results were compared with quantitative values obtained using the T2-corrected internal reference method. In healthy volunteers, at long TE, the quantitative values obtained using LCModel were up to 6.8-fold larger (p<0.05) than those obtained using the T2-corrected internal reference method. The ratios of the quantitative values obtained by the two methods differed between metabolites (p<0.05). In patients with glioma, the ratios of quantitative values obtained by the two methods tended to be larger at longer TE, as in healthy volunteers, and large between-individual variation in the ratios was observed. In clinical practice, TE is sometimes set longer than the value recommended for LCModel. If TE is long, LCModel overestimates the quantitative value since it cannot compensate for signal attenuation, and this effect differs for each metabolite and condition. Therefore, if TE is longer than recommended, it is necessary to account for the possibly reduced reliability of quantitative values calculated using LCModel. Copyright © 2015 Elsevier Inc. All rights reserved.
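
    The overestimation mechanism described here comes from mono-exponential T2 decay, S(TE) = S(0)·exp(-TE/T2): at long TE each metabolite loses signal at its own rate, which LCModel alone does not undo. A sketch of the T2 correction; the T2 value used is illustrative, not from the study.

```python
import math

def t2_corrected(apparent_value, te_ms, t2_ms):
    """Undo mono-exponential T2 decay: S(TE) = S(0) * exp(-TE / T2)."""
    return apparent_value * math.exp(te_ms / t2_ms)

# Illustrative only: T2 differs per metabolite, tissue and field strength.
for te in (72, 144, 288):
    print(te, "ms ->", round(t2_corrected(1.0, te, t2_ms=250.0), 2))
```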

  9. The use of simulation and multiple environmental tracers to quantify groundwater flow in a shallow aquifer

    USGS Publications Warehouse

    Reilly, Thomas E.; Plummer, Niel; Phillips, Patrick J.; Busenberg, Eurybiades

    1994-01-01

    Measurements of the concentrations of chlorofluorocarbons (CFCs), tritium, and other environmental tracers can be used to calculate recharge ages of shallow groundwater and estimate rates of groundwater movement. Numerical simulation also provides quantitative estimates of flow rates, flow paths, and mixing properties of the groundwater system. The environmental tracer techniques and the hydraulic analyses each contribute to the understanding and quantification of the flow of shallow groundwater. However, when combined, the two methods provide feedback that improves the quantification of the flow system and provides insight into the processes that are the most uncertain. A case study near Locust Grove, Maryland, is used to investigate the utility of combining groundwater age dating, based on CFCs and tritium, and hydraulic analyses using numerical simulation techniques. The results of the feedback between an advective transport model and the estimates of groundwater ages determined by the CFCs improve a quantitative description of the system by refining the system conceptualization and estimating system parameters. The plausible system developed with this feedback between the advective flow model and the CFC ages is further tested using a solute transport simulation to reproduce the observed tritium distribution in the groundwater. The solute transport simulation corroborates the plausible system developed and also indicates that, for the system under investigation with the data obtained from 0.9-m-long (3-foot-long) well screens, the hydrodynamic dispersion is negligible. Together the two methods enable a coherent explanation of the flow paths and rates of movement while indicating weaknesses in the understanding of the system that will require future data collection and conceptual refinement of the groundwater system.

  10. An empirical Bayes method for updating inferences in analysis of quantitative trait loci using information from related genome scans.

    PubMed

    Zhang, Kui; Wiener, Howard; Beasley, Mark; George, Varghese; Amos, Christopher I; Allison, David B

    2006-08-01

    Individual genome scans for quantitative trait loci (QTL) mapping often suffer from low statistical power and imprecise estimates of QTL location and effect. This lack of precision yields large confidence intervals for QTL location, which are problematic for subsequent fine mapping and positional cloning. In prioritizing areas for follow-up after an initial genome scan and in evaluating the credibility of apparent linkage signals, investigators typically examine the results of other genome scans of the same phenotype and informally update their beliefs about which linkage signals in their scan most merit confidence and follow-up via a subjective-intuitive integration approach. A method that acknowledges the wisdom of this general paradigm but formally borrows information from other scans to increase confidence and objectivity would be of benefit. We developed an empirical Bayes analytic method to integrate information from multiple genome scans. The linkage statistic obtained from a single genome scan study is updated by incorporating statistics from other genome scans as prior information. This technique does not require that all studies have an identical marker map or a common estimated QTL effect. The updated linkage statistic can then be used for the estimation of QTL location and effect. We evaluated the performance of our method using extensive simulations based on actual marker spacing and allele frequencies from available data. Results indicate that the empirical Bayes method can account for between-study heterogeneity, estimate the QTL location and effect more precisely, and provide narrower confidence intervals than results from any single individual study. We also compared the empirical Bayes method with a method originally developed for meta-analysis (a closely related but distinct purpose). In the face of marked heterogeneity among studies, the empirical Bayes method outperforms the comparator.
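
    As a loose illustration of borrowing strength across scans (not the authors' algorithm), a normal-normal empirical Bayes update shrinks one scan's linkage statistic toward the mean of related scans; all numbers are hypothetical.

```python
import numpy as np

def eb_update(x, se, other_scans):
    """Shrink a scan's statistic x (standard error se) toward the mean
    of statistics from related scans (normal-normal model)."""
    mu = np.mean(other_scans)
    tau2 = max(np.var(other_scans, ddof=1) - se**2, 0.0)  # between-scan var
    w = tau2 / (tau2 + se**2)                             # shrinkage weight
    return w * x + (1.0 - w) * mu

print(round(eb_update(3.1, se=0.5, other_scans=[1.8, 2.4, 2.9]), 2))
```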

  11. Molecular Quantification of Zooplankton Gut Content: The Case For qPCR

    NASA Astrophysics Data System (ADS)

    Frischer, M. E.; Walters, T. L.; Gibson, D. M.; Nejstgaard, J. C.; Troedsson, C.

    2016-02-01

    The ability to obtain in situ information about zooplankton feeding selectivity and rates is vital for understanding the mechanisms structuring marine ecosystems. However, directly estimating zooplankton feeding selection and rates without the biases associated with culturing conditions has been notoriously difficult. A potential approach for addressing this problem is to target prey-specific DNA as a marker for prey ingestion and selection. In this study we report the development of a differential length amplification quantitative PCR (dla-qPCR) assay targeting the 18S rRNA gene to validate the use of a DNA-based approach to quantify consumption of specific plankton prey by the pelagic tunicate (doliolid) Dolioletta gegenbauri. Compared to copepods and other marine animals, digestion of prey genomic DNA inside the gut of doliolids is low. This minimizes potential underestimation and therefore allows prey DNA to be used as an effective indicator of prey consumption. We also present an initial application of the qPCR assay to estimate consumption of specific prey species on the southeastern continental shelf of the U.S., where doliolids bloom stochastically in response to upwelling events. Feeding rates estimated by qPCR were in the same range as those estimated from clearance rates in laboratory feeding studies. In the field, consumption of specific prey, including the centric diatom Thalassiosira spp., was detected in the gut of wild-caught D. gegenbauri at levels consistent with their abundance in the water column at the time of collection. Thus, both experimental and field investigations support the hypothesis that a qPCR approach will be useful for the quantitative investigation of the in situ diet of D. gegenbauri without the biases introduced by cultivation.
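
    Prey-specific qPCR quantification typically reads copy numbers off a standard curve relating threshold cycle (Ct) to log10 copies. A generic sketch with synthetic dilution-series values, not the dla-qPCR assay itself.

```python
import numpy as np

def fit_standard_curve(log10_copies, ct):
    """Linear fit Ct = slope * log10(copies) + intercept."""
    slope, intercept = np.polyfit(log10_copies, ct, 1)
    efficiency = 10.0 ** (-1.0 / slope) - 1.0   # ~1.0 means 100% efficient
    return slope, intercept, efficiency

def copies_from_ct(ct, slope, intercept):
    return 10.0 ** ((ct - intercept) / slope)

log10_copies = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
ct = np.array([30.1, 26.8, 23.4, 20.1, 16.7])    # synthetic dilution series
s, i, eff = fit_standard_curve(log10_copies, ct)
print(f"efficiency={eff:.2f}; Ct 24.0 -> {copies_from_ct(24.0, s, i):.2e} copies")
```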

  12. Simultaneous estimation of diet composition and calibration coefficients with fatty acid signature data

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; Budge, Suzanne M.; Thiemann, Gregory W.; Rode, Karyn D.

    2017-01-01

    Knowledge of animal diets provides essential insights into their life history and ecology, although diet estimation is challenging and remains an active area of research. Quantitative fatty acid signature analysis (QFASA) has become a popular method of estimating diet composition, especially for marine species. A primary assumption of QFASA is that constants called calibration coefficients, which account for the differential metabolism of individual fatty acids, are known. In practice, however, calibration coefficients are not known, but rather have been estimated in feeding trials with captive animals of a limited number of model species. The impossibility of verifying the accuracy of feeding-trial-derived calibration coefficients for estimating the diets of wild animals is a foundational problem with QFASA that has generated considerable criticism. We present a new model that allows simultaneous estimation of diet composition and calibration coefficients based only on fatty acid signature samples from wild predators and potential prey. Our model performed almost flawlessly in four tests with constructed examples, estimating both diet proportions and calibration coefficients with essentially no error. We also applied the model to data from Chukchi Sea polar bears, obtaining diet estimates that were more diverse than estimates conditioned on feeding-trial calibration coefficients. Our model avoids bias in diet estimates caused by conditioning on inaccurate calibration coefficients, invalidates the primary criticism of QFASA, eliminates the need to conduct feeding trials solely for diet estimation, and consequently expands the utility of fatty acid data to investigate aspects of ecology linked to animal diets.
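
    The core QFASA computation is to find diet proportions whose calibration-adjusted mixture of prey signatures is closest to the predator's signature. A least-squares sketch with fixed (hypothetical) calibration coefficients; estimating them jointly, as the paper proposes, requires a richer model.

```python
import numpy as np
from scipy.optimize import minimize

def qfasa_diet(predator, prey_means, cal):
    """Diet proportions minimizing squared distance between the predator's
    calibration-adjusted signature and a convex mix of prey signatures."""
    adjusted = predator / cal
    adjusted /= adjusted.sum()
    k = prey_means.shape[0]

    def loss(p):
        return ((adjusted - p @ prey_means) ** 2).sum()

    res = minimize(loss, np.full(k, 1.0 / k), method="SLSQP",
                   bounds=[(0.0, 1.0)] * k,
                   constraints={"type": "eq", "fun": lambda p: p.sum() - 1.0})
    return res.x

prey = np.array([[0.5, 0.3, 0.2],        # 3 fatty acids, 2 prey types
                 [0.2, 0.3, 0.5]])
cal = np.array([1.1, 0.9, 1.0])          # hypothetical calibration coefficients
pred = (np.array([0.7, 0.3]) @ prey) * cal
pred /= pred.sum()
print(qfasa_diet(pred, prey, cal))       # recovers ~[0.7, 0.3]
```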

  13. Evaluating a multi-criteria model for hazard assessment in urban design. The Porto Marghera case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luria, Paolo; Aspinall, Peter A

    2003-08-01

    The aim of this paper is to describe a new approach to major industrial hazard assessment, which has been recently studied by the authors in conjunction with the Italian Environmental Protection Agency ('ARPAV'). The real opportunity for developing a different approach arose from the need of the Italian EPA to provide the Venice Port Authority with an appropriate estimation of major industrial hazards in Porto Marghera, an industrial estate near Venice (Italy). However, the standard model, the quantitative risk analysis (QRA), only provided a list of individual quantitative risk values, related to single locations. The experimental model is based on a multi-criteria approach--the Analytic Hierarchy Process--which introduces the use of expert opinions, complementary skills and expertise from different disciplines in conjunction with quantitative traditional analysis. This permitted the generation of quantitative data on risk assessment from a series of qualitative assessments, on the present situation and on three other future scenarios, and use of this information as indirect quantitative measures, which could be aggregated for obtaining the global risk rate. This approach is in line with the main concepts proposed by the last European directive on Major Hazard Accidents, which recommends increasing the participation of operators, taking the other players into account and, moreover, paying more attention to the concepts of 'urban control', 'subjective risk' (risk perception) and intangible factors (factors not directly quantifiable).
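
    The Analytic Hierarchy Process derives criterion weights from a reciprocal pairwise-comparison matrix as its principal eigenvector, with a consistency-ratio check. A generic sketch; the matrix entries are hypothetical expert judgments, not those elicited for Porto Marghera.

```python
import numpy as np

def ahp_weights(A):
    """Principal eigenvector of a reciprocal pairwise-comparison matrix."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)        # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
    return w, ci / ri                        # weights, consistency ratio

A = np.array([[1.0, 3.0, 5.0],               # hypothetical judgments
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
print(np.round(w, 3), "CR =", round(cr, 3))  # CR < 0.1 -> acceptably consistent
```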

  14. [Quantitative estimation of vegetation cover and management factor in USLE and RUSLE models by using remote sensing data: a review].

    PubMed

    Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie

    2012-06-01

    Soil loss prediction models such as the universal soil loss equation (USLE) and the revised universal soil loss equation (RUSLE) are useful tools for risk assessment of soil erosion and planning of soil conservation at the regional scale. A rational estimation of the vegetation cover and management factor, the most important parameter in USLE or RUSLE, is particularly important for accurate prediction of soil erosion. Traditional estimation based on field survey and measurement is time-consuming, laborious and costly, and cannot rapidly provide the vegetation cover and management factor at the macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for estimating the vegetation cover and management factor over broad geographic areas. This paper summarizes research findings on the quantitative estimation of the vegetation cover and management factor using remote sensing data, and analyzes the advantages and disadvantages of the various methods, aiming to provide a reference for further research and for the quantitative estimation of the vegetation cover and management factor at large scales.
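
    One widely used remote-sensing parameterization maps NDVI to the C factor via C = exp(-α·NDVI/(β - NDVI)), with α ≈ 2 and β ≈ 1 (van der Knijff et al.). A sketch of that mapping; site-specific calibration is still required in practice.

```python
import numpy as np

def c_factor_from_ndvi(ndvi, alpha=2.0, beta=1.0):
    """C = exp(-alpha * NDVI / (beta - NDVI)), clipped to [0, 1]."""
    c = np.exp(-alpha * ndvi / (beta - ndvi))
    return np.clip(c, 0.0, 1.0)

# Sparse, moderate and dense vegetation (NDVI must stay below beta).
print(c_factor_from_ndvi(np.array([0.1, 0.4, 0.7])))
```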

  15. 7 CFR 1755.401 - Scope.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., a “measurement” shall be defined as an evaluation where quantitative data is obtained (e.g... evaluation where no quantitative data is obtained (e.g., a check mark indicating conformance is usually the...

  16. 7 CFR 1755.401 - Scope.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., a “measurement” shall be defined as an evaluation where quantitative data is obtained (e.g... evaluation where no quantitative data is obtained (e.g., a check mark indicating conformance is usually the...

  17. 7 CFR 1755.401 - Scope.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., a “measurement” shall be defined as an evaluation where quantitative data is obtained (e.g... evaluation where no quantitative data is obtained (e.g., a check mark indicating conformance is usually the...

  18. 7 CFR 1755.401 - Scope.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., a “measurement” shall be defined as an evaluation where quantitative data is obtained (e.g... evaluation where no quantitative data is obtained (e.g., a check mark indicating conformance is usually the...

  19. 7 CFR 1755.401 - Scope.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., a “measurement” shall be defined as an evaluation where quantitative data is obtained (e.g... evaluation where no quantitative data is obtained (e.g., a check mark indicating conformance is usually the...

  20. Relation between modern pollen rain, vegetation and climate in northern China: Implications for quantitative vegetation reconstruction in a steppe environment.

    PubMed

    Ge, Yawen; Li, Yuecong; Bunting, M Jane; Li, Bing; Li, Zetao; Wang, Junting

    2017-05-15

    Vegetation reconstructions from palaeoecological records depend on an adequate understanding of the relationships between modern pollen, vegetation and climate. A key parameter for quantitative vegetation reconstruction is the Relative Pollen Productivity (RPP). Differences in both environmental and methodological factors are known to alter the estimated RPP significantly, making it difficult to determine whether the underlying pollen productivity actually varies, and if so, why. In this paper, we present the results of a replication study for the Bashang steppe region, a typical steppe area in northern China, carried out in 2013 and 2014. In each year, 30 surface samples were collected for pollen analysis, with an accompanying vegetation survey using the "Crackles Bequest Project" methodology. Sampling designs differed slightly between the two years: in 2013, sites were located completely at random, whilst in 2014 sampling locations were constrained to be within a few km of roads. There is strong inter-annual variability in both the pollen and the vegetation spectra, and therefore in RPPs; annual precipitation may be a key influence on these variations. The pollen assemblages in both years are dominated by herbaceous taxa such as Artemisia, Amaranthaceae, Poaceae, Asteraceae, Cyperaceae, Fabaceae and Allium. Artemisia and Amaranthaceae pollen are significantly over-represented relative to their vegetation abundance. Poaceae, Cyperaceae and Fabaceae appear to be under-represented, with correspondingly lower RPPs. Asteraceae seems to be well-represented, with moderate RPPs and less annual variation. The estimated Relevant Source Area of Pollen (RSAP) ranges from 2000 to 3000 m. Different sampling designs affect both RSAP and RPPs, and random sample selection may be the best strategy for obtaining robust estimates. Our results have implications for further pollen-vegetation relationship and quantitative vegetation reconstruction research in typical steppe areas and in other open habitats with strong inter-annual variation. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. A meta-analysis of prospective studies of coffee consumption and mortality for all causes, cancers and cardiovascular diseases.

    PubMed

    Malerba, Stefano; Turati, Federica; Galeone, Carlotta; Pelucchi, Claudio; Verga, Federica; La Vecchia, Carlo; Tavani, Alessandra

    2013-07-01

    Several prospective studies have considered the relation between coffee consumption and mortality. Most studies, however, were underpowered to detect an association, since they included relatively few deaths. To obtain quantitative overall estimates, we combined all published data from prospective studies on the relation of coffee with mortality for all causes, all cancers, cardiovascular disease (CVD), coronary/ischemic heart disease (CHD/IHD) and stroke. A bibliography search, updated to January 2013, was carried out in PubMed and Embase to identify prospective observational studies providing quantitative estimates on mortality from all causes, cancer, CVD, CHD/IHD or stroke in relation to coffee consumption. A systematic review and meta-analysis was conducted to estimate overall relative risks (RR) and 95% confidence intervals (CI) using random-effects models. The pooled RRs of all-cause mortality for the study-specific highest versus low (≤1 cup/day) coffee-drinking categories were 0.88 (95% CI 0.84-0.93) based on all 23 studies, and 0.87 (95% CI 0.82-0.93) for the 19 smoking-adjusting studies. The combined RRs for CVD mortality were 0.89 (95% CI 0.77-1.02, 17 smoking-adjusting studies) for the highest versus low drinking and 0.98 (95% CI 0.95-1.00, 16 studies) for an increment of 1 cup/day. Compared with low drinking, the RRs for the highest consumption of coffee were 0.95 (95% CI 0.78-1.15, 12 smoking-adjusting studies) for CHD/IHD, 0.95 (95% CI 0.70-1.29, 6 studies) for stroke, and 1.03 (95% CI 0.97-1.10, 10 studies) for all cancers. This meta-analysis provides quantitative evidence that coffee intake is inversely related to all-cause and, probably, CVD mortality.
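
    The pooling step behind such estimates is typically random-effects inverse-variance weighting on the log-RR scale (DerSimonian-Laird). A sketch with synthetic study values, not the meta-analysis data.

```python
import numpy as np

def dersimonian_laird(log_rr, se):
    """Random-effects pooled estimate and 95% CI, returned on the RR scale."""
    w = 1.0 / se**2
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)            # Cochran's Q
    df = len(log_rr) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (se**2 + tau2)                    # inflated weights
    pooled = np.sum(w_star * log_rr) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    return np.exp([pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])

log_rr = np.log([0.85, 0.92, 0.81, 0.95])            # synthetic study RRs
se = np.array([0.05, 0.08, 0.10, 0.06])
print(np.round(dersimonian_laird(log_rr, se), 3))    # RR [point, lo, hi]
```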

  2. Quantitation of TGF-beta1 mRNA in porcine mesangial cells by comparative kinetic RT/PCR: comparison with ribonuclease protection assay and in situ hybridization.

    PubMed

    Ceol, M; Forino, M; Gambaro, G; Sauer, U; Schleicher, E D; D'Angelo, A; Anglani, F

    2001-01-01

    Gene expression can be examined with different techniques, including ribonuclease protection assay (RPA), in situ hybridization (ISH), and quantitative reverse transcription-polymerase chain reaction (RT/PCR). These methods differ considerably in their sensitivity and precision in detecting and quantifying low-abundance mRNA. Although there is evidence that RT/PCR can be performed in a quantitative manner, the quantitative capacity of this method is generally underestimated. To demonstrate that the comparative kinetic RT/PCR strategy, which uses a housekeeping gene as internal standard, is a quantitative method for detecting significant differences in mRNA levels between different samples, the inhibitory effect of heparin on phorbol 12-myristate 13-acetate (PMA)-induced TGF-beta1 mRNA expression was evaluated by RT/PCR and by RPA, the standard method of mRNA quantification, and the results were compared. The reproducibility of RT/PCR amplification was calculated by comparing the quantities of G3PDH and TGF-beta1 PCR products, generated during the exponential phases, estimated from two different RT/PCRs (G3PDH, r = 0.968, P = 0.0000; TGF-beta1, r = 0.966, P = 0.0000). The quantitative capacity of comparative kinetic RT/PCR was demonstrated by comparing the results obtained from RPA and RT/PCR using linear regression analysis. Starting from the same RNA extraction, but using only 1% of the RNA for RT/PCR compared to RPA, a significant correlation was observed (r = 0.984, P = 0.0004). Moreover, morphometric analysis of the ISH signal was applied for the semi-quantitative evaluation of the expression and localisation of TGF-beta1 mRNA in the entire cell population. Our results demonstrate the close similarity of the RT/PCR and RPA methods in giving quantitative information on mRNA expression and indicate that comparative kinetic RT/PCR can be adopted as a reliable quantitative method of mRNA analysis. Copyright 2001 Wiley-Liss, Inc.

  3. Simulating realistic predator signatures in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2015-01-01

    Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
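
    In outline, a pseudo-predator signature is built by bootstrap-resampling n signatures per prey type, averaging, and mixing by an assumed diet; the paper's contribution is choosing n objectively, whereas n is set arbitrarily in this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def pseudo_predator(prey_sigs, diet, n_boot=30):
    """Bootstrap prey signatures and mix the means by the assumed diet."""
    means = []
    for sigs in prey_sigs:                        # one array per prey type
        idx = rng.integers(0, len(sigs), n_boot)  # sample with replacement
        means.append(sigs[idx].mean(axis=0))
    return np.array(diet) @ np.array(means)

prey_a = rng.dirichlet([5, 3, 2], size=100)       # synthetic 3-FA signatures
prey_b = rng.dirichlet([2, 3, 5], size=100)
print(pseudo_predator([prey_a, prey_b], diet=[0.6, 0.4]))
```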

  4. Merging Digital Surface Models Implementing Bayesian Approaches

    NASA Astrophysics Data System (ADS)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The data used have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is preferable when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can then be mitigated by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs, improving characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
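
    At the pixel level, merging two DSMs under Gaussian assumptions reduces to a precision-weighted (inverse-variance) posterior, the simplest form of Bayesian fusion; the per-DSM variances below are hypothetical, and the paper's entropy-based priors are omitted.

```python
import numpy as np

def fuse_dsm(z1, var1, z2, var2):
    """Per-pixel Gaussian posterior: precision-weighted mean and variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * z1 + w2 * z2) / (w1 + w2), 1.0 / (w1 + w2)

z1 = np.array([[101.2, 99.8], [100.5, 102.0]])   # DSM 1 heights (m)
z2 = np.array([[100.6, 100.1], [100.9, 101.4]])  # DSM 2 heights (m)
zf, vf = fuse_dsm(z1, 1.0, z2, 0.25)             # DSM 2 assumed more precise
print(np.round(zf, 2), "posterior var:", vf)
```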

  5. Audiovisual quality estimation of mobile phone video cameras with interpretation-based quality approach

    NASA Astrophysics Data System (ADS)

    Radun, Jenni E.; Virtanen, Toni; Olives, Jean-Luc; Vaahteranoksa, Mikko; Vuori, Tero; Nyman, Göte

    2007-01-01

    We present an effective method for comparing the subjective audiovisual quality and the quality-related features of different video cameras. The method achieves both quantitative estimation of overall quality and qualitative description of critical quality features. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation-Based Quality (IBQ) method, in order to see how they complement each other in audiovisual quality estimation tasks. 26 observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of the cameras' visual video quality than to features related to sound. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can complement quantitative quality estimations with audiovisual material as well. The IBQ approach is especially valuable when the induced quality changes are multidimensional.

  6. Quantitative Estimate of the Relation Between Rolling Resistance on Fuel Consumption of Class 8 Tractor Trailers Using Both New and Retreaded Tires (SAE Paper 2014-01-2425)

    EPA Science Inventory

    Road tests of class 8 tractor trailers were conducted by the US Environmental Protection Agency on new and retreaded tires of varying rolling resistance in order to provide estimates of the quantitative relationship between rolling resistance and fuel consumption.

  7. 77 FR 70727 - Endangered and Threatened Wildlife and Plants; 90-Day Finding on a Petition to List the African...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-27

    ... indirectly through changes in regional climate; and (b) Quantitative research on the relationship of food...). Population Estimates The most quantitative estimate of the historic size of the African lion population... research conducted by Chardonnet et al., three subpopulations were described as consisting of 18 groups...

  8. Useful Ingredients Recovery from Sewage Sludge by using Hydrothermal Reaction

    NASA Astrophysics Data System (ADS)

    Suzuki, Koichi; Moriyama, Mika; Yamasaki, Yuki; Takahashi, Yui; Inoue, Chihiro

    2006-05-01

    Hydrothermal treatment of sludge from a sewage treatment plant was conducted to obtain useful ingredients for culturing specific microbes that can reduce polysulfide ions to sulfide ions and/or hydrogen sulfide. Several additives, such as acid, base and oxidizer, were added to the hydrothermal reaction of excess sludge to promote the production of useful materials. After hydrothermal treatment, the reaction solution and precipitate were analyzed qualitatively and quantitatively, and their suitability as nutrients in culture media was estimated. The product analysis showed that most of the organic solids in the sewage were decomposed by hydrothermal hydrolysis and transformed into oily or water-soluble compounds. Cultures of sulfate-reducing bacteria (SRB) multiplied well in media obtained from hydrothermal treatment of sewage sludge with magnesium or calcium hydroxide and hydrogen peroxide.

  9. On the Coplanar Integrable Case of the Twice-Averaged Hill Problem with Central Body Oblateness

    NASA Astrophysics Data System (ADS)

    Vashkov'yak, M. A.

    2018-01-01

    The twice-averaged Hill problem with the oblateness of the central planet is considered in the case where its equatorial plane coincides with the plane of its orbital motion relative to the perturbing body. A qualitative study of this so-called coplanar integrable case was begun by Y. Kozai in 1963 and continued by M.L. Lidov and M.V. Yarskaya in 1974. However, no rigorous analytical solution of the problem can be obtained due to the complexity of the integrals. In this paper we obtain some quantitative evolution characteristics and propose an approximate constructive-analytical solution of the evolution system in the form of explicit time dependences of the satellite orbit elements. The accuracy of the method has been estimated for several orbits of artificial lunar satellites by comparison with the numerical solution of the evolution system.

  10. Complexity-entropy causality plane: A useful approach for distinguishing songs

    NASA Astrophysics Data System (ADS)

    Ribeiro, Haroldo V.; Zunino, Luciano; Mendes, Renio S.; Lenzi, Ervin K.

    2012-04-01

    Nowadays we are often faced with huge databases resulting from the rapid growth of data storage technologies. This is particularly true when dealing with music databases. In this context, it is essential to have techniques and tools able to discriminate properties of these massive sets. In this work, we report on a statistical analysis of more than ten thousand songs aiming to obtain a complexity hierarchy. Our approach is based on the estimation of the permutation entropy combined with an intensive complexity measure, building up the complexity-entropy causality plane. The results obtained indicate that this representation space is very promising for discriminating songs as well as for allowing relative quantitative comparisons among songs. Additionally, we believe that the method reported here may be applied in practical situations since it is simple, robust and has a fast numerical implementation.
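
    Permutation entropy, the entropy coordinate of the causality plane, is the normalized Shannon entropy of ordinal patterns of length d found in the series. A compact sketch (the complexity coordinate adds a Jensen-Shannon disequilibrium term, omitted here).

```python
import math
import random
from collections import Counter

def permutation_entropy(x, d=3):
    """Normalized Shannon entropy of ordinal patterns of length d."""
    patterns = Counter(
        tuple(sorted(range(d), key=lambda k: x[i + k]))  # rank pattern
        for i in range(len(x) - d + 1)
    )
    n = sum(patterns.values())
    h = -sum((c / n) * math.log(c / n) for c in patterns.values())
    return h / math.log(math.factorial(d))   # 0 = fully ordered, 1 = random

random.seed(1)
noise = [random.random() for _ in range(5000)]
tone = [math.sin(0.01 * i) for i in range(5000)]
print(round(permutation_entropy(noise), 3),  # close to 1
      round(permutation_entropy(tone), 3))   # much lower
```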

  11. Estimation of sample size and testing power (part 5).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference test for quantitative and qualitative data with the single-group design, the paired design or the crossover design. To be specific, this article introduced formulas for sample size and testing power estimation of difference test for quantitative and qualitative data with the above three designs, the realization based on the formulas and the POWER procedure of SAS software and elaborated it with examples, which will benefit researchers for implementing the repetition principle.
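
    For a quantitative outcome under a paired or single-group design, the familiar normal-approximation formula is n = ((z_{1-α/2} + z_{1-β})·σ/δ)². A sketch with illustrative δ, σ, α and power; it mirrors the kind of calculation the SAS POWER procedure performs.

```python
from math import ceil
from statistics import NormalDist

def n_paired(delta, sd_diff, alpha=0.05, power=0.80):
    """Pairs needed to detect mean difference delta, two-sided z-test."""
    z = NormalDist().inv_cdf
    n = ((z(1 - alpha / 2) + z(power)) * sd_diff / delta) ** 2
    return ceil(n)

print(n_paired(delta=0.5, sd_diff=1.0))   # ~32 pairs
```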

  12. A new mean estimator using auxiliary variables for randomized response models

    NASA Astrophysics Data System (ADS)

    Ozgul, Nilgun; Cingi, Hulya

    2013-10-01

    Randomized response models (RRMs) are commonly used in surveys dealing with sensitive questions such as abortion, alcoholism, sexual orientation, drug taking, annual income and tax evasion, to ensure interviewee anonymity and to reduce nonresponse rates and biased responses. Starting from the pioneering work of Warner [7], many versions of RRMs have been developed that can deal with quantitative responses. In this study, a new mean estimator is suggested for RRMs with quantitative responses. The mean square error is derived, and a simulation study is performed to show the efficiency of the proposed estimator relative to other existing estimators in RRMs.
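
    A baseline quantitative RRM uses additive scrambling: each respondent reports Z = Y + S, with S drawn from a known distribution, so the sample mean of Z minus E[S] is unbiased for the sensitive mean E[Y]. A sketch of that baseline (the paper's auxiliary-variable estimator builds on such a scheme).

```python
import random
import statistics

random.seed(7)
mu_s, sd_s = 10.0, 3.0                     # known scrambling distribution

def respond(y):
    """Respondent adds a random scrambling value; y is never revealed."""
    return y + random.gauss(mu_s, sd_s)

true_y = [random.gauss(50.0, 8.0) for _ in range(2000)]   # sensitive values
reports = [respond(y) for y in true_y]
estimate = statistics.fmean(reports) - mu_s               # unbiased for E[Y]
print(round(estimate, 2), "vs true", round(statistics.fmean(true_y), 2))
```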

  13. Bayesian aggregation versus majority vote in the characterization of non-specific arm pain based on quantitative needle electromyography

    PubMed Central

    2010-01-01

    Background Methods for the calculation and application of quantitative electromyographic (EMG) statistics for the characterization of EMG data detected from forearm muscles of individuals with and without pain associated with repetitive strain injury are presented. Methods A classification procedure using a multi-stage application of Bayesian inference is presented that characterizes a set of motor unit potentials acquired using needle electromyography. The utility of this technique in characterizing EMG data obtained from both normal individuals and those presenting with symptoms of "non-specific arm pain" is explored and validated. The efficacy of the Bayesian technique is compared with simple voting methods. Results The aggregate Bayesian classifier presented is found to perform with accuracy equivalent to that of majority voting on the test data, with an overall accuracy greater than 0.85. Theoretical foundations of the technique are discussed, and are related to the observations found. Conclusions Aggregation of motor unit potential conditional probability distributions estimated using quantitative electromyographic analysis, may be successfully used to perform electrodiagnostic characterization of "non-specific arm pain." It is expected that these techniques will also be able to be applied to other types of electrodiagnostic data. PMID:20156353

  14. Partial Least Squares and Neural Networks for Quantitative Calibration of Laser-induced Breakdown Spectroscopy (LIBs) of Geologic Samples

    NASA Technical Reports Server (NTRS)

    Anderson, R. B.; Morris, Richard V.; Clegg, S. M.; Humphries, S. D.; Wiens, R. C.; Bell, J. F., III; Mertzman, S. A.

    2010-01-01

    The ChemCam instrument [1] on the Mars Science Laboratory (MSL) rover will be used to obtain the chemical composition of surface targets within 7 m of the rover using Laser Induced Breakdown Spectroscopy (LIBS). ChemCam analyzes atomic emission spectra (240-800 nm) from a plasma created by a pulsed Nd:KGW 1067 nm laser. The LIBS spectra can be used in a semiquantitative way to rapidly classify targets (e.g., basalt, andesite, carbonate, sulfate, etc.) and in a quantitative way to estimate their major and minor element chemical compositions. Quantitative chemical analysis from LIBS spectra is complicated by a number of factors, including chemical matrix effects [2]. Recent work has shown promising results using multivariate techniques such as partial least squares (PLS) regression and artificial neural networks (ANN) to predict elemental abundances in samples [e.g. 2-6]. To develop, refine, and evaluate analysis schemes for LIBS spectra of geologic materials, we collected spectra of a diverse set of well-characterized natural geologic samples and are comparing the predictive abilities of PLS, cascade correlation ANN (CC-ANN) and multilayer perceptron ANN (MLP-ANN) analysis procedures.

  15. Rotation elastogram: a novel method to visualize local rigid body rotation under quasi-static compression

    NASA Astrophysics Data System (ADS)

    Sowmiya, C.; Kothawala, Ali Arshad; Thittai, Arun K.

    2016-04-01

    During manual palpation of breast masses, perceived stiffness and slipperiness are the two cues most commonly used by the physician. In order to obtain this information reliably and quantitatively, several non-invasive elastography techniques have been developed that seek to provide an image of the underlying mechanical properties, mostly stiffness-related. Very few approaches have visualized the "slip" at the lesion-background boundary that occurs only for a loosely bonded benign lesion. It has been shown that the axial-shear strain distribution provides information about the underlying slip. One such feature, referred to as "fill-in", was interpreted as a surrogate of the rotation undergone by an asymmetrically oriented, loosely bonded benign lesion under quasi-static compression. However, imaging and direct visualization of the rotation itself has not been addressed yet. To accomplish this, the quality of lateral displacement estimation needs to be improved. In this simulation study, we utilize a spatial compounding approach and assess the feasibility of obtaining good quality rotation elastograms. The angular axial and lateral displacement estimates were obtained at different insonification angles from a phantom containing an elliptical inclusion oriented at 45°, subjected to 1% compression from the top. A multilevel 2D block matching algorithm was used for displacement tracking, and 2D least-squares compounding of the angular axial and lateral displacement estimates was employed. By varying the maximum steering angle and the incremental angle, the improvement in lateral motion tracking accuracy and its effects on the quality of the rotation elastogram were evaluated. Results demonstrate a significantly improved rotation elastogram using this technique.

  16. Random forests on Hadoop for genome-wide association studies of multivariate neuroimaging phenotypes

    PubMed Central

    2013-01-01

    Motivation Multivariate quantitative traits arise naturally in recent neuroimaging genetics studies, in which both structural and functional variability of the human brain is measured non-invasively through techniques such as magnetic resonance imaging (MRI). There is growing interest in detecting genetic variants associated with such multivariate traits, especially in genome-wide studies. Random forest (RF) classifiers, which are ensembles of decision trees, are amongst the best-performing machine learning algorithms and have been successfully employed for the prioritisation of genetic variants in case-control studies. RFs can also be applied to produce gene rankings in association studies with multivariate quantitative traits, and to estimate genetic similarity measures that are predictive of the trait. However, in studies involving hundreds of thousands of SNPs and high-dimensional traits, a very large ensemble of trees must be inferred from the data in order to obtain reliable rankings, which makes the application of these algorithms computationally prohibitive. Results We have developed a parallel version of the RF algorithm for regression and genetic similarity learning tasks in large-scale population genetic association studies involving multivariate traits, called PaRFR (Parallel Random Forest Regression). Our implementation takes advantage of the MapReduce programming model and is deployed on Hadoop, an open-source software framework that supports data-intensive distributed applications. Notable speed-ups are obtained by introducing a distance-based criterion for node splitting in the tree estimation process. PaRFR has been applied to a genome-wide association study on Alzheimer's disease (AD) in which the quantitative trait consists of a high-dimensional neuroimaging phenotype describing longitudinal changes in the human brain structure. PaRFR provides a ranking of SNPs associated with this trait, and produces pair-wise measures of genetic proximity that can be directly compared to pair-wise measures of phenotypic proximity. Several known AD-related variants have been identified, including APOE4 and TOMM40. We also present experimental evidence supporting the hypothesis of a linear relationship between the number of top-ranked mutated states, or frequent mutation patterns, and an indicator of disease severity. Availability The Java codes are freely available at http://www2.imperial.ac.uk/~gmontana. PMID:24564704

  17. Random forests on Hadoop for genome-wide association studies of multivariate neuroimaging phenotypes.

    PubMed

    Wang, Yue; Goh, Wilson; Wong, Limsoon; Montana, Giovanni

    2013-01-01

    Multivariate quantitative traits arise naturally in recent neuroimaging genetics studies, in which both structural and functional variability of the human brain is measured non-invasively through techniques such as magnetic resonance imaging (MRI). There is growing interest in detecting genetic variants associated with such multivariate traits, especially in genome-wide studies. Random forest (RF) classifiers, which are ensembles of decision trees, are amongst the best-performing machine learning algorithms and have been successfully employed for the prioritisation of genetic variants in case-control studies. RFs can also be applied to produce gene rankings in association studies with multivariate quantitative traits, and to estimate genetic similarity measures that are predictive of the trait. However, in studies involving hundreds of thousands of SNPs and high-dimensional traits, a very large ensemble of trees must be inferred from the data in order to obtain reliable rankings, which makes the application of these algorithms computationally prohibitive. We have developed a parallel version of the RF algorithm for regression and genetic similarity learning tasks in large-scale population genetic association studies involving multivariate traits, called PaRFR (Parallel Random Forest Regression). Our implementation takes advantage of the MapReduce programming model and is deployed on Hadoop, an open-source software framework that supports data-intensive distributed applications. Notable speed-ups are obtained by introducing a distance-based criterion for node splitting in the tree estimation process. PaRFR has been applied to a genome-wide association study on Alzheimer's disease (AD) in which the quantitative trait consists of a high-dimensional neuroimaging phenotype describing longitudinal changes in the human brain structure. PaRFR provides a ranking of SNPs associated with this trait, and produces pair-wise measures of genetic proximity that can be directly compared to pair-wise measures of phenotypic proximity. Several known AD-related variants have been identified, including APOE4 and TOMM40. We also present experimental evidence supporting the hypothesis of a linear relationship between the number of top-ranked mutated states, or frequent mutation patterns, and an indicator of disease severity. The Java codes are freely available at http://www2.imperial.ac.uk/~gmontana.

  18. Identification and uncertainty estimation of vertical reflectivity profiles using a Lagrangian approach to support quantitative precipitation measurements by weather radar

    NASA Astrophysics Data System (ADS)

    Hazenberg, P.; Torfs, P. J. J. F.; Leijnse, H.; Delrieu, G.; Uijlenhoet, R.

    2013-09-01

    This paper presents a novel approach to estimate the vertical profile of reflectivity (VPR) from volumetric weather radar data using both a traditional Eulerian as well as a newly proposed Lagrangian implementation. For this latter implementation, the recently developed Rotational Carpenter Square Cluster Algorithm (RoCaSCA) is used to delineate precipitation regions at different reflectivity levels. A piecewise linear VPR is estimated for either stratiform or neither stratiform/convective precipitation. As a second aspect of this paper, a novel approach is presented which is able to account for the impact of VPR uncertainty on the estimated radar rainfall variability. Results show that implementation of the VPR identification and correction procedure has a positive impact on quantitative precipitation estimates from radar. Unfortunately, visibility problems severely limit the impact of the Lagrangian implementation beyond distances of 100 km. However, by combining this procedure with the global Eulerian VPR estimation procedure for a given rainfall type (stratiform and neither stratiform/convective), the quality of the quantitative precipitation estimates increases up to a distance of 150 km. Analyses of the impact of VPR uncertainty shows that this aspect accounts for a large fraction of the differences between weather radar rainfall estimates and rain gauge measurements.

  19. Quantitative microbial risk assessment model for Legionnaires' disease: assessment of human exposures for selected spa outbreaks.

    PubMed

    Armstrong, Thomas W; Haas, Charles N

    2007-08-01

    Evaluation of a quantitative microbial risk assessment (QMRA) model for Legionnaires' disease (LD) required Legionella exposure estimates for several well-documented LD outbreaks. Reports for a whirlpool spa and two natural spring spa outbreaks provided data for the exposure assessment, as well as rates of infection and mortality. Exposure estimates for the whirlpool spa outbreak employed aerosol generation, water composition and exposure duration data, together with building ventilation parameters, in a two-zone model. Estimates for the natural hot springs outbreaks used bacterial water-to-air partitioning coefficients and exposure duration information. The air concentration and dose calculations used input parameter distributions with Monte Carlo simulations to estimate exposures as probability distributions. The assessment considered two sets of assumptions about the transfer of Legionella from the water phase to the aerosol emitted from the whirlpool spa. The estimated air concentration near the whirlpool spa was 5 to 18 colony forming units per cubic meter (CFU/m³) or 50 to 180 CFU/m³ under the two alternate assumptions. The estimated 95th percentile ranges of Legionella dose for workers within 15 m of the whirlpool spa were 0.13-3.4 CFU and 1.3-34.5 CFU, respectively. The modeling for hot springs Spas 1 and 2 resulted in estimated arithmetic mean air concentrations of 360 and 17 CFU/m³, respectively, and 95th percentile ranges for Legionella dose of 28 to 67 CFU and 1.1 to 3.7 CFU, respectively. The Legionella air concentration estimates fall in the range of the limited reports on air concentrations of Legionella (0.33 to 190 CFU/m³) near showers, aerated faucets, and baths during filling with Legionella-contaminated water. These measurements may indicate that the estimates are of a reasonable magnitude, but they do not clarify the estimates' accuracy, since they were not obtained during LD outbreaks. Further research to improve the data used for the Legionella exposure assessment would strengthen the results. Primary additional data needs include improved bacterial water-to-air partitioning coefficients, better accounting of time-activity-distance patterns and exposure potential in outbreak reports, and data on the viability decay of Legionella-containing aerosols rather than loss of capability for growth in culture.
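
    The Monte Carlo exposure step in outline: sample uncertain inputs (air concentration, breathing rate, duration), propagate to dose, and report percentiles. All distributions below are hypothetical placeholders, not the outbreak parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
conc = rng.lognormal(mean=np.log(10.0), sigma=0.8, size=n)  # CFU/m^3
breathing = rng.normal(1.0, 0.15, size=n).clip(0.5)         # m^3/h
hours = rng.uniform(0.5, 2.0, size=n)                       # exposure duration

dose = conc * breathing * hours                             # CFU inhaled
print("median %.1f CFU, 95th percentile %.1f CFU"
      % (np.median(dose), np.percentile(dose, 95)))
```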

  20. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    NASA Astrophysics Data System (ADS)

    He, Bin; Frey, Eric C.

    2010-06-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were linear in the shift for both the QSPECT and QPlanar methods. QPlanar was less sensitive to object definition perturbations than QSPECT, especially for dilation and erosion cases. Up to 1 voxel misregistration or misdefinition resulted in up to 8% error in organ activity estimates, with the largest errors for small or low-uptake organs. Both types of VOI definition errors produced larger errors in activity estimates for a small, low-uptake organ (i.e. -7.5% to 5.3% for the left kidney) than for a large, high-uptake organ (i.e. -2.9% to 2.1% for the liver). We observed that misregistration generally had larger effects than misdefinition, with errors ranging from -7.2% to 8.4%. The different imaging methods evaluated responded differently to the errors from misregistration and misdefinition. We found that QSPECT was more sensitive to misdefinition errors, but less sensitive to misregistration errors, as compared to the QPlanar method. Thus, sensitivity to VOI definition errors should be an important criterion in evaluating quantitative imaging methods.

  1. Physiological frailty index (PFI): quantitative in-life estimate of individual biological age in mice.

    PubMed

    Antoch, Marina P; Wrobel, Michelle; Kuropatwinski, Karen K; Gitlin, Ilya; Leonova, Katerina I; Toshkov, Ilia; Gleiberman, Anatoli S; Hutson, Alan D; Chernova, Olga B; Gudkov, Andrei V

    2017-03-19

    The development of healthspan-extending pharmaceuticals requires quantitative estimation of age-related progressive physiological decline. In humans, individual health status can be quantitatively assessed by means of a frailty index (FI), a parameter reflecting the scale of accumulation of age-related deficits. However, adaptation of this methodology to animal models is a challenging task, since it includes multiple subjective parameters. Here we report the development of a quantitative non-invasive procedure to estimate the biological age of an individual animal by creating a physiological frailty index (PFI). We demonstrated the dynamics of the PFI increase during chronological aging of male and female NIH Swiss mice. We also demonstrated accelerated growth of the PFI in animals placed on a high-fat diet, reflecting aging acceleration by obesity, and provide a tool for its quantitative assessment. Additionally, we showed that the PFI could reveal the anti-aging effect of the mTOR inhibitor rapatar (a bioavailable formulation of rapamycin) prior to registration of its effects on longevity. The PFI revealed substantial sex-related differences in normal chronological aging and in the efficacy of detrimental (high-fat diet) or beneficial (rapatar) aging-modulatory factors. Together, these data introduce the PFI as a reliable, non-invasive, quantitative tool suitable for testing potential anti-aging pharmaceuticals in pre-clinical studies.
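
    A frailty index is, in essence, the fraction of measured parameters on which an individual deviates from a healthy reference. A generic sketch with hypothetical parameters and cutoffs, not the authors' panel.

```python
import numpy as np

def frailty_index(values, healthy_mean, healthy_sd, z_cut=2.0):
    """Fraction of parameters deviating > z_cut SDs from young controls."""
    z = np.abs((values - healthy_mean) / healthy_sd)
    return float(np.mean(z > z_cut))

healthy_mean = np.array([50.0, 120.0, 7.0, 0.9])   # hypothetical parameters
healthy_sd = np.array([5.0, 10.0, 0.8, 0.1])
old_mouse = np.array([38.0, 151.0, 7.4, 0.6])
print(frailty_index(old_mouse, healthy_mean, healthy_sd))   # -> 0.75
```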

  2. Ex vivo validation of photo-magnetic imaging.

    PubMed

    Luk, Alex; Nouizi, Farouk; Erkol, Hakan; Unlu, Mehmet B; Gulsen, Gultekin

    2017-10-15

    We recently introduced a new high-resolution diffuse optical imaging technique termed photo-magnetic imaging (PMI), which utilizes magnetic resonance thermometry (MRT) to monitor the 3D temperature distribution induced in a medium illuminated with near-infrared light. The spatiotemporal temperature distribution due to light absorption can be accurately estimated using a combined photon propagation and heat diffusion model. High-resolution optical absorption images are then obtained by iteratively minimizing the error between the measured and modeled temperature distributions. We have previously demonstrated the feasibility of PMI in experimental studies using tissue-simulating agarose phantoms. In this Letter, we present preliminary ex vivo PMI results obtained with a chicken breast sample. As with the results obtained on phantoms, the reconstructed images show that PMI can quantitatively resolve an inclusion 3 mm in diameter embedded deep in a biological tissue sample with only 10% error. These encouraging results demonstrate the high performance of PMI in ex vivo biological tissue and its potential for in vivo imaging.

  3. [The development of a computer model in the quantitative assessment of thallium-201 myocardial scintigraphy].

    PubMed

    Raineri, M; Traina, M; Rotolo, A; Candela, B; Lombardo, R M; Raineri, A A

    1993-05-01

    Thallium-201 scintigraphy is a widely used noninvasive procedure for the detection and prognostic assessment of patients with suspected or proven coronary artery disease. Thallium uptake can be evaluated by visual analysis or by quantitative interpretation. Quantitative scintigraphy enhances disease detection in individual coronary arteries and provides a more precise estimate of the amount of ischemic myocardium, distinguishing scar from hypoperfused tissue. Because of the large amount of data involved, the analysis, interpretation and comparison of thallium uptake can be very complex. We designed a computer-based system for the interpretation of quantitative thallium-201 scintigraphy uptake data. We used a database (DataEase 4.2-DataEase Italia). Our software has the following functions: data storage; calculation; conversion of numerical data into different definitions classifying myocardial perfusion; uptake data comparison; automatic conclusions; comparison of different scintigrams for the same patient. Our software is made up of 4 sections: numeric analysis, descriptive analysis, automatic conclusion, and clinical remarks. We introduced appropriate information into the computer system as "logical paths" that use "IF ... THEN" rules. The software executes these rules in order to analyze the myocardial regions in the 3 phases of scintigraphic analysis (stress, redistribution, re-injection) and in the 3 projections (LAO 45 degrees, LAT, ANT), considering our uptake cutoff and finally obtaining the automatic conclusions. For these reasons, our computer-based system could be considered a real "expert system".
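
    "IF ... THEN" rules of the kind described translate naturally into simple predicates. The sketch below is a toy illustration with a hypothetical cutoff and rule set, not the DataEase implementation.

      def classify_segment(stress, redistribution, reinjection, cutoff=0.75):
          """Toy 'IF ... THEN' rules for one myocardial segment.

          Uptake values are normalized to the segment maximum; the 0.75
          cutoff and the rules themselves are illustrative, not the
          published system.
          """
          if stress >= cutoff:
              return "normal perfusion"
          if redistribution >= cutoff or reinjection >= cutoff:
              return "ischemia (reversible defect)"
          return "scar (fixed defect)"

      print(classify_segment(stress=0.55, redistribution=0.80, reinjection=0.85))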

  4. Monitoring of Cr, Cu, Pb, V and Zn in polluted soils by laser induced breakdown spectroscopy (LIBS).

    PubMed

    Dell'Aglio, Marcella; Gaudiuso, Rosalba; Senesi, Giorgio S; De Giacomo, Alessandro; Zaccone, Claudio; Miano, Teodoro M; De Pascale, Olga

    2011-05-01

    Laser Induced Breakdown Spectroscopy (LIBS) is a fast, multi-elemental analytical technique particularly suitable for the qualitative and quantitative analysis of heavy metals in solid samples, including environmental ones. Although LIBS is often recognised in the literature as a well-established analytical technique, reported results on the quantitative analysis of elements in chemically complex matrices such as soils are quite contradictory. In this work, soil samples of various origins were analyzed by LIBS and the data compared to those obtained by Inductively Coupled Plasma-Optical Emission Spectroscopy (ICP-OES). The emission intensities of one selected line for each of the five analytes (i.e., Cr, Cu, Pb, V, and Zn) were normalized to the background signal and plotted as a function of the concentration values previously determined by ICP-OES. The data showed good linearity for all calibration lines drawn, and the correlation between ICP-OES and LIBS was confirmed by the satisfactory agreement between the corresponding values. Consequently, the LIBS method can be used at least for metal monitoring in soils. In this respect, a simple method for estimating the degree of soil pollution by heavy metals, based on the determination of an anthropogenic index, was proposed and determined for Cr and Zn.
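
    The calibration procedure described, background-normalized line intensity regressed against ICP-OES concentration, reduces to a linear fit; a minimal sketch with hypothetical numbers:

      import numpy as np

      # Hypothetical calibration data for one analyte (e.g. Zn):
      conc_icp = np.array([20., 50., 110., 240., 400.])     # mg/kg by ICP-OES
      intensity = np.array([0.8, 1.9, 4.2, 9.5, 15.6])      # LIBS line / background

      slope, intercept = np.polyfit(conc_icp, intensity, 1)  # calibration line
      r = np.corrcoef(conc_icp, intensity)[0, 1]
      print(f"slope={slope:.4f}, intercept={intercept:.3f}, r={r:.3f}")

      # Predict the concentration of an unknown from its normalized intensity:
      unknown = 6.7
      print(f"estimated concentration: {(unknown - intercept) / slope:.1f} mg/kg")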

  5. Elemental analysis of scorpion venoms.

    PubMed

    Al-Asmari, AbdulRahman K; Kunnathodi, Faisal; Al Saadon, Khalid; Idris, Mohammed M

    2016-01-01

    Scorpion venom is a rich source of biomolecules, which can perturb the physiological activity of the host on envenomation and may also have therapeutic potential. Scorpion venoms, produced by the columnar cells of the venom gland, are complex mixtures of mucopolysaccharides, neurotoxic peptides and other components. This study was aimed at cataloguing the elemental composition of venoms obtained from medically important scorpions found in the Arabian peninsula. The global elemental composition of the crude venom obtained from Androctonus bicolor, Androctonus crassicauda and Leiurus quinquestriatus scorpions was estimated using an ICP-MS analyzer. The study catalogued several chemical elements present in the scorpion venom using ICP-MS total quant analysis, with quantitation of nine elements using appropriate standards. Fifteen chemical elements, including sodium, potassium and calcium, were found abundantly in the scorpion venom at ppm concentrations. Thirty-six chemical elements of different mass ranges were detected in the venom at the ppb level. Quantitative analysis of the venoms revealed copper to be the most abundant element in Androctonus sp. venom but present at a lower level in Leiurus quinquestriatus venom, whereas zinc and manganese were found at higher levels in Leiurus sp. venom but at lower levels in Androctonus sp. venom. These data and the concentrations of the other elements present in the various venoms are likely to increase our understanding of the mechanisms of venom activity and their pharmacological potentials.

  6. An Overview of Quantitative Risk Assessment of Space Shuttle Propulsion Elements

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.

    1998-01-01

    Since the Space Shuttle Challenger accident in 1986, NASA has been working to incorporate quantitative risk assessment (QRA) in decisions concerning the Space Shuttle and other NASA projects. One current major NASA QRA study is the creation of a risk model for the overall Space Shuttle system. The model is intended to provide a tool to estimate Space Shuttle risk and to perform sensitivity analyses/trade studies, including the evaluation of upgrades. Marshall Space Flight Center (MSFC) is a part of the NASA team conducting the QRA study; MSFC responsibility involves modeling the propulsion elements of the Space Shuttle, namely: the External Tank (ET), the Solid Rocket Booster (SRB), the Reusable Solid Rocket Motor (RSRM), and the Space Shuttle Main Engine (SSME). This paper discusses the approach that MSFC has used to model its Space Shuttle elements, including insights obtained from this experience in modeling large scale, highly complex systems with a varying availability of success/failure data. Insights, which are applicable to any QRA study, pertain to organizing the modeling effort, obtaining customer buy-in, preparing documentation, and using varied modeling methods and data sources. Also provided is an overall evaluation of the study results, including the strengths and the limitations of the MSFC QRA approach and of QRA technology in general.

  7. Cardiovascular Outcomes and the Physical and Chemical Properties of Metal Ions Found in Particulate Matter Air Pollution: A QICAR Study

    PubMed Central

    Meng, Qingyu; Lu, Shou-En; Buckley, Barbara; Welsh, William J.; Whitsel, Eric A.; Hanna, Adel; Yeatts, Karin B.; Warren, Joshua; Herring, Amy H.; Xiu, Aijun

    2013-01-01

    Background: This paper presents an application of quantitative ion character–activity relationships (QICAR) to estimate associations of human cardiovascular (CV) diseases (CVDs) with a set of metal ion properties commonly observed in ambient air pollutants. QICAR has previously been used to predict ecotoxicity of inorganic metal ions based on ion properties. Objectives: The objective of this work was to examine potential associations of biological end points with a set of physical and chemical properties describing inorganic metal ions present in exposures using QICAR. Methods: Chemical and physical properties of 17 metal ions were obtained from peer-reviewed publications. Associations of cardiac arrhythmia, myocardial ischemia, myocardial infarction, stroke, and thrombosis with exposures to metal ions (measured as inference scores) were obtained from the Comparative Toxicogenomics Database (CTD). Robust regressions were applied to estimate the associations of CVDs with ion properties. Results: CVD was statistically significantly associated (Bonferroni-adjusted significance level of 0.003) with many ion properties reflecting ion size, solubility, oxidation potential, and abilities to form covalent and ionic bonds. The properties are relevant for reactive oxygen species (ROS) generation, which has been identified as a possible mechanism leading to CVDs. Conclusion: QICAR has the potential to complement existing epidemiologic methods for estimating associations between CVDs and air pollutant exposures by providing clues about the underlying mechanisms that may explain these associations. PMID:23462649

  8. Multiparametric Quantitative Ultrasound Imaging in Assessment of Chronic Kidney Disease.

    PubMed

    Gao, Jing; Perlman, Alan; Kalache, Safa; Berman, Nathaniel; Seshan, Surya; Salvatore, Steven; Smith, Lindsey; Wehrli, Natasha; Waldron, Levi; Kodali, Hanish; Chevalier, James

    2017-11-01

    To evaluate the value of multiparametric quantitative ultrasound imaging in assessing chronic kidney disease (CKD) using kidney biopsy pathologic findings as reference standards. We prospectively measured multiparametric quantitative ultrasound markers with grayscale, spectral Doppler, and acoustic radiation force impulse imaging in 25 patients with CKD before kidney biopsy and 10 healthy volunteers. Based on all pathologic (glomerulosclerosis, interstitial fibrosis/tubular atrophy, arteriosclerosis, and edema) scores, the patients with CKD were classified into mild (no grade 3 and <2 of grade 2) and moderate to severe (at least 2 of grade 2 or 1 of grade 3) CKD groups. Multiparametric quantitative ultrasound parameters included kidney length, cortical thickness, pixel intensity, parenchymal shear wave velocity, intrarenal artery peak systolic velocity (PSV), end-diastolic velocity (EDV), and resistive index. We tested the difference in quantitative ultrasound parameters among mild CKD, moderate to severe CKD, and healthy controls using analysis of variance, analyzed correlations of quantitative ultrasound parameters with pathologic scores and the estimated glomerular filtration rate (GFR) using Pearson correlation coefficients, and examined the diagnostic performance of quantitative ultrasound parameters in determining moderate CKD and an estimated GFR of less than 60 mL/min/1.73 m² using receiver operating characteristic curve analysis. There were significant differences in cortical thickness, pixel intensity, PSV, and EDV among the 3 groups (all P < .01). Among quantitative ultrasound parameters, the top areas under the receiver operating characteristic curves for PSV and EDV were 0.88 and 0.97, respectively, for determining pathologic moderate to severe CKD, and 0.76 and 0.86 for an estimated GFR of less than 60 mL/min/1.73 m². Moderate to good correlations were found for PSV, EDV, and pixel intensity with pathologic scores and estimated GFR. The PSV, EDV, and pixel intensity are valuable in determining moderate to severe CKD. The value of shear wave velocity in assessing CKD needs further investigation. © 2017 by the American Institute of Ultrasound in Medicine.

  9. On the Agreement between Manual and Automated Methods for Single-Trial Detection and Estimation of Features from Event-Related Potentials

    PubMed Central

    Biurrun Manresa, José A.; Arguissain, Federico G.; Medina Redondo, David E.; Mørch, Carsten D.; Andersen, Ole K.

    2015-01-01

    The agreement between humans and algorithms on whether an event-related potential (ERP) is present or not and the level of variation in the estimated values of its relevant features are largely unknown. Thus, the aim of this study was to determine the categorical and quantitative agreement between manual and automated methods for single-trial detection and estimation of ERP features. To this end, ERPs were elicited in sixteen healthy volunteers using electrical stimulation at graded intensities below and above the nociceptive withdrawal reflex threshold. Presence/absence of an ERP peak (categorical outcome) and its amplitude and latency (quantitative outcome) in each single-trial were evaluated independently by two human observers and two automated algorithms taken from existing literature. Categorical agreement was assessed using percentage positive and negative agreement and Cohen’s κ, whereas quantitative agreement was evaluated using Bland-Altman analysis and the coefficient of variation. Typical values for the categorical agreement between manual and automated methods were derived, as well as reference values for the average and maximum differences that can be expected if one method is used instead of the others. Results showed that the human observers presented the highest categorical and quantitative agreement, and there were significantly large differences between detection and estimation of quantitative features among methods. In conclusion, substantial care should be taken in the selection of the detection/estimation approach, since factors like stimulation intensity and expected number of trials with/without response can play a significant role in the outcome of a study. PMID:26258532
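
    The two agreement measures used here are standard and easy to reproduce; the sketch below computes Cohen's κ for binary detections and the Bland-Altman bias and 95% limits of agreement for a quantitative feature, with made-up ratings.

      import numpy as np

      def cohens_kappa(a, b):
          """Cohen's kappa for two binary raters (1 = ERP present, 0 = absent)."""
          a, b = np.asarray(a), np.asarray(b)
          po = np.mean(a == b)                              # observed agreement
          pe = (np.mean(a) * np.mean(b)                     # chance agreement
                + np.mean(1 - a) * np.mean(1 - b))
          return (po - pe) / (1 - pe)

      def bland_altman(x, y):
          """Mean difference (bias) and 95% limits of agreement."""
          d = np.asarray(x) - np.asarray(y)
          m, s = d.mean(), d.std(ddof=1)
          return m, m - 1.96 * s, m + 1.96 * s

      rater1 = [1, 1, 0, 1, 0, 1, 1, 0]
      rater2 = [1, 0, 0, 1, 0, 1, 1, 1]
      print("kappa:", cohens_kappa(rater1, rater2))
      print("bias, LoA:", bland_altman([12.1, 9.8, 15.2], [11.5, 10.4, 14.1]))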

  10. An algorithm for the estimation of the signal-to-noise ratio in surface myoelectric signals generated during cyclic movements.

    PubMed

    Agostini, Valentina; Knaflitz, Marco

    2012-01-01

    In many applications requiring the study of the surface myoelectric signal (SMES) acquired in dynamic conditions, it is essential to have a quantitative evaluation of the quality of the collected signals. When the activation pattern of a muscle has to be obtained by means of single- or double-threshold statistical detectors, the background noise level e_noise of the signal is a necessary input parameter. Moreover, the detection strategy of double-threshold detectors may be properly tuned when the SNR and the duty cycle (DC) of the signal are known. The aim of this paper is to present an algorithm for the estimation of e_noise, SNR, and DC of an SMES collected during cyclic movements. The algorithm is validated on synthetic signals with statistical properties similar to those of SMES, as well as on more than 100 real signals. © 2011 IEEE
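
    A crude stand-in for this kind of estimator, not the published algorithm, can be sketched as follows: e_noise is taken from the quietest fraction of a smoothed envelope, and the SNR and duty cycle then follow from a threshold at a multiple of e_noise.

      import numpy as np

      def snr_and_duty_cycle(x, fs, frac_noise=0.1):
          """Crude e_noise / SNR / duty-cycle estimate for a cyclic SMES.

          Illustrative only: e_noise is the mean of the quietest fraction
          of a 50 ms moving-average envelope, and activity is thresholded
          at 3 * e_noise.
          """
          env = np.abs(x - x.mean())
          k = int(0.05 * fs)
          smooth = np.convolve(env, np.ones(k) / k, "same")
          e_noise = np.mean(np.sort(smooth)[: int(frac_noise * len(smooth))])
          active = smooth > 3 * e_noise
          signal_power = np.mean(x[active] ** 2) if active.any() else 0.0
          snr_db = 10 * np.log10(signal_power / e_noise ** 2)
          return e_noise, snr_db, active.mean()   # active.mean() = duty cycle

      # Synthetic test: bursts of noise-like activity over background noise.
      fs = 1000
      t = np.arange(0, 5, 1 / fs)
      burst = (np.sin(2 * np.pi * 1 * t) > 0).astype(float)   # 50% duty cycle
      x = burst * np.random.randn(t.size) * 5 + np.random.randn(t.size)
      print(snr_and_duty_cycle(x, fs))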

  11. Classical conditioning through auditory stimuli in Drosophila: methods and models

    PubMed Central

    Menda, Gil; Bar, Haim Y.; Arthur, Ben J.; Rivlin, Patricia K.; Wyttenbach, Robert A.; Strawderman, Robert L.; Hoy, Ronald R.

    2011-01-01

    The role of sound in Drosophila melanogaster courtship, along with its perception via the antennae, is well established, as is the ability of this fly to learn in classical conditioning protocols. Here, we demonstrate that a neutral acoustic stimulus paired with a sucrose reward can be used to condition the proboscis-extension reflex, part of normal feeding behavior. This appetitive conditioning produces results comparable to those obtained with chemical stimuli in aversive conditioning protocols. We applied a logistic model with general estimating equations to predict the dynamics of learning, which successfully predicts the outcome of training and provides a quantitative estimate of the rate of learning. Use of acoustic stimuli with appetitive conditioning provides both an alternative to models most commonly used in studies of learning and memory in Drosophila and a means of testing hearing in both sexes, independently of courtship responsiveness. PMID:21832129

  12. Solar-flare-induced Forbush decreases - Dependence on shock wave geometry

    NASA Technical Reports Server (NTRS)

    Thomas, B. T.; Gall, R.

    1984-01-01

    It is argued that the principal mechanism for the association of Forbush decreases with the passage of a solar flare shock wave is prolonged containment of cosmic ray particles behind the flare compression region, which acts as a semipermeable obstacle to particle motion along the field lines, leading to additional adiabatic cooling of the particles. Liouville's theorem is used to calculate the instantaneous distribution function at 1 AU for each particle arriving at the earth. By averaging over a large number of individual estimates, a representative estimate of the omnidirectional phase space density and the corresponding particle intensity is obtained. The energy change of individual particles at the shocks is found to be small in comparison to the energy lost by adiabatic cooling of the cosmic rays between the shock wave and the sun. The effects of particle rigidity, diffusion coefficient, and flare longitude on the magnitude of the Forbush decrease are quantitatively investigated.

  13. Estimation and harvesting of human heat power for wearable electronic devices

    NASA Astrophysics Data System (ADS)

    Dziurdzia, P.; Brzozowski, I.; Bratek, P.; Gelmuda, W.; Kos, A.

    2016-01-01

    The paper deals with the issue of self-powered wearable electronic devices that are capable of harvesting free available energy dissipated by the user in the form of human heat. The free energy source is intended to be used as a secondary power source supporting the primary battery in a sensor bracelet. The main scope of the article is the presentation of a concept for a measuring setup used for quantitative estimation of heat power sources at different locations over the human body area. The crucial role in the measurements of human heat is played by a thermoelectric module working in open-circuit mode. The results obtained during practical tests are compared with the requirements of the dedicated thermoelectric generator. A prototype design of a human warmth energy harvester with an ultra-low-power DC-DC converter based on the LTC3108 circuit is analysed.

  14. NIRS-SPM: statistical parametric mapping for near infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Tak, Sungho; Jang, Kwang Eun; Jung, Jinwook; Jang, Jaeduck; Jeong, Yong; Ye, Jong Chul

    2008-02-01

    Even though there exists a powerful statistical parametric mapping (SPM) tool for fMRI, similar public domain tools are not available for near infrared spectroscopy (NIRS). In this paper, we describe a new public domain statistical toolbox called NIRS-SPM for quantitative analysis of NIRS signals. Specifically, NIRS-SPM statistically analyzes the NIRS data using the GLM and makes inference in terms of the excursion probability of a random field interpolated from the sparse measurements. In order to obtain correct inference, NIRS-SPM offers pre-coloring and pre-whitening methods for temporal correlation estimation. For NIRS signals recorded simultaneously with fMRI, the spatial mapping between the fMRI image and real coordinates from a 3-D digitizer is estimated using Horn's algorithm. These powerful tools allow super-resolution localization of brain activation, which is not possible using conventional NIRS analysis tools.

  15. Permittivity and conductivity parameter estimations using full waveform inversion

    NASA Astrophysics Data System (ADS)

    Serrano, Jheyston O.; Ramirez, Ana B.; Abreo, Sergio A.; Sadler, Brian M.

    2018-04-01

    Full waveform inversion of Ground Penetrating Radar (GPR) data is a promising strategy to estimate quantitative characteristics of the subsurface such as permittivity and conductivity. In this paper, we propose a methodology that uses Full Waveform Inversion (FWI) in the time domain on 2D GPR data to obtain highly resolved images of the permittivity and conductivity parameters of the subsurface. FWI is an iterative method that requires a cost function to measure the misfit between observed and modeled data, a wave propagator to compute the modeled data, and an initial velocity model that is updated at each iteration until an acceptable decrease of the cost function is reached. The use of FWI with GPR is computationally expensive because it is based on the computation of full electromagnetic wave propagation. Also, the commercially available acquisition systems use only one transmitter and one receiver antenna at zero offset, requiring a large number of shots to scan a single line.

  16. Spectral F-test power evaluation in the EEG during intermittent photic stimulation.

    PubMed

    de Sá, Antonio Mauricio F L Miranda; Cagy, Mauricio; Lazarev, Vladimir V; Infantosi, Antonio Fernando C

    2006-06-01

    Intermittent photic stimulation (IPS) is an important functional test, which can induce photic driving in the electroencephalogram (EEG). It can enhance manifestations of latent oscillations that are not present in the resting EEG. However, for adequate quantitative evaluation of the photic driving, these changes should be assessed on a statistical basis. With this aim, the sampling distribution of the spectral F-test (SFT) was investigated. On this basis, confidence limits of the SFT estimate could be obtained for different practical situations, in which the signal-to-noise ratio and the number of epochs used in the estimation may vary. The technique was applied to the EEG of 10 normal subjects during IPS, and allowed detection of responses not only at the fundamental IPS frequency but also at higher harmonics. It also permitted assessment of the strength of the photic driving responses and their comparison across different derivations and different subjects.
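
    A common form of the spectral F-test compares the power at the stimulation frequency with the mean power of neighboring bins; a minimal sketch (the window and the number of neighboring bins are illustrative choices):

      import numpy as np

      def spectral_f_test(x, fs, f_stim, n_neighbors=12):
          """Ratio of the power at the stimulation frequency to the mean
          power of neighboring frequency bins (a common form of the
          spectral F-test; details here are illustrative)."""
          Y = np.fft.rfft(x * np.hanning(len(x)))
          freqs = np.fft.rfftfreq(len(x), 1 / fs)
          k = np.argmin(np.abs(freqs - f_stim))
          power = np.abs(Y) ** 2
          neighbors = np.r_[power[k - n_neighbors:k],
                            power[k + 1:k + 1 + n_neighbors]]
          return power[k] / neighbors.mean()

      # Synthetic EEG: 10 Hz driving response buried in noise.
      fs = 256
      t = np.arange(0, 8, 1 / fs)
      eeg = 0.5 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
      print("F at 10 Hz:", spectral_f_test(eeg, fs, 10.0))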

  17. A fault tree model to assess probability of contaminant discharge from shipwrecks.

    PubMed

    Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I

    2014-11-15

    Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation and identification of parameters to be investigated further in order to obtain a more reliable risk calculation. Copyright © 2014 Elsevier Ltd. All rights reserved.
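
    For independent basic events, fault tree gate probabilities combine in the usual way (OR: 1 - prod(1 - p_i); AND: prod(p_i)). The sketch below illustrates this arithmetic on a hypothetical two-branch tree; the event structure and probabilities are invented, not the paper's model.

      def p_or(*ps):
          """Probability that at least one independent basic event occurs."""
          q = 1.0
          for p in ps:
              q *= (1.0 - p)
          return 1.0 - q

      def p_and(*ps):
          """Probability that all independent basic events occur."""
          q = 1.0
          for p in ps:
              q *= p
          return q

      # Hypothetical annual probabilities of hazardous events for one wreck:
      hull_corrosion = 0.02
      trawling_damage = p_and(0.10, 0.30)   # trawling activity AND hull breach
      p_discharge = p_or(hull_corrosion, trawling_damage)
      print(f"annual probability of discharge: {p_discharge:.4f}")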

  18. Electron Affinity of Phenyl-C61-Butyric Acid Methyl Ester (PCBM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Bryon W.; Whitaker, James B.; Wang, Xue B.

    2013-07-25

    The gas-phase electron affinity (EA) of phenyl-C61-butyric acid methyl ester (PCBM), one of the best-performing electron acceptors in organic photovoltaic devices, is measured by low-temperature photoelectron spectroscopy for the first time. The obtained value of 2.63(1) eV is only ca. 0.05 eV lower than that of C60 (2.68(1) eV), compared to a 0.09 V difference in their E1/2 values measured in this work by cyclic voltammetry. Literature E(LUMO) values for PCBM that are typically estimated from cyclic voltammetry, and commonly used as a quantitative measure of acceptor properties, are dispersed over a wide range between -4.3 and -3.62 eV; the reasons for such a huge discrepancy are analyzed here, and a protocol for reliable and consistent estimation of relative fullerene-based acceptor strength in solution is proposed.

  19. The quantitative genetics of maximal and basal rates of oxygen consumption in mice.

    PubMed Central

    Dohm, M R; Hayes, J P; Garland, T

    2001-01-01

    A positive genetic correlation between basal metabolic rate (BMR) and maximal (VO2max) rate of oxygen consumption is a key assumption of the aerobic capacity model for the evolution of endothermy. We estimated the genetic (V_A, additive, and V_D, dominance), prenatal (V_N), and postnatal common environmental (V_C) contributions to individual differences in metabolic rates and body mass for a genetically heterogeneous laboratory strain of house mice (Mus domesticus). Our breeding design did not allow the simultaneous estimation of V_D and V_N. Regardless of whether V_D or V_N was assumed, estimates of V_A were negative under the full models. Hence, we fitted reduced models (e.g., V_A + V_N + V_E or V_A + V_E) and obtained new variance estimates. For reduced models, the narrow-sense heritability (h²_N) for BMR was <0.1, but estimates of h²_N for VO2max were higher. When estimated with the V_A + V_E model, the additive genetic covariance between VO2max and BMR was positive and statistically different from zero. This result offers tentative support for the aerobic capacity model for the evolution of vertebrate energetics. However, constraints imposed on the genetic model may cause our estimates of additive variance and covariance to be biased, so our results should be interpreted with caution and tested via selection experiments. PMID:11560903

  20. An assessment of the reliability of quantitative genetics estimates in study systems with high rate of extra-pair reproduction and low recruitment.

    PubMed

    Bourret, A; Garant, D

    2017-03-01

    Quantitative genetics approaches, and particularly animal models, are widely used to assess the genetic (co)variance of key fitness related traits and infer adaptive potential of wild populations. Despite the importance of precision and accuracy of genetic variance estimates and their potential sensitivity to various ecological and population specific factors, their reliability is rarely tested explicitly. Here, we used simulations and empirical data collected from an 11-year study on tree swallow (Tachycineta bicolor), a species showing a high rate of extra-pair paternity and a low recruitment rate, to assess the importance of identity errors, structure and size of the pedigree on quantitative genetic estimates in our dataset. Our simulations revealed an important lack of precision in heritability and genetic-correlation estimates for most traits, a low power to detect significant effects and important identifiability problems. We also observed a large bias in heritability estimates when using the social pedigree instead of the genetic one (deflated heritabilities) or when not accounting for an important cause of resemblance among individuals (for example, permanent environment or brood effect) in model parameterizations for some traits (inflated heritabilities). We discuss the causes underlying the low reliability observed here and why they are also likely to occur in other study systems. Altogether, our results re-emphasize the difficulties of generalizing quantitative genetic estimates reliably from one study system to another and the importance of reporting simulation analyses to evaluate these important issues.

  1. Quantitative subsurface analysis using frequency modulated thermal wave imaging

    NASA Astrophysics Data System (ADS)

    Subhani, S. K.; Suresh, B.; Ghali, V. S.

    2018-01-01

    Quantitative depth analysis of an anomaly with enhanced depth resolution is a challenging task in estimating the depth of subsurface anomalies using thermography. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and then analyzing the subsequent thermal response with a suitable post-processing approach to resolve subsurface details. However, the conventional Fourier-transform-based methods used for post-processing unscramble the frequencies with limited frequency resolution and thus contribute to a finite depth resolution. The spectral zooming provided by the chirp z-transform facilitates enhanced frequency resolution, which can further improve the depth resolution to axially explore the finest subsurface features. Quantitative depth analysis with this augmented depth resolution is proposed to provide a closer estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first, unique solution for quantitative depth estimation in frequency modulated thermal wave imaging.
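
    The spectral zooming can be reproduced with SciPy's chirp z-transform (scipy.signal.czt, available in SciPy 1.8+), which evaluates the spectrum on an arbitrarily fine grid restricted to a band of interest; the signal and band below are hypothetical.

      import numpy as np
      from scipy.signal import czt   # SciPy >= 1.8

      def zoom_spectrum(x, fs, f1, f2, m=512):
          """Evaluate the z-transform on m points along the unit circle
          between f1 and f2, giving a much finer frequency grid over the
          band of interest than the plain FFT bins."""
          w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))
          a = np.exp(2j * np.pi * f1 / fs)
          return np.linspace(f1, f2, m, endpoint=False), czt(x, m, w, a)

      # A 20 s record sampled at 100 Hz: the FFT grid is 0.05 Hz, while
      # the zoomed CZT grid below is ~0.001 Hz, so the peak of a 0.537 Hz
      # component is localized far more precisely.
      fs = 100.0
      t = np.arange(0, 20, 1 / fs)
      x = np.sin(2 * np.pi * 0.537 * t)
      freqs, X = zoom_spectrum(x, fs, 0.3, 0.8)
      print(freqs[np.argmax(np.abs(X))])   # ~0.537 Hz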

  2. [Analytic methods for seed models with genotype x environment interactions].

    PubMed

    Zhu, J

    1996-01-01

    Genetic models with genotype effects (G) and genotype x environment interaction effects (GE) are proposed for analyzing generation means of seed quantitative traits in crops. The total genetic effect (G) is partitioned into seed direct genetic effect (G0), cytoplasm genetic effect (C), and maternal plant genetic effect (Gm). The seed direct genetic effect (G0) can be further partitioned into direct additive (A) and direct dominance (D) genetic components. The maternal genetic effect (Gm) can also be partitioned into maternal additive (Am) and maternal dominance (Dm) genetic components. The total genotype x environment interaction effect (GE) can likewise be partitioned into direct genetic by environment interaction effect (G0E), cytoplasm genetic by environment interaction effect (CE), and maternal genetic by environment interaction effect (GmE). G0E can be partitioned into direct additive by environment interaction (AE) and direct dominance by environment interaction (DE) genetic components, and GmE into maternal additive by environment interaction (AmE) and maternal dominance by environment interaction (DmE) genetic components. Partitions of the genetic components are listed for parents, F1, F2 and backcrosses. A set of parents and their reciprocal F1 and F2 seeds is sufficient for efficient analysis of seed quantitative traits. The MINQUE(0/1) method can be used for estimating variance and covariance components, and unbiased estimation of covariance components between two traits can also be obtained by this method. Random genetic effects in the seed models are predictable by the Adjusted Unbiased Prediction (AUP) approach with the MINQUE(0/1) method. The jackknife procedure is suggested for estimating the sampling variances of the estimated variance and covariance components and of the predicted genetic effects, which can further be used in t-tests of the parameters. Unbiasedness and efficiency in estimating variance components and predicting genetic effects are tested by Monte Carlo simulations.

  3. The impact of temporal sampling resolution on parameter inference for biological transport models.

    PubMed

    Harrison, Jonathan U; Baker, Ruth E

    2018-06-25

    Imaging data has become an essential tool to explore key biological questions at various scales, for example the motile behaviour of bacteria or the transport of mRNA, and it has the potential to transform our understanding of important transport mechanisms. Often these imaging studies require us to compare biological species or mutants, and to do this we need to quantitatively characterise their behaviour. Mathematical models offer a quantitative description of a system that enables us to perform this comparison, but to relate mechanistic mathematical models to imaging data, we need to estimate their parameters. In this work we study how collecting data at different temporal resolutions impacts our ability to infer parameters of biological transport models; performing exact inference for simple velocity jump process models in a Bayesian framework. The question of how best to choose the frequency with which data is collected is prominent in a host of studies because the majority of imaging technologies place constraints on the frequency with which images can be taken, and the discrete nature of observations can introduce errors into parameter estimates. In this work, we mitigate such errors by formulating the velocity jump process model within a hidden states framework. This allows us to obtain estimates of the reorientation rate and noise amplitude for noisy observations of a simple velocity jump process. We demonstrate the sensitivity of these estimates to temporal variations in the sampling resolution and extent of measurement noise. We use our methodology to provide experimental guidelines for researchers aiming to characterise motile behaviour that can be described by a velocity jump process. In particular, we consider how experimental constraints resulting in a trade-off between temporal sampling resolution and observation noise may affect parameter estimates. Finally, we demonstrate the robustness of our methodology to model misspecification, and then apply our inference framework to a dataset that was generated with the aim of understanding the localization of RNA-protein complexes.
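
    A one-dimensional velocity jump process of the kind inferred here is straightforward to simulate: the particle moves at constant speed, reverses direction at exponential waiting times, and is observed at a fixed sampling interval with additive noise. The sketch below uses hypothetical parameter values, not those of the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_vjp(rate, speed, t_end, dt_obs, noise_sd):
          """1D velocity jump process: the particle moves at +/- speed and
          reverses direction at exponential waiting times with the given
          rate; positions are observed every dt_obs with Gaussian noise."""
          t, x, v = 0.0, 0.0, speed
          obs_times = np.arange(0.0, t_end, dt_obs)
          obs, next_jump = [], rng.exponential(1.0 / rate)
          for t_obs in obs_times:
              while next_jump < t_obs:                # advance through jumps
                  x += v * (next_jump - t)
                  t, v = next_jump, -v
                  next_jump += rng.exponential(1.0 / rate)
              x += v * (t_obs - t)
              t = t_obs
              obs.append(x + rng.normal(0.0, noise_sd))
          return obs_times, np.array(obs)

      times, positions = simulate_vjp(rate=1.0, speed=2.0, t_end=60.0,
                                      dt_obs=0.5, noise_sd=0.1)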

  4. Interlaboratory comparison of Taq Nuclease Assays for the quantification of the toxic cyanobacteria Microcystis sp

    PubMed Central

    Schober, Eva; Werndl, Michael; Laakso, Kati; Korschineck, Irina; Sivonen, Kaarina; Kurmayer, Rainer

    2011-01-01

    The application of quantitative real time PCR has been proposed for the quantification of toxic genotypes of cyanobacteria. We have compared the Taq Nuclease Assay (TNA) in quantifying the toxic cyanobacteria Microcystis sp. via the intergenic spacer region of the phycocyanin operon (PC) and mcyB indicative of the production of the toxic heptapeptide microcystin between three research groups employing three instruments (ABI7300, GeneAmp5700, ABI7500). The estimates of mcyB genotypes were compared using (i) DNA of a mcyB containing strain and a non-mcyB containing strain supplied in different mixtures across a low range of variation (0-10% of mcyB) and across a high range of variation (20-100%), and (ii) DNA from field samples containing Microcystis sp. For all three instruments highly significant linear regression curves between the proportion of the mcyB containing strain and the percentage of mcyB genotypes both within the low range and within the high range of mcyB variation were obtained. The regression curves derived from the three instruments differed in slope and within the high range of mcyB variation mcyB proportions were either underestimated (0-50%) or overestimated (0-72%). For field samples cell numbers estimated via both TNAs as well as mcyB proportions showed significant linear relationships between the instruments. For all instruments a linear relationship between the cell numbers estimated as PC genotypes and the cell numbers estimated as mcyB genotypes was observed. The proportions of mcyB varied from 2-28% and did not differ between the instruments. It is concluded that the TNA is able to provide quantitative estimates on mcyB genotype numbers that are reproducible between research groups and is useful to follow variation in mcyB genotype proportion occurring within weeks to months. PMID:17258828

  5. Kinetic Monte Carlo simulations of the effect of the exchange control layer thickness in CoPtCrB/CoPtCrSiO granular media

    NASA Astrophysics Data System (ADS)

    Almudallal, Ahmad M.; Mercer, J. I.; Whitehead, J. P.; Plumer, M. L.; van Ek, J.

    2018-05-01

    A hybrid Landau Lifshitz Gilbert/kinetic Monte Carlo algorithm is used to simulate experimental magnetic hysteresis loops for dual layer exchange coupled composite media. The calculation of the rate coefficients and difficulties arising from low energy barriers, a fundamental problem of the kinetic Monte Carlo method, are discussed and the methodology used to treat them in the present work is described. The results from simulations are compared with experimental vibrating sample magnetometer measurements on dual layer CoPtCrB/CoPtCrSiO media and a quantitative relationship between the thickness of the exchange control layer separating the layers and the effective exchange constant between the layers is obtained. Estimates of the energy barriers separating magnetically reversed states of the individual grains in zero applied field as well as the saturation field at sweep rates relevant to the bit write speeds in magnetic recording are also presented. The significance of this comparison between simulations and experiment and the estimates of the material parameters obtained from it are discussed in relation to optimizing the performance of magnetic storage media.
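
    The core of a kinetic Monte Carlo step, Arrhenius rates, rate-proportional event selection and an exponential time increment, can be sketched as follows; the barriers, prefactor and temperature are illustrative, and the example also shows why low barriers dominate the time step.

      import numpy as np

      rng = np.random.default_rng(1)
      kT, f0 = 0.025, 1e9                      # eV and attempt frequency (1/s)

      def kmc_step(barriers_eV, t):
          """One kinetic Monte Carlo step: Arrhenius rates from energy
          barriers, an event chosen with probability proportional to its
          rate, and time advanced by an exponential waiting time."""
          rates = f0 * np.exp(-np.asarray(barriers_eV) / kT)
          total = rates.sum()
          event = rng.choice(len(rates), p=rates / total)
          return event, t + rng.exponential(1.0 / total)

      # The low-barrier problem in miniature: a 0.1 eV barrier gives a rate
      # ~10^7 times that of a 0.5 eV barrier, so fast shallow events
      # dominate and the simulated time advances in tiny steps.
      t, barriers = 0.0, [0.5, 0.55, 0.1]
      event, t = kmc_step(barriers, t)
      print(event, t)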

  6. Rates of anterior tooth wear in Middle Pleistocene hominins from Sima de los Huesos (Sierra de Atapuerca, Spain).

    PubMed

    Bermúdez de Castro, J M; Martinón-Torres, M; Sarmiento, S; Lozano, M; Arsuaga, J L; Carbonell, E

    2003-10-14

    This study presents quantitative data on the rates of anterior tooth wear in a Pleistocene human population. The data were obtained for the hominin sample of the Sima de los Huesos site in Atapuerca, Spain. The fossil record belongs to a minimum of 28 individuals of the same biological population, assigned to the species Homo heidelbergensis. We have estimated the original and the preserved crown height of the mandibular incisors (I1 and I2) of 11 individuals, whose age at death can be ascertained from the mineralization stage and tooth eruption. Results provide a range of 0.276-0.348 and 0.288-0.360 mm per year for the mean wear rate of the mandibular I1 and I2, respectively, in individuals approximately 16-18 years old. These data suggest that incisors' crowns would be totally worn out toward the fifth decade of life. Thus, we expect the life expectancy of this population to be seriously limited. These data, which could be contrasted with results obtained on hominins at other sites, could be of interest for estimating the death age of adult individuals.
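
    The wear-rate arithmetic is simple; the sketch below uses hypothetical crown heights and an assumed eruption age (the true values are in the paper), chosen so that the resulting rate falls in the reported 0.276-0.348 mm per year range and the crown is exhausted in the fifth decade.

      def wear_rate(original_mm, preserved_mm, age_death, age_eruption=6.0):
          """Mean wear rate between eruption and death (mm/year); the
          eruption age of 6 years for the mandibular I1 is an illustrative
          assumption, as are the crown heights below."""
          return (original_mm - preserved_mm) / (age_death - age_eruption)

      rate = wear_rate(original_mm=11.0, preserved_mm=7.5, age_death=17.0)
      years_left = 7.5 / rate            # years until the crown is worn away
      print(f"rate {rate:.3f} mm/yr; crown gone by age {17 + years_left:.0f}")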

  7. Reconstruction algorithm for polychromatic CT imaging: application to beam hardening correction

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    2000-01-01

    This paper presents a new reconstruction algorithm for both single- and dual-energy computed tomography (CT) imaging. By incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process, the algorithm is capable of eliminating beam hardening artifacts. The single energy version of the algorithm assumes that each voxel in the scan field can be expressed as a mixture of two known substances, for example, a mixture of trabecular bone and marrow, or a mixture of fat and flesh. These assumptions are easily satisfied in a quantitative computed tomography (QCT) setting. We have compared our algorithm to three commonly used single-energy correction techniques. Experimental results show that our algorithm is much more robust and accurate. We have also shown that QCT measurements obtained using our algorithm are five times more accurate than that from current QCT systems (using calibration). The dual-energy mode does not require any prior knowledge of the object in the scan field, and can be used to estimate the attenuation coefficient function of unknown materials. We have tested the dual-energy setup to obtain an accurate estimate for the attenuation coefficient function of K2HPO4 solution.

  8. Binding free energy predictions of farnesoid X receptor (FXR) agonists using a linear interaction energy (LIE) approach with reliability estimation: application to the D3R Grand Challenge 2

    NASA Astrophysics Data System (ADS)

    Rifai, Eko Aditya; van Dijk, Marc; Vermeulen, Nico P. E.; Geerke, Daan P.

    2018-01-01

    Computational protein binding affinity prediction can play an important role in drug research but performing efficient and accurate binding free energy calculations is still challenging. In the context of phase 2 of the Drug Design Data Resource (D3R) Grand Challenge 2 we used our automated eTOX ALLIES approach to apply the (iterative) linear interaction energy (LIE) method and we evaluated its performance in predicting binding affinities for farnesoid X receptor (FXR) agonists. Efficiency was obtained by our pre-calibrated LIE models and molecular dynamics (MD) simulations at the nanosecond scale, while predictive accuracy was obtained for a small subset of compounds. Using our recently introduced reliability estimation metrics, we could classify predictions with higher confidence by featuring an applicability domain (AD) analysis in combination with protein-ligand interaction profiling. The outcomes of and agreement between our AD and interaction-profile analyses to distinguish and rationalize the performance of our predictions highlighted the relevance of sufficiently exploring protein-ligand interactions during training and it demonstrated the possibility to quantitatively and efficiently evaluate if this is achieved by using simulation data only.
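
    The standard LIE estimate is a weighted sum of bound-minus-free ensemble averages of the ligand's van der Waals and electrostatic interaction energies. The sketch below uses typical literature coefficients and hypothetical energy averages, not the calibrated eTOX ALLIES model parameters.

      def lie_binding_free_energy(dvdw, delec, alpha=0.18, beta=0.33, gamma=0.0):
          """Standard linear interaction energy estimate:
             dG = alpha * <dV_vdW> + beta * <dV_el> + gamma,
          where <dV> are bound-minus-free ensemble averages of the
          ligand-surrounding interaction energies from MD. The alpha, beta
          and gamma values are typical literature choices, not the
          paper's calibrated parameters.
          """
          return alpha * dvdw + beta * delec + gamma

      # Hypothetical averages (kcal/mol) from bound and free simulations:
      print(lie_binding_free_energy(dvdw=-35.2, delec=-8.4))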

  9. Rates of anterior tooth wear in Middle Pleistocene hominins from Sima de los Huesos (Sierra de Atapuerca, Spain)

    PubMed Central

    de Castro, J. M. Bermúdez; Martinón-Torres, M.; Sarmiento, S.; Lozano, M.; Arsuaga, J. L.; Carbonell, E.

    2003-01-01

    This study presents quantitative data on the rates of anterior tooth wear in a Pleistocene human population. The data were obtained for the hominin sample of the Sima de los Huesos site in Atapuerca, Spain. The fossil record belongs to a minimum of 28 individuals of the same biological population, assigned to the species Homo heidelbergensis. We have estimated the original and the preserved crown height of the mandibular incisors (I1 and I2) of 11 individuals, whose age at death can be ascertained from the mineralization stage and tooth eruption. Results provide a range of 0.276–0.348 and 0.288–0.360 mm per year for the mean wear rate of the mandibular I1 and I2, respectively, in individuals ≈16–18 years old. These data suggest that incisors' crowns would be totally worn out toward the fifth decade of life. Thus, we expect the life expectancy of this population to be seriously limited. These data, which could be contrasted with results obtained on hominins at other sites, could be of interest for estimating the death age of adult individuals. PMID:14528001

  10. Measurement of lung function using Electrical Impedance Tomography (EIT) during mechanical ventilation

    NASA Astrophysics Data System (ADS)

    Nebuya, Satoru; Koike, Tomotaka; Imai, Hiroshi; Noshiro, Makoto; Brown, Brian H.; Soma, Kazui

    2010-04-01

    The consistency of regional lung density measurements as estimated by Electrical Impedance Tomography (EIT), in eleven patients supported by a mechanical ventilator, was validated to verify the feasibility of its use in intensive care medicine. There were significant differences in regional lung densities between the normal lung and diseased lungs associated with pneumonia, atelectasis and pleural effusion (Steel-Dwass test, p < 0.05). Temporal changes in regional lung density of patients with atelectasis were observed to be in good agreement with the results of clinical diagnosis. These results indicate that it is feasible to obtain a quantitative value for regional lung density using EIT.

  11. Etalon (standard) for surface potential distribution produced by electric activity of the heart.

    PubMed

    Szathmáry, V; Ruttkay-Nedecký, I

    1981-01-01

    The authors submit etalon (standard) equipotential maps as an aid in the evaluation of maps of surface potential distributions in living subjects. They were obtained by measuring potentials on the surface of an electrolytic tank shaped like the thorax. The individual etalon maps were determined in such a way that the parameters of the physical dipole forming the source of the electric field in the tank corresponded to the mean vectorcardiographic parameters measured in a healthy population sample. The technique also allows a quantitative estimate of the degree of non-dipolarity of the heart as the source of the electric field.

  12. Modified-hypernetted-chain determination of the phase diagram of rigid C60 molecules

    NASA Astrophysics Data System (ADS)

    Caccamo, C.

    1995-02-01

    The modified-hypernetted-chain theory is applied to the determination of the phase diagram of the Lennard-Jones (LJ) fluid, and of a model of C60 previously investigated [Phys. Rev. Lett. 71, 1200 (1993)] through molecular-dynamics (MD) simulation and a different theoretical approach. In the LJ case the agreement with available MD data is quantitative and superior to other theories. For C60, the phase diagram obtained is in quite good agreement with previous MD results: in particular, the theory confirms the existence of a liquid phase between 1600 and 1920 K, the estimated triple point and critical temperature, respectively.

  13. Estimation of masonry mechanical characteristics by ESPI fringe interpretation

    NASA Astrophysics Data System (ADS)

    Facchini, M.; Zanetta, P.; Binda, L.; Roberti, G. Mirabella; Tiraboschi, C.

    Electronic speckle pattern interferometry (ESPI) can be a powerful tool for efficient non-destructive testing and evaluation of micro-deformations of masonry materials and structures. Unlike traditional transducers, ESPI requires no direct contact with the object, and the full-field visualisation it offers provides for a better understanding of the surface behaviour. This paper describes an in-plane deformation inspection system which has been built up for an automatic acquisition of interferograms at different stages of a test. The system is applied to the evaluation of some mechanical characteristics of masonry components. Qualitative and quantitative results are obtained and an overall discussion is presented.

  14. Prediction of hydrocarbons in sedimentary basins

    USGS Publications Warehouse

    Harff, J.E.; Davis, J.C.; Eiserbeck, W.

    1993-01-01

    To estimate the undiscovered hydrocarbon potential of sedimentary basins, quantitative play assessments specific for each location in a region may be obtained using geostatistical methods combined with the theory of classification of geological objects, a methodology referred to as regionalization. The technique relies on process modeling and measured borehole data as well as probabilistic methods to exploit the relationship between geology (the "predictor") and known hydrocarbon productivity (the "target") to define prospective stratigraphic intervals within a basin. It is demonstrated in case studies from the oil-producing region of the western Kansas Pennsylvanian Shelf and the gas-bearing Rotliegend sediments of the Northeast German Basin. © 1993 International Association for Mathematical Geology.

  15. Remote measurements of the atmosphere using Raman scattering.

    PubMed

    Melfi, S H

    1972-07-01

    The Raman optical radar measurements of the atmosphere presented demonstrate that the technique may be used to obtain quantitative measurements of the spatial distribution of individual atmospheric molecular trace constituents, in particular water vapor, as well as those of the major constituents. In addition, it is shown that monitoring Raman signals from atmospheric nitrogen aids in interpreting elastic scattering measurements by eliminating attenuation effects. In general, the experimental results show good agreement with independent meteorological measurements. Finally, experimental data are utilized to estimate the Raman backscatter cross section for water vapor excited at 3471.5 Å as σ(H2O)/σ(N2) = 3.8 ± 25%.

  16. Properties of O dwarf stars in 30 Doradus

    NASA Astrophysics Data System (ADS)

    Sabín-Sanjulián, Carolina; VFTS Collaboration

    2017-11-01

    We perform a quantitative spectroscopic analysis of 105 presumably single O dwarf stars in 30 Doradus, located within the Large Magellanic Cloud. We use mid-to-high resolution multi-epoch optical spectroscopic data obtained within the VLT-FLAMES Tarantula Survey. Stellar and wind parameters are derived by means of the automatic tool iacob-gbat, which is based on a large grid of fastwind models. We also benefit from the Bayesian tool bonnsai to estimate evolutionary masses. We provide a spectral calibration for the effective temperature of O dwarf stars in the LMC, deal with the mass discrepancy problem and investigate the wind properties of the sample.

  17. Growth of group II-VI semiconductor quantum dots with strong quantum confinement and low size dispersion

    NASA Astrophysics Data System (ADS)

    Pandey, Praveen K.; Sharma, Kriti; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.

    2003-11-01

    CdTe quantum dots embedded in glass matrix are grown using a two-step annealing method. The results of the optical transmission characterization are analysed and compared with the results obtained from CdTe quantum dots grown using the conventional single-step annealing method. A theoretical model for the absorption spectra is used to quantitatively estimate the size dispersion in the two cases. In the present work, it is established that the quantum dots grown using the two-step annealing method have stronger quantum confinement, reduced size dispersion and a higher volume ratio as compared to the single-step annealed samples.

  18. On the Heating of Ions in Noncylindrical Z-Pinches

    NASA Astrophysics Data System (ADS)

    Svirsky, E. B.

    2018-01-01

    The method proposed here for analyzing processes in the hot plasma of noncylindrical Z-pinches is based on separating the group of high-energy ions into a special fraction. Such ions constitute an insignificant fraction (~10%) of the total volume of the Z-pinch plasma, but they contribute the most to the formation of conditions in which the pinch becomes a source of nuclear fusion products and X-ray radiation. The method allows a fairly rigorous approach to obtaining quantitative estimates of the plasma parameters, the nuclear fusion energy yield, and the features of the neutron fluxes in experiments with Z-pinches.

  19. Quantitative background parenchymal uptake on molecular breast imaging and breast cancer risk: a case-control study.

    PubMed

    Hruska, Carrie B; Geske, Jennifer R; Swanson, Tiffinee N; Mammel, Alyssa N; Lake, David S; Manduca, Armando; Conners, Amy Lynn; Whaley, Dana H; Scott, Christopher G; Carter, Rickey E; Rhodes, Deborah J; O'Connor, Michael K; Vachon, Celine M

    2018-06-05

    Background parenchymal uptake (BPU), which refers to the level of Tc-99m sestamibi uptake within normal fibroglandular tissue on molecular breast imaging (MBI), has been identified as a breast cancer risk factor, independent of mammographic density. Prior analyses have used subjective categories to describe BPU. We evaluate a new quantitative method for assessing BPU by testing its reproducibility, comparing quantitative results with previously established subjective BPU categories, and determining the association of quantitative BPU with breast cancer risk. Two nonradiologist operators independently performed region-of-interest analysis on MBI images viewed in conjunction with corresponding digital mammograms. Quantitative BPU was defined as a unitless ratio of the average pixel intensity (counts/pixel) within the fibroglandular tissue versus the average pixel intensity in fat. Operator agreement and the correlation of quantitative BPU measures with subjective BPU categories assessed by expert radiologists were determined. Percent density on mammograms was estimated using Cumulus. The association of quantitative BPU with breast cancer (per one unit BPU) was examined within an established case-control study of 62 incident breast cancer cases and 177 matched controls. Quantitative BPU ranged from 0.4 to 3.2 across all subjects and was on average higher in cases compared to controls (1.4 versus 1.2, p < 0.007 for both operators). Quantitative BPU was strongly correlated with subjective BPU categories (Spearman's r = 0.59 to 0.69, p < 0.0001, for each paired combination of two operators and two radiologists). Interoperator and intraoperator agreement in the quantitative BPU measure, assessed by intraclass correlation, was 0.92 and 0.98, respectively. Quantitative BPU measures showed either no correlation or weak negative correlation with mammographic percent density. In a model adjusted for body mass index and percent density, higher quantitative BPU was associated with increased risk of breast cancer for both operators (OR = 4.0, 95% confidence interval (CI) 1.6-10.1, and 2.4, 95% CI 1.2-4.7). Quantitative measurement of BPU, defined as the ratio of average counts in fibroglandular tissue relative to that in fat, can be reliably performed by nonradiologist operators with a simple region-of-interest analysis tool. Similar to results obtained with subjective BPU categories, quantitative BPU is a functional imaging biomarker of breast cancer risk, independent of mammographic density and hormonal factors.
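
    The quantitative BPU measure as defined in the study is a simple region-of-interest ratio; a minimal sketch with a toy image and hypothetical masks:

      import numpy as np

      def quantitative_bpu(image, fibroglandular_mask, fat_mask):
          """Unitless BPU ratio as defined in the study: mean counts/pixel
          in fibroglandular tissue divided by mean counts/pixel in fat."""
          return image[fibroglandular_mask].mean() / image[fat_mask].mean()

      # Toy MBI image with hypothetical ROIs:
      img = np.random.poisson(lam=20, size=(64, 64)).astype(float)
      img[10:30, 10:30] += 15                      # elevated parenchymal uptake
      fg = np.zeros_like(img, bool); fg[10:30, 10:30] = True
      fat = np.zeros_like(img, bool); fat[40:60, 40:60] = True
      print("BPU:", quantitative_bpu(img, fg, fat))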

  20. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    PubMed Central

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-01-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: 1) the reconstruction algorithms do not make full use of projection statistics; and 2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10 to 40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET. PMID:27385378

  1. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    NASA Astrophysics Data System (ADS)

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-08-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: (1) the reconstruction algorithms do not make full use of projection statistics; and (2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10-40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET.

  2. Time-series analyses of air pollution and mortality in the United States: a subsampling approach.

    PubMed

    Moolgavkar, Suresh H; McClellan, Roger O; Dewanji, Anup; Turim, Jay; Luebeck, E Georg; Edwards, Melanie

    2013-01-01

    Hierarchical Bayesian methods have been used in previous papers to estimate national mean effects of air pollutants on daily deaths in time-series analyses. We obtained maximum likelihood estimates of the common national effects of the criteria pollutants on mortality based on time-series data from up to 108 metropolitan areas in the United States. We used a subsampling bootstrap procedure to obtain the maximum likelihood estimates and confidence bounds for common national effects of the criteria pollutants, as measured by the percentage increase in daily mortality associated with a unit increase in daily 24-hr mean pollutant concentration on the previous day, while controlling for weather and temporal trends. We considered five pollutants [PM10, ozone (O3), carbon monoxide (CO), nitrogen dioxide (NO2), and sulfur dioxide (SO2)] in single- and multipollutant analyses. Flexible ambient concentration-response models for the pollutant effects were considered as well. We performed limited sensitivity analyses with different degrees of freedom for time trends. In single-pollutant models, we observed significant associations of daily deaths with all pollutants. The O3 coefficient was highly sensitive to the degree of smoothing of time trends. Among the gases, SO2 and NO2 were most strongly associated with mortality. The flexible ambient concentration-response curve for O3 showed evidence of nonlinearity and a threshold at about 30 ppb. Differences between the results of our analyses and those reported from using the Bayesian approach suggest that estimates of the quantitative impact of pollutants depend on the choice of statistical approach, although results are not directly comparable because they are based on different data. In addition, the estimate of the O3-mortality coefficient depends on the amount of smoothing of time trends.
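
The resampling idea above can be illustrated generically. The sketch below resamples whole cities (clusters) to put a confidence bound on a pooled effect estimate; the simulated data, the log-linear fit, and the bootstrap-over-clusters scheme are all simplified assumptions, not the authors' exact subsampling procedure.

```python
# Cluster-resampling confidence bound for a pooled effect (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(1)
n_cities, n_days = 40, 500
beta_true = 0.005                       # assumed increase in log-mortality per unit pollutant

# Simulated city-level data: daily pollutant levels and Poisson death counts
cities = []
for _ in range(n_cities):
    x = rng.gamma(5.0, 8.0, n_days)                 # daily pollutant concentration
    lam = np.exp(3.0 + beta_true * x)               # city baseline fixed at 3.0 here
    cities.append((x, rng.poisson(lam)))

def pooled_slope(idx):
    """Common slope of log(deaths + 0.5) on pollutant, pooling the chosen cities."""
    xs = np.concatenate([cities[i][0] for i in idx])
    ys = np.concatenate([np.log(cities[i][1] + 0.5) for i in idx])
    return np.polyfit(xs, ys, 1)[0]

est = pooled_slope(range(n_cities))
reps = [pooled_slope(rng.choice(n_cities, n_cities, replace=True))
        for _ in range(500)]            # resample whole cities as clusters
lo, hi = np.percentile(reps, [2.5, 97.5])
print(f"common effect = {est:.5f}, 95% CI ({lo:.5f}, {hi:.5f})")
```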

  3. QUANTITATIVE PLUTONIUM MICRODISTRIBUTION IN BONE TISSUE OF VERTEBRA FROM A MAYAK WORKER

    PubMed Central

    Lyovkina, Yekaterina V.; Miller, Scott C.; Romanov, Sergey A.; Krahenbuhl, Melinda P.; Belosokhov, Maxim V.

    2010-01-01

    The purpose was to obtain quantitative data on plutonium microdistribution in different structural elements of human bone tissue for local dose assessment and validation of dosimetric models. A sample of the thoracic vertebra was obtained from a former Mayak worker with a rather high plutonium burden. Additional information was obtained on occupational and exposure history, medical history, and measured plutonium content in organs. Plutonium was detected in bone sections from its fission tracks in polycarbonate film using neutron-induced autoradiography. Quantitative analysis of randomly selected microscopic fields on one of the autoradiographs was performed. Data included fission fragment tracks in different bone tissue and surface areas. Quantitative information on plutonium microdistribution in human bone tissue was obtained for the first time. From these data, the quantitative ratios of plutonium decays in bone volume to decays on bone surface were defined as 2.0 for the cortical fraction and 0.4 for the trabecular fraction, respectively. The measured ratios of decays in bone volume to decays on bone surface do not coincide with the models recommended by the International Commission on Radiological Protection for the cortical bone fraction. Biokinetic model parameters of extrapulmonary compartments might need to be adjusted after expansion of the data set on quantitative plutonium microdistribution in other human bone types, as well as in other cases with different exposure patterns and types of plutonium. PMID:20838087

  4. Cross-reactivity of anti-chicken IgY antibody with immunoglobulins of exotic avian species.

    PubMed

    Cray, Carolyn; Villar, David

    2008-09-01

    A major challenge in the serologic diagnosis of infectious diseases in exotic birds is the limited availability of species-specific antibodies. The purpose of the current study was to determine whether there is cross-reactivity between commercially available anti-chicken IgY antibodies and immunoglobulins of several avian species, with particular emphasis on psittacines. To quantitate the reactivity with anti-chicken IgY, Western blot analysis was performed using plasma samples from many different avian species. Results were compared with gamma globulin fraction quantitation obtained by protein electrophoresis. By Western blot, 2 protein bands corresponding to the heavy and light chains of chicken IgY were identified in species from 21 avian orders using 1 of 2 rabbit anti-chicken IgY antibodies. Densitometric analysis showed that the amount of immunoglobulin estimated from Western blots correlated strongly with data from protein electrophoresis assays. The results demonstrate that some commercially available anti-chicken IgY antibodies exhibit good cross-reactivity with most avian species.

  5. PCR-free Quantification of Multiple Splice Variants in Cancer Gene by Surface Enhanced Raman Spectroscopy

    PubMed Central

    Sun, Lan; Irudayaraj, Joseph

    2009-01-01

    We demonstrate a surface enhanced Raman spectroscopy (SERS) based array platform to monitor gene expression in cancer cells in a multiplex and quantitative format without amplification steps. A strategy comprising DNA/RNA hybridization, S1 nuclease digestion, and alkaline hydrolysis was adopted to obtain DNA targets specific to two splice junction variants, Δ(9, 10) and Δ(5), of the breast cancer susceptibility gene 1 (BRCA1) from MCF-7 and MDA-MB-231 breast cancer cell lines. These two targets were identified simultaneously and their absolute quantities were estimated by a SERS strategy utilizing the inherent plasmon-phonon Raman mode of gold nanoparticle probes as a self-referencing standard to correct for variability in surface enhancement. Results were then validated by reverse transcription PCR (RT-PCR). Our proposed methodology could be expanded to a higher level of multiplexing for quantitative gene expression analysis of any gene without amplification steps. PMID:19780515

  6. Assessment of statistical uncertainty in the quantitative analysis of solid samples in motion using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Cabalín, L. M.; González, A.; Ruiz, J.; Laserna, J. J.

    2010-08-01

    Statistical uncertainty in the quantitative analysis of solid samples in motion by laser-induced breakdown spectroscopy (LIBS) has been assessed. For this purpose, a LIBS demonstrator was designed and constructed in our laboratory. The LIBS system consisted of a laboratory-scale conveyor belt, a compact optical module and a Nd:YAG laser operating at 532 nm. The speed of the conveyor belt was variable and could be adjusted up to a maximum speed of 2 m s⁻¹. Statistical uncertainty in the analytical measurements was estimated in terms of precision (reproducibility and repeatability) and accuracy. The results obtained by LIBS on shredded scrap samples under real conditions have demonstrated that the analytical precision and accuracy of LIBS are dependent on the sample geometry, position on the conveyor belt and surface cleanliness. Flat, relatively clean scrap samples exhibited acceptable reproducibility and repeatability; by contrast, samples with an irregular shape or a dirty surface exhibited a poor relative standard deviation.

  7. A new software for dimensional measurements in 3D endodontic root canal instrumentation.

    PubMed

    Sinibaldi, Raffaele; Pecci, Raffaella; Somma, Francesco; Della Penna, Stefania; Bedini, Rossella

    2012-01-01

    The main issue to be faced in obtaining size estimates of the 3D modification of the dental canal after endodontic treatment is the co-registration of the image stacks obtained through micro-computed tomography (micro-CT) scans before and after treatment. Here, quantitative analysis of micro-CT images has been performed by means of new dedicated software targeted at the analysis of the root canal after endodontic instrumentation. This software analytically calculates the best superposition between the pre- and post-treatment structures using the inertia tensor of the tooth. This strategy avoids minimization procedures, which can be user-dependent and time-consuming. Once the co-registration has been achieved, dimensional measurements are performed by simultaneous evaluation of quantitative parameters over the two superimposed stacks of micro-CT images. The software automatically calculates the changes of volume, surface and symmetry axes in 3D occurring after the instrumentation. The calculation is based on direct comparison of the canal and canal branches selected by the user on the pre-treatment image stack.

  8. Global scaling for semi-quantitative analysis in FP-CIT SPECT.

    PubMed

    Kupitz, D; Apostolova, I; Lange, C; Ulrich, G; Amthauer, H; Brenner, W; Buchert, R

    2014-01-01

    Semi-quantitative characterization of dopamine transporter availability from single photon emission computed tomography (SPECT) with 123I-ioflupane (FP-CIT) is based on uptake ratios relative to a reference region. The aim of this study was to evaluate the whole brain as reference region for semi-quantitative analysis of FP-CIT SPECT. The rationale was that this might reduce statistical noise associated with the estimation of non-displaceable FP-CIT uptake. 150 FP-CIT SPECTs were categorized as neurodegenerative or non-neurodegenerative by an expert. Semi-quantitative analysis of specific binding ratios (SBR) was performed with a custom-made tool based on the Statistical Parametric Mapping software package using predefined regions of interest (ROIs) in the anatomical space of the Montreal Neurological Institute. The following reference regions were compared: predefined ROIs for frontal and occipital lobe and whole brain (without striata, thalamus and brainstem). Tracer uptake in the reference region was characterized by the mean, median or 75th percentile of its voxel intensities. The area (AUC) under the receiver operating characteristic curve was used as performance measure. The highest AUC of 0.973 was achieved by the SBR of the putamen with the 75th percentile in the whole brain as reference. The lowest AUC for the putamen SBR of 0.937 was obtained with the mean in the frontal lobe as reference. We recommend the 75th percentile in the whole brain as reference for semi-quantitative analysis in FP-CIT SPECT. This combination provided the best agreement of the semi-quantitative analysis with visual evaluation of the SPECT images by an expert and, therefore, is appropriate to support less experienced physicians.

  9. Semi-quantitative analysis of salivary gland scintigraphy in Sjögren's syndrome diagnosis: a first-line tool.

    PubMed

    Angusti, Tiziana; Pilati, Emanuela; Parente, Antonella; Carignola, Renato; Manfredi, Matteo; Cauda, Simona; Pizzigati, Elena; Dubreuil, Julien; Giammarile, Francesco; Podio, Valerio; Skanjeti, Andrea

    2017-09-01

    The aim of this study was to assess semi-quantitative salivary gland dynamic scintigraphy (SGdS) parameters, independently and in an integrated way, in order to predict primary Sjögren's syndrome (pSS). Forty-six consecutive patients (41 females; age 61 ± 11 years) with sicca syndrome were studied by SGdS after injection of 200 MBq of pertechnetate. In sixteen patients, pSS was diagnosed according to the American-European Consensus Group criteria (AECGc). Semi-quantitative parameters (uptake (UP) and excretion fraction (EF)) were obtained for each gland. ROC curves were used to determine the best cut-off value. The area under the curve (AUC) was used to estimate the accuracy of each semi-quantitative analysis. To assess the correlation between scintigraphic results and disease severity, semi-quantitative parameters were plotted against the Sjögren's syndrome disease activity index (ESSDAI). A nomogram was built to perform an integrated evaluation of all the scintigraphic semi-quantitative data. Both UP and EF of the salivary glands were significantly lower in pSS patients compared to those in non-pSS patients (p < 0.001). The ROC curves showed significantly large AUCs for both parameters (p < 0.05). Parotid UP and submandibular EF, assessed by univariate and multivariate logistic regression, showed a significant and independent correlation with pSS diagnosis (p < 0.05). No correlation was found between SGdS semi-quantitative parameters and ESSDAI. The accuracy of the proposed nomogram was 87%. SGdS is an accurate and reproducible tool for the diagnosis of pSS. ESSDAI was not shown to be correlated with SGdS data. SGdS should be the first-line imaging technique in patients with suspected pSS.
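
A minimal sketch of the ROC step described above: compute the AUC for a semi-quantitative parameter where low values indicate disease, and choose a cut-off by the Youden index. The uptake values below are simulated assumptions, not study data.

```python
# ROC curve, AUC, and Youden cut-off for a "lower is diseased" score (toy data).
import numpy as np

rng = np.random.default_rng(2)
uptake_pss   = rng.normal(1.5, 0.4, 16)   # assumed lower uptake in pSS patients
uptake_other = rng.normal(2.5, 0.5, 30)

scores = np.concatenate([uptake_pss, uptake_other])
labels = np.concatenate([np.ones(16), np.zeros(30)])   # 1 = pSS

# Since LOW uptake indicates disease, classify "pSS" when score <= threshold
thresholds = np.sort(np.unique(scores))
best = None
tprs, fprs = [], []
for t in thresholds:
    pred = scores <= t
    tpr = (pred & (labels == 1)).sum() / (labels == 1).sum()
    fpr = (pred & (labels == 0)).sum() / (labels == 0).sum()
    tprs.append(tpr); fprs.append(fpr)
    youden = tpr - fpr
    if best is None or youden > best[0]:
        best = (youden, t, tpr, 1 - fpr)

auc = np.trapz(tprs, fprs)                 # area under the ROC curve
print(f"AUC={auc:.3f}; best cut-off={best[1]:.2f} "
      f"(sensitivity={best[2]:.2f}, specificity={best[3]:.2f})")
```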

  10. Assessment of antibody library diversity through next generation sequencing and technical error compensation

    PubMed Central

    Lisi, Simonetta; Chirichella, Michele; Arisi, Ivan; Goracci, Martina; Cremisi, Federico; Cattaneo, Antonino

    2017-01-01

    Antibody libraries are important resources to derive antibodies to be used for a wide range of applications, from structural and functional studies to intracellular protein interference studies to developing new diagnostics and therapeutics. Whatever the goal, the key parameter for an antibody library is its complexity (also known as diversity), i.e. the number of distinct elements in the collection, which directly reflects the probability of finding in the library an antibody against a given antigen, of sufficiently high affinity. Quantitative evaluation of antibody library complexity and quality has for a long time been inadequately addressed, due to the high similarity and length of the sequences in the library. Complexity was usually inferred from the transformation efficiency and tested either by fingerprinting and/or sequencing of a few hundred random library elements. Inferring complexity from such a small sampling is, however, very rudimentary and gives limited information about the real diversity, because complexity does not scale linearly with sample size. Next-generation sequencing (NGS) has opened new ways to tackle the assessment of antibody library complexity and quality. However, much remains to be done to fully exploit the potential of NGS for the quantitative analysis of antibody repertoires and to overcome current limitations. To obtain a more reliable estimate of antibody library complexity, here we show a new, PCR-free, NGS approach to sequencing antibody libraries on the Illumina platform, coupled with a new bioinformatic analysis and software (Diversity Estimator of Antibody Library, DEAL) that allows the complexity to be estimated reliably while taking sequencing error into consideration. PMID:28505201

  11. Assessment of antibody library diversity through next generation sequencing and technical error compensation.

    PubMed

    Fantini, Marco; Pandolfini, Luca; Lisi, Simonetta; Chirichella, Michele; Arisi, Ivan; Terrigno, Marco; Goracci, Martina; Cremisi, Federico; Cattaneo, Antonino

    2017-01-01

    Antibody libraries are important resources to derive antibodies to be used for a wide range of applications, from structural and functional studies to intracellular protein interference studies to developing new diagnostics and therapeutics. Whatever the goal, the key parameter for an antibody library is its complexity (also known as diversity), i.e. the number of distinct elements in the collection, which directly reflects the probability of finding in the library an antibody against a given antigen, of sufficiently high affinity. Quantitative evaluation of antibody library complexity and quality has for a long time been inadequately addressed, due to the high similarity and length of the sequences in the library. Complexity was usually inferred from the transformation efficiency and tested either by fingerprinting and/or sequencing of a few hundred random library elements. Inferring complexity from such a small sampling is, however, very rudimentary and gives limited information about the real diversity, because complexity does not scale linearly with sample size. Next-generation sequencing (NGS) has opened new ways to tackle the assessment of antibody library complexity and quality. However, much remains to be done to fully exploit the potential of NGS for the quantitative analysis of antibody repertoires and to overcome current limitations. To obtain a more reliable estimate of antibody library complexity, here we show a new, PCR-free, NGS approach to sequencing antibody libraries on the Illumina platform, coupled with a new bioinformatic analysis and software (Diversity Estimator of Antibody Library, DEAL) that allows the complexity to be estimated reliably while taking sequencing error into consideration.
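
A minimal sketch of error-aware diversity counting in the spirit of the two records above (not the DEAL software itself): reads within a small Hamming distance of a more abundant sequence are collapsed as likely sequencing errors before distinct elements are counted. The reads and the distance threshold are toy assumptions.

```python
# Collapse likely sequencing-error variants, then count distinct library elements.
from collections import Counter

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def estimate_complexity(reads, max_err_dist=1):
    counts = Counter(reads)
    kept = []
    # Visit sequences from most to least abundant; absorb likely error variants
    for seq, n in counts.most_common():
        parent = next((k for k in kept
                       if len(k) == len(seq) and hamming(k, seq) <= max_err_dist),
                      None)
        if parent is None:
            kept.append(seq)
    return len(kept)

reads = ["ACGTAC"] * 50 + ["ACGTAA"] * 2 + ["TTGCAT"] * 30 + ["TAGCAT"] * 1
print(estimate_complexity(reads))   # -> 2 distinct library elements
```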

  12. Hepatobiliary MRI: Signal intensity based assessment of liver function correlated to 13C-Methacetin breath test.

    PubMed

    Haimerl, Michael; Probst, Ute; Poelsterl, Stefanie; Beyer, Lukas; Fellner, Claudia; Selgrad, Michael; Hornung, Matthias; Stroszczynski, Christian; Wiggermann, Philipp

    2018-06-13

    Gadoxetic acid (Gd-EOB-DTPA) is a paramagnetic MRI contrast agent with rising popularity that has been used in recent years for the evaluation of imaging-based liver function. The aim was to verify whether liver function, as determined by real-time breath analysis using the intravenous administration of 13C-methacetin, can be estimated quantitatively from Gd-EOB-DTPA-enhanced MRI using signal intensity (SI) values. 110 patients underwent Gd-EOB-DTPA-enhanced 3-T MRI and, for the evaluation of liver function, a 13C-methacetin breath test (13C-MBT). SI values from before (SIpre) and 20 min after (SIpost) contrast media injection were acquired by T1-weighted volume-interpolated breath-hold examination (VIBE) sequences with fat suppression. The relative enhancement (RE) between the plain and contrast-enhanced SI values was calculated and evaluated in a correlation analysis of 13C-MBT values to SIpost and RE to obtain an SI-based estimation of 13C-MBT values. The simple regression model showed a log-linear correlation of 13C-MBT values with SIpost and RE (p < 0.001). Stratified by 3 different categories of 13C-MBT readouts, there was a constant significant decrease in both SIpost (p ≤ 0.002) and RE (p ≤ 0.033) with increasing liver disease progression as assessed by the 13C-MBT. Liver function as determined using real-time 13C-methacetin breath analysis can be estimated quantitatively from Gd-EOB-DTPA-enhanced MRI using SI-based indices.
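
A small sketch of the reported log-linear relationship: fit log(13C-MBT) against relative enhancement (RE) and invert the fit to get an SI-based estimate. All numbers are simulated assumptions, not patient data.

```python
# Log-linear fit of a breath-test value against MRI relative enhancement (toy data).
import numpy as np

rng = np.random.default_rng(3)
si_pre  = rng.normal(300, 20, 110)
si_post = si_pre * rng.uniform(1.2, 2.2, 110)
re = (si_post - si_pre) / si_pre                      # relative enhancement

# Assume the relation log(MBT) = a + b*RE plus noise, then recover a and b
mbt = np.exp(1.0 + 2.0 * re + rng.normal(0, 0.15, 110))
b, a = np.polyfit(re, np.log(mbt), 1)

def estimate_mbt(si_pre_val, si_post_val):
    re_val = (si_post_val - si_pre_val) / si_pre_val
    return np.exp(a + b * re_val)

print(f"fitted: log(MBT) = {a:.2f} + {b:.2f}*RE;",
      "estimate at RE=0.8:", round(estimate_mbt(300.0, 540.0), 2))
```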

  13. Aliphatic Hydrocarbon Content of Interstellar Dust

    NASA Astrophysics Data System (ADS)

    Günay, B.; Schmidt, T. W.; Burton, M. G.; Afşar, M.; Krechkivska, O.; Nauta, K.; Kable, S. H.; Rawal, A.

    2018-06-01

    There is considerable uncertainty as to the amount of carbon incorporated in interstellar dust. The aliphatic component of the carbonaceous dust is of particular interest because it produces a significant 3.4 μm absorption feature when viewed against a background radiation source. The optical depth of the 3.4 μm absorption feature is related to the number of aliphatic carbon C-H bonds along the line of sight. It is possible to estimate the column density of carbon locked up in the aliphatic hydrocarbon component of interstellar dust from quantitative analysis of the 3.4 μm interstellar absorption feature, provided that the absorption coefficient of aliphatic hydrocarbons incorporated in the interstellar dust is known. We report laboratory analogues of interstellar dust produced by experimentally mimicking interstellar/circumstellar conditions. The resultant spectra of these dust analogues closely match those from astronomical observations. Measurements of the absorption coefficient of aliphatic hydrocarbons incorporated in the analogues were carried out by a procedure combining FTIR and 13C NMR spectroscopies. The absorption coefficients obtained for both interstellar analogues were found to be in close agreement (4.76(8) × 10⁻¹⁸ cm group⁻¹ and 4.69(14) × 10⁻¹⁸ cm group⁻¹), less than half those obtained in studies using small aliphatic molecules. The results thus obtained permit direct calibration of the astronomical observations, providing rigorous estimates of the amount of aliphatic carbon in the interstellar medium.
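
A worked sketch of how such an absorption coefficient calibrates observations: the column density of aliphatic C-H groups follows from the integrated optical depth of the 3.4 μm feature divided by the integrated absorption coefficient. The optical depth below is an assumed illustrative input, not a value from the paper.

```python
# Column density from integrated optical depth and absorption coefficient.
tau_integrated = 1.0          # assumed integral of tau over the feature, in cm^-1
A_group = 4.7e-18             # integrated absorption coefficient, cm per C-H group

N_CH = tau_integrated / A_group           # aliphatic C-H groups per cm^2
print(f"N(aliphatic C-H) = {N_CH:.2e} groups cm^-2")
```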

  14. Harnessing quantitative genetics and genomics for understanding and improving complex traits in crops

    USDA-ARS?s Scientific Manuscript database

    Classical quantitative genetics aids crop improvement by providing the means to estimate heritability, genetic correlations, and predicted responses to various selection schemes. Genomics has the potential to aid quantitative genetics and applied crop improvement programs via large-scale, high-thro...

  15. Estimation of Biochemical Constituents From Fresh, Green Leaves By Spectrum Matching Techniques

    NASA Technical Reports Server (NTRS)

    Goetz, A. F. H.; Gao, B. C.; Wessman, C. A.; Bowman, W. D.

    1990-01-01

    Estimation of biochemical constituents in vegetation such as lignin, cellulose, starch, sugar and protein by remote sensing methods is an important goal in ecological research. The spectral reflectances of dried leaves exhibit diagnostic absorption features which can be used to estimate the abundance of important constituents. Lignin and nitrogen concentrations have been obtained from canopies by use of imaging spectrometry and multiple linear regression techniques. The difficulty in identifying individual spectra of leaf constituents in the region beyond 1 micrometer is that liquid water contained in the leaf dominates the spectral reflectance of leaves in this region. By use of spectrum matching techniques, originally used to quantify whole column water abundance in the atmosphere and equivalent liquid water thickness in leaves, we have been able to remove the liquid water contribution to the spectrum. The residual spectra resemble spectra for cellulose in the 1.1 micrometer region, lignin in the 1.7 micrometer region, and starch in the 2.0-2.3 micrometer region. In the entire 1.0-2.3 micrometer region each of the major constituents contributes to the spectrum. Quantitative estimates will require using unmixing techniques on the residual spectra.

  16. Upper Limb Posture Estimation in Robotic and Virtual Reality-Based Rehabilitation

    PubMed Central

    Cortés, Camilo; Ardanza, Aitor; Molina-Rueda, F.; Cuesta-Gómez, A.; Ruiz, Oscar E.

    2014-01-01

    New motor rehabilitation therapies include virtual reality (VR) and robotic technologies. In limb rehabilitation, limb posture is required to (1) provide a realistic representation of the limb in VR games and (2) assess the patient's improvement. When exoskeleton devices are used in the therapy, the measurements of their joint angles cannot be directly used to represent the posture of the patient's limb, since the human and exoskeleton kinematic models differ. In response to this shortcoming, we propose a method to estimate the posture of the human limb attached to the exoskeleton. We use the exoskeleton joint angle measurements and the constraints of the exoskeleton on the limb to estimate the human limb joint angles. This paper presents (a) the mathematical formulation and solution to the problem, (b) the implementation of the proposed solution on a commercial exoskeleton system for upper limb rehabilitation, (c) its integration into a rehabilitation VR game platform, and (d) the quantitative assessment of the method during elbow and wrist analytic training. Results show that this method properly estimates the limb posture to (i) animate avatars that represent the patient in VR games and (ii) obtain kinematic data for patient assessment during elbow and wrist analytic rehabilitation. PMID:25110698

  17. Bridging the Global Precipitation and Soil Moisture Active Passive Missions: Variability of Microwave Surface Emissivity from In situ and Remote Sensing Perspectives

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Kirstetter, P.; Hong, Y.; Turk, J.

    2016-12-01

    The overland precipitation retrievals from satellite passive microwave (PMW) sensors such as the Global Precipitation Mission (GPM) microwave imager (GMI) are impacted by the land surface emissivity. The estimation of PMW emissivity faces challenges because it is highly variable under the influence of surface properties such as soil moisture, surface roughness and vegetation. This study proposes an improved quantitative understanding of the relationship between the emissivity and surface parameters. Surface parameter information is obtained through (i) in-situ measurements from the International Soil Moisture Network and (ii) satellite measurements from the Soil Moisture Active and Passive mission (SMAP), which provides global-scale soil moisture estimates. The variation of emissivity is quantified with soil moisture, surface temperature and vegetation at various frequencies/polarizations and over different types of land surfaces to shed light on the processes governing the emission of the land. This analysis is used to estimate the emissivity under rainy conditions. The framework built with in-situ measurements serves as a benchmark for satellite-based analyses, which paves the way toward global-scale emissivity estimates using SMAP.

  18. Application of counterpropagation artificial neural network for modelling properties of fish antibiotics.

    PubMed

    Maran, E; Novic, M; Barbieri, P; Zupan, J

    2004-01-01

    The present study focuses on fish antibiotics, an important group of pharmaceuticals used in fish farming to treat infections; until recently, most of them have been released into the environment with very little attention. Information about the environmental behaviour and the environmental fate of medicinal substances is difficult or expensive to obtain. The experimental information in terms of properties is reported when available; in other cases, it is estimated by standard tools such as those provided by the United States Environmental Protection Agency EPISuite software and by custom quantitative structure-activity relationship (QSAR) applications. In this study, a QSAR screening of 15 fish antibiotics and 132 xenobiotic molecules was performed with two aims: (i) to develop a model for the estimation of the octanol-water partition coefficient (logP) and (ii) to estimate the relative binding affinity to the oestrogen receptor (log RBA) using a model constructed on the activities of 132 xenobiotic compounds. The custom models are based on constitutional, topological, electrostatic and quantum chemical descriptors computed by the CODESSA software. Kohonen neural networks (self-organising maps) were used to study similarity between the considered chemicals, while counter-propagation artificial neural networks were used to estimate the properties.

  19. Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast

    PubMed Central

    Pang, Wei; Coghill, George M.

    2015-01-01

    In this paper we demonstrate how Morven, a computational framework which can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First, the Morven framework itself is briefly introduced in terms of the model formalism employed and the output format. We then built a qualitative model for the biophysical process of osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on an existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively. The obtained simulation results are presented with an analysis. Finally, the future development of the Morven framework for modelling dynamic biological systems is discussed. PMID:25864377

  20. Estimation method for serial dilution experiments.

    PubMed

    Ben-David, Avishai; Davidson, Charles E

    2014-12-01

    Titration of microorganisms in infectious or environmental samples is a cornerstone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into a colony. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without the need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area that both contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1 log10 from data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10⁴ and 10¹² colony-forming units, dilution ratios from 2 to 100, and plate size to colony size ratios between 6.25 and 200. Published by Elsevier B.V.
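
A minimal sketch of the dilution bookkeeping behind such a method: predict the expected colony count on each plate of a serial dilution, flag plates in a countable window, and invert the chosen plate's count back to a stock concentration. The 30-300 window and all inputs are assumed illustrative values, not the paper's selection rule.

```python
# Expected colony counts across a serial dilution and back-calculation of the stock.
import numpy as np

c0 = 3.0e8            # stock concentration, CFU/mL (assumed)
volume = 0.1          # plated volume, mL
dilution_factor = 10
n_plates = 8

expected = np.array([c0 * volume / dilution_factor**k for k in range(1, n_plates + 1)])
countable = (expected >= 30) & (expected <= 300)      # classic countable range
for k, (e, ok) in enumerate(zip(expected, countable), start=1):
    print(f"plate {k} (10^-{k}): expected {e:.3g} colonies{' <-- use' if ok else ''}")

# The concentration estimate then inverts the chosen plate's count:
k_best = int(np.argmax(countable)) + 1
count = expected[k_best - 1]                  # in practice, the observed colony count
print("estimated stock:", count * dilution_factor**k_best / volume, "CFU/mL")
```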

  1. Novel methods to estimate the enantiomeric ratio and the kinetic parameters of enantiospecific enzymatic reactions.

    PubMed

    Machado, G D.C.; Paiva, L M.C.; Pinto, G F.; Oestreicher, E G.

    2001-03-08

    The enantiomeric ratio (E) of enzymes acting as specific catalysts in the resolution of enantiomers is an important parameter in the quantitative description of these chiral resolution processes. In the present work, two novel methods, hereby called Methods I and II, for estimating E and the kinetic parameters Km and Vm of enantiomers were developed. These methods are based upon initial rate (v) measurements using different concentrations of enantiomeric mixtures (C) with several molar fractions of the substrate (x). Both methods were tested using simulated "experimental data" and actual experimental data. Method I is easier to use than Method II but requires that one of the enantiomers be available in pure form. Method II, besides not requiring the enantiomers in pure form, showed better results, as indicated by the magnitude of the standard errors of the estimates. The theoretical predictions were experimentally confirmed by using the oxidation of 2-butanol and 2-pentanol catalyzed by Thermoanaerobium brockii alcohol dehydrogenase as reaction models. The parameters E, Km and Vm were estimated by Methods I and II with precision, and the estimates were not significantly different from those obtained by direct estimation of E from the kinetic parameters of each enantiomer available in pure form.
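
A sketch of the estimation idea (not the paper's exact Method I or II algebra): initial rates of an enantiomer mixture follow competitive Michaelis-Menten kinetics, so fitting v(C, x) recovers Km and Vm for both enantiomers, and E is the ratio of specificity constants. All parameter values are assumptions.

```python
# Fit competitive two-substrate Michaelis-Menten kinetics and derive E (toy data).
import numpy as np
from scipy.optimize import curve_fit

def rate(X, vr, km_r, vs, km_s):
    C, x = X
    r, s = C * x, C * (1 - x)              # concentrations of the two enantiomers
    return (vr * r / km_r + vs * s / km_s) / (1 + r / km_r + s / km_s)

rng = np.random.default_rng(4)
C = np.repeat([0.5, 1, 2, 5, 10], 5)       # total mixture concentrations
x = np.tile([0.1, 0.3, 0.5, 0.7, 0.9], 5)  # molar fractions of one enantiomer
true = (10.0, 1.0, 2.0, 4.0)               # assumed Vm_R, Km_R, Vm_S, Km_S
v = rate((C, x), *true) * (1 + rng.normal(0, 0.02, C.size))

popt, _ = curve_fit(rate, (C, x), v, p0=(5, 2, 5, 2))
vr, kr, vs, ks = popt
E = (vr / kr) / (vs / ks)                  # ratio of specificity constants
print(f"Km_R={kr:.2f}, Vm_R={vr:.2f}, Km_S={ks:.2f}, Vm_S={vs:.2f}, E={E:.1f}")
```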

  2. A robust approach for ECG-based analysis of cardiopulmonary coupling.

    PubMed

    Zheng, Jiewen; Wang, Weidong; Zhang, Zhengbo; Wu, Dalei; Wu, Hao; Peng, Chung-Kang

    2016-07-01

    Deriving a respiratory signal from a surface electrocardiogram (ECG) measurement has the advantage of simultaneously monitoring cardiac and respiratory activities. ECG-based cardiopulmonary coupling (CPC) analysis, estimated from heart period variability and ECG-derived respiration (EDR), shows promising applications in the medical field. The aim of this paper is to provide a quantitative analysis of ECG-based CPC and to further improve its performance. Two conventional strategies were tested to obtain the EDR signal: R-S wave amplitude and area of the QRS complex. An adaptive filter was utilized to extract the common component of the inter-beat interval (RRI) and EDR, generating enhanced versions of the EDR signal. CPC is assessed by probing the nonlinear phase interactions between the RRI series and the respiratory signal. Respiratory oscillations present in both the RRI series and respiratory signals were extracted by ensemble empirical mode decomposition for coupling analysis via a phase synchronization index. The results demonstrated that CPC estimated from conventional EDR series exhibits constant and proportional biases, while that estimated from enhanced EDR series is more reliable. Adaptive filtering can improve the accuracy of the ECG-based CPC estimation significantly and achieve robust CPC analysis. The improved ECG-based CPC estimation may provide additional prognostic information for both sleep medicine and autonomic function analysis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
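
A minimal sketch of a phase synchronization index (PSI) between cardiac and respiratory oscillations, using the Hilbert transform for instantaneous phase. The signals are synthetic assumptions; the paper additionally isolates the respiratory oscillations with ensemble empirical mode decomposition first.

```python
# Phase synchronization index between two oscillatory series (synthetic signals).
import numpy as np
from scipy.signal import hilbert

fs = 4.0                                          # assumed 4 Hz resampled series
t = np.arange(0, 300, 1 / fs)                     # 5 minutes of data
resp = np.sin(2 * np.pi * 0.25 * t)               # respiration at 0.25 Hz
rri = 0.05 * np.sin(2 * np.pi * 0.25 * t + 0.7)   # respiratory component of RRI
rri += 0.01 * np.random.default_rng(5).normal(size=t.size)   # measurement noise

phase_resp = np.angle(hilbert(resp))
phase_rri = np.angle(hilbert(rri - rri.mean()))

# 1:1 phase locking: PSI = |<exp(i*(phi1 - phi2))>|, 1 means perfect coupling
psi = np.abs(np.mean(np.exp(1j * (phase_resp - phase_rri))))
print(f"phase synchronization index = {psi:.3f}")
```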

  3. Plant leaf chlorophyll content retrieval based on a field imaging spectroscopy system.

    PubMed

    Liu, Bo; Yue, Yue-Min; Li, Ru; Shen, Wen-Jing; Wang, Ke-Lin

    2014-10-23

    A field imaging spectrometer system (FISS; 380-870 nm and 344 bands) was designed for agriculture applications. In this study, FISS was used to gather spectral information from soybean leaves. The chlorophyll content was retrieved using multiple linear regression (MLR), partial least squares (PLS) regression and support vector machine (SVM) regression. Our objective was to verify the performance of FISS in quantitative spectral analysis through the estimation of chlorophyll content and to determine a proper quantitative spectral analysis method for processing FISS data. The results revealed that the derivative reflectance was a more sensitive indicator of chlorophyll content and could extract content information more efficiently than the spectral reflectance, an effect that was more pronounced for FISS data than for ASD (analytical spectral devices) data, reducing the corresponding RMSE (root mean squared error) by 3.3%-35.6%. Compared with the spectral features, the regression methods had smaller effects on the retrieval accuracy. A multivariate linear model could be the ideal model to retrieve chlorophyll information with a small number of significant wavelengths used. The smallest RMSE of the chlorophyll content retrieved using FISS data was 0.201 mg/g, a relative reduction of more than 30% compared with the RMSE based on a non-imaging ASD spectrometer, which represents a high estimation accuracy compared with the mean chlorophyll content of the sampled leaves (4.05 mg/g). Our study indicates that FISS can obtain detailed spectral and spatial information of high quality. Its image-spectrum-in-one merit promotes the good performance of FISS in quantitative spectral analyses, and it can potentially be widely used in the agricultural sector.

  4. Plant Leaf Chlorophyll Content Retrieval Based on a Field Imaging Spectroscopy System

    PubMed Central

    Liu, Bo; Yue, Yue-Min; Li, Ru; Shen, Wen-Jing; Wang, Ke-Lin

    2014-01-01

    A field imaging spectrometer system (FISS; 380–870 nm and 344 bands) was designed for agriculture applications. In this study, FISS was used to gather spectral information from soybean leaves. The chlorophyll content was retrieved using multiple linear regression (MLR), partial least squares (PLS) regression and support vector machine (SVM) regression. Our objective was to verify the performance of FISS in quantitative spectral analysis through the estimation of chlorophyll content and to determine a proper quantitative spectral analysis method for processing FISS data. The results revealed that the derivative reflectance was a more sensitive indicator of chlorophyll content and could extract content information more efficiently than the spectral reflectance, an effect that was more pronounced for FISS data than for ASD (analytical spectral devices) data, reducing the corresponding RMSE (root mean squared error) by 3.3%–35.6%. Compared with the spectral features, the regression methods had smaller effects on the retrieval accuracy. A multivariate linear model could be the ideal model to retrieve chlorophyll information with a small number of significant wavelengths used. The smallest RMSE of the chlorophyll content retrieved using FISS data was 0.201 mg/g, a relative reduction of more than 30% compared with the RMSE based on a non-imaging ASD spectrometer, which represents a high estimation accuracy compared with the mean chlorophyll content of the sampled leaves (4.05 mg/g). Our study indicates that FISS can obtain detailed spectral and spatial information of high quality. Its image-spectrum-in-one merit promotes the good performance of FISS in quantitative spectral analyses, and it can potentially be widely used in the agricultural sector. PMID:25341439
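
A brief sketch of the regression step described in the two records above, using scikit-learn's PLS implementation (an assumed tooling choice) on simulated 344-band spectra with a chlorophyll-like target; the FISS preprocessing, such as derivative reflectance, is not reproduced.

```python
# PLS regression of a chlorophyll-like target on simulated leaf spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_leaves, n_bands = 120, 344
spectra = rng.random((n_leaves, n_bands))
weights = np.zeros(n_bands); weights[150:170] = 0.5        # assumed sensitive bands
chl = spectra @ weights + rng.normal(0, 0.1, n_leaves)     # mg/g, simulated

Xtr, Xte, ytr, yte = train_test_split(spectra, chl, random_state=0)
pls = PLSRegression(n_components=8).fit(Xtr, ytr)
pred = pls.predict(Xte).ravel()
rmse = np.sqrt(np.mean((pred - yte) ** 2))
print(f"PLS RMSE = {rmse:.3f} mg/g")
```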

  5. Optical coherence elastography (OCE) as a method for identifying benign and malignant prostate biopsies

    NASA Astrophysics Data System (ADS)

    Li, Chunhui; Guan, Guangying; Ling, Yuting; Lang, Stephen; Wang, Ruikang K.; Huang, Zhihong; Nabi, Ghulam

    2015-03-01

    Objectives. Prostate cancer is the most frequently diagnosed malignancy in men. Digital rectal examination (DRE), a well-known clinical tool based on the alteration of the mechanical properties of tissues due to cancer, has traditionally been used for screening prostate cancer. Essentially, DRE estimates the relative stiffness of cancerous and normal prostate tissue. Optical coherence elastography (OCE) is a new optical imaging technique capable of providing cross-sectional imaging of tissue microstructure as well as elastograms in vivo and in real time. In this preliminary study, OCE was applied to human prostate biopsies ex vivo, and the images acquired were compared with those obtained using standard histopathologic methods. Methods. 120 prostate biopsies were obtained by TRUS-guided needle biopsy procedures from 9 patients with clinically suspected cancer of the prostate. The biopsies were approximately 0.8 mm in diameter and 12 mm in length, and prepared in formalin solution. Quantitative assessment of biopsy samples using OCE was obtained in kilopascals (kPa) before histopathologic evaluation. The results obtained from OCE and standard histopathologic evaluation were compared to provide cross-validation. Sensitivity, specificity, and positive and negative predictive values were calculated for OCE (histopathology was the reference standard). Results. OCE could provide quantitative elasticity properties of prostate biopsies within benign prostate tissue, prostatic intraepithelial neoplasia, atypical hyperplasia and malignant prostate cancer. The data analysed showed that the sensitivity and specificity of OCE for PCa detection were 1 and 0.91, respectively. PCa had significantly higher stiffness values compared to benign tissues, with a trend of increasing stiffness with increasing malignancy. Conclusions. Using OCE, microscopic-resolution elastography is promising in the diagnosis of human prostatic diseases. Further studies using this technique to improve the detection and staging of malignant cancer of the prostate are ongoing.

  6. Quantitative maps of genetic interactions in yeast - comparative evaluation and integrative analysis.

    PubMed

    Lindén, Rolf O; Eronen, Ville-Pekka; Aittokallio, Tero

    2011-03-24

    High-throughput genetic screening approaches have enabled systematic means to study how interactions among gene mutations contribute to quantitative fitness phenotypes, with the aim of providing insights into the functional wiring diagrams of genetic interaction networks on a global scale. However, it is poorly known how well these quantitative interaction measurements agree across the screening approaches, which hinders their integrated use toward improving the coverage and quality of the genetic interaction maps in yeast and other organisms. Using large-scale data matrices from epistatic miniarray profiling (E-MAP), genetic interaction mapping (GIM), and synthetic genetic array (SGA) approaches, we carried out here a systematic comparative evaluation among these quantitative maps of genetic interactions in yeast. The relatively low association between the original interaction measurements or their customized scores could be improved using a matrix-based modelling framework, which enables the use of single- and double-mutant fitness estimates and measurements, respectively, when scoring genetic interactions. Toward an integrative analysis, we show how the detections from the different screening approaches can be combined to suggest novel positive and negative interactions which are complementary to those obtained using any single screening approach alone. The matrix approximation procedure has been made available to support the design and analysis of the future screening studies. We have shown here that even if the correlation between the currently available quantitative genetic interaction maps in yeast is relatively low, their comparability can be improved by means of our computational matrix approximation procedure, which will enable integrative analysis and detection of a wider spectrum of genetic interactions using data from the complementary screening approaches.

  7. 76 FR 13018 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ... statistical surveys that yield quantitative results that can be generalized to the population of study. This... information will not be used for quantitative information collections that are designed to yield reliably... generic mechanisms that are designed to yield quantitative results. Total Burden Estimate for the...

  8. Hybrid quantitative MRI using chemical shift displacement and recovery-based simultaneous water and lipid imaging: A preliminary study.

    PubMed

    Ohno, Naoki; Miyati, Tosiaki; Suzuki, Shuto; Kan, Hirohito; Aoki, Toshitaka; Nakamura, Yoshitaka; Hiramatsu, Yuki; Kobayashi, Satoshi; Gabata, Toshifumi

    2018-07-01

    To suppress olefinic signals and enable simultaneous and quantitative estimation of multiple functional parameters associated with water and lipid, we investigated a modified method using chemical shift displacement and recovery-based separation of lipid tissue (SPLIT) involving acquisitions with different inversion times (TIs), echo times (TEs), and b-values. Single-shot diffusion echo-planar imaging (SSD-EPI) with multiple b-values (0-3000 s/mm²) was performed without fat suppression to separate water and lipid images using the chemical shift displacement of lipid signals in the phase-encoding direction. An inversion pulse (TI = 292 ms) was applied to SSD-EPI to remove olefinic signals. Consecutively, SSD-EPI (b = 0 s/mm²) was performed with TI = 0 ms and TE = 31.8 ms for T1 and T2 measurements, respectively. Under these conditions, transverse water and lipid images at the maximum diameter of the right calf were obtained in six healthy subjects. T1, T2, and the apparent diffusion coefficients (ADC) were then calculated for the tibialis anterior (TA), gastrocnemius (GM), and soleus (SL) muscles, tibialis bone marrow (TB), and subcutaneous fat (SF). Perfusion-related (D*) and restricted diffusion coefficients (D) were calculated for the muscles. Lastly, the lipid fractions (LF) of the muscles were determined after T1 and T2 corrections. The modified SPLIT method facilitated sufficient separation of water and lipid images of the calf, and the inversion pulse with TI of 292 ms effectively suppressed olefinic signals. All quantitative parameters obtained with the modified SPLIT method were found to be in general agreement with those previously reported in the literature. The modified SPLIT technique enabled sufficient suppression of olefinic signals and simultaneous acquisition of quantitative parameters including diffusion, perfusion, T1 and T2 relaxation times, and LF. Copyright © 2018. Published by Elsevier Inc.
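
One quantitative step named above can be shown compactly: estimating the apparent diffusion coefficient (ADC) from multi-b-value signals with the monoexponential model S(b) = S0·exp(-b·ADC), fitted as a line in log space. The signal values below are simulated assumptions.

```python
# Monoexponential ADC fit from multi-b-value diffusion signals (simulated).
import numpy as np

b = np.array([0, 200, 500, 1000, 2000, 3000], dtype=float)   # s/mm^2
adc_true = 1.5e-3                                            # mm^2/s, muscle-like
noise = np.random.default_rng(10).normal(0, 0.01, b.size)
s = 1000 * np.exp(-b * adc_true) * (1 + noise)               # simulated signals

# log(S) = log(S0) - b * ADC, so a line fit in log space gives -ADC as the slope
slope, log_s0 = np.polyfit(b, np.log(s), 1)
print(f"ADC = {-slope:.2e} mm^2/s (true {adc_true:.2e})")
```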

  9. Quantitative analysis of low-density SNP data for parentage assignment and estimation of family contributions to pooled samples.

    PubMed

    Henshall, John M; Dierens, Leanne; Sellars, Melony J

    2014-09-02

    While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are sufficiently accurate to provide useful information for a breeding program. Treating genotypes as quantitative values is an alternative to perturbing genotypes using an assumed error distribution, but can produce very different results. An understanding of the distribution of the error is required for SNP genotyping platforms.
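
A sketch of the pooled-sample idea above: the pool's allele frequency at each SNP is modeled as a mixture of the families' expected frequencies, and the contribution weights are recovered by non-negative least squares. The mixture model and all numbers are assumptions, not the authors' maximum likelihood formulation.

```python
# Family contributions to a DNA pool from allele frequencies, via NNLS (toy data).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
n_snps, n_fams = 63, 5
fam_freq = rng.uniform(0.05, 0.95, (n_snps, n_fams))   # expected freq per family

w_true = np.array([0.4, 0.25, 0.2, 0.1, 0.05])         # assumed true contributions
pool_freq = fam_freq @ w_true + rng.normal(0, 0.01, n_snps)  # measured with noise

w_hat, _ = nnls(fam_freq, pool_freq)
w_hat /= w_hat.sum()                                   # contributions sum to 1
print("true:", w_true)
print("est: ", np.round(w_hat, 3))
```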

  10. Structural study of gold clusters.

    PubMed

    Xiao, Li; Tollberg, Bethany; Hu, Xiankui; Wang, Lichang

    2006-03-21

    Density functional theory (DFT) calculations were carried out to study gold clusters of up to 55 atoms. Between the linear and zigzag monoatomic Au nanowires, the zigzag nanowires were found to be more stable. Furthermore, linear Au nanowires of up to 2 nm are formed by slightly stretched Au dimers. These findings suggest that a substantial Peierls distortion exists in those structures. Planar geometries of Au clusters were found to be the global minima up to a cluster size of 13 atoms. A quantitative correlation is provided between various properties of Au clusters and their structure and size. The relative stability of selected clusters was also estimated by the Sutton-Chen potential, and the result disagrees with that obtained from the DFT calculations. This suggests that a modification of the Sutton-Chen potential, such as obtaining new parameters, has to be made in order to use it to search for the global minima of larger Au clusters.

  11. Interactive degraded document enhancement and ground truth generation

    NASA Astrophysics Data System (ADS)

    Bal, G.; Agam, G.; Frieder, O.; Frieder, G.

    2008-01-01

    Degraded documents are frequently encountered in various situations. Examples of degraded document collections include historical document depositories, documents obtained in legal and security investigations, and legal and medical archives. Degraded document images are hard to read and hard to analyze using computerized techniques. There is hence a need for systems that are capable of enhancing such images. We describe a language-independent semi-automated system for enhancing degraded document images that is capable of exploiting inter- and intra-document coherence. The system is capable of processing document images with high levels of degradation and can be used for ground truthing of degraded document images. Ground truthing of degraded document images is extremely important in several respects: it enables quantitative performance measurements of enhancement systems and facilitates model estimation that can be used to improve performance. Performance evaluation is provided using the historical Frieder diaries collection.

  12. A fluorometric paper-based sensor array for the discrimination of heavy-metal ions.

    PubMed

    Feng, Liang; Li, Hui; Niu, Li-Ya; Guan, Ying-Shi; Duan, Chun-Feng; Guan, Ya-Feng; Tung, Chen-Ho; Yang, Qing-Zheng

    2013-04-15

    A fluorometric paper-based sensor array has been developed for the sensitive and convenient determination of seven heavy-metal ions at their wastewater discharge standard concentrations. By combining nine cross-reactive BODIPY fluorescent indicators with array-based pattern recognition, we obtained the capability to discriminate seven different heavy-metal ions at their wastewater discharge standard concentrations. After the immobilization of indicators and the enrichment of analytes, identification of the heavy-metal ions was readily achieved using a standard chemometric approach. Clear differentiation among heavy-metal ions as a function of concentration was also achieved, even down to 10⁻⁷ M. A semi-quantitative estimation of the heavy-metal ion concentration was obtained by comparing color changes with a set of known concentrations. The sensor array was tentatively investigated in spiked tap water and sea water, and showed possible feasibility for real sample testing. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Critical asset and portfolio risk analysis: an all-hazards framework.

    PubMed

    Ayyub, Bilal M; McGill, William L; Kaminskiy, Mark

    2007-08-01

    This article develops a quantitative all-hazards framework for critical asset and portfolio risk analysis (CAPRA) that considers both natural and human-caused hazards. Following a discussion on the nature of security threats, the need for actionable risk assessments, and the distinction between asset and portfolio-level analysis, a general formula for all-hazards risk analysis is obtained that resembles the traditional model based on the notional product of consequence, vulnerability, and threat, though with clear meanings assigned to each parameter. Furthermore, a simple portfolio consequence model is presented that yields first-order estimates of interdependency effects following a successful attack on an asset. Moreover, depending on the needs of the decisions being made and available analytical resources, values for the parameters in this model can be obtained at a high level or through detailed systems analysis. Several illustrative examples of the CAPRA methodology are provided.
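
A toy sketch of the notional risk expression mentioned above, risk per hazard ≈ consequence × vulnerability × threat likelihood, summed over each asset's hazards. All numbers are invented for illustration.

```python
# Notional all-hazards risk roll-up over a small asset portfolio (invented values).
assets = {
    "substation":  {"storm": (8.0, 0.30, 0.20), "intrusion": (6.0, 0.10, 0.05)},
    "data center": {"flood": (9.0, 0.15, 0.10)},
}  # hazard -> (consequence, vulnerability, annual threat likelihood)

for asset, hazards in assets.items():
    risk = sum(c * v * t for c, v, t in hazards.values())
    print(f"{asset}: expected annual risk = {risk:.3f}")
```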

  14. Computational methods in the development of a knowledge-based system for the prediction of solid catalyst performance.

    PubMed

    Procelewska, Joanna; Galilea, Javier Llamas; Clerc, Frederic; Farrusseng, David; Schüth, Ferdi

    2007-01-01

    The objective of this work is the construction of a correlation between the characteristics of heterogeneous catalysts, encoded in a descriptor vector, and their experimentally measured performances in the propene oxidation reaction. In this paper the key issue in the modeling process, namely the selection of adequate input variables, is explored. Several data-driven feature selection strategies were applied in order to obtain an estimate of the differences in variance and information content of various attributes, and furthermore to compare their relative importance. Quantitative property-activity relationship techniques using probabilistic neural networks were used for the creation of various semi-empirical models. Finally, a robust classification model was obtained that assigns solid compounds, described by selected attributes as input, to an appropriate performance class in the model reaction. It became evident that mathematical support for the primary attribute set proposed by chemists can be highly desirable.

  15. Cross-domain transfer of quantitative discriminations: is it all a matter of proportion?

    PubMed

    Balci, Fuat; Gallistel, Charles R

    2006-08-01

    Meck and Church (1983) estimated a 5:1 scale factor relating the mental magnitudes representing number to the mental magnitudes representing duration. We repeated their experiment with human subjects. We obtained transfer regardless of the objective scaling between the ranges; a 5:1 scaling for number versus duration (measured in seconds) was not necessary. We obtained transfer even when the proportions between the endpoints of the number range were different. We conclude that, at least in human subjects, transfer from a discrimination based on continuous quantity (duration) to a discrimination based on discrete quantity (number) is mediated by the cross-domain comparability of within-domain proportions. The results of our second and third experiments also suggest that the subjects compare a probe with a criterion determined by the range of stimuli tested rather than by trial-specific referents, in accordance with the pseudologistic model of Killeen, Fetterman, and Bizo (1997).

  16. Quantitative estimation of pesticide-likeness for agrochemical discovery.

    PubMed

    Avram, Sorin; Funar-Timofei, Simona; Borota, Ana; Chennamaneni, Sridhar Rao; Manchala, Anil Kumar; Muresan, Sorel

    2014-12-01

    The design of chemical libraries, an early step in agrochemical discovery programs, is frequently addressed by means of qualitative physicochemical and/or topological rule-based methods. The aim of this study is to develop quantitative estimates of herbicide- (QEH), insecticide- (QEI), fungicide- (QEF), and, finally, pesticide-likeness (QEP). In the assessment of these definitions, we relied on the concept of desirability functions. We found a simple function, shared by the three classes of pesticides, parameterized individually for six easy-to-compute, independent and interpretable molecular properties: molecular weight, logP, number of hydrogen bond acceptors, number of hydrogen bond donors, number of rotatable bonds and number of aromatic rings. Subsequently, we describe the scoring of each pesticide class by the corresponding quantitative estimate. In a comparative study, we assessed the performance of the scoring functions using extensive datasets of patented pesticides. The hereby-established quantitative assessment has the ability to rank compounds whether or not they fail well-established pesticide-likeness rules, and offers an efficient way to prioritize (class-specific) pesticides. These findings are valuable for the efficient estimation of pesticide-likeness of vast chemical libraries in the field of agrochemical discovery. Graphical Abstract: Quantitative models for pesticide-likeness were derived using the concept of desirability functions parameterized for six easy-to-compute, independent and interpretable molecular properties: molecular weight, logP, number of hydrogen bond acceptors, number of hydrogen bond donors, number of rotatable bonds and number of aromatic rings.
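
A sketch of a desirability-based likeness score in the spirit described above (not the paper's fitted parameters): each of the six properties gets a desirability in [0, 1] and the score is their geometric mean. The Gaussian form and target values are assumptions.

```python
# Desirability-based likeness score over six molecular properties (assumed targets).
import math

# (target, tolerance) per property: MW, logP, HBA, HBD, rotatable bonds, aromatic rings
PARAMS = {"mw": (330.0, 120.0), "logp": (3.0, 1.5), "hba": (3.0, 2.0),
          "hbd": (1.0, 1.0), "rotb": (4.0, 3.0), "arom": (2.0, 1.0)}

def desirability(value, target, tol):
    return math.exp(-0.5 * ((value - target) / tol) ** 2)   # Gaussian desirability

def likeness(props):
    ds = [desirability(props[k], *PARAMS[k]) for k in PARAMS]
    # Geometric mean, guarded against zero desirabilities
    return math.exp(sum(math.log(max(d, 1e-12)) for d in ds) / len(ds))

print(round(likeness({"mw": 310, "logp": 3.2, "hba": 3, "hbd": 1,
                      "rotb": 5, "arom": 2}), 3))
```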

  17. Pulse-echo sound speed estimation using second order speckle statistics

    NASA Astrophysics Data System (ADS)

    Rosado-Mendez, Ivan M.; Nam, Kibo; Madsen, Ernest L.; Hall, Timothy J.; Zagzebski, James A.

    2012-10-01

    This work presents a phantom-based evaluation of a method for estimating soft-tissue speeds of sound using pulse-echo data. The method is based on the improvement of image sharpness as the sound speed value assumed during beamforming is systematically matched to the tissue sound speed. The novelty of this work is the quantitative assessment of image sharpness by measuring the resolution cell size from the autocovariance matrix of echo signals from a random distribution of scatterers, thus eliminating the need for strong reflectors. Envelope data were obtained from a fatty-tissue mimicking (FTM) phantom (sound speed = 1452 m/s) and a nonfatty-tissue mimicking (NFTM) phantom (1544 m/s) scanned with a linear array transducer on a clinical ultrasound system. Dependence on pulse characteristics was tested by varying the pulse frequency and amplitude. On average, sound speed estimation errors were -0.7% for the FTM phantom and -1.1% for the NFTM phantom. In general, no significant difference was found among errors from different pulse frequencies and amplitudes. The method is currently being optimized for the differentiation of diffuse liver diseases.

  18. A bayesian approach for determining velocity and uncertainty estimates from seismic cone penetrometer testing or vertical seismic profiling data

    USGS Publications Warehouse

    Pidlisecky, Adam; Haines, S.S.

    2011-01-01

    Conventional processing methods for seismic cone penetrometer data present several shortcomings, most notably the absence of a robust velocity model uncertainty estimate. We propose a new seismic cone penetrometer testing (SCPT) data-processing approach that employs Bayesian methods to map measured data errors into quantitative estimates of model uncertainty. We first calculate travel-time differences for all permutations of seismic trace pairs. That is, we cross-correlate each trace at each measurement location with every trace at every other measurement location to determine travel-time differences that are not biased by the choice of any particular reference trace and to thoroughly characterize data error. We calculate a forward operator that accounts for the different ray paths for each measurement location, including refraction at layer boundaries. We then use a Bayesian inversion scheme to obtain the most likely slowness (the reciprocal of velocity) and a distribution of probable slowness values for each model layer. The result is a velocity model that is based on correct ray paths, with uncertainty bounds that are based on the data error. © NRC Research Press 2011.
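
    A sketch of the two computational steps, under simplifying assumptions (Gaussian errors, a given forward operator G): travel-time differences for all trace pairs via cross-correlation, followed by a linear-Gaussian Bayesian inversion of d = Gs for layer slownesses. Constructing G from the survey geometry (ray paths, refraction) is not shown.

    ```python
    import numpy as np
    from itertools import combinations

    def pairwise_dt(traces, fs):
        """Lag of the cross-correlation peak for every pair of traces (seconds)."""
        pairs, dts = [], []
        for i, j in combinations(range(len(traces)), 2):
            xc = np.correlate(traces[j], traces[i], mode="full")
            dts.append((np.argmax(xc) - (len(traces[i]) - 1)) / fs)
            pairs.append((i, j))
        return pairs, np.array(dts)

    def bayes_slowness(G, d, sigma_d, prior_mean, prior_cov):
        """Posterior over layer slownesses for the linear model d = G s + noise."""
        Cd_inv = np.eye(len(d)) / sigma_d**2
        Cp_inv = np.linalg.inv(prior_cov)
        post_cov = np.linalg.inv(G.T @ Cd_inv @ G + Cp_inv)
        post_mean = post_cov @ (G.T @ Cd_inv @ d + Cp_inv @ prior_mean)
        return post_mean, post_cov  # mean slowness per layer and its uncertainty
    ```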

  19. Probabilistic Mass Growth Uncertainties

    NASA Technical Reports Server (NTRS)

    Plumer, Eric; Elliott, Darren

    2013-01-01

    Mass has been widely used as a variable input parameter for Cost Estimating Relationships (CER) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, hence adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of an ongoing NASA effort to publish mass growth datasheets for adjusting single-point Technical Baseline Estimates (TBE) of masses of space instruments as well as spacecraft, for both earth orbiting and deep space missions at various stages of a project's lifecycle. This paper also discusses the long-term strategy of NASA Headquarters in publishing similar results, using a variety of cost-driving metrics, on an annual basis. This paper provides quantitative results that show decreasing mass growth uncertainties as mass estimate maturity increases. The analysis is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.

  20. Leaf-on canopy closure in broadleaf deciduous forests predicted during winter

    USGS Publications Warehouse

    Twedt, Daniel J.; Ayala, Andrea J.; Shickel, Madeline R.

    2015-01-01

    Forest canopy influences light transmittance, which in turn affects tree regeneration and survival, thereby having an impact on forest composition and habitat conditions for wildlife. Because leaf area is the primary impediment to light penetration, quantitative estimates of canopy closure are normally made during summer. Studies of forest structure and wildlife habitat that occur during winter, when deciduous trees have shed their leaves, may inaccurately estimate canopy closure. We estimated percent canopy closure during both summer (leaf-on) and winter (leaf-off) in broadleaf deciduous forests in Mississippi and Louisiana using gap light analysis of hemispherical photographs that were obtained during repeat visits to the same locations within bottomland and mesic upland hardwood forests and hardwood plantation forests. We used mixed-model linear regression to predict leaf-on canopy closure from measurements of leaf-off canopy closure, basal area, stem density, and tree height. Competing predictive models all included leaf-off canopy closure (relative importance = 0.93), whereas basal area and stem density, more traditional predictors of canopy closure, had relative model importance of ≤ 0.51.
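
    A minimal sketch of the prediction step with a random intercept per stand, assuming statsmodels; the data file and column names (leaf_on, leaf_off, basal_area, stem_density, height, stand) are hypothetical placeholders for the study's field measurements.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("canopy_plots.csv")  # hypothetical file of repeat-visit plots
    # Fixed effects: winter (leaf-off) closure plus stand-structure covariates;
    # a random intercept per forest stand accounts for grouped plots.
    model = smf.mixedlm("leaf_on ~ leaf_off + basal_area + stem_density + height",
                        data=df, groups=df["stand"])
    fit = model.fit()
    print(fit.summary())
    df["leaf_on_pred"] = fit.predict(df)  # leaf-on closure predicted from winter data
    ```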

  1. Information-Driven Active Audio-Visual Source Localization

    PubMed Central

    Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph

    2015-01-01

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619
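
    The two core ingredients can be sketched as follows under simplifying assumptions (2-D world, bearing-only Gaussian measurement model): a particle-filter update for the source position, and an expected-entropy score for choosing the robot's next viewpoint. This illustrates the general scheme, not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pf_update(particles, weights, robot_xy, bearing_meas, sigma=0.2):
        """Reweight particles by the likelihood of the measured source bearing."""
        pred = np.arctan2(particles[:, 1] - robot_xy[1], particles[:, 0] - robot_xy[0])
        err = np.angle(np.exp(1j * (pred - bearing_meas)))  # wrapped angular error
        weights = weights * np.exp(-0.5 * (err / sigma) ** 2)
        return weights / weights.sum()

    def entropy(weights, particles, bins=20):
        """Approximate position entropy from the weighted particle histogram."""
        h, _, _ = np.histogram2d(particles[:, 0], particles[:, 1],
                                 bins=bins, weights=weights)
        p = h[h > 0] / h.sum()
        return -np.sum(p * np.log(p))

    def expected_entropy_after(move, particles, weights, sigma=0.2, n_sim=10):
        """Score a candidate robot position by simulating measurements from it."""
        out = 0.0
        for _ in range(n_sim):
            src = particles[rng.choice(len(particles), p=weights)]
            z = np.arctan2(src[1] - move[1], src[0] - move[0]) + rng.normal(0, sigma)
            out += entropy(pf_update(particles, weights, move, z, sigma), particles)
        return out / n_sim  # lower expected entropy = more informative action
    ```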

  2. Pulsed photoacoustic flow imaging with a handheld system

    NASA Astrophysics Data System (ADS)

    van den Berg, Pim J.; Daoudi, Khalid; Steenbergen, Wiendelt

    2016-02-01

    Flow imaging is an important technique in a range of disease areas, but estimating low flow speeds, especially near the walls of blood vessels, remains challenging. Pulsed photoacoustic flow imaging can be an alternative, since there is little signal contamination from background tissue with photoacoustic imaging. We propose flow imaging using a clinical photoacoustic system that is both handheld and portable. The system integrates a linear array with 7.5 MHz central frequency in combination with a high-repetition-rate diode laser to allow high-speed photoacoustic imaging, ideal for this application. This work shows the flow imaging performance of the system in vitro using microparticles. Both two-dimensional (2-D) flow images and quantitative flow velocities from 12 to 75 mm/s were obtained. In a transparent bulk medium, flow estimation showed standard errors of ~7% of the estimated speed; in the presence of tissue-realistic optical scattering, the error increased to 40% due to limited signal-to-noise ratio. In the future, photoacoustic flow imaging can potentially be performed in vivo using fluorophore-filled vesicles or with an improved setup on whole blood.
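
    One generic way to obtain such flow estimates is to track the inter-frame shift of the particle pattern by cross-correlation and scale by the frame rate; the sketch below illustrates that scheme, not the paper's exact estimator.

    ```python
    import numpy as np

    def frame_shift(f0, f1):
        """Integer-pixel shift between two frames via 2-D FFT cross-correlation."""
        xc = np.fft.ifft2(np.fft.fft2(f1) * np.conj(np.fft.fft2(f0)))
        peak = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
        shift = np.array(peak, dtype=float)
        for ax, n in enumerate(f0.shape):
            if shift[ax] > n / 2:   # unwrap negative (circular) shifts
                shift[ax] -= n
        return shift

    def flow_speed(frames, pixel_mm, frame_rate_hz):
        """Mean speed (mm/s) from consecutive-frame displacements."""
        shifts = [frame_shift(frames[k], frames[k + 1]) for k in range(len(frames) - 1)]
        disp_mm = np.linalg.norm(np.array(shifts), axis=1) * pixel_mm
        return float(np.mean(disp_mm) * frame_rate_hz)
    ```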

  3. Average intragranular misorientation trends in polycrystalline materials predicted by a viscoplastic self-consistent approach

    DOE PAGES

    Lebensohn, Ricardo A.; Zecevic, Miroslav; Knezevic, Marko; ...

    2015-12-15

    This work presents estimates of average intragranular fluctuations of lattice rotation rates in polycrystalline materials, obtained by means of the viscoplastic self-consistent (VPSC) model. These fluctuations give a tensorial measure of the trend of misorientation developing inside each single-crystal grain representing a polycrystalline aggregate. We first report details of the algorithm implemented in the VPSC code to estimate these fluctuations, which are then validated by comparison with corresponding full-field calculations. Next, we present predictions of average intragranular fluctuations of lattice rotation rates for cubic aggregates, which are rationalized by comparison with experimental evidence on annealing textures of fcc and bcc polycrystals deformed in tension and compression, respectively, as well as with measured intragranular misorientation distributions in a Cu polycrystal deformed in tension. The orientation-dependent and micromechanically based estimations of intragranular misorientations that can be derived from the present implementation are necessary to formulate sound sub-models for the prediction of quantitatively accurate deformation textures, grain fragmentation, and recrystallization textures using the VPSC approach.

  4. Quantitative methods for estimating the anisotropy of the strength properties and the phase composition of Mg-Al alloys

    NASA Astrophysics Data System (ADS)

    Betsofen, S. Ya.; Kolobov, Yu. R.; Volkova, E. F.; Bozhko, S. A.; Voskresenskaya, I. I.

    2015-04-01

    Quantitative methods have been developed to estimate the anisotropy of the strength properties and to determine the phase composition of Mg-Al alloys. The efficiency of the methods is confirmed for MA5 alloy subjected to severe plastic deformation. It is shown that the Taylor factors calculated for basal slip, averaged over all orientations of a polycrystalline aggregate with allowance for texture, can be used for a quantitative estimation of the contribution of the texture of semifinished magnesium alloy products to the anisotropy of their strength properties. A technique for determining the composition of the solid solution and the content of the intermetallic phase Al12Mg17 is developed, based on measuring the lattice parameters of the solid solution and the known dependence of these lattice parameters on composition.

  5. A simple approach to quantitative analysis using three-dimensional spectra based on selected Zernike moments.

    PubMed

    Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li

    2013-01-21

    A very simple approach to quantitative analysis is proposed based on digital image processing of three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As region-based shape features of a grayscale image, Zernike moments, which are inherently invariant, were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained, respectively. The correlation coefficients (R²) for training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of the established models. The analytical results suggest that the Zernike moments selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.
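
    A minimal sketch of the pipeline as described: treat each 3D HPLC-DAD spectrum as a grayscale image, extract Zernike moment magnitudes, and fit a linear model to concentration. Using mahotas for the moments is an assumption (any Zernike implementation would do), and the stepwise selection step is omitted for brevity.

    ```python
    import numpy as np
    import mahotas
    from sklearn.linear_model import LinearRegression

    def zernike_features(image, radius, degree=8):
        """Zernike moment magnitudes of a grayscale image (region-based shape)."""
        return mahotas.features.zernike_moments(image, radius, degree=degree)

    # images: list of 2-D arrays (retention time x wavelength absorbance maps);
    # y: known concentrations of the target compound in each mixed sample.
    def fit_quant_model(images, y, radius):
        X = np.array([zernike_features(im, radius) for im in images])
        model = LinearRegression().fit(X, y)
        print("training R^2:", model.score(X, y))
        return model
    ```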

  6. Quantitative Aging Pattern in Mouse Urine Vapor as Measured by Gas-Liquid Chromatography

    NASA Technical Reports Server (NTRS)

    Robinson, Arthur B.; Dirren, Henri; Sheets, Alan; Miquel, Jaime; Lundgren, Paul R.

    1975-01-01

    We have discovered a quantitative aging pattern in mouse urine vapor. The diagnostic power of the pattern has been found to be high. We hope that this pattern will eventually allow quantitative estimates of physiological age and some insight into the biochemistry of aging.

  7. Discordance between Prevalent Vertebral Fracture and Vertebral Strength Estimated by the Finite Element Method Based on Quantitative Computed Tomography in Patients with Type 2 Diabetes Mellitus

    PubMed Central

    2015-01-01

    Background Bone fragility is increased in patients with type 2 diabetes mellitus (T2DM), but a useful method to estimate bone fragility in T2DM patients is lacking because bone mineral density alone is not sufficient to assess the risk of fracture. This study investigated the association between prevalent vertebral fractures (VFs) and the vertebral strength index estimated by the quantitative computed tomography-based nonlinear finite element method (QCT-based nonlinear FEM) using multi-detector computed tomography (MDCT) for clinical practice use. Research Design and Methods A cross-sectional observational study was conducted on 54 postmenopausal women and 92 men over 50 years of age, all of whom had T2DM. The vertebral strength index was compared in patients with and without VFs confirmed by spinal radiographs. A standard FEM procedure was performed with the application of known parameters for the bone material properties obtained from nondiabetic subjects. Results A total of 20 women (37.0%) and 39 men (42.4%) with VFs were identified. The vertebral strength index was significantly higher in the men than in the women (P<0.01). Multiple regression analysis demonstrated that the vertebral strength index was significantly and positively correlated with the spinal bone mineral density (BMD) and inversely associated with age in both genders. There were no significant differences in the parameters, including the vertebral strength index, between patients with and without VFs. Logistic regression analysis adjusted for age, spine BMD, BMI, HbA1c, and duration of T2DM did not indicate a significant relationship between the vertebral strength index and the presence of VFs. Conclusion The vertebral strength index calculated by QCT-based nonlinear FEM using material property parameters obtained from nondiabetic subjects, whose risk of fracture is lower than that of T2DM patients, was not significantly associated with bone fragility in patients with T2DM. This discordance may indirectly suggest that patients with T2DM have deteriorated bone material compared with nondiabetic subjects, a potential cause of bone fragility in T2DM patients. PMID:26642210

  8. Rockfall induced seismic signals: case study in Montserrat, Catalonia

    NASA Astrophysics Data System (ADS)

    Vilajosana, I.; Suriñach, E.; Abellán, A.; Khazaradze, G.; Garcia, D.; Llosa, J.

    2008-08-01

    After a rockfall event, a typical post-event survey includes qualitative volume estimation, trajectory mapping and determination of departing zones. However, quantitative measurements are not usually made. Additional relevant quantitative information could be useful in determining the spatial occurrence of rockfall events and could help in quantifying their size. Seismic measurements are suitable for detection purposes since they are noninvasive and relatively inexpensive. Moreover, seismic techniques could provide important information on rockfall size and the location of impacts. On 14 February 2007 the Avalanche Group of the University of Barcelona obtained the seismic data generated by an artificially triggered rockfall event at the Montserrat massif (near Barcelona, Spain), carried out in order to purge a slope. Two 3-component seismic stations were deployed in the area about 200 m from the explosion point that triggered the rockfall. Seismic signals and video images were obtained simultaneously. The initial volume of the rockfall was estimated to be 75 m3 by laser scanner data analysis. After the explosion, dozens of boulders ranging from 10^-4 to 5 m3 in volume impacted on the ground at different locations. The blocks fell onto a terrace, 120 m below the release zone. The impact generated a small continuous mass movement composed of a mixture of rocks, sand and dust that ran down the slope and impacted on the road 60 m below. Time, time-frequency evolution and particle motion analysis of the seismic records and seismic energy estimation were performed. The results are as follows: (1) a rockfall event generates seismic signals with specific characteristics in the time domain; (2) the seismic signals generated by the mass movement show a time-frequency evolution different from that of other seismogenic sources (e.g. earthquakes, explosions or a single rock impact), a feature that could be used for detection purposes; (3) particle motion plot analysis shows that the procedure to locate the rock impact using two stations is feasible; (4) the feasibility and validity of seismic methods for the detection of rockfall events, their localization and size determination are confirmed.

  9. Quantitative Determination of Spring Water Quality Parameters via Electronic Tongue.

    PubMed

    Carbó, Noèlia; López Carrero, Javier; Garcia-Castillo, F Javier; Tormos, Isabel; Olivas, Estela; Folch, Elisa; Alcañiz Fillol, Miguel; Soto, Juan; Martínez-Máñez, Ramón; Martínez-Bisbal, M Carmen

    2017-12-25

    The use of a voltammetric electronic tongue for the quantitative analysis of quality parameters in spring water is proposed here. The voltammetric electronic tongue consisted of a set of four noble electrodes (iridium, rhodium, platinum, and gold) housed inside a stainless steel cylinder. These noble metals have a high durability and are undemanding in maintenance, features required for the development of future automated equipment. A pulse voltammetry study was conducted on 83 spring water samples to determine concentrations of nitrate (range: 6.9-115 mg/L), sulfate (32-472 mg/L), fluoride (0.08-0.26 mg/L), chloride (17-190 mg/L), and sodium (11-94 mg/L) as well as pH (7.3-7.8). These parameters were also determined by routine analytical methods in the spring water samples. A partial least squares (PLS) analysis was run to obtain a model to predict these parameters. Orthogonal signal correction (OSC) was applied in the preprocessing step. Calibration (67%) and validation (33%) sets were selected randomly. The electronic tongue showed good predictive power for the concentrations of nitrate, sulfate, chloride, and sodium as well as pH, and displayed a lower R² and slope in the validation set for fluoride. Nitrate and fluoride concentrations were estimated with errors lower than 15%, whereas chloride, sulfate, and sodium concentrations as well as pH were estimated with errors below 10%.
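
    The chemometric step can be sketched with scikit-learn's PLS implementation and a random 67/33 split as described; OSC preprocessing is omitted here (it has no standard scikit-learn implementation), and the input data files are hypothetical placeholders.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    # X: pulse-voltammetry currents (samples x features);
    # Y: reference values (nitrate, sulfate, fluoride, chloride, sodium, pH).
    X = np.load("voltammograms.npy")      # hypothetical file
    Y = np.load("lab_reference.npy")      # hypothetical file
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.33, random_state=0)

    pls = PLSRegression(n_components=8).fit(X_tr, Y_tr)
    print("validation R^2 per parameter:",
          r2_score(Y_te, pls.predict(X_te), multioutput="raw_values"))
    ```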

  10. Arterial Spin Labeling - Fast Imaging with Steady-State Free Precession (ASL-FISP): A Rapid and Quantitative Perfusion Technique for High Field MRI

    PubMed Central

    Gao, Ying; Goodnough, Candida L.; Erokwu, Bernadette O.; Farr, George W.; Darrah, Rebecca; Lu, Lan; Dell, Katherine M.; Yu, Xin; Flask, Chris A.

    2014-01-01

    Arterial Spin Labeling (ASL) is a valuable non-contrast perfusion MRI technique with numerous clinical applications. Many previous ASL MRI studies have utilized either Echo-Planar Imaging (EPI) or True Fast Imaging with Steady-State Free Precession (True FISP) readouts that are prone to off-resonance artifacts on high field MRI scanners. We have developed a rapid ASL-FISP MRI acquisition for high field preclinical MRI scanners providing perfusion-weighted images with little or no artifacts in less than 2 seconds. In this initial implementation, a FAIR (Flow-Sensitive Alternating Inversion Recovery) ASL preparation was combined with a rapid, centrically-encoded FISP readout. Validation studies on healthy C57/BL6 mice provided consistent estimation of in vivo mouse brain perfusion at 7 T and 9.4 T (249±38 ml/min/100g and 241±17 ml/min/100g, respectively). The utility of this method was further demonstrated in detecting significant perfusion deficits in a C57/BL6 mouse model of ischemic stroke. Reasonable kidney perfusion estimates were also obtained for a healthy C57/BL6 mouse exhibiting differential perfusion in the renal cortex and medulla. Overall, the ASL-FISP technique provides a rapid and quantitative in vivo assessment of tissue perfusion for high field MRI scanners with minimal image artifacts. PMID:24891124

  11. Quantitative Assessment of Cervical Vertebral Maturation Using Cone Beam Computed Tomography in Korean Girls

    PubMed Central

    Byun, Bo-Ram; Kim, Yong-Il; Maki, Koutaro; Son, Woo-Sung

    2015-01-01

    This study aimed to examine the correlation between skeletal maturation status and parameters from the odontoid process and body of the second cervical vertebra and the bodies of the third and fourth cervical vertebrae, and to build multiple regression models for estimating skeletal maturation status in Korean girls. Hand-wrist radiographs and cone beam computed tomography (CBCT) images were obtained from 74 Korean girls (6–18 years of age). CBCT-generated cervical vertebral maturation (CVM) was used to demarcate the odontoid process and the body of the second cervical vertebra, based on the dentocentral synchondrosis. Correlation coefficient analysis and multiple linear regression analysis were used for each parameter of the cervical vertebrae (P < 0.05). Forty-seven of 64 parameters from CBCT-generated CVM (independent variables) exhibited statistically significant correlations (P < 0.05). The multiple regression model with the greatest R² had six parameters (PH2/W2, UW2/W2, (OH+AH2)/LW2, UW3/LW3, D3, and H4/W4) as independent variables, with a variance inflation factor (VIF) of <2. CBCT-generated CVM was able to include parameters from the second cervical vertebral body and odontoid process, respectively, for the multiple regression models. This suggests that quantitative analysis might be used to estimate skeletal maturation status.

  12. Determination of mechanical stiffness of bone by pQCT measurements: correlation with non-destructive mechanical four-point bending test data.

    PubMed

    Martin, Daniel E; Severns, Anne E; Kabo, J M J Michael

    2004-08-01

    Mechanical tests of bone provide valuable information about material and structural properties important for understanding bone pathology in both clinical and research settings, but no previous studies have produced applicable non-invasive, quantitative estimates of bending stiffness. The goal of this study was to evaluate the effectiveness of using peripheral quantitative computed tomography (pQCT) data to accurately compute the bending stiffness of bone. Normal rabbit humeri (N=8) were scanned at their mid-diaphyses using pQCT. The average bone mineral densities and the cross-sectional moments of inertia were computed from the pQCT cross-sections. Bending stiffness was determined as a function of the elastic modulus of compact bone (based on the local bone mineral density), cross-sectional moment of inertia, and simulated quasistatic strain rate. The actual bending stiffness of the bones was determined using four-point bending tests. Comparison of the bending stiffness estimated from the pQCT data and the mechanical bending stiffness revealed excellent correlation (R² = 0.96). The bending stiffness from the pQCT data was on average 103% of that obtained from the four-point bending tests. The results indicate that pQCT data can be used to accurately determine the bending stiffness of normal bone. Possible applications include temporal quantification of fracture healing and risk management of osteoporosis or other bone pathologies.
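
    A sketch of the stiffness computation described above: a modulus-weighted cross-sectional moment of inertia from a pQCT slice, with the local elastic modulus mapped from bone mineral density. The density-to-modulus power law coefficients and the bone threshold below are stand-ins, not the paper's calibration.

    ```python
    import numpy as np

    def bending_stiffness(density_map, pixel_mm, a=3.8e3, b=2.0, threshold=0.6):
        """EI (N*mm^2) about the neutral axis of a pQCT cross-section.

        density_map: 2-D array of BMD in g/cm^3; pixels below `threshold` are
        treated as non-bone. E(rho) = a * rho**b (MPa) is a hypothetical power law.
        """
        ys, xs = np.nonzero(density_map > threshold)
        rho = density_map[ys, xs]
        E = a * rho ** b                      # local modulus per bone pixel, MPa
        y_mm = ys * pixel_mm
        y_bar = np.average(y_mm, weights=E)   # modulus-weighted neutral axis
        dA = pixel_mm ** 2                    # pixel area, mm^2
        return float(np.sum(E * (y_mm - y_bar) ** 2 * dA))  # MPa*mm^4 = N*mm^2
    ```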

  13. Limitations of quantitative analysis of deep crustal seismic reflection data: Examples from GLIMPCE

    USGS Publications Warehouse

    Lee, Myung W.; Hutchinson, Deborah R.

    1992-01-01

    Amplitude preservation in seismic reflection data can be obtained by a relative true amplitude (RTA) processing technique in which the relative strength of reflection amplitudes is preserved vertically as well as horizontally, after compensating for amplitude distortion by near-surface effects and propagation effects. Quantitative analysis of relative true amplitudes of the Great Lakes International Multidisciplinary Program on Crustal Evolution seismic data is hampered by large uncertainties in estimates of the water bottom reflection coefficient and the vertical amplitude correction and by inadequate noise suppression. Processing techniques such as deconvolution, F-K filtering, and migration significantly change the overall shape of amplitude curves and hence calculation of reflection coefficients and average reflectance. Thus lithological interpretation of deep crustal seismic data based on the absolute value of estimated reflection strength alone is meaningless. The relative strength of individual events, however, is preserved on curves generated at different stages in the processing. We suggest that qualitative comparisons of relative strength, if used carefully, provide a meaningful measure of variations in reflectivity. Simple theoretical models indicate that peg-leg multiples rather than water bottom multiples are the most severe source of noise contamination. These multiples are extremely difficult to remove when the water bottom reflection coefficient is large (>0.6), a condition that exists beneath parts of Lake Superior and most of Lake Huron.

  14. Stability of Gradient Field Corrections for Quantitative Diffusion MRI.

    PubMed

    Rogers, Baxter P; Blaber, Justin; Welch, E Brian; Ding, Zhaohua; Anderson, Adam W; Landman, Bennett A

    2017-02-11

    In magnetic resonance diffusion imaging, gradient nonlinearity causes significant bias in the estimation of quantitative diffusion parameters such as diffusivity, anisotropy, and diffusion direction in areas away from the magnet isocenter. This bias can be substantially reduced if the scanner- and coil-specific gradient field nonlinearities are known. Using a set of field map calibration scans on a large (29 cm diameter) phantom combined with a solid harmonic approximation of the gradient fields, we predicted the obtained b-values and applied gradient directions throughout a typical field of view for brain imaging for a typical 32-direction diffusion imaging sequence. We measured the stability of these predictions over time. At 80 mm from scanner isocenter, the predicted b-value differed from the intended value by 1-6% due to gradient nonlinearity, and predicted gradient directions were in error by up to 1 degree. Over the course of one month, the change in these quantities due to calibration-related factors such as scanner drift and variation in phantom placement was <0.5% for b-values and <0.5 degrees for angular deviation. The proposed calibration procedure allows the estimation of gradient nonlinearity to correct b-values and gradient directions ahead of advanced diffusion image processing for high angular resolution data, and requires only a five-minute phantom scan that can be included in a weekly or monthly quality assurance protocol.
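
    Given the voxel-wise gradient nonlinearity tensor L(r) that such a calibration yields (identity at isocenter), the effective b-value and direction follow directly, since b scales with the squared gradient magnitude at fixed pulse timing. A sketch, with L(r) supplied as an input rather than computed:

    ```python
    import numpy as np

    def corrected_b_and_dir(L, g_unit, b_nominal):
        """Effective b-value and diffusion direction at one voxel.

        L: 3x3 gradient nonlinearity tensor at the voxel; g_unit: intended unit
        gradient direction; b_nominal: prescribed b-value (s/mm^2).
        """
        g_eff = L @ g_unit                  # achieved gradient direction/strength
        scale = np.linalg.norm(g_eff)
        return b_nominal * scale**2, g_eff / scale  # b scales with |g|^2

    # Example: a 2% gradient over-strength along x raises b by ~4%.
    L = np.diag([1.02, 1.0, 1.0])
    print(corrected_b_and_dir(L, np.array([1.0, 0.0, 0.0]), 1000.0))
    ```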

  15. Comb-push ultrasound shear elastography of breast masses: initial results show promise.

    PubMed

    Denis, Max; Mehrmohammadi, Mohammad; Song, Pengfei; Meixner, Duane D; Fazzio, Robert T; Pruthi, Sandhya; Whaley, Dana H; Chen, Shigao; Fatemi, Mostafa; Alizad, Azra

    2015-01-01

    To evaluate the performance of Comb-push Ultrasound Shear Elastography (CUSE) for classification of breast masses. CUSE is an ultrasound-based quantitative two-dimensional shear wave elasticity imaging technique that utilizes multiple laterally distributed acoustic radiation force (ARF) beams to simultaneously excite the tissue and induce shear waves. Female patients who were categorized as having suspicious breast masses underwent CUSE evaluations prior to biopsy. An elasticity estimate within the breast mass was obtained from the CUSE shear wave speed map. Elasticity estimates of various types of benign and malignant masses were compared with biopsy results. Fifty-four female patients with suspicious breast masses from our ongoing study are presented. Our cohort included 31 malignant and 23 benign breast masses. Our results indicate that the mean shear wave speed was significantly higher in malignant masses (6 ± 1.58 m/s) than in benign masses (3.65 ± 1.36 m/s). Therefore, the stiffness of the mass as quantified by Young's modulus is significantly higher in malignant masses. According to the receiver operating characteristic (ROC) curve, the optimal cut-off value of 83 kPa yields 87.10% sensitivity, 82.61% specificity, and 0.88 for the area under the curve (AUC). CUSE has the potential for clinical utility as a quantitative diagnostic imaging tool adjunct to B-mode ultrasound for differentiation of malignant and benign breast masses.
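
    The reported speeds and the 83 kPa cutoff are linked by the standard soft-tissue conversion E = 3ρc² (incompressibility assumed, ρ ≈ 1000 kg/m³), which the snippet below uses as a quick check of the numbers above.

    ```python
    rho = 1000.0                       # kg/m^3, assumed tissue density
    E = lambda c: 3 * rho * c**2       # Young's modulus (Pa) from shear speed c (m/s)

    print(E(3.65) / 1e3, "kPa (mean benign)")       # ~40 kPa
    print(E(6.0) / 1e3, "kPa (mean malignant)")     # ~108 kPa
    print((83e3 / (3 * rho)) ** 0.5, "m/s cutoff")  # ~5.3 m/s, equivalent to 83 kPa
    ```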

  16. Combining QMRA and Epidemiology to Estimate Campylobacteriosis Incidence.

    PubMed

    Evers, Eric G; Bouwknegt, Martijn

    2016-10-01

    The disease burden of pathogens as estimated by QMRA (quantitative microbial risk assessment) and EA (epidemiological analysis) often differs considerably. This is an unsatisfactory situation for policymakers and scientists. We explored methods to obtain a unified estimate using campylobacteriosis in the Netherlands as an example, where previous work resulted in estimates of 4.9 million (QMRA) and 90,600 (EA) cases per year. Using the maximum likelihood approach and considering EA the gold standard, the QMRA model could produce the original EA estimate by adjusting mainly the dose-infection relationship. Considering QMRA the gold standard, the EA model could produce the original QMRA estimate by adjusting mainly the probability that a gastroenteritis case is caused by Campylobacter. A joint analysis of QMRA and EA data and models assuming identical outcomes, using a frequentist or Bayesian approach (using vague priors), resulted in estimates of 102,000 or 123,000 campylobacteriosis cases per year, respectively. These were close to the original EA estimate, and this will be related to the dissimilarity in data availability. The Bayesian approach further showed that attenuating the condition of equal outcomes immediately resulted in very different estimates of the number of campylobacteriosis cases per year and that using more informative priors had little effect on the results. In conclusion, EA was dominant in estimating the burden of campylobacteriosis in the Netherlands. However, it must be noted that only statistical uncertainties were taken into account here. Taking all, usually difficult to quantify, uncertainties into account might lead to a different conclusion. © 2016 Society for Risk Analysis.

  17. An approach for estimating measurement uncertainty in medical laboratories using data from long-term quality control and external quality assessment schemes.

    PubMed

    Padoan, Andrea; Antonelli, Giorgia; Aita, Ada; Sciacovelli, Laura; Plebani, Mario

    2017-10-26

    The present study was prompted by the ISO 15189 requirement that medical laboratories should estimate measurement uncertainty (MU). The method used to estimate MU included: (a) identification of quantitative tests; (b) classification of tests in relation to their clinical purpose; and (c) identification of criteria to estimate the different MU components. Imprecision was estimated using long-term internal quality control (IQC) results of the year 2016, while external quality assessment scheme (EQAs) results obtained in the period 2015-2016 were used to estimate bias and bias uncertainty. A total of 263 measurement procedures (MPs) were analyzed. On the basis of test purpose, in 51 MPs imprecision alone was used to estimate MU; among the remaining MPs, the bias component was not estimable for 22 MPs because EQAs results did not provide reliable statistics. For a total of 28 MPs, two or more MU values were calculated on the basis of analyte concentration levels. Overall, results showed that the uncertainty of bias is a minor factor contributing to MU, the bias component itself being the most relevant contributor for all the studied sample matrices. The model chosen for MU estimation allowed us to derive a standardized approach for bias calculation with respect to the fitness-for-purpose of test results. Measurement uncertainty estimation could readily be implemented in medical laboratories as a useful tool for monitoring the analytical quality of test results, since it combines the long-term IQC imprecision results with bias derived from EQAs results.
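
    One common top-down combination consistent with this description (e.g., the Nordtest model) pools long-term IQC imprecision with an RMS bias from EQA rounds and expands with a coverage factor of k = 2; the authors' exact formulation may differ. A sketch:

    ```python
    import numpy as np

    def measurement_uncertainty(cv_iqc, biases, u_ref):
        """Relative expanded MU (%) from IQC imprecision and EQA bias data.

        cv_iqc: long-term IQC coefficient of variation (%);
        biases: per-EQA-round relative biases (%);
        u_ref: uncertainty of the EQA assigned values (%).
        """
        u_bias = np.sqrt(np.mean(np.square(biases)) + u_ref**2)  # RMS bias + reference
        u_c = np.sqrt(cv_iqc**2 + u_bias**2)                     # combined standard MU
        return 2 * u_c                                           # expanded, k = 2

    print(measurement_uncertainty(2.1, [1.0, -0.8, 1.5], 0.7))   # ~5.0 %
    ```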

  18. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.

    PubMed

    Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf

    2010-05-25

    Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. In this work we present a set-based framework that makes it possible to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach can conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation, and shows that the proposed method allows evaluation of the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.

  19. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks

    PubMed Central

    2010-01-01

    Background Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. Results In this work we present a set-based framework that makes it possible to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. Conclusions The practicability of our approach is illustrated with two case studies. The first study shows that our approach can conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation, and shows that the proposed method allows evaluation of the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates. PMID:20500862

  20. Improvement in diastolic intraventricular pressure gradients in patients with HOCM after ethanol septal reduction

    NASA Technical Reports Server (NTRS)

    Rovner, Aleksandr; Smith, Rebecca; Greenberg, Neil L.; Tuzcu, E. Murat; Smedira, Nicholas; Lever, Harry M.; Thomas, James D.; Garcia, Mario J.

    2003-01-01

    We sought to validate measurement of intraventricular pressure gradients (IVPG) and analyze their change in patients with hypertrophic obstructive cardiomyopathy (HOCM) after ethanol septal reduction (ESR). Quantitative analysis of color M-mode Doppler (CMM) images may be used to estimate diastolic IVPG noninvasively. Noninvasive IVPG measurement was validated in 10 patients undergoing surgical myectomy. Echocardiograms were then analyzed in 19 patients at baseline and after ESR. Pulsed Doppler data through the mitral valve and pulmonary venous flow were obtained. CMM was used to obtain the flow propagation velocity (Vp) and to calculate IVPG off-line. Left atrial pressure was estimated with the use of previously validated Doppler equations. Data were compared before and after ESR. CMM-derived IVPG correlated well with invasive measurements obtained before and after surgical myectomy [r = 0.8, P < 0.01, Delta(CMM - invasive IVPG) = 0.09 +/- 0.45 mmHg]. ESR resulted in a decrease of the resting left ventricular outflow tract (LVOT) systolic gradient from 62 +/- 10 to 29 +/- 5 mmHg (P < 0.001). There was a significant increase in Vp and IVPG (from 48 +/- 5 to 74 +/- 7 cm/s and from 1.5 +/- 0.2 to 2.6 +/- 0.3 mmHg, respectively, P < 0.001 for both). Estimated left atrial pressure decreased from 16.2 +/- 1.1 to 11.5 +/- 0.9 mmHg (P < 0.001). The increase in IVPG correlated with the reduction in the LVOT gradient (r = 0.6, P < 0.01). Reduction of LVOT obstruction after ESR is associated with an improvement in diastolic suction force. Noninvasive measurement of IVPG may be used as an indicator of diastolic function improvement in HOCM.
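
    A sketch of the standard route from a CMM velocity field v(s, t) to diastolic IVPG: integrate the one-dimensional Euler equation, dP/ds = -rho*(dv/dt + v*dv/ds), along the scanline and take the base-to-apex pressure difference. The grid spacings and blood density below are assumptions.

    ```python
    import numpy as np

    def ivpg(v, ds_m, dt_s, rho=1060.0):
        """Intraventricular pressure difference (mmHg) over time.

        v: 2-D array of CMM velocities (m/s), rows = positions along the
        scanline (base to apex), cols = time frames; ds_m, dt_s: grid spacings.
        """
        dvdt = np.gradient(v, dt_s, axis=1)       # local (temporal) acceleration
        dvds = np.gradient(v, ds_m, axis=0)       # convective term
        dpds = -rho * (dvdt + v * dvds)           # Euler equation, Pa/m
        dp = np.trapz(dpds, dx=ds_m, axis=0)      # integrate base -> apex
        return dp / 133.322                       # Pa -> mmHg
    ```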
