Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan
2015-06-01
Spectroscopy is often applied when a rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise from non-ideal instrument and sample properties. In this study, to improve quantitative analysis of near-infrared spectra, derivatives of noisy raw spectral data must be estimated with high accuracy. A new spectral estimator based on the singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, and a stability analysis of the estimator is given. Theoretical analysis and simulation results confirm that derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated in an analysis of beer spectra. The derivative spectra of the beer and marzipan datasets are used to build calibration models by partial least squares (PLS) modeling. The results show that PLS based on the new estimator achieves better performance than the Savitzky-Golay algorithm and can serve as an alternative for quantitative analytical applications.
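A minimal sketch of the derivative-preprocessing baseline this abstract benchmarks against: a Savitzky-Golay first derivative of a noisy synthetic NIR band (the SPSE itself is not reproduced here; the spectrum, window length, and polynomial order are illustrative assumptions).

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic noisy NIR-like spectrum: two overlapping Gaussian bands plus noise.
wavelength = np.linspace(1100.0, 2500.0, 700)                     # nm
clean = (np.exp(-0.5 * ((wavelength - 1450.0) / 40.0) ** 2)
         + 0.6 * np.exp(-0.5 * ((wavelength - 1520.0) / 35.0) ** 2))
noisy = clean + np.random.default_rng(0).normal(0.0, 0.01, wavelength.size)

# Savitzky-Golay first derivative (the baseline the SPSE is compared with);
# the derivative spectrum would then feed a PLS calibration model.
dx = wavelength[1] - wavelength[0]
d1 = savgol_filter(noisy, window_length=15, polyorder=3, deriv=1, delta=dx)
```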
[Quantitative estimation of the sources of urban atmospheric CO2 by carbon isotope composition].
Liu, Wei; Wei, Nan-Nan; Wang, Guang-Hua; Yao, Jian; Zeng, You-Shi; Fan, Xue-Bo; Geng, Yan-Hong; Li, Yan
2012-04-01
To effectively reduce urban carbon emissions and verify the effectiveness of current urban carbon-emission-reduction projects, the sources of urban atmospheric CO2 must be estimated quantitatively and correctly. Since little carbon isotope fractionation occurs during transport from pollution sources to the receptor, carbon isotope composition can be used for source apportionment. In the present study, a method was established to quantitatively estimate the sources of urban atmospheric CO2 from carbon isotope composition. Diurnal and height variations of the concentrations of CO2 derived from biomass, vehicle exhaust, and coal burning were then determined for atmospheric CO2 in the Jiading district of Shanghai. Biomass-derived CO2 accounts for the largest portion of atmospheric CO2. The concentrations of CO2 derived from coal burning are larger at night (00:00, 04:00, and 20:00) than in the daytime (08:00, 12:00, and 16:00) and increase with height, whereas those derived from vehicle exhaust decrease with height. The diurnal and height variations of the sources reflect the emission and transport characteristics of atmospheric CO2 in the Jiading district of Shanghai.
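A minimal sketch of the kind of isotope mass-balance calculation such source apportionment rests on: three source fractions solved from mass closure plus two isotopic balances. All endmember and mixture signatures below are illustrative assumptions, not the study's values.

```python
import numpy as np

# Columns: biomass, vehicle exhaust, coal burning.
# Rows: mass closure, delta-13C balance, Delta-14C balance (per mil).
A = np.array([[1.0,     1.0,     1.0],
              [-26.0,  -28.5,   -24.0],     # illustrative d13C endmembers
              [50.0,  -1000.0, -1000.0]])   # illustrative D14C endmembers
b = np.array([1.0, -26.5, -400.0])          # observed mixture signature

fractions = np.linalg.solve(A, b)           # source fractions of excess CO2
print(fractions)                            # ~[0.57, 0.30, 0.13]
```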
EPA's methodology for estimation of inhalation reference concentrations (RfCs) as benchmark estimates of the quantitative dose-response assessment of chronic noncancer toxicity for individual inhaled chemicals.
USDA-ARS's Scientific Manuscript database
We proposed a method to estimate the error variance among non-replicated genotypes, and thus to estimate the genetic parameters, by using replicated controls. We derived formulas to estimate the sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...
A new mean estimator using auxiliary variables for randomized response models
NASA Astrophysics Data System (ADS)
Ozgul, Nilgun; Cingi, Hulya
2013-10-01
Randomized response models (RRMs) are commonly used in surveys dealing with sensitive questions such as abortion, alcoholism, sexual orientation, drug taking, annual income, and tax evasion, to ensure interviewee anonymity and to reduce nonresponse rates and biased responses. Starting from the pioneering work of Warner [7], many versions of RRMs have been developed that can deal with quantitative responses. In this study, a new mean estimator is suggested for RRMs with quantitative responses. Its mean square error is derived, and a simulation study is performed to show the efficiency of the proposed estimator relative to other existing estimators for RRMs.
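A minimal sketch of one classical quantitative RRM, the additive scrambling model (not the estimator proposed in this abstract): each respondent reports the sensitive value plus a random mask with known distribution, and the mean is recovered by subtracting the known mask mean.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
y = rng.gamma(shape=4.0, scale=10.0, size=n)   # sensitive variable (income)

# Respondents report Z = Y + S, where the scrambling variable S ~ N(20, 5^2)
# is drawn by the respondent, so the interviewer never sees Y itself.
s = rng.normal(20.0, 5.0, size=n)
z = y + s

y_mean_hat = z.mean() - 20.0                   # unbiased: E[Y] = E[Z] - E[S]
print(y_mean_hat, y.mean())
```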
Boe, S G; Dalton, B H; Harwood, B; Doherty, T J; Rice, C L
2009-05-01
To establish the inter-rater reliability of decomposition-based quantitative electromyography (DQEMG) derived motor unit number estimates (MUNEs) and quantitative motor unit (MU) analysis. Using DQEMG, two examiners independently obtained a sample of needle- and surface-detected motor unit potentials (MUPs) from the tibialis anterior muscle of 10 subjects. Coupled with a maximal M wave, surface-detected MUPs were used to derive a MUNE for each subject and each examiner. Additionally, size-related parameters of the individual MUs were obtained following quantitative MUP analysis. Test-retest MUNE values were similar, with high reliability observed between examiners (ICC=0.87). Additionally, MUNE variability from test to retest, as quantified by a 95% confidence interval, was relatively low (±28 MUs). Lastly, quantitative data pertaining to MU size, complexity, and firing rate were similar between examiners. MUNEs and quantitative MU data can be obtained with high reliability by two independent examiners using DQEMG. Establishing the inter-rater reliability of MUNEs and quantitative MU analysis using DQEMG is central to the clinical applicability of the technique. In addition to assessing response to treatments over time, multiple clinicians may be involved in the longitudinal assessment of the MU pool of individuals with disorders of the central or peripheral nervous system.
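A minimal sketch of the ratio at the core of any MUNE technique, including DQEMG: whole-muscle M wave size divided by the mean size of the sampled surface-detected MUPs. Amplitudes below are illustrative placeholders; DQEMG itself works with negative-peak amplitudes/areas and larger MU samples.

```python
import numpy as np

m_wave = 6200.0                                          # max M wave, uV
smups = np.array([95.0, 140.0, 88.0, 120.0, 105.0])      # S-MUP sizes, uV

mune = m_wave / smups.mean()    # motor unit number estimate
print(round(mune))              # ~57 motor units
```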
Mannetje, Andrea 't; Steenland, Kyle; Checkoway, Harvey; Koskela, Riitta-Sisko; Koponen, Matti; Attfield, Michael; Chen, Jingqiong; Hnizdo, Eva; DeKlerk, Nicholas; Dosemeci, Mustafa
2002-08-01
Comprehensive quantitative silica exposure estimates over time, measured in the same units across a number of cohorts, would make possible a pooled exposure-response analysis for lung cancer. Such an analysis would help clarify the continuing controversy regarding whether silica causes lung cancer. Existing quantitative exposure data for 10 silica-exposed cohorts were retrieved from the original investigators. Occupation- and time-specific exposure estimates were either adopted/adapted or developed for each cohort, and converted to milligrams per cubic meter (mg/m³) respirable crystalline silica. Quantitative exposure assignments were typically based on a large number (thousands) of raw measurements, or otherwise consisted of exposure estimates by experts (for two cohorts). Median exposure levels of the cohorts ranged between 0.04 and 0.59 mg/m³ respirable crystalline silica. Exposure estimates were partially validated via their successful prediction of silicosis in these cohorts. Existing data were successfully adopted or modified to create comparable quantitative exposure estimates over time for 10 silica-exposed cohorts, permitting a pooled exposure-response analysis. The difficulties encountered in deriving common exposure estimates across cohorts are discussed. Copyright 2002 Wiley-Liss, Inc.
A set of literature data was used to derive several quantitative structure-activity relationships (QSARs) to predict the rate constants for the microbial reductive dehalogenation of chlorinated aromatics. Dechlorination rate constants for 25 chloroaromatics were corrected for th...
Peters, Susan; Kromhout, Hans; Portengen, Lützen; Olsson, Ann; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Vermeulen, Roel
2013-01-01
We describe the elaboration and sensitivity analyses of a quantitative job-exposure matrix (SYN-JEM) for respirable crystalline silica (RCS). The aim was to gain insight into the robustness of the SYN-JEM RCS estimates with respect to critical decisions taken in the elaboration process. SYN-JEM for RCS exposure consists of three axes (job, region, and year) based on estimates derived from a previously developed statistical model. To elaborate SYN-JEM, several decisions were taken: the application of (i) a single time trend; (ii) region-specific adjustments in RCS exposure; and (iii) a prior job-specific exposure level (from the semi-quantitative DOM-JEM), with an override of 0 mg/m³ for jobs a priori defined as non-exposed. Furthermore, we assumed that exposure levels reached a ceiling in 1960 and remained constant prior to this date. We applied SYN-JEM to the occupational histories of subjects from a large international pooled community-based case-control study. Cumulative exposure levels derived with SYN-JEM were compared with those from alternative models, described by the Pearson correlation (Rp) and differences in units of exposure (mg/m³-years). Alternative models concerned changes in the application of job- and region-specific estimates and the exposure ceiling, and omission of the a priori exposure ranking. Cumulative exposure levels for the study subjects ranged from 0.01 to 60 mg/m³-years, with a median of 1.76 mg/m³-years. Exposure levels derived from SYN-JEM and the alternative models were overall highly correlated (Rp > 0.90), although somewhat lower when omitting the region estimate (Rp = 0.80) or not taking into account the assigned semi-quantitative exposure level (Rp = 0.65). Modification of the time trend (i.e. exposure ceiling at 1950 or 1970, or assuming a decline before 1960) caused the largest changes in absolute exposure levels (26-33% difference), but without changing the relative ranking (Rp = 0.99). Exposure estimates derived from SYN-JEM appeared plausible compared with (historical) levels described in the literature. Decisions taken in the development of SYN-JEM did not critically change the cumulative exposure levels. The influence of region-specific estimates needs to be explored in future risk analyses.
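A minimal sketch of how a job-exposure matrix is applied to an occupational history to yield the cumulative exposure compared across SYN-JEM variants above. Jobs, years, and levels are illustrative placeholders.

```python
import pandas as pd

# Toy JEM: assigned mean RCS level (mg/m3) per job, region and year.
jem = pd.DataFrame({
    "job":    ["miner", "miner", "potter", "potter"],
    "region": ["north", "north", "north",  "north"],
    "year":   [1965, 1966, 1965, 1966],
    "level":  [0.12, 0.11, 0.05, 0.05],
})

# One subject's history, one row per job-year worked.
history = pd.DataFrame({
    "job":    ["miner", "miner", "potter"],
    "region": ["north", "north", "north"],
    "year":   [1965, 1966, 1966],
})

# Cumulative exposure = sum of assigned annual levels (mg/m3-years).
assigned = history.merge(jem, on=["job", "region", "year"], how="left")
print(assigned["level"].sum())   # 0.12 + 0.11 + 0.05 = 0.28 mg/m3-years
```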
Transcript copy number estimation using a mouse whole-genome oligonucleotide microarray
Carter, Mark G; Sharov, Alexei A; VanBuren, Vincent; Dudekula, Dawood B; Carmack, Condie E; Nelson, Charlie; Ko, Minoru SH
2005-01-01
The ability to quantitatively measure the expression of all genes in a given tissue or cell with a single assay is an exciting promise of gene-expression profiling technology. We validated an in situ-synthesized 60-mer oligonucleotide microarray designed to detect transcripts from all mouse genes, together with a set of exogenous RNA controls derived from the yeast genome (made freely available without restriction) that allow quantitative estimation of absolute endogenous transcript abundance.
[Quantitative relationships between hyper-spectral vegetation indices and leaf area index of rice].
Tian, Yong-Chao; Yang, Jie; Yao, Xia; Zhu, Yan; Cao, Wei-Xing
2009-07-01
Based on field experiments with different rice varieties under different nitrogen application levels, the quantitative relationships of rice leaf area index (LAI) with canopy hyper-spectral parameters at different growth stages were analyzed. Rice LAI had good relationships with several hyper-spectral vegetation indices, the correlation coefficient being highest with DI (difference index), followed by RI (ratio index) and NI (normalized index), whether based on spectral reflectance or on the first-derivative spectra. The two best spectral indices for estimating LAI were the difference index DI(854, 760) (based on the spectral bands at 854 nm and 760 nm) and the difference index DI(D676, D778) (based on the first-derivative bands at 676 nm and 778 nm). In general, the hyper-spectral vegetation indices based on spectral reflectance performed better than those based on the first-derivative spectra. Tests with an independent dataset suggested that rice LAI monitoring models with the difference index DI(854, 760) as the variable give accurate LAI estimates and are suitable for monitoring rice LAI.
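The three index families compared above have simple closed forms; a sketch (reflectance values are illustrative):

```python
def spectral_indices(r_a: float, r_b: float):
    """DI, RI and NI from reflectances at two bands a and b."""
    di = r_a - r_b                    # difference index DI(a, b)
    ri = r_a / r_b                    # ratio index RI(a, b)
    ni = (r_a - r_b) / (r_a + r_b)    # normalized index NI(a, b)
    return di, ri, ni

# Illustrative canopy reflectances at 854 nm and 760 nm; LAI would then be
# estimated from a regression of field-measured LAI on DI(854, 760).
print(spectral_indices(0.42, 0.35))
```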
Gifford, Aliya; Walker, Ronald C.; Towse, Theodore F.; Brian Welch, E.
2015-01-01
Beyond estimation of depot volumes, quantitative analysis of adipose tissue properties could improve understanding of how adipose tissue correlates with metabolic risk factors. We investigated whether the fat signal fraction (FSF) derived from quantitative fat–water magnetic resonance imaging (MRI) scans at 3.0 T correlates with CT Hounsfield units (HU) of the same tissue. These measures were acquired in the subcutaneous white adipose tissue (WAT) at the umbilical level of 21 healthy adult subjects. A moderate correlation exists between MRI- and CT-derived WAT values for all subjects (R²=0.54, p<0.0001), with a slope of −2.6 HU per %FSF (95% CI [−3.3, −1.8]), indicating that a decrease of 1 HU corresponds to a mean increase of 0.38% in FSF. We demonstrate that FSF estimates obtained using quantitative fat–water MRI techniques correlate with CT HU values in subcutaneous WAT; therefore, MRI-based FSF could be used as an alternative to CT HU for assessing metabolic risk factors.
Security Events and Vulnerability Data for Cybersecurity Risk Estimation.
Allodi, Luca; Massacci, Fabio
2017-08-01
Current industry standards for estimating cybersecurity risk are based on qualitative risk matrices as opposed to quantitative risk estimates. In contrast, risk assessment in most other industry sectors aims at deriving quantitative risk estimates (e.g., Basel II in finance). This article presents a model and methodology to leverage the large amount of data available from the IT infrastructure of an organization's security operation center to quantitatively estimate the probability of attack. Our methodology specifically addresses untargeted attacks delivered by automatic tools that make up the vast majority of attacks in the wild against users and organizations. We consider two-stage attacks whereby the attacker first breaches an Internet-facing system and then escalates the attack to internal systems by exploiting local vulnerabilities in the target. Our methodology factors in the power of the attacker as the number of "weaponized" vulnerabilities he/she can exploit, and can be adjusted to match the risk appetite of the organization. We illustrate our methodology using data from a large financial institution and discuss the significant mismatch between traditional qualitative risk assessments and our quantitative approach. © 2017 Society for Risk Analysis.
Reiffsteck, A; Dehennin, L; Scholler, R
1982-11-01
Estrone, 2-methoxyestrone and estradiol-17β have been definitively identified in the seminal plasma of man, bull, boar and stallion by high-resolution gas chromatography combined with selective monitoring of characteristic ions of suitable derivatives. Quantitative estimations were performed by isotope dilution with deuterated analogues and by monitoring the molecular ions of the trimethylsilyl ethers of labelled and unlabelled compounds. Concentrations of unconjugated and total estrogens are reported, together with a statistical evaluation of accuracy and precision.
Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S
2015-01-16
Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (qPCR), quantitative reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques is compared: a traditional method and a Bayesian mixed-model approach. The latter accounts for statistical dependence among QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed-model approach assimilates information from replicate QMM assays, improving reliability and inter-assay homogeneity and providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed-model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to substantially improve the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
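A minimal sketch of the "traditional" calibration baseline that the Bayesian mixed model is shown to supersede: a straight-line fit of assay signal on log density of standards, inverted to convert new signals into density estimates. Densities and signals are illustrative (QT-NASBA's time-to-positivity falls as target density rises).

```python
import numpy as np

density = np.array([1e2, 1e3, 1e4, 1e5, 1e6])      # standards, gametocytes/mL
signal = np.array([58.0, 49.5, 41.2, 33.1, 24.8])  # assay signal, illustrative

slope, intercept = np.polyfit(np.log10(density), signal, deg=1)

def estimate_density(s: float) -> float:
    """Invert the standard curve for a new sample's signal."""
    return 10.0 ** ((s - intercept) / slope)

print(estimate_density(45.0))
```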
Birk, Thomas; Guldner, Karlheinz; Mundt, Kenneth A; Dahmann, Dirk; Adams, Robert C; Parsons, William
2010-09-01
A time-dependent quantitative assessment of silica exposure among nearly 18,000 German porcelain workers was conducted. Results will be used to evaluate exposure-response disease risks. Over 8000 historical industrial hygiene (IH) measurements with original sampling and analysis protocols from 1954-2006 were obtained from the German Berufsgenossenschaft der keramischen und Glas-Industrie (BGGK) and used to construct a job-exposure matrix (JEM). Early measurements from different devices were converted to modern gravimetric-equivalent values. Conversion factors were derived from parallel historical measurements and new side-by-side measurements using historical and modern devices in laboratory dust tunnels and active workplace locations. Exposure values were summarized and smoothed using LOESS regression; estimates for early years were derived by backward extrapolation. Employee work histories were merged with JEM values to determine cumulative crystalline silica exposures for cohort members. Average silica concentrations were derived for six primary similar exposure groups (SEGs) for 1938-2006. Over 40% of the cohort accumulated <0.5 mg/m³-years; just over one-third accumulated >1 mg/m³-years. Nearly 5000 workers had cumulative crystalline silica estimates >1.5 mg/m³-years. Similar numbers of men and women fell into each cumulative exposure category, except for 1113 women and 1567 men in the highest category. Over half of those hired before 1960 accumulated >3 mg/m³-years crystalline silica, compared with 4.9% of those hired after 1960. Among those ever working in the materials preparation area, half accumulated >3 mg/m³-years, compared with 12% of those never working in this area. Quantitative respirable silica exposures were estimated for each member of this cohort, including employment periods for which sampling used now-obsolete technologies. Although individual cumulative exposure estimates ranged from background to about 40 mg/m³-years, many of these estimates reflect long-term exposures near modern exposure limit values, allowing direct evaluation of lung cancer and silicosis risks near these limits without extrapolation. This quantitative exposure assessment is the largest to date in the porcelain industry.
NASA Astrophysics Data System (ADS)
Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.
2014-04-01
Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)⁻¹; cardiac output = 3, 5, 8 L min⁻¹). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features, including heterogeneous microvascular flow, permeability, and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. Acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time-attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam-hardening correction, the slope method consistently underestimated flow, by 47.5% on average, while the quantitative models provided estimates with less than 6.5% average bias and variance that increased with increasing dose reduction. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared with the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods across the range of techniques evaluated. This suggests that there is no particular advantage among the quantitative estimation methods, nor between dose reduction via tube current reduction and dose reduction via reduced temporal sampling. These data are important for optimizing implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.
Gunawardena, Harsha P; O'Brien, Jonathon; Wrobel, John A; Xie, Ling; Davies, Sherri R; Li, Shunqiang; Ellis, Matthew J; Qaqish, Bahjat F; Chen, Xian
2016-02-01
Single quantitative platforms such as label-based or label-free quantitation (LFQ) involve compromises in accuracy, precision, protein sequence coverage, and speed of quantifiable proteomic measurements. To maximize the quantitative precision and the number of quantifiable proteins, i.e. the quantifiable coverage of tissue proteomes, we developed a unified approach, termed QuantFusion, that combines the quantitative ratios of all peptides measured by both LFQ and label-based methodologies. Here, we demonstrate the use of QuantFusion in determining the proteins differentially expressed in a pair of patient-derived tumor xenografts (PDXs) representing two major breast cancer (BC) subtypes, basal and luminal. Label-based in-spectra quantitative peptides derived from amino acid-coded tagging (AACT, also known as SILAC) of a non-malignant mammary cell line were uniformly added to each xenograft at a constant predefined ratio, from which Ratio-of-Ratio estimates were obtained for the label-free peptides paired with AACT peptides in each PDX tumor. A mixed-model statistical analysis was used to determine global differential protein expression by combining the complementary quantifiable peptide ratios measured by LFQ and Ratio-of-Ratios, respectively. With the minimum number of replicates required for obtaining statistically significant ratios, QuantFusion uses these distinct mechanisms to "rescue" the missing data inherent to both LFQ and label-based quantitation. Combining quantifiable peptide data from both schemes increased the overall number of peptide-level measurements and protein-level estimates. In our analysis of the PDX tumor proteomes, QuantFusion increased the number of distinct peptide ratios by 65% for proteins differentially expressed between the BC subtypes. This improvement in quantifiable coverage not only increased the number of measurable protein fold-changes by 8% but also increased the average precision of the quantitative estimates by 181%, so that some BC subtype-specific proteins were rescued by QuantFusion. Thus, incorporating data from multiple quantitative approaches while accounting for measurement variability at both the peptide and global protein levels makes QuantFusion unique for obtaining increased coverage and quantitative precision for tissue proteomes. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
Burstyn, Igor; Boffetta, Paolo; Kauppinen, Timo; Heikkilä, Pirjo; Svane, Ole; Partanen, Timo; Stücker, Isabelle; Frentzel-Beyme, Rainer; Ahrens, Wolfgang; Merzenich, Hiltrud; Heederik, Dick; Hooiveld, Mariëtte; Langård, Sverre; Randem, Britt G; Järvholm, Bengt; Bergdahl, Ingvar; Shaham, Judith; Ribak, Joseph; Kromhout, Hans
2003-01-01
An exposure matrix (EM) for known and suspected carcinogens was required for a multicenter international cohort study of cancer risk and bitumen among asphalt workers. Production characteristics of the companies enrolled in the study were ascertained through a company questionnaire (CQ). Exposures to coal tar, bitumen fume, organic vapor, polycyclic aromatic hydrocarbons, diesel fume, silica, and asbestos were assessed semi-quantitatively using information from the CQs, expert judgment, and statistical models. Exposures of road-paving workers to bitumen fume, organic vapor, and benzo(a)pyrene were estimated quantitatively by applying regression models, based on monitoring data, to exposure scenarios identified by the CQs. Exposure estimates were derived for the 217 companies enrolled in the cohort, plus the Swedish asphalt paving industry in general. Most companies were engaged in road paving and asphalt mixing, but some also participated in general construction and roofing. Coal tar use was most common in Denmark and The Netherlands, but the practice is now obsolete. Quantitative estimates of exposure to bitumen fume, organic vapor, and benzo(a)pyrene for pavers and semi-quantitative estimates of exposure to these agents among all subjects were strongly correlated. Semi-quantitative estimates of bitumen fume and coal tar exposures were only moderately correlated. The EM captured a non-monotonic historical decrease in exposure to all agents assessed except silica and diesel exhaust. We produced a data-driven EM using a methodology that can be adapted for other multicenter studies. Copyright 2003 Wiley-Liss, Inc.
Optimally weighted least-squares steganalysis
NASA Astrophysics Data System (ADS)
Ker, Andrew D.
2007-02-01
Quantitative steganalysis aims to estimate the amount of payload in a stego object, and such estimators seem to arise naturally in steganalysis of Least Significant Bit (LSB) replacement in digital images. However, as with all steganalysis, the estimators are subject to errors, and their magnitude seems heavily dependent on properties of the cover. In very recent work we have given the first derivation of estimation error, for a certain method of steganalysis (the Least-Squares variant of Sample Pairs Analysis) of LSB replacement steganography in digital images. In this paper we make use of our theoretical results to find an improved estimator and detector. We also extend the theoretical analysis to another (more accurate) steganalysis estimator (Triples Analysis) and hence derive an improved version of that estimator too. Experimental results show that the new steganalyzers have improved accuracy, particularly in the difficult case of never-compressed covers.
Quantitative sonoelastography for the in vivo assessment of skeletal muscle viscoelasticity
NASA Astrophysics Data System (ADS)
Hoyt, Kenneth; Kneezel, Timothy; Castaneda, Benjamin; Parker, Kevin J.
2008-08-01
A novel quantitative sonoelastography technique for assessing the viscoelastic properties of skeletal muscle tissue was developed. Slowly propagating shear wave interference patterns (termed crawling waves) were generated using a two-source configuration vibrating normal to the surface. Theoretical models predict crawling wave displacement fields, which were validated through phantom studies. In experiments, a viscoelastic model was fit to dispersive shear wave speed sonoelastographic data using nonlinear least-squares techniques to determine frequency-independent shear modulus and viscosity estimates. Shear modulus estimates derived using the viscoelastic model were in agreement with that obtained by mechanical testing on phantom samples. Preliminary sonoelastographic data acquired in healthy human skeletal muscles confirm that high-quality quantitative elasticity data can be acquired in vivo. Studies on relaxed muscle indicate discernible differences in both shear modulus and viscosity estimates between different skeletal muscle groups. Investigations into the dynamic viscoelastic properties of (healthy) human skeletal muscles revealed that voluntarily contracted muscles exhibit considerable increases in both shear modulus and viscosity estimates as compared to the relaxed state. Overall, preliminary results are encouraging and quantitative sonoelastography may prove clinically feasible for in vivo characterization of the dynamic viscoelastic properties of human skeletal muscle.
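A minimal sketch of the model-fitting step described above: shear-speed dispersion data fit to the Voigt-solid phase-velocity formula by nonlinear least squares to recover frequency-independent shear modulus and viscosity. The dispersion data, density, and starting values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

RHO = 1000.0  # tissue density, kg/m^3 (assumption)

def voigt_speed(omega, mu, eta):
    """Shear wave phase velocity of a Voigt solid (mu in Pa, eta in Pa*s)."""
    root = np.sqrt(mu ** 2 + (omega * eta) ** 2)
    return np.sqrt(2.0 * (mu ** 2 + (omega * eta) ** 2) / (RHO * (mu + root)))

freq = np.array([100.0, 150.0, 200.0, 250.0, 300.0])   # Hz
speed = np.array([2.1, 2.3, 2.5, 2.6, 2.8])            # m/s, illustrative

(mu_hat, eta_hat), _ = curve_fit(voigt_speed, 2.0 * np.pi * freq, speed,
                                 p0=(4e3, 2.0))
print(mu_hat, eta_hat)   # frequency-independent modulus and viscosity
```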
Nakagawa, Hiroaki; Nagatani, Yukihiro; Takahashi, Masashi; Ogawa, Emiko; Tho, Nguyen Van; Ryujin, Yasushi; Nagao, Taishi; Nakano, Yasutaka
2016-01-01
The 2011 official statement of idiopathic pulmonary fibrosis (IPF) mentions that the extent of honeycombing and the worsening of fibrosis on high-resolution computed tomography (HRCT) in IPF are associated with the increased risk of mortality. However, there are few reports about the quantitative computed tomography (CT) analysis of honeycombing area. In this study, we first proposed a computer-aided method for quantitative CT analysis of honeycombing area in patients with IPF. We then evaluated the correlations between honeycombing area measured by the proposed method with that estimated by radiologists or with parameters of PFTs. Chest HRCTs and pulmonary function tests (PFTs) of 36 IPF patients, who were diagnosed using HRCT alone, were retrospectively evaluated. Two thoracic radiologists independently estimated the honeycombing area as Identified Area (IA) and the percentage of honeycombing area to total lung area as Percent Area (PA) on 3 axial CT slices for each patient. We also developed a computer-aided method to measure the honeycombing area on CT images of those patients. The total honeycombing area as CT honeycombing area (HA) and the percentage of honeycombing area to total lung area as CT %honeycombing area (%HA) were derived from the computer-aided method for each patient. HA derived from three CT slices was significantly correlated with IA (ρ=0.65 for Radiologist 1 and ρ=0.68 for Radiologist 2). %HA derived from three CT slices was also significantly correlated with PA (ρ=0.68 for Radiologist 1 and ρ=0.70 for Radiologist 2). HA and %HA derived from all CT slices were significantly correlated with FVC (%pred.), DLCO (%pred.), and the composite physiologic index (CPI) (HA: ρ=-0.43, ρ=-0.56, ρ=0.63 and %HA: ρ=-0.60, ρ=-0.49, ρ=0.69, respectively). The honeycombing area measured by the proposed computer-aided method was correlated with that estimated by expert radiologists and with parameters of PFTs. This quantitative CT analysis of honeycombing area may be useful and reliable in patients with IPF. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Hinds, P S; Scandrett-Hibden, S; McAulay, L S
1990-04-01
The reliability and validity of qualitative research findings are viewed with scepticism by some scientists. This scepticism is derived from the belief that qualitative researchers give insufficient attention to estimating reliability and validity of data, and the differences between quantitative and qualitative methods in assessing data. The danger of this scepticism is that relevant and applicable research findings will not be used. Our purpose is to describe an evaluative strategy for use with qualitative data, a strategy that is a synthesis of quantitative and qualitative assessment methods. Results of the strategy and factors that influence its use are also described.
NASA Astrophysics Data System (ADS)
Denli, H.; Huang, L.
2008-12-01
Quantitative monitoring of reservoir property changes is essential for safe geologic carbon sequestration. Time-lapse seismic surveys have the potential to effectively monitor fluid migration in the reservoir, which causes geophysical property changes such as density and P- and S-wave velocities. We introduce a novel method for quantitative estimation of seismic velocity changes using time-lapse seismic data. The method employs elastic sensitivity wavefields, which are the derivatives of the elastic wavefield with respect to the density and P- and S-wave velocities of a target region. We derive the elastic sensitivity equations by analytical differentiation of the elastic-wave equations with respect to the seismic-wave velocities. The sensitivity equations are coupled with the wave equations in such a way that elastic waves arriving in a target reservoir act as a secondary source for the sensitivity fields. We use a staggered-grid finite-difference scheme with perfectly matched layer absorbing boundary conditions to solve the elastic-wave equations and the elastic sensitivity equations simultaneously. From the elastic-wave sensitivities, a linear relationship between relative seismic velocity changes in the reservoir and time-lapse seismic data at the receiver locations can be derived, which leads to an over-determined system of equations. We solve this system of equations with a least-squares method for each receiver to obtain P- and S-wave velocity changes. We validate the method using both surface and VSP synthetic time-lapse seismic data for a multi-layered model and the elastic Marmousi model, and then apply it to time-lapse field VSP data acquired at the Aneth oil field in Utah. A total of 10,500 tons of CO2 was injected into the oil reservoir between the two VSP surveys for enhanced oil recovery. The synthetic and field data studies show that our new method can quantitatively estimate changes in seismic velocities within a reservoir due to CO2 injection/migration.
NASA Astrophysics Data System (ADS)
Yoshida, Kenichiro; Nishidate, Izumi; Ojima, Nobutoshi; Iwata, Kayoko
2014-01-01
To quantitatively evaluate skin chromophores over a wide region of curved skin surface, we propose an approach that suppresses the effect of shading-derived error in the reflectance on the estimation of chromophore concentrations, without sacrificing the accuracy of that estimation. In our method, we use multiple regression analysis with the absorbance spectrum as the response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as the predictor variables. The concentrations of melanin and total hemoglobin are determined from the multiple regression coefficients using compensation formulae (CF) based on diffuse reflectance spectra derived from a Monte Carlo simulation. To suppress the shading-derived error, we investigated three different combinations of multiple regression coefficients for the CF. In vivo measurements on forearm skin demonstrated that the proposed approach can reduce estimation errors due to shading-derived errors in the reflectance. With the best combination of multiple regression coefficients, the ratio of the estimation error to the chromophore concentrations is about 10%. The proposed method does not require any measurements or assumptions about the shape of the subjects, an advantage over other studies on the reduction of shading-derived errors.
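A minimal sketch of the regression step described above (the Monte Carlo-derived compensation formulae that map coefficients to concentrations are not reproduced): absorbance regressed on tabulated extinction coefficients plus an intercept. All input arrays are placeholders the caller must supply.

```python
import numpy as np

def regression_coefficients(absorbance, eps_mel, eps_oxy, eps_deoxy):
    """Multiple regression of absorbance on chromophore extinction spectra.

    absorbance: measured -log10 reflectance at n wavelengths; eps_*: molar
    extinction coefficients of melanin, HbO2 and Hb at the same wavelengths
    (from published tables). Returns (a_mel, a_oxy, a_deoxy, a_0).
    """
    X = np.column_stack([eps_mel, eps_oxy, eps_deoxy,
                         np.ones_like(absorbance)])   # intercept column
    coef, *_ = np.linalg.lstsq(X, absorbance, rcond=None)
    return coef
```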
Smith, Eric G.
2015-01-01
Background: Nonrandomized studies typically cannot account for confounding from unmeasured factors. Method: A method is presented that exploits the recently-identified phenomenon of “confounding amplification” to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors. Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure. Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results: Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method’s requirements and assumptions are met. Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations: Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method’s estimates (although bootstrapping is one plausible approach). Conclusions: To this author’s knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is straightforward. The method's routine usefulness, however, has not yet been established, nor has the method been fully validated. Rapid further investigation of this novel method is clearly indicated, given the potential value of its quantitative or qualitative output. PMID:25580226
Biurrun Manresa, José A.; Arguissain, Federico G.; Medina Redondo, David E.; Mørch, Carsten D.; Andersen, Ole K.
2015-01-01
The agreement between humans and algorithms on whether an event-related potential (ERP) is present or not, and the level of variation in the estimated values of its relevant features, are largely unknown. Thus, the aim of this study was to determine the categorical and quantitative agreement between manual and automated methods for single-trial detection and estimation of ERP features. To this end, ERPs were elicited in sixteen healthy volunteers using electrical stimulation at graded intensities below and above the nociceptive withdrawal reflex threshold. The presence/absence of an ERP peak (categorical outcome) and its amplitude and latency (quantitative outcome) in each single trial were evaluated independently by two human observers and two automated algorithms taken from the existing literature. Categorical agreement was assessed using percentage positive and negative agreement and Cohen's κ, whereas quantitative agreement was evaluated using Bland-Altman analysis and the coefficient of variation. Typical values for the categorical agreement between manual and automated methods were derived, as well as reference values for the average and maximum differences that can be expected if one method is used instead of the others. Results showed that the human observers presented the highest categorical and quantitative agreement, and that there were large differences among methods in the detection and estimation of quantitative features. In conclusion, substantial care should be taken in the selection of the detection/estimation approach, since factors like stimulation intensity and the expected number of trials with/without response can play a significant role in the outcome of a study.
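A minimal sketch of the two agreement measures named above, for two raters' outputs on the same trials (inputs are placeholders):

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two binary (0/1) detection vectors."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                                        # observed
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # by chance
    return (po - pe) / (1 - pe)

def bland_altman(x, y):
    """Mean difference and 95% limits of agreement for paired estimates."""
    d = np.asarray(x) - np.asarray(y)
    half_width = 1.96 * d.std(ddof=1)
    return d.mean(), d.mean() - half_width, d.mean() + half_width
```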
Migheli, Francesca; Stoccoro, Andrea; Coppedè, Fabio; Wan Omar, Wan Adnan; Failli, Alessandra; Consolini, Rita; Seccia, Massimo; Spisni, Roberto; Miccoli, Paolo; Mathers, John C.; Migliore, Lucia
2013-01-01
There is increasing interest in the development of cost-effective techniques for the quantification of DNA methylation biomarkers. We analyzed 90 samples of surgically resected colorectal cancer tissue for APC and CDKN2A promoter methylation using methylation-sensitive high-resolution melting (MS-HRM) and pyrosequencing. MS-HRM is a less expensive technique than pyrosequencing but is usually more limited because it gives a range of methylation estimates rather than a single value. Here, we developed a method for deriving single estimates, rather than a range, of methylation using MS-HRM and compared the values obtained in this way with those obtained using the gold-standard quantitative method of pyrosequencing. We derived an interpolation curve using standards of known methylated/unmethylated ratio (0%, 12.5%, 25%, 50%, 75%, and 100% methylation) to obtain the best estimate of the extent of methylation for each of our samples. We observed similar profiles of methylation and a high correlation coefficient between the two techniques. Overall, our new approach allows MS-HRM to be used as a quantitative assay yielding results comparable with those obtained by pyrosequencing.
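A minimal sketch of the single-estimate idea: interpolate a sample's normalized MS-HRM signal on the curve built from the known-ratio standards. Signal values are illustrative; np.interp assumes the standards' signals increase monotonically.

```python
import numpy as np

std_methylation = np.array([0.0, 0.125, 0.25, 0.50, 0.75, 1.0])  # standards
std_signal = np.array([0.02, 0.14, 0.27, 0.52, 0.76, 0.98])      # illustrative

def methylation_from_signal(s):
    """Single methylation estimate via the interpolation curve."""
    return float(np.interp(s, std_signal, std_methylation))

print(methylation_from_signal(0.40))   # ~0.38, i.e. 38% methylated
```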
Quantitative estimation of pesticide-likeness for agrochemical discovery.
Avram, Sorin; Funar-Timofei, Simona; Borota, Ana; Chennamaneni, Sridhar Rao; Manchala, Anil Kumar; Muresan, Sorel
2014-12-01
The design of chemical libraries, an early step in agrochemical discovery programs, is frequently addressed by means of qualitative physicochemical and/or topological rule-based methods. The aim of this study is to develop quantitative estimates of herbicide- (QEH), insecticide- (QEI), fungicide- (QEF), and, finally, pesticide-likeness (QEP). In the assessment of these definitions, we relied on the concept of desirability functions. We found a simple function, shared by the three classes of pesticides and parameterized individually for six easy-to-compute, independent, and interpretable molecular properties: molecular weight, logP, number of hydrogen bond acceptors, number of hydrogen bond donors, number of rotatable bonds, and number of aromatic rings. Subsequently, we describe the scoring of each pesticide class by the corresponding quantitative estimate. In a comparative study, we assessed the performance of the scoring functions using extensive datasets of patented pesticides. The quantitative assessment established here can rank compounds whether or not they fail well-established pesticide-likeness rules, and offers an efficient way to prioritize (class-specific) pesticides. These findings are valuable for the efficient estimation of the pesticide-likeness of vast chemical libraries in agrochemical discovery.
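A minimal sketch of desirability-based scoring of this kind (the published functional form and fitted parameters are not reproduced; the bell-shaped desirabilities and all optima/widths below are stand-in assumptions):

```python
import numpy as np

def desirability(x, x_opt, width):
    """Bell-shaped desirability: 1 at the optimum, decaying with distance."""
    return np.exp(-0.5 * ((x - x_opt) / width) ** 2)

# Six properties: MW, logP, HBA, HBD, rotatable bonds, aromatic rings.
PARAMS = [(330.0, 120.0), (3.0, 1.5), (3.0, 2.0),
          (1.0, 1.0), (4.0, 3.0), (1.5, 1.0)]      # placeholder optima/widths

def quantitative_estimate(props):
    """Aggregate the six desirabilities by their geometric mean."""
    d = [desirability(p, opt, w) for p, (opt, w) in zip(props, PARAMS)]
    return float(np.exp(np.mean(np.log(d))))

print(quantitative_estimate([310.0, 2.8, 2.0, 1.0, 5.0, 2.0]))
```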
A statistical framework for protein quantitation in bottom-up MS-based proteomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas
2009-08-15
Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and confidence measures. Challenges include the presence of low-quality or incorrectly identified peptides and widespread, informative, missing data. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model for protein abundance in terms of peptide peak intensities, applicable to both label-based and label-free quantitation experiments. The model allows for both random and censoring missingness mechanisms and provides naturally for protein-level estimates and confidence measures. The model is also used to derive automated filtering and imputation routines. Three LC-MS datasets are used to illustrate the methods. Availability: The software has been made available in the open-source proteomics platform DAnTE (Polpitiya et al., 2008) (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu
Non-destructive evaluation of composite materials using ultrasound
NASA Technical Reports Server (NTRS)
Miller, J. G.
1984-01-01
Investigation of the nondestructive evaluation of advanced composite laminates is summarized. Indices derived from the measurement of fundamental acoustic parameters are used to quantitatively estimate the local material properties of the laminate. The following sections describe ongoing studies of phase-insensitive attenuation measurements and discuss several phenomena that influence the previously reported technique of polar backscatter. A simple and effective programmable gate circuit designed for use in estimating attenuation from backscatter is described.
A methodology was developed for deriving quantitative exposure criteria useful for comparing a site or a watershed to a reference condition and for defining the occurrence of extreme exposures. The prototype method used indicators of exposures to oil contamination and combustion ...
This paper looks at the impact of enforcement activity on facility-level behavior and derives quantitative estimates of the impact. We measure facility-level behavior as the levels of Biological Oxygen Demand (BOD) and Total Suspended Solids (TSS) pollutant discharges generated b...
Wycherley, Thomas; Ferguson, Megan; O'Dea, Kerin; McMahon, Emma; Liberato, Selma; Brimblecombe, Julie
2016-12-01
Determine how very-remote Indigenous community (RIC) food and beverage (F&B) turnover quantities and associated dietary intake estimates derived from stores only compare with values derived from all community F&B providers. F&B turnover quantities and associated dietary intake estimates (energy, micro/macronutrients and major contributing food types) were derived from 12 months of transaction data from all F&B providers in three RICs (NT, Australia). F&B turnover quantities and dietary intake estimates from stores only (plus from only the primary store in multiple-store communities) were expressed as a proportion of the complete F&B provider turnover values. Food types and macronutrient distribution (%E) estimates were quantitatively compared. Combined-store F&B turnover accounted for the majority of F&B quantity (98.1%) and absolute dietary intake estimates (energy [97.8%], macronutrients [≥96.7%] and micronutrients [≥83.8%]). Macronutrient distribution estimates from combined stores, and from the primary store only, closely aligned with the complete-provider estimates (≤0.9% absolute difference). Food types were similar whether derived from combined stores, the primary store, or all providers. Evaluating combined-store F&B turnover is an efficient way to estimate total F&B turnover quantity and associated dietary intake in RICs. In multiple-store communities, evaluating primary-store F&B turnover alone provides an efficient estimate of macronutrient distribution and major food types. © 2016 Public Health Association of Australia.
Unbiased estimation of oceanic mean rainfall from satellite borne radiometer measurements
NASA Technical Reports Server (NTRS)
Mittal, M. C.
1981-01-01
The statistical properties of radar-derived rainfall obtained during the GARP Atlantic Tropical Experiment (GATE) are used to derive quantitative estimates of the spatial and temporal sampling errors associated with estimating rainfall from brightness temperature measurements such as would be obtained from a satellite-borne microwave radiometer employing a practical antenna aperture. A basis is provided for a method of correcting the so-called beam-filling problem, i.e., the effect of nonuniformity of rainfall over the radiometer beamwidth. The method employs the statistical properties of the observations themselves, without physical assumptions beyond those of the radiative transfer model. The simulation results presented validate the estimation accuracy that can be achieved, and the included graphs permit evaluation of the effect of antenna resolution on both the temporal and spatial sampling errors.
NASA Astrophysics Data System (ADS)
Takegami, Shigehiko; Kitamura, Keisuke; Ohsugi, Mayuko; Ito, Aya; Kitade, Tatsuya
2015-06-01
In order to quantitatively examine the lipophilicity of the widely used organophosphorus pesticides (OPs) chlorfenvinphos (CFVP), chlorpyrifos-methyl (CPFM), diazinon (DZN), fenitrothion (FNT), fenthion (FT), isofenphos (IFP), profenofos (PFF) and pyraclofos (PCF), their partition coefficients (Kp) between phosphatidylcholine (PC) small unilamellar vesicles (SUVs) and water (the liposome-water system) were determined by second-derivative spectrophotometry. The second-derivative spectra of these OPs in the presence of PC SUVs showed a bathochromic shift with increasing PC concentration and distinct derivative isosbestic points, demonstrating the complete elimination of the residual background signal effects observed in the absorption spectra. The Kp values were calculated from the second-derivative intensity change induced by the addition of PC SUVs and were obtained with good precision (R.S.D. below 10%). The Kp values were in the order CPFM > FT > PFF > PCF > IFP > CFVP > FNT ⩾ DZN and did not correlate linearly with the reported partition coefficients obtained in an n-octanol-water system (R² = 0.530). The results also quantitatively clarified the effect of chemical-group substitution in OPs on their lipophilicity. Since the partition coefficient in a liposome-water system is more effective for modeling quantitative structure-activity relationships than that in an n-octanol-water system, the obtained results are toxicologically important for estimating the accumulation of these OPs in human cell membranes.
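A minimal sketch of how Kp can be extracted from such measurements: the second-derivative intensity change versus lipid concentration is fit to a saturable partition curve. The hyperbolic form and all numbers are assumptions for illustration, not the paper's exact treatment.

```python
import numpy as np
from scipy.optimize import curve_fit

def dd_change(pc_conc, dd_max, kp):
    """Second-derivative intensity change vs PC concentration (saturable)."""
    return dd_max * kp * pc_conc / (1.0 + kp * pc_conc)

pc = np.array([0.5, 1.0, 2.0, 4.0, 8.0]) * 1e-3   # mol/L PC, illustrative
dd = np.array([0.21, 0.35, 0.52, 0.66, 0.75])     # illustrative signal change

(dd_max_hat, kp_hat), _ = curve_fit(dd_change, pc, dd, p0=(1.0, 500.0))
print(kp_hat)   # partition coefficient estimate (L/mol in this formulation)
```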
NASA Astrophysics Data System (ADS)
Zidikheri, Meelis J.; Lucas, Christopher; Potts, Rodney J.
2017-08-01
Airborne volcanic ash is a hazard to aviation. There is an increasing demand for quantitative forecasts of ash properties such as ash mass load to allow airline operators to better manage the risks of flying through airspace likely to be contaminated by ash. In this paper we show how satellite-derived mass load information at times prior to the issuance of the latest forecast can be used to estimate various model parameters that are not easily obtained by other means such as the distribution of mass of the ash column at the volcano. This in turn leads to better forecasts of ash mass load. We demonstrate the efficacy of this approach using several case studies.
Beam wandering of femtosecond laser filament in air.
Yang, Jing; Zeng, Tao; Lin, Lie; Liu, Weiwei
2015-10-05
The spatial wandering of a femtosecond laser filament caused by the filament heating effect in air has been studied. An empirical formula has also been derived from the classical Karman turbulence model, which determines quantitatively the displacement of the beam center as a function of the propagation distance and the effective turbulence structure constant. After fitting the experimental data with this formula, the effective turbulence structure constant has been estimated for a single filament generated in laboratory environment. With this result, one may be able to estimate quantitatively the displacement of a filament over long distance propagation and interpret the practical performance of the experiments assisted by femtosecond laser filamentation, such as remote air lasing, pulse compression, high order harmonic generation (HHG), etc.
An open tool for input function estimation and quantification of dynamic PET FDG brain scans.
Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro
2016-08-01
Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. The main contribution of this article is the development of an open-source, free to use tool that encapsulates several well-known methods for the estimation of the input function and the quantification of dynamic PET FDG studies. Some alternative strategies are also proposed and implemented in the tool for the segmentation of blood pools and parameter estimation. The tool was tested on phantoms with encouraging results that suggest that even bloodless estimators could provide a viable alternative to blood sampling for quantification using graphical analysis. The open tool is a promising opportunity for collaboration among investigators and further validation on real studies.
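A minimal sketch of the graphical analysis mentioned above, in the Patlak form commonly used for FDG: once an input function is available (sampled, image-derived, or population-based), the influx constant is the slope of a straight-line fit. Inputs are placeholders.

```python
import numpy as np

def patlak_ki(t, c_tissue, c_plasma, t_star=10.0):
    """Influx constant Ki from a Patlak plot.

    t: frame mid-times (min); c_tissue/c_plasma: tissue and input-function
    activity on the same grid; only frames after t_star (quasi-steady
    state) enter the linear fit.
    """
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (c_plasma[1:] + c_plasma[:-1]) * np.diff(t))))   # trapezoids
    x = integral / c_plasma            # "Patlak time"
    y = c_tissue / c_plasma
    mask = t >= t_star
    ki, intercept = np.polyfit(x[mask], y[mask], deg=1)
    return ki
```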
Pulsar distances and the galactic distribution of free electrons
NASA Technical Reports Server (NTRS)
Taylor, J. H.; Cordes, J. M.
1993-01-01
The present quantitative model for Galactic free electron distribution abandons the assumption of axisymmetry and explicitly incorporates spiral arms; their shapes and locations are derived from existing radio and optical observations of H II regions. The Gum Nebula's dispersion-measure contributions are also explicitly modeled. Adjustable quantities are calibrated by reference to three different types of data. The new model is estimated to furnish distance estimates to known pulsars that are accurate to about 25 percent.
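A minimal sketch of why such a model matters: a pulsar's dispersion measure is the line-of-sight integral of n_e, so distance follows from the assumed electron density field. With a constant mean density (a crude stand-in for the paper's spiral-arm model) the estimate is a one-liner.

```python
# DM = integral of n_e dl, so under a constant mean density D = DM / n_e.
dm = 50.0         # pc cm^-3, illustrative dispersion measure
n_e_mean = 0.03   # cm^-3, a commonly quoted Galactic average (assumption)

print(dm / n_e_mean)   # ~1667 pc; the paper replaces n_e with spiral arms
```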
NASA Astrophysics Data System (ADS)
Robertson, K. M.; Milliken, R. E.; Li, S.
2016-10-01
Quantitative mineral abundances of lab derived clay-gypsum mixtures were estimated using a revised Hapke VIS-NIR and Shkuratov radiative transfer model. Montmorillonite-gypsum mixtures were used to test the effectiveness of the model in distinguishing between subtle differences in minor absorption features that are diagnostic of mineralogy in the presence of strong H2O absorptions that are not always diagnostic of distinct phases or mineral abundance. The optical constants (k-values) for both endmembers were determined from bi-directional reflectance spectra measured in RELAB as well as on an ASD FieldSpec3 in a controlled laboratory setting. Multiple size fractions were measured in order to derive a single k-value from optimization of the optical path length in the radiative transfer models. It is shown that with careful experimental conditions, optical constants can be accurately determined from powdered samples using a field spectrometer, consistent with previous studies. Variability in the montmorillonite hydration level increased the uncertainties in the derived k-values, but estimated modal abundances for the mixtures were still within 5% of the measured values. Results suggest that the Hapke model works well in distinguishing between hydrated phases that have overlapping H2O absorptions and it is able to detect gypsum and montmorillonite in these simple mixtures where they are present at levels of ∼10%. Care must be taken however to derive k-values from a sample with appropriate H2O content relative to the modeled spectra. These initial results are promising for the potential quantitative analysis of orbital remote sensing data of hydrated minerals, including more complex clay and sulfate assemblages such as mudstones examined by the Curiosity rover in Gale crater.
Lee, Vinson R.; Blew, Rob M.; Farr, Josh N.; Tomas, Rita; Lohman, Timothy G.; Going, Scott B.
2013-01-01
Objective Assess the utility of peripheral quantitative computed tomography (pQCT) for estimating whole body fat in adolescent girls. Research Methods and Procedures Our sample included 458 girls (aged 10.7 ± 1.1y, mean BMI = 18.5 ± 3.3 kg/m2) who had DXA scans for whole body percent fat (DXA %Fat). Soft tissue analysis of pQCT scans provided thigh and calf subcutaneous percent fat and thigh and calf muscle density (muscle fat content surrogates). Anthropometric variables included weight, height and BMI. Indices of maturity included age and maturity offset. The total sample was split into validation (VS; n = 304) and cross-validation (CS; n = 154) samples. Linear regression was used to develop prediction equations for estimating DXA %Fat from anthropometric variables and pQCT-derived soft tissue components in VS and the best prediction equation was applied to CS. Results Thigh and calf SFA %Fat were positively correlated with DXA %Fat (r = 0.84 to 0.85; p <0.001) and thigh and calf muscle densities were inversely related to DXA %Fat (r = −0.30 to −0.44; p < 0.001). The best equation for estimating %Fat included thigh and calf SFA %Fat and thigh and calf muscle density (adj. R2 = 0.90; SEE = 2.7%). Bland-Altman analysis in CS showed accurate estimates of percent fat (adj. R2 = 0.89; SEE = 2.7%) with no bias. Discussion Peripheral QCT derived indices of adiposity can be used to accurately estimate whole body percent fat in adolescent girls. PMID:25147482
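The calibration/cross-validation workflow described above follows a standard pattern; a minimal sketch with synthetic stand-ins for the pQCT predictors and the DXA outcome (coefficients and data are hypothetical, not the study's):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# X: thigh/calf subcutaneous %fat and muscle densities; y: DXA %Fat
X = rng.normal(size=(458, 4))
y = X @ np.array([4.0, 3.5, -1.0, -1.2]) + 25 + rng.normal(0, 2.7, 458)

X_vs, X_cs, y_vs, y_cs = train_test_split(X, y, test_size=154,
                                          random_state=1)
model = LinearRegression().fit(X_vs, y_vs)     # calibrate in VS (n = 304)
resid = y_cs - model.predict(X_cs)             # apply to CS (n = 154)
print("SEE:", resid.std(ddof=X.shape[1] + 1))  # cf. the reported 2.7%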
Ordinal Process Dissociation and the Measurement of Automatic and Controlled Processes
ERIC Educational Resources Information Center
Hirshman, Elliot
2004-01-01
The process-dissociation equations (L. Jacoby, 1991) have been applied to results from inclusion and exclusion tasks to derive quantitative estimates of the influence of controlled and automatic processes on memory. This research has provoked controversies (e.g., T. Curran & D. Hintzman, 1995) regarding the validity of specific assumptions…
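The equations themselves are short: with C the probability of controlled recollection and A that of automatic influence, P(inclusion) = C + A(1 − C) and P(exclusion) = A(1 − C), which solve directly for the two estimates. A worked sketch:

def process_dissociation(p_inclusion, p_exclusion):
    """Jacoby (1991): inclusion = C + A(1 - C); exclusion = A(1 - C).
    Returns the controlled (C) and automatic (A) estimates."""
    c = p_inclusion - p_exclusion
    a = p_exclusion / (1.0 - c)
    return c, a

print(process_dissociation(0.70, 0.20))   # C = 0.50, A = 0.40

The controversies cited above concern exactly the assumptions baked into these two lines, notably the independence of the controlled and automatic processes.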
USDA-ARS?s Scientific Manuscript database
Several organizations have developed prediction models for molecular breeding values (MBV) for quantitative growth and carcass traits in beef cattle using BovineSNP50 genotypes and phenotypic or EBV data. MBV for Angus cattle have been developed by IGENITY, Pfizer Animal Genetics, and a collaboratio...
NASA Astrophysics Data System (ADS)
Chen, Shichao; Zhu, Yizheng
2017-02-01
Sensitivity is a critical index of the temporal fluctuation of the retrieved optical pathlength in a quantitative phase imaging system. However, an accurate and comprehensive analysis for sensitivity evaluation is still lacking in the current literature. In particular, previous theoretical studies of fundamental sensitivity based on Gaussian noise models are not applicable to modern cameras and detectors, which are dominated by shot noise. In this paper, we derive two shot noise-limited theoretical sensitivities, the Cramér-Rao bound and the algorithmic sensitivity, for wavelength shifting interferometry, a major category of on-axis interferometry techniques in quantitative phase imaging. Based on the derivations, we show that the shot noise-limited model permits accurate estimation of theoretical sensitivities directly from measured data. These results can provide important insights into fundamental constraints on system performance and can be used to guide system design and optimization. The same concepts can be generalized to other quantitative phase imaging techniques as well.
Properties of an entropy-based signal receiver with an application to ultrasonic molecular imaging.
Hughes, M S; McCarthy, J E; Marsh, J N; Arbeit, J M; Neumann, R G; Fuhrhop, R W; Wallace, K D; Znidersic, D R; Maurizi, B N; Baldwin, S L; Lanza, G M; Wickline, S A
2007-06-01
Qualitative and quantitative properties of the finite part, H(f), of the Shannon entropy of a continuous waveform f(t) in the continuum limit are derived in order to illuminate its use for waveform characterization. Simple upper and lower bounds on H(f), based on features of f(t), are defined. Quantitative criteria for a priori estimation of the average-case variation of H(f) and log E(f), where E(f) is the signal energy of f(t), are also derived. These provide relative sensitivity estimates that could be used to prospectively choose optimal imaging strategies in real-time ultrasonic imaging machines, where system bandwidth is often pushed to its limits. To demonstrate the utility of these sensitivity relations for this application, a feasibility study was performed on the identification of angiogenic neovasculature targeted with perfluorocarbon nanoparticles that specifically bind to alpha(v)beta3-integrin expressed in tumors. The outcome of this study agrees with the prospective sensitivity estimates that were used for the two receivers. Moreover, these data demonstrate the ability of entropy-based signal receivers, when used in conjunction with targeted nanoparticles, to elucidate the presence of alpha(v)beta3 integrins in primordial neovasculature, particularly in acoustically unfavorable environments.
Development of an agricultural job-exposure matrix for British Columbia, Canada.
Wood, David; Astrakianakis, George; Lang, Barbara; Le, Nhu; Bert, Joel
2002-09-01
Farmers in British Columbia (BC), Canada have been shown to have unexplained elevated proportional mortality rates for several cancers. Because agricultural exposures have never been documented systematically in BC, a quantitative agricultural job-exposure matrix (JEM) was developed containing exposure assessments from 1950 to 1998. This JEM was developed to document historical exposures and to facilitate future epidemiological studies. Available information regarding BC farming practices was compiled and checklists of potential exposures were produced for each crop. Exposures identified included chemical, biological, and physical agents. Interviews with farmers and agricultural experts were conducted using the checklists as a starting point. This allowed the creation of an initial or 'potential' JEM based on three axes: exposure agent, 'type of work' and time. The 'type of work' axis was determined by combining several variables: region, crop, job title and task. This allowed for a complete description of exposures. Exposure assessments were made quantitatively, where data allowed, or by a dichotomous variable (exposed/unexposed). Quantitative calculations were divided into re-entry and application scenarios. 'Re-entry' exposures were quantified using a standard exposure model with some modification, while application exposure estimates were derived using data from the North American Pesticide Handlers Exposure Database (PHED). As expected, exposures differed between crops and job titles both quantitatively and qualitatively. Of the 290 agents included in the exposure axis, 180 were pesticides. Over 3000 estimates of exposure were conducted; 50% of these were quantitative. Each quantitative estimate was at the daily absorbed dose level. Exposure estimates were then rated as high, medium, or low by comparing them with their respective oral chemical reference dose (RfD) or Acceptable Daily Intake (ADI). These data were mainly obtained from the US Environmental Protection Agency (EPA) Integrated Risk Information System database. Of the quantitative estimates, 74% were rated as low (<100% of the RfD) and only 10% were rated as high (>500%). The JEM resulting from this study fills a void concerning exposures for BC farmers and farm workers. While only limited validation of the assessments was possible, this JEM can serve as a benchmark for future studies. Preliminary analysis at the BC Cancer Agency (BCCA) using the JEM with prostate cancer records from a large cancer and occupation study/survey has already shown promising results. Development of this JEM provides a useful model for developing historical quantitative exposure estimates where very little documented information is available.
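The rating rule reduces to a comparison of the estimated daily absorbed dose with its RfD or ADI; a minimal sketch (the 'medium' band for the 100-500% range is our reading of the abstract):

def rate_exposure(daily_dose, rfd):
    """Rate a daily absorbed dose against its reference dose (RfD/ADI).
    Thresholds follow the abstract: low < 100%, high > 500% of the RfD;
    'medium' for the band in between is an assumption."""
    pct = 100.0 * daily_dose / rfd
    if pct < 100.0:
        return "low"
    return "high" if pct > 500.0 else "medium"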
Takegami, Shigehiko; Kitamura, Keisuke; Ohsugi, Mayuko; Ito, Aya; Kitade, Tatsuya
2015-06-15
In order to quantitatively examine the lipophilicity of the widely used organophosphorus pesticides (OPs) chlorfenvinphos (CFVP), chlorpyrifos-methyl (CPFM), diazinon (DZN), fenitrothion (FNT), fenthion (FT), isofenphos (IFP), profenofos (PFF) and pyraclofos (PCF), their partition coefficient (Kp) values between phosphatidylcholine (PC) small unilamellar vesicles (SUVs) and water (liposome-water system) were determined by second-derivative spectrophotometry. The second-derivative spectra of these OPs in the presence of PC SUVs showed a bathochromic shift with increasing PC concentration and distinct derivative isosbestic points, demonstrating the complete elimination of the residual background signal effects that were observed in the absorption spectra. The Kp values were calculated from the second-derivative intensity change induced by addition of PC SUVs and were obtained with a good precision (R.S.D. below 10%). The Kp values were in the order CPFM > FT > PFF > PCF > IFP > CFVP > FNT ⩾ DZN and did not show a linear correlation with the reported partition coefficients obtained using an n-octanol-water system (R(2)=0.530). The results also quantitatively clarified the effect of chemical-group substitution in OPs on their lipophilicity. Since the partition coefficient for the liposome-water system is more effective for modeling quantitative structure-activity relationships than that for the n-octanol-water system, the obtained results are toxicologically important for estimating the accumulation of these OPs in human cell membranes. Copyright © 2015 Elsevier B.V. All rights reserved.
Mathematical modeling of tetrahydroimidazole benzodiazepine-1-one derivatives as an anti-HIV agent
NASA Astrophysics Data System (ADS)
Ojha, Lokendra Kumar
2017-07-01
The goal of the present work is the study of drug-receptor interaction via QSAR (Quantitative Structure-Activity Relationship) analysis for a set of 89 TIBO (Tetrahydroimidazole Benzodiazepine-1-one) derivatives. The MLR (Multiple Linear Regression) method is utilized to generate predictive models of quantitative structure-activity relationships between a set of molecular descriptors and biological activity (IC50). The best QSAR model was selected, having a correlation coefficient (r) of 0.9299, a Standard Error of Estimation (SEE) of 0.5022, a Fisher Ratio (F) of 159.822 and a Quality factor (Q) of 1.852. This model is statistically significant and strongly favours substitution of a sulphur atom at the -Z position of the TIBO derivatives, captured by the indicator parameter IS. Two other parameters, logP (octanol-water partition coefficient) and SAG (Surface Area Grid), also played a vital role in the generation of the best QSAR model. All three descriptors show very good stability towards data variation in leave-one-out (LOO) cross-validation.
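A minimal sketch of the MLR-plus-LOO workflow (the descriptor matrix below is synthetic; the real model uses IS, logP and SAG for the 89 analogues):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(42)
X = rng.normal(size=(89, 3))                  # stand-ins for IS, logP, SAG
y = X @ np.array([1.2, 0.8, 0.5]) + rng.normal(0, 0.5, 89)  # ~log(1/IC50)

model = LinearRegression().fit(X, y)
r = np.corrcoef(model.predict(X), y)[0, 1]    # correlation coefficient
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"r = {r:.4f}, LOO q2 = {q2:.4f}")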
El-Khatib, A H; He, Y; Esteban-Fernández, D; Linscheid, M W
2017-08-01
1,4,7,10-Tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) derivatives are applied in quantitative proteomics owing to their ability to react with different functional groups, to harbor lanthanoids and hence their compatibility with molecular and elemental mass spectrometry. The new DOTA derivatives, namely Ln-MeCAT-Click and Ln-DOTA-Dimedone, allow efficient thiol labeling and targeting sulfenation as an important post-translational modification, respectively. Quantitative applications require the investigation of the fragmentation behavior of these reagents. Therefore, the fragmentation behavior of Ln-MeCAT-Click and Ln-DOTA-Dimedone was studied using collision-induced dissociation (CID), infrared multiphoton dissociation (IRMPD) and higher-energy collision dissociation (HCD) at different energy levels, and the efficiency of reporter ion production was estimated. The efficiency of characteristic fragment formation was in the order IRMPD > HCD (normal energy level) > CID. On the other hand, the application of HCD at high energy levels (HCD@HE; NCE > 250%) resulted in a significant increase in reporter ion production (33-54%). This new strategy was successfully applied to generate label-specific reporter ions for DOTA amino labeling at the N-termini and, in a quantitative fashion, for the estimation of the amino:thiol ratio in peptides. Copyright © 2017 John Wiley & Sons, Ltd.
Camomilla, Valentina; Cereatti, Andrea; Cutti, Andrea Giovanni; Fantozzi, Silvia; Stagni, Rita; Vannozzi, Giuseppe
2017-08-18
Quantitative gait analysis can provide a description of joint kinematics and dynamics, and it is recognized as a clinically useful tool for functional assessment, diagnosis and intervention planning. Clinically interpretable parameters are estimated from quantitative measures (e.g. ground reaction forces, skin marker trajectories) through biomechanical modelling. In particular, the estimation of joint moments during motion is grounded on several modelling assumptions: (1) body segmental and joint kinematics is derived from the trajectories of markers and by modelling the human body as a kinematic chain; (2) joint resultant (net) loads are, usually, derived from force plate measurements through a model of segmental dynamics. Therefore, both measurement errors and modelling assumptions can affect the results, to an extent that also depends on the characteristics of the motor task analysed (e.g. gait speed). Errors affecting the trajectories of joint centres, the orientation of joint functional axes, the joint angular velocities, the accuracy of inertial parameters and force measurements (concurring to the definition of the dynamic model) can weigh differently in the estimation of clinically interpretable joint moments. Numerous studies have addressed all these methodological aspects separately, but a critical analysis of how these aspects may affect the clinical interpretation of joint dynamics is still missing. This article aims at filling this gap through a systematic review of the literature, conducted on Web of Science, Scopus and PubMed. The final objective is hence to provide clear take-home messages to guide laboratories in the estimation of joint moments for clinical practice.
The effect of respiratory induced density variations on non-TOF PET quantitation in the lung.
Holman, Beverley F; Cuplov, Vesna; Hutton, Brian F; Groves, Ashley M; Thielemans, Kris
2016-04-21
Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant (18)F-FDG and (18)F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.
Linking interseismic deformation with coseismic slip using dynamic rupture simulations
NASA Astrophysics Data System (ADS)
Yang, H.; He, B.; Weng, H.
2017-12-01
The largest earthquakes on Earth occur at subduction zones, sometimes accompanied by devastating tsunamis. Reducing losses from megathrust earthquakes and tsunamis demands accurate estimates of rupture scenarios for future earthquakes. The interseismic locking distribution derived from geodetic observations is often used to qualitatively evaluate future earthquake potential. However, how to quantitatively estimate the coseismic slip from the locking distribution remains challenging. Here we derive the coseismic rupture process of the 2012 Mw 7.6 Nicoya, Costa Rica, earthquake from the interseismic locking distribution using spontaneous rupture simulation. We construct a three-dimensional elastic medium with a curved fault, which is governed by the linear slip-weakening law. The initial stress on the fault is set based on the build-up stress inferred from locking and the dynamic friction coefficient from high-speed sliding experiments. Our numerical results for coseismic slip distribution, moment rate function and final earthquake moment are well consistent with those derived from seismic and geodetic observations. Furthermore, we find that the epicentral locations affect rupture scenarios and may lead to various sizes of earthquakes given the heterogeneous stress distribution. In the Nicoya region, less than half of rupture initiation regions where the locking degree is greater than 0.6 can develop into large earthquakes (Mw > 7.2). The results of location-dependent earthquake magnitudes underscore the necessity of conducting a large number of simulations to quantitatively evaluate seismic hazard from interseismic locking models.
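The linear slip-weakening law named here has a standard closed form: strength falls linearly from the static level to the dynamic level over a critical slip distance. A sketch:

def slip_weakening_strength(slip, tau_s, tau_d, d_c):
    """Linear slip-weakening friction: strength drops from the static
    level tau_s to the dynamic level tau_d as slip grows to the critical
    distance d_c, then stays at tau_d."""
    if slip >= d_c:
        return tau_d
    return tau_s - (tau_s - tau_d) * slip / d_c

Whether a nucleation patch grows into a large rupture then depends on how the locking-derived initial stress compares with these strength levels around the hypocentre, which is why the epicentral location matters in the simulations.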
Tiyip, Tashpolat; Ding, Jianli; Zhang, Dong; Liu, Wei; Wang, Fei; Tashpolat, Nigara
2017-01-01
Effective pretreatment of spectral reflectance is vital to model accuracy in soil parameter estimation. However, the classic integer derivative has some disadvantages, including spectral information loss and the introduction of high-frequency noise. In this paper, the fractional order derivative algorithm was applied to the pretreatment and partial least squares regression (PLSR) was used to assess the clay content of desert soils. Overall, 103 soil samples were collected from the Ebinur Lake basin in the Xinjiang Uighur Autonomous Region of China, and used as data sets for calibration and validation. Following laboratory measurements of spectral reflectance and clay content, the raw spectral reflectance and absorbance data were treated using fractional derivatives of order 0.0 to 2.0 (order interval: 0.2). The ratio of performance to deviation (RPD), coefficients of determination for calibration (Rc2), root mean square errors of calibration (RMSEC), coefficients of determination for prediction (Rp2), and root mean square errors of prediction (RMSEP) were applied to assess the performance of the predicting models. The results showed that models built on fractional derivative orders performed better than those using the classic integer derivative. Comparison of the predictive effects of 22 models for estimating clay content, calibrated by PLSR, showed that the models based on the fractional derivative 1.8 order of spectral reflectance (Rc2 = 0.907, RMSEC = 0.425%, Rp2 = 0.916, RMSEP = 0.364%, and RPD = 2.484 ≥ 2.000) and absorbance (Rc2 = 0.888, RMSEC = 0.446%, Rp2 = 0.918, RMSEP = 0.383% and RPD = 2.511 ≥ 2.000) were most effective. Furthermore, they performed well in quantitative estimations of the clay content of soils in the study area. PMID:28934274
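One standard discrete form of the fractional derivative is the Grünwald-Letnikov definition, which generalizes finite differences to non-integer order; a sketch (not necessarily the authors' exact implementation):

import numpy as np

def gl_fractional_derivative(y, alpha, h=1.0):
    """Grunwald-Letnikov fractional derivative of order alpha for a
    uniformly sampled signal y with step h; alpha = 1.0 recovers an
    ordinary first difference."""
    n = len(y)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                 # recursive binomial weights
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    d = np.array([np.dot(w[:i + 1], y[i::-1]) for i in range(n)])
    return d / h ** alpha

# e.g. the 1.8-order pretreatment behind the best PLSR models:
# refl_18 = gl_fractional_derivative(reflectance, 1.8)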
Snellings, André; Sagher, Oren; Anderson, David J; Aldridge, J Wayne
2009-10-01
The authors developed a wavelet-based measure for quantitative assessment of neural background activity during intraoperative neurophysiological recordings so that the boundaries of the subthalamic nucleus (STN) can be more easily localized for electrode implantation. Neural electrophysiological data were recorded in 14 patients (20 tracks and 275 individual recording sites) with dopamine-sensitive idiopathic Parkinson disease during the target localization portion of deep brain stimulator implantation surgery. During intraoperative recording, the STN was identified based on audio and visual monitoring of neural firing patterns, kinesthetic tests, and comparisons between neural behavior and the known characteristics of the target nucleus. The quantitative wavelet-based measure was applied offline using commercially available software to measure the magnitude of the neural background activity, and the results of this analysis were compared with the intraoperative conclusions. Wavelet-derived estimates were also compared with power spectral density measurements. The wavelet-derived background levels were significantly higher in regions encompassed by the clinically estimated boundaries of the STN than in the surrounding regions (STN, 225 +/- 61 microV; ventral to the STN, 112 +/- 32 microV; and dorsal to the STN, 136 +/- 66 microV). In every track, the absolute maximum magnitude was found within the clinically identified STN. The wavelet-derived background levels provided a more consistent index with less variability than measurements with power spectral density. Wavelet-derived background activity can be calculated quickly, does not require spike sorting, and can be used to identify the STN reliably with very little subjective interpretation required. This method may facilitate the rapid intraoperative identification of STN borders.
Snellings, André; Sagher, Oren; Anderson, David J.; Aldridge, J. Wayne
2016-01-01
Object A wavelet-based measure was developed to quantitatively assess neural background activity taken during surgical neurophysiological recordings to localize the boundaries of the subthalamic nucleus during target localization for deep brain stimulator implant surgery. Methods Neural electrophysiological data was recorded from 14 patients (20 tracks, n = 275 individual recording sites) with dopamine-sensitive idiopathic Parkinson’s disease during the target localization portion of deep brain stimulator implant surgery. During intraoperative recording the STN was identified based upon audio and visual monitoring of neural firing patterns, kinesthetic tests, and comparisons between neural behavior and known characteristics of the target nucleus. The quantitative wavelet-based measure was applied off-line using MATLAB software to measure the magnitude of the neural background activity, and the results of this analysis were compared to the intraoperative conclusions. Wavelet-derived estimates were compared to power spectral density measures. Results The wavelet-derived background levels were significantly higher in regions encompassed by the clinically estimated boundaries of the STN than in surrounding regions (STN: 225 ± 61 μV vs. ventral to STN: 112 ± 32 μV, and dorsal to STN: 136 ± 66 μV). In every track, the absolute maximum magnitude was found within the clinically identified STN. The wavelet-derived background levels provided a more consistent index with less variability than power spectral density. Conclusions The wavelet-derived background activity assessor can be calculated quickly, requires no spike sorting, and can be reliably used to identify the STN with very little subjective interpretation required. This method may facilitate rapid intraoperative identification of subthalamic nucleus borders. PMID:19344225
New service interface for River Forecasting Center derived quantitative precipitation estimates
Blodgett, David L.
2013-01-01
For more than a decade, the National Weather Service (NWS) River Forecast Centers (RFCs) have been estimating spatially distributed rainfall by applying quality-control procedures to radar-indicated rainfall estimates in the eastern United States and other best practices in the western United States to produce a national Quantitative Precipitation Estimate (QPE) (National Weather Service, 2013). The availability of archives of QPE information for analytical purposes has been limited to manual requests for access to raw binary file formats that are difficult for scientists who are not in the climatic sciences to work with. The NWS provided the QPE archives to the U.S. Geological Survey (USGS), and the contents of the real-time feed from the RFCs are being saved by the USGS for incorporation into the archives. The USGS has applied time-series aggregation and added latitude-longitude coordinate variables to publish the RFC QPE data. Web services provide users with direct (index-based) data access, rendered visualizations of the data, and resampled raster representations of the source data in common geographic information formats.
NASA Astrophysics Data System (ADS)
Saetchnikov, Anton; Skakun, Victor; Saetchnikov, Vladimir; Tcherniavskaia, Elina; Ostendorf, Andreas
2017-10-01
An approach for automated whispering gallery mode (WGM) signal decomposition and parameter estimation is discussed. The algorithm is based on peak picking and can be applied to the preprocessing of the raw signal acquired from multiplexed WGM-based biosensing chips. The output values are quantitative estimates representing physically meaningful parameters of the external disturbing factors on the WGM spectral shape. The derived parameters can be directly applied to further deep qualitative and quantitative interpretation of the sensed disturbing factors. The algorithm is tested on both simulated and experimental data taken from a bovine serum albumin biosensing task. The proposed solution is expected to be a useful contribution to the preprocessing phase of a complete data analysis engine and to push the WGM technology toward real-life sensing nanobiophotonics.
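A minimal sketch of the peak-picking step on a WGM transmission trace (parameter values are illustrative; resonances appear as dips, hence the inverted trace):

import numpy as np
from scipy.signal import find_peaks

def wgm_resonances(wavelength, transmission, prominence=0.05):
    """Locate WGM resonance dips and return their wavelengths, depths
    and linewidths (widths in samples)."""
    idx, props = find_peaks(-np.asarray(transmission),
                            prominence=prominence, width=1)
    return np.asarray(wavelength)[idx], props["prominences"], props["widths"]

Tracking the returned wavelengths and linewidths over time yields the physically meaningful parameters (resonance shift, broadening) that the decomposition feeds to the downstream interpretation.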
A mathematical function for the description of nutrient-response curve
Ahmadi, Hamed
2017-01-01
Several mathematical equations have been proposed to model the nutrient-response curve for animals and humans, justified by goodness of fit and/or by the biological mechanism. In this paper, a functional form of a generalized quantitative model based on the Rayleigh distribution principle for the description of nutrient-response phenomena is derived. The three parameters governing the curve a) have biological interpretations, b) may be used to calculate reliable estimates of nutrient-response relationships, and c) provide the basis for deriving relationships between nutrient and physiological responses. The new function was successfully applied to fit nutritional data obtained from 6 experiments covering a wide range of nutrients and responses. An evaluation and comparison were also done on simulated data sets to check the suitability of the new model and the four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple and flexible model when applied as a quantitative approach to characterizing the nutrient-response curve. This new mathematical way to describe nutritional-response data, with some useful biological interpretations, has the potential to be used as an alternative approach in modeling nutritional response curves to estimate nutrient efficiency and requirements. PMID:29161271
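The abstract does not reproduce the function itself; one plausible three-parameter saturating form built on the Rayleigh cumulative distribution (an illustrative assumption, not necessarily the authors' exact equation) is

y(x) = a + b\left[1 - \exp\!\left(-\frac{x^{2}}{2c^{2}}\right)\right],

where a is the basal response, a + b the asymptotic response, and c sets the dose scale.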
Henshall, John M; Dierens, Leanne; Sellars, Melony J
2014-09-02
While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are sufficiently accurate to provide useful information for a breeding program. Treating genotypes as quantitative values is an alternative to perturbing genotypes using an assumed error distribution, but can produce very different results. An understanding of the distribution of the error is required for SNP genotyping platforms.
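Treating genotypes as quantitative dosages makes the pooled-sample calculation one line; a sketch (a 0-2 dosage coding of the B allele is assumed):

import numpy as np

def allele_freq_from_dosages(dosages):
    """B-allele frequency at one SNP from quantitative genotypes
    (expected allele dosages in [0, 2]); works identically for a set
    of individuals or for a pooled DNA sample."""
    return np.nanmean(np.asarray(dosages, dtype=float)) / 2.0

# e.g. a pool equivalent to genotypes AA, AB, BB, AB:
print(allele_freq_from_dosages([0.02, 0.97, 1.99, 1.01]))   # ~0.5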
Hormuth, David A; Skinner, Jack T; Does, Mark D; Yankeelov, Thomas E
2014-05-01
Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) can quantitatively and qualitatively assess physiological characteristics of tissue. Quantitative DCE-MRI requires an estimate of the time rate of change of the concentration of the contrast agent in the blood plasma, the vascular input function (VIF). Measuring the VIF in small animals is notoriously difficult as it requires high temporal resolution images limiting the achievable number of slices, field-of-view, spatial resolution, and signal-to-noise. Alternatively, a population-averaged VIF could be used to mitigate the acquisition demands in studies aimed to investigate, for example, tumor vascular characteristics. Thus, the overall goal of this manuscript is to determine how the kinetic parameters estimated by a population based VIF differ from those estimated by an individual VIF. Eight rats bearing gliomas were imaged before, during, and after an injection of Gd-DTPA. K(trans), ve, and vp were extracted from signal-time curves of tumor tissue using both individual and population-averaged VIFs. Extended model voxel estimates of K(trans) and ve in all animals had concordance correlation coefficients (CCC) ranging from 0.69 to 0.98 and Pearson correlation coefficients (PCC) ranging from 0.70 to 0.99. Additionally, standard model estimates resulted in CCCs ranging from 0.81 to 0.99 and PCCs ranging from 0.98 to 1.00, supporting the use of a population based VIF if an individual VIF is not available. Copyright © 2014 Elsevier Inc. All rights reserved.
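The extended model referred to here is commonly the extended Tofts model; a sketch of the forward model that is fitted voxelwise against either the individual or the population-averaged VIF (uniform temporal sampling assumed):

import numpy as np

def extended_tofts(t, c_p, ktrans, v_e, v_p):
    """Ct(t) = vp*Cp(t) + Ktrans * int_0^t Cp(u) exp(-kep*(t-u)) du,
    with kep = Ktrans / ve; t and c_p are the time grid and the VIF."""
    dt = t[1] - t[0]
    kernel = np.exp(-(ktrans / v_e) * t)
    conv = np.convolve(c_p, kernel)[: len(t)] * dt
    return v_p * c_p + ktrans * conv

# per-voxel fits of (ktrans, ve, vp) can then use e.g.
# scipy.optimize.curve_fit against the measured tissue curves.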
Goudet, Jérôme; Büchi, Lucie
2006-02-01
To test whether quantitative traits are under directional or homogenizing selection, it is common practice to compare population differentiation estimates at molecular markers (F(ST)) and quantitative traits (Q(ST)). If the trait is neutral and its determinism is additive, then theory predicts that Q(ST) = F(ST), while Q(ST) > F(ST) is predicted under directional selection for different local optima, and Q(ST) < F(ST) is predicted under homogenizing selection. However, nonadditive effects can alter these predictions. Here, we investigate the influence of dominance on the relation between Q(ST) and F(ST) for neutral traits. Using analytical results and computer simulations, we show that dominance generally deflates Q(ST) relative to F(ST). Under inbreeding, the effect of dominance vanishes, and we show that for selfing species, a better estimate of Q(ST) is obtained from selfed families than from half-sib families. We also compare several sampling designs and find that it is always best to sample many populations (>20) with few families (five) rather than few populations with many families. Provided that estimates of Q(ST) are derived from individuals originating from many populations, we conclude that the pattern Q(ST) > F(ST), and hence the inference of directional selection for different local optima, is robust to the effect of nonadditive gene actions.
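The comparison rests on the standard definition

Q_{ST} \;=\; \frac{\sigma_{GB}^{2}}{\sigma_{GB}^{2} + 2\,\sigma_{GW}^{2}},

where \sigma_{GB}^{2} and \sigma_{GW}^{2} are the between- and within-population additive genetic variances; for a neutral, purely additive trait E[Q_{ST}] equals F_{ST}. Dominance perturbs the variance components entering this ratio, which is how it generally deflates Q_{ST} relative to F_{ST}.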
Lightning charge moment changes estimated by high speed photometric observations from ISS
NASA Astrophysics Data System (ADS)
Hobara, Y.; Kono, S.; Suzuki, K.; Sato, M.; Takahashi, Y.; Adachi, T.; Ushio, T.; Suzuki, M.
2017-12-01
Optical observations with CCD cameras on orbiting satellites are generally used to derive the spatio-temporal global distributions of CGs and ICs. However, electrical properties of lightning, such as the peak current and the lightning charge, are difficult to obtain from space. In particular, CGs with considerably large lightning charge moment changes (CMC) and peak currents are crucial to generating red sprites and elves, respectively, so it would be useful to obtain these parameters from space. In this paper, we obtained lightning optical signatures using high speed photometric observations from the International Space Station GLIMS (Global Lightning and Sprite Measurements, JEM-EF) mission. These optical signatures were compared quantitatively with radio signatures, taken as ground truth, derived from ELF electromagnetic wave observations on the ground to verify the accuracy of the optically derived values. High correlation (R > 0.9) was obtained between lightning optical irradiance and current moment, and a quantitative relational expression between these two parameters was derived. Rather high correlation (R > 0.7) was also obtained between the integrated irradiance and the lightning CMC. Our results indicate the possibility of deriving lightning electrical properties (current moment and CMC) from optical measurements from space. Moreover, we hope that these results will also contribute to the forthcoming French microsatellite mission TARANIS.
Investigation of BOLD fMRI Resonance Frequency Shifts and Quantitative Susceptibility Changes at 7 T
Bianciardi, Marta; van Gelderen, Peter; Duyn, Jeff H.
2013-01-01
Although blood oxygenation level dependent (BOLD) functional magnetic resonance imaging (fMRI) experiments of brain activity generally rely on the magnitude of the signal, they also provide frequency information that can be derived from the phase of the signal. However, because of confounding effects of instrumental and physiological origin, BOLD related frequency information is difficult to extract and therefore rarely used. Here, we explored the use of high field (7 T) and dedicated signal processing methods to extract frequency information and use it to quantify and interpret blood oxygenation and blood volume changes. We found that optimized preprocessing improves detection of task-evoked and spontaneous changes in phase signals and resonance frequency shifts over large areas of the cortex with sensitivity comparable to that of magnitude signals. Moreover, our results suggest the feasibility of mapping BOLD quantitative susceptibility changes in at least part of the activated area and its largest draining veins. Comparison with magnitude data suggests that the observed susceptibility changes originate from neuronal activity through induced blood volume and oxygenation changes in pial and intracortical veins. Further, from frequency shifts and susceptibility values, we estimated that, relative to baseline, the fractional oxygen saturation in large vessels increased by 0.02–0.05 during stimulation, which is consistent with previously published estimates. Together, these findings demonstrate that valuable information can be derived from fMRI imaging of BOLD frequency shifts and quantitative susceptibility changes. PMID:23897623
Background controlled QTL mapping in pure-line genetic populations derived from four-way crosses
Zhang, S; Meng, L; Wang, J; Zhang, L
2017-01-01
Pure lines derived from multiple parents are becoming more important because of the increased genetic diversity, the possibility to conduct replicated phenotyping trials in multiple environments and potentially high mapping resolution of quantitative trait loci (QTL). In this study, we proposed a new mapping method for QTL detection in pure-line populations derived from four-way crosses, which is able to control the background genetic variation through a two-stage mapping strategy. First, orthogonal variables were created for each marker and used in an inclusive linear model, so as to completely absorb the genetic variation in the mapping population. Second, inclusive composite interval mapping approach was implemented for one-dimensional scanning, during which the inclusive linear model was employed to control the background variation. Simulation studies using different genetic models demonstrated that the new method is efficient when considering high detection power, low false discovery rate and high accuracy in estimating quantitative trait loci locations and effects. For illustration, the proposed method was applied in a reported wheat four-way recombinant inbred line population. PMID:28722705
Use of epidemiologic data in Integrated Risk Information System (IRIS) assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Persad, Amanda S.; Cooper, Glinda S.
2008-11-15
In human health risk assessment, information from epidemiologic studies is typically utilized in the hazard identification step of the risk assessment paradigm. However, in the assessment of many chemicals by the Integrated Risk Information System (IRIS), epidemiologic data, both observational and experimental, have also been used in the derivation of toxicological risk estimates (i.e., reference doses [RfD], reference concentrations [RfC], oral cancer slope factors [CSF] and inhalation unit risks [IUR]). Of the 545 health assessments posted on the IRIS database as of June 2007, 44 assessments derived non-cancer or cancer risk estimates based on human data. RfD and RfC calculations were based on a spectrum of endpoints from changes in enzyme activity to specific neurological or dermal effects. There are 12 assessments with IURs based on human data, two assessments that extrapolated human inhalation data to derive CSFs and one that used human data to directly derive a CSF. Lung or respiratory cancer is the most common endpoint for cancer assessments based on human data. To date, only one chemical, benzene, has utilized human data for derivation of all three quantitative risk estimates (i.e., RfC, RfD, and dose-response modeling for cancer assessment). Through examples from the IRIS database, this paper will demonstrate how epidemiologic data have been used in IRIS assessments for both adding to the body of evidence in the hazard identification process and in the quantification of risk estimates in the dose-response component of the risk assessment paradigm.
Detecting Anisotropic Inclusions Through EIT
NASA Astrophysics Data System (ADS)
Cristina, Jan; Päivärinta, Lassi
2017-12-01
We study the evolution equation $\partial_t u = -\Lambda_t u$, where $\Lambda_t$ is the Dirichlet-Neumann operator of a decreasing family of Riemannian manifolds with boundary $\Sigma_t$. We derive a lower bound for the solution of such an equation, and apply it to a quantitative density estimate for the restriction of harmonic functions on $M = \Sigma_0$ to the boundaries $\partial \Sigma_t$. Consequently we are able to derive a lower bound for the difference of the Dirichlet-Neumann maps in terms of the difference of a background metric $g$ and an inclusion metric $g + \chi_\Sigma (h - g)$ on a manifold $M$.
Influence of mom and dad: quantitative genetic models for maternal effects and genomic imprinting.
Santure, Anna W; Spencer, Hamish G
2006-08-01
The expression of an imprinted gene is dependent on the sex of the parent it was inherited from, and as a result reciprocal heterozygotes may display different phenotypes. In contrast, maternal genetic terms arise when the phenotype of an offspring is influenced by the phenotype of its mother beyond the direct inheritance of alleles. Both maternal effects and imprinting may contribute to resemblance between offspring of the same mother. We demonstrate that two standard quantitative genetic models for deriving breeding values, population variances and covariances between relatives, are not equivalent when maternal genetic effects and imprinting are acting. Maternal and imprinting effects introduce both sex-dependent and generation-dependent effects that result in differences in the way additive and dominance effects are defined for the two approaches. We use a simple example to demonstrate that both imprinting and maternal genetic effects add extra terms to covariances between relatives and that model misspecification may over- or underestimate true covariances or lead to extremely variable parameter estimation. Thus, an understanding of various forms of parental effects is essential in correctly estimating quantitative genetic variance components.
Linkage disequilibrium interval mapping of quantitative trait loci.
Boitard, Simon; Abdallah, Jihad; de Rochambeau, Hubert; Cierco-Ayrolles, Christine; Mangin, Brigitte
2006-03-16
For many years gene mapping studies have been performed through linkage analyses based on pedigree data. Recently, linkage disequilibrium methods based on unrelated individuals have been advocated as powerful tools to refine estimates of gene location. Many strategies have been proposed to deal with simply inherited disease traits. However, locating quantitative trait loci is statistically more challenging and considerable research is needed to provide robust and computationally efficient methods. Under a three-locus Wright-Fisher model, we derived approximate expressions for the expected haplotype frequencies in a population. We considered haplotypes comprising one trait locus and two flanking markers. Using these theoretical expressions, we built a likelihood-maximization method, called HAPim, for estimating the location of a quantitative trait locus. For each postulated position, the method only requires information from the two flanking markers. Over a wide range of simulation scenarios it was found to be more accurate than a two-marker composite likelihood method. It also performed as well as identity by descent methods, whilst being valuable in a wider range of populations. Our method makes efficient use of marker information, and can be valuable for fine mapping purposes. Its performance is increased if multiallelic markers are available. Several improvements can be developed to account for more complex evolution scenarios or provide robust confidence intervals for the location estimates.
NASA Astrophysics Data System (ADS)
Marra, Francesco; Morin, Efrat
2018-02-01
Small scale rainfall variability is a key factor driving runoff response in fast responding systems, such as mountainous, urban and arid catchments. In this paper, the spatial-temporal autocorrelation structure of convective rainfall is derived with extremely high resolutions (60 m, 1 min) using estimates from an X-Band weather radar recently installed in a semiarid-arid area. The 2-dimensional spatial autocorrelation of convective rainfall fields and the temporal autocorrelation of point-wise and distributed rainfall fields are examined. The autocorrelation structures are characterized by spatial anisotropy, correlation distances 1.5-2.8 km and rarely exceeding 5 km, and time-correlation distances 1.8-6.4 min and rarely exceeding 10 min. The observed spatial variability is expected to negatively affect estimates from rain gauges and microwave links rather than satellite and C-/S-Band radars; conversely, the temporal variability is expected to negatively affect remote sensing estimates rather than rain gauges. The presented results provide quantitative information for stochastic weather generators, cloud-resolving models, dryland hydrologic and agricultural models, and multi-sensor merging techniques.
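A sketch of how a correlation distance can be extracted from a gridded rain field (FFT-based autocorrelation on the 60 m grid; the 1/e threshold and the isotropic radial average are our simplifications of the anisotropic analysis):

import numpy as np

def correlation_distance(field, dx=0.06, thresh=np.exp(-1)):
    """Radius (km) at which the radially averaged autocorrelation of a
    2-D rain field on a dx-km grid first drops below thresh."""
    f = field - field.mean()
    acf = np.fft.ifft2(np.abs(np.fft.fft2(f)) ** 2).real
    acf = np.fft.fftshift(acf) / acf.max()      # 1 at zero lag
    cy, cx = np.array(acf.shape) // 2
    yy, xx = np.indices(acf.shape)
    r = np.hypot(yy - cy, xx - cx) * dx
    for radius in np.arange(dx, r.max(), dx):   # radial average
        ring = acf[(r >= radius - dx / 2) & (r < radius + dx / 2)]
        if ring.size and ring.mean() < thresh:
            return radius
    return r.max()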
Rayne, Sierra; Forest, Kaya; Friesen, Ken J
2009-08-01
A quantitative structure-activity model has been validated for estimating congener specific gas-phase hydroxyl radical reaction rates for perfluoroalkyl sulfonic acids (PFSAs), carboxylic acids (PFCAs), aldehydes (PFAls) and dihydrates, fluorotelomer olefins (FTOls), alcohols (FTOHs), aldehydes (FTAls), and acids (FTAcs), and sulfonamides (SAs), sulfonamidoethanols (SEs), and sulfonamido carboxylic acids (SAAs), and their alkylated derivatives based on calculated semi-empirical PM6 method ionization potentials. Corresponding gas-phase reaction rates with nitrate radicals and ozone have also been estimated using the computationally derived ionization potentials. Henry's law constants for these classes of perfluorinated compounds also appear to be reasonably approximated by the SPARC software program, thereby allowing estimation of wet and dry atmospheric deposition rates. Both congener specific gas-phase atmospheric and air-water interface fractionation of these compounds is expected, complicating current source apportionment perspectives and necessitating integration of such differential partitioning influences into future multimedia models. The findings will allow development and refinement of more accurate and detailed local through global scale atmospheric models for the atmospheric fate of perfluoroalkyl compounds.
Friesen, Melissa C; Bassig, Bryan A; Vermeulen, Roel; Shu, Xiao-Ou; Purdue, Mark P; Stewart, Patricia A; Xiang, Yong-Bing; Chow, Wong-Ho; Ji, Bu-Tian; Yang, Gong; Linet, Martha S; Hu, Wei; Gao, Yu-Tang; Zheng, Wei; Rothman, Nathaniel; Lan, Qing
2017-01-01
To provide insight into the contributions of exposure measurements to job exposure matrices (JEMs), we examined the robustness of an association between occupational benzene exposure and non-Hodgkin lymphoma (NHL) to varying exposure assessment methods. NHL risk was examined in a prospective population-based cohort of 73087 women in Shanghai. A mixed-effects model that combined a benzene JEM with >60000 short-term, area benzene inspection measurements was used to derive two sets of measurement-based benzene estimates: 'job/industry-specific' estimates (our presumed best approach) were derived from the model's fixed effects (year, JEM intensity rating) and random effects (occupation, industry); 'calibrated JEM' estimates were derived using only the fixed effects. 'Uncalibrated JEM' (using the ordinal JEM ratings) and exposure duration estimates were also calculated. Cumulative exposure for each subject was calculated for each approach based on varying exposure definitions defined using the JEM's probability ratings. We examined the agreement between the cumulative metrics and evaluated changes in the benzene-NHL associations. For our primary exposure definition, the job/industry-specific estimates were moderately to highly correlated with all other approaches (Pearson correlation 0.61-0.89; Spearman correlation > 0.99). All these metrics resulted in statistically significant exposure-response associations for NHL, with negligible gain in model fit from using measurement-based estimates. Using more sensitive or specific exposure definitions resulted in elevated but non-significant associations. The robust associations observed here with varying benzene assessment methods provide support for a benzene-NHL association. While incorporating exposure measurements did not improve model fit, the measurements allowed us to derive quantitative exposure-response curves. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
Danielson, Michelle E.; Beck, Thomas J.; Karlamangla, Arun S.; Greendale, Gail A.; Atkinson, Elizabeth J.; Lian, Yinjuan; Khaled, Alia S.; Keaveny, Tony M.; Kopperdahl, David; Ruppert, Kristine; Greenspan, Susan; Vuga, Marike; Cauley, Jane A.
2013-01-01
Purpose Simple 2-dimensional (2D) analyses of bone strength can be done with dual energy x-ray absorptiometry (DXA) data and applied to large data sets. We compared 2D analyses to 3-dimensional (3D) finite element analyses (FEA) based on quantitative computed tomography (QCT) data. Methods 213 women participating in the Study of Women’s Health across the Nation (SWAN) received hip DXA and QCT scans. DXA BMD and femoral neck diameter and axis length were used to estimate geometry for composite bending (BSI) and compressive strength (CSI) indices. These and comparable indices computed by Hip Structure Analysis (HSA) on the same DXA data were compared to indices using QCT geometry. Simple 2D engineering simulations of a fall impacting on the greater trochanter were generated using HSA and QCT femoral neck geometry; these estimates were benchmarked to a 3D FEA of fall impact. Results DXA-derived CSI and BSI computed from BMD and by HSA correlated well with each other (R= 0.92 and 0.70) and with QCT-derived indices (R= 0.83–0.85 and 0.65–0.72). The 2D strength estimate using HSA geometry correlated well with that from QCT (R=0.76) and with the 3D FEA estimate (R=0.56). Conclusions Femoral neck geometry computed by HSA from DXA data corresponds well enough to that from QCT for an analysis of load stress in the larger SWAN data set. Geometry derived from BMD data performed nearly as well. Proximal femur breaking strength estimated from 2D DXA data is not as well correlated with that derived by a 3D FEA using QCT data. PMID:22810918
NASA Astrophysics Data System (ADS)
Kirstetter, P.; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Petersen, W. A.
2011-12-01
Proper characterization of the error structure of TRMM Precipitation Radar (PR) quantitative precipitation estimation (QPE) is needed for its use in TRMM combined products, water budget studies and hydrological modeling applications. Due to the variety of sources of error in spaceborne radar QPE (attenuation of the radar signal, influence of land surface, impact of off-nadir viewing angle, etc.) and the impact of correction algorithms, the problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements (GV) using NOAA/NSSL's National Mosaic QPE (NMQ) system. An investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) on the basis of a 3-month-long data sample. A significant effort has been made to derive a bias-corrected, robust reference rainfall source from NMQ. The GV processing details will be presented along with preliminary results of PR's error characteristics using contingency table statistics, probability distribution comparisons, scatter plots, semi-variograms, and systematic biases and random errors.
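The contingency-table statistics named here reduce to counts of hits, misses and false alarms at a rain/no-rain threshold; a sketch (the 0.1 mm/h threshold is illustrative):

import numpy as np

def contingency_scores(pr, gv, thresh=0.1):
    """POD, FAR and CSI of PR rain detection against the NMQ ground
    reference (both in mm/h, on matched grids)."""
    hits = np.sum((pr >= thresh) & (gv >= thresh))
    misses = np.sum((pr < thresh) & (gv >= thresh))
    false_alarms = np.sum((pr >= thresh) & (gv < thresh))
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi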
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-09
... External Review Draft of the Guidance for Applying Quantitative Data To Develop Data-Derived Extrapolation Factors for Interspecies and Intraspecies Extrapolation...
Estimation of fatigue strength enhancement for carburized and shot-peened gears
NASA Astrophysics Data System (ADS)
Inoue, Katsumi; Kato, Masana
1994-05-01
An experimental formula has been proposed to estimate the bending fatigue strength of carburized gears from the hardness and the residual stress. The derivation of the formula is briefly reviewed, and the effectiveness of the formula is demonstrated in this article. Comparison with many test results for carburized and shot-peened gears verifies that the formula is effective for approximate estimation of the fatigue strength. The formula quantitatively shows a way of enhancing fatigue strength, i.e., increasing the hardness and the residual stress at the fillet. The strength is enhanced by about 300 MPa by appropriate shot peening, and it can be improved still further by surface removal by electropolishing.
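The abstract reports the magnitude of the enhancement but not the formula itself; as an illustration of the stated structure (a strength estimate driven by fillet hardness and residual stress), a generic linear form would read

\sigma_{F} \;\approx\; a\,\mathrm{HV} + b\,\sigma_{r} + c,

with HV the fillet hardness, \sigma_{r} the residual stress, and a, b, c constants fitted to bending-fatigue tests; this form and its coefficients are assumptions for illustration, not the authors' published expression.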
Quantitative imaging of peripheral trabecular bone microarchitecture using MDCT.
Chen, Cheng; Zhang, Xiaoliu; Guo, Junfeng; Jin, Dakai; Letuchy, Elena M; Burns, Trudy L; Levy, Steven M; Hoffman, Eric A; Saha, Punam K
2018-01-01
Osteoporosis associated with reduced bone mineral density (BMD) and microarchitectural changes puts patients at an elevated risk of fracture. Modern multidetector row CT (MDCT) technology, producing high spatial resolution at increasingly lower dose radiation, is emerging as a viable modality for trabecular bone (Tb) imaging. Wide variation in CT scanners raises concerns of data uniformity in multisite and longitudinal studies. A comprehensive cadaveric study was performed to evaluate MDCT-derived Tb microarchitectural measures. A human pilot study was performed comparing continuity of Tb measures estimated from two MDCT scanners with significantly different image resolution features. Micro-CT imaging of cadaveric ankle specimens (n = 25) was used to examine the validity of MDCT-derived Tb microarchitectural measures. Repeat scan reproducibility of MDCT-based Tb measures and their ability to predict mechanical properties were examined. To assess multiscanner data continuity of Tb measures, the distal tibias of 20 volunteers (age: 26.2 ± 4.5 y, 10 female) were scanned using the Siemens SOMATOM Definition Flash and the higher resolution Siemens SOMATOM Force scanners with an average 45-day time gap between scans. The correlation of Tb measures derived from the two scanners over 30% and 60% peel regions at the 4% to 8% sites of the distal tibia was analyzed. MDCT-based Tb measures characterizing bone network area density, plate-rod microarchitecture, and transverse trabeculae showed good correlations (r ∈ [0.85, 0.92]) with the gold standard micro-CT-derived values of matching Tb measures. However, other MDCT-derived Tb measures characterizing trabecular thickness and separation, erosion index, and structure model index produced weak correlations (r < 0.8) with their micro-CT-derived values. Most MDCT Tb measures were found repeatable (ICC ∈ [0.94, 0.98]). The Tb plate-width measure showed a strong correlation (r = 0.89) with experimental yield stress, while the transverse trabecular measure produced the highest correlation (r = 0.81) with Young's modulus. The data continuity experiment showed that, despite significant differences in image resolution between the two scanners (10% MTF along the xy-plane and z-direction - Flash: 16.2 and 17.9 lp/cm; Force: 24.8 and 21.0 lp/cm), most Tb measures had high Pearson correlations (r > 0.95) between values estimated from the two scanners. Relatively lower correlation coefficients were observed for the bone network area density (r = 0.91) and Tb separation (r = 0.93) measures. Most MDCT-derived Tb microarchitectural measures are reproducible and their values derived from two scanners strongly correlate with each other as well as with bone strength. This study has highlighted those MDCT-derived measures which show the greatest promise for characterization of bone network area density, plate-rod and transverse trabecular distributions, with a good correlation (r ≥ 0.85) compared with their micro-CT-derived values. At the same time, other measures representing trabecular thickness and separation, erosion index, and structure model index produced weak correlations (r < 0.8) with their micro-CT-derived values, failing to accurately portray the projected trabecular microarchitectural features. Strong correlations of Tb measures estimated from two scanners suggest that image data from different scanners can be used successfully in multisite and longitudinal studies, with linear calibration required for some measures.
In summary, modern MDCT scanners are suitable for effective quantitative imaging of peripheral Tb microarchitecture if care is taken to focus on appropriate quantitative metrics. © 2017 American Association of Physicists in Medicine.
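The cross-scanner continuity check described above reduces, per measure, to a Pearson correlation plus a straight-line calibration fit. A minimal sketch in Python, assuming paired per-subject values of one Tb measure from the two scanners (function and variable names are illustrative, not from the study's pipeline):

```python
import numpy as np
from scipy import stats

def scanner_continuity(flash_vals, force_vals):
    """Correlate one Tb measure across two scanners and fit a linear calibration.

    flash_vals, force_vals: 1-D arrays of the same measure for the same
    subjects, estimated from the Flash and Force scans respectively.
    """
    r, p = stats.pearsonr(flash_vals, force_vals)
    # Linear calibration force ~ a * flash + b, as suggested for some measures
    a, b = np.polyfit(flash_vals, force_vals, 1)
    return r, p, a, b

# Example with synthetic data standing in for 20 volunteers
rng = np.random.default_rng(0)
flash = rng.normal(1.0, 0.1, 20)
force = 1.05 * flash + 0.02 + rng.normal(0, 0.01, 20)
print(scanner_continuity(flash, force))
```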
Dynamical Stochastic Processes of Returns in Financial Markets
NASA Astrophysics Data System (ADS)
Kim, Kyungsik; Kim, Soo Yong; Lim, Gyuchang; Zhou, Junyuan; Yoon, Seung-Min
2006-03-01
We show how the evolution of probability distribution functions of the returns from the tick data of the Korean treasury bond futures (KTB) and the S&P 500 stock index can be described by means of the Fokker-Planck equation. We derive the Fokker-Planck equation from Kramers-Moyal coefficients estimated directly from the empirical data. By analyzing the statistics of the returns, we quantify the deterministic and random influences on both financial time series, for which we can give a simple physical interpretation. Finally, we remark that the diffusion coefficient should be taken into account when constructing a portfolio.
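The drift and diffusion terms of a Fokker-Planck description correspond to the first two Kramers-Moyal coefficients, which can be estimated from data as conditional moments of the increments, D1(x) = <dx | x>/dt and D2(x) = <dx^2 | x>/(2 dt). A minimal sketch, assuming uniform sampling and simple binning (the paper's exact estimator may differ):

```python
import numpy as np

def kramers_moyal(x, dt, n_bins=50):
    """Estimate drift (D1) and diffusion (D2) from a time series x sampled
    at interval dt, via conditional moments of the increments."""
    dx = np.diff(x)
    x0 = x[:-1]
    edges = np.linspace(x0.min(), x0.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    d1 = np.full(n_bins, np.nan)
    d2 = np.full(n_bins, np.nan)
    idx = np.digitize(x0, edges[1:-1])  # bin index for each starting value
    for k in range(n_bins):
        sel = idx == k
        if sel.sum() > 10:  # require enough samples per bin
            d1[k] = dx[sel].mean() / dt               # D1 = <dx | x> / dt
            d2[k] = (dx[sel] ** 2).mean() / (2 * dt)  # D2 = <dx^2 | x> / (2 dt)
    return centers, d1, d2
```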
NASA Astrophysics Data System (ADS)
Xie, Pingping; Joyce, Robert; Wu, Shaorong
2015-04-01
As reported at the EGU General Assembly of 2014, a prototype system was developed for the second-generation CMORPH to produce global analyses of 30-min precipitation on a 0.05° lat/lon grid over the entire globe from pole to pole through integration of information from satellite observations as well as numerical model simulations. The second-generation CMORPH is built upon the Kalman filter based CMORPH algorithm of Joyce and Xie (2011). Inputs to the system include rainfall and snowfall rate retrievals from passive microwave (PMW) measurements aboard all available low earth orbit (LEO) satellites, precipitation estimates derived from infrared (IR) observations of geostationary (GEO) as well as LEO platforms, and precipitation simulations from numerical global models. Key to the success of the second-generation CMORPH, among a couple of other elements, are the development of a LEO-IR based precipitation estimation to fill in the polar gaps and objectively analyzed cloud motion vectors to capture the cloud movements of various spatial scales over the entire globe. In this presentation, we report our recent work on the refinement of these two important algorithm components. The prototype algorithm for the LEO IR precipitation estimation is refined to achieve improved quantitative accuracy and consistency with PMW retrievals. AVHRR IR TBB data from all LEO satellites are first remapped to a 0.05° lat/lon grid over the entire globe at a 30-min interval. Temporally and spatially co-located data pairs of the LEO TBB and inter-calibrated combined satellite PMW retrievals (MWCOMB) are then collected to construct PDF tables. Precipitation at a grid box is derived from the TBB through matching the PDF tables for the TBB and the MWCOMB. This procedure is implemented for different seasons, latitude bands, and underlying surface types to account for the variations in the cloud-precipitation relationship. Meanwhile, a sub-system is developed to construct analyzed fields of cloud motion vectors from the GEO/LEO IR based precipitation estimates and the CFS Reanalysis (CFSR) precipitation fields. Motion vectors are first derived separately from the satellite IR based precipitation estimates and the CFSR precipitation fields. These individually derived motion vectors are then combined through a 2D-VAR technique to form an analyzed field of cloud motion vectors over the entire globe. Different error functions are tested to best reflect the performance of the satellite IR based estimates and the CFSR in capturing the movements of precipitating cloud systems over different regions and for different seasons. Quantitative experiments are conducted to optimize the LEO IR based precipitation estimation technique and the 2D-VAR based motion vector analysis system. Detailed results will be reported at the EGU.
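The TBB-to-precipitation calibration described above is a probability matching: quantiles of the IR brightness temperature distribution are paired with quantiles of the collocated MWCOMB rain-rate distribution, with colder cloud tops mapped to heavier rain. A minimal sketch under those assumptions (not the operational code):

```python
import numpy as np

def pdf_match(tbb, pmw_rain, tbb_query):
    """Map IR brightness temperatures (TBB) to rain rates by CDF matching
    against collocated PMW retrievals; coldest tops get the heaviest rain."""
    tbb_sorted = np.sort(tbb)              # ascending: coldest first
    rain_sorted = np.sort(pmw_rain)[::-1]  # descending: heaviest first
    tbb_query = np.atleast_1d(tbb_query)
    # Empirical CDF position of each query TBB value
    q = np.searchsorted(tbb_sorted, tbb_query) / len(tbb_sorted)
    # Look up the rain rate at the matching quantile position
    idx = np.clip((q * len(rain_sorted)).astype(int), 0, len(rain_sorted) - 1)
    return rain_sorted[idx]
```

In practice separate tables would be built per season, latitude band, and surface type, as the abstract describes.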
Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com; Y
Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or post-reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average μ-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of brain and chest were used to evaluate TOF-MLAA with various initial estimates, which include the object filled with water uniformly (conventional initial estimate), bone uniformly, the average μ-value uniformly (IAM magnitude initialization method), and the perfect spatial μ-distribution but with a wrong magnitude (initialization in terms of distribution). 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR-derived μ-maps was also evaluated using computed tomography μ-maps as the gold standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases. Both 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce quantitative μ-map/emission when the corrections for physical effects such as scatter and randoms were included. The average μ-value obtained from the MR-derived μ-map was accurate within 5% with corrections for bone, fat, and uniform lungs. Conclusions: The proposed IAM-TOF-MLAA can produce a quantitative μ-map without any calibration provided that there are sufficient counts in the measured data. For low-count data, noise reduction and additional regularization/rescaling techniques need to be applied and investigated. The average μ-value within the object is prior information which can be extracted from MR and patient databases, and it is feasible to obtain an accurate average μ-value using an MR-derived μ-map with corrections as demonstrated in this work.
Game, Madhuri D.; Gabhane, K. B.; Sakarkar, D. M.
2010-01-01
A simple, accurate and precise spectrophotometric method has been developed for simultaneous estimation of clopidogrel bisulphate and aspirin by employing the first-order derivative zero-crossing method. The first-order derivative absorption at 232.5 nm (zero-cross point of aspirin) was used for clopidogrel bisulphate and 211.3 nm (zero-cross point of clopidogrel bisulphate) for aspirin. Both drugs obeyed linearity in the concentration range of 5.0 μg/ml to 25.0 μg/ml (correlation coefficient r2 < 1). No interference was found between the determined constituents and those of the matrix. The method was validated statistically, and recovery studies were carried out to confirm the accuracy of the method. PMID:21969765
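In first-derivative zero-crossing spectrophotometry, the spectrum is differentiated and the derivative amplitude is read at the wavelength where the co-formulated drug's derivative crosses zero, so only the analyte contributes there. A minimal sketch using a Savitzky-Golay derivative (a common choice; the paper's instrument software may differentiate differently):

```python
import numpy as np
from scipy.signal import savgol_filter

def first_derivative_at(wavelengths, absorbance, target_nm, window=11, poly=3):
    """Smooth-differentiate a spectrum and read the first-derivative
    amplitude at a chosen zero-crossing wavelength."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    step = wavelengths[1] - wavelengths[0]  # assume a uniform wavelength grid
    d1 = savgol_filter(absorbance, window, poly, deriv=1, delta=step)
    i = np.argmin(np.abs(wavelengths - target_nm))
    return d1[i]

# e.g., read at 232.5 nm (zero cross of aspirin) to quantify clopidogrel:
# amp = first_derivative_at(wl, spectrum, 232.5)
```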
Radar QPE for hydrological design: Intensity-Duration-Frequency curves
NASA Astrophysics Data System (ADS)
Marra, Francesco; Morin, Efrat
2015-04-01
Intensity-duration-frequency (IDF) curves are widely used in flood risk management since they provide an easy link between the characteristics of a rainfall event and the probability of its occurrence. They are estimated by analyzing the extreme values of rainfall records, usually based on rain gauge data. This point-based approach raises two issues: first, hydrological design applications generally need IDF information for the entire catchment rather than a point; second, the representativeness of point measurements decreases with the distance from the measurement location, especially in regions characterized by steep climatological gradients. Weather radar, providing high-resolution distributed rainfall estimates over wide areas, has the potential to overcome these issues. Two objections usually restrain this approach: (i) the short length of data records and (ii) the reliability of quantitative precipitation estimation (QPE) of the extremes. This work explores the potential use of weather radar estimates for the identification of IDF curves by means of a long radar archive and a combined physical and quantitative adjustment of radar estimates. The Shacham weather radar, located in the eastern Mediterranean area (Tel Aviv, Israel), has archived data since 1990, providing rainfall estimates for 23 years over a region characterized by strong climatological gradients. Radar QPE is obtained by correcting the effects of pointing errors, ground echoes, beam blockage, attenuation, and vertical variations of reflectivity. Quantitative accuracy is then ensured with a range-dependent bias adjustment technique, and the reliability of radar QPE is assessed by comparison with gauge measurements. IDF curves are derived from the radar data using the annual extremes method and compared with gauge-based curves. Results from 14 study cases will be presented, focusing on the effects of record length and QPE accuracy, exploring the potential application of radar IDF curves for ungauged locations, and providing insights on the use of radar QPE for hydrological design studies.
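The annual extremes method fits an extreme-value distribution to the annual maximum intensity series for each duration; intensities at chosen return periods then trace out the IDF curves. A minimal sketch, assuming a GEV fit per duration (the study's estimator may differ):

```python
import numpy as np
from scipy.stats import genextreme

def idf_point(annual_max_intensity, return_periods=(2, 10, 25, 50, 100)):
    """Fit a GEV to annual maximum rain intensities (e.g., mm/h) for one
    duration and return the intensities at the given return periods."""
    params = genextreme.fit(annual_max_intensity)
    probs = 1.0 - 1.0 / np.asarray(return_periods, dtype=float)
    return genextreme.ppf(probs, *params)

# Repeating this per duration (10 min, 30 min, 1 h, ...) and per radar pixel
# yields the IDF curves; 23 years of radar data gives 23 annual maxima.
```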
The effect of social interactions in the primary consumption life cycle of motion pictures
NASA Astrophysics Data System (ADS)
Hidalgo R, César A.; Castro, Alejandra; Rodriguez-Sickert, Carlos
2006-04-01
We develop a 'basic principles' model which accounts for the primary life cycle consumption of films as a social coordination problem in which information transmission is governed by word of mouth. We fit the analytical solution of such a model to aggregated consumption data from the film industry and derive a quantitative estimator of its quality based on the structure of the life cycle.
NASA Astrophysics Data System (ADS)
Pavlova, L. A.; Komarova, T. V.; Davidovich, Yurii A.; Rogozhin, S. V.
1981-04-01
The results of studies on the biochemistry of the sweet taste are briefly reviewed. The methods of synthesis of "aspartame" — a sweet dipeptide — are considered, its structural analogues are described, and quantitative estimates are made of the degree of sweetness relative to sucrose. Attention is concentrated mainly on problems of the relation between the structure of the substance and its taste in the series of aspartyl derivatives. The bibliography includes 118 references.
Bradbury, Steven P; Russom, Christine L; Ankley, Gerald T; Schultz, T Wayne; Walker, John D
2003-08-01
The use of quantitative structure-activity relationships (QSARs) in assessing potential toxic effects of organic chemicals on aquatic organisms continues to evolve as computational efficiency and toxicological understanding advance. With the ever-increasing production of new chemicals, and the need to optimize resources to assess thousands of existing chemicals in commerce, regulatory agencies have turned to QSARs as essential tools to help prioritize tiered risk assessments when empirical data are not available to evaluate toxicological effects. Progress in designing scientifically credible QSARs is intimately associated with the development of empirically derived databases of well-defined and quantified toxicity endpoints, which are based on a strategic evaluation of diverse sets of chemical structures, modes of toxic action, and species. This review provides a brief overview of four databases created for the purpose of developing QSARs for estimating toxicity of chemicals to aquatic organisms. The evolution of QSARs based initially on general chemical classification schemes, to models founded on modes of toxic action that range from nonspecific partitioning into hydrophobic cellular membranes to receptor-mediated mechanisms is summarized. Finally, an overview of expert systems that integrate chemical-specific mode of action classification and associated QSAR selection for estimating potential toxicological effects of organic chemicals is presented.
NASA Astrophysics Data System (ADS)
Brown, Robert Douglas
Several components of a system for quantitative application of climatic statistics to landscape planning and design (CLIMACS) have been developed. One component model (MICROSIM) estimated the microclimate at the top of a remote crop using physically-based models and inputs of weather station data. Temperatures at the top of unstressed, uniform crops on flat terrain within 1600 m of a recording weather station were estimated to within 1.0 °C 96% of the time for a corn crop and 92% of the time for a soybean crop. Crop-top winds were estimated to within 0.4 m/s 92% of the time for corn and 100% of the time for soybean. This is of sufficient accuracy for application in landscape planning and design models. A physically-based model (COMFA) was developed for the determination of outdoor human thermal comfort from microclimate inputs. Estimated versus measured comfort levels in a wide range of environments agreed with a correlation coefficient of r = 0.91. Using these components, the CLIMACS concept has been applied to a typical planning example. Microclimate data were generated from weather station information using MICROSIM, then input to COMFA and to a house energy consumption model called HOTCAN to derive quantitative climatic justification for design decisions.
NASA Astrophysics Data System (ADS)
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.
2013-09-01
Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully in the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods is investigated in the range of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively.
NASA Astrophysics Data System (ADS)
Wood, W. T.; Runyan, T. E.; Palmsten, M.; Dale, J.; Crawford, C.
2016-12-01
Natural gas (primarily methane) and gas hydrate accumulations require certain bio-geochemical as well as physical conditions, some of which are poorly sampled and/or poorly understood. We exploit recent advances in the prediction of seafloor porosity and heat flux via machine learning techniques (e.g., random forests and Bayesian networks) to predict the occurrence of gas and subsequently gas hydrate in marine sediments. The prediction (more precisely, guided interpolation) of key parameters in this study uses a K-nearest neighbor (KNN) technique. KNN requires only minimal pre-processing of the data and predictors, and requires minimal run-time input, so the results are almost entirely data-driven. Specifically, we use new estimates of sedimentation rate and sediment type, along with recently derived compaction modeling, to estimate profiles of porosity and age. We combined the compaction with seafloor heat flux to estimate temperature with depth and geologic age, which, with estimates of organic carbon and models of methanogenesis, yield limits on the production of methane. Results include geospatial predictions of gas (and gas hydrate) accumulations, with quantitative estimates of uncertainty. The Generic Earth Modeling System (GEMS) we have developed to derive the machine learning estimates is modular and easily updated with new algorithms or data.
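A minimal sketch of the KNN guided-interpolation idea: fit on locations where the target property was sampled, then predict on the output grid. Predictors and values here are synthetic stand-ins, not the actual GEMS inputs:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
# Illustrative predictors: e.g., sedimentation rate, heat flux, water depth
X_known = rng.uniform(0, 1, (500, 3))
y_known = X_known @ np.array([0.5, -0.2, 0.1]) + rng.normal(0, 0.02, 500)

# Distance-weighted KNN needs little preprocessing or tuning,
# so the result is almost entirely data-driven
knn = KNeighborsRegressor(n_neighbors=5, weights="distance")
knn.fit(X_known, y_known)

X_grid = rng.uniform(0, 1, (10000, 3))  # predictor values on the output grid
y_pred = knn.predict(X_grid)
```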
NASA Astrophysics Data System (ADS)
Prat, O. P.; Nelson, B. R.
2014-10-01
We use a suite of quantitative precipitation estimates (QPEs) derived from satellite, radar, and surface observations to derive precipitation characteristics over CONUS for the period 2002-2012. This comparison effort includes satellite multi-sensor datasets (bias-adjusted TMPA 3B42, near-real-time 3B42RT), radar estimates (NCEP Stage IV), and rain gauge observations. Remotely sensed precipitation datasets are compared with surface observations from the Global Historical Climatology Network (GHCN-Daily) and from PRISM (Parameter-elevation Regressions on Independent Slopes Model). The comparisons are performed at the annual, seasonal, and daily scales over the River Forecast Centers (RFCs) for CONUS. Annual average rain rates present a satisfying agreement with GHCN-D for all products over CONUS (±6%). However, differences at the RFC scale are larger, in particular for near-real-time 3B42RT precipitation estimates (-33 to +49%). At annual and seasonal scales, the bias-adjusted 3B42 showed substantial improvement when compared to its near-real-time counterpart 3B42RT. However, large biases remained for 3B42 over the western US for higher average accumulations (≥5 mm day-1) with respect to GHCN-D surface observations. At the daily scale, 3B42RT performed poorly in capturing extreme daily precipitation (>4 in. day-1) over the Northwest. Furthermore, the conditional analysis and the contingency analysis conducted illustrate the challenge of retrieving extreme precipitation from remote sensing estimates.
NASA Astrophysics Data System (ADS)
Zhong, L.; Ma, Y.; Ma, W.; Zou, M.; Hu, Y.
2016-12-01
Actual evapotranspiration (ETa) is an important component of the water cycle in the Tibetan Plateau. It is controlled by many hydrological and meteorological factors; it is therefore of great significance to estimate ETa accurately and continuously. Understanding land surface parameters and land-atmosphere water exchange processes in small, watershed-scale areas is also drawing much attention from the scientific community. Based on in-situ meteorological data in the Nagqu river basin and surrounding regions, the main meteorological factors affecting the evaporation process were quantitatively analyzed, and point-scale ETa estimation models for the study area were successfully built. On the other hand, multi-source satellite data (such as SPOT, MODIS, FY-2C) were used to derive the surface characteristics in the river basin. A time series processing technique was applied to remove cloud cover and reconstruct the data series. Then improved land surface albedo, improved downward shortwave radiation flux, and reconstructed normalized difference vegetation index (NDVI) were coupled into the topographically enhanced surface energy balance system to estimate ETa. The model-estimated results were compared with ETa values determined by the combinatory method. The results indicated that the model-estimated ETa agreed well with in-situ measurements, with correlation coefficient, mean bias error, and root mean square error of 0.836, 0.087 mm/h, and 0.140 mm/h, respectively.
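The three summary statistics quoted above (correlation coefficient, mean bias error, root mean square error) are straightforward to reproduce. A minimal sketch, assuming paired arrays of model-estimated and in-situ ETa:

```python
import numpy as np

def validation_stats(model_et, obs_et):
    """Correlation coefficient, mean bias error, and RMSE between
    model-estimated and observed ETa (same units, e.g., mm/h)."""
    model_et, obs_et = np.asarray(model_et), np.asarray(obs_et)
    r = np.corrcoef(model_et, obs_et)[0, 1]
    mbe = np.mean(model_et - obs_et)
    rmse = np.sqrt(np.mean((model_et - obs_et) ** 2))
    return r, mbe, rmse
```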
Alomari, Ali Hamed; Wille, Marie-Luise; Langton, Christian M
2018-02-01
Conventional mechanical testing is the 'gold standard' for assessing the stiffness (N mm-1) and strength (MPa) of bone, although it is not applicable in-vivo since it is inherently invasive and destructive. The mechanical integrity of a bone is determined by its quantity and quality, being related primarily to bone density and structure respectively. Several non-destructive, non-invasive, in-vivo techniques have been developed and clinically implemented to estimate bone density, both areal (dual-energy X-ray absorptiometry (DXA)) and volumetric (quantitative computed tomography (QCT)). Quantitative ultrasound (QUS) parameters of velocity and attenuation are dependent upon both bone quantity and bone quality, although it has not been possible to date to transpose one particular QUS parameter into separate estimates of quantity and quality. It has recently been shown that ultrasound transit time spectroscopy (UTTS) may provide an accurate estimate of bone density and hence quantity. We hypothesised that UTTS also has the potential to provide an estimate of bone structure and hence quality. In this in-vitro study, 16 human femoral bone samples were tested utilising three techniques: UTTS, micro computed tomography (μCT), and mechanical testing. UTTS was utilised to estimate bone volume fraction (BV/TV) and two novel structural parameters, the inter-quartile range of the derived transit time (UTTS-IQR) and the transit time of maximum proportion of sonic-rays (TTMP). μCT was utilised to derive BV/TV along with several bone structure parameters. A destructive mechanical test was utilised to measure the stiffness and strength (failure load) of the bone samples. BV/TV was calculated from the derived transit time spectrum (TTS); the correlation coefficient (R2) with μCT-BV/TV was 0.885. For predicting mechanical stiffness and strength, BV/TV derived by both μCT and UTTS provided the strongest correlation with mechanical stiffness (R2 = 0.567 and 0.618 respectively) and mechanical strength (R2 = 0.747 and 0.736 respectively). When respective structural parameters were incorporated into BV/TV, multiple regression analysis indicated that none of the μCT histomorphometric parameters could improve the prediction of mechanical stiffness and strength, while for UTTS, adding TTMP to BV/TV increased the prediction of mechanical stiffness to R2 = 0.711 and strength to R2 = 0.827. It is therefore envisaged that UTTS may have the ability to estimate BV/TV along with providing an improved prediction of osteoporotic fracture risk within routine clinical practice in the future. Copyright © 2017 Elsevier Inc. All rights reserved.
Stationary echo canceling in velocity estimation by time-domain cross-correlation.
Jensen, J A
1993-01-01
The application of stationary echo canceling to ultrasonic estimation of blood velocities using time-domain cross-correlation is investigated. Expressions are derived that show the influence of the echo canceler on the signals that enter the cross-correlation estimator. It is demonstrated that the filtration results in a velocity-dependent degradation of the signal-to-noise ratio. An analytic expression is given for the degradation for a realistic pulse. The probability of correct detection at low signal-to-noise ratios is influenced by the signal-to-noise ratio, transducer bandwidth, center frequency, number of samples in the range gate, and number of A-lines employed in the estimation. Quantitative results calculated by a simple simulation program are given for the variation in probability with these parameters. An index reflecting the reliability of the estimate at hand can be calculated from the actual cross-correlation estimate by a simple formula and used to reject poor estimates or to display the reliability of the estimated velocity.
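A minimal sketch of the processing chain analyzed above: a simple stationary echo canceler (mean subtraction across A-lines, one of several possible filters) followed by time-domain cross-correlation of successive lines to estimate the axial velocity. Parameter names are illustrative:

```python
import numpy as np

def velocity_ttc(rf_lines, fs, f_prf, c=1540.0):
    """Time-domain cross-correlation velocity estimate from RF A-lines
    (rows of rf_lines) within one range gate.

    fs: RF sampling rate (Hz); f_prf: pulse repetition frequency (Hz);
    c: speed of sound (m/s).
    """
    # Stationary echo canceling: remove the component common to all lines
    rf = rf_lines - rf_lines.mean(axis=0, keepdims=True)
    n = rf.shape[1]
    lags = []
    for a, b in zip(rf[:-1], rf[1:]):
        xc = np.correlate(b, a, mode="full")  # cross-correlate line pairs
        lags.append(np.argmax(xc) - (n - 1))  # peak lag in samples
    t_shift = np.mean(lags) / fs              # time shift between lines (s)
    return c * t_shift * f_prf / 2.0          # axial velocity (m/s)
```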
Jardínez, Christiaan; Vela, Alberto; Cruz-Borbolla, Julián; Alvarez-Mendez, Rodrigo J; Alvarado-Rodríguez, José G
2016-12-01
The relationship between the chemical structure and biological activity (log IC50) of 40 derivatives of 1,4-dihydropyridines (DHPs) was studied using density functional theory (DFT) and multiple linear regression analysis methods. With the aim of improving the quantitative structure-activity relationship (QSAR) model, the reduced density gradient s(r) of the optimized equilibrium geometries was used as a descriptor to include weak non-covalent interactions. The QSAR model highlights the correlation of log IC50 with the highest occupied molecular orbital energy (EHOMO), molecular volume (V), partition coefficient (log P), non-covalent interactions NCI(H4-G), and the dual descriptor [Δf(r)]. The model yielded values of R2 = 79.57 and Q2 = 69.67 that were validated with the following four internal analytical validations: DK = 0.076, DQ = -0.006, RP = 0.056, and RN = 0.000, and the external validation Q2boot = 64.26. The QSAR model found can be used to estimate biological activity with high reliability in new compounds based on a DHP series. Graphical abstract: The good correlation between log IC50 and the NCI(H4-G) estimated by the reduced density gradient approach of the DHP derivatives.
NASA Astrophysics Data System (ADS)
Otsuka, Mioko; Hasegawa, Yasuhiro; Arisaka, Taichi; Shinozaki, Ryo; Morita, Hiroyuki
2017-11-01
The dimensionless figure of merit and its efficiency for the transient response of a Π-shaped thermoelectric module are estimated according to the theory of impedance spectroscopy. The effective dimensionless figure of merit is described as a function of the product of the characteristic time to reduce the temperature and the representative angular frequency of the module, which is expressed by the thermal diffusivity and the length of the elements used. The characteristic time required for achieving a higher dimensionless figure of merit and efficiency is derived quantitatively for the transient response using the properties of a commercial thermoelectric module.
Darwish, Hany W; Hassan, Said A; Salem, Maissa Y; El-Zeany, Badr A
2013-09-01
Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully in the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods is investigated in the range of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively. Copyright © 2013 Elsevier B.V. All rights reserved.
OPTIMAL EXPERIMENT DESIGN FOR MAGNETIC RESONANCE FINGERPRINTING
Zhao, Bo; Haldar, Justin P.; Setsompop, Kawin; Wald, Lawrence L.
2017-01-01
Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance. PMID:28268369
Optimal experiment design for magnetic resonance fingerprinting.
Bo Zhao; Haldar, Justin P; Setsompop, Kawin; Wald, Lawrence L
2016-08-01
Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance.
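For a signal model with i.i.d. Gaussian noise, the CRB is the inverse of the Fisher information matrix, which can be built from the Jacobian of the signal with respect to the tissue parameters. A generic numerical sketch of that computation (the paper's signal model and optimization are more elaborate):

```python
import numpy as np

def crb(signal_model, theta, sigma, eps=1e-6):
    """Cramér-Rao bound on the covariance of any unbiased estimator of
    parameters theta, given a real-valued signal model s(theta) observed
    in i.i.d. Gaussian noise of standard deviation sigma.

    signal_model: callable mapping a parameter vector to a length-N signal.
    """
    theta = np.asarray(theta, dtype=float)
    s0 = signal_model(theta)
    # Numerical Jacobian of the signal with respect to the parameters
    J = np.empty((s0.size, theta.size))
    for k in range(theta.size):
        t = theta.copy()
        t[k] += eps
        J[:, k] = (signal_model(t) - s0) / eps
    fisher = J.T @ J / sigma**2   # Fisher information matrix
    return np.linalg.inv(fisher)  # CRB: lower bound on estimator covariance
```

Minimizing diagonal entries of this bound over acquisition parameters (flip angles, repetition times) is the experiment-design step described above.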
A robust approach for ECG-based analysis of cardiopulmonary coupling.
Zheng, Jiewen; Wang, Weidong; Zhang, Zhengbo; Wu, Dalei; Wu, Hao; Peng, Chung-Kang
2016-07-01
Deriving a respiratory signal from a surface electrocardiogram (ECG) measurement has the advantage of simultaneously monitoring cardiac and respiratory activities. ECG-based cardiopulmonary coupling (CPC) analysis, estimated from heart period variability and ECG-derived respiration (EDR), shows promising applications in the medical field. The aim of this paper is to provide a quantitative analysis of ECG-based CPC and to further improve its performance. Two conventional strategies were tested to obtain the EDR signal: R-S wave amplitude and area of the QRS complex. An adaptive filter was utilized to extract the common component of the inter-beat interval (RRI) and EDR series, generating enhanced versions of the EDR signal. CPC is assessed by probing the nonlinear phase interactions between the RRI series and the respiratory signal. Respiratory oscillations present in both the RRI series and respiratory signals were extracted by ensemble empirical mode decomposition for coupling analysis via the phase synchronization index. The results demonstrated that CPC estimated from conventional EDR series exhibits constant and proportional biases, while that estimated from enhanced EDR series is more reliable. Adaptive filtering can improve the accuracy of ECG-based CPC estimation significantly and achieve robust CPC analysis. The improved ECG-based CPC estimation may provide additional prognostic information for both sleep medicine and autonomic function analysis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
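The coupling metric used here, the phase synchronization index, measures how tightly the instantaneous phases of two oscillatory components lock. A minimal sketch using the Hilbert transform, assuming the respiratory oscillations have already been extracted (the paper uses ensemble empirical mode decomposition for that step):

```python
import numpy as np
from scipy.signal import hilbert

def phase_sync_index(x, y):
    """Phase synchronization index between two narrow-band signals,
    e.g., the respiratory oscillations of the RRI and EDR series.
    PSI = |<exp(i*(phi_x - phi_y))>|; 0 = no coupling, 1 = perfect locking.
    """
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))
```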
NASA Technical Reports Server (NTRS)
Casey, Kimberly Ann; Kaab, Andreas
2012-01-01
We demonstrate spectral estimation of supraglacial dust, debris, ash, and tephra geochemical composition from glaciers and ice fields in Iceland, Nepal, New Zealand, and Switzerland. Surface glacier material was collected and analyzed via X-ray fluorescence spectroscopy (XRF) and X-ray diffraction (XRD) for geochemical composition and mineralogy. In situ data were used as ground truth for comparison with satellite-derived geochemical results. Supraglacial debris spectral response patterns and emissivity-derived silica weight percent are presented. Qualitative spectral response patterns agreed well with XRF elemental abundances. Quantitative emissivity estimates of supraglacial SiO2 in continental areas were 67% (Switzerland) and 68% (Nepal), while volcanic supraglacial SiO2 averages were 58% (Iceland) and 56% (New Zealand), yielding general agreement. Ablation season supraglacial temperature variation due to differing dust and debris type and coverage was also investigated, with surface debris temperatures ranging from 5.9 to 26.6 °C in the study regions. Applications of the supraglacial geochemical reflective and emissive characterization methods include glacier areal extent mapping, debris source identification, glacier kinematics, and glacier energy balance considerations.
Eickhoff, Simon B; Nichols, Thomas E; Laird, Angela R; Hoffstaedter, Felix; Amunts, Katrin; Fox, Peter T; Bzdok, Danilo; Eickhoff, Claudia R
2016-08-15
Given the increasing number of neuroimaging publications, the automated knowledge extraction on brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and thus to formulate recommendations for future ALE studies. We could show as a first consequence that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments into an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
Disease quantification on PET/CT images without object delineation
NASA Astrophysics Data System (ADS)
Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Wu, Caiyun; Fitzpatrick, Danielle; Winchell, Nicole; Schuster, Stephen J.; Torigian, Drew A.
2017-03-01
The derivation of quantitative information from images to make quantitative radiology (QR) clinically practical continues to face a major image analysis hurdle because of image segmentation challenges. This paper presents a novel approach to disease quantification (DQ) via positron emission tomography/computed tomography (PET/CT) images that explores how to decouple DQ methods from explicit dependence on object segmentation through the use of only object recognition results to quantify disease burden. The concept of an object-dependent disease map is introduced to express disease severity without performing explicit delineation and partial volume correction of either objects or lesions. The parameters of the disease map are estimated from a set of training image data sets. The idea is illustrated on 20 lung lesions and 20 liver lesions derived from 18F-2-fluoro-2-deoxy-D-glucose (FDG)-PET/CT scans of patients with various types of cancers and also on 20 NEMA PET/CT phantom data sets. Our preliminary results show that, on phantom data sets, "disease burden" can be estimated to within 2% of known absolute true activity. Notwithstanding the difficulty in establishing true quantification on patient PET images, our results achieve 8% deviation from "true" estimates, with slightly larger deviations for small and diffuse lesions where establishing ground truth becomes really questionable, and smaller deviations for larger lesions where ground truth set up becomes more reliable. We are currently exploring extensions of the approach to include fully automated body-wide DQ, extensions to just CT or magnetic resonance imaging (MRI) alone, to PET/CT performed with radiotracers other than FDG, and other functional forms of disease maps.
Eickhoff, Simon B.; Nichols, Thomas E.; Laird, Angela R.; Hoffstaedter, Felix; Amunts, Katrin; Fox, Peter T.
2016-01-01
Given the increasing number of neuroimaging publications, the automated knowledge extraction on brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and thus to formulate recommendations for future ALE studies. We could show as a first consequence that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments into an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis. PMID:27179606
On A Problem Of Propagation Of Shock Waves Generated By Explosive Volcanic Eruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gusev, V. A.; Sobissevitch, A. L.
2008-06-24
Interdisciplinary study of flows of matter and energy in geospheres has become one of the most significant advances in Earth sciences. It is carried out by means of direct quantitative estimations based on detailed analysis of geological and geophysical observations and experimental data. The present contribution is an interdisciplinary study in nonlinear acoustics and physical volcanology dedicated to shock wave propagation in a viscous and inhomogeneous medium. The equations governing the evolution of shock waves with an arbitrary initial profile and an arbitrary cross-section of the beam are obtained. For the case of a low-viscosity medium, an asymptotic solution meant to calculate the profile of a shock wave at an arbitrary point has been derived. The analytical solution of the problem of propagation of shock pulses from the atmosphere into a two-phase fluid-saturated geophysical medium is analysed. Quantitative estimations were carried out with respect to experimental results obtained in the course of real explosive volcanic eruptions.
Campbell, Jerry L.; Clewell, Harvey J.; Zhou, Yi-Hui; Wright, Fred A.; Guyton, Kathryn Z.
2014-01-01
Background: Quantitative estimation of toxicokinetic variability in the human population is a persistent challenge in risk assessment of environmental chemicals. Traditionally, interindividual differences in the population are accounted for by default assumptions or, in rare cases, are based on human toxicokinetic data. Objectives: We evaluated the utility of genetically diverse mouse strains for estimating toxicokinetic population variability for risk assessment, using trichloroethylene (TCE) metabolism as a case study. Methods: We used data on oxidative and glutathione conjugation metabolism of TCE in 16 inbred and 1 hybrid mouse strains to calibrate and extend existing physiologically based pharmacokinetic (PBPK) models. We added one-compartment models for glutathione metabolites and a two-compartment model for dichloroacetic acid (DCA). We used a Bayesian population analysis of interstrain variability to quantify variability in TCE metabolism. Results: Concentration–time profiles for TCE metabolism to oxidative and glutathione conjugation metabolites varied across strains. Median predictions for the metabolic flux through oxidation were less variable (5-fold range) than that through glutathione conjugation (10-fold range). For oxidative metabolites, median predictions of trichloroacetic acid production were less variable (2-fold range) than DCA production (5-fold range), although the uncertainty bounds for DCA exceeded the predicted variability. Conclusions: Population PBPK modeling of genetically diverse mouse strains can provide useful quantitative estimates of toxicokinetic population variability. When extrapolated to lower doses more relevant to environmental exposures, mouse population-derived variability estimates for TCE metabolism closely matched population variability estimates previously derived from human toxicokinetic studies with TCE, highlighting the utility of mouse interstrain metabolism studies for addressing toxicokinetic variability. Citation: Chiu WA, Campbell JL Jr, Clewell HJ III, Zhou YH, Wright FA, Guyton KZ, Rusyn I. 2014. Physiologically based pharmacokinetic (PBPK) modeling of interstrain variability in trichloroethylene metabolism in the mouse. Environ Health Perspect 122:456–463; http://dx.doi.org/10.1289/ehp.1307623 PMID:24518055
Measuring iron in the brain using quantitative susceptibility mapping and X-ray fluorescence imaging
Zheng, Weili; Nichol, Helen; Liu, Saifeng; Cheng, Yu-Chung N.; Haacke, E. Mark
2013-01-01
Measuring iron content in the brain has important implications for a number of neurodegenerative diseases. Quantitative susceptibility mapping (QSM), derived from magnetic resonance images, has been used to measure total iron content in vivo and in post mortem brain. In this paper, we show how magnetic susceptibility from QSM correlates with total iron content measured by X-ray fluorescence (XRF) imaging and by inductively coupled plasma mass spectrometry (ICPMS). The relationship between susceptibility and ferritin iron was estimated at 1.10 ± 0.08 ppb susceptibility per μg iron/g wet tissue, similar to that of iron in fixed (frozen/thawed) cadaveric brain and previously published data from unfixed brains. We conclude that magnetic susceptibility can provide a direct and reliable quantitative measurement of iron content and that it can be used clinically at least in regions with high iron content. PMID:23591072
A mixed model for the relationship between climate and human cranial form.
Katz, David C; Grote, Mark N; Weaver, Timothy D
2016-08-01
We expand upon a multivariate mixed model from quantitative genetics in order to estimate the magnitude of climate effects in a global sample of recent human crania. In humans, genetic distances are correlated with distances based on cranial form, suggesting that population structure influences both genetic and quantitative trait variation. Studies controlling for this structure have demonstrated significant underlying associations of cranial distances with ecological distances derived from climate variables. However, to assess the biological importance of an ecological predictor, estimates of effect size and uncertainty in the original units of measurement are clearly preferable to significance claims based on units of distance. Unfortunately, the magnitudes of ecological effects are difficult to obtain with distance-based methods, while models that produce estimates of effect size generally do not scale to high-dimensional data like cranial shape and form. Using recent innovations that extend quantitative genetics mixed models to highly multivariate observations, we estimate morphological effects associated with a climate predictor for a subset of the Howells craniometric dataset. Several measurements, particularly those associated with cranial vault breadth, show a substantial linear association with climate, and the multivariate model incorporating a climate predictor is preferred in model comparison. Previous studies demonstrated the existence of a relationship between climate and cranial form. The mixed model quantifies this relationship concretely. Evolutionary questions that require population structure and phylogeny to be disentangled from potential drivers of selection may be particularly well addressed by mixed models. Am J Phys Anthropol 160:593-603, 2016. © 2015 Wiley Periodicals, Inc.
2001-09-30
…gradually from .0014 m2/m2 at the SWI to 0 at ~25 cm depth.] The stochastic model has also been used to calculate nonlocal irrigation …(2001); however, the stochastic simulation results do not decrease with depth as quickly as the chemically-derived irrigation coefficients
Maximum current density and beam brightness achievable by laser-driven electron sources
NASA Astrophysics Data System (ADS)
Filippetto, D.; Musumeci, P.; Zolotorev, M.; Stupakov, G.
2014-02-01
This paper discusses the extension of the Child-Langmuir law for the maximum achievable current density in electron guns to different electron beam aspect ratios. Using a simple model, we derive quantitative formulas in good agreement with simulation codes. The new scaling laws for the peak current density of temporally long and transversely narrow initial beam distributions can be used to estimate the maximum beam brightness and suggest new paths for injector optimization.
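For reference, the classic one-dimensional Child-Langmuir law that the paper generalizes can be evaluated directly; the aspect-ratio scaling laws themselves are not reproduced here:

```python
import numpy as np

E0 = 8.8541878128e-12  # vacuum permittivity (F/m)
QE = 1.602176634e-19   # elementary charge (C)
ME = 9.1093837015e-31  # electron mass (kg)

def child_langmuir_j(voltage, gap):
    """Space-charge-limited current density (A/m^2) of a planar diode with
    potential difference `voltage` (V) across gap `gap` (m):
    J = (4/9) * eps0 * sqrt(2e/m) * V^(3/2) / d^2.
    """
    return (4.0 * E0 / 9.0) * np.sqrt(2.0 * QE / ME) * voltage**1.5 / gap**2

# e.g., 1 MV across a 1 cm gap gives roughly 2.3e7 A/m^2:
# child_langmuir_j(1e6, 1e-2)
```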
NASA Astrophysics Data System (ADS)
Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi
2010-01-01
The purpose of this study is to derive quantitative assessment indicators of the human postural control ability. An inverted pendulum model is applied to the standing human body, controlled by ankle joint torque according to a PD control method in the sagittal plane. The torque control parameters (KP: proportional gain, KD: derivative gain) and the pole placements of the postural control system are estimated over time from the inclination angle variation using the fixed trace method, a recursive least-squares technique. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times, with 10 s stationary intervals, keeping the neck, hip, and knee joints fixed, and then return to the initial upright posture. The inclination angle is measured by an optical motion capture system. Three conditions are introduced to simulate unstable standing posture: 1) eyes-open posture for the healthy condition, 2) eyes-closed posture for visual impairment, and 3) one-legged posture for lower-extremity muscle weakness. The estimated parameters KP and KD and the pole placements are subjected to a multiple comparison test among all stability conditions. The test results indicate that KP, KD, and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also represents the effect of visual impairment. It is suggested that the proposed method is valid for quantitative assessment of standing postural control ability.
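The fixed trace method is a recursive least-squares scheme whose covariance trace is held constant so the estimator keeps tracking time-varying gains. A minimal sketch of the closely related forgetting-factor RLS update for (KP, KD), assuming sampled inclination angle, angular velocity, and ankle torque (the fixed-trace variant additionally rescales the covariance each step):

```python
import numpy as np

def rls_gains(theta, theta_dot, torque, lam=0.99):
    """Track PD gains w = [KP, KD] in tau ~ KP*theta + KD*theta_dot
    with forgetting-factor recursive least squares."""
    P = np.eye(2) * 1e3  # covariance of the parameter estimate
    w = np.zeros(2)      # current estimate of [KP, KD]
    history = []
    for th, thd, tau in zip(theta, theta_dot, torque):
        phi = np.array([th, thd])            # regressor vector
        k = P @ phi / (lam + phi @ P @ phi)  # gain vector
        w = w + k * (tau - phi @ w)          # correct estimate by residual
        P = (P - np.outer(k, phi @ P)) / lam # discount old information
        history.append(w.copy())
    return np.array(history)                 # gain trajectory over time
```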
Nijkamp, M M; Bokkers, B G H; Bakker, M I; Ezendam, J; Delmaar, J E
2015-10-01
A quantitative risk assessment was performed to establish whether consumers are at risk of being dermally sensitized by the fragrance geraniol. Aggregate dermal exposure to geraniol was estimated using the Probabilistic Aggregate Consumer Exposure Model, containing data on the use of personal care products and household cleaning agents. Consumer exposure to geraniol via personal care products appeared to be higher than via household cleaning agents. The hands were the body parts receiving the highest exposure to geraniol. Dermal sensitization studies were assessed to derive the point of departure needed for the estimation of the Acceptable Exposure Level (AEL). Two concentrations were derived, one based on human studies and the other from dose-response analysis of the available murine local lymph node assay data. The aggregate dermal exposure assessment resulted in body-part-specific median exposures up to 0.041 μg/cm² (highest exposure 102 μg/cm²) for hands. Comparing the exposure to the lowest AEL (55 μg/cm²) shows that 0.02-0.86% of the population may have an aggregated exposure which exceeds the AEL. Furthermore, it is demonstrated that personal care products contribute more to the consumer's geraniol exposure than household cleaning agents. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Prat, O. P.; Nelson, B. R.
2015-04-01
We use a suite of quantitative precipitation estimates (QPEs) derived from satellite, radar, and surface observations to derive precipitation characteristics over the contiguous United States (CONUS) for the period 2002-2012. This comparison effort includes satellite multi-sensor data sets (bias-adjusted TMPA 3B42, near-real-time 3B42RT), radar estimates (NCEP Stage IV), and rain gauge observations. Remotely sensed precipitation data sets are compared with surface observations from the Global Historical Climatology Network-Daily (GHCN-D) and from PRISM (Parameter-elevation Regressions on Independent Slopes Model). The comparisons are performed at the annual, seasonal, and daily scales over the River Forecast Centers (RFCs) for CONUS. Annual average rain rates present a satisfying agreement with GHCN-D for all products over CONUS (±6%). However, differences at the RFC scale are larger, in particular for near-real-time 3B42RT precipitation estimates (-33 to +49%). At annual and seasonal scales, the bias-adjusted 3B42 showed substantial improvement when compared to its near-real-time counterpart 3B42RT. However, large biases remained for 3B42 over the western USA for higher average accumulations (≥5 mm day-1) with respect to GHCN-D surface observations. At the daily scale, 3B42RT performed poorly in capturing extreme daily precipitation (>4 in. day-1) over the Pacific Northwest. Furthermore, the conditional analysis and a contingency analysis conducted illustrate the challenge in retrieving extreme precipitation from remote sensing estimates.
The linearized multistage model and the future of quantitative risk assessment.
Crump, K S
1996-10-01
The linearized multistage (LMS) model has for over 15 years been the default dose-response model used by the U.S. Environmental Protection Agency (USEPA) and other federal and state regulatory agencies in the United States for calculating quantitative estimates of low-dose carcinogenic risks from animal data. The LMS model is in essence a flexible statistical model that can describe both linear and non-linear dose-response patterns, and that produces an upper confidence bound on the linear low-dose slope of the dose-response curve. Unlike its namesake, the Armitage-Doll multistage model, the parameters of the LMS do not correspond to actual physiological phenomena. Thus the LMS is 'biological' only to the extent that the true biological dose response is linear at low dose and that low-dose slope is reflected in the experimental data. If the true dose response is non-linear the LMS upper bound may overestimate the true risk by many orders of magnitude. However, competing low-dose extrapolation models, including those derived from 'biologically-based models' that are capable of incorporating additional biological information, have not shown evidence to date of being able to produce quantitative estimates of low-dose risks that are any more accurate than those obtained from the LMS model. Further, even if these attempts were successful, the extent to which more accurate estimates of low-dose risks in a test animal species would translate into improved estimates of human risk is questionable. Thus, it does not appear possible at present to develop a quantitative approach that would be generally applicable and that would offer significant improvements upon the crude bounding estimates of the type provided by the LMS model. Draft USEPA guidelines for cancer risk assessment incorporate an approach similar to the LMS for carcinogens having a linear mode of action. However, under these guidelines quantitative estimates of low-dose risks would not be developed for carcinogens having a non-linear mode of action; instead dose-response modelling would be used in the experimental range to calculate an LED10* (a statistical lower bound on the dose corresponding to a 10% increase in risk), and safety factors would be applied to the LED10* to determine acceptable exposure levels for humans. This approach is very similar to the one presently used by USEPA for non-carcinogens. Rather than using one approach for carcinogens believed to have a linear mode of action and a different approach for all other health effects, it is suggested herein that it would be more appropriate to use an approach conceptually similar to the 'LED10*-safety factor' approach for all health effects, and not to routinely develop quantitative risk estimates from animal data.
Analytical and experimental design and analysis of an optimal processor for image registration
NASA Technical Reports Server (NTRS)
Mcgillem, C. D. (Principal Investigator); Svedlow, M.; Anuta, P. E.
1976-01-01
The author has identified the following significant results. A quantitative measure of the registration processor accuracy, in terms of the variance of the registration error, was derived. With the appropriate assumptions, the variance was shown to be inversely proportional to the product of the square of the effective bandwidth and the signal-to-noise ratio. The final expressions were presented to emphasize both the form and the simplicity of their representation. For the situation where relative spatial distortions exist between images to be registered, expressions were derived for estimating the loss in output signal-to-noise ratio due to these spatial distortions. These results are in terms of a reduction factor.
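Written out, the stated proportionality for the registration error variance (one reading of the sentence above, with B_e the effective bandwidth) is:

```latex
\sigma_{\varepsilon}^{2} \;\propto\; \frac{1}{B_{e}^{2}\,\mathrm{SNR}}
```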
Rong, Xing; Du, Yong; Frey, Eric C
2012-06-21
Quantitative Yttrium-90 ((90)Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging has shown great potential to provide reliable estimates of (90)Y activity distribution for targeted radionuclide therapy dosimetry applications. One factor that potentially affects the reliability of the activity estimates is the choice of the acquisition energy window. In contrast to imaging conventional gamma photon emitters where the acquisition energy windows are usually placed around photopeaks, there has been great variation in the choice of the acquisition energy window for (90)Y imaging due to the continuous and broad energy distribution of the bremsstrahlung photons. In quantitative imaging of conventional gamma photon emitters, previous methods for optimizing the acquisition energy window assumed unbiased estimators and used the variance in the estimates as a figure of merit (FOM). However, for situations, such as (90)Y imaging, where there are errors in the modeling of the image formation process used in the reconstruction there will be bias in the activity estimates. In (90)Y bremsstrahlung imaging this will be especially important due to the high levels of scatter, multiple scatter, and collimator septal penetration and scatter. Thus variance will not be a complete measure of reliability of the estimates and thus is not a complete FOM. To address this, we first aimed to develop a new method to optimize the energy window that accounts for both the bias due to model-mismatch and the variance of the activity estimates. We applied this method to optimize the acquisition energy window for quantitative (90)Y bremsstrahlung SPECT imaging in microsphere brachytherapy. Since absorbed dose is defined as the absorbed energy from the radiation per unit mass of tissues in this new method we proposed a mass-weighted root mean squared error of the volume of interest (VOI) activity estimates as the FOM. To calculate this FOM, two analytical expressions were derived for calculating the bias due to model-mismatch and the variance of the VOI activity estimates, respectively. To obtain the optimal acquisition energy window for general situations of interest in clinical (90)Y microsphere imaging, we generated phantoms with multiple tumors of various sizes and various tumor-to-normal activity concentration ratios using a digital phantom that realistically simulates human anatomy, simulated (90)Y microsphere imaging with a clinical SPECT system and typical imaging parameters using a previously validated Monte Carlo simulation code, and used a previously proposed method for modeling the image degrading effects in quantitative SPECT reconstruction. The obtained optimal acquisition energy window was 100-160 keV. The values of the proposed FOM were much larger than the FOM taking into account only the variance of the activity estimates, thus demonstrating in our experiment that the bias of the activity estimates due to model-mismatch was a more important factor than the variance in terms of limiting the reliability of activity estimates.
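One plausible form of the mass-weighted root mean squared error FOM described above, combining the model-mismatch bias and the variance of the VOI activity estimates (here taken as given inputs; the paper derives both terms analytically):

```python
import numpy as np

def mass_weighted_rmse(bias, variance, mass):
    """Mass-weighted RMSE over VOIs:
    sqrt( sum_i m_i * (bias_i^2 + var_i) / sum_i m_i ).
    bias, variance: per-VOI bias and variance of the activity estimates;
    mass: per-VOI tissue mass used as the weight.
    """
    bias, variance, mass = map(np.asarray, (bias, variance, mass))
    mse = bias**2 + variance  # per-VOI mean squared error
    return np.sqrt(np.sum(mass * mse) / np.sum(mass))
```

Sweeping such a FOM over candidate acquisition energy windows, and picking the minimum, reproduces the kind of optimization that selected the 100-160 keV window.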
Spatio-temporal models of mental processes from fMRI.
Janoos, Firdaus; Machiraju, Raghu; Singh, Shantanu; Morocz, Istvan Ákos
2011-07-15
Understanding the highly complex, spatially distributed and temporally organized phenomena entailed by mental processes using functional MRI is an important research problem in cognitive and clinical neuroscience. Conventional analysis methods focus on the spatial dimension of the data, discarding the information about brain function contained in the temporal dimension. This paper presents a fully spatio-temporal multivariate analysis method using a state-space model (SSM) for brain function that yields not only spatial maps of activity but also its temporal structure, along with spatially varying estimates of the hemodynamic response. Efficient algorithms for estimating the parameters, along with quantitative validations, are given. A novel low-dimensional feature-space for representing the data, based on a formal definition of functional similarity, is derived. Quantitative validation of the model and the estimation algorithms is provided through a simulation study. Using a real fMRI study of mental arithmetic, the ability of this neurophysiologically inspired model to represent the spatio-temporal information corresponding to mental processes is demonstrated. Moreover, by comparing the models across multiple subjects, natural patterns in mental processes organized according to different mental abilities are revealed. Copyright © 2011 Elsevier Inc. All rights reserved.
Airborne radar and radiometer experiment for quantitative remote measurements of rain
NASA Technical Reports Server (NTRS)
Kozu, Toshiaki; Meneghini, Robert; Boncyk, Wayne; Wilheit, Thomas T.; Nakamura, Kenji
1989-01-01
An aircraft experiment was conducted with a dual-frequency (10 GHz and 35 GHz) radar/radiometer system and an 18-GHz radiometer to test various rain-rate retrieval algorithms intended for spaceborne use. In the experiment, which took place in the fall of 1988 at the NASA Wallops Flight Facility, VA, both stratiform and convective storms were observed. A ground-based radar and rain gauges were also used to obtain truth data. An external radar calibration was made with rain gauge data, thereby enabling quantitative reflectivity measurements. Comparisons between path attenuations derived from the surface return and from the radar reflectivity profile were made to test the feasibility of a technique for estimating the raindrop size distribution from simultaneous radar and path-attenuation measurements.
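The reflectivity side of such a path-attenuation comparison can be sketched as follows, assuming a power-law relation k = aZ^b between specific attenuation and linear reflectivity; the coefficients a and b here are illustrative placeholders, not the experiment's calibrated values.

```python
import numpy as np

def path_integrated_attenuation(z_dbz, dr_km, a=3e-4, b=0.75):
    """Two-way path-integrated attenuation (dB) from a reflectivity profile.

    z_dbz : reflectivity profile in dBZ at range gates dr_km apart
    a, b  : illustrative k-Z power-law coefficients, k [dB/km] = a * Z**b,
            with Z in linear units (mm^6 m^-3)
    """
    z_linear = 10.0 ** (z_dbz / 10.0)       # dBZ -> linear reflectivity
    k = a * z_linear ** b                   # specific attenuation per gate
    return 2.0 * np.sum(k) * dr_km          # two-way integral along the path

profile = np.linspace(20.0, 45.0, 40)       # synthetic rain profile, dBZ
print(path_integrated_attenuation(profile, dr_km=0.25))
```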
Ulmer, Candice Z; Ragland, Jared M; Koelmel, Jeremy P; Heckert, Alan; Jones, Christina M; Garrett, Timothy J; Yost, Richard A; Bowden, John A
2017-12-19
As advances in analytical separation techniques, mass spectrometry instrumentation, and data processing platforms continue to spur growth in the lipidomics field, more structurally unique lipid species are detected and annotated. The lipidomics community is in need of benchmark reference values to assess the validity of various lipidomics workflows in providing accurate quantitative measurements across the diverse lipidome. LipidQC addresses the harmonization challenge in lipid quantitation by providing a semiautomated process, independent of analytical platform, for visual comparison of experimental results of National Institute of Standards and Technology Standard Reference Material (SRM) 1950, "Metabolites in Frozen Human Plasma", against benchmark consensus mean concentrations derived from the NIST Lipidomics Interlaboratory Comparison Exercise.
Quantitative characterization of turbidity by radiative transfer based reflectance imaging
Tian, Peng; Chen, Cheng; Jin, Jiahong; Hong, Heng; Lu, Jun Q.; Hu, Xin-Hua
2018-01-01
A new, noncontact approach of multispectral reflectance imaging has been developed to inversely determine the absorption coefficient μa, the scattering coefficient μs, and the anisotropy factor g of a turbid target from one measured reflectance image. The incident beam was profiled with a diffuse reflectance standard for deriving both measured and calculated reflectance images. A GPU-implemented Monte Carlo code was developed to determine the parameters with a conjugate gradient descent algorithm, and the existence of unique solutions was shown. We noninvasively determined embedded region thickness in heterogeneous targets and estimated in vivo optical parameters of nevi from 4 patients between 500 and 950 nm for melanoma diagnosis to demonstrate the potential of quantitative reflectance imaging. PMID:29760971
NASA Astrophysics Data System (ADS)
Jensen, Jens H.; Helpern, Joseph A.
2011-06-01
Hardware constraints typically require the use of extended gradient pulse durations for clinical applications of diffusion-weighted magnetic resonance imaging (DW-MRI), which can potentially influence the estimation of diffusion metrics. Prior studies have examined this effect for the apparent diffusion coefficient. This study employs a two-compartment exchange model in order to assess the gradient pulse duration sensitivity of the apparent diffusional kurtosis (ADK), a quantitative index of diffusional non-Gaussianity. An analytic expression is derived and numerically evaluated for parameter ranges relevant to DW-MRI of brain. It is found that the ADK differs from the true diffusional kurtosis by at most a few percent. This suggests that ADK estimates for brain may be robust with respect to changes in pulse gradient duration.
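The paper derives the full finite-pulse-duration expression; the sketch below shows only the well-known narrow-pulse, slow-exchange limit, in which the kurtosis of a compartmental system reduces to K = 3·Var(D)/Mean(D)^2. The compartment fraction and diffusivities are illustrative.

```python
import numpy as np

def two_compartment_kurtosis(f1, d1, d2):
    """Diffusional kurtosis of a two-compartment system in the
    slow-exchange, narrow-pulse limit: K = 3 * Var(D) / Mean(D)**2."""
    f = np.array([f1, 1.0 - f1])
    d = np.array([d1, d2])
    mean_d = np.sum(f * d)
    var_d = np.sum(f * (d - mean_d) ** 2)
    return 3.0 * var_d / mean_d ** 2

# Illustrative brain-like values (um^2/ms): fast and slow compartments.
print(two_compartment_kurtosis(f1=0.7, d1=1.5, d2=0.4))
```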
Recommended approaches in the application of ...
ABSTRACT: Only a fraction of chemicals in commerce have been fully assessed for their potential hazards to human health, due to the difficulties involved in conventional regulatory tests. It has recently been proposed that quantitative transcriptomic data can be used to determine a benchmark dose (BMD) and estimate a point of departure (POD). Several studies have shown that transcriptional PODs correlate with PODs derived from analysis of pathological changes, but there is no consensus on how the genes used to derive a transcriptional POD should be selected. Because of the very large number of unrelated genes in gene expression data, the process of selecting subsets of informative genes is a major challenge. We used published microarray data from studies on rats exposed orally to multiple doses of six chemicals for 5, 14, 28, and 90 days. We evaluated eight different approaches to selecting genes for POD derivation and compared them to three previously proposed approaches. The transcriptional BMDs derived using these 11 approaches were compared with PODs derived from apical data that might be used in a human health risk assessment. We found that transcriptional benchmark dose values for all 11 approaches were remarkably aligned with different apical PODs, while a subset of between 3 and 8 of the approaches met standard statistical criteria across the 5-, 14-, 28-, and 90-day time points and thus qualify as effective estimates of apical PODs. Our r
SYN-JEM: A Quantitative Job-Exposure Matrix for Five Lung Carcinogens.
Peters, Susan; Vermeulen, Roel; Portengen, Lützen; Olsson, Ann; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Kromhout, Hans
2016-08-01
The use of measurement data in occupational exposure assessment allows more quantitative analyses of possible exposure-response relations. We describe a quantitative exposure assessment approach for five lung carcinogens (i.e., asbestos, chromium-VI, nickel, polycyclic aromatic hydrocarbons (via its proxy benzo(a)pyrene, BaP) and respirable crystalline silica (RCS)). A quantitative job-exposure matrix (JEM) was developed based on statistical modeling of large quantities of personal measurements. Empirical linear models were developed using personal occupational exposure measurements (n = 102306) from Europe and Canada, as well as auxiliary information such as job (industry), year of sampling, region, an a priori exposure rating of each job (none, low, and high exposed), sampling and analytical methods, and sampling duration. The model outcomes were used to create a JEM with a quantitative estimate of the level of exposure by job, year, and region. Decreasing time trends were observed for all agents between the 1970s and 2009, ranging from -1.2% per year for personal BaP and nickel exposures to -10.7% for asbestos (in the time period before an asbestos ban was implemented). Regional differences in exposure concentrations (adjusted for measured jobs, years of measurement, and sampling method and duration) varied by agent, ranging from a factor of 3.3 for chromium-VI up to a factor of 10.5 for asbestos. We estimated time-, job-, and region-specific exposure levels for four (asbestos, chromium-VI, nickel, and RCS) of the five considered lung carcinogens. Through statistical modeling of large amounts of personal occupational exposure measurement data we were able to derive a quantitative JEM for use in community-based studies. © The Author 2016. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
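A minimal sketch of how a percent-per-year trend falls out of a log-linear exposure model of the kind described; the simulated measurements and the -5% slope are illustrative, not SYN-JEM estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate lognormal personal exposure measurements with a downward trend.
years = rng.integers(1970, 2010, size=500)
true_trend = -0.05                                   # on the log scale
log_exposure = 2.0 + true_trend * (years - 1970) + rng.normal(0, 0.8, size=500)

# Fit log(exposure) ~ year; the slope converts to a percent change per year.
slope, intercept = np.polyfit(years - 1970, log_exposure, 1)
print(f"estimated trend: {100 * (np.exp(slope) - 1):.1f}% per year")
```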
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Tao; Tsui, Benjamin M. W.; Li, Xin
Purpose: The radioligand {sup 11}C-KR31173 has been introduced for positron emission tomography (PET) imaging of the angiotensin II subtype 1 receptor in the kidney in vivo. To study the biokinetics of {sup 11}C-KR31173 with a compartmental model, the input function is needed. Collection and analysis of arterial blood samples are the established approach to obtain the input function, but they are not feasible in patients with renal diseases. The goal of this study was to develop a quantitative technique that can provide an accurate image-derived input function (ID-IF) to replace conventional invasive arterial sampling, and to test the method in pigs with a view to translation into human studies. Methods: The experimental animals were injected with [{sup 11}C]KR31173 and scanned for up to 90 min with dynamic PET. Arterial blood samples were collected for the artery-derived input function (AD-IF) and used as a gold standard for the ID-IF. Before PET, magnetic resonance angiography of the kidneys was obtained to provide the anatomical information required for derivation of the recovery coefficients in the abdominal aorta, a requirement for partial volume correction of the ID-IF. Different image reconstruction methods, filtered back projection (FBP) and ordered subset expectation maximization (OS-EM), were investigated for the best trade-off between bias and variance of the ID-IF. The effects of kidney uptake on the quantitative accuracy of the ID-IF were also studied. Biological variables such as red blood cell binding and radioligand metabolism were also taken into consideration. A single blood sample was used for calibration in the later phase of the input function. Results: In the first 2 min after injection, the OS-EM based ID-IF was found to be biased, and the bias was found to be induced by the kidney uptake. No such bias was found with the FBP based image reconstruction method. However, the OS-EM based image reconstruction was found to reduce variance in the subsequent phase of the ID-IF. The combined use of FBP and OS-EM resulted in reduced bias and noise. After performing all the necessary corrections, the areas under the curves (AUCs) of the ID-IF were close to those of the AD-IF (average AUC ratio = 1 ± 0.08) during the early phase. When applied in a two-tissue-compartment kinetic model, the average difference between the estimated model parameters from the ID-IF and AD-IF was 10%, which was within the error of the estimation method. Conclusions: The bias of radioligand concentration in the aorta from the OS-EM image reconstruction is significantly affected by radioligand uptake in the adjacent kidney and cannot be neglected for quantitative evaluation. With careful calibrations and corrections, the ID-IF derived from quantitative dynamic PET images can be used as the input function of the compartmental model to quantify the renal kinetics of {sup 11}C-KR31173 in experimental animals, and the authors intend to evaluate this method in future human studies.
Wianowska, Dorota; Typek, Rafał; Dawidowicz, Andrzej L
2015-09-01
The analytical procedures for determining plant constituents involve the application of sample preparation methods to fully isolate and/or pre-concentrate the analyzed substances. High-temperature liquid extraction is still applied most frequently for this purpose. The present paper shows that high-temperature extraction cannot be applied for the analysis of chlorogenic acids (CQAs) and their derivatives in plants, as it causes CQA transformation leading to erroneous quantitative estimations of these compounds. Experiments performed on different plants (black elder, hawthorn, nettle, yerba maté, St John's wort and green coffee) demonstrate that the most appropriate method for the estimation of CQAs and CQA derivatives is the sea sand disruption method (SSDM), because it does not induce any transformation and/or degradation processes in the analyzed substances. Owing to the application of the SSDM, we found that the investigated plants, besides the four main CQAs, contain sixteen CQA derivatives, among them three quinic acids. The application of SSDM in plant analysis not only makes it possible to establish the true concentrations of individual CQAs in the examined plants, but also to determine which chlorogenic acid derivatives are native plant components and what their concentration levels are. More importantly, the application of SSDM in plant analysis makes it possible to eliminate errors that may arise, or might have arisen, in the study of chlorogenic acids and their derivatives in plant metabolism. Copyright © 2015 Elsevier Ltd. All rights reserved.
Dalle Carbonare, S; Folli, F; Patrini, E; Giudici, P; Bellazzi, R
2013-01-01
The increasing demand for health care services and the complexity of health care delivery require Health Care Organizations (HCOs) to approach clinical risk management through proper methods and tools. An important aspect of risk management is to exploit the analysis of medical injury compensation claims in order to reduce adverse events and, at the same time, to optimize the costs of health insurance policies. This work provides a probabilistic method to estimate the risk level of an HCO by computing quantitative risk indexes from medical injury compensation claims. Our method is based on the estimation of a loss probability distribution from compensation claims data through parametric and non-parametric modeling and Monte Carlo simulations. The loss distribution can be estimated both on the whole dataset and, thanks to the application of a Bayesian hierarchical model, on stratified data. The approach allows quantitative assessment of the risk structure of the HCO by analyzing the loss distribution and deriving its expected value and percentiles. We applied the proposed method to 206 cases of injuries with compensation requests collected from 1999 to the first semester of 2007 by the HCO of Lodi, in the northern part of Italy. We computed the risk indexes taking into account the different clinical departments and the different hospitals involved. The approach proved to be useful for understanding the HCO risk structure in terms of frequency, severity, and the expected and unexpected loss related to adverse events.
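A minimal sketch of the kind of claims-based loss simulation described, assuming Poisson claim counts and lognormal severities; these distributional choices and parameters are assumptions for illustration, not those fitted to the Lodi data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parametric model: Poisson claim frequency, lognormal severity.
lam = 25.0              # expected claims per year
mu, sigma = 9.5, 1.2    # lognormal parameters of the per-claim loss

n_sim = 100_000
annual_loss = np.zeros(n_sim)
for i in range(n_sim):
    n_claims = rng.poisson(lam)
    annual_loss[i] = rng.lognormal(mu, sigma, size=n_claims).sum()

print("expected annual loss:", annual_loss.mean())
print("99th percentile (unexpected loss proxy):", np.quantile(annual_loss, 0.99))
```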
Roy, Pierre-Marie; Than, Martin P.; Hernandez, Jackeline; Courtney, D. Mark; Jones, Alan E.; Penazola, Andrea; Pollack, Charles V.
2012-01-01
Background: Clinical guidelines recommend risk stratification of patients with acute pulmonary embolism (PE). Active cancer increases the risk of PE and worsens prognosis, but also causes incidental PE that may be discovered during cancer staging. No quantitative decision instrument has been derived specifically for patients with active cancer and PE. Methods: A classification and regression tree technique was used to reduce 25 variables prospectively collected from 408 patients with active cancer and PE. Selected variables were transformed into a logistic regression model, termed POMPE-C, and compared with the pulmonary embolism severity index (PESI) score to predict the outcome variable of death within 30 days. Validation was performed in an independent sample of 182 patients with active cancer and PE. Results: POMPE-C included eight predictors: body mass, heart rate >100, respiratory rate, SaO2%, respiratory distress, altered mental status, do not resuscitate status, and unilateral limb swelling. In the derivation set, the area under the ROC curve for POMPE-C was 0.84 (95% CI: 0.82-0.87), significantly greater than PESI (0.68, 0.60-0.76). In the validation sample, POMPE-C had an AUC of 0.86 (0.78-0.93). No patient with a POMPE-C estimate ≤5% died within 30 days (0/50, 0-7%), whereas 10/13 (77%, 46-95%) with a POMPE-C estimate >50% died within 30 days. Conclusion: In patients with active cancer and PE, POMPE-C demonstrated good prognostic accuracy for 30-day mortality and better performance than PESI. If validated in a large sample, POMPE-C may provide a quantitative basis for deciding treatment options for PE discovered during cancer staging and in advanced cancer. PMID:22475313
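The published POMPE-C coefficients are not reproduced here; the sketch below only shows the generic form of turning a fitted logistic model into a 30-day mortality estimate, with made-up coefficients for a subset of the listed predictors.

```python
import numpy as np

# Hypothetical coefficients for illustration only -- not the POMPE-C weights.
coefs = {
    "intercept": -4.0,
    "heart_rate_gt_100": 0.9,
    "altered_mental_status": 1.4,
    "respiratory_distress": 1.1,
    "do_not_resuscitate": 1.8,
}

def predicted_mortality(features):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = coefs["intercept"] + sum(coefs[k] * v for k, v in features.items())
    return 1.0 / (1.0 + np.exp(-z))

patient = {"heart_rate_gt_100": 1, "altered_mental_status": 0,
           "respiratory_distress": 1, "do_not_resuscitate": 0}
print(f"estimated 30-day mortality: {predicted_mortality(patient):.1%}")
```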
NASA Technical Reports Server (NTRS)
1979-01-01
Satellites provide an excellent platform from which to observe crops on the scale and frequency required to provide accurate crop production estimates on a worldwide basis. Multispectral imaging sensors aboard these platforms are capable of providing data from which to derive acreage and production estimates. The issue of sensor swath width was examined. The quantitative trade study necessary to resolve the combined issue of sensor swath width, number of platforms, and their orbits was generated and is included. Problems with different swath width sensors were analyzed, and an assessment of the system trade-offs of swath width versus number of satellites was made for achieving Global Crop Production Forecasting.
Sources and concentrations of aldehydes and ketones in indoor environments in the UK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crump, D.R.; Gardiner, D.
1989-01-01
Individual aldehydes and ketones can be separated, identified and quantitatively estimated by trapping them as their 2,4-dinitrophenylhydrazine (DNPH) derivatives and analyzing these by HPLC. Appropriate methods and detection limits are reported. Many sources of formaldehyde have been identified by this means, and some are found to emit other aldehydes and ketones. The application of this method to determine the concentrations of these compounds in the atmospheres of buildings is described, and the results are compared with those obtained using chromotropic acid or MBTH.
Estimation of groundwater and nutrient fluxes to the Neuse River estuary, North Carolina
Spruill, T.B.; Bratton, J.F.
2008-01-01
A study was conducted between April 2004 and September 2005 to estimate groundwater and nutrient discharge to the Neuse River estuary in North Carolina. The largest groundwater fluxes were observed to occur generally within 20 m of the shoreline. Groundwater flux estimates based on seepage meter measurements ranged from 2.86 × 10(8) to 4.33 × 10(8) m3 annually and are comparable to estimates made using radon, a simple water-budget method, and estimates derived by using Darcy's Law and previously published general aquifer characteristics of the area. The lower groundwater flux estimate (equal to about 9 m3 s-1), which assumed the narrowest groundwater discharge zone (20 m) of three zone widths selected for an area west of New Bern, North Carolina, most closely agrees with groundwater flux estimates made using radon (3-9 m3 s-1) and Darcy's Law (about 9 m3 s-1). A groundwater flux of 9 m3 s-1 is about 40% of the surface-water flow to the Neuse River estuary between Streets Ferry and the mouth of the estuary and about 7% of the surface-water inflow from areas upstream. Estimates of annual nitrogen (333 tonnes) and phosphorus (66 tonnes) fluxes from groundwater to the estuary, based on this analysis, are less than 6% of the nitrogen and phosphorus inputs derived from all sources (excluding oceanic inputs), and approximately 8% of the nitrogen and 17% of the phosphorus annual inputs from surface-water inflow to the Neuse River estuary, assuming a mean annual precipitation of 1.27 m. We provide quantitative evidence, derived from three methods, that the contribution of water and nutrients from groundwater discharge to the Neuse River estuary is relatively minor, particularly compared with upstream sources of water and nutrients and with bottom sediment sources of nutrients. Locally high groundwater discharges do occur, however, and could help explain the occurrence of localized phytoplankton blooms, submerged aquatic vegetation, or fish kills.
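A minimal sketch of the Darcy's Law side of this triangulation; the hydraulic conductivity, gradient, and cross-sectional area below are chosen purely for illustration (they happen to yield roughly 9 m3/s) and are not the published aquifer characteristics.

```python
def darcy_discharge(k_ms, gradient, area_m2):
    """Darcy's Law: Q = K * i * A, with hydraulic conductivity K (m/s),
    hydraulic gradient i (dimensionless), and flow cross-section A (m^2)."""
    return k_ms * gradient * area_m2

# Illustrative values only: K = 1e-4 m/s, i = 0.002, and a discharge
# cross-section of 4.5e7 m^2 along the estuary shoreline.
q = darcy_discharge(k_ms=1e-4, gradient=0.002, area_m2=4.5e7)
print(f"Q = {q:.1f} m3/s")
```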
Estimation of effective soil hydraulic properties at field scale via ground albedo neutron sensing
NASA Astrophysics Data System (ADS)
Rivera Villarreyes, C. A.; Baroni, G.; Oswald, S. E.
2012-04-01
Upscaling of soil hydraulic parameters is a big challenge in hydrological research, especially in model applications of water and solute transport processes. In this context, numerous attempts have been made to optimize soil hydraulic properties using observations of state variables such as soil moisture. However, in most cases the observations are limited to the point scale and then transferred to the model scale. In this way, inherent small-scale soil heterogeneities and the non-linearity of dominant processes introduce sources of error that can produce significant misinterpretation of hydrological scenarios and unrealistic predictions. On the other hand, remotely sensed soil moisture over large areas is also a new and promising approach to derive effective soil hydraulic properties over its observation footprint, but it is still limited to the soil surface. In this study we present a new methodology to derive soil moisture at the intermediate scale between point-scale observations and estimations at the remotely sensed scale. The data are then used for the estimation of effective soil hydraulic parameters. In particular, ground albedo neutron sensing (GANS) was used to derive non-invasive soil water content in a footprint of ca. 600 m diameter and a depth of a few decimeters. This approach is based on the crucial role of hydrogen, compared to other landscape materials, as a neutron moderator. As the natural neutron flux measured aboveground depends on soil water content, the vertical footprint of the GANS method, i.e. its penetration depth, does also. Firstly, this study was designed to evaluate the dynamics of the GANS vertical footprint and derive a mathematical model for its prediction. To test GANS soil moisture and its penetration depth, it was accompanied by other soil moisture measurements (FDR) located at 5, 20 and 40 cm depths over the GANS horizontal footprint in a sunflower field (Brandenburg, Germany). Secondly, a HYDRUS-1D model was set up with monitored values of crop height and meteorological variables as input during a four-month period. Parameter estimation (PEST) software was coupled to HYDRUS-1D in order to calibrate soil hydraulic properties based on soil water content data. Thirdly, effective soil hydraulic properties were derived from GANS soil moisture. Our observations show the potential of GANS to compensate for the lack of information at the intermediate scale in soil water content estimation and effective soil properties. Despite the different measurement volumes, GANS-derived soil water content compared quantitatively well with the FDR measurements at several depths. For one-hour estimations, the root mean square error was estimated as 0.019, 0.029 and 0.036 m3/m3 for 5 cm, 20 cm and 40 cm depths, respectively. In the context of soil hydraulic properties, this first application of the GANS method succeeded, and its estimations were comparable to those derived by other approaches.
Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.
Lee, Won Hee; Bullmore, Ed; Frangou, Sophia
2017-02-01
There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
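A minimal sketch of the simulation pipeline described, assuming a random symmetric matrix in place of the diffusion-spectrum-derived structural connectivity; the coupling strength, natural frequency, and sin(theta) observable are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_kuramoto(C, k=5.0, dt=0.01, steps=2000, f=0.06):
    """Euler integration of the Kuramoto model on coupling matrix C:
    dtheta_i/dt = omega_i + k * sum_j C_ij * sin(theta_j - theta_i)."""
    n = C.shape[0]
    omega = 2 * np.pi * f * np.ones(n)        # identical natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)
    series = np.empty((steps, n))
    for t in range(steps):
        coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + k * coupling)
        series[t] = np.sin(theta)             # a simple BOLD-like observable
    return series

# Random symmetric "structural" connectivity for 66 regions (illustrative).
n = 66
C = rng.random((n, n)) * (rng.random((n, n)) < 0.1)
C = (C + C.T) / 2
np.fill_diagonal(C, 0)
sim = simulate_kuramoto(C)
fc = np.corrcoef(sim.T)                       # simulated functional connectivity
print(fc.shape)
```

Empirical and simulated functional connectivity matrices obtained this way can then be thresholded and compared with the same graph theory measures.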
Quantitative analysis of anti-inflammatory drugs using FTIR-ATR spectrometry
NASA Astrophysics Data System (ADS)
Hassib, Sonia T.; Hassan, Ghaneya S.; El-Zaher, Asmaa A.; Fouad, Marwa A.; Taha, Enas A.
2017-11-01
Four simple, accurate, sensitive and economic Attenuated Total Reflectance-Fourier Transform Infrared Spectroscopic (ATR-FTIR) methods have been developed for the quantitative estimation of some non-steroidal anti-inflammatory drugs. The first method involves the determination of Etodolac by direct measurement of the absorbance at 1716 cm-1. In the second method, the second derivative of the IR spectra of Tolfenamic acid and its reported degradation product (2-chlorobenzoic acid) was used, and the amplitudes were measured at 1084.27 cm-1 and 1056.02 cm-1 for Tolfenamic acid and 2-chlorobenzoic acid, respectively. The third method used the first derivative of the IR spectra of Bumadizone and its reported degradation product, N,N-diphenylhydrazine, and the amplitudes were measured at 2874.98 cm-1 and 2160.32 cm-1 for Bumadizone and N,N-diphenylhydrazine, respectively. The fourth method depends on measuring the amplitude of Diacerein at 1059.18 cm-1 and of rhein, its reported degradation product, at 1079.32 cm-1 in their first derivative spectra. The four methods were successfully applied to the pharmaceutical formulations by extracting the active constituent into chloroform; the extract was directly measured in liquid phase mode using a specific cell. Moreover, validation of these methods was carried out following International Conference on Harmonisation (ICH) guidelines.
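Derivative amplitudes of the kind used in these methods can be obtained numerically; one common route is a Savitzky-Golay derivative filter, used below as an assumption since the abstract does not specify the differentiation algorithm. The synthetic band and window settings are illustrative.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic absorbance spectrum over an illustrative wavenumber grid.
wavenumber = np.arange(400.0, 4000.0, 2.0)                 # cm-1
spectrum = np.exp(-((wavenumber - 1716.0) / 25.0) ** 2)    # toy carbonyl band

# First- and second-derivative spectra via a Savitzky-Golay filter;
# delta is the sampling interval so the derivative is per cm-1.
d1 = savgol_filter(spectrum, window_length=11, polyorder=3, deriv=1, delta=2.0)
d2 = savgol_filter(spectrum, window_length=11, polyorder=3, deriv=2, delta=2.0)

# Amplitude read-out at a chosen analytical wavenumber (e.g., 1716 cm-1).
idx = np.argmin(np.abs(wavenumber - 1716.0))
print(d1[idx], d2[idx])
```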
The physical and biological basis of quantitative parameters derived from diffusion MRI
2012-01-01
Diffusion magnetic resonance imaging is a quantitative imaging technique that measures the underlying molecular diffusion of protons. Diffusion-weighted imaging (DWI) quantifies the apparent diffusion coefficient (ADC), which was first used to detect early ischemic stroke. However, this does not take account of the directional dependence of diffusion seen in biological systems (anisotropy). Diffusion tensor imaging (DTI) provides a mathematical model of diffusion anisotropy and is widely used. Parameters, including fractional anisotropy (FA), mean diffusivity (MD), and parallel and perpendicular diffusivity, can be derived to provide sensitive, but non-specific, measures of altered tissue structure. They are typically assessed in clinical studies by voxel-based or region-of-interest based analyses. The increasing recognition of the limitations of the diffusion tensor model has led to more complex multi-compartment models such as CHARMED, AxCaliber or NODDI being developed to estimate microstructural parameters including axonal diameter, axonal density and fiber orientations. However, these are not yet in routine clinical use due to lengthy acquisition times. In this review, I discuss how molecular diffusion may be measured using diffusion MRI, the biological and physical bases for the parameters derived from DWI and DTI, how these are used in clinical studies, and the prospect of more complex tissue models providing helpful micro-structural information. PMID:23289085
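For concreteness, the two most commonly reported DTI parameters follow directly from the tensor eigenvalues; a minimal sketch with illustrative white-matter-like values:

```python
import numpy as np

def md_fa(eigenvalues):
    """Mean diffusivity and fractional anisotropy from the three
    eigenvalues of the diffusion tensor."""
    l = np.asarray(eigenvalues, dtype=float)
    md = l.mean()
    fa = np.sqrt(1.5 * np.sum((l - md) ** 2) / np.sum(l ** 2))
    return md, fa

# Illustrative white-matter-like eigenvalues in um^2/ms.
print(md_fa([1.7, 0.4, 0.3]))
```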
Miller, M J; Maher, V M; McCormick, J J
1992-11-01
Quantitative two-dimensional gel electrophoresis was used to compare the cellular protein patterns of a normal foreskin-derived human fibroblast cell line (LG1) and three immortal derivatives of LG1. One derivative, designated MSU-1.1 VO, was selected for its ability to grow in the absence of serum and is non-tumorigenic in athymic mice. The other two strains were selected for focus formation following transfection with either Ha-ras or N-ras oncogenes and form high-grade malignant tumors. Correspondence and cluster analysis provided a nonbiased estimate of the relative similarity of the different two-dimensional patterns. These techniques separated the gel patterns into three distinct classes: LG1, MSU-1.1 VO, and the ras-transformed cell strains. The MSU-1.1 VO cells were more closely related to the parental LG1 than to the ras-transformed cells. The differences between the three classes were primarily quantitative in nature: 16% of the spots demonstrated statistically significant changes (P < 0.01, t test, mean ratio of intensity > 2) in the rate of incorporation of radioactive amino acids. The patterns from the two ras-transformed cell strains were similar, and variations in the expression of proteins that occurred between the separate experiments obscured consistent differences between the Ha-ras and N-ras transformed cells. However, while only 9 out of 758 spots were classified as different (1%), correspondence analysis could consistently separate the two ras transformants. One of these spots was five times more intense in the Ha-ras transformed cells than the N-ras. (ABSTRACT TRUNCATED AT 250 WORDS)
HEALTH AND ENVIRONMENTAL EFFECTS DOCUMENT ...
Health and Environmental Effects Documents (HEEDS) are prepared for the Office of Solid Waste and Emergency Response (OSWER). This document series is intended to support listings under the Resource Conservation and Recovery Act (RCRA) as well as to provide health-related limits and goals for emergency and remedial actions under the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA). Both published literature and information obtained from Agency Program Office files are evaluated as they pertain to potential human health, aquatic life and environmental effects of hazardous waste constituents. Several quantitative estimates are presented provided sufficient data are available. For systemic toxicants, these include Reference Doses (RfDs) for chronic and subchronic exposures for both the inhalation and oral exposures. In the case of suspected carcinogens, RfDs may not be estimated. Instead, a carcinogenic potency factor, or q1*, is provided. These potency estimates are derived for both oral and inhalation exposures where possible. In addition, unit risk estimates for air and drinking water are presented based on inhalation and oral data, respectively. Reportable quantities (RQs) based on both chronic toxicity and carcinogenicity are derived. The RQ is used to determine the quantity of a hazardous substance for which notification is required in the event of a release as specified under CERCLA.
Simchick, Gregory; Liu, Zhi; Nagy, Tamas; Xiong, May; Zhao, Qun
2018-03-25
To assess the feasibility of quantifying liver iron concentration (LIC) using R2* and quantitative susceptibility mapping (QSM) at a high field strength of 7 Tesla (T), five different concentrations of Fe-dextran were injected into 12 mice to produce various degrees of liver iron overload. After the mice were sacrificed, blood and liver samples were harvested. Ferritin enzyme-linked immunosorbent assay (ELISA) and inductively coupled plasma mass spectrometry were performed to quantify serum ferritin concentration and LIC. Multiecho gradient echo MRI was conducted to estimate R2* and the magnetic susceptibility of each liver sample through complex nonlinear least squares fitting and a morphology-enabled dipole inversion method, respectively. Average estimates of serum ferritin concentration, LIC, R2*, and susceptibility all show good linear correlations with injected Fe-dextran concentration; however, the standard deviations in the estimates of R2* and susceptibility increase with injected Fe-dextran concentration. Both R2* and susceptibility measurements also show good linear correlations with LIC (R2 = 0.78 and R2 = 0.91, respectively), and a susceptibility-to-LIC conversion factor of 0.829 ppm/(mg/g wet) is derived. The feasibility of quantifying LIC using MR-based R2* and QSM at a high field strength of 7T is demonstrated. Susceptibility quantification, which probes an intrinsic, field-strength-independent property of tissue, is more robust than R2* quantification in this ex vivo study. A susceptibility-to-LIC conversion factor is presented that agrees relatively well with previously published QSM-derived results obtained at 1.5T and 3T. © 2018 International Society for Magnetic Resonance in Medicine.
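A minimal sketch of the R2* side of the analysis, fitting a monoexponential decay to multiecho magnitudes; this magnitude-only least-squares fit is a simplification of the complex nonlinear fitting used in the study, and all echo times and parameters are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def monoexp(te_ms, s0, r2star):
    """Monoexponential signal decay: S(TE) = S0 * exp(-R2* * TE)."""
    return s0 * np.exp(-r2star * te_ms)

# Synthetic multiecho gradient-echo magnitudes (illustrative values).
te = np.array([1.0, 2.5, 4.0, 5.5, 7.0, 8.5])        # echo times, ms
rng = np.random.default_rng(3)
signal = monoexp(te, s0=1000.0, r2star=0.25) + rng.normal(0, 5, te.size)

(p_s0, p_r2s), _ = curve_fit(monoexp, te, signal, p0=(signal[0], 0.1))
print(f"R2* = {p_r2s * 1000:.0f} s^-1")               # per-ms -> per-s
```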
Unraveling spurious properties of interaction networks with tailored random networks.
Bialonski, Stephan; Wendler, Martin; Lehnertz, Klaus
2011-01-01
We investigate interaction networks that we derive from multivariate time series with methods frequently employed in diverse scientific fields such as biology, quantitative finance, physics, earth and climate sciences, and the neurosciences. Mimicking experimental situations, we generate time series with finite length and varying frequency content but from independent stochastic processes. Using the correlation coefficient and the maximum cross-correlation, we estimate interdependencies between these time series. With clustering coefficient and average shortest path length, we observe unweighted interaction networks, derived via thresholding the values of interdependence, to possess non-trivial topologies as compared to Erdös-Rényi networks, which would indicate small-world characteristics. These topologies reflect the mostly unavoidable finiteness of the data, which limits the reliability of typically used estimators of signal interdependence. We propose random networks that are tailored to the way interaction networks are derived from empirical data. Through an exemplary investigation of multichannel electroencephalographic recordings of epileptic seizures--known for their complex spatial and temporal dynamics--we show that such random networks help to distinguish network properties of interdependence structures related to seizure dynamics from those spuriously induced by the applied methods of analysis.
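A minimal sketch of the thresholding pipeline described, using independent Gaussian time series so that any apparent network structure is spurious by construction; the channel count, sample length, and edge-density threshold are illustrative.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)

# Finite-length time series from independent stochastic processes.
data = rng.normal(size=(30, 500))          # 30 channels, 500 samples
corr = np.abs(np.corrcoef(data))           # signal interdependence estimate
np.fill_diagonal(corr, 0)

# Threshold the interdependence values to obtain an unweighted network.
threshold = np.quantile(corr, 0.9)         # keep roughly the top 10% of entries
adjacency = (corr > threshold).astype(int)

g = nx.from_numpy_array(adjacency)
giant = max((g.subgraph(c) for c in nx.connected_components(g)), key=len)
print("clustering coefficient:", nx.average_clustering(g))
print("shortest path length:", nx.average_shortest_path_length(giant))
```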
Lanzafame, S; Giannelli, M; Garaci, F; Floris, R; Duggento, A; Guerrisi, M; Toschi, N
2016-05-01
An increasing number of studies have aimed to compare diffusion tensor imaging (DTI)-related parameters [e.g., mean diffusivity (MD), fractional anisotropy (FA), radial diffusivity (RD), and axial diffusivity (AD)] to complementary new indexes [e.g., mean kurtosis (MK)/radial kurtosis (RK)/axial kurtosis (AK)] derived through diffusion kurtosis imaging (DKI) in terms of their discriminative potential about tissue disease-related microstructural alterations. Given that the DTI and DKI models provide conceptually and quantitatively different estimates of the diffusion tensor, which can also depend on the fitting routine, the aim of this study was to investigate model- and algorithm-dependent differences in MD/FA/RD/AD and anisotropy mode (MO) estimates in diffusion-weighted imaging of human brain white matter. The authors employed (a) data collected from 33 healthy subjects (20-59 yr, F: 15, M: 18) within the Human Connectome Project (HCP) on a customized 3 T scanner, and (b) data from 34 healthy subjects (26-61 yr, F: 5, M: 29) acquired on a clinical 3 T scanner. The DTI model was fitted to b-value = 0 and b-value = 1000 s/mm(2) data, while the DKI model was fitted to data comprising b-value = 0, 1000 and 3000/2500 s/mm(2) [for dataset (a)/(b), respectively] through nonlinear and weighted linear least squares algorithms. In addition to MK/RK/AK maps, MD/FA/MO/RD/AD maps were estimated from both models and both algorithms. Using tract-based spatial statistics, the authors tested the null hypothesis of zero difference between the two MD/FA/MO/RD/AD estimates in brain white matter for both datasets and both algorithms. DKI-derived MD/FA/RD/AD and MO estimates were significantly higher and lower, respectively, than the corresponding DTI-derived estimates. All voxelwise differences extended over most of the white matter skeleton. Fractional differences between the two estimates [(DKI - DTI)/DTI] of most invariants were seen to vary with the invariant value itself as well as with MK/RK/AK values, indicating substantial anatomical variability of these discrepancies. In the HCP dataset, the median voxelwise percentage differences across the whole white matter skeleton were (nonlinear least squares algorithm) 14.5% (8.2%-23.1%) for MD, 4.3% (1.4%-17.3%) for FA, -5.2% (-48.7% to -0.8%) for MO, 12.5% (6.4%-21.2%) for RD, and 16.1% (9.9%-25.6%) for AD (all ranges computed as 0.01 and 0.99 quantiles). All differences/trends were consistent between the discovery (HCP) and replication (local) datasets and between estimation algorithms. However, the relationships between such trends, estimated diffusion tensor invariants, and kurtosis estimates were impacted by the choice of fitting routine. Model-dependent differences in the estimation of conventional indexes of MD/FA/MO/RD/AD can be well beyond commonly seen disease-related alterations. While estimating diffusion tensor-derived indexes using the DKI model may be advantageous in terms of mitigating the b-value dependence of diffusivity estimates, such estimates should not be referred to as conventional DTI-derived indexes in order to avoid confusion in interpretation as well as in multicenter comparisons. In order to assess the potential and advantages of DKI with respect to DTI, as well as to standardize diffusion-weighted imaging methods between centers, both conventional DTI-derived indexes and diffusion tensor invariants derived by fitting the non-Gaussian DKI model should be separately estimated and analyzed using the same combination of fitting routines.
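For a single gradient direction, the DKI signal representation ln S(b) = ln S0 - bD + (1/6) b^2 D^2 K can be fitted with a quadratic in b; a minimal sketch with illustrative b-values and ground truth, not the study's fitting routines:

```python
import numpy as np

def fit_dki_1d(bvals, signal):
    """Fit ln S(b) = ln S0 - b*D + (1/6) * b^2 * D^2 * K for one direction.

    Returns (D, K). Units: b in ms/um^2, D in um^2/ms, K dimensionless.
    """
    c2, c1, c0 = np.polyfit(bvals, np.log(signal), 2)   # quadratic in b
    d = -c1                                             # D from the linear term
    k = 6.0 * c2 / d**2                                 # K from the quadratic term
    return d, k

# Synthetic signals at b = 0, 1, 2.5 ms/um^2 (i.e., 0/1000/2500 s/mm^2).
b = np.array([0.0, 1.0, 2.5])
d_true, k_true = 1.0, 1.2
s = np.exp(-b * d_true + (b * d_true) ** 2 * k_true / 6.0)
print(fit_dki_1d(b, s))
```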
Xie, Weilong; Yu, Kangfu; Pauls, K Peter; Navabi, Alireza
2012-04-01
The effectiveness of image analysis (IA) compared with an ordinal visual scale, for quantitative measurement of disease severity, its application in quantitative genetic studies, and its effect on the estimates of genetic parameters were investigated. Studies were performed using eight backcross-derived families of common bean (Phaseolus vulgaris) (n = 172) segregating for the molecular marker SU91, known to be associated with a quantitative trait locus (QTL) for resistance to common bacterial blight (CBB), caused by Xanthomonas campestris pv. phaseoli and X. fuscans subsp. fuscans. Even though both IA and visual assessments were highly repeatable, IA was more sensitive in detecting quantitative differences between bean genotypes. The CBB phenotypic difference between the two SU91 genotypic groups was consistently more than fivefold for IA assessments but generally only two- to threefold for visual assessments. Results suggest that the visual assessment results in overestimation of the effect of QTL in genetic studies. This may have been caused by lack of additivity and uneven intervals of the visual scale. Although visual assessment of disease severity is a useful tool for general selection in breeding programs, assessments using IA may be more suitable for phenotypic evaluations in quantitative genetic studies involving CBB resistance as well as other foliar diseases.
Development of an SRM method for absolute quantitation of MYDGF/C19orf10 protein.
Dwivedi, Ravi C; Krokhin, Oleg V; El-Gabalawy, Hani S; Wilkins, John A
2016-06-01
To develop an MS-based selected reaction monitoring (SRM) assay for quantitation of myeloid-derived growth factor (MYDGF), formerly known as chromosome 19 open reading frame 10 (C19orf10), candidate reporter peptides were identified in digests of recombinant MYDGF. Isotopically labeled forms of these reporter peptides were employed as internal standards for assay development. Two reference peptides were selected, SYLYFQTFFK and GAEIEYAMAYSK, with respective LOQs of 42 and 380 attomoles per injection. Application of the assay to human serum and synovial fluid showed that the assay sensitivity was reduced and quantitation was not achievable. However, partial depletion of albumin and immunoglobulin from synovial fluids provided estimates of 300-650 femtomoles per injection (0.7-1.6 nanomolar (nM) fluid concentrations) in three of the six samples analyzed. A validated, sensitive assay for the quantitation of MYDGF in biological fluids was developed. However, the endogenous levels of MYDGF in such fluids are at or below the current levels of quantitation. The levels of MYDGF are lower than those previously reported using an ELISA. The current results suggest that additional steps may be required to remove high-abundance proteins or to enrich MYDGF for SRM-based quantitation. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Blood flow estimation in gastroscopic true-color images
NASA Astrophysics Data System (ADS)
Jacoby, Raffael S.; Herpers, Rainer; Zwiebel, Franz M.; Englmeier, Karl-Hans
1995-05-01
The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. The quantity of blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by approximately calculating the hemoglobin concentration based on a spectrophotometric analysis of endoscopic true-color images, which are recorded during routine examinations. A system model based on the Kubelka-Munk law of reflectance spectroscopy is derived, which enables an estimation of the hemoglobin concentration from the color values of the images. Additionally, a transformation of the color values is developed in order to improve luminance independence. Applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed. The obtained results are mostly independent of luminance. An initial validation of the presented method is performed by a quantitative estimation of its reproducibility.
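The reflectance-to-absorber mapping at the heart of such a model can be sketched with the Kubelka-Munk function K/S = (1 - R)^2 / (2R); treating this ratio as proportional to hemoglobin concentration is a simplification of the derived system model, and the pixel values below are illustrative.

```python
import numpy as np

def kubelka_munk(reflectance):
    """Kubelka-Munk function: K/S = (1 - R)^2 / (2R), which for a weakly
    absorbing, strongly scattering layer scales with absorber concentration."""
    r = np.clip(reflectance, 1e-6, 1.0)
    return (1.0 - r) ** 2 / (2.0 * r)

# Per-pixel reflectance of an illustrative mucosa image patch (0..1).
rng = np.random.default_rng(5)
reflectance = np.clip(rng.normal(0.55, 0.05, size=(4, 4)), 0.0, 1.0)

# Relative hemoglobin index per pixel (proportional, not absolute, values).
hb_index = kubelka_munk(reflectance)
print(hb_index.round(3))
```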
O'Sullivan, F; Kirrane, J; Muzi, M; O'Sullivan, J N; Spence, A M; Mankoff, D A; Krohn, K A
2010-03-01
Kinetic quantitation of dynamic positron emission tomography (PET) studies via compartmental modeling usually requires the time-course of the radiotracer concentration in the arterial blood as an arterial input function (AIF). For human and animal imaging applications, significant practical difficulties are associated with direct arterial sampling, and as a result there is substantial interest in alternative methods that require no blood sampling at the time of the study. A fixed population template input function derived from prior experience with directly sampled arterial curves is one possibility. Image-based extraction, including the requisite adjustment for spillover and recovery, is another approach. The present work considers a hybrid statistical approach based on a penalty formulation in which the information derived from a priori studies is combined in a Bayesian manner with information contained in the sampled image data in order to obtain an input function estimate. The absolute scaling of the input is achieved by an empirical calibration equation involving the injected dose together with the subject's weight, height and gender. The technique is illustrated in the context of (18)F-fluorodeoxyglucose (FDG) PET studies in humans. A collection of 79 arterially sampled FDG blood curves is used as a basis for a priori characterization of input function variability, including scaling characteristics. Data from a series of 12 dynamic cerebral FDG PET studies in normal subjects are used to evaluate the performance of the penalty-based AIF estimation technique. The focus of the evaluations is on quantitation of FDG kinetics over a set of 10 regional brain structures. As well as the new method, a fixed population template AIF and a direct AIF estimate based on segmentation are also considered. Kinetic analyses resulting from these three AIFs are compared with those resulting from arterially sampled AIFs. The proposed penalty-based AIF extraction method is found to achieve significant improvements over the fixed template and the segmentation methods. As well as achieving acceptable kinetic parameter accuracy, the quality of fit of the region of interest (ROI) time-course data based on the extracted AIF matches results based on arterially sampled AIFs. In comparison, significant deviation in the estimation of FDG flux and degradation in ROI data fit are found with the template and segmentation methods. The proposed AIF extraction method is recommended for practical use.
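A minimal sketch of a penalty formulation in this spirit: the image-derived curve is pulled toward a population prior by a quadratic penalty, whose closed-form minimizer is a pointwise weighted average. The actual method is richer (scaling calibration, spillover and recovery handling), and the curves and weight lam below are illustrative.

```python
import numpy as np

def penalized_aif(image_curve, prior_curve, lam=1.0):
    """Penalty-based estimate: minimize ||x - y||^2 + lam * ||x - m||^2,
    where y is the image-derived curve and m the population prior.
    Setting the gradient to zero gives x = (y + lam * m) / (1 + lam)."""
    return (image_curve + lam * prior_curve) / (1.0 + lam)

t = np.linspace(0, 60, 61)                        # minutes
prior = 10 * t * np.exp(-t / 3.0)                 # illustrative population AIF shape
rng = np.random.default_rng(6)
image = prior * 0.8 + rng.normal(0, 1.0, t.size)  # noisy, biased image curve

aif = penalized_aif(image, prior, lam=0.5)
print(aif[:5].round(2))
```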
NASA Astrophysics Data System (ADS)
Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Nickl, E.; Seo, D. J.; Kim, B.; Zhang, J.; Qi, Y.
2015-12-01
The processing of radar-only precipitation via the reanalysis of the National Mosaic and Multi-Sensor Quantitative Precipitation Estimation system (NMQ/Q2), based on the WSR-88D Next-Generation Radar (Nexrad) network over the Continental United States (CONUS), is complete for the period 2002 to 2011. While this constitutes a unique opportunity to study precipitation processes at higher resolution than conventionally possible (1-km, 5-min), the long-term radar-only product needs to be merged with in-situ information in order to be suitable for hydrological, meteorological and climatological applications. The radar-gauge merging is performed by using rain gauge information at daily (Global Historical Climatology Network-Daily: GHCN-D), hourly (Hydrometeorological Automated Data System: HADS), and 5-min (Automated Surface Observing Systems: ASOS; Climate Reference Network: CRN) resolution. The challenges related to incorporating networks of differing resolution and quality to generate long-term, large-scale gridded estimates of precipitation are enormous. In that perspective, we are implementing techniques for merging the rain gauge datasets and the radar-only estimates such as Inverse Distance Weighting (IDW), Simple Kriging (SK), Ordinary Kriging (OK), and Conditional Bias-Penalized Kriging (CBPK). An evaluation of the different radar-gauge merging techniques is presented, and we provide an estimate of uncertainty for the gridded estimates. In addition, comparisons with a suite of lower-resolution QPEs derived from ground-based radar measurements (Stage IV) are provided in order to give a detailed picture of the improvements and remaining challenges.
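Of the listed merging techniques, inverse distance weighting is the simplest to sketch; below it interpolates gauge-minus-radar differences onto a grid. The locations, values, and distance power are illustrative, and the operational schemes are considerably more elaborate.

```python
import numpy as np

def idw(gauge_xy, gauge_vals, grid_xy, power=2.0):
    """Inverse distance weighted interpolation of gauge values (e.g., of
    gauge-minus-radar differences) onto grid points."""
    d = np.linalg.norm(grid_xy[:, None, :] - gauge_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * gauge_vals).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(7)
gauges = rng.uniform(0, 100, size=(25, 2))        # gauge locations, km
bias = rng.normal(0.0, 2.0, size=25)              # gauge-radar differences, mm
grid = np.stack(np.meshgrid(np.arange(0, 100, 10),
                            np.arange(0, 100, 10)), -1).reshape(-1, 2)
print(idw(gauges, bias, grid.astype(float)).round(2)[:10])
```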
Alania, M; De Backer, A; Lobato, I; Krause, F F; Van Dyck, D; Rosenauer, A; Van Aert, S
2017-10-01
In this paper, we investigate how precisely the atoms of a small nanocluster can ultimately be located in three dimensions (3D) from a tilt series of images acquired using annular dark field (ADF) scanning transmission electron microscopy (STEM). To this end, we derive an expression for the statistical precision with which the 3D atomic position coordinates can be estimated in a quantitative analysis. Evaluating this statistical precision as a function of the microscope settings also allows us to derive the optimal experimental design. In this manner, the optimal angular tilt range, required electron dose, optimal detector angles, and number of projection images can be determined. Copyright © 2016 Elsevier B.V. All rights reserved.
Porter, K.A.; Jaiswal, K.S.; Wald, D.J.; Greene, M.; Comartin, Craig
2008-01-01
The U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) Project and the Earthquake Engineering Research Institute's World Housing Encyclopedia (WHE) are creating a global database of building stocks and their earthquake vulnerability. The WHE already represents a growing, community-developed public database of global housing and its detailed structural characteristics. It currently contains more than 135 reports on particular housing types in 40 countries. The WHE-PAGER effort extends the WHE in several ways: (1) by addressing non-residential construction; (2) by quantifying the prevalence of each building type in both rural and urban areas; (3) by addressing day and night occupancy patterns; (4) by adding quantitative vulnerability estimates from judgment or statistical observation; and (5) by analytically deriving alternative vulnerability estimates using, in part, laboratory testing.
Koloušková, Pavla; Stone, James D.
2017-01-01
Accurate gene expression measurements are essential in studies of both crop and wild plants. Reverse transcription quantitative real-time PCR (RT-qPCR) has become a preferred tool for gene expression estimation. A selection of suitable reference genes for the normalization of transcript levels is an essential prerequisite of accurate RT-qPCR results. We evaluated the expression stability of eight candidate reference genes across roots, leaves, flower buds and pollen of Silene vulgaris (bladder campion), a model plant for the study of gynodioecy. As random priming of cDNA is recommended for the study of organellar transcripts and poly(A) selection is indicated for nuclear transcripts, we estimated gene expression with both random-primed and oligo(dT)-primed cDNA. Accordingly, we determined reference genes that perform well with oligo(dT)- and random-primed cDNA, making it possible to estimate levels of nucleus-derived transcripts in the same cDNA samples as used for organellar transcripts, a key benefit in studies of cyto-nuclear interactions. Gene expression variance was estimated by RefFinder, which integrates four different analytical tools. The SvACT and SvGAPDH genes were the most stable candidates across various organs of S. vulgaris, regardless of whether pollen was included or not. PMID:28817728
Chieng, Norman; Mizuno, Masayasu; Pikal, Michael
2013-01-01
The purposes of this study are to characterize the relaxation dynamics in complex freeze-dried formulations and to investigate the quantitative relationship between the structural relaxation time as measured by a thermal activity monitor (TAM) and that estimated from the width of the glass transition region (ΔTg). The latter method has advantages over TAM because it is simple and quick. As part of this objective, we evaluate the accuracy of estimating relaxation time data at higher temperatures (50°C and 60°C) from TAM data at a lower temperature (40°C) and glass transition region width (ΔTg) data obtained by differential scanning calorimetry. The formulations studied here were hydroxyethyl starch (HES)-disaccharide, HES-polyol and HES-disaccharide-polyol at various ratios. We also re-examine, using TAM-derived relaxation times, the correlation between protein stability (human growth hormone, hGH) and relaxation times explored in a previous report, which employed relaxation time data obtained from ΔTg. Results show that most of the freeze-dried formulations exist in a single amorphous phase, and structural relaxation times were successfully measured for these systems. We find a reasonably good correlation between TAM-measured relaxation times and the corresponding estimates based on ΔTg, but the agreement is only qualitative. The comparison plot showed that the TAM data are directly proportional to the 1/3 power of the ΔTg data, after correcting for an offset. Nevertheless, the correlation between hGH stability and relaxation time remained qualitatively the same as found with ΔTg-derived relaxation data, and it was found that a modest extrapolation of TAM data to higher temperatures, using the ΔTg method and TAM data at 40°C, resulted in quantitative agreement with TAM measurements made at 50°C and 60°C, provided the TAM experiment temperature is well below the Tg of the sample. PMID:23608636
Estimating skin sensitization potency from a single dose LLNA.
Roberts, David W
2015-04-01
Skin sensitization is an important aspect of safety assessment. The mouse local lymph node assay (LLNA), developed in the 1990s, is an in vivo test used for skin sensitization hazard identification and characterization. More recently, a reduced version of the LLNA (rLLNA) has been developed as a means of identifying, but not quantifying, sensitization hazard. The work presented here is aimed at enabling rLLNA data to be used to give quantitative potency information that can be used, inter alia, in modeling and read-across approaches to non-animal based potency estimation. A probit function has been derived enabling estimation of EC3 from a single dose. This has led to development of a modified version of the rLLNA, whereby as a general principle the SI value at 10%, or at a lower concentration if 10% is not testable, is used to calculate the EC3. This version of the rLLNA has been evaluated against a selection of chemicals for which full LLNA data are available, and has been shown to give EC3 values in good agreement with those derived from the full LLNA. Copyright © 2015 Elsevier Inc. All rights reserved.
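The published probit function is not reproduced here; the sketch below shows a simpler single-point reduction, assuming the stimulation index rises linearly from SI = 1 at zero concentration, to illustrate how one (concentration, SI) pair can yield an EC3 estimate.

```python
def ec3_single_dose(conc_pct, si):
    """EC3 from a single (concentration, SI) point, assuming the stimulation
    index rises linearly from SI = 1 at 0% -- a simplification; the published
    method instead uses a probit function derived from historical LLNA data."""
    if si <= 3.0:
        raise ValueError("SI at the tested dose must exceed 3 to interpolate")
    return conc_pct * (3.0 - 1.0) / (si - 1.0)

# Example: an SI of 7 measured at the 10% dose implies EC3 of about 3.3%.
print(f"EC3 = {ec3_single_dose(10.0, 7.0):.1f}%")
```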
NASA Astrophysics Data System (ADS)
Zhou, H. W.; Yi, H. Y.; Mishnaevsky, L.; Wang, R.; Duan, Z. Q.; Chen, Q.
2017-05-01
A modeling approach to the time-dependent properties of Glass Fiber Reinforced Polymer (GFRP) composites is of special interest for the quantitative description of their long-term behavior. An electronic creep machine is employed to investigate the time-dependent deformation of four specimens of dog-bone-shaped GFRP composites at various stress levels. A negative exponent function based on structural changes is introduced to describe the damage evolution of material properties during the creep test. Accordingly, a new creep constitutive equation, referred to as the fractional derivative Maxwell model, is suggested to characterize the time-dependent behavior of GFRP composites by replacing the Newtonian dashpot with the Abel dashpot in the classical Maxwell model. The analytic solution for the fractional derivative Maxwell model is given and the relevant parameters are determined. The results estimated by the fractional derivative Maxwell model proposed in the paper are in good agreement with the experimental data. It is shown that the new creep constitutive model needs few parameters to represent various time-dependent behaviors.
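A minimal sketch of the creep response of such a spring-plus-Abel-dashpot ("springpot") arrangement, for which the creep compliance is J(t) = 1/E + t^alpha / (eta_alpha * Gamma(1 + alpha)); the parameter values below are illustrative, not the fitted GFRP values.

```python
import numpy as np
from scipy.special import gamma

def fractional_maxwell_creep(t, sigma0, E, eta_alpha, alpha):
    """Creep strain of a spring (E) in series with an Abel dashpot of order
    alpha: eps(t) = sigma0 * (1/E + t**alpha / (eta_alpha * Gamma(1 + alpha))).
    Setting alpha = 1 recovers the classical Maxwell model."""
    return sigma0 * (1.0 / E + t ** alpha / (eta_alpha * gamma(1.0 + alpha)))

# Illustrative parameters: stress in MPa, modulus in MPa, time in hours.
t = np.linspace(0.0, 1000.0, 5)
print(fractional_maxwell_creep(t, sigma0=50.0, E=20e3, eta_alpha=5e5, alpha=0.3))
```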
NASA Astrophysics Data System (ADS)
Ishihara, Mariko; Sakagami, Hiroshi; Kawase, Masami; Motohashi, Noboru
The relationship between the cytotoxicity of N-heterocycles (13 4-trifluoromethylimidazole, 15 phenoxazine and 12 5-trifluoromethyloxazole derivatives), O-heterocycles (11 3-formylchromone and 20 coumarin derivatives) and seven vitamin K2 derivatives against eight tumor cell lines (HSC-2, HSC-3, HSC-4, T98G, HSG, HepG2, HL-60, MT-4) and a maximum of 15 chemical descriptors was investigated using the CAChe Worksystem 4.9 project reader. After determination of the conformation of these compounds and approximation to the molecular form present in vivo (biomimetic) by CONFLEX5, the most stable structure was determined by CAChe Worksystem 4.9 MOPAC (PM3). The present study found that cytotoxic activity was best related to the molecular shape or molecular weight of these compounds. Their biological activities can be estimated from hardness and softness, and by using η-χ activity diagrams.
Hyshka, Elaine; Karekezi, Kamagaju; Tan, Benjamin; Slater, Linda G; Jahrig, Jesse; Wild, T Cameron
2017-03-20
A growing body of research assesses population need for substance use services. However, the extent to which survey research incorporates expert versus consumer perspectives on service need is unknown. We conducted a large, international review to (1) describe extant research on population need for substance use services, and the extent to which it incorporates expert and consumer perspectives on service need, (2) critically assess methodological and measurement approaches used to study consumer-defined need, and (3) examine the potential for existing research that prioritizes consumer perspectives to inform substance use service system planning. Systematic searches of seven databases identified 1930 peer-reviewed articles addressing population need for substance use services between January 1980 and May 2015. Empirical studies (n = 1887) were categorized according to the source(s) of data used to derive population estimates of service need (administrative records, biological samples, qualitative data, and/or quantitative surveys). Quantitative survey studies (n = 1594) were categorized as to whether service need was assessed from an expert and/or consumer perspective; studies employing consumer-defined need measures (n = 217) received further in-depth quantitative coding to describe study designs and measurement strategies. Almost all survey studies (96%; n = 1534) used diagnostically-oriented measures derived from an expert perspective to assess service need. Of the small number (14%, n = 217) of survey studies that assessed consumers' perspectives, most (77%) measured perceived need for generic services (i.e. 'treatment'), with fewer examining self-assessed barriers to service use (42%) or informal help-seeking from family and friends (10%). Unstandardized measures were commonly used, and very little research was longitudinal or tested hypotheses. Only one study used a consumer-defined need measure to estimate required service system capacity. Rhetorical calls for including consumer perspectives in substance use service system planning are belied by the empirical literature, which is dominated by expert-driven approaches to measuring population need. Studies addressing consumer-defined need for substance use services are conceptually underdeveloped, and exhibit methodological and measurement weaknesses. Further scholarship is needed to integrate multidisciplinary perspectives in this literature, and fully realize the promise of incorporating consumer perspectives into substance use service system planning.
McGarry, Bryony L; Rogers, Harriet J; Knight, Michael J; Jokivarsi, Kimmo T; Sierra, Alejandra; Gröhn, Olli Hj; Kauppinen, Risto A
2016-08-01
Quantitative T2 relaxation magnetic resonance imaging allows estimation of stroke onset time. We aimed to examine the accuracy of quantitative T1 and quantitative T2 relaxation times, alone and in combination, in providing estimates of stroke onset time in a rat model of permanent focal cerebral ischemia, and to map the spatial distribution of elevated quantitative T1 and quantitative T2 to assess tissue status. Permanent middle cerebral artery occlusion was induced in Wistar rats. Animals were scanned at 9.4T for quantitative T1, quantitative T2, and the trace of the diffusion tensor (Dav) up to 4 h post-middle cerebral artery occlusion. Time courses of the differentials of quantitative T1 and quantitative T2 between ischemic and non-ischemic contralateral brain tissue (ΔT1, ΔT2) and volumes of tissue with elevated T1 and T2 relaxation times (f1, f2) were determined. TTC staining was used to highlight permanent ischemic damage. ΔT1, ΔT2, f1, f2, and the volume of tissue with both elevated quantitative T1 and quantitative T2 (V(Overlap)) increased with time post-middle cerebral artery occlusion, allowing stroke onset time to be estimated. V(Overlap) provided the most accurate estimate, with an uncertainty of ±25 min. At all time points, regions with elevated relaxation times were smaller than areas with Dav-defined ischemia. Stroke onset time can be determined from quantitative T1 and quantitative T2 relaxation times and tissue volumes. Combining quantitative T1 and quantitative T2 provides the most accurate estimate and potentially identifies irreversibly damaged brain tissue. © 2016 World Stroke Organization.
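As an illustration of how such differentials translate into an onset-time estimate, the sketch below assumes ΔT1 and ΔT2 grow approximately linearly after occlusion and inverts the measured differentials with placeholder calibration slopes; the slopes and the simple averaging are assumptions, not the study's fitted values.

```python
# Minimal sketch of onset-time estimation from relaxation-time differentials.
# Assumes Delta-T1 and Delta-T2 grow ~linearly with time post-occlusion; the
# slopes below are placeholder calibration values, not those of the study.
def onset_elapsed_minutes(d_t1_ms, d_t2_ms,
                          slope_t1=0.05, slope_t2=0.04):  # ms/min, assumed
    est_t1 = d_t1_ms / slope_t1
    est_t2 = d_t2_ms / slope_t2
    return 0.5 * (est_t1 + est_t2)  # simple average of the two estimates

print(onset_elapsed_minutes(d_t1_ms=6.0, d_t2_ms=4.4))  # ~ 115 min
```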
Quantifying heterogeneity in human tumours using MRI and PET.
Asselin, Marie-Claude; O'Connor, James P B; Boellaard, Ronald; Thacker, Neil A; Jackson, Alan
2012-03-01
Most tumours, even those of the same histological type and grade, demonstrate considerable biological heterogeneity. Variations in genomic subtype, growth factor expression and local microenvironmental factors can result in regional variations within individual tumours. For example, localised variations in tumour cell proliferation, cell death, metabolic activity and vascular structure will be accompanied by variations in oxygenation status, pH and drug delivery that may directly affect therapeutic response. Documenting and quantifying regional heterogeneity within the tumour requires histological or imaging techniques. There is increasing evidence that quantitative imaging biomarkers can be used in vivo to provide important, reproducible and repeatable estimates of tumoural heterogeneity. In this article we review the imaging methods available to provide appropriate biomarkers of tumour structure and function. We also discuss the significant technical issues involved in the quantitative estimation of heterogeneity and the range of descriptive metrics that can be derived. Finally, we have reviewed the existing clinical evidence that heterogeneity metrics provide additional useful information in drug discovery and development and in clinical practice. Copyright © 2012 Elsevier Ltd. All rights reserved.
Quantitative ESD Guidelines for Charged Spacecraft Derived from the Physics of Discharges
NASA Technical Reports Server (NTRS)
Frederickson, A. R.
1992-01-01
Quantitative guidelines are proposed for Electrostatic Discharge (ESD) pulse shape on charged spacecraft. The guidelines are based on existing ground test data and on a physical description of the pulsed discharge process. The guidelines are designed to predict pulse shape for surface charging and internal charging on a wide variety of spacecraft structures. The pulses depend on the area of the sample, its capacitance to ground, and the strength of the electric field in the vacuum adjacent to the charged surface. By knowing the pulse shape (current vs. time), one can determine whether nearby circuits are threatened by the pulse. The quantitative guidelines might be used to estimate the level of threat to an existing spacecraft, or to redesign a spacecraft to reduce its pulses to a known safe level. The experiments which provide the data and the physics that allow one to interpret the data will be discussed, culminating in examples of how to predict pulse shape and size. This method has been used, but not confirmed, on several spacecraft.
Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images
Frey, Eric C.; Humm, John L.; Ljungberg, Michael
2012-01-01
The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429
Connor, Kevin; Magee, Brian
2014-10-01
This paper presents a risk assessment of worker exposure to metal residues in laundered shop towels. The concentrations of 27 metals measured in a synthetic sweat leachate were used to estimate the releasable quantity of metals that could be transferred to workers' skin. Worker exposure was evaluated quantitatively with an exposure model that focused on towel-to-hand transfer and subsequent hand-to-food or hand-to-mouth transfers. The exposure model was based on conservative but reasonable assumptions regarding towel use, and on default exposure factor values from the published literature or regulatory guidance. Transfer coefficients were derived from studies representative of the exposures of towel users. Contact frequencies were based on assumed high-end use of shop towels, but constrained by a theoretical maximum dermal loading. The risk estimates developed for all metals were below applicable regulatory risk benchmarks for workers. The risk assessment for lead utilized the Adult Lead Model and concluded that predicted lead intakes do not constitute a significant health hazard based on potential worker exposures. Uncertainties are discussed in relation to the overall confidence in the exposure estimates developed for each exposure pathway and the likelihood that the exposure model under- or overestimates worker exposures and risk. Copyright © 2014 Elsevier Inc. All rights reserved.
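The transfer algebra in such a model reduces to a chain of multiplications capped by a maximum dermal loading. The sketch below is a schematic of that chain; every parameter value is an invented placeholder, since the paper's actual transfer coefficients and exposure factors are not given in the abstract.

```python
# Schematic towel-to-hand / hand-to-mouth exposure arithmetic.
# All parameter values are invented placeholders, not the paper's estimates.
def daily_intake_mg(leachable_mg_per_towel: float,
                    towel_hand_tc: float = 0.10,   # fraction transferred, assumed
                    hand_mouth_tc: float = 0.05,   # fraction per contact, assumed
                    contacts_per_day: int = 20,
                    max_hand_loading_mg: float = 1.0) -> float:
    hand_loading = min(leachable_mg_per_towel * towel_hand_tc,
                       max_hand_loading_mg)        # theoretical maximum cap
    return hand_loading * hand_mouth_tc * contacts_per_day

print(daily_intake_mg(0.5))  # mg/day for a 0.5 mg releasable quantity
```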
Contrast detection in fluid-saturated media with magnetic resonance poroelastography
Perriñez, Phillip R.; Pattison, Adam J.; Kennedy, Francis E.; Weaver, John B.; Paulsen, Keith D.
2010-01-01
Purpose: Recent interest in the poroelastic behavior of tissues has led to the development of magnetic resonance poroelastography (MRPE) as an alternative to single-phase MR elastographic image reconstruction. In addition to the elastic parameters (i.e., Lamé’s constants) commonly associated with magnetic resonance elastography (MRE), MRPE enables estimation of the time-harmonic pore-pressure field induced by external mechanical vibration. Methods: This study presents numerical simulations that demonstrate the sensitivity of the computed displacement and pore-pressure fields to a priori estimates of the experimentally derived model parameters. In addition, experimental data collected in three poroelastic phantoms are used to assess the quantitative accuracy of MR poroelastographic imaging through comparisons with both quasistatic and dynamic mechanical tests. Results: The results indicate hydraulic conductivity to be the dominant parameter influencing the deformation behavior of poroelastic media under conditions applied during MRE. MRPE estimation of the matrix shear modulus was bracketed by the values determined from independent quasistatic and dynamic mechanical measurements as expected, whereas the contrast ratios for embedded inclusions were quantitatively similar (10%–15% difference between the reconstructed images and the mechanical tests). Conclusions: The findings suggest that the addition of hydraulic conductivity and a viscoelastic solid component as parameters in the reconstruction may be warranted. PMID:20831058
Takabatake, Reona; Akiyama, Hiroshi; Sakata, Kozue; Onishi, Mari; Koiwa, Tomohiro; Futo, Satoshi; Minegishi, Yasutaka; Teshima, Reiko; Mano, Junichi; Furui, Satoshi; Kitta, Kazumi
2011-01-01
A novel real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) soybean event, A2704-12. During the plant transformation, DNA fragments derived from the pUC19 plasmid were integrated into A2704-12, and the region was found to be A2704-12 specific. The pUC19-derived DNA sequences were used as primers for the specific detection of A2704-12. We first tried to construct a standard plasmid for A2704-12 quantification using pUC19. However, non-specific signals appeared in both qualitative and quantitative PCR analyses using the specific primers with pUC19 as a template, and we therefore constructed a plasmid using pBR322. The conversion factor (C(f)), which is required to calculate the amount of the genetically modified organism (GMO), was experimentally determined with two real-time PCR instruments, the Applied Biosystems 7900HT and the Applied Biosystems 7500. The determined C(f) values were both 0.98. The quantitative method was evaluated by means of blind tests in multi-laboratory trials using the two real-time PCR instruments. The limit of quantitation for the method was estimated to be 0.1%. Trueness and precision were evaluated as the bias and the reproducibility relative standard deviation (RSD(R)), and the determined bias and RSD(R) values for the method were each less than 20%. These results suggest that the developed method would be suitable for practical analyses for the detection and quantification of A2704-12.
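The conversion-factor arithmetic works as follows: copy numbers for the event-specific and endogenous reference targets are read off standard curves, and their ratio divided by C(f) gives the GMO percentage. A minimal sketch, with assumed standard-curve parameters:

```python
# Conversion-factor arithmetic for event-specific GMO quantification: copy
# numbers come from standard curves, and C(f) converts the copy-number ratio
# to a GMO percentage. Standard-curve parameters and Ct values are assumed.
def copies_from_ct(ct: float, intercept: float, slope: float) -> float:
    """Standard curve: Ct = intercept + slope * log10(copies)."""
    return 10 ** ((ct - intercept) / slope)

def gmo_percent(ct_event, ct_reference, cf=0.98,
                intercept=40.0, slope=-3.33):  # assumed curve parameters
    event = copies_from_ct(ct_event, intercept, slope)
    ref = copies_from_ct(ct_reference, intercept, slope)
    return (event / ref) / cf * 100.0

print(f"{gmo_percent(ct_event=33.5, ct_reference=23.5):.2f} % GMO")  # ~0.10 %
```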
Quantitative risk assessment for a glass fiber insulation product.
Fayerweather, W E; Bender, J R; Hadley, J G; Eastes, W
1997-04-01
California Proposition 65 (Prop65) provides a mechanism by which the manufacturer may perform a quantitative risk assessment to be used in determining the need for cancer warning labels. This paper presents a risk assessment under this regulation for professional and do-it-yourself insulation installers. It determines the level of insulation glass fiber exposure (specifically Owens Corning's R-25 PinkPlus with Miraflex) that, assuming a working lifetime exposure, poses no significant cancer risk under Prop65's regulations. "No significant risk" is defined under Prop65 as a lifetime risk of no more than one additional cancer case per 100,000 exposed persons, and nonsignificant exposure is defined as a working lifetime exposure associated with "no significant risk." This determination can be carried out despite the fact that the relevant underlying studies (i.e., chronic inhalation bioassays) of comparable glass wool fibers do not show tumorigenic activity. Nonsignificant exposures are estimated from (1) the most recent RCC chronic inhalation bioassay of nondurable fiberglass in rats; (2) intraperitoneal fiberglass injection studies in rats; (3) a distributional, decision analysis approach applied to four chronic inhalation rat bioassays of conventional fiberglass; (4) an extrapolation from the RCC chronic rat inhalation bioassay of durable refractory ceramic fibers; and (5) an extrapolation from the IOM chronic rat inhalation bioassay of durable E glass microfibers. When the EPA linear nonthreshold model is used, central estimates of nonsignificant exposure range from 0.36 fibers/cc (for the RCC chronic inhalation bioassay of fiberglass) through 21 fibers/cc (for the i.p. fiberglass injection studies). Lower 95% confidence bounds on these estimates vary from 0.17 fibers/cc through 13 fibers/cc. Estimates derived from the distributional approach or from applying the EPA linear nonthreshold model to chronic bioassays of durable fibers such as refractory ceramic fiber or E glass microfibers are intermediate to the other approaches. Estimates based on the Weibull 1.5-hit nonthreshold and 2-hit threshold models exceed by at least a factor of 10 the corresponding EPA linear nonthreshold estimates. The lowest nonsignificant exposures derived in this assessment are at least a factor of two higher than field exposures measured for professionals installing the R-25 fiberglass insulation product and are orders of magnitude higher than the estimated lifetime exposures for do-it-yourselfers.
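Under the linear nonthreshold model referenced above, lifetime risk is the product of a unit risk (risk per fiber/cc) and exposure, so the Prop65 "no significant risk" exposure follows by simple division. A minimal sketch, with an assumed unit-risk value chosen only to land near the order of the paper's RCC-based central estimate:

```python
# Back-of-envelope Prop65 calculation under a linear nonthreshold model:
# risk = unit_risk * exposure, so the nonsignificant exposure is the dose at
# a 1-in-100,000 lifetime risk. The unit-risk value below is a placeholder.
PROP65_RISK = 1e-5  # one additional cancer per 100,000

def nonsignificant_exposure(unit_risk_per_fcc: float) -> float:
    """Exposure (fibers/cc) whose lifetime risk equals the Prop65 benchmark."""
    return PROP65_RISK / unit_risk_per_fcc

# An assumed unit risk of 2.8e-5 per fiber/cc gives ~0.36 fibers/cc, the
# order of the paper's central estimate from the RCC fiberglass bioassay.
print(nonsignificant_exposure(2.8e-5))
```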
NASA Technical Reports Server (NTRS)
Bizzell, R. M.; Feiveson, A. H.; Hall, F. G.; Bauer, M. E.; Davis, B. J.; Malila, W. A.; Rice, D. P.
1975-01-01
The CITARS was an experiment designed to quantitatively evaluate crop identification performance for corn and soybeans in various environments using a well-defined set of automatic data processing (ADP) techniques. Each technique was applied to the acquired data to recognize corn and soybeans and to estimate their proportions. The CITARS documentation summarizes, interprets, and discusses the crop identification performances obtained using (1) different ADP procedures; (2) a linear versus a quadratic classifier; (3) prior probability information derived from historical data; (4) local versus nonlocal recognition training statistics and the associated use of preprocessing; (5) multitemporal data; (6) classification bias and mixed pixels in proportion estimation; and (7) data with different site characteristics, including crop, soil, atmospheric effects, and stages of crop maturity.
Knapp, Julika; Allesch, Astrid; Müller, Wolfgang; Bockreis, Anke
2017-11-01
Recycling of waste materials is desirable to reduce the consumption of limited primary resources, but it also includes the risk of recycling unwanted, hazardous substances. In Austria, the legal framework demands that secondary products must not present a higher risk than comparable products derived from primary resources. However, the act provides no definition of how to assess this risk potential. This paper describes the development of different quantitative and qualitative methods to estimate the transfer of contaminants in recycling processes. The quantitative methods comprise the comparison of concentrations of harmful substances in recycled products to those in corresponding primary products and to existing limit values. The evaluation matrix developed here, which considers further aspects, allows for an assessment of the qualitative risk potential. The results show that, depending on the assessed waste fraction, particular contaminants can be critical: their concentrations were higher than in comparable primary materials and did not comply with existing limit values. On the other hand, the results show that a long-term, well-established quality control system can assure compliance with the limit values. The results of the qualitative assessment obtained with the evaluation matrix support the results of the quantitative assessment. Therefore, the evaluation matrix can be suitable for quickly screening waste streams used for recycling to estimate their potential environmental and health risks. To prevent the transfer of contaminants into product cycles, improved data on relevant substances in secondary resources are necessary. In addition, regulations for material recycling are required to assure adequate quality control measures, including limit values. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Liang, Q.; Chipperfield, M.; Daniel, J. S.; Burkholder, J. B.; Rigby, M. L.; Velders, G. J. M.
2015-12-01
The hydroxyl radical (OH) is the major oxidant in the atmosphere. Reaction with OH is the primary removal process for many non-CO2 greenhouse gases (GHGs), ozone-depleting substances (ODSs) and their replacements, e.g. hydrochlorofluorocarbons (HCFCs) and hydrofluorocarbons (HFCs). Traditionally, the global OH abundance is inferred using the observed atmospheric rate of change of methyl chloroform (MCF). Due to Montreal Protocol regulation, the atmospheric abundance of MCF has been decreasing rapidly to near-zero values. It is becoming critical to find an alternative reference compound to continue to provide quantitative information on the global OH abundance. Our model analysis using the NASA 3-D GEOS-5 Chemistry Climate Model suggests that the inter-hemispheric gradients (IHG) of the HCFCs and HFCs show a strong linear correlation with their global emissions. It is therefore possible to use (i) the observed IHGs of HCFCs and HFCs to estimate their global emissions, and (ii) the derived emissions and the observed long-term trends to calculate their lifetimes and infer the global OH abundance. Preliminary analysis using a simple global two-box model (one box for each hemisphere) and information from the global 3-D model suggests that the quantitative relationship between IHG and global emissions varies slightly among individual compounds, depending on their lifetime, their emissions history, and the emission fractions from the two hemispheres. While each compound shows a different sensitivity to the above quantities, the combined suite of HCFCs and HFCs provides a means to derive the global OH abundance and the corresponding atmospheric lifetimes of long-lived gases with respect to OH (τOH). The fact that the OH partial lifetimes of these compounds are highly correlated, with the ratio of τOH equal to the inverse ratio of their OH reaction rate constants at 272 K, provides an additional constraint that can greatly reduce the uncertainty in the OH abundance and τOH estimates. We will use the observed IHGs and long-term trends of three major HCFCs and six major HFCs in the two-box model to derive their global emissions and atmospheric lifetimes as well as the global OH abundance. The derived global OH abundance between 2000 and 2014 will be compared with that derived using MCF for consistency.
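To make the IHG-emissions linearity concrete, here is a toy two-box version of the kind described (one hemispheric box each, OH loss with lifetime τOH, interhemispheric exchange time of about one year). All numbers are assumed for illustration and do not come from the GEOS-5 analysis.

```python
# Toy two-box (hemispheric) model: emissions enter mostly in the Northern
# box, loss is by OH (lifetime tau_oh), and the boxes mix with a ~1 yr
# interhemispheric exchange time. All numbers are assumed.
def two_box_ihg(emis_n, emis_s, tau_oh_yr, tau_ex_yr=1.0,
                years=20.0, dt=0.01):
    c_n = c_s = 0.0  # mixing ratios in arbitrary units
    for _ in range(int(years / dt)):
        mix = (c_n - c_s) / tau_ex_yr
        c_n += dt * (emis_n - c_n / tau_oh_yr - mix)
        c_s += dt * (emis_s - c_s / tau_oh_yr + mix)
    return c_n - c_s  # inter-hemispheric gradient

# IHG scales ~linearly with total emissions for a fixed N/S emission split:
for e in (1.0, 2.0, 4.0):
    print(e, two_box_ihg(emis_n=0.9 * e, emis_s=0.1 * e, tau_oh_yr=10.0))
```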
Global estimates of country health indicators: useful, unnecessary, inevitable?
AbouZahr, Carla; Boerma, Ties; Hogan, Daniel
2017-01-01
Background: The MDG era relied on global health estimates to fill data gaps and ensure temporal and cross-country comparability in reporting progress. Monitoring the Sustainable Development Goals will present new challenges, requiring enhanced capacities to generate, analyse, interpret and use country-produced data. Objective: To summarize the development of global health estimates and discuss their utility and limitations from global and country perspectives. Design: Descriptive paper based on findings of intercountry workshops, reviews of the literature, and synthesis of experiences. Results: Producers of global health estimates focus on the technical soundness of estimation methods and the comparability of results across countries and over time. By contrast, country users are more concerned about the extent of their involvement in the estimation process and hesitate to buy into estimates derived using methods their technical staff cannot explain and that differ from national data sources. Quantitative summaries of uncertainty may be of limited practical use in policy discussions where decisions need to be made about what to do next. Conclusions: Greater transparency and involvement of country partners in the development of global estimates will help improve ownership, strengthen country capacities for data production and use, and reduce reliance on externally produced estimates. PMID:28532307
Biesbroek, Sander; Kneepkens, Mirjam C; van den Berg, Saskia W; Fransen, Heidi P; Beulens, Joline W; Peeters, Petra H M; Boer, Jolanda M A
2018-04-01
Higher-educated people often have healthier diets, but it is unclear whether specific dietary patterns exist within educational groups. We therefore aimed to derive dietary patterns in the total population and by educational level, and to investigate whether these patterns differed in their composition and in their associations with the incidence of fatal and non-fatal CHD and stroke. Patterns were derived using principal components analysis in 36 418 participants of the European Prospective Investigation into Cancer and Nutrition-Netherlands cohort. Self-reported educational level was used to create three educational groups. Dietary intake was estimated using a validated semi-quantitative FFQ. Hazard ratios were estimated using Cox proportional hazards analysis after a mean follow-up of 16 years. In the three educational groups, similar 'Western', 'prudent' and 'traditional' patterns were derived as in the total population. However, with higher educational level, a lower population-derived score for the 'Western' and 'traditional' patterns and a higher score on the 'prudent' pattern were observed. These differences in the distribution of the factor scores illustrate the association between education and food consumption. After adjustments, no differences in the associations between population-derived dietary patterns and the incidence of CHD or stroke were found between the educational groups (P interaction between 0·21 and 0·98). In conclusion, although dietary patterns derived in the general population and within educational groups did not differ, small differences between educational groups existed in the consumption of food groups among participants considered adherent to the population-derived patterns (Q4). This did not result in different associations with incident CHD or stroke between educational groups.
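As a schematic of the pattern-derivation step, the sketch below runs principal components analysis on a standardized food-group matrix and compares mean factor scores across educational strata; the FFQ matrix, group labels, and the three-component choice are synthetic assumptions.

```python
# Sketch of deriving dietary patterns by principal components analysis from
# an FFQ food-group matrix. Data and education labels are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
ffq = rng.gamma(2.0, 1.0, size=(500, 30))   # 500 subjects x 30 food groups
education = rng.integers(0, 3, size=500)    # 0=low, 1=middle, 2=high

scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(ffq))

# Compare mean factor scores (e.g. 'Western', 'prudent', 'traditional')
# across educational strata.
for level in range(3):
    print(level, scores[education == level].mean(axis=0).round(2))
```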
A unifying theory for genetic epidemiological analysis of binary disease data.
Lipschutz-Powell, Debby; Woolliams, John A; Doeschl-Wilson, Andrea B
2014-02-19
Genetic selection for host resistance offers a desirable complement to chemical treatment to control infectious disease in livestock. Quantitative genetics disease data frequently originate from field studies and are often binary. However, current methods to analyse binary disease data fail to take infection dynamics into account. Moreover, genetic analyses tend to focus on host susceptibility, ignoring potential variation in infectiousness, i.e. the ability of a host to transmit the infection. This stands in contrast to epidemiological studies, which reveal that variation in infectiousness plays an important role in the progression and severity of epidemics. In this study, we aim at filling this gap by deriving an expression for the probability of becoming infected that incorporates infection dynamics and is an explicit function of both host susceptibility and infectiousness. We then validate this expression according to epidemiological theory and by simulating epidemiological scenarios, and explore implications of integrating this expression into genetic analyses. Our simulations show that the derived expression is valid for a range of stochastic genetic-epidemiological scenarios. In the particular case of variation in susceptibility only, the expression can be incorporated into conventional quantitative genetic analyses using a complementary log-log link function (rather than probit or logit). Similarly, if there is moderate variation in both susceptibility and infectiousness, it is possible to use a logarithmic link function, combined with an indirect genetic effects model. However, in the presence of highly infectious individuals, i.e. super-spreaders, the use of any model that is linear in susceptibility and infectiousness causes biased estimates. Thus, in order to identify super-spreaders, novel analytical methods using our derived expression are required. We have derived a genetic-epidemiological function for quantitative genetic analyses of binary infectious disease data, which, unlike current approaches, takes infection dynamics into account and allows for variation in host susceptibility and infectiousness.
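The point about link functions lends itself to a short illustration. The sketch below simulates binary infection outcomes whose probability follows the complementary log-log inverse link and fits them with a GLM using that link; the data, covariate, and coefficients are all synthetic, and the statsmodels link class name (CLogLog in recent releases, formerly cloglog) is the only API assumption.

```python
# Illustration of analysing binary infection data with a GLM using a
# complementary log-log link instead of probit/logit. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)                   # stand-in for a host genetic effect
eta = -0.5 + 0.8 * x
p_infected = 1.0 - np.exp(-np.exp(eta))  # inverse complementary log-log
y = rng.binomial(1, p_infected)

X = sm.add_constant(x)
model = sm.GLM(y, X,
               family=sm.families.Binomial(link=sm.families.links.CLogLog()))
print(model.fit().summary().tables[1])   # recovers ~(-0.5, 0.8)
```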
NASA Astrophysics Data System (ADS)
Sigmund, Peter
The mean equilibrium charge of a penetrating ion can be estimated on the basis of Bohr's velocity criterion or Lamb's energy criterion. Qualitative and quantitative results are derived on the basis of the Thomas-Fermi model of the atom, which is discussed explicitly. This includes a brief introduction to the Thomas-Fermi-Dirac model. Special attention is paid to trial-function approaches by Lenz and Jensen as well as Brandt and Kitagawa. The chapter also offers a preliminary discussion of the role of the stopping medium, gas-solid differences, and a survey of data compilations.
Gartzke, J; Burck, D
1989-06-01
A thin-layer chromatographic method is described for the determination of mandelic and phenylglyoxylic acid on silica gel (Silufol UV 254) after extraction from the urine of styrene-exposed workers. The quantitative determination was performed after eluting the spots. Phenylglyoxylic acid was measured at 255 nm, and mandelic acid by derivative-spectroscopic estimation of the -CH(OH)-COOH chromophore at 217 nm or by a three-wavelength mode, respectively. The recovery in urine was 80-104% for phenylglyoxylic acid and 99-105% for mandelic acid.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-22
... ENVIRONMENTAL PROTECTION AGENCY [EPA-HQ-ORD-2009-0694; FRL-9442-8] Notice of Availability of the External Review Draft of the Guidance for Applying Quantitative Data to Develop Data-Derived Extrapolation... Quantitative Data to Develop Data-Derived Extrapolation Factors for Interspecies and Intraspecies Extrapolation...
Remote Determination of Auroral Energy Characteristics During Substorm Activity
NASA Technical Reports Server (NTRS)
Germany, G. A.; Parks, G. K.; Brittnacher, M. J.; Cumnock, J.; Lummerzheim, D.; Spann, J. F., Jr.
1997-01-01
Ultraviolet auroral images from the Ultraviolet Imager onboard the POLAR satellite can be used as quantitative remote diagnostics of the auroral regions, yielding estimates of incident energy characteristics, compositional changes, and other higher-order data products. In particular, images of long- and short-wavelength N2 Lyman-Birge-Hopfield (LBH) emissions can be modeled to obtain functions of energy flux and average energy that are largely insensitive to seasonal and solar activity changes. This technique is used in this study to estimate incident electron energy flux and average energy during substorm activity occurring on May 19, 1996. This event was simultaneously observed by the WIND, GEOTAIL, INTERBALL, DMSP and NOAA spacecraft as well as by POLAR. Here, incident energy estimates derived from the Ultraviolet Imager (UVI) are compared with in situ measurements of the same parameters from an overflight by the DMSP F12 satellite coincident with the UVI image times.
NASA Technical Reports Server (NTRS)
Berg, Wesley; Chase, Robert
1992-01-01
Global estimates of monthly, seasonal, and annual oceanic rainfall are computed for a period of one year using data from the Special Sensor Microwave/Imager (SSM/I). Instantaneous rainfall estimates are derived from brightness temperature values obtained from the satellite data using the Hughes D-matrix algorithm. The instantaneous rainfall estimates are stored in 1 deg square bins over the global oceans for each month. A mixed probability distribution, combining a lognormal distribution for the positive rainfall values with a spike at zero for the observations indicating no rainfall, is used to compute mean values. The resulting data for the period of interest are fitted to a lognormal distribution using a maximum-likelihood method. Mean values are computed for the mixed distribution, and qualitative comparisons with published historical results as well as quantitative comparisons with corresponding in situ raingage data are performed.
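The mixed ("delta-lognormal") mean described above is p·exp(μ + σ²/2), where p is the fraction of rainy observations and (μ, σ) are fitted to the logs of the positive rain rates. A minimal sketch on synthetic data:

```python
# Mixed lognormal estimate: a spike of zero-rain observations plus a
# lognormal fit to the positive rain rates; mean = p * exp(mu + sigma^2/2).
# Sample data are synthetic.
import numpy as np

def mixed_lognormal_mean(rain_rates: np.ndarray) -> float:
    positive = rain_rates[rain_rates > 0]
    p = positive.size / rain_rates.size        # probability of rain
    logs = np.log(positive)
    mu, sigma = logs.mean(), logs.std(ddof=1)  # lognormal parameter estimates
    return p * np.exp(mu + 0.5 * sigma**2)

rng = np.random.default_rng(2)
obs = np.where(rng.random(5000) < 0.3, rng.lognormal(0.5, 1.0, 5000), 0.0)
print(mixed_lognormal_mean(obs))  # ~ 0.3 * exp(0.5 + 0.5) = 0.82
```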
An information measure for class discrimination. [in remote sensing of crop observation
NASA Technical Reports Server (NTRS)
Shen, S. S.; Badhwar, G. D.
1986-01-01
This article describes a separability measure for class discrimination. This measure is based on the Fisher information measure for estimating the mixing proportion of two classes. The Fisher information measure not only provides a means to assess quantitatively the information content in the features for separating classes, but also gives the lower bound for the variance of any unbiased estimate of the mixing proportion based on observations of the features. Unlike most commonly used separability measures, this measure is not dependent on the form of the probability distribution of the features and does not imply a specific estimation procedure. This is important because the probability distribution function that describes the data for a given class does not have simple analytic forms, such as a Gaussian. Results of applying this measure to compare the information content provided by three Landsat-derived feature vectors for the purpose of separating small grains from other crops are presented.
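For a two-class mixture f(x) = p·f1(x) + (1−p)·f2(x), the Fisher information for the mixing proportion is I(p) = ∫ (f1 − f2)² / (p·f1 + (1−p)·f2) dx, and 1/I(p) bounds the per-observation variance of any unbiased estimate of p. A numerical sketch, using Gaussian class densities purely for illustration (the measure itself does not require a distributional form):

```python
# Numerical Fisher information for the mixing proportion p of a two-class
# mixture. Gaussian class densities are used only for illustration.
from scipy.integrate import quad
from scipy.stats import norm

def fisher_info(p, f1, f2, lo=-10.0, hi=10.0):
    integrand = lambda x: (f1(x) - f2(x))**2 / (p * f1(x) + (1 - p) * f2(x))
    return quad(integrand, lo, hi)[0]

f_grain = norm(0.0, 1.0).pdf   # "small grains" feature density, assumed
f_other = norm(1.5, 1.0).pdf   # "other crops" feature density, assumed

I = fisher_info(0.4, f_grain, f_other)
print(f"I(p) = {I:.3f}; CRB on var per observation = {1 / I:.3f}")
```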
Petty, Stephen E; Nicas, Mark; Boiarski, Anthony A
2011-01-01
This study examines a method for estimating the dermal absorption of benzene contained in hydrocarbon liquids that contact the skin. This method applies to crude oil, gasoline, organic solvents, penetrants, and oils. The flux of benzene through occluded skin as a function of the percent vol/vol benzene in the liquid is derived by fitting a curve to experimental data; the function is supralinear at benzene concentrations ≤ 5% vol/vol. When a liquid other than pure benzene is on nonoccluded skin, benzene may preferentially evaporate from the liquid, which thereby decreases the benzene flux. We present a time-averaging method here for estimating the reduced dermal flux during evaporation. Example calculations are presented for benzene at 2% vol/vol in gasoline, and for benzene at 0.1% vol/vol in a less volatile liquid. We also discuss other factors affecting dermal absorption.
Zanzonico, Pat; Carrasquillo, Jorge A; Pandit-Taskar, Neeta; O'Donoghue, Joseph A; Humm, John L; Smith-Jones, Peter; Ruan, Shutian; Divgi, Chaitanya; Scott, Andrew M; Kemeny, Nancy E; Fong, Yuman; Wong, Douglas; Scheinberg, David; Ritter, Gerd; Jungbluth, Achem; Old, Lloyd J; Larson, Steven M
2015-10-01
The molecular specificity of monoclonal antibodies (mAbs) directed against tumor antigens has proven effective for targeted therapy of human cancers, as shown by a growing list of successful antibody-based drug products. We describe a novel, nonlinear compartmental model using PET-derived data to determine the "best-fit" parameters and model-derived quantities for optimizing biodistribution of intravenously injected (124)I-labeled antitumor antibodies. As an example of this paradigm, quantitative image and kinetic analyses of anti-A33 humanized mAb (also known as "A33") were performed in 11 colorectal cancer patients. Serial whole-body PET scans of (124)I-labeled A33 and blood samples were acquired and the resulting tissue time-activity data for each patient were fit to a nonlinear compartmental model using the SAAM II computer code. Excellent agreement was observed between fitted and measured parameters of tumor uptake, "off-target" uptake in bowel mucosa, blood clearance, tumor antigen levels, and percent antigen occupancy. This approach should be generally applicable to antibody-antigen systems in human tumors for which the masses of antigen-expressing tumor and of normal tissues can be estimated and for which antibody kinetics can be measured with PET. Ultimately, based on each patient's resulting "best-fit" nonlinear model, a patient-specific optimum mAb dose (in micromoles, for example) may be derived.
Estimated areal extent of colonies of black-tailed prairie dogs in the northern Great Plains
Sidle, John G.; Johnson, Douglas H.; Euliss, Betty R.
2001-01-01
During 1997–1998, we undertook an aerial survey, with an aerial line-intercept technique, to estimate the extent of colonies of black-tailed prairie dogs (Cynomys ludovicianus) in the northern Great Plains states of Nebraska, North Dakota, South Dakota, and Wyoming. We stratified the survey based on knowledge of colony locations, computed 2 types of estimates for each stratum, and combined ratio estimates for high-density strata with average density estimates for low-density strata. Estimates of colony areas for black-tailed prairie dogs were derived from the average percentages of lines intercepting prairie dog colonies and ratio estimators. We selected the best estimator based on the correlation between length of transect line and length of intercepted colonies. Active colonies of black-tailed prairie dogs occupied 2,377.8 km2 ± 186.4 SE, whereas inactive colonies occupied 560.4 ± 89.2 km2. These data represent the 1st quantitative assessment of black-tailed prairie dog colonies in the northern Great Plains. The survey dispels popular notions that millions of hectares of colonies of black-tailed prairie dogs exist in the northern Great Plains and can form the basis for future survey efforts.
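A hedged sketch of the ratio-estimator arithmetic described above: within a stratum, the estimated proportion of area occupied by colonies is the total intercepted length divided by the total transect length, scaled by the stratum area. The transect values below are invented.

```python
# Line-intercept ratio estimator for colony area within one stratum:
# R = sum(intercepted length) / sum(transect length), scaled by stratum area.
# Transect data and stratum area are made-up illustrations.
import numpy as np

def ratio_area_estimate(intercept_km, transect_km, stratum_km2):
    y, x = np.asarray(intercept_km, float), np.asarray(transect_km, float)
    r = y.sum() / x.sum()                    # ratio estimator
    n, xbar = x.size, x.mean()
    # Standard ratio-estimator variance (simple random sampling approximation)
    var_r = ((y - r * x) ** 2).sum() / (n * (n - 1) * xbar**2)
    return stratum_km2 * r, stratum_km2 * np.sqrt(var_r)

area, se = ratio_area_estimate(
    intercept_km=[0.8, 0.0, 2.1, 0.4, 1.3],
    transect_km=[50, 48, 55, 52, 49],
    stratum_km2=10_000)
print(f"colony area ~ {area:.0f} +/- {se:.0f} km^2")
```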
Accurate estimation of σ0 using AIRSAR data
NASA Technical Reports Server (NTRS)
Holecz, Francesco; Rignot, Eric
1995-01-01
During recent years, signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data, as well as estimation of geophysical parameters from SAR data, have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient σ0. In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of σ0, and precision of the estimated forest biomass.
A Preliminary Examination of the Second Generation CMORPH Real-time Production
NASA Astrophysics Data System (ADS)
Joyce, R.; Xie, P.; Wu, S.
2017-12-01
The second generation CMORPH (CMORPH2) has started test real-time production of 30-minute precipitation estimates on a 0.05° lat/lon grid over the entire globe, from pole to pole. The CMORPH2 is built upon the Kalman Filter based CMORPH algorithm of Joyce and Xie (2011). Inputs to the system include rainfall and snowfall rate retrievals from passive microwave (PMW) measurements aboard all available low earth orbit (LEO) satellites, precipitation estimates derived from infrared (IR) observations of geostationary (GEO) and LEO platforms, and precipitation simulations from the NCEP operational global forecast system (GFS). Inputs from the various sources are first inter-calibrated to ensure quantitative consistency in representing precipitation events of different intensities through PDF calibration against a common reference standard. The inter-calibrated PMW retrievals and IR-based precipitation estimates are then propagated from their respective observation times to the target analysis time along the motion vectors of the precipitating clouds. Motion vectors are first derived separately from the satellite IR based precipitation estimates and the GFS precipitation fields. These individually derived motion vectors are then combined through a 2D-VAR technique to form an analyzed field of cloud motion vectors over the entire globe. The propagated PMW and IR based precipitation estimates are finally integrated into a single field of global precipitation through the Kalman Filter framework. A set of procedures has been established to examine the performance of the CMORPH2 real-time production. CMORPH2 satellite precipitation estimates are compared against the CPC daily gauge analysis, Stage IV radar precipitation over the CONUS, and numerical model forecasts to discover potential shortcomings and quantify improvements against the first generation CMORPH. Special attention has been focused on the CMORPH2 behavior over high-latitude areas beyond the coverage of the first generation CMORPH. Detailed results will be reported at the AGU.
Second-harmonic diffraction from holographic volume grating.
Nee, Tsu-Wei
2006-10-01
The full polarization property of holographic volume-grating enhanced second-harmonic diffraction (SHD) is investigated theoretically. The nonlinear coefficient is derived from a simple atomic model of the material. By using a simple volume-grating model, the SHD fields and Mueller matrices are first derived. The SHD phase-mismatching effect for a thick sample is analytically investigated. This theory is justified by fitting with published experimental SHD data of thin-film samples. The SHD of an existing polymethyl methacrylate (PMMA) holographic 2-mm-thick volume-grating sample is investigated. This sample has two strong coupling linear diffraction peaks and five SHD peaks. The splitting of SHD peaks is due to the phase-mismatching effect. The detector sensitivity and laser power needed to measure these peak signals are quantitatively estimated.
A unified perspective on robot control - The energy Lyapunov function approach
NASA Technical Reports Server (NTRS)
Wen, John T.
1990-01-01
A unified framework for the stability analysis of robot tracking control is presented. By using an energy-motivated Lyapunov function candidate, closed-loop stability is shown for a large family of control laws sharing a common structure of proportional and derivative feedback and a model-based feedforward. The feedforward can be zero, partial or complete linearized dynamics, partial or complete nonlinear dynamics, or linearized or nonlinear dynamics with parameter adaptation. As a result, the dichotomous approaches to the robot control problem based on open-loop linearization and nonlinear Lyapunov analysis are both included in this treatment. Furthermore, quantitative estimates of the trade-offs between different schemes in terms of tracking performance, steady-state error, domain of convergence, real-time computation load, and required a priori model information are derived.
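One member of this control family is the PD-plus-model-based-feedforward law τ = M(q)q̈_d + C(q, q̇)q̇_d + g(q) + K_p e + K_d ė. The sketch below applies it to a single-joint stand-in plant; the gains and dynamics are illustrative assumptions, not the paper's examples.

```python
# PD feedback plus model-based feedforward on a 1-DOF stand-in plant:
#   tau = M*qdd_d + C*qd_d + g + Kp*e + Kd*de,  plant: M*qdd + C*qd = tau.
# Gains and plant parameters are assumed for illustration.
import numpy as np

def pd_feedforward_tau(q, qd, q_d, qd_d, qdd_d, M, C, g, Kp=50.0, Kd=10.0):
    e, de = q_d - q, qd_d - qd
    return M * qdd_d + C * qd_d + g + Kp * e + Kd * de

# Simulate tracking of q_d(t) = sin(t) with M=1, C=0.1, g=0
dt, q, qd = 1e-3, 0.0, 0.0
for k in range(5000):
    t = k * dt
    tau = pd_feedforward_tau(q, qd, np.sin(t), np.cos(t), -np.sin(t),
                             M=1.0, C=0.1, g=0.0)
    qdd = (tau - 0.1 * qd) / 1.0   # plant dynamics
    qd += dt * qdd
    q += dt * qd
print(f"tracking error at t = 5 s: {abs(np.sin(5.0) - q):.4f}")
```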
Impact of TRMM and SSM/I Rainfall Assimilation on Global Analysis and QPF
NASA Technical Reports Server (NTRS)
Hou, Arthur; Zhang, Sara; Reale, Oreste
2002-01-01
Evaluation of QPF skill requires quantitatively accurate precipitation analyses. We show that assimilation of surface rain rates derived from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager and the Special Sensor Microwave/Imager (SSM/I) improves quantitative precipitation estimates (QPE) and many aspects of global analyses. Short-range forecasts initialized with analyses that include satellite rainfall data generally yield significantly higher QPF threat scores and better storm track predictions. These results were obtained using a variational procedure that minimizes the difference between the observed and model rain rates by correcting the moist physics tendency of the forecast model over a 6-h assimilation window. In two case studies of Hurricanes Bonnie and Floyd, synoptic analysis shows that this procedure produces initial conditions with better-defined tropical storm features and stronger precipitation intensity associated with the storms.
NASA Technical Reports Server (NTRS)
Sabine, Charles; Realmuto, Vincent J.; Taranik, James V.
1994-01-01
We have produced images that quantitatively depict modal and chemical parameters of granitoids using an image processing algorithm called MINMAP, which fits Gaussian curves to normalized emittance spectra recovered from thermal infrared multispectral scanner (TIMS) radiance data. We applied the algorithm to TIMS data from the Desolation Wilderness, an extensively glaciated area near the northern end of the Sierra Nevada batholith that is underlain by Jurassic and Cretaceous plutons ranging from diorite and anorthosite to leucogranite. The wavelength corresponding to the calculated emittance minimum (λmin) varies linearly with quartz content, SiO2, and other modal and chemical parameters. Thematic maps of quartz and silica content derived from λmin values distinguish bodies of diorite from surrounding granite, identify outcrops of anorthosite, and separate felsic, intermediate, and mafic rocks.
Lei, M H; Chen, J J; Ko, Y L; Cheng, J J; Kuan, P; Lien, W P
1995-01-01
This study assessed the usefulness of continuous wave Doppler echocardiography and color flow mapping in evaluating pulmonary regurgitation (PR) and estimating pulmonary artery (PA) pressure. Forty-three patients were examined, and high quality Doppler spectral recordings of PR were obtained in 32. All patients underwent cardiac catheterization, and simultaneous PA and right ventricular (RV) pressures were recorded in 17. Four Doppler regurgitant flow velocity patterns were observed: pandiastolic plateau, biphasic, peak and plateau, and early diastolic triangular types. The peak diastolic and end-diastolic PA-to-RV pressure gradients derived from the Doppler flow profiles correlated well with the catheter measurements (r = 0.95 and r = 0.95, respectively). As PA pressure increased, the PR flow velocity became higher; a linear relationship between either systolic or mean PA pressure and Doppler-derived peak diastolic pressure gradient was noted (r = 0.90 and 0.94, respectively). Based on peak diastolic gradients of < 15, 15-30 or > 30 mm Hg, patients could be separated as those with mild, moderate or severe pulmonary hypertension, respectively (p < 0.05). A correlation was also observed between PA diastolic pressure and Doppler-derived end-diastolic pressure gradient (r = 0.91). Moreover, the Doppler velocity decay slope of PR closely correlated with that derived from the catheter method (r = 0.98). The decay slope tended to be steeper with the increment in regurgitant jet area and length obtained from color flow mapping. In conclusion, continuous wave Doppler evaluation of PR is a useful means for noninvasive estimation of PA pressure, and the Doppler velocity decay slope seems to reflect the severity of PR.
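The catheter-validated gradients above follow from the simplified Bernoulli relation, ΔP = 4v² (ΔP in mmHg, v in m/s), applied to the PR jet velocity; the severity cut-offs are those quoted in the study. A minimal sketch:

```python
# Simplified Bernoulli conversion of a Doppler PR jet velocity into a
# PA-to-RV pressure gradient, with the study's severity cut-offs.
def pa_rv_gradient_mmhg(velocity_m_s: float) -> float:
    return 4.0 * velocity_m_s ** 2

def ph_severity(peak_diastolic_gradient: float) -> str:
    if peak_diastolic_gradient < 15:
        return "mild"
    return "moderate" if peak_diastolic_gradient <= 30 else "severe"

v_peak = 2.4                         # peak diastolic PR velocity, m/s (example)
grad = pa_rv_gradient_mmhg(v_peak)   # ~23 mmHg
print(f"{grad:.0f} mmHg -> {ph_severity(grad)} pulmonary hypertension")
```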
NASA Astrophysics Data System (ADS)
Marra, Francesco; Morin, Efrat
2015-12-01
Intensity-Duration-Frequency (IDF) curves are widely used in flood risk management because they provide an easy link between the characteristics of a rainfall event and the probability of its occurrence. Weather radars provide distributed rainfall estimates with high spatial and temporal resolution and overcome the limited representativeness of point-based rainfall measurements in regions characterized by large gradients in rainfall climatology. This work explores the use of radar quantitative precipitation estimation (QPE) for the identification of IDF curves over a region with steep climatic transitions (Israel), using a unique radar data record (23 yr) and a combined physical and empirical adjustment of the radar data. IDF relationships were derived by fitting a generalized extreme value distribution to the annual maximum series for durations of 20 min, 1 h and 4 h. Arid, semi-arid and Mediterranean climates were explored using 14 study cases. IDF curves derived from the study rain gauges were compared to those derived from radar and from nearby rain gauges characterized by similar climatology, taking into account the uncertainty linked with the fitting technique. Radar annual maxima and IDF curves were generally overestimated, but in 70% of the cases (60% for a 100 yr return period) they lay within the rain gauge IDF confidence intervals. Overestimation tended to increase with return period, and this effect was enhanced in arid climates. This was mainly associated with radar estimation uncertainty, even if other effects, such as rain gauge temporal resolution, cannot be neglected. Climatological classification remained meaningful for the analysis of rainfall extremes, and radar was able to discern climatology from rainfall frequency analysis.
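A minimal sketch of the IDF construction described above: fit a generalized extreme value distribution to the annual maximum intensities for a single duration and read off return levels at the desired return periods. The 23 "annual maxima" below are synthetic stand-ins for radar-derived values.

```python
# GEV fit to an annual maximum series for one duration, then return levels.
# The sample below is synthetic; real input would be radar (or gauge) maxima.
from scipy.stats import genextreme

rng_maxima = genextreme.rvs(c=-0.1, loc=25, scale=8, size=23,
                            random_state=3)   # mm/h, 23 "years"

params = genextreme.fit(rng_maxima)           # (shape, loc, scale)
for T in (2, 10, 25, 100):                    # return periods, years
    intensity = genextreme.ppf(1 - 1 / T, *params)
    print(f"{T:>4}-yr, 20-min intensity: {intensity:.1f} mm/h")
```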
Biomass burning fuel consumption dynamics in the tropics and subtropics assessed from satellite
NASA Astrophysics Data System (ADS)
Andela, Niels; van der Werf, Guido R.; Kaiser, Johannes W.; van Leeuwen, Thijs T.; Wooster, Martin J.; Lehmann, Caroline E. R.
2016-06-01
Landscape fires occur on a large scale in (sub)tropical savannas and grasslands, affecting ecosystem dynamics, regional air quality and concentrations of atmospheric trace gases. Fuel consumption per unit of area burned is an important but poorly constrained parameter in fire emission modelling. We combined satellite-derived burned area with fire radiative power (FRP) data to derive fuel consumption estimates for land cover types with low tree cover in South America, Sub-Saharan Africa, and Australia. We developed a new approach to estimate fuel consumption, based on FRP data from the polar-orbiting Moderate Resolution Imaging Spectroradiometer (MODIS) and the geostationary Spinning Enhanced Visible and Infrared Imager (SEVIRI) in combination with MODIS burned-area estimates. The fuel consumption estimates based on the geostationary and polar-orbiting instruments showed good agreement in terms of spatial patterns. We used field measurements of fuel consumption to constrain our results, but the large variation in fuel consumption in both space and time complicated this comparison and absolute fuel consumption estimates remained more uncertain. Spatial patterns in fuel consumption could be partly explained by vegetation productivity and fire return periods. In South America, most fires occurred in savannas with relatively long fire return periods, resulting in comparatively high fuel consumption as opposed to the more frequently burning savannas in Sub-Saharan Africa. Strikingly, we found the infrequently burning interior of Australia to have higher fuel consumption than the more productive but frequently burning savannas in northern Australia. Vegetation type also played an important role in explaining the distribution of fuel consumption, by affecting both fuel build-up rates and fire return periods. Hummock grasslands, which were responsible for a large share of Australian biomass burning, showed larger fuel build-up rates than equally productive grasslands in Africa, although this effect might have been partially driven by the presence of grazers in Africa or differences in landscape management. Finally, land management in the form of deforestation and agriculture also considerably affected fuel consumption regionally. We conclude that combining FRP and burned-area estimates, calibrated against field measurements, is a promising approach in deriving quantitative estimates of fuel consumption. Satellite-derived fuel consumption estimates may both challenge our current understanding of spatiotemporal fuel consumption dynamics and serve as reference datasets to improve biogeochemical modelling approaches. Future field studies especially designed to validate satellite-based products, or airborne remote sensing, may further improve confidence in the absolute fuel consumption estimates which are quickly becoming the weakest link in fire emission estimates.
NASA Technical Reports Server (NTRS)
Asner, Gregory P.; Keller, Michael M.; Silva, Jose Natalino; Zweede, Johan C.; Pereira, Rodrigo, Jr.
2002-01-01
Major uncertainties exist regarding the rate and intensity of logging in tropical forests worldwide; these uncertainties severely limit economic, ecological, and biogeochemical analyses of these regions. Recent sawmill surveys in the Amazon region of Brazil show that the area logged is nearly equal to the total area deforested annually, but conversion of survey data to forest area, forest structural damage, and biomass estimates requires multiple assumptions about logging practices. Remote sensing could provide an independent means to monitor logging activity and to estimate the biophysical consequences of this land use. Previous studies have demonstrated that the detection of logging in Amazon forests is difficult, and no studies have developed either the quantitative physical basis or the remote sensing approaches needed to estimate the effects of various logging regimes on forest structure. A major reason for these limitations has been a lack of sufficient, well-calibrated optical satellite data, which, in turn, has impeded the development and use of physically based, quantitative approaches for detection and structural characterization of forest logging regimes. We propose to use data from the EO-1 Hyperion imaging spectrometer to greatly increase our ability to estimate the presence and structural attributes of selective logging in the Amazon Basin. Our approach is based on four "biogeophysical indicators" not yet derived simultaneously from any satellite sensor: 1) green canopy leaf area index; 2) degree of shadowing; 3) presence of exposed soil; and 4) non-photosynthetic vegetation material. Airborne, field and modeling studies have shown that the optical reflectance continuum (400-2500 nm) contains sufficient information to derive estimates of each of these indicators. Our ongoing studies in the eastern Amazon basin also suggest that these four indicators are sensitive to logging intensity. Satellite-based estimates of these indicators should provide a means to quantify both the presence and degree of structural disturbance caused by various logging regimes. Our quantitative assessment of Hyperion hyperspectral and ALI multi-spectral data for the detection and structural characterization of selective logging in Amazonia will benefit from data collected through an ongoing project run by the Tropical Forest Foundation, within which we have developed a study of the canopy and landscape biophysics of conventional and reduced-impact logging. We will add to our base of forest structural information in concert with an EO-1 overpass. Using a photon transport model inversion technique that accounts for non-linear mixing of the four biogeophysical indicators, we will estimate these parameters across a gradient of selective logging intensity provided by conventional and reduced-impact logging sites. We will also compare our physically based approach to both conventional (e.g., NDVI) and novel (e.g., SWIR-channel) vegetation indices as well as to linear mixture modeling methods. We will cross-compare these approaches using the Hyperion and ALI imagers to determine the strengths and limitations of these two sensors for applications of forest biophysics. This effort will yield the first physically based, quantitative analysis of the detection and intensity of selective logging in Amazonia, comparing hyperspectral and improved multi-spectral approaches as well as inverse modeling, linear mixture modeling, and vegetation index techniques.
NASA Astrophysics Data System (ADS)
Koeppe, Robert Allen
Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three-dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass-specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations were compared to those predicted from the expired air and venous blood samples. The glucose analog (18)F-3-deoxy-3-fluoro-D-glucose (3-FDG) was used for quantitating the membrane transport rate of glucose. The measured data indicated that the phosphorylation rate of 3-FDG was low enough to allow accurate estimation of the transport rate using a two-compartment model.
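The two-compartment model described here reduces to a single linear ODE; a minimal sketch of how such a model can be simulated and fitted to a tissue time-activity curve is shown below. The input function, rate constants, and noise level are all invented for illustration.

```python
# One-tissue (two-compartment) tracer model:
#   dC_t/dt = K1*C_a(t) - k2*C_t(t)  =>  C_t = K1 * (C_a conv exp(-k2*t))
# Fit K1 and k2 to a noisy synthetic tissue curve.
import numpy as np
from scipy.optimize import curve_fit

dt = 1.0                                   # s per frame
t = np.arange(0, 300, dt)
Ca = t * np.exp(-t / 30.0)                 # synthetic arterial input

def model(t, K1, k2):
    return K1 * np.convolve(Ca, np.exp(-k2 * t))[: len(t)] * dt

rng = np.random.default_rng(1)
Ct = model(t, 0.10, 0.05) + rng.normal(0, 0.02, len(t))

(K1, k2), _ = curve_fit(model, t, Ct, p0=[0.05, 0.01])
print(f"K1={K1:.3f}/s, k2={k2:.3f}/s, partition coefficient={K1/k2:.2f}")
```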
NASA Technical Reports Server (NTRS)
Ganachaud, Alexandre; Wunsch, Carl; Kim, Myung-Chan; Tapley, Byron
1997-01-01
A global estimate of the absolute oceanic general circulation from a geostrophic inversion of in situ hydrographic data is tested against and then combined with an estimate obtained from TOPEX/POSEIDON altimetric data and a geoid model computed using the JGM-3 gravity-field solution. Within the quantitative uncertainties of both the hydrographic inversion and the geoid estimate, the two estimates derived by very different methods are consistent. When the in situ inversion is combined with the altimetry/geoid scheme using a recursive inverse procedure, a new solution, fully consistent with both hydrography and altimetry, is found. There is, however, little reduction in the uncertainties of the calculated ocean circulation and its mass and heat fluxes because the best available geoid estimate remains noisy relative to the purely oceanographic inferences. The conclusion drawn from this is that the comparatively large errors present in the existing geoid models now limit the ability of satellite altimeter data to improve directly the general ocean circulation models derived from in situ measurements. Because improvements in the geoid could be realized through a dedicated spaceborne gravity recovery mission, the impact of hypothetical much better, future geoid estimates on the circulation uncertainty is also quantified, showing significant hypothetical reductions in the uncertainties of oceanic transport calculations. Full ocean general circulation models could better exploit both existing oceanographic data and future gravity-mission data, but their present use is severely limited by the inability to quantify their error budgets.
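The recursive inverse estimator used here is specific to the study, but the headline result (a noisy geoid adds little to the combination) follows from elementary inverse-variance weighting, sketched below with invented numbers.

```python
# Inverse-variance combination of two independent estimates of the same
# quantity. When one input (the altimetry/geoid estimate) is much noisier,
# the combined uncertainty barely improves on the better input.
x1, s1 = 10.0, 1.0     # hydrographic inversion: value and 1-sigma (invented)
x2, s2 = 12.0, 5.0     # altimetry/geoid estimate: same quantity, noisier

w1, w2 = 1 / s1**2, 1 / s2**2
x = (w1 * x1 + w2 * x2) / (w1 + w2)
s = (1 / (w1 + w2)) ** 0.5
print(f"combined: {x:.2f} +/- {s:.2f}")   # ~10.08 +/- 0.98: little gain over s1
```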
A Global Geospatial Ecosystem Services Estimate of Urban Agriculture
NASA Astrophysics Data System (ADS)
Clinton, Nicholas; Stuhlmacher, Michelle; Miles, Albie; Uludere Aragon, Nazli; Wagner, Melissa; Georgescu, Matei; Herwig, Chris; Gong, Peng
2018-01-01
Though urban agriculture (UA), defined here as the growing of crops in cities, is increasing in popularity and importance globally, little is known about the aggregate benefits of such natural capital in built-up areas. Here, we introduce a quantitative framework to assess global aggregate ecosystem services from existing vegetation in cities and an intensive UA adoption scenario based on data-driven estimates of urban morphology and vacant land. We analyzed global population, urban, meteorological, terrain, and Food and Agriculture Organization (FAO) datasets in Google Earth Engine to derive global scale estimates, aggregated by country, of services provided by UA. We estimate the value of four ecosystem services provided by existing vegetation in urban areas to be on the order of $33 billion annually. We project potential annual food production of 100-180 million tonnes, energy savings ranging from 14 to 15 billion kilowatt hours, nitrogen sequestration between 100,000 and 170,000 tonnes, and avoided storm water runoff between 45 and 57 billion cubic meters annually. In addition, we estimate that food production, nitrogen fixation, energy savings, pollination, climate regulation, soil formation and biological control of pests could be worth as much as $80-160 billion annually in a scenario of intense UA implementation. Our results demonstrate significant country-to-country variability in UA-derived ecosystem services and reduction of food insecurity. These estimates represent the first effort to consistently quantify these incentives globally, and highlight the relative spatial importance of built environments to act as change agents that alleviate mounting concerns associated with global environmental change and unsustainable development.
NASA Astrophysics Data System (ADS)
Nawar, Said; Buddenbaum, Henning; Hill, Joachim
2014-05-01
A rapid and inexpensive soil analytical technique is needed for soil quality assessment and accurate mapping. This study investigated a method for improved estimation of soil clay (SC) and organic matter (OM) using reflectance spectroscopy. Seventy soil samples were collected from Sinai peninsula in Egypt to estimate the soil clay and organic matter relative to the soil spectra. Soil samples were scanned with an Analytical Spectral Devices (ASD) spectrometer (350-2500 nm). Three spectral formats were used in the calibration models derived from the spectra and the soil properties: (1) original reflectance spectra (OR), (2) first-derivative spectra smoothened using the Savitzky-Golay technique (FD-SG) and (3) continuum-removed reflectance (CR). Partial least-squares regression (PLSR) models using the CR of the 400-2500 nm spectral region resulted in R2 = 0.76 and 0.57, and RPD = 2.1 and 1.5 for estimating SC and OM, respectively, indicating better performance than that obtained using OR and SG. The multivariate adaptive regression splines (MARS) calibration model with the CR spectra resulted in an improved performance (R2 = 0.89 and 0.83, RPD = 3.1 and 2.4) for estimating SC and OM, respectively. The results show that the MARS models have a great potential for estimating SC and OM compared with PLSR models. The results obtained in this study have potential value in the field of soil spectroscopy because they can be applied directly to the mapping of soil properties using remote sensing imagery in arid environment conditions. Key Words: soil clay, organic matter, PLSR, MARS, reflectance spectroscopy.
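A compact sketch of the preprocessing-plus-calibration pipeline described above (first-derivative Savitzky-Golay smoothing followed by PLSR with cross-validation) is given below. The spectra and soil property are simulated; real work would use the ASD reflectance (350-2500 nm) and measured clay or organic matter values.

```python
# FD-SG preprocessing + PLSR calibration sketch with synthetic spectra.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(70, 2151)).cumsum(axis=1)      # 70 smooth-ish "spectra"
y = X[:, 1210] - X[:, 1190] + rng.normal(scale=0.5, size=70)  # fake property

# first-derivative Savitzky-Golay spectra
X_fd = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X_fd, y, cv=10).ravel()
rmsep = np.sqrt(np.mean((y - y_cv) ** 2))
r2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R2={r2:.2f}, RPD={y.std() / rmsep:.2f}")    # the paper's two metrics
```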
NASA Astrophysics Data System (ADS)
Pan, Y.; Shen, W.; Hwang, C.
2015-12-01
Because the Earth is elastic, its surface deforms vertically in response to hydrological mass change on or near the surface. Continuous GPS (CGPS) records capture these vertical deformations, which carry significant information for estimating variations in terrestrial water storage. We compute the loading deformations at GPS stations based on synthetic models of seasonal water load distribution and then invert the synthetic GPS data for surface mass distribution. We use GRACE gravity observations and hydrology models to evaluate seasonal water storage variability in Nepal and the Himalayas. The coherence among GPS inversion results, GRACE and hydrology models indicates that GPS can provide quantitative estimates of terrestrial water storage variations by inverting the surface deformation observations. The annual peak-to-peak surface mass change derived from GPS and GRACE results reveals seasonal load oscillations of water, snow and ice. Meanwhile, the present uplift of Nepal and the Himalayas indicates hydrological mass loss. This study is supported by National 973 Project China (grant Nos. 2013CB733302 and 2013CB733305), NSFC (grant Nos. 41174011, 41429401, 41210006, 41128003, 41021061).
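The inversion step described above amounts to a damped least-squares problem; a generic sketch (with a placeholder response matrix rather than real elastic loading Green's functions) looks like this:

```python
# Damped least-squares inversion of station displacements for surface
# mass change: d = G m + noise, m_hat = (G^T G + a I)^{-1} G^T d.
import numpy as np

rng = np.random.default_rng(2)
n_sta, n_cells = 50, 30
G = rng.normal(size=(n_sta, n_cells))     # station response to unit cell loads
m_true = rng.normal(size=n_cells)         # seasonal mass anomaly per cell
d = G @ m_true + rng.normal(scale=0.1, size=n_sta)   # GPS vertical obs

alpha = 1.0                               # damping, tuned in practice
m_hat = np.linalg.solve(G.T @ G + alpha * np.eye(n_cells), G.T @ d)
print("correlation with truth:", round(float(np.corrcoef(m_true, m_hat)[0, 1]), 3))
```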
Quantitative computed tomography assessment of transfusional iron overload.
Wood, John C; Mo, Ashley; Gera, Aakansha; Koh, Montre; Coates, Thomas; Gilsanz, Vicente
2011-06-01
Quantitative computed tomography (QCT) has been proposed for iron quantification for more than 30 years; however, there has been little clinical validation. We compared liver attenuation by QCT with magnetic resonance imaging (MRI)-derived estimates of liver iron concentration (LIC) in 37 patients with transfusional siderosis. MRI and QCT measurements were performed as clinically indicated for monitoring of LIC and vertebral bone density, respectively, over a 6-year period. Mean time difference between QCT and MRI studies was 14 d, with 25 studies performed on the same day. For liver attenuation outside the normal range, attenuation values rose linearly with LIC (r(2) = 0·94). However, intersubject variability in intrinsic liver attenuation prevented quantitation of LIC <8 mg/g dry weight of liver, and was the dominant source of measurement uncertainty. Calculated QCT and MRI accuracies were equivalent for LIC values approaching 22 mg/g dry weight, with QCT having superior performance at higher LICs. Although not suitable for monitoring patients with good iron control, QCT may nonetheless represent a viable technique for liver iron quantitation in patients with moderate to severe iron overload in regions where MRI resources are limited, given its low cost, availability, and high throughput. © 2011 Blackwell Publishing Ltd.
Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.
Moray, Neville; Groeger, John; Stanton, Neville
2017-02-01
This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement, and provide quantitative guidance for the design of safer railway systems' speed limits and the location of signals. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.
Ko, Dae-Hyun; Ji, Misuk; Kim, Sollip; Cho, Eun-Jung; Lee, Woochang; Yun, Yeo-Min; Chun, Sail; Min, Won-Ki
2016-01-01
The results of urine sediment analysis have been reported semiquantitatively. However, as recent guidelines recommend quantitative reporting of urine sediment, and with the development of automated urine sediment analyzers, there is an increasing need for quantitative analysis of urine sediment. Here, we developed a protocol for urine sediment analysis and quantified the results. Based on questionnaires, various reports, guidelines, and experimental results, we developed a protocol for urine sediment analysis. The results of this new protocol were compared with those obtained with a standardized chamber and an automated sediment analyzer. Reference intervals were also estimated using the new protocol. We developed a protocol with centrifugation at 400 g for 5 min, with an average concentration factor of 30. The correlations between the quantitative results of urine sediment analysis, the standardized chamber, and the automated sediment analyzer were generally good. The conversion factor derived from the new protocol showed a better fit with the results of manual counting than the default conversion factor in the automated sediment analyzer. We developed a protocol for manual urine sediment analysis to quantitatively report the results. This protocol may provide a means for the standardization of urine sediment analysis.
Konigsfeld, Katie M; Lee, Melissa; Urata, Sarah M; Aguilera, Joe A; Milligan, Jamie R
2012-03-01
Electron deficient guanine radical species are major intermediates produced in DNA by the direct effect of ionizing irradiation. There is evidence that they react with amine groups in closely bound ligands to form covalent crosslinks. Crosslink formation is very poorly characterized in terms of quantitative rate and yield data. We sought to address this issue by using oligo-arginine ligands to model the close association of DNA and its binding proteins in chromatin. Guanine radicals were prepared in plasmid DNA by single electron oxidation. The product distribution derived from them was assayed by strand break formation after four different post-irradiation incubations. We compared the yields of DNA damage produced in the presence of four ligands in which neither, one, or both of the amino and carboxylate termini were blocked with amides. Free carboxylate groups were unreactive. Significantly higher yields of heat labile sites were observed when the amino terminus was unblocked. The rate of the reaction was characterized by diluting the unblocked amino group with its amide blocked derivative. These observations provide a means to develop quantitative estimates for the yields in which these labile sites are formed in chromatin by exposure to ionizing irradiation.
CLICK: The new USGS center for LIDAR information coordination and knowledge
Stoker, Jason M.; Greenlee, Susan K.; Gesch, Dean B.; Menig, Jordan C.
2006-01-01
Elevation data is rapidly becoming an important tool for the visualization and analysis of geographic information. The creation and display of three-dimensional models representing bare earth, vegetation, and structures have become major requirements for geographic research in the past few years. Light Detection and Ranging (lidar) has been increasingly accepted as an effective and accurate technology for acquiring high-resolution elevation data for bare earth, vegetation, and structures. Lidar is an active remote sensing system that records the distance, or range, of a laser fired from an airborne or spaceborne platform such as an airplane, helicopter or satellite to objects or features on the Earth’s surface. By converting lidar data into bare ground topography and vegetation or structural morphologic information, extremely accurate, high-resolution elevation models can be derived to visualize and quantitatively represent scenes in three dimensions. In addition to high-resolution digital elevation models (Evans et al., 2001), other lidar-derived products include quantitative estimates of vegetative features such as canopy height, canopy closure, and biomass (Lefsky et al., 2002), and models of urban areas such as building footprints and three-dimensional city models (Maas, 2001).
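As a concrete example of the conversion from lidar returns to structural products mentioned above, canopy height follows directly from differencing two gridded surfaces (values below are invented):

```python
# Canopy height model (CHM) = first-return surface (DSM) - bare earth (DEM).
import numpy as np

dem = np.array([[100.0, 101.0], [102.0, 103.0]])   # bare-earth elevations (m)
dsm = np.array([[118.0, 101.5], [125.0, 103.2]])   # first-return surface (m)

chm = np.clip(dsm - dem, 0.0, None)   # clip small negatives from noise
print(chm)                            # large values where canopy towers above ground
```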
Setting population targets for mammals using body mass as a predictor of population persistence.
Hilbers, Jelle P; Santini, Luca; Visconti, Piero; Schipper, Aafke M; Pinto, Cecilia; Rondinini, Carlo; Huijbregts, Mark A J
2017-04-01
Conservation planning and biodiversity assessments need quantitative targets to optimize planning options and assess the adequacy of current species protection. However, targets aiming at persistence require population-specific data, which limit their use in favor of fixed and nonspecific targets, likely leading to unequal distribution of conservation efforts among species. We devised a method to derive equitable population targets; that is, quantitative targets of population size that ensure equal probabilities of persistence across a set of species and that can be easily inferred from species-specific traits. In our method, we used models of population dynamics across a range of life-history traits related to species' body mass to estimate minimum viable population targets. We applied our method to a range of body masses of mammals, from 2 g to 3825 kg. The minimum viable population targets decreased asymptotically with increasing body mass and were on the same order of magnitude as minimum viable population estimates from species- and context-specific studies. Our approach provides a compromise between pragmatic, nonspecific population targets and detailed context-specific estimates of population viability for which only limited data are available. It enables a first estimation of species-specific population targets based on a readily available trait and thus allows setting equitable targets for population persistence in large-scale and multispecies conservation assessments and planning. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
Asbestos exposure--quantitative assessment of risk
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, J.M.; Weill, H.
Methods for deriving quantitative estimates of asbestos-associated health risks are reviewed and their numerous assumptions and uncertainties described. These methods involve extrapolation of risks observed at past relatively high asbestos concentration levels down to usually much lower concentration levels of interest today--in some cases, orders of magnitude lower. These models are used to calculate estimates of the potential risk to workers manufacturing asbestos products and to students enrolled in schools containing asbestos products. The potential risk to workers exposed for 40 yr to 0.5 fibers per milliliter (f/ml) of mixed asbestos fiber type (a permissible workplace exposure limit under consideration by the Occupational Safety and Health Administration (OSHA)) is estimated as 82 lifetime excess cancers per 10,000 exposed. The risk to students exposed to an average asbestos concentration of 0.001 f/ml of mixed asbestos fiber types for an average enrollment period of 6 school years is estimated as 5 lifetime excess cancers per one million exposed. If the school exposure is to chrysotile asbestos only, then the estimated risk is 1.5 lifetime excess cancers per million. Risks from other causes are presented for comparison; e.g., annual rates (per million) of 10 deaths from high school football, 14 from bicycling (10-14 yr of age), 5 to 20 for whooping cough vaccination. Decisions concerning asbestos products require participation of all parties involved and should only be made after a scientifically defensible estimate of the associated risk has been obtained. In many cases to date, such decisions have been made without adequate consideration of the level of risk or the cost-effectiveness of attempts to lower the potential risk. 73 references.
Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L.
2013-01-01
Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays. PMID:23737753
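The joint estimator is the paper's own contribution; as a generic illustration of relatedness-based heritability estimation, a Haseman-Elston-style regression of phenotype cross-products on pairwise relatedness is sketched below with simulated data.

```python
# Regress products of standardized phenotypes on pairwise relatedness;
# under a simple additive model the slope estimates narrow-sense h2.
import numpy as np

rng = np.random.default_rng(3)
n_pairs, h2 = 5000, 0.4
phi = rng.uniform(0.0, 0.5, n_pairs)           # pairwise relatedness
z1 = rng.normal(size=n_pairs)                  # phenotype of individual 1
rho = h2 * phi                                 # expected phenotypic correlation
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=n_pairs)

y = z1 * z2                                    # cross-products, E[y|phi] = h2*phi
slope = np.sum((phi - phi.mean()) * y) / np.sum((phi - phi.mean()) ** 2)
print(f"h2 estimate: {slope:.2f}")             # close to 0.4
```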
Image-derived input function with factor analysis and a-priori information.
Simončič, Urban; Zanotti-Fregonara, Paolo
2015-02-01
Quantitative PET studies often require the cumbersome and invasive procedure of arterial cannulation to measure the input function. This study sought to minimize the number of necessary blood samples by developing a factor-analysis-based image-derived input function (IDIF) methodology for dynamic PET brain studies. IDIF estimation was performed as follows: (a) carotid and background regions were segmented manually on an early PET time frame; (b) blood-weighted and tissue-weighted time-activity curves (TACs) were extracted with factor analysis; (c) factor analysis results were denoised and scaled using the voxels with the highest blood signal; (d) using population data and one blood sample at 40 min, whole-blood TAC was estimated from postprocessed factor analysis results; and (e) the parent concentration was finally estimated by correcting the whole-blood curve with measured radiometabolite concentrations. The methodology was tested using data from 10 healthy individuals imaged with [(11)C](R)-rolipram. The accuracy of IDIFs was assessed against full arterial sampling by comparing the area under the curve of the input functions and by calculating the total distribution volume (VT). The shape of the image-derived whole-blood TAC matched the reference arterial curves well, and the whole-blood area under the curves were accurately estimated (mean error 1.0±4.3%). The relative Logan-V(T) error was -4.1±6.4%. Compartmental modeling and spectral analysis gave less accurate V(T) results compared with Logan. A factor-analysis-based IDIF for [(11)C](R)-rolipram brain PET studies that relies on a single blood sample and population data can be used for accurate quantification of Logan-V(T) values.
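Step (d), anchoring the unscaled image-derived curve to a single blood sample, is simple enough to show in a few lines; the curve shape and sample value below are invented.

```python
# Scale an unscaled image-derived whole-blood curve with one blood sample.
import numpy as np

t = np.arange(0.0, 90.0, 1.0)                            # minutes
idif_raw = np.exp(-t / 40.0) * (1 - np.exp(-t / 0.5))    # arbitrary units

t_sample, c_sample = 40.0, 12.5            # one measured sample (kBq/mL)
scale = c_sample / np.interp(t_sample, t, idif_raw)
idif = scale * idif_raw                    # whole-blood curve in kBq/mL
print(f"scale factor: {scale:.1f}")
```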
MRI-assisted PET motion correction for neurologic studies in an integrated MR-PET scanner.
Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B; Michel, Christian J; El Fakhri, Georges; Schmand, Matthias; Sorensen, A Gregory
2011-01-01
Head motion is difficult to avoid in long PET studies, degrading the image quality and offsetting the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MRI data can be used for motion tracking. In this work, a novel algorithm for data processing and rigid-body motion correction (MC) for the MRI-compatible BrainPET prototype scanner is described, and proof-of-principle phantom and human studies are presented. To account for motion, the PET prompt and random coincidences and sensitivity data for postnormalization were processed in the line-of-response (LOR) space according to the MRI-derived motion estimates. The processing time on the standard BrainPET workstation is approximately 16 s for each motion estimate. After rebinning in the sinogram space, the motion corrected data were summed, and the PET volume was reconstructed using the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed, and motion estimates were obtained using 2 high-temporal-resolution MRI-based motion-tracking techniques. After accounting for the misalignment between the 2 scanners, perfectly coregistered MRI and PET volumes were reproducibly obtained. The MRI output gates inserted into the PET list-mode allow the temporal correlation of the 2 datasets within 0.2 ms. The Hoffman phantom volume reconstructed by processing the PET data in the LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in the image space, demonstrating the quantitative accuracy of the procedure. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 s and 20 ms, respectively. Motion-deblurred PET images, with excellent delineation of specific brain structures, were obtained using these 2 MRI-based estimates. An MRI-based MC algorithm was implemented for an integrated MR-PET scanner. High-temporal-resolution MRI-derived motion estimates (obtained while simultaneously acquiring anatomic or functional MRI data) can be used for PET MC. An MRI-based MC method has the potential to improve PET image quality, increasing its reliability, reproducibility, and quantitative accuracy, and to benefit many neurologic applications.
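The core geometric operation, moving a line of response (LOR) back to the reference head position with an MRI-derived rigid transform, can be sketched as below; the rotation, translation, and endpoint coordinates are invented.

```python
# Rigid-body correction of one LOR: map both detector endpoints through
# the inverse of the measured head motion (rotation R, translation tvec).
import numpy as np

theta = np.deg2rad(5.0)                        # 5-degree rotation about z
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
tvec = np.array([2.0, -1.0, 0.5])              # mm

p1 = np.array([300.0, 0.0, 10.0])              # LOR endpoints (mm)
p2 = np.array([-300.0, 5.0, 12.0])

p1_ref = R.T @ (p1 - tvec)                     # inverse motion: undo t, then R
p2_ref = R.T @ (p2 - tvec)
print(p1_ref.round(2), p2_ref.round(2))
```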
NASA Technical Reports Server (NTRS)
Chapman, Bruce; Celi, Jorge; Hamilton, Steve; McDonald, Kyle
2013-01-01
UAVSAR, NASA's airborne Synthetic Aperture Radar (SAR), conducted an extended observational campaign in Central and South America in March 2013, primarily related to volcanic deformations along the Andean Mountain Range but also including a large number of flights studying other scientific phenomena. During this campaign, the L-band SAR collected data over the Napo River in Ecuador. The objectives of this experiment were to acquire polarimetric and interferometric L-band SAR data over an inundated tropical forest in Ecuador simultaneously with on-the-ground field work ascertaining the extent of inundation, and to then derive from this data a quantitative estimate for the error in the SAR-derived inundation extent. In this paper, we will first describe the processing and preliminary analysis of the SAR data. The polarimetric SAR data will be classified by land cover and inundation state. The interferometric SAR data will be used to identify those areas where change in inundation extent occurred, and to measure the change in water level between two observations separated by a week. Second, we will describe the collection of the field estimates of inundation, and present preliminary comparisons of inundation extent measured in the field versus that estimated from the SAR data.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
Counting the cost: estimating the economic benefit of pedophile treatment programs.
Shanahan, M; Donato, R
2001-04-01
The principal objective of this paper is to identify the economic costs and benefits of pedophile treatment programs incorporating both the tangible and intangible cost of sexual abuse to victims. Cost estimates of cognitive behavioral therapy programs in Australian prisons are compared against the tangible and intangible costs to victims of being sexually abused. Estimates are prepared that take into account a number of problematic issues. These include the range of possible recidivism rates for treatment programs; the uncertainty surrounding the number of child sexual molestation offences committed by recidivists; and the methodological problems associated with estimating the intangible costs of sexual abuse on victims. Despite the variation in parameter estimates that impact the cost-benefit analysis of pedophile treatment programs, it is found that the potential range of economic costs from child sexual abuse is substantial and the economic benefits to be derived from appropriate and effective treatment programs are high. Based on a reasonable set of parameter estimates, in-prison, cognitive therapy treatment programs for pedophiles are likely to be of net benefit to society. Despite this, a critical area of future research must include further methodological developments in estimating the quantitative impact of child sexual abuse in the community.
Variability of serial same-day left ventricular ejection fraction using quantitative gated SPECT.
Vallejo, Enrique; Chaya, Hugo; Plancarte, Gerardo; Victoria, Diana; Bialostozky, David
2002-01-01
The accuracy of quantitative gated single photon emission computed tomography (SPECT) (QGS) and the potential limitations for estimation of left ventricular ejection fraction (LVEF) have been extensively evaluated. However, few studies have focused on the serial variability of QGS. This study was conducted to assess the serial variability of QGS for determination of LVEF between 2 sequential technetium-99m sestamibi-gated SPECT acquisitions at rest in both healthy and unhealthy subjects. The study population consisted of 2 groups: group I included 21 volunteers with a low likelihood of coronary artery disease (CAD), and group II included 22 consecutive patients with documented CAD. Both groups underwent serial SPECT imaging. The overall correlation between sequential images was high (r = 0.94, SEE = 5.3%), and the mean serial variability of LVEF was 5.15% +/- 3.51%. Serial variability was lower for images with high counts (3.45% +/- 3.23%) than for images with low counts (6.85% +/- 3.77%). The mean serial variability was not different between normal and abnormal high-dose images (3.0% +/- 1.56% vs 3.9% +/- 2.77%). However, mean serial variability for images derived from abnormal low-dose images was significantly greater than that derived from normal low-dose images (9.6% +/- 2.22% vs 3.1% +/- 2.12%, P <.05). Although QGS is an efficacious method to approximate LVEF values and is extremely valuable for incremental risk stratification of patients with CAD, it has significant variability in the estimation of LVEF on serial images. This should be taken into account when used for serial evaluation of LVEF.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiaotong; Liu, Jiaen; Van de Moortele, Pierre-Francois
2014-12-15
The Electrical Properties Tomography (EPT) technique utilizes measurable radio frequency (RF) coil induced magnetic fields (B1 fields) in a Magnetic Resonance Imaging (MRI) system to quantitatively reconstruct the local electrical properties (EP) of biological tissues. Information derived from the same data set, e.g., complex numbers of B1 distribution towards electric field calculation, can be used to estimate, on a subject-specific basis, local Specific Absorption Rate (SAR). SAR plays a significant role in RF pulse design for high-field MRI applications, where maximum local tissue heating remains one of the most constraining limits. The purpose of the present work is to investigate the feasibility of such B1-based local SAR estimation, expanding on previously proposed EPT approaches. To this end, B1 calibration was obtained in a gelatin phantom at 7 T with a multi-channel transmit coil, under a particular multi-channel B1-shim setting (B1-shim I). Using this unique set of B1 calibration, local SAR distribution was subsequently predicted for B1-shim I, as well as for another B1-shim setting (B1-shim II), considering a specific set of parameters for a heating MRI protocol consisting of RF pulses played at 1% duty cycle. Local SAR results, which could not be directly measured with MRI, were subsequently converted into temperature changes, which in turn were validated against temperature changes measured by MRI Thermometry based on the proton chemical shift.
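For context (standard EPT background rather than anything specific to this report), the Helmholtz-based relation that links the measured B1+ field to the complex permittivity, and hence to the conductivity needed for SAR estimation, can be written as follows; signs depend on the assumed time convention, and a locally homogeneous medium is assumed.

```latex
% Helmholtz-based EPT: complex permittivity from the measured B1+ field.
\[
\epsilon_c(\mathbf{r}) = -\,\frac{\nabla^2 B_1^{+}(\mathbf{r})}{\mu_0\,\omega^2\,B_1^{+}(\mathbf{r})},
\qquad
\epsilon_c = \epsilon - \frac{i\sigma}{\omega},
\]
% so conductivity and permittivity follow from the imaginary and real parts,
% and local SAR from the electric field amplitude and tissue density rho:
\[
\sigma = \frac{1}{\mu_0\,\omega}\,\operatorname{Im}\!\left(\frac{\nabla^2 B_1^{+}}{B_1^{+}}\right),
\qquad
\epsilon = -\,\frac{1}{\mu_0\,\omega^2}\,\operatorname{Re}\!\left(\frac{\nabla^2 B_1^{+}}{B_1^{+}}\right),
\qquad
\mathrm{SAR} = \frac{\sigma\,|\mathbf{E}|^2}{2\rho}.
\]
```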
The Sense of Confidence during Probabilistic Learning: A Normative Account.
Meyniel, Florent; Schlunegger, Daniel; Dehaene, Stanislas
2015-06-01
Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable "feeling of knowing" or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. This ability cannot be reduced to simple heuristics, it seems instead a core property of the learning process.
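In the stationary special case, the normative learner reduces to Bayesian updating of a transition probability, with confidence read out from the posterior spread; the sketch below (a Beta-Bernoulli model, ignoring the paper's change-point machinery) illustrates the idea.

```python
# Beta-Bernoulli learner: posterior mean = estimate, inverse posterior
# variance = one possible confidence readout that grows with evidence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
p_true = 0.7
obs = rng.random(200) < p_true       # simulated A->B transition outcomes

a, b = 1.0, 1.0                      # uniform Beta prior
for x in obs:
    a, b = a + x, b + (1 - x)

post = stats.beta(a, b)
print(f"estimate={post.mean():.2f}, confidence={1 / post.var():.0f}")
```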
Sun, Phillip Zhe; Wang, Yu; Dai, ZhuoZhi; Xiao, Gang; Wu, Renhua
2014-01-01
Chemical exchange saturation transfer (CEST) MRI is sensitive to dilute proteins and peptides as well as microenvironmental properties. However, the complexity of the CEST MRI effect, which varies with the labile proton content, exchange rate and experimental conditions, underscores the need for developing quantitative CEST (qCEST) analysis. Towards this goal, it has been shown that the omega plot is capable of quantifying paramagnetic CEST MRI. However, the use of the omega plot is somewhat limited for diamagnetic CEST (DIACEST) MRI because it is more susceptible to direct radio frequency (RF) saturation (spillover) owing to the relatively small chemical shift. Recently, it has been found that, for dilute DIACEST agents that undergo slow to intermediate chemical exchange, the spillover effect varies little with the labile proton ratio and exchange rate. Therefore, we postulated that the omega plot analysis can be improved if the RF spillover effect could be estimated and taken into account. Specifically, simulation showed that both the labile proton ratio and exchange rate derived using the spillover effect-corrected omega plot were in good agreement with simulated values. In addition, the modified omega plot was confirmed experimentally, and we showed that the derived labile proton ratio increased linearly with creatine concentration (p < 0.01), with little difference in their exchange rate (p = 0.32). In summary, our study extends the conventional omega plot for quantitative analysis of DIACEST MRI. Copyright © 2014 John Wiley & Sons, Ltd.
Cohen, Trevor; Blatter, Brett; Patel, Vimla
2005-01-01
Certain applications require computer systems to approximate intended human meaning. This is achievable in constrained domains with a finite number of concepts. Areas such as psychiatry, however, draw on concepts from the world-at-large. A knowledge structure with broad scope is required to comprehend such domains. Latent Semantic Analysis (LSA) is an unsupervised corpus-based statistical method that derives quantitative estimates of the similarity between words and documents from their contextual usage statistics. The aim of this research was to evaluate the ability of LSA to derive meaningful associations between concepts relevant to the assessment of dangerousness in psychiatry. An expert reference model of dangerousness was used to guide the construction of a relevant corpus. Derived associations between words in the corpus were evaluated qualitatively. A similarity-based scoring function was used to assign dangerousness categories to discharge summaries. LSA was shown to derive intuitive relationships between concepts and correlated significantly better than random with human categorization of psychiatric discharge summaries according to dangerousness. The use of LSA to derive a simulated knowledge structure can extend the scope of computer systems beyond the boundaries of constrained conceptual domains. PMID:16779020
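A miniature version of the LSA pipeline (term-document matrix, truncated SVD, cosine similarity in the latent space) is sketched below on a toy corpus; the study's actual corpus was built around an expert model of dangerousness.

```python
# LSA sketch: TF-IDF term-document matrix -> truncated SVD -> cosine
# similarity between documents in the reduced "semantic" space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "patient made violent threats toward staff",
    "history of violent threats toward family members",
    "calm and cooperative with stable mood at discharge",
    "stable mood and no threats reported",
]
X = TfidfVectorizer().fit_transform(docs)
Z = TruncatedSVD(n_components=2).fit_transform(X)
print(cosine_similarity(Z).round(2))   # docs 0 and 1 cluster together
```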
A nonlinear generalization of the Savitzky-Golay filter and the quantitative analysis of saccades
Dai, Weiwei; Selesnick, Ivan; Rizzo, John-Ross; Rucker, Janet; Hudson, Todd
2017-01-01
The Savitzky-Golay (SG) filter is widely used to smooth and differentiate time series, especially biomedical data. However, time series that exhibit abrupt departures from their typical trends, such as sharp waves or steps, which are of physiological interest, tend to be oversmoothed by the SG filter. Hence, the SG filter tends to systematically underestimate physiological parameters in certain situations. This article proposes a generalization of the SG filter to more accurately track abrupt deviations in time series, leading to more accurate parameter estimates (e.g., peak velocity of saccadic eye movements). The proposed filtering methodology models a time series as the sum of two component time series: a low-frequency time series for which the conventional SG filter is well suited, and a second time series that exhibits instantaneous deviations (e.g., sharp waves, steps, or more generally, discontinuities in a higher order derivative). The generalized SG filter is then applied to the quantitative analysis of saccadic eye movements. It is demonstrated that (a) the conventional SG filter underestimates the peak velocity of saccades, especially those of small amplitude, and (b) the generalized SG filter estimates peak saccadic velocity more accurately than the conventional filter. PMID:28813566
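The conventional-SG baseline that the generalized filter improves on takes one call in SciPy; the synthetic saccade below makes the underestimation visible (position trace, sampling rate, and filter settings are all illustrative).

```python
# Conventional Savitzky-Golay differentiation of an eye-position trace.
import numpy as np
from scipy.signal import savgol_filter

fs = 1000.0                                    # Hz
t = np.arange(0.0, 0.2, 1 / fs)
pos = 10.0 / (1 + np.exp(-(t - 0.1) * 200))    # 10-degree sigmoidal saccade

vel = savgol_filter(pos, window_length=21, polyorder=3,
                    deriv=1, delta=1 / fs)     # velocity in deg/s
print(f"peak velocity: {vel.max():.0f} deg/s")
# The analytic peak of this sigmoid is 10*200/4 = 500 deg/s; the SG
# estimate falls a few percent short, and the shortfall grows for
# smaller, briefer saccades, as the paper reports.
```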
Joy, Abraham; Anim-Danso, Emmanuel; Kohn, Joachim
2009-01-01
Methods for the detection and estimation of diphosgene and triphosgene are described. These compounds are widely used phosgene precursors which produce an intensely colored purple pentamethine oxonol dye when reacted with 1,3-dimethylbarbituric acid (DBA) and pyridine (or a pyridine derivative). Two quantitative methods are described, based on either UV absorbance or fluorescence of the oxonol dye. Detection limits are ~ 4 µmol/L by UV and <0.4 µmol/L by fluorescence. The third method is a test strip for the simple and rapid detection and semi-quantitative estimation of diphosgene and triphosgene, using a filter paper embedded with dimethylbarbituric acid and poly(4-vinylpyridine). Addition of a test solution to the paper causes a color change from white to light blue at low concentrations and to pink at higher concentrations of triphosgene. The test strip is useful for quick on-site detection of triphosgene and diphosgene in reaction mixtures. The test strip is easy to perform and provides clear signal readouts indicative of the presence of phosgene precursors. The utility of this method was demonstrated by the qualitative determination of residual triphosgene during the production of poly(Bisphenol A carbonate). PMID:19782219
Cerebral capillary velocimetry based on temporal OCT speckle contrast.
Choi, Woo June; Li, Yuandong; Qin, Wan; Wang, Ruikang K
2016-12-01
We propose a new optical coherence tomography (OCT) based method to measure red blood cell (RBC) velocities in single capillaries in the cortex of the rodent brain. This OCT capillary velocimetry exploits quantitative laser speckle contrast analysis to estimate the speckle decorrelation rate from the measured temporal OCT speckle signals, which is related to microcirculatory flow velocity. We hypothesize that the OCT signal due to sub-surface capillary flow can be treated as a speckle signal in the single-scattering regime, so that the time scale of its speckle fluctuations can be subjected to single-scattering laser speckle contrast analysis to derive a characteristic decorrelation time. To validate this hypothesis, OCT measurements are conducted on a single capillary flow phantom operating at preset velocities, in which M-mode B-frames are acquired using a high-speed OCT system. Analysis is then performed on the time-varying OCT signals extracted at the capillary flow, exhibiting a typical inverse relationship between the estimated decorrelation time and absolute RBC velocity, which is then used to deduce the capillary velocities. We apply the method to in vivo measurements of mouse brain, demonstrating that the proposed approach provides additional useful information in the quantitative assessment of capillary hemodynamics, complementary to that of OCT angiography.
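A sketch of the speckle-contrast computation and a single-scattering (Lorentzian) model fit is given below; the contrast-versus-window-length data are synthetic, and the exact model form used by the authors may differ.

```python
# Fit squared speckle contrast K^2(T) to a single-scattering model with
# exponential field decorrelation to recover the decorrelation time tau_c
# (shorter tau_c implies faster flow).
import numpy as np
from scipy.optimize import curve_fit

def k2_model(T, tau_c, beta):
    x = T / tau_c
    return beta * (1 / x + (np.exp(-2 * x) - 1) / (2 * x**2))

T = np.array([0.5, 1, 2, 4, 8, 16]) * 1e-3          # window lengths (s)
rng = np.random.default_rng(5)
k2_obs = k2_model(T, 2e-3, 0.9) * (1 + 0.03 * rng.normal(size=T.size))

(tau_c, beta), _ = curve_fit(k2_model, T, k2_obs, p0=[1e-3, 1.0])
print(f"tau_c = {tau_c * 1e3:.2f} ms")
```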
NASA Astrophysics Data System (ADS)
Gou, Yabin; Ma, Yingzhao; Chen, Haonan; Wen, Yixin
2018-05-01
Quantitative precipitation estimation (QPE) is one of the important applications of weather radars. However, in complex terrain such as the Tibetan Plateau, it is a challenging task to obtain an optimal Z-R relation due to the complex spatial and temporal variability in precipitation microphysics. This paper develops two radar QPE schemes, based respectively on Reflectivity Threshold (RT) and Storm Cell Identification and Tracking (SCIT) algorithms, using observations from 11 Doppler weather radars and 3264 rain gauges over the Eastern Tibetan Plateau (ETP). These two QPE methodologies are evaluated extensively using four precipitation events that are characterized by different meteorological features. Precipitation characteristics of independent storm cells associated with these four events, as well as the storm-scale differences, are investigated using short-term vertical profile of reflectivity (VPR) clusters. Evaluation results show that the SCIT-based rainfall approach performs better than the simple RT-based method for all precipitation events in terms of score comparison using validation gauge measurements as references. It is also found that the SCIT-based approach can effectively mitigate the local error of radar QPE and represent the precipitation spatiotemporal variability better than the RT-based scheme.
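The Z-R step at the heart of both schemes is a one-line power law; the sketch below uses the classic Marshall-Palmer coefficients, whereas operational schemes like those above tune (a, b) by storm type or region.

```python
# Convert radar reflectivity (dBZ) to rain rate (mm/h) via Z = a * R^b.
def rain_rate(dbz, a=200.0, b=1.6):
    z = 10.0 ** (dbz / 10.0)        # dBZ -> linear Z (mm^6 m^-3)
    return (z / a) ** (1.0 / b)     # invert the power law

for dbz in (20, 35, 50):
    print(f"{dbz} dBZ -> {rain_rate(dbz):.1f} mm/h")
```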
NASA Astrophysics Data System (ADS)
Wang, Quanzeng; Cheng, Wei-Chung; Suresh, Nitin; Hua, Hong
2016-05-01
With improved diagnostic capabilities and complex optical designs, endoscopic technologies are advancing. As one of several important optical performance characteristics, geometric distortion can negatively affect size estimation and feature-identification-related diagnosis. Therefore, a quantitative and simple distortion evaluation method is imperative for both the endoscopic industry and medical device regulatory agencies. However, no such method is available yet. While image correction techniques are rather mature, they depend heavily on computational power to process multidimensional image data with complex mathematical models, which makes them difficult to understand. Some commonly used distortion evaluation methods, such as the picture height distortion (DPH) or radial distortion (DRAD), are either too simple to accurately describe the distortion or subject to the error of deriving a reference image. We developed the basic local magnification (ML) method to evaluate endoscope distortion. Based on the method, we also developed ways to calculate DPH and DRAD. The method overcomes the aforementioned limitations, has clear physical meaning in the whole field of view, and can facilitate lesion size estimation during diagnosis. Most importantly, the method can help bring endoscopic technology to market and could potentially be adopted in an international endoscope standard.
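A distortion readout in the spirit of the paper (though not necessarily the authors' exact ML formulation) compares measured radii of grid-target points against their undistorted positions; all numbers below are invented.

```python
# Percent radial distortion from matched grid points: negative values
# growing with radius indicate barrel distortion.
import numpy as np

r_ref = np.array([1.0, 2.0, 3.0, 4.0])        # undistorted radii (mm)
r_img = np.array([0.99, 1.92, 2.74, 3.42])    # radii measured on the image

d_rad = 100.0 * (r_img - r_ref) / r_ref
print(d_rad.round(1))                         # e.g., [-1. -4. -8.7 -14.5]
```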
Bayesian assessment of overtriage and undertriage at a level I trauma centre.
DiDomenico, Paul B; Pietzsch, Jan B; Paté-Cornell, M Elisabeth
2008-07-13
We analysed the trauma triage system at a specific level I trauma centre to assess rates of over- and undertriage and to support recommendations for system improvements. The triage process is designed to estimate the severity of patient injury and allocate resources accordingly, with potential errors of overestimation (overtriage) consuming excess resources and underestimation (undertriage) potentially leading to medical errors. We first modelled the overall trauma system using risk analysis methods to understand interdependencies among the actions of the participants. We interviewed six experienced trauma surgeons to obtain their expert opinion of the over- and undertriage rates occurring in the trauma centre. We then assessed actual over- and undertriage rates in a random sample of 86 trauma cases collected over a six-week period at the same centre. We employed Bayesian analysis to quantitatively combine the data with the prior probabilities derived from expert opinion in order to obtain posterior distributions. The results were estimates of overtriage and undertriage in 16.1% and 4.9% of patients, respectively. This Bayesian approach, which quantitatively combines case data and expert opinion, provides a rational means of obtaining a best estimate of the system's performance. The overall approach that we describe in this paper can be employed more widely to analyse complex health care delivery systems, with the objective of reducing errors, patient risk and excess costs.
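The Beta-binomial form of this update is compact enough to show; prior parameters and counts below are illustrative stand-ins, not the study's elicited values.

```python
# Encode expert opinion as a Beta prior on the overtriage rate and update
# with the chart-review sample; the posterior gives the combined estimate.
from scipy import stats

a0, b0 = 3.0, 17.0         # prior centered near 15% from expert elicitation
k, n = 13, 86              # overtriaged cases among reviewed trauma cases

post = stats.beta(a0 + k, b0 + n - k)
lo, hi = post.interval(0.95)
print(f"posterior mean {post.mean():.3f}, 95% CrI ({lo:.3f}, {hi:.3f})")
```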
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owens, T.; Ungers, L.; Briggs, T.
1980-08-01
The purpose of this study is to estimate, both quantitatively and qualitatively, the worker and societal risks attributable to four photovoltaic cell (solar cell) production processes. Quantitative risk values were determined by use of statistics from the California semiconductor industry. The qualitative risk assessment was performed using a variety of both governmental and private sources of data. The occupational health statistics derived from the semiconductor industry were used to predict injury and fatality levels associated with photovoltaic cell manufacturing. The use of these statistics to characterize the two silicon processes described herein is defensible from the standpoint that many of the same process steps and materials are used in both the semiconductor and photovoltaic industries. These health statistics are less applicable to the gallium arsenide and cadmium sulfide manufacturing processes, primarily because of differences in the materials utilized. Although such differences tend to discourage any absolute comparisons among the four photovoltaic cell production processes, certain relative comparisons are warranted. To facilitate a risk comparison of the four processes, the number and severity of process-related chemical hazards were assessed. This qualitative hazard assessment addresses both the relative toxicity and the exposure potential of substances in the workplace. In addition to the worker-related hazards, estimates of process-related emissions and wastes are also provided.
NASA Astrophysics Data System (ADS)
Streubel, D. P.; Kodama, K.
2014-12-01
To provide continuous flash flood situational awareness and to better differentiate the severity of ongoing individual precipitation events, the National Weather Service Research Distributed Hydrologic Model (RDHM) is being implemented over Hawaii and Alaska. In the implementation of RDHM, three gridded precipitation analyses are used as forcing. The first analysis is a radar-only precipitation estimate derived from WSR-88D digital hybrid reflectivity and a Z-R relationship, aggregated onto an hourly ¼ HRAP grid. The second analysis is derived from a rain gauge network and interpolated onto an hourly ¼ HRAP grid using PRISM climatology. The third analysis is derived from a rain gauge network where rain gauges are assigned static pre-determined weights to derive a uniform mean areal precipitation that is applied over a catchment on a ¼ HRAP grid. To assess the effect of the different QPE analyses on the accuracy of RDHM simulations, and to potentially identify a preferred analysis for operational use, each QPE was used to force RDHM to simulate stream flow for 20 USGS peak flow events. The evaluation of the RDHM simulations focused on peak flow magnitude, peak flow timing, and event volume accuracy, the measures most relevant for operational use. Results showed that RDHM simulations based on the observed rain gauge amounts were more accurate in simulating peak flow magnitude and event volume relative to the radar-derived analysis. However, this result was not consistent across all 20 events, nor for the few rainfall events in which an annual peak flow was recorded at more than one USGS gage. This suggests that a more robust QPE forcing incorporating uncertainty derived from all three analyses may provide better input for simulating extreme peak flow events.
Comber, Mike H I; Walker, John D; Watts, Chris; Hermens, Joop
2003-08-01
The use of quantitative structure-activity relationships (QSARs) for deriving the predicted no-effect concentration of discrete organic chemicals for the purposes of conducting a regulatory risk assessment in Europe and the United States is described. In the United States, under the Toxic Substances Control Act (TSCA), the TSCA Interagency Testing Committee and the U.S. Environmental Protection Agency (U.S. EPA) use SARs to estimate the hazards of existing and new chemicals. Within the Existing Substances Regulation in Europe, QSARs may be used for data evaluation, test strategy indications, and the identification and filling of data gaps. To illustrate where and when QSARs may be useful and when their use is more problematic, an example, methyl tertiary-butyl ether (MTBE), is given and the predicted and experimental data are compared. Improvements needed for new QSARs and tools for developing and using QSARs are discussed.
Quantitative structure-toxicity relationship (QSTR) studies on the organophosphate insecticides.
Can, Alper
2014-11-04
Organophosphate insecticides are the most commonly used pesticides in the world. In this study, quantitative structure-toxicity relationship (QSTR) models were derived for estimating the acute oral toxicity of organophosphate insecticides to male rats. The 20 chemicals of the training set and the seven compounds of the external testing set were described by means of using descriptors. Descriptors for lipophilicity, polarity and molecular geometry, as well as quantum chemical descriptors for energy were calculated. Model development to predict toxicity of organophosphate insecticides in different matrices was carried out using multiple linear regression. The model was validated internally and externally. In the present study, QSTR model was used for the first time to understand the inherent relationships between the organophosphate insecticide molecules and their toxicity behavior. Such studies provide mechanistic insight about structure-toxicity relationship and help in the design of less toxic insecticides. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
The correlation between relatives on the supposition of genomic imprinting.
Spencer, Hamish G
2002-01-01
Standard genetic analyses assume that reciprocal heterozygotes are, on average, phenotypically identical. If a locus is subject to genomic imprinting, however, this assumption does not hold. We incorporate imprinting into the standard quantitative-genetic model for two alleles at a single locus, deriving expressions for the additive and dominance components of genetic variance, as well as measures of resemblance among relatives. We show that, in contrast to the case with Mendelian expression, the additive and dominance deviations are correlated. In principle, this correlation allows imprinting to be detected solely on the basis of different measures of familial resemblances, but in practice, the standard error of the estimate is likely to be too large for a test to have much statistical power. The effects of genomic imprinting will need to be incorporated into quantitative-genetic models of many traits, for example, those concerned with mammalian birthweight. PMID:12019254
Estimating sub-surface dispersed oil concentration using acoustic backscatter response.
Fuller, Christopher B; Bonner, James S; Islam, Mohammad S; Page, Cheryl; Ojo, Temitope; Kirkey, William
2013-05-15
The recent Deepwater Horizon disaster resulted in a dispersed oil plume at an approximate depth of 1000 m. Several methods were used to characterize this plume with respect to concentration and spatial extent, including surface-supported sampling and autonomous underwater vehicles with in situ instrument payloads. Additionally, echo sounders were used to track the plume location, demonstrating the potential for remote detection using acoustic backscatter (ABS). This study evaluated the use of an Acoustic Doppler Current Profiler (ADCP) to quantitatively detect oil-droplet suspensions from the ABS response in a controlled laboratory setting. Results from this study showed log-linear ABS responses to oil-droplet volume concentration. However, the inability to reproduce ABS response factors suggests the difficulty of developing meaningful calibration factors for quantitative field analysis. Evaluation of theoretical ABS intensity derived from the particle size distribution provided insight regarding method sensitivity in the presence of interfering ambient particles. Copyright © 2013 Elsevier Ltd. All rights reserved.
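A log-linear calibration of this kind can be fitted and inverted in a few lines. The sketch below assumes synthetic intensity-concentration pairs; the numbers are illustrative, not the study's measurements:

```python
import numpy as np

# Log-linear calibration of ABS response vs oil-droplet volume concentration:
# I = m * log10(C) + b (the reported functional form; values are made up here).
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])      # ppm by volume
abs_db = np.array([61.0, 64.2, 68.0, 71.1, 74.0, 77.3])     # backscatter, dB

m, b = np.polyfit(np.log10(conc), abs_db, deg=1)            # slope, intercept
print(f"response factor: {m:.2f} dB per decade of concentration")

# Inverting the calibration to estimate concentration from a measured intensity:
print(f"estimated C at 70 dB: {10 ** ((70.0 - b) / m):.0f} ppm")
```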
Quantification of local and global benefits from air pollution control in Mexico City.
Mckinley, Galen; Zuk, Miriam; Höjer, Morten; Avalos, Montserrat; González, Isabel; Iniestra, Rodolfo; Laguna, Israel; Martínez, Miguel A; Osnaya, Patricia; Reynales, Luz M; Valdés, Raydel; Martínez, Julia
2005-04-01
Complex sociopolitical, economic, and geographical realities cause the 20 million residents of Mexico City to suffer from some of the worst air pollution conditions in the world. Greenhouse gas emissions from the city are also substantial, and opportunities for joint local-global air pollution control are being sought. Although a plethora of measures to improve local air quality and reduce greenhouse gas emissions have been proposed for Mexico City, resources are not available for implementation of all proposed controls and thus prioritization must occur. Yet policy makers often do not conduct comprehensive quantitative analyses to inform these decisions. We reanalyze a subset of currently proposed control measures, and derive cost and health benefit estimates that are directly comparable. This study illustrates that improved quantitative analysis can change implementation prioritization for air pollution and greenhouse gas control measures in Mexico City.
NASA Astrophysics Data System (ADS)
Goldar, A.; Arneodo, A.; Audit, B.; Argoul, F.; Rappailles, A.; Guilbaud, G.; Petryk, N.; Kahli, M.; Hyrien, O.
2016-03-01
We propose a non-local model of DNA replication that takes into account the observed uncertainty on the position and time of replication initiation in eukaryote cell populations. By picturing replication initiation as a two-state system and considering all possible transition configurations, and by taking into account the chromatin's fractal dimension, we derive an analytical expression for the rate of replication initiation. This model predicts, with no free parameters, the temporal profiles of initiation rate, replication fork density and fraction of replicated DNA, in quantitative agreement with corresponding experimental data from both S. cerevisiae and human cells, and provides a quantitative estimate of initiation site redundancy. This study shows that, to a large extent, the program that regulates the dynamics of eukaryotic DNA replication is a collective phenomenon that emerges from the stochastic nature of replication origin initiation.
NASA Astrophysics Data System (ADS)
Sun, Qiming; Melnikov, Alexander; Wang, Jing; Mandelis, Andreas
2018-04-01
A rigorous treatment of the nonlinear behavior of photocarrier radiometric (PCR) signals is presented theoretically and experimentally for the quantitative characterization of semiconductor photocarrier recombination and transport properties. A frequency-domain model based on the carrier rate equation and the classical carrier radiative recombination theory was developed. The derived concise expression reveals different functionalities of the PCR amplitude and phase channels: the phase bears direct quantitative correlation with the carrier effective lifetime, while the amplitude versus the estimated photocarrier density dependence can be used to extract the equilibrium majority carrier density and thus, resistivity. An experimental ‘ripple’ optical excitation mode (small modulation depth compared to the dc level) was introduced to bypass the complicated ‘modulated lifetime’ problem so as to simplify theoretical interpretation and guarantee measurement self-consistency and reliability. Two Si wafers with known resistivity values were tested to validate the method.
Oxford Optronix MPM 3S: a clinical assessment of a microvascular perfusion monitor.
Dryden, C M; Gray, W M; Asbury, A J
1992-01-01
The Oxford Optronix MPM 3S is a new microvascular perfusion monitor which is promoted as a device for use in the operating theatre. It uses a semiconductor laser diode and applies the Doppler principle to derive a semi-quantitative estimation of microvascular flow. We assessed this instrument with eight healthy volunteers who each performed eight different orthostatic arm manoeuvres while forearm skin blood flow was monitored. The different manoeuvres caused statistically significant changes in the instrument's reading which generally were consistent with expected changes in blood flow. The monitor also was assessed in the theatre environment with four anaesthetized patients. It proved easy to use, and was not subject to electrical interference from other equipment including short-wave diathermy. The major practical limitation of the technique is the semi-quantitative nature of the measurement. The instrument appears to have potential clinical uses in plastic and vascular surgery.
Soil salt content estimation in the Yellow River delta with satellite hyperspectral data
Weng, Yongling; Gong, Peng; Zhu, Zhi-Liang
2008-01-01
Soil salinization is one of the most common land degradation processes and is a severe environmental hazard. The primary objective of this study is to investigate the potential of predicting salt content in soils with hyperspectral data acquired with EO-1 Hyperion. Both partial least-squares regression (PLSR) and conventional multiple linear regression (MLR), such as stepwise regression (SWR), were tested as the prediction model. PLSR is commonly used to overcome the problem caused by high-dimensional and correlated predictors. Chemical analysis of 95 samples collected from the top layer of soils in the Yellow River delta area shows that salt content was high on average, and the dominant chemicals in the saline soil were NaCl and MgCl2. Multivariate models were established between soil salt contents and hyperspectral data. Our results indicate that the PLSR technique with laboratory spectral data has a strong prediction capacity. Spectral bands at 1487-1527, 1971-1991, 2032-2092, and 2163-2355 nm possessed large absolute values of regression coefficients, with the largest coefficient at 2203 nm. We obtained a root mean squared error for calibration (with 61 samples) of RMSEC = 0.753 (R2 = 0.893) and a root mean squared error for validation (with 30 samples) of RMSEV = 0.574. The prediction model was applied on a pixel-by-pixel basis to a Hyperion reflectance image to yield a quantitative surface distribution map of soil salt content. The result was validated successfully against 38 sampling points. We obtained an RMSE estimate of 1.037 (R2 = 0.784) for the soil salt content map derived by the PLSR model. The salinity map derived from the SWR model shows that the predicted value is higher than the true value. These results demonstrate that the PLSR method is a more suitable technique than stepwise regression for quantitative estimation of soil salt content in a large area. © 2008 CASI.
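For readers unfamiliar with PLSR on spectra, the sketch below mirrors the study's calibration/validation split (61/30 samples) using scikit-learn's PLSRegression on synthetic stand-in spectra; the component count and data are assumptions, not the study's values:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's data: 61 calibration and 30 validation
# "spectra" with 200 correlated bands and a salt-content response.
X = rng.normal(size=(91, 200)).cumsum(axis=1)    # smooth, correlated rows
y = X[:, 120] * 0.05 + rng.normal(scale=0.5, size=91)
X_cal, y_cal = X[:61], y[:61]
X_val, y_val = X[61:], y[61:]

pls = PLSRegression(n_components=5)              # component count would be tuned
pls.fit(X_cal, y_cal)

y_hat = pls.predict(X_val).ravel()
rmsev = np.sqrt(np.mean((y_val - y_hat) ** 2))
print(f"RMSEV = {rmsev:.3f}, R2 = {r2_score(y_val, y_hat):.3f}")
```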
Adherence and drug resistance: predictions for therapy outcome.
Wahl, L M; Nowak, M A
2000-01-01
We combine standard pharmacokinetics with an established model of viral replication to predict the outcome of therapy as a function of adherence to the drug regimen. We consider two types of treatment failure: failure to eliminate the wild-type virus, and the emergence of drug-resistant virus. Specifically, we determine the conditions under which resistance dominates as a result of imperfect adherence. We derive this result for both single- and triple-drug therapies, with attention to conditions which favour the emergence of viral strains that are resistant to one or more drugs in a cocktail. Our analysis provides quantitative estimates of the degree of adherence necessary to prevent resistance. We derive results specific to the treatment of human immunodeficiency virus infection, but emphasize that our method is applicable to a range of viral or other infections treated by chemotherapy. PMID:10819155
Suwa, Masayori; Nakano, Yusuke; Tsukahara, Satoshi; Watarai, Hitoshi
2013-05-21
We have constructed an experimental setup for Faraday rotation dispersion imaging and demonstrated the performance of a novel imaging principle. By using a pulsed magnetic field and a polarized light synchronized to the magnetic field, quantitative Faraday rotation images of diamagnetic organic liquids in glass capillaries were observed. Nonaromatic hydrocarbons, benzene derivatives, and naphthalene derivatives were clearly distinguished by the Faraday rotation images due to the difference in Verdet constants. From the wavelength dispersion of the Faraday rotation images in the visible region, it was found that the resonance wavelength in the UV region, which was estimated based on the Faraday B-term, could be used as characteristic parameters for the imaging of the liquids. Furthermore, simultaneous acquisition of Faraday rotation image and natural optical rotation image was demonstrated for chiral organic liquids.
NASA Astrophysics Data System (ADS)
McNally, Amy L.
Agricultural drought is characterized by shortages in precipitation, large differences between actual and potential evapotranspiration, and soil water deficits that impact crop growth and pasture productivity. Rainfall and other agrometeorological gauge networks in Sub-Saharan Africa are inadequate for drought early warning systems and hence, satellite-based estimates of rainfall and vegetation greenness provide the main sources of information. While a number of studies have described the empirical relationship between rainfall and vegetation greenness, these studies lack a process-based approach that includes soil moisture storage. In Chapters I and II, I modeled soil moisture using satellite rainfall inputs and developed a new method for estimating soil moisture with NDVI calibrated to in situ and microwave soil moisture observations. By transforming both NDVI and rainfall into estimates of soil moisture I was able to easily compare these two datasets in a physically meaningful way. In Chapter II, I also show how the new NDVI-derived soil moisture can be assimilated into a water balance model that calculates an index of crop water stress. Compared to the analogous rainfall-derived estimates of soil moisture and crop stress, the NDVI-derived estimates were better correlated with millet yields. In Chapter III, I developed a metric for defining growing season drought events that negatively impact millet yields. This metric is based on the data and models used in Chapters I and II. I then use this metric to evaluate the ability of a sophisticated land surface model to detect drought events. The analysis showed that this particular land surface model's soil moisture estimates do have the potential to benefit the food security and drought early warning communities. With a focus on soil moisture, this dissertation introduced new methods that utilized a variety of data and models for agricultural drought monitoring applications. These new methods facilitate a more quantitative, transparent 'convergence of evidence' approach to identifying agricultural drought events that lead to food insecurity. Ideally, these new methods will contribute to better famine early warning and the timely delivery of food aid to reduce the human suffering caused by drought.
NASA Astrophysics Data System (ADS)
Eldardiry, H. A.; Habib, E. H.
2014-12-01
Radar-based technologies have made spatially and temporally distributed quantitative precipitation estimates (QPE) available in an operational environment, in contrast to rain gauges. The floods identified through flash flood monitoring and prediction systems are subject to at least three sources of uncertainty: (a) rainfall estimation errors, (b) streamflow prediction errors due to model structural issues, and (c) errors in defining a flood event. The current study focuses on the first source of uncertainty and its effect on deriving important climatological characteristics of extreme rainfall statistics. Examples of such characteristics are rainfall amounts with certain Average Recurrence Intervals (ARI) or Annual Exceedance Probabilities (AEP), which are highly valuable for hydrologic and civil engineering design purposes. Gauge-based precipitation frequency estimates (PFE) have been maturely developed and widely used over the last several decades. More recently, there has been a growing interest in the research community to explore the use of radar-based rainfall products for developing PFEs and to understand the associated uncertainties. This study uses radar-based multi-sensor precipitation estimates (MPE) for 11 years to derive PFEs corresponding to various return periods over a spatial domain that covers the state of Louisiana in the southern USA. The PFE estimation approach used in this study is based on fitting a generalized extreme value (GEV) distribution to extreme rainfall data in the form of annual maximum series (AMS). Among the estimation problems that arise when fitting GEV distributions at each radar pixel are the large variance and serious bias of the quantile estimators; hence, a regional frequency analysis (RFA) approach is applied. The RFA involves the use of data from the pixels surrounding each pixel within a defined homogeneous region. In this study, the region-of-influence approach, along with the index flood technique, is used in the RFA. A bootstrap procedure is carried out to account for the uncertainty in the distribution parameters and to construct 90% confidence intervals (i.e., 5% and 95% confidence limits) on AMS-based precipitation frequency curves.
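At the single-pixel level, the AMS-based PFE computation amounts to fitting a GEV and reading off a quantile. A minimal sketch with scipy, using an illustrative 11-year series (matching the record length, not the data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic annual-maximum series (AMS) for one radar pixel, in mm;
# 11 years mirrors the study's record length (values are illustrative).
ams = stats.genextreme.rvs(c=-0.1, loc=80, scale=25, size=11, random_state=rng)

# Fit a GEV to the AMS. With so few years the quantile estimators are
# high-variance -- the motivation for regional frequency analysis.
c, loc, scale = stats.genextreme.fit(ams)

# Precipitation frequency estimate for a 100-year average recurrence
# interval: the quantile with annual exceedance probability 1/100.
ari = 100.0
pfe_100 = stats.genextreme.ppf(1.0 - 1.0 / ari, c, loc=loc, scale=scale)
print(f"100-year PFE at this pixel: {pfe_100:.1f} mm")
```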
Temporal lobe epilepsy: quantitative MR volumetry in detection of hippocampal atrophy.
Farid, Nikdokht; Girard, Holly M; Kemmotsu, Nobuko; Smith, Michael E; Magda, Sebastian W; Lim, Wei Y; Lee, Roland R; McDonald, Carrie R
2012-08-01
To determine the ability of fully automated volumetric magnetic resonance (MR) imaging to depict hippocampal atrophy (HA) and to help correctly lateralize the seizure focus in patients with temporal lobe epilepsy (TLE). This study was conducted with institutional review board approval and in compliance with HIPAA regulations. Volumetric MR imaging data were analyzed for 34 patients with TLE and 116 control subjects. Structural volumes were calculated by using U.S. Food and Drug Administration-cleared software for automated quantitative MR imaging analysis (NeuroQuant). Results of quantitative MR imaging were compared with visual detection of atrophy, and, when available, with histologic specimens. Receiver operating characteristic analyses were performed to determine the optimal sensitivity and specificity of quantitative MR imaging for detecting HA and asymmetry. A linear classifier with cross validation was used to estimate the ability of quantitative MR imaging to help lateralize the seizure focus. Quantitative MR imaging-derived hippocampal asymmetries discriminated patients with TLE from control subjects with high sensitivity (86.7%-89.5%) and specificity (92.2%-94.1%). When a linear classifier was used to discriminate left versus right TLE, hippocampal asymmetry achieved 94% classification accuracy. Volumetric asymmetries of other subcortical structures did not improve classification. Compared with invasive video electroencephalographic recordings, lateralization accuracy was 88% with quantitative MR imaging and 85% with visual inspection of volumetric MR imaging studies but only 76% with visual inspection of clinical MR imaging studies. Quantitative MR imaging can depict the presence and laterality of HA in TLE with accuracy rates that may exceed those achieved with visual inspection of clinical MR imaging studies. Thus, quantitative MR imaging may enhance standard visual analysis, providing a useful and viable means for translating volumetric analysis into clinical practice.
NASA Astrophysics Data System (ADS)
Zhang, A.; Chen, S.; Fan, S.; Min, C.
2017-12-01
Precipitation is one of the basic elements of regional and global climate change. Not only does precipitation have a great impact on the earth's hydrosphere, but it also plays a crucial role in the global energy balance. S-band ground-based dual-polarization radar has excellent performance in identifying the different phase states of precipitation, which can dramatically improve the accuracy of hail identification and quantitative precipitation estimation (QPE). However, ground-based radar cannot measure precipitation in mountains, sparsely populated plateaus, deserts and oceans because of ground-based radar voids. The United States National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA) launched the Global Precipitation Measurement (GPM) mission almost three years ago. GPM is equipped with a GPM Microwave Imager (GMI) and a Dual-frequency (Ku- and Ka-band) Precipitation Radar (DPR) that covers the globe between 65°S and 65°N. The main parameters and the detection method of the DPR differ from those of ground-based radars; thus, the DPR's reliability and capability need to be investigated and evaluated against ground-based radar. This study compares precipitation derived from ground-based radar measurements with that derived from the DPR's observations. The ground-based radar is an S-band dual-polarization radar deployed near an airport in the west of Zhuhai city. The ground-based quantitative precipitation estimates are produced at a high resolution of 1 km × 1 km × 6 min. This radar covers the whole Pearl River Delta of China, including Hong Kong and Macao. To quantify the DPR's precipitation estimation capability relative to the S-band radar, the statistical metrics used in this study are as follows: the difference (Dif) between the DPR and the S-band radar observations, the root-mean-squared error (RMSE) and the correlation coefficient (CC). Additionally, the Probability of Detection (POD) and False Alarm Ratio (FAR) are used to further evaluate the rain-detection capability of the DPR. The comparisons performed between the DPR and the S-band radar are expected to provide a useful reference not only for algorithm developers but also for end users in hydrology, ecology, weather forecast services and so on.
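The verification statistics named above are standard and compact to state in code. A sketch, assuming matched DPR/S-band rain-rate pairs on a common grid and an assumed 0.5 mm/h rain/no-rain threshold:

```python
import numpy as np

def verification_metrics(dpr, sband, rain_thresh=0.5):
    """Compare DPR estimates against the S-band radar reference.
    rain_thresh (mm/h) defines a rain/no-rain event; 0.5 is an assumed value."""
    dif = np.mean(dpr - sband)                      # mean difference (bias)
    rmse = np.sqrt(np.mean((dpr - sband) ** 2))     # root-mean-squared error
    cc = np.corrcoef(dpr, sband)[0, 1]              # correlation coefficient

    hits = np.sum((dpr >= rain_thresh) & (sband >= rain_thresh))
    misses = np.sum((dpr < rain_thresh) & (sband >= rain_thresh))
    false_alarms = np.sum((dpr >= rain_thresh) & (sband < rain_thresh))
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    return dif, rmse, cc, pod, far

# Illustrative matched rain-rate pairs (mm/h) on the common grid.
sband = np.array([0.0, 1.2, 3.5, 8.0, 0.3, 15.0])
dpr = np.array([0.2, 0.9, 4.1, 6.5, 0.0, 12.8])
print(verification_metrics(dpr, sband))
```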
Chieng, Norman; Mizuno, Masayasu; Pikal, Michael
2013-10-01
The purposes of this study are to characterize the relaxation dynamics in complex freeze-dried formulations and to investigate the quantitative relationship between the structural relaxation time as measured by thermal activity monitor (TAM) and that estimated from the width of the glass transition region (ΔT(g)). The latter method has advantages over TAM because it is simple and quick. As part of this objective, we evaluate the accuracy of estimating relaxation time data at higher temperatures (50 °C and 60 °C) from TAM data at lower temperature (40 °C) and glass transition region width (ΔT(g)) data obtained by differential scanning calorimetry. Formulations studied here were hydroxyethyl starch (HES)-disaccharide, HES-polyol, and HES-disaccharide-polyol at various ratios. We also re-examine, using TAM-derived relaxation times, the correlation between protein stability (human growth hormone, hGH) and relaxation times explored in a previous report, which employed relaxation time data obtained from ΔT(g). Results show that most of the freeze-dried formulations exist in a single amorphous phase, and structural relaxation times were successfully measured for these systems. We find a reasonably good correlation between TAM-measured relaxation times and corresponding data obtained from estimates based on ΔT(g), but the agreement is only qualitative. The comparison plot showed that TAM data are directly proportional to the 1/3 power of ΔT(g) data, after correcting for an offset. Nevertheless, the correlation between hGH stability and relaxation time remained qualitatively the same as found using ΔT(g)-derived relaxation data, and it was found that the modest extrapolation of TAM data to higher temperatures using the ΔT(g) method and TAM data at 40 °C resulted in quantitative agreement with TAM measurements made at 50 °C and 60 °C, provided the TAM experiment temperature is well below the Tg of the sample. Copyright © 2013 Elsevier B.V. All rights reserved.
Estimation of diastolic intraventricular pressure gradients by Doppler M-mode echocardiography
NASA Technical Reports Server (NTRS)
Greenberg, N. L.; Vandervoort, P. M.; Firstenberg, M. S.; Garcia, M. J.; Thomas, J. D.
2001-01-01
Previous studies have shown that small intraventricular pressure gradients (IVPG) are important for efficient filling of the left ventricle (LV) and as a sensitive marker for ischemia. Unfortunately, there has previously been no way of measuring these noninvasively, severely limiting their research and clinical utility. Color Doppler M-mode (CMM) echocardiography provides a spatiotemporal velocity distribution along the inflow tract throughout diastole, which we hypothesized would allow direct estimation of IVPG by using the Euler equation. Digital CMM images, obtained simultaneously with intracardiac pressure waveforms in six dogs, were processed by numerical differentiation for the Euler equation, then integrated to estimate IVPG and the total (left atrial to left ventricular apex) pressure drop. CMM-derived estimates agreed well with invasive measurements (IVPG: y = 0.87x + 0.22, r = 0.96, P < 0.001, standard error of the estimate = 0.35 mmHg). Quantitative processing of CMM data allows accurate estimation of IVPG and tracking of changes induced by beta-adrenergic stimulation. This novel approach provides unique information on LV filling dynamics in an entirely noninvasive way that has previously not been available for assessment of diastolic filling and function.
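The underlying computation is the 1-D Euler equation, ∂p/∂s = -ρ(∂v/∂t + v ∂v/∂s), integrated along the inflow scanline. A sketch assuming a CMM velocity map sampled on a regular space-time grid; the synthetic inflow field below is illustrative, not the study's data:

```python
import numpy as np

RHO = 1060.0  # blood density, kg/m^3

def ivpg(v, ds, dt):
    """Estimate the intraventricular pressure drop from a CMM velocity map.
    v[i, j] is velocity (m/s) at spatial sample i (atrium -> apex) and time j.
    Returns the pressure difference along the scanline over time, in mmHg."""
    dv_dt = np.gradient(v, dt, axis=1)     # local acceleration term
    dv_ds = np.gradient(v, ds, axis=0)     # convective term
    dp_ds = -RHO * (dv_dt + v * dv_ds)     # 1-D Euler equation (Pa/m)
    dp = np.trapz(dp_ds, dx=ds, axis=0)    # integrate atrium -> apex (Pa)
    return dp / 133.322                    # Pa -> mmHg

# Illustrative velocity map: 64 spatial samples over 6 cm, 100 frames over 0.5 s.
s = np.linspace(0, 0.06, 64)[:, None]
t = np.linspace(0, 0.5, 100)[None, :]
v = 0.8 * np.exp(-((t - 0.15) / 0.05) ** 2) * np.sin(np.pi * s / 0.06)
print(ivpg(v, ds=0.06 / 63, dt=0.5 / 99)[:5])
```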
NASA Astrophysics Data System (ADS)
Izett, Jonathan G.; Fennel, Katja
2018-02-01
Rivers deliver large amounts of terrestrially derived materials (such as nutrients, sediments, and pollutants) to the coastal ocean, but a global quantification of the fate of this delivery is lacking. Nutrients can accumulate on shelves, potentially driving high levels of primary production with negative consequences like hypoxia, or be exported across the shelf to the open ocean where impacts are minimized. Global biogeochemical models cannot resolve the relatively small-scale processes governing river plume dynamics and cross-shelf export; instead, river inputs are often parameterized assuming an "all or nothing" approach. Recently, Sharples et al. (2017; https://doi.org/10.1002/2016GB005483) proposed the SP number—a dimensionless number relating the estimated size of a plume as a function of latitude to the local shelf width—as a simple estimator of cross-shelf export. We extend their work, which is solely based on theoretical and empirical scaling arguments, and address some of its limitations using a numerical model of an idealized river plume. In a large number of simulations, we test whether the SP number can accurately describe export in unforced cases and with tidal and wind forcings imposed. Our numerical experiments confirm that the SP number can be used to estimate export and enable refinement of the quantitative relationships proposed by Sharples et al. We show that, in general, external forcing has only a weak influence compared to latitude and derive empirical relationships from the results of the numerical experiments that can be used to estimate riverine freshwater export to the open ocean.
Integrating animal movement with habitat suitability for estimating dynamic landscape connectivity
van Toor, Mariëlle L.; Kranstauber, Bart; Newman, Scott H.; Prosser, Diann J.; Takekawa, John Y.; Technitis, Georgios; Weibel, Robert; Wikelski, Martin; Safi, Kamran
2018-01-01
Context: High-resolution animal movement data are becoming increasingly available, yet having a multitude of empirical trajectories alone does not allow us to easily predict animal movement. To answer ecological and evolutionary questions at a population level, quantitative estimates of a species' potential to link patches or populations are of importance.
Objectives: We introduce an approach that combines movement-informed simulated trajectories with an environment-informed estimate of the trajectories' plausibility to derive connectivity. Using the example of bar-headed geese we estimated migratory connectivity at a landscape level throughout the annual cycle in their native range.
Methods: We used tracking data of bar-headed geese to develop a multi-state movement model and to estimate temporally explicit habitat suitability within the species' range. We simulated migratory movements between range fragments, and calculated a measure we called route viability. The results are compared to expectations derived from published literature.
Results: Simulated migrations matched empirical trajectories in key characteristics such as stopover duration. The viability of the simulated trajectories was similar to that of the empirical trajectories. We found that, overall, the migratory connectivity was higher within the breeding than in wintering areas, corroborating previous findings for this species.
Conclusions: We show how empirical tracking data and environmental information can be fused for meaningful predictions of animal movements throughout the year and even outside the spatial range of the available data. Beyond predicting migratory connectivity, our framework will prove useful for modelling ecological processes facilitated by animal movement, such as seed dispersal or disease ecology.
Georgi, Laura; Johnson-Cicalese, Jennifer; Honig, Josh; Das, Sushma Parankush; Rajah, Veeran D; Bhattacharya, Debashish; Bassil, Nahla; Rowland, Lisa J; Polashock, James; Vorsa, Nicholi
2013-03-01
The first genetic map of cranberry (Vaccinium macrocarpon) has been constructed, comprising 14 linkage groups totaling 879.9 cM with an estimated coverage of 82.2 %. This map, based on four mapping populations segregating for field fruit-rot resistance, contains 136 distinct loci. Mapped markers include blueberry-derived simple sequence repeat (SSR) and cranberry-derived sequence-characterized amplified region markers previously used for fingerprinting cranberry cultivars. In addition, SSR markers were developed near cranberry sequences resembling genes involved in flavonoid biosynthesis or defense against necrotrophic pathogens, or conserved orthologous set (COS) sequences. The cranberry SSRs were developed from next-generation cranberry genomic sequence assemblies; thus, the positions of these SSRs on the genomic map provide information about the genomic location of the sequence scaffold from which they were derived. The use of SSR markers near COS and other functional sequences, plus 33 SSR markers from blueberry, facilitates comparisons of this map with maps of other plant species. Regions of the cranberry map were identified that showed conservation of synteny with Vitis vinifera and Arabidopsis thaliana. Positioned on this map are quantitative trait loci (QTL) for field fruit-rot resistance (FFRR), fruit weight, titratable acidity, and sound fruit yield (SFY). The SFY QTL is adjacent to one of the fruit weight QTL and may reflect pleiotropy. Two of the FFRR QTL are in regions of conserved synteny with grape and span defense gene markers, and the third FFRR QTL spans a flavonoid biosynthetic gene.
Bayesian uncertainty quantification in linear models for diffusion MRI.
Sjölund, Jens; Eklund, Anders; Özarslan, Evren; Herberthson, Magnus; Bånkestad, Maria; Knutsson, Hans
2018-03-29
Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification. Copyright © 2018 Elsevier Inc. All rights reserved.
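The closed-form posterior for such linear-in-coefficients models is the standard Gaussian conjugate result. A sketch assuming a zero-mean Gaussian prior and a known noise variance (the paper treats hyperparameters more carefully; everything below is illustrative):

```python
import numpy as np

def bayesian_linear_posterior(X, y, alpha=1.0, sigma2=1.0):
    """Closed-form posterior for y = X w + noise, with a Gaussian prior
    w ~ N(0, alpha^-1 I) and noise variance sigma2 (both assumed known here)."""
    n_feat = X.shape[1]
    precision = alpha * np.eye(n_feat) + (X.T @ X) / sigma2
    cov = np.linalg.inv(precision)            # posterior covariance
    mean = cov @ X.T @ y / sigma2             # posterior mean
    return mean, cov

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 6))                  # design matrix (e.g. basis functions)
w_true = rng.normal(size=6)
y = X @ w_true + 0.1 * rng.normal(size=30)

mean, cov = bayesian_linear_posterior(X, y, alpha=1e-2, sigma2=0.01)

# Any affine functional f = a @ w of the coefficients then has a Gaussian
# posterior with easily computed mean and standard deviation:
a = rng.normal(size=6)
print("feature mean:", a @ mean, " feature std:", np.sqrt(a @ cov @ a))
```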
Prakash, Jai; Gabdulina, Gulzhan; Trofimov, Svetlana; Livshits, Gregory
2017-09-01
One of the potential molecular biomarkers of osteoarthritis (OA) is hyaluronic acid (HA). HA levels may be related to the severity and progression of OA. However, little is known about the contribution of major risk factors for osteoarthritis, e.g., obesity-related phenotypes and genetics, to HA variation. The aim was to clarify the quantitative effect of these factors on HA. An ethnically homogeneous sample of 911 apparently healthy European-derived individuals, assessed for radiographic hand osteoarthritis (RHOA), HA, leptin, adiponectin, and several anthropometrical measures of obesity-related phenotypes, was studied. Model-based quantitative genetic analysis was used to reveal genetic and shared environmental factors affecting the variation of the study's phenotypes. The HA levels significantly correlated with age, RHOA, adiponectin, obesity-related phenotypes, and the waist-to-hip ratio. The putative genetic effects contributed significantly to the variation of HA (66.2 ± 9.3%) and were also significant factors in the variations of all the other studied phenotypes, with the heritability estimates ranging between 12.2 ± 4.4% (WHR) and 45.7 ± 2.2% (joint space narrowing). This is the first study to report heritability estimates of HA variation and its correlation with obesity-related phenotypes, ADP and RHOA. However, the nature of the genetic effects on HA and its correlation with the other study phenotypes requires further clarification.
NASA Astrophysics Data System (ADS)
Rawling, Geoffrey C.; Newton, B. Talon
2016-06-01
The Sacramento Mountains and the adjacent Roswell Artesian Basin, in south-central New Mexico (USA), comprise a regional hydrologic system, wherein recharge in the mountains ultimately supplies water to the confined basin aquifer. Geologic, hydrologic, geochemical, and climatologic data were used to delineate the area of recharge in the southern Sacramento Mountains. The water-table fluctuation and chloride mass-balance methods were used to quantify recharge over a range of spatial and temporal scales. Extrapolation of the quantitative recharge estimates to the entire Sacramento Mountains region allowed comparison with previous recharge estimates for the northern Sacramento Mountains and the Roswell Artesian Basin. Recharge in the Sacramento Mountains is estimated to range from 159.86 × 10^6 to 209.42 × 10^6 m^3/year. Both the location of recharge and the range in estimates are consistent with previous work that suggests that ~75% of the recharge to the confined aquifer in the Roswell Artesian Basin has moved downgradient through the Yeso Formation from distal recharge areas in the Sacramento Mountains. A smaller recharge component is derived from infiltration of streamflow beneath the major drainages that cross the Pecos Slope, but in the southern Sacramento Mountains much of this water is ultimately derived from spring discharge. Direct recharge across the Pecos Slope between the mountains and the confined basin aquifer is much smaller than either of the other two components.
Fry, John S; Lee, Peter N; Forey, Barbara A; Coombs, Katharine J
2015-06-01
One possible contributor to the reported rise in the ratio of adenocarcinoma to squamous cell carcinoma of the lung may be differences in the pattern of decline in risk following quitting for the two lung cancer types. Earlier, using data from 85 studies comparing overall lung cancer risks in current smokers, quitters (by time quit) and never smokers, we fitted the negative exponential model, deriving an estimate of 9.93 years for the half-life - the time when the excess risk for quitters compared to never smokers becomes half that for continuing smokers. Here we applied the same techniques to data from 16 studies providing RRs specific for lung cancer type. From the 13 studies where the half-life was estimable for each type, we derived estimates of 11.68 years (95% CI 10.22-13.34) for squamous cell carcinoma and 14.45 years (11.92-17.52) for adenocarcinoma. The ratio of the half-lives was estimated as 1.32 (95% CI 1.20-1.46, p<0.001). The slower decline in quitters for adenocarcinoma, evident in subgroups by sex, age and other factors, may be one of the factors contributing to the reported rise in the ratio of adenocarcinoma to squamous cell carcinoma. Others include changes in the diagnosis and classification of lung cancer. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
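The negative exponential model and the half-life fit can be reproduced in a few lines. A sketch with scipy's curve_fit on illustrative (made-up) excess-risk fractions by time quit:

```python
import numpy as np
from scipy.optimize import curve_fit

def excess_rr(t, half_life):
    """Negative exponential model: the excess relative risk in quitters, as a
    fraction of that in continuing smokers, after t years quit."""
    return np.exp(-np.log(2.0) * t / half_life)

# Illustrative data: mid-points of time-quit categories (years) and the
# fraction of the continuing-smoker excess risk remaining (made-up values).
t_quit = np.array([2.5, 7.5, 15.0, 25.0, 35.0])
frac_sq = np.array([0.85, 0.62, 0.41, 0.22, 0.12])   # squamous-like decline
frac_ad = np.array([0.90, 0.70, 0.50, 0.30, 0.18])   # adenocarcinoma-like

for name, frac in [("squamous", frac_sq), ("adeno", frac_ad)]:
    (h,), _ = curve_fit(excess_rr, t_quit, frac, p0=[10.0])
    print(f"{name}: half-life = {h:.1f} years")
```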
NASA Technical Reports Server (NTRS)
Zhou, Yuyu; Weng, Qihao; Gurney, Kevin R.; Shuai, Yanmin; Hu, Xuefei
2012-01-01
This paper examined the relationship between remotely sensed anthropogenic heat discharge and energy use from residential and commercial buildings across multiple scales in the city of Indianapolis, Indiana, USA. The anthropogenic heat discharge was estimated with a remote sensing-based surface energy balance model, which was parameterized using land cover, land surface temperature, albedo, and meteorological data. The building energy use was estimated using a GIS-based building energy simulation model in conjunction with Department of Energy/Energy Information Administration survey data, the Assessor's parcel data, GIS floor areas data, and remote sensing-derived building height data. The spatial patterns of anthropogenic heat discharge and energy use from residential and commercial buildings were analyzed and compared. Quantitative relationships were evaluated across multiple scales from pixel aggregation to census block. The results indicate that anthropogenic heat discharge is consistent with building energy use in terms of the spatial pattern, and that building energy use accounts for a significant fraction of anthropogenic heat discharge. The research also implies that the relationship between anthropogenic heat discharge and building energy use is scale-dependent. The simultaneous estimation of anthropogenic heat discharge and building energy use via two independent methods improves the understanding of the surface energy balance in an urban landscape. The anthropogenic heat discharge derived from remote sensing and meteorological data may be able to serve as a spatial distribution proxy for spatially-resolved building energy use, and even for fossil-fuel CO2 emissions if additional factors are considered.
Multiple sensitive estimation and optimal sample size allocation in the item sum technique.
Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz
2018-01-01
For surveys of sensitive issues in the life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
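The flavor of the allocation result can be seen in the classical Neyman allocation, which assigns stratum sample sizes proportionally to N_h × S_h; the paper derives IST-specific analogues under generic designs, so this is a standard textbook sketch with assumed stratum sizes and standard deviations, not the paper's formulas:

```python
import numpy as np

def neyman_allocation(n_total, stratum_sizes, stratum_sds):
    """Classical Neyman allocation: sample stratum h proportionally to
    N_h * S_h. Rounding means the result may differ from n_total by a unit."""
    weights = np.asarray(stratum_sizes) * np.asarray(stratum_sds)
    return np.rint(n_total * weights / weights.sum()).astype(int)

# Illustrative: 500 interviews allocated over three strata.
print(neyman_allocation(500, [3000, 5000, 2000], [1.2, 0.8, 2.0]))
```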
Quantification of hookworm ova from wastewater matrices using quantitative PCR.
Gyawali, Pradip; Ahmed, Warish; Sidhu, Jatinder P; Jagals, Paul; Toze, Simon
2017-07-01
A quantitative PCR (qPCR) assay was used to quantify Ancylostoma caninum ova in wastewater and sludge samples. We estimated the average gene copy number for a single ovum using a mixed population of ova. The average gene copy numbers derived from the mixed population were used to estimate numbers of hookworm ova in A. caninum seeded and unseeded wastewater and sludge samples. The newly developed qPCR assay estimated an average of 3.7×10^3 gene copies per ovum, which was then validated by seeding known numbers of hookworm ova into treated wastewater. The qPCR estimated an average of (1.1±0.1), (8.6±2.9) and (67.3±10.4) ova for treated wastewater that was seeded with (1±0), (10±2) and (100±21) ova, respectively. The further application of the qPCR assay for the quantification of A. caninum ova was determined by seeding known numbers of ova into the wastewater matrices. The qPCR results indicated that 50%, 90% and 67% of treated wastewater (1 L), raw wastewater (1 L) and sludge (~4 g) samples had variable numbers of A. caninum gene copies. After conversion of the qPCR-estimated gene copy numbers to ova, the treated wastewater, raw wastewater, and sludge samples had an average of 0.02, 1.24 and 67 ova, respectively. The results of this study indicated that qPCR can be used for the quantification of hookworm ova from wastewater and sludge samples; however, caution is advised in interpreting qPCR-generated data for health risk assessment. Copyright © 2017. Published by Elsevier B.V.
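The copy-to-ovum conversion itself is a one-line scaling by the calibrated 3.7×10^3 gene copies per ovum. A trivial sketch (the sample value is illustrative):

```python
COPIES_PER_OVUM = 3.7e3   # study's mixed-population estimate of gene copies per ovum

def ova_from_copies(gene_copies, copies_per_ovum=COPIES_PER_OVUM):
    """Convert a qPCR gene-copy estimate for a sample into an ova count."""
    return gene_copies / copies_per_ovum

# Example: a sample quantified at ~4.0e3 gene copies (illustrative value).
print(f"estimated ova: {ova_from_copies(4.0e3):.1f}")
```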
Din, Mairaj; Zheng, Wen; Rashid, Muhammad; Wang, Shanqin; Shi, Zhihua
2017-01-01
Hyperspectral reflectance-derived vegetation indices (VIs) are used for non-destructive leaf area index (LAI) monitoring for precise and efficient N nutrition management. This study tested the hypothesis that various hyperspectral VIs have potential for estimating LAI at different growth stages of rice under varying N rates. Hyperspectral reflectance and crop canopy LAI measurements were carried out over 2 years (2015 and 2016) in Meichuan, Hubei, China. Different N fertilization rates, 0, 45, 82, 127, 165, 210, 247, and 292 kg ha^-1, were applied to generate various scales of VI and LAI values. Regression models were used to perform quantitative analyses between spectral VIs and LAI measured at different phenological stages. In addition, the coefficient of determination and RMSE were employed to evaluate these models. Among the nine VIs, the ratio vegetation index, normalized difference vegetation index (NDVI), modified soil-adjusted vegetation index (MSAVI) and modified triangular vegetation index (MTVI2) exhibited strong and significant relationships with LAI estimation at different phenological stages. The enhanced vegetation index performed moderately. The green normalized vegetation index and blue normalized vegetation index showed potential for crop LAI estimation at early phenological stages, whereas the soil-adjusted vegetation index and optimized soil-adjusted vegetation index were more related to the soil optical properties and were predicted to be the least accurate for LAI estimation. The noise equivalent was used to assess the sensitivity of the VIs, with MSAVI, MTVI2, and NDVI the most sensitive for LAI estimation across phenological stages. The results indicate that the crop phenological stage has a significant influence on the potential of hyperspectral-derived VIs for LAI estimation under different N management practices. PMID:28588596
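Two of the better-performing indices have simple closed forms. A sketch computing NDVI and the standard MSAVI2 expression from band reflectances; the reflectance values are illustrative, not the study's measurements:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def msavi(nir, red):
    """Modified soil-adjusted vegetation index (standard MSAVI2 closed form)."""
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2

# Illustrative canopy reflectances at three N rates (low -> high).
red = np.array([0.08, 0.05, 0.03])
nir = np.array([0.35, 0.45, 0.55])
print("NDVI: ", np.round(ndvi(nir, red), 3))
print("MSAVI:", np.round(msavi(nir, red), 3))
```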
Aoe, Jo; Watabe, Tadashi; Shimosegawa, Eku; Kato, Hiroki; Kanai, Yasukazu; Naka, Sadahiro; Matsunaga, Keiko; Isohashi, Kayako; Tatsumi, Mitsuaki; Hatazawa, Jun
2018-06-22
Resting-state functional MRI (rs-fMRI) has revealed the existence of a default-mode network (DMN) based on spontaneous oscillations of the blood oxygenation level-dependent (BOLD) signal. The BOLD signal reflects the deoxyhemoglobin concentration, which depends on the relationship between the regional cerebral blood flow (CBF) and the cerebral metabolic rate of oxygen (CMRO2). However, these two factors cannot be separated in BOLD rs-fMRI. In this study, we attempted to estimate the functional correlations in the DMN by means of quantitative 15O-labeled gases and water PET, and to compare the contributions of CBF and CMRO2 to the DMN. Nine healthy volunteers (5 men and 4 women; mean age, 47.0 ± 1.2 years) were studied by means of 15O-O2, 15O-CO gases and 15O-water PET. Quantitative CBF and CMRO2 images were generated by an autoradiographic method and transformed into the MNI standardized brain template. Regions of interest were placed on the normalized PET images according to a previous rs-fMRI study. For the functional correlation analysis, the intersubject Pearson's correlation coefficients (r) were calculated for all pairs of brain regions, and correlation matrices were obtained for CBF and CMRO2, respectively. We defined r > 0.7 as a significant positive correlation and compared the correlation matrices of CBF and CMRO2. Significant positive correlations (r > 0.7) were observed in 24 pairs of brain regions for CBF and 22 pairs of brain regions for CMRO2. Among them, 12 overlapping networks were observed between CBF and CMRO2. Correlation analysis of CBF led to the detection of more brain networks than that of CMRO2, indicating that CBF can capture the state of spontaneous activity with a higher sensitivity. We estimated the functional correlations in the DMN by means of quantitative PET using 15O-labeled gases and water. The correlation matrix derived from CBF revealed a larger number of brain networks than that derived from CMRO2, indicating that the contribution to the functional correlation in the DMN comes more from blood flow than from oxygen consumption.
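The functional-correlation step reduces to thresholding an inter-subject correlation matrix. A sketch, assuming a subjects-by-regions array of quantitative CBF (or CMRO2) values; the data below are random stand-ins:

```python
import numpy as np

def network_pairs(roi_values, threshold=0.7):
    """Inter-subject correlation networks from quantitative PET values.
    roi_values[i, j] is the CBF (or CMRO2) of subject i in region j; a pair
    of regions is linked when the Pearson r across subjects exceeds threshold."""
    r = np.corrcoef(roi_values, rowvar=False)   # region-by-region correlations
    iu = np.triu_indices_from(r, k=1)           # each unordered pair once
    return [(a, b, r[a, b]) for a, b in zip(*iu) if r[a, b] > threshold]

# Illustrative data: 9 subjects x 12 DMN regions (random stand-in values).
rng = np.random.default_rng(3)
cbf = rng.normal(50, 8, size=(9, 12))
print(len(network_pairs(cbf)), "region pairs with r > 0.7")
```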
MR-assisted PET Motion Correction for Neurological Studies in an Integrated MR-PET Scanner
Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B.; Michel, Christian J.; El Fakhri, Georges; Schmand, Matthias; Sorensen, A. Gregory
2011-01-01
Head motion is difficult to avoid in long PET studies, degrading the image quality and offsetting the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MR data can be used for motion tracking. In this work, a novel data processing and rigid-body motion correction (MC) algorithm for the MR-compatible BrainPET prototype scanner is described, and proof-of-principle phantom and human studies are presented.
Methods: To account for motion, the PET prompts and randoms coincidences as well as the sensitivity data are processed in the line of response (LOR) space according to the MR-derived motion estimates. After sinogram space rebinning, the corrected data are summed and the motion-corrected PET volume is reconstructed from these sinograms and the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed and motion estimates were obtained using two high temporal resolution MR-based motion tracking techniques.
Results: After accounting for the physical mismatch between the two scanners, perfectly co-registered MR and PET volumes are reproducibly obtained. The MR output gates inserted into the PET list-mode allow the temporal correlation of the two data sets within 0.2 s. The Hoffman phantom volume reconstructed by processing the PET data in the LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in the image space, demonstrating the quantitative accuracy of the novel MC algorithm. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 seconds and 20 ms, respectively. Substantially improved PET images with excellent delineation of specific brain structures were obtained after applying the MC using these MR-based estimates.
Conclusion: A novel MR-based MC algorithm was developed for the integrated MR-PET scanner. High temporal resolution MR-derived motion estimates (obtained while simultaneously acquiring anatomical or functional MR data) can be used for PET MC. An MR-based MC has the potential to improve PET as a quantitative method, increasing its reliability and reproducibility, which could benefit a large number of neurological applications. PMID:21189415
The Mapping Model: A Cognitive Theory of Quantitative Estimation
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2008-01-01
How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…
Time-of-flight PET time calibration using data consistency
NASA Astrophysics Data System (ADS)
Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan
2018-05-01
This paper presents new data-driven methods for the time-of-flight (TOF) calibration of positron emission tomography (PET) scanners. These methods are derived from the consistency condition for TOF PET; they can be applied to data measured with an arbitrary tracer distribution and are numerically efficient because they do not require a preliminary image reconstruction from the non-TOF data. Two-dimensional simulations are presented for one of the methods, which only involves the first two moments of the data with respect to the TOF variable. The numerical results show that this method estimates the detector timing offsets with errors that are larger than those obtained via an initial non-TOF reconstruction, but remain smaller than the TOF resolution and thereby have a limited impact on the quantitative accuracy of the activity image estimated with standard maximum likelihood reconstruction algorithms.
Zecchin, Chiara; Facchinetti, Andrea; Sparacino, Giovanni; Dalla Man, Chiara; Manohar, Chinmay; Levine, James A; Basu, Ananda; Kudva, Yogish C; Cobelli, Claudio
2013-10-01
In type 1 diabetes mellitus (T1DM), physical activity (PA) lowers the risk of cardiovascular complications but hinders the achievement of optimal glycemic control, transiently boosting insulin action and increasing hypoglycemia risk. Quantitative investigation of the relationships between PA-related signals and glucose dynamics, tracked using, for example, continuous glucose monitoring (CGM) sensors, has barely been explored. In the clinic, 20 control and 19 T1DM subjects were studied for 4 consecutive days. They underwent low-intensity PA sessions daily. PA was tracked by the PA monitoring system (PAMS), a system comprising accelerometers and inclinometers. Variations in glucose dynamics were tracked by estimating first- and second-order time derivatives of glucose concentration from CGM via Bayesian smoothing. Short-time effects of PA on glucose dynamics were quantified through the partial correlation function in the interval (0, 60 min) after starting PA. Correlation of PA with the glucose time derivatives is evident. In T1DM, the negative correlation with the first-order glucose time derivative is maximal (in absolute value) after 15 min of PA, whereas the positive correlation is maximal after 40-45 min. The negative correlation between the second-order time derivative and PA is maximal after 5 min, whereas the positive correlation is maximal after 35-40 min. Control subjects provided similar results, but with the positive and negative correlation peaks occurring 5 min earlier. Quantitative information on the correlation between mild PA and short-term glucose dynamics was obtained. This represents a preliminary important step toward the incorporation of PA information in more realistic physiological models of the glucose-insulin system usable in T1DM simulators, in the development of closed-loop artificial pancreas control algorithms, and in CGM-based prediction algorithms for the generation of hypoglycemic alerts.
Li, Dong-tao; Ling, Chang-quan; Zhu, De-zeng
2007-07-01
To establish a quantitative model for evaluating the degree of the TCM basic syndromes often encountered in patients with primary liver cancer (PLC). Medical literature concerning the clinical investigation and TCM syndromes of PLC was collected and analyzed using an expert symposium method, and 100-millimeter scaling was applied, combined with scoring of symptom degree, to establish a quantitative criterion for grading the degree of symptoms and signs in patients with PLC. Two models, i.e. the additive model and the additive-multiplicative model, were established by using the comprehensive analytic hierarchy process (AHP) as the mathematical tool to estimate the weight of the criterion for evaluating basic syndromes in various layers by specialists. The two models were then verified in clinical practice and the outcomes were compared with those fuzzily evaluated by specialists. Verification on 459 case-times of PLC showed that the coincidence rate between the outcomes derived from the specialists and those from the additive model was 84.53%, and with those from the additive-multiplicative model was 62.75%; the difference between the two was statistically significant (P<0.01). It can be concluded that the additive model is the principal model suitable for quantitative evaluation of the degree of TCM basic syndromes in patients with PLC.
Modeling the Residual Strength of a Fibrous Composite Using the Residual Daniels Function
NASA Astrophysics Data System (ADS)
Paramonov, Yu.; Cimanis, V.; Varickis, S.; Kleinhofs, M.
2016-09-01
The concept of a residual Daniels function (RDF) is introduced. Together with the concept of a Daniels sequence, the RDF is used for estimating the residual (after some preliminary fatigue loading) static strength of a unidirectional fibrous composite (UFC) and its S-N curve on the basis of test data. Usually, the residual strength is analyzed on the basis of a known S-N curve. In our work, an inverse approach is used: the S-N curve is derived from an analysis of the residual strength. This approach gives a good qualitative description of the process of decreasing residual strength and explains the existence of the fatigue limit. The estimates of parameters of the corresponding regression model can be interpreted as estimates of parameters of the local strength of components of the UFC. In order to approach the quantitative experimental estimates of the fatigue life, some ideas based on the mathematics of the semi-Markovian process are employed. Satisfactory results in processing experimental data on the fatigue life and residual strength of glass/epoxy laminates are obtained.
Kenny, Ray
2018-01-16
The upper carbonate member of the Kaibab Formation in northern Arizona (USA) was subaerially exposed during the end Permian and contains fractured and zoned chert rubble lag deposits typical of karst topography. The karst chert rubble has secondary (authigenic) silica precipitates suitable for estimating continental weathering temperatures during the end Permian karst event. New oxygen and hydrogen isotope ratios of secondary silica precipitates in the residual rubble breccia: (1) yield continental palaeotemperature estimates between 17 and 22 °C; and, (2) indicate that meteoric water played a role in the crystallization history of the secondary silica. The continental palaeotemperatures presented herein are broadly consistent with a global mean temperature estimate of 18.2 °C for the latest Permian derived from published climate system models. Few data sets are presently available that allow even approximate quantitative estimates of regional continental palaeotemperatures. These data provide a basis for better understanding the end Permian palaeoclimate at a seasonally-tropical latitude along the western shoreline of Pangaea.
NASA Astrophysics Data System (ADS)
Adib, Artur B.
In the last two decades or so, a collection of results in nonequilibrium statistical mechanics that departs from the traditional near-equilibrium framework introduced by Lars Onsager in 1931 has been derived, yielding new fundamental insights into far-from-equilibrium processes in general. Apart from offering a more quantitative statement of the second law of thermodynamics, some of these results---typified by the so-called "Jarzynski equality"---have also offered novel means of estimating equilibrium quantities from nonequilibrium processes, such as free energy differences from single-molecule "pulling" experiments. This thesis contributes to such efforts by offering three novel results in nonequilibrium statistical mechanics: (a) The entropic analog of the Jarzynski equality; (b) A methodology for estimating free energies from "clamp-and-release" nonequilibrium processes; and (c) A directly measurable symmetry relation in chemical kinetics similar to (but more general than) chemical detailed balance. These results share in common the feature of remaining valid outside Onsager's near-equilibrium regime, and bear direct applicability in protein folding kinetics as well as in single-molecule free energy estimation.
Further developments in orbit ephemeris derived neutral density
NASA Astrophysics Data System (ADS)
Locke, Travis
There are a number of non-conservative forces acting on a satellite in low Earth orbit. The one which is the most dominant and also contains the most uncertainty is atmospheric drag. Atmospheric drag is directly proportional to atmospheric density, and the existing atmospheric density models do not accurately model the variations in atmospheric density. In this research, precision orbit ephemerides (POE) are used as input measurements in an optimal orbit determination scheme in order to estimate corrections to existing atmospheric density models. These estimated corrections improve the estimates of the drag experienced by a satellite and therefore provide an improvement in orbit determination and prediction as well as a better overall understanding of the Earth's upper atmosphere. The optimal orbit determination scheme used in this work includes using POE data as measurements in a sequential filter/smoother process using the Orbit Determination Tool Kit (ODTK) software. The POE derived density estimates are validated by comparing them with the densities derived from accelerometers on board the Challenging Minisatellite Payload (CHAMP) and the Gravity Recovery and Climate Experiment (GRACE). These accelerometer derived density data sets for both CHAMP and GRACE are available from Sean Bruinsma of the Centre National d'Etudes Spatiales (CNES). The trend in the variation of atmospheric density is compared quantitatively by calculating the cross correlation (CC) between the POE derived density values and the accelerometer derived density values, while the magnitudes of the two data sets are compared by calculating the root mean square (RMS) values between the two. There are certain high frequency density variations that are observed in the accelerometer derived density data but not in the POE derived density data or any of the baseline density models. These high frequency density variations are typically small in magnitude compared to the overall day-night variation. However, during certain time periods, such as when the satellite is near the terminator, the variations are on the same order of magnitude as the diurnal variations. These variations can also be especially prevalent during geomagnetic storms and near the polar cusps. One of the goals of this work is to see what effect these unmodeled high frequency variations have on orbit propagation. In order to see this effect, the orbits of CHAMP and GRACE are propagated during certain time periods using different sources of density data as input measurements (accelerometer, POE, HASDM, and Jacchia 1971). The resulting orbit propagations are all compared to the propagation using the accelerometer derived density data, which is used as truth. The RMS and the maximum difference between the different propagations are analyzed in order to see what effect the unmodeled density variations have on orbit propagation. These results are also binned by solar and geomagnetic activity level. The primary input into the orbit determination scheme used to produce the POE derived density estimates is a precision orbit ephemeris file. This file contains position and velocity information for the satellite based on GPS and SLR measurements. The values contained in these files are estimated values and therefore contain some level of error, typically thought to be around the 5-10 cm level.
The other primary focus of this work is to evaluate the effect of adding different levels of noise (0.1 m, 0.5 m, 1 m, 10 m, and 100 m) to this raw ephemeris data file before it is input into the orbit determination scheme. The resulting POE derived density estimates for each level of noise are then compared with the accelerometer derived densities by computing the CC and RMS values between the data sets. These results are also binned by solar and geomagnetic activity level.
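The two comparison statistics used throughout this work are straightforward to compute; a small sketch, assuming both density series are already interpolated to a common time grid:

```python
import numpy as np

def compare_density(poe_rho, acc_rho):
    """Zero-lag cross correlation (CC) and RMS difference between a
    POE-derived density series and the accelerometer-derived 'truth'.
    Assumes both series are aligned on a common time grid."""
    poe = np.asarray(poe_rho, dtype=float)
    acc = np.asarray(acc_rho, dtype=float)
    cc = np.corrcoef(poe, acc)[0, 1]            # trend agreement
    rms = np.sqrt(np.mean((poe - acc) ** 2))    # magnitude agreement
    return cc, rms
```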
A Bayesian model for estimating population means using a link-tracing sampling design.
St Clair, Katherine; O'Connell, Daniel
2012-03-01
Link-tracing sampling designs can be used to study human populations that contain "hidden" groups who tend to be linked together by a common social trait. These links can be used to increase the sampling intensity of a hidden domain by tracing links from individuals selected in an initial wave of sampling to additional domain members. Chow and Thompson (2003, Survey Methodology 29, 197-205) derived a Bayesian model to estimate the size or proportion of individuals in the hidden population for certain link-tracing designs. We propose an addition to their model that will allow for the modeling of a quantitative response. We assess properties of our model using a constructed population and a real population of at-risk individuals, both of which contain two domains of hidden and nonhidden individuals. Our results show that our model can produce good point and interval estimates of the population mean and domain means when our population assumptions are satisfied. © 2011, The International Biometric Society.
Niqueux, Éric; Picault, Jean-Paul; Amelot, Michel; Allée, Chantal; Lamandé, Josiane; Guillemoto, Carole; Pierre, Isabelle; Massin, Pascale; Blot, Guillaume; Briand, François-Xavier; Rose, Nicolas; Jestin, Véronique
2014-01-10
EU annual serosurveillance programs show that domestic duck flocks have the highest seroprevalence of H5 antibodies, demonstrating the circulation of notifiable avian influenza virus (AIV) according to OIE, likely low pathogenic (LP). Therefore, transmission characteristics of LPAIV within these flocks can help to understand virus circulation and possible risk of propagation. This study aimed at estimating transmission parameters of four H5 LPAIV (three field strains from French poultry and decoy ducks, and one clonal reverse-genetics strain derived from one of the former), using a SIR model to analyze data from experimental infections in SPF Muscovy ducks. The design was set up to accommodate rearing on wood shavings with a low density of 1.6 ducks/m²: 10 inoculated ducks were housed together with 15 contact-exposed ducks. Infection was monitored by RNA detection on oropharyngeal and cloacal swabs using real-time RT-PCR with a cutoff corresponding to 2-7 EID50. Depending on the strain, the basic reproduction number (R0) varied from 5.5 to 42.7, confirming LPAIV could easily be transmitted to susceptible Muscovy ducks. The lowest R0 estimate was obtained for a H5N3 field strain, due to lower values of transmission rate and duration of infectious period, whereas reverse-genetics derived H5N1 strain had the highest R0. Frequency and intensity of clinical signs were also variable between strains, but apparently not associated with longer infectious periods. Further comparisons of quantitative transmission parameters may help to identify relevant viral genetic markers for early detection of potentially more virulent strains during surveillance of LPAIV. Copyright © 2013 Elsevier B.V. All rights reserved.
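A toy chain-binomial SIR simulation of the experimental design described above (10 inoculated, 15 contact-exposed ducks), illustrating how the transmission rate and infectious period combine into R0 = beta × T; the parameter values are hypothetical, not the study's estimates:

```python
import numpy as np

def simulate_sir(beta, infectious_period_days, n_contact=15, n_inoculated=10,
                 days=30, dt=0.1, seed=0):
    """Minimal stochastic SIR sketch for a small duck pen.
    beta: transmission rate (per day); R0 = beta * infectious_period.
    Frequency-dependent transmission, whole pen as one mixing group."""
    rng = np.random.default_rng(seed)
    gamma = 1.0 / infectious_period_days
    n_total = n_contact + n_inoculated
    s, i, r = n_contact, n_inoculated, 0
    t = 0.0
    while t < days and i > 0:
        p_inf = 1.0 - np.exp(-beta * i / n_total * dt)   # per-susceptible risk
        new_inf = rng.binomial(s, p_inf)
        new_rec = rng.binomial(i, 1.0 - np.exp(-gamma * dt))
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        t += dt
    return s, i, r

beta, T = 5.0, 4.0               # hypothetical per-day rate and infectious period
print("R0 =", beta * T)          # 20.0, within the 5.5-42.7 range reported
```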
NASA Astrophysics Data System (ADS)
Bolen, Steven M.; Chandrasekar, V.
2003-06-01
The Tropical Rainfall Measuring Mission (TRMM) is the first mission dedicated to measuring rainfall from space using radar. The precipitation radar (PR) is one of several instruments aboard the TRMM satellite that is operating in a nearly circular orbit with nominal altitude of 350 km, inclination of 35°, and period of 91.5 min. The PR is a single-frequency Ku-band instrument that is designed to yield information about the vertical storm structure so as to gain insight into the intensity and distribution of rainfall. Attenuation effects on PR measurements, however, can be significant and as high as 10-15 dB. This can seriously impair the accuracy of rain rate retrieval algorithms derived from PR signal returns. Quantitative estimation of PR attenuation is made along the PR beam via ground-based polarimetric observations to validate attenuation correction procedures used by the PR. The reflectivity (Zh) at horizontal polarization and specific differential phase (Kdp) are found along the beam from S-band ground radar measurements, and theoretical modeling is used to determine the expected specific attenuation (k) along the space-Earth path at Ku-band frequency from these measurements. A theoretical k-Kdp relationship is determined for rain when Kdp ≥ 0.5°/km, and a power-law relationship, k = aZh^b, is determined for light rain and other types of hydrometeors encountered along the path. After alignment and resolution volume matching are performed between ground and PR measurements, the two-way path-integrated attenuation (PIA) is calculated along the PR propagation path by integrating the specific attenuation along the path. The PR reflectivity derived after removing the PIA is also compared against ground radar observations.
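A sketch of the path-integrated attenuation computation described above; the k-Kdp relation is taken as linear for simplicity, and the coefficients a, b, c are placeholders, not the values fitted in the study:

```python
import numpy as np

def two_way_pia(zh_dbz, kdp_deg_km, dr_km, a=3.0e-4, b=0.75, c=0.5):
    """Two-way path-integrated attenuation (dB) at Ku band along a PR beam.
    Uses k = c * Kdp where Kdp >= 0.5 deg/km (rain) and the power law
    k = a * Zh**b elsewhere; a, b, c are placeholder coefficients.
    zh_dbz: reflectivity profile in dBZ; dr_km: range-gate spacing."""
    zh_lin = 10.0 ** (np.asarray(zh_dbz) / 10.0)   # dBZ -> mm^6 m^-3
    kdp = np.asarray(kdp_deg_km)
    k = np.where(kdp >= 0.5, c * kdp, a * zh_lin ** b)   # dB/km
    return 2.0 * np.sum(k * dr_km)                       # out-and-back integral
```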
Pirat, Bahar; Little, Stephen H; Igo, Stephen R; McCulloch, Marti; Nosé, Yukihiko; Hartley, Craig J; Zoghbi, William A
2009-03-01
The proximal isovelocity surface area (PISA) method is useful in the quantitation of aortic regurgitation (AR). We hypothesized that actual measurement of PISA provided with real-time 3-dimensional (3D) color Doppler yields more accurate regurgitant volumes than those estimated by 2-dimensional (2D) color Doppler PISA. We developed a pulsatile flow model for AR with an imaging chamber in which interchangeable regurgitant orifices with defined shapes and areas were incorporated. An ultrasonic flow meter was used to calculate the reference regurgitant volumes. A total of 29 different flow conditions for 5 orifices with different shapes were tested at a rate of 72 beats/min. 2D PISA was calculated as 2πr², and 3D PISA was measured from 8 equidistant radial planes of the 3D PISA surface. Regurgitant volume was derived as PISA × aliasing velocity × (time-velocity integral of AR / peak AR velocity). Regurgitant volumes by flow meter ranged between 12.6 and 30.6 mL/beat (mean 21.4 +/- 5.5 mL/beat). Regurgitant volumes estimated by 2D PISA correlated well with volumes measured by flow meter (r = 0.69); however, a significant underestimation was observed (y = 0.5x + 0.6). Correlation with flow meter volumes was stronger for 3D PISA-derived regurgitant volumes (r = 0.83); significantly less underestimation of regurgitant volumes was seen, with a regression line close to identity (y = 0.9x + 3.9). Direct measurement of PISA is feasible, without geometric assumptions, using real-time 3D color Doppler. Calculation of aortic regurgitant volumes with 3D color Doppler using this methodology is more accurate than the conventional 2D method with its hemispheric PISA assumption.
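The regurgitant-volume formula quoted above translates directly into code; a minimal sketch, assuming cm/s and cm units so the result comes out in mL/beat:

```python
import numpy as np

def pisa_2d(radius_cm):
    """Hemispheric 2D PISA = 2*pi*r^2 (cm^2)."""
    return 2.0 * np.pi * radius_cm ** 2

def regurgitant_volume(pisa_cm2, v_alias_cm_s, vti_ar_cm, v_peak_cm_s):
    """Regurgitant volume (mL/beat) from the PISA method:
    RVol = PISA * aliasing velocity * (VTI of AR jet / peak AR velocity)."""
    flow = pisa_cm2 * v_alias_cm_s            # instantaneous flow, mL/s
    return flow * vti_ar_cm / v_peak_cm_s     # mL per beat
```

For the 3D variant, pisa_cm2 would instead come from the surface measured across the 8 radial planes, removing the hemispheric assumption.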
Biological activity of aldose reductase and lipophilicity of pyrrolyl-acetic acid derivatives
NASA Astrophysics Data System (ADS)
Kumari, A.; Kumari, R.; Kumar, R.; Gupta, M.
2011-12-01
Quantitative Structure-Activity Relationship (QSAR) modeling is a powerful approach for correlating an organic compound to its lipophilicity. In this paper, QSAR models are established for estimating the correlation of the lipophilicity of a series of pyrrolyl-acetic acid derivatives, inhibitors of the aldose reductase enzyme, in the n-octanol-water system with their aldose reductase inhibitory activity. Lipophilicity is expressed by the logarithm of the n-octanol-water partition coefficient, log P, and the biological activity by the logarithm of the aldose reductase inhibitory activity. Results obtained by QSAR modeling of the compound series reveal a definite trend in biological activity, and a further improvement in the quantitative relationships is achieved if, besides log P, the Hammett electronic constant σ and the third-order connectivity index (³χ) are included in the regression equation. The tri-parametric model with log P, ³χ and σ as correlating parameters has been found to be the best, explaining 87% of the variance (R² = 0.8743). One compound was found to be a serious outlier; when it was excluded, the model explained about 94% of the variance of the data set (R² = 0.9447). The topological index (³χ) has been found to be a good parameter for modeling the biological activity.
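The tri-parametric model has a standard ordinary-least-squares form; a minimal sketch (the descriptor and activity data themselves are not reproduced here):

```python
import numpy as np

def fit_triparametric(logP, chi3, sigma, activity):
    """OLS fit of: activity ~ b0 + b1*logP + b2*chi3 + b3*sigma.
    Inputs are 1D arrays over the compound series; returns the
    coefficient vector and R^2."""
    logP, chi3, sigma, activity = map(np.asarray, (logP, chi3, sigma, activity))
    X = np.column_stack([np.ones_like(logP), logP, chi3, sigma])
    beta, *_ = np.linalg.lstsq(X, activity, rcond=None)
    pred = X @ beta
    ss_res = np.sum((activity - pred) ** 2)
    ss_tot = np.sum((activity - activity.mean()) ** 2)
    return beta, 1.0 - ss_res / ss_tot
```

Refitting after dropping a suspected outlier and comparing R² values reproduces the kind of 87% to 94% improvement reported above.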
Elliott, Jonathan T.; Samkoe, Kimberley S.; Davis, Scott C.; Gunn, Jason R.; Paulsen, Keith D.; Roberts, David W.; Pogue, Brian W.
2017-01-01
Receptor concentration imaging (RCI) with targeted-untargeted optical dye pairs has enabled in vivo immunohistochemistry analysis in preclinical subcutaneous tumors. Successful application of RCI to fluorescence guided resection (FGR), so that quantitative molecular imaging of tumor-specific receptors could be performed in situ, would have a high impact. However, assumptions of pharmacokinetics, permeability and retention, as well as the lack of a suitable reference region limit the potential for RCI in human neurosurgery. In this study, an arterial input graphic analysis (AIGA) method is presented which is enabled by independent component analysis (ICA). The percent difference in arterial concentration between the image-derived arterial input function (AIF_ICA) and that obtained by an invasive method (ICA_CAR) was 2.0 ± 2.7% during the first hour of circulation of a targeted-untargeted dye pair in mice. Estimates of distribution volume and receptor concentration in tumor bearing mice (n = 5) recovered using the AIGA technique did not differ significantly from values obtained using invasive AIF measurements (p = 0.12). The AIGA method, enabled by the subject-specific AIF_ICA, was also applied in a rat orthotopic model of U-251 glioblastoma to obtain the first reported receptor concentration and distribution volume maps during open craniotomy. PMID:26349671
NASA Astrophysics Data System (ADS)
Vulpiani, Gianfranco; Ripepe, Maurizio
2017-04-01
The detection and quantitative retrieval of ash plumes is of significant interest due to the environmental, climatic, and socioeconomic effects of ash fallout which might cause hardship and damages in areas surrounding volcanoes, representing a serious hazard to aircrafts. Real-time monitoring of such phenomena is crucial for initializing ash dispersion models. Ground-based and space-borne remote sensing observations provide essential information for scientific and operational applications. Satellite visible-infrared radiometric observations from geostationary platforms are usually exploited for long-range trajectory tracking and for measuring low-level eruptions. Their imagery is available every 10-30 min and suffers from a relatively poor spatial resolution. Moreover, the field of view of geostationary radiometric measurements may be blocked by water and ice clouds at higher levels and the observations' overall utility is reduced at night. Ground-based microwave weather radars may represent an important tool for detecting and, to a certain extent, mitigating the hazards presented by ash clouds. The possibility of monitoring in all weather conditions at a fairly high spatial resolution (less than a few hundred meters) and every few minutes after the eruption is the major advantage of using ground-based microwave radar systems. Ground-based weather radar systems can also provide data for estimating the ash volume, total mass, and height of eruption clouds. Previous methodological studies have investigated the possibility of using ground-based single- and dual-polarization radar system for the remote sensing of volcanic ash cloud. In the present work, the methodology was revised to overcome some limitations related to the assumed microphysics. New scattering simulations based on the T-matrix solution technique were used to set up the parametric algorithms adopted to estimate the mass concentration and ash mean diameter. Furthermore, because quantitative estimation of the erupted materials in the proximity of the volcano's vent is crucial for initializing transportation models, a novel methodology for estimating a volcano eruption's mass discharge rate (MDR) based on the combination of radar and a thermal camera was developed. We show how it is possible to calculate the mass flow using radar-derived ash concentration and particle diameter at the base of the eruption column using the exit velocity estimated by the thermal camera. The proposed procedure was tested on four Etna eruption episodes that occurred in December 2015 as observed by the available network of C and X band radar systems. The results are congruent with other independent methodologies and observations. The agreement between the total erupted mass derived by the retrieved MDR and the plume concentration can be considered as a self-consistent methodological assessment. Interestingly, the analysis of the polarimetric radar observations allowed us to derive some features of the ash plume, including the size of the eruption column and the height of the gas thrust region.
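A simplified version of the MDR combination described above, treating the discharge rate as concentration × exit velocity × column cross-section; the numbers in the usage lines are hypothetical, and the operational algorithm also uses the retrieved mean particle diameter:

```python
import math

def mass_discharge_rate(concentration_kg_m3, exit_velocity_m_s, vent_area_m2):
    """Mass discharge rate (kg/s) at the base of the eruption column:
    radar-derived ash concentration times thermal-camera exit velocity
    times the column cross-sectional area. A simplified flux estimate."""
    return concentration_kg_m3 * exit_velocity_m_s * vent_area_m2

# Hypothetical numbers: 1 g/m^3 over a 100 m radius column at 150 m/s
print(mass_discharge_rate(1e-3, 150.0, math.pi * 100.0 ** 2))   # ~4.7e3 kg/s
```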
Detection of Natural Fractures from Observed Surface Seismic Data Based on a Linear-Slip Model
NASA Astrophysics Data System (ADS)
Chen, Huaizhen; Zhang, Guangzhi
2018-03-01
Natural fractures play an important role in migration of hydrocarbon fluids. Based on a rock physics effective model, the linear-slip model, which defines fracture parameters (fracture compliances) for quantitatively characterizing the effects of fractures on rock total compliance, we propose a method to detect natural fractures from observed seismic data via inversion for the fracture compliances. We first derive an approximate PP-wave reflection coefficient in terms of fracture compliances. Using the approximate reflection coefficient, we derive azimuthal elastic impedance as a function of fracture compliances. An inversion method to estimate fracture compliances from seismic data is presented based on a Bayesian framework and azimuthal elastic impedance, which is implemented in a two-step procedure: a least-squares inversion for azimuthal elastic impedance and an iterative inversion for fracture compliances. We apply the inversion method to synthetic and real data to verify its stability and validity. Synthetic tests confirm that the method can stably estimate fracture compliances from seismic data with a moderate signal-to-noise ratio (Gaussian noise), and the test on real data reveals that reasonable fracture compliances are obtained using the proposed method.
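The first, least-squares step of the two-step procedure can be sketched generically as a damped linear inversion; G, d, and mu below are placeholders for the study's forward operator, azimuthal seismic data, and Bayesian prior weighting:

```python
import numpy as np

def ls_invert(G, d, mu=1e-2):
    """Damped least-squares solve of d = G m for the model vector m
    (here, azimuthal elastic impedances). Tikhonov damping mu stands in
    for the Bayesian prior used in the study."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + mu * np.eye(n), G.T @ d)
```

The second step, the iterative inversion for fracture compliances, would then use the recovered impedances as its data.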
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frey, K.A.; Hichwa, R.D.; Ehrenkaufer, R.L.
1985-10-01
A tracer kinetic method is developed for the in vivo estimation of high-affinity radioligand binding to central nervous system receptors. Ligand is considered to exist in three brain pools corresponding to free, nonspecifically bound, and specifically bound tracer. These environments, in addition to that of intravascular tracer, are interrelated by a compartmental model of in vivo ligand distribution. A mathematical description of the model is derived, which allows determination of regional blood-brain barrier permeability, nonspecific binding, the rate of receptor-ligand association, and the rate of dissociation of bound ligand, from the time courses of arterial blood and tissue tracer concentrations. The term "free receptor density" is introduced to describe the receptor population measured by this method. The technique is applied to the in vivo determination of regional muscarinic acetylcholine receptors in the rat, with the use of [3H]scopolamine. Kinetic estimates of free muscarinic receptor density are in general agreement with binding capacities obtained from previous in vivo and in vitro equilibrium binding studies. In the striatum, however, kinetic estimates of free receptor density are less than those in the neocortex, a reversal of the rank ordering of these regions derived from equilibrium determinations. A simplified model is presented that is applicable to tracers that do not readily dissociate from specific binding sites during the experimental period.
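A sketch of the three-pool compartmental model described above as a small ODE system; the rate-constant naming (K1, k2...k6) follows common compartmental-modeling conventions and is an assumption, not the paper's exact parameterization:

```python
import numpy as np
from scipy.integrate import odeint

def ligand_model(y, t, K1, k2, k3, k4, k5, k6, Ca_of_t):
    """Rates of change for free (F), nonspecifically bound (NS) and
    specifically bound (B) tracer; Ca_of_t is the arterial input."""
    F, NS, B = y
    Ca = Ca_of_t(t)
    dF = K1 * Ca - (k2 + k3 + k5) * F + k4 * B + k6 * NS
    dNS = k5 * F - k6 * NS
    dB = k3 * F - k4 * B        # k3: receptor association, k4: dissociation
    return [dF, dNS, dB]

t = np.linspace(0.0, 60.0, 601)                    # minutes
Ca = lambda tt: np.exp(-tt / 10.0)                 # toy arterial input
sol = odeint(ligand_model, [0.0, 0.0, 0.0], t,
             args=(0.1, 0.1, 0.05, 0.01, 0.2, 0.2, Ca))
# sol columns: F, NS, B over time; summed with a blood-volume term they
# would give the measured tissue time-activity curve.
```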
Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi
2015-01-01
Measurement of arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate the input function without blood sampling. We performed 18F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of AIF in 11 rats. The standard arterial input function was calculated by averaging AIFs from eight anesthetized rats, after normalization with body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIF(NS)) and (2) SIF calibrated by a single blood sampling as proposed previously (EIF(1S)). No significant differences in area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIF(NS)-, and EIF(1S)-based methods using repeated measures analysis of variance. In the correlation analysis, AUC or CMRGlc derived from EIF(NS) was highly correlated with those derived from AIF and EIF(1S). A preliminary comparison between AIF and EIF(NS) in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIF(NS) method might serve as a noninvasive substitute for individual AIF measurement. PMID:25966947
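The SIF construction and rescaling reduce to a few lines; a sketch assuming normalization by injected dose per unit body mass, which is one plausible reading of the normalization described above:

```python
import numpy as np

def build_sif(aifs, body_masses, injected_doses):
    """Average AIFs after removing each rat's dose/mass scale.
    aifs: array of shape (n_rats, n_times)."""
    aifs = np.asarray(aifs, dtype=float)
    scale = (np.asarray(injected_doses) / np.asarray(body_masses))[:, None]
    return (aifs / scale).mean(axis=0)      # standard input function

def estimate_eif_ns(sif, body_mass, injected_dose):
    """EIF(NS): rescale the SIF by the individual's ID and BM,
    with no blood sampling."""
    return sif * injected_dose / body_mass
```

The EIF(1S) variant would instead scale the SIF so that it passes through a single measured blood sample.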
A label field fusion bayesian model and its penalized maximum rand estimator for image segmentation.
Mignotte, Max
2010-06-01
This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation result to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for the definition of a penalized maximum probabilistic Rand estimator, with which the fusion of simple, quickly estimated segmentation results appears as an attractive alternative to the complex segmentation models existing in the literature. This fusion framework has been successfully applied on the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.
Michael J. Firko; Jane Leslie Hayes
1990-01-01
Quantitative genetic studies of resistance can provide estimates of genetic parameters not available with other types of genetic analyses. Three methods are discussed for estimating the amount of additive genetic variation in resistance to individual insecticides and subsequent estimation of heritability (h2) of resistance. Sibling analysis and...
2016-09-05
... was performed on an LTQ-Orbitrap Elite MS and the final quantitation was derived by comparing the relative response of the 200 fmol AQUA... shown in Figure 3B, the final quantitation is derived by comparing the relative response of the 200 fmol AQUA standards (SEE and IRSEE: Set 1) to... measure of eVLP quality, the western blot and LC-HRMS quantitation results were compared to survival data in mice for each of these eVLP vaccine
Lankford, Christopher L; Does, Mark D
2018-02-01
Quantitative MRI may require correcting for nuisance parameters which can or must be constrained to independently measured or assumed values. The noise and/or bias in these constraints propagate to fitted parameters. For example, the case of refocusing pulse flip angle constraint in multiple spin echo T2 mapping is explored. An analytical expression for the mean-squared error of a parameter of interest was derived as a function of the accuracy and precision of an independent estimate of a nuisance parameter. The expression was validated by simulations and then used to evaluate the effects of flip angle (θ) constraint on the accuracy and precision of the T2 estimate, T̂2, for a variety of multi-echo T2 mapping protocols. Constraining θ improved T̂2 precision when the θ-map signal-to-noise ratio was greater than approximately one-half that of the first spin echo image. For many practical scenarios, constrained fitting was calculated to reduce not just the variance but the full mean-squared error of T̂2, for bias in θ̂ ≲ 6%. The analytical expression derived in this work can be applied to inform experimental design in quantitative MRI. The example application to T2 mapping provided specific cases, depending on θ̂ accuracy and precision, in which θ̂ measurement and constraint would be beneficial to T̂2 variance or mean-squared error. Magn Reson Med 79:673-682, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Quantitative Gd-DOTA uptake from cerebrospinal fluid into rat brain using 3D VFA-SPGR at 9.4T.
Lee, Hedok; Mortensen, Kristian; Sanggaard, Simon; Koch, Palle; Brunner, Hans; Quistorff, Bjørn; Nedergaard, Maiken; Benveniste, Helene
2018-03-01
We propose a quantitative technique to assess solute uptake into the brain parenchyma based on dynamic contrast-enhanced MRI (DCE-MRI). With this approach, a small molecular weight paramagnetic contrast agent (Gd-DOTA) is infused into the cerebrospinal fluid (CSF) and whole brain gadolinium concentration maps are derived. We implemented a 3D variable flip angle spoiled gradient echo (VFA-SPGR) longitudinal relaxation time (T1) technique, the accuracy of which was cross-validated by way of inversion recovery rapid acquisition with relaxation enhancement (IR-RARE) using phantoms. Normal Wistar rats underwent Gd-DOTA infusion into CSF via the cisterna magna and continuous MRI for approximately 130 min using T1-weighted imaging. Dynamic Gd-DOTA concentration maps were calculated and parenchymal uptake was estimated. In the phantom study, T1 discrepancies between the VFA-SPGR and IR-RARE sequences were approximately 6% with a transmit coil inhomogeneity correction. In the in vivo study, contrast transport profiles indicated maximal parenchymal retention of approximately 19% relative to the total amount delivered into the cisterna magna. Imaging strategies for accurate 3D contrast concentration mapping at 9.4T were developed and whole brain dynamic concentration maps were derived to study solute transport via the glymphatic system. The newly developed approach will enable future quantitative studies of the glymphatic system in health and disease states. Magn Reson Med 79:1568-1578, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
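VFA-SPGR T1 estimation conventionally uses the linearized SPGR signal equation; a sketch of that standard fit (the study's actual pipeline, including the transmit-coil correction, is more involved):

```python
import numpy as np

def vfa_t1(signals, flip_deg, tr_ms):
    """Estimate T1 from variable flip angle SPGR data via the standard
    linearization S/sin(a) = E1 * S/tan(a) + M0*(1 - E1),
    with E1 = exp(-TR/T1). Assumes B1-corrected flip angles."""
    a = np.deg2rad(np.asarray(flip_deg, dtype=float))
    s = np.asarray(signals, dtype=float)
    slope, _ = np.polyfit(s / np.tan(a), s / np.sin(a), 1)   # slope = E1
    return -tr_ms / np.log(slope)                            # T1 in ms

# Sanity check with synthetic data (T1 = 1500 ms, TR = 5 ms):
tr, t1, m0 = 5.0, 1500.0, 1.0
e1 = np.exp(-tr / t1)
ang = np.array([2.0, 5.0, 10.0, 15.0])
rad = np.deg2rad(ang)
sig = m0 * np.sin(rad) * (1 - e1) / (1 - e1 * np.cos(rad))
print(vfa_t1(sig, ang, tr))    # ~1500
```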
Algamal, Z Y; Lee, M H
2017-01-01
A high-dimensional quantitative structure-activity relationship (QSAR) classification model typically contains a large number of irrelevant and redundant descriptors. In this paper, a new descriptor-selection design for QSAR classification model estimation is proposed by adding a new weight inside the L1-norm. The experimental results of classifying the anti-hepatitis C virus activity of thiourea derivatives demonstrate that the proposed descriptor selection method in the QSAR classification model performs effectively and competitively compared with other existing penalized methods in terms of classification performance on both the training and the testing datasets. Moreover, it is noteworthy that the results obtained in terms of the stability test and applicability domain provide a robust QSAR classification model. It is evident from the results that the developed QSAR classification model could conceivably be employed for further high-dimensional QSAR classification studies.
Yost, Lisa J; Rodricks, Joseph D; Turnbull, Duncan; DeLeo, Paul C; Nash, J Frank; Quiñones-Rivera, Antonio; Carlson, Pete A
2016-10-01
A quantitative human risk assessment of chloroxylenol was conducted for liquid hand and dishwashing soap products used by consumers and health-care workers. The toxicological data for chloroxylenol indicate lack of genotoxicity, no evidence of carcinogenicity, and minimal systemic toxicity. No observed adverse effect levels (NOAEL) were established from chronic toxicity studies, specifically a carcinogenicity study that found no cancer excess (18 mg/kg-day) and studies of developmental and reproductive toxicity (100 mg/kg-day). Exposure to chloroxylenol for adults and children was estimated for two types of rinse-off cleaning products, one liquid hand soap, and two dishwashing products. The identified NOAELs were used together with exposure estimates to derive margin of exposure (MOE) estimates for chloroxylenol (i.e., ratios of NOAELs to estimated exposures). These estimates were designed with conservative assumptions and likely overestimate exposure and risk (i.e., highest frequency, 100% dermal penetration). The resulting MOEs ranged from 178 to over 100,000,000, indicating negligibly small potential for harm related to consumer or health-care worker exposure to chloroxylenol in liquid soaps used in dish washing and hand washing. Copyright © 2016 Elsevier Inc. All rights reserved.
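The margin-of-exposure arithmetic is simple; a sketch using the chronic NOAEL quoted above and a hypothetical exposure estimate:

```python
def margin_of_exposure(noael_mg_kg_day, exposure_mg_kg_day):
    """MOE = NOAEL / estimated exposure; larger values mean lower concern."""
    return noael_mg_kg_day / exposure_mg_kg_day

# Chronic NOAEL of 18 mg/kg-day (from the assessment) against a
# hypothetical exposure estimate of 0.001 mg/kg-day:
print(margin_of_exposure(18.0, 0.001))   # 18000
```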
Data analysis in emission tomography using emission-count posteriors
NASA Astrophysics Data System (ADS)
Sitek, Arkadiusz
2012-11-01
A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography.
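A single-voxel toy version of the emission-count posterior idea: with a Poisson prior on emissions and binomial detection, the minimum-mean-square-error estimate of the emission count has a closed form. This ignores the tomographic system matrix and is only meant to illustrate the flavor of the estimator:

```python
import numpy as np

def mmse_emission_count(k_detected, lam_prior, p_detect):
    """With N ~ Poisson(lam) emissions and each emission detected with
    probability p, the undetected emissions are Poisson(lam*(1-p))
    independent of the detected ones (Poisson thinning), so
    E[N | k detected] = k + lam*(1-p)."""
    return k_detected + lam_prior * (1.0 - p_detect)

# Monte Carlo check of the closed form
rng = np.random.default_rng(1)
lam, p = 200.0, 0.3
n = rng.poisson(lam, 100_000)
k = rng.binomial(n, p)
print(np.mean(n[k == 60]), mmse_emission_count(60, lam, p))   # both ~200
```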
A NOVEL TECHNIQUE FOR QUANTITATIVE ESTIMATION OF UPTAKE OF DIESEL EXHAUST PARTICLES BY LUNG CELLS
While airborne particulates like diesel exhaust particulates (DEP) exert significant toxicological effects on lungs, quantitative estimation of accumulation of DEP inside lung cells has not been reported due to a lack of an accurate and quantitative technique for this purpose. I...
Cervical Vertebral Body's Volume as a New Parameter for Predicting the Skeletal Maturation Stages.
Choi, Youn-Kyung; Kim, Jinmi; Yamaguchi, Tetsutaro; Maki, Koutaro; Ko, Ching-Chang; Kim, Yong-Il
2016-01-01
This study aimed to determine the correlation between the volumetric parameters derived from the images of the second, third, and fourth cervical vertebrae by using cone beam computed tomography with skeletal maturation stages and to propose a new formula for predicting skeletal maturation by using regression analysis. We obtained the estimation of skeletal maturation levels from hand-wrist radiographs and volume parameters derived from the second, third, and fourth cervical vertebrae bodies from 102 Japanese patients (54 women and 48 men, 5-18 years of age). We performed Pearson's correlation coefficient analysis and simple regression analysis. All volume parameters derived from the second, third, and fourth cervical vertebrae exhibited statistically significant correlations (P < 0.05). The simple regression model with the greatest R-square indicated the fourth-cervical-vertebra volume as an independent variable with a variance inflation factor less than ten. The explanatory power was 81.76%. Volumetric parameters of cervical vertebrae using cone beam computed tomography are useful in regression models. The derived regression model has the potential for clinical application as it enables a simple and quantitative analysis to evaluate skeletal maturation level.
Li, Ming Ze; Gao, Yuan Ke; Di, Xue Ying; Fan, Wen Yi
2016-03-01
The moisture content of forest surface soil is an important parameter in forest ecosystems. Using microwave remote sensing for rapid and accurate estimation of forest surface soil moisture is of practical significance for forest ecosystem research. With the aid of a TDR-300 soil moisture measuring instrument, the moisture contents of forest surface soils in 120 sample plots at Tahe Forestry Bureau of Daxing'anling region in Heilongjiang Province were measured. Taking the moisture content of forest surface soil as the dependent variable and the polarization decomposition parameters of C band Quad-pol SAR data as independent variables, two types of quantitative estimation models (a multilinear regression model and a BP-neural network model) for predicting the moisture content of forest surface soils were developed. The spatial distribution of forest surface soil moisture content on the regional scale was then derived with model inversion. Results showed that the model precision was 86.0% and 89.4% with RMSE of 3.0% and 2.7% for the multilinear regression model and the BP-neural network model, respectively. This indicated that the BP-neural network model performed better than the multilinear regression model in quantitative estimation of the moisture content of forest surface soil. The spatial distribution of forest surface soil moisture content in the study area was then obtained by using the BP neural network model simulation with the Quad-pol SAR data.
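A sketch of the two model types compared above, using scikit-learn; random synthetic data stand in for the 120 plots' polarimetric decomposition parameters and TDR-300 measurements:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# X: decomposition parameters per plot; y: measured soil moisture (%).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))
y = 20 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=2.0, size=120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for model in (LinearRegression(),
              MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                           random_state=0)):
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(type(model).__name__, "RMSE: %.2f" % rmse)
```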
NASA Astrophysics Data System (ADS)
Healy, R. M.; Sciare, J.; Poulain, L.; Crippa, M.; Wiedensohler, A.; Prévôt, A. S. H.; Baltensperger, U.; Sarda-Estève, R.; McGuire, M. L.; Jeong, C.-H.; McGillicuddy, E.; O'Connor, I. P.; Sodeau, J. R.; Evans, G. J.; Wenger, J. C.
2013-09-01
Single-particle mixing state information can be a powerful tool for assessing the relative impact of local and regional sources of ambient particulate matter in urban environments. However, quantitative mixing state data are challenging to obtain using single-particle mass spectrometers. In this study, the quantitative chemical composition of carbonaceous single particles has been determined using an aerosol time-of-flight mass spectrometer (ATOFMS) as part of the MEGAPOLI 2010 winter campaign in Paris, France. Relative peak areas of marker ions for elemental carbon (EC), organic aerosol (OA), ammonium, nitrate, sulfate and potassium were compared with concurrent measurements from an Aerodyne high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS), a thermal-optical OCEC analyser and a particle into liquid sampler coupled with ion chromatography (PILS-IC). ATOFMS-derived estimated mass concentrations reproduced the variability of these species well (R2 = 0.67-0.78), and 10 discrete mixing states for carbonaceous particles were identified and quantified. The chemical mixing state of HR-ToF-AMS organic aerosol factors, resolved using positive matrix factorisation, was also investigated through comparison with the ATOFMS dataset. The results indicate that hydrocarbon-like OA (HOA) detected in Paris is associated with two EC-rich mixing states which differ in their relative sulfate content, while fresh biomass burning OA (BBOA) is associated with two mixing states which differ significantly in their OA / EC ratios. Aged biomass burning OA (OOA2-BBOA) was found to be significantly internally mixed with nitrate, while secondary, oxidised OA (OOA) was associated with five particle mixing states, each exhibiting different relative secondary inorganic ion content. Externally mixed secondary organic aerosol was not observed. These findings demonstrate the range of primary and secondary organic aerosol mixing states in Paris. Examination of the temporal behaviour and chemical composition of the ATOFMS classes also enabled estimation of the relative contribution of transported emissions of each chemical species and total particle mass in the size range investigated. Only 22% of the total ATOFMS-derived particle mass was apportioned to fresh, local emissions, with 78% apportioned to regional/continental-scale emissions.
NASA Astrophysics Data System (ADS)
Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric
2018-02-01
Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks, by combining a respiratory motion correction approach with temporal regularization in a unique reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.
Wengert, G J; Helbich, T H; Woitek, R; Kapetas, P; Clauser, P; Baltzer, P A; Vogl, W-D; Weber, M; Meyer-Baese, A; Pinker, Katja
2016-11-01
To evaluate the inter-/intra-observer agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent un-enhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intra-observer agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (k). Inexperienced readers achieved moderate inter-/intra-observer agreement and experienced readers a substantial inter- and perfect intra-observer agreement for subjective visual estimation of FGT. Practice and experience reduced observer-dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (k = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/inter-observer agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation. • Subjective FGT estimation with MRI shows moderate intra-/inter-observer agreement in inexperienced readers. • Inter-observer agreement can be improved by practice and experience. • Automated observer-independent quantitative measurements can provide reliable and standardized assessment of FGT with MRI.
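Cohen's kappa, the agreement statistic used throughout the study above, can be computed from a confusion matrix of two raters' category assignments; a minimal sketch assuming categories coded as integers 0..3:

```python
import numpy as np

def cohens_kappa(r1, r2, n_categories=4):
    """Cohen's kappa for two raters' categorical ratings
    (e.g., four-level BI-RADS FGT categories coded 0..3)."""
    conf = np.zeros((n_categories, n_categories))
    for a, b in zip(np.asarray(r1), np.asarray(r2)):
        conf[a, b] += 1
    conf /= conf.sum()
    p_obs = np.trace(conf)                           # observed agreement
    p_exp = conf.sum(axis=1) @ conf.sum(axis=0)      # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)
```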
Application of a Topological Metric for Assessing Numerical Ocean Models with Satellite Observations
NASA Astrophysics Data System (ADS)
Morey, S. L.; Dukhovskoy, D. S.; Hiester, H. R.; Garcia-Pineda, O. G.; MacDonald, I. R.
2015-12-01
Satellite-based sensors provide a vast amount of observational data over the world ocean. Active microwave radars measure changes in sea surface height and backscattering from surface waves. Data from passive radiometers sensing emissions in multiple spectral bands can directly measure surface temperature, be combined with other data sources to estimate salinity, or processed to derive estimates of optically significant quantities, such as concentrations of biochemical properties. Estimates of the hydrographic variables can be readily used for assimilation or assessment of hydrodynamic ocean models. Optical data, however, have been underutilized in ocean circulation modeling. Qualitative assessments of oceanic fronts and other features commonly associated with changes in optically significant quantities are often made through visual comparison. This project applies a topological approach, borrowed from the field of computer image recognition, to quantitatively evaluate ocean model simulations of features that are related to quantities inferred from satellite imagery. The Modified Hausdorff Distance (MHD) provides a measure of the similarity of two shapes. Examples of applications of the MHD to assess ocean circulation models are presented. The first application assesses several models' representation of the freshwater plume structure from the Mississippi River, which is associated with a significant expression of color, using a satellite-derived ocean color index. Even though the variables being compared (salinity and ocean color index) differ, the MHD allows contours of the fields to be compared topologically. The second application assesses simulations of surface oil transport driven by winds and ocean model currents using surface oil maps derived from synthetic aperture radar backscatter data. In this case, maps of time composited oil coverage are compared between the simulations and satellite observations.
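A direct implementation of the Modified Hausdorff Distance as defined in the image-matching literature, applied to two contour point sets (for example, a modeled and an observed plume front):

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff Distance between two point sets of shape
    (n, 2) and (m, 2):
    MHD = max( mean_a min_b |a - b|, mean_b min_a |b - a| )."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise
    d_ab = d.min(axis=1).mean()      # A to B
    d_ba = d.min(axis=0).mean()      # B to A
    return max(d_ab, d_ba)
```

Because the comparison is purely topological, the two contours can come from different variables, as in the salinity-versus-ocean-color example above.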
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
1993-01-01
In the source-free mantle/frozen-flux core magnetic earth model, the non-linear inverse steady motional induction problem was solved using the method presented in Part 1B. How that method was applied to estimate steady, broad-scale fluid velocity fields near the top of Earth's core that induce the secular change indicated by the Definitive Geomagnetic Reference Field (DGRF) models from 1945 to 1980 is described. Special attention is given to the derivation of weight matrices for the DGRF models because the weights determine the apparent significance of the residual secular change. The derived weight matrices also enable estimation of the secular change signal-to-noise ratio characterizing the DGRF models. Two types of weights were derived in 1987-88: radial field weights for fitting the evolution of the broad-scale portion of the radial geomagnetic field component at Earth's surface implied by the DGRF's, and general weights for fitting the evolution of the broad-scale portion of the scalar potential specified by these models. The difference is non-trivial because not all the geomagnetic data represented by the DGRF's constrain the radial field component. For radial field weights (or general weights), a quantitatively acceptable explication of broad-scale secular change relative to the 1980 Magsat epoch must account for 99.94271 percent (or 99.98784 percent) of the total weighted variance accumulated therein. Tolerable normalized root-mean-square weighted residuals of 2.394 percent (or 1.103 percent) are less than the 7 percent errors expected in the source-free mantle/frozen-flux core approximation.
Global snowfall: A combined CloudSat, GPM, and reanalysis perspective.
NASA Astrophysics Data System (ADS)
Milani, Lisa; Kulie, Mark S.; Skofronick-Jackson, Gail; Munchak, S. Joseph; Wood, Norman B.; Levizzani, Vincenzo
2017-04-01
Quantitative global snowfall estimates derived from multi-year data records will be presented to highlight recent advances in high latitude precipitation retrievals using spaceborne observations. More specifically, the analysis features the 2006-2016 CloudSat Cloud Profiling Radar (CPR) and the 2014-2016 Global Precipitation Measurement (GPM) Microwave Imager (GMI) and Dual-frequency Precipitation Radar (DPR) observational datasets and derived products. The ERA-Interim reanalysis dataset is also used to define the meteorological context and an independent combined modeling/observational evaluation dataset. An overview is first provided of CloudSat CPR-derived results that have stimulated significant recent research regarding global snowfall, including seasonal analyses of unique snowfall modes. GMI and DPR global annual snowfall retrievals are then evaluated against the CloudSat estimates to highlight regions where the datasets provide both consistent and diverging snowfall estimates. A hemispheric seasonal analysis for both datasets will also be provided. These comparisons aim at providing a unified global snowfall characterization that leverages the respective instrument's strengths. Attention will also be devoted to regions around the globe that experience unique snowfall modes. For instance, CloudSat has demonstrated an ability to effectively discern snowfall produced by shallow cumuliform cloud structures (e.g., lake/ocean-induced convective snow produced by air/water interactions associated with seasonal cold air outbreaks). The CloudSat snowfall database also reveals prevalent seasonal shallow cumuliform snowfall trends over climate-sensitive regions like the Greenland Ice Sheet. Other regions with unique snowfall modes, such as the US East Coast winter storm track zone that experiences intense snowfall rates directly associated with strong low pressure systems, will also be highlighted to demonstrate GPM's observational effectiveness. Linkages between CloudSat and GPM global snowfall analyses and independent ERA-Interim datasets will also be presented as a final evaluation exercise.
Stochastic time series analysis of fetal heart-rate variability
NASA Astrophysics Data System (ADS)
Shariati, M. A.; Dripps, J. H.
1990-06-01
Fetal heart rate (FHR) is one of the important features of fetal biophysical activity, and its long-term monitoring is used for the antepartum (the period of pregnancy before labour) assessment of fetal well-being. As yet, however, no successful method has been proposed to quantitatively represent the variety of random non-white patterns seen in FHR. The objective of this paper is to address this issue. In this study the Box-Jenkins method of model identification and diagnostic checking was used on phonocardiographically derived (averaged) FHR time series. Models remained exclusively autoregressive (AR). Kalman filtering in conjunction with a maximum likelihood estimation technique forms the parametric estimator. Diagnostics performed on the residuals indicated that a second-order model may be adequate in capturing the type of variability observed in 1- to 2-min data windows of FHR. The scheme may be viewed as a means of data reduction of a highly redundant information source. This allows much more efficient transmission of FHR information from remote locations to places with the facilities and expertise for closer analysis. The extracted parameters are intended to reflect numerically the important FHR features that are normally picked up visually by experts for their assessments. As a result, long-term FHR recorded during the antepartum period could then be screened quantitatively for detection of patterns considered normal or abnormal.
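A minimal sketch of the Box-Jenkins loop described above, under stated assumptions: the surrogate series, statsmodels' AutoReg in place of the authors' Kalman-filter/maximum-likelihood estimator, and a Ljung-Box residual check are illustrative choices, not the paper's implementation.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.stats.diagnostic import acorr_ljungbox

# surrogate averaged FHR record (bpm), standing in for phonocardiographic data
rng = np.random.default_rng(0)
fhr = 140 + 0.1 * np.cumsum(rng.normal(0.0, 0.5, 600))

model = AutoReg(fhr, lags=2).fit()            # candidate second-order AR model
print("AR(2) coefficients:", model.params[1:])

# diagnostic check: large Ljung-Box p-values are consistent with white residuals,
# i.e. the low-order AR model has captured the serial structure of the window
print(acorr_ljungbox(model.resid, lags=[10]))
```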
Using Weighted Entropy to Rank Chemicals in Quantitative High Throughput Screening Experiments
Shockley, Keith R.
2014-01-01
Quantitative high throughput screening (qHTS) experiments can simultaneously produce concentration-response profiles for thousands of chemicals. In a typical qHTS study, a large chemical library is subjected to a primary screen in order to identify candidate hits for secondary screening, validation studies or prediction modeling. Different algorithms, usually based on the Hill equation logistic model, have been used to classify compounds as active or inactive (or inconclusive). However, observed concentration-response activity relationships may not adequately fit a sigmoidal curve. Furthermore, it is unclear how to prioritize chemicals for follow-up studies given the large uncertainties that often accompany parameter estimates from nonlinear models. Weighted Shannon entropy can address these concerns by ranking compounds according to profile-specific statistics derived from estimates of the probability mass distribution of response at the tested concentration levels. This strategy can be used to rank all tested chemicals in the absence of a pre-specified model structure or the approach can complement existing activity call algorithms by ranking the returned candidate hits. The weighted entropy approach was evaluated here using data simulated from the Hill equation model. The procedure was then applied to a chemical genomics profiling data set interrogating compounds for androgen receptor agonist activity. PMID:24056003
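A minimal sketch of the ranking idea, with assumptions: each concentration-response profile is converted into a probability mass distribution over the tested concentrations and scored by weighted Shannon entropy, so concentrated (hit-like) profiles rank ahead of flat ones. The uniform weights and toy profiles are placeholders, not the paper's weighting scheme.

```python
import numpy as np

def weighted_entropy(responses, weights):
    """Weighted Shannon entropy of a response profile (lower = more hit-like)."""
    r = np.abs(np.asarray(responses, dtype=float))
    p = r / r.sum() if r.sum() > 0 else np.full(r.size, 1.0 / r.size)
    p = np.clip(p, 1e-12, None)                   # avoid log(0)
    return -np.sum(weights * p * np.log2(p))

levels = 15
weights = np.ones(levels)                          # uniform weights for illustration
flat = np.ones(levels)                             # inactive: response spread evenly
hit = np.r_[np.zeros(12), 5.0, 20.0, 40.0]         # active: response at high doses only

print(weighted_entropy(flat, weights))             # high entropy -> low priority
print(weighted_entropy(hit, weights))              # low entropy  -> high priority
```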
Exact comprehensive equations for the photon management properties of silicon nanowire
Li, Yingfeng; Li, Meicheng; Li, Ruike; Fu, Pengfei; Wang, Tai; Luo, Younan; Mbengue, Joseph Michel; Trevor, Mwenya
2016-01-01
Unique photon management (PM) properties of silicon nanowire (SiNW) make it an attractive building block for a host of nanowire photonic devices including photodetectors, chemical and gas sensors, waveguides, optical switches, solar cells, and lasers. However, the lack of efficient equations for the quantitative estimation of the SiNW's PM properties limits the rational design of such devices. Herein, we establish comprehensive equations to evaluate several important performance features for the PM properties of SiNW, based on theoretical simulations. Firstly, the relationships between the resonant wavelengths (RW), where SiNW can harvest light most effectively, and the size of SiNW are formulated. Then, equations for the light-harvesting efficiency at RW, which determines the single-frequency performance limit of SiNW-based photonic devices, are established. Finally, equations for the light-harvesting efficiency of SiNW in full-spectrum, which are of great significance in photovoltaics, are established. Furthermore, using these equations, we have derived four extra formulas to estimate the optimal size of SiNW in light-harvesting. These equations can reproduce the majority of the reported experimental and theoretical results with only ~5% error deviations. Our study fills a gap in quantitatively predicting the SiNW's PM properties, which will contribute significantly to its practical applications. PMID:27103087
NASA Astrophysics Data System (ADS)
Cook, Ellyn J.; van der Kaars, Sander
2006-10-01
We review attempts to derive quantitative climatic estimates from Australian pollen data, including the climatic envelope, climatic indicator and modern analogue approaches, and outline the need to pursue alternatives for use as input to, or validation of, simulations by models of past, present and future climate patterns. To this end, we have constructed and tested modern pollen-climate transfer functions for mainland southeastern Australia and Tasmania, using the existing southeastern Australian pollen database, and for northern Australia, using a new pollen database we are developing. After testing for statistical significance, 11 parameters were selected for mainland southeastern Australia, seven for Tasmania and six for northern Australia. The functions are based on weighted-averaging partial least squares regression, and their predictive ability was evaluated against modern observational climate data using leave-one-out cross-validation. Functions for summer, annual and winter rainfall and temperatures are most robust for southeastern Australia, while in Tasmania functions for minimum temperature of the coldest period, mean winter and mean annual temperature are the most reliable. In northern Australia, annual and summer rainfall and annual and summer moisture indices are the strongest. The validation of all functions means that all can be applied with confidence to Quaternary pollen records from these three areas.
Quantitative analysis of Paratethys sea level change during the Messinian Salinity Crisis
NASA Astrophysics Data System (ADS)
de la Vara, Alba; Meijer, Paul; van Baak, Christiaan; Marzocchi, Alice; Grothe, Arjen
2016-04-01
At the time of the Messinian Salinity Crisis in the Mediterranean Sea (i.e., the Pontian stage of the Paratethys), the sea level of the Paratethys dropped as well. Evidence found in the sedimentary record of the Black Sea and the Caspian Sea has been interpreted to indicate that a sea-level fall occurred between 5.6 and 5.5 Ma. Estimates for the magnitude of the fall range from tens of meters to more than 1500 m. The purpose of this study is to provide quantitative insight into the sensitivity of the water level of the Black Sea and the Caspian Sea to the hydrologic budget, for the case in which the Paratethys is disconnected from the Mediterranean. Using a Late Miocene bathymetry based on a palaeogeographic map by Popov et al. (2004), we quantify the fall in sea level, the mean salinity, and the time to reach equilibrium for a wide range of negative hydrologic budgets. By combining our results with (i) estimates derived from a recent global Late Miocene climate simulation and (ii) reconstructed basin salinities, we are able to rule out a drop in sea level of the order of 1000 m in the Caspian Sea during this time period. In the Black Sea, however, such a large sea-level fall cannot be fully discarded.
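A schematic box model of the kind of calculation described, with invented numbers throughout (initial area, inflow, evaporation, and the linear hypsometry are placeholders, not the Popov et al. bathymetry): the level falls until the shrinking surface brings evaporation into balance with inflow.

```python
A0 = 4.4e11    # initial surface area, m^2 (order of the Black Sea)
e = 1.0        # evaporation over the water surface, m/yr
R_in = 3.0e11  # river inflow plus precipitation onto the basin, m^3/yr (< e*A0)
dAdh = 2.5e8   # assumed linear hypsometry: area lost per metre of level drop, m^2/m

h, t, dt = 0.0, 0.0, 1.0        # level drop (m), elapsed time (yr), step (yr)
while t < 20_000:
    A = A0 - dAdh * h           # current surface area
    rate = e - R_in / A         # net level fall (m/yr); zero once e*A == R_in
    if rate < 1e-3:
        break                   # effectively at equilibrium
    h += rate * dt
    t += dt

print(f"level drop ~{h:.0f} m, reached after ~{t:.0f} yr")
```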
NASA Astrophysics Data System (ADS)
Gou, Y.
2017-12-01
Quantitative Precipitation Estimation (QPE) is one of the important applications of weather radars. However, in complex terrain such as the Tibetan Plateau, it is a challenging task to obtain an optimal Z-R relation due to the complex space-time variability of precipitation microphysics. This paper develops two radar QPE schemes, based respectively on Reflectivity Threshold (RT) and Storm Cell Identification and Tracking (SCIT) algorithms, using observations from 11 Doppler weather radars and 3294 rain gauges over the Eastern Tibetan Plateau (ETP). These two QPE methodologies are evaluated extensively using four precipitation events that are characterized by different meteorological features. Precipitation characteristics of independent storm cells associated with these four events, as well as the storm-scale differences, are investigated using short-term vertical profiles of reflectivity clusters. Evaluation results show that the SCIT-based rainfall approach performs better than the simple RT-based method in all precipitation events in terms of score comparison using validation gauge measurements as references, with higher correlation in 75.74%, lower mean absolute error in 82.38%, and lower root-mean-square error in 89.04% of all the comparative frames. It is also found that the SCIT-based approach can effectively mitigate local radar QPE error and represent precipitation spatiotemporal variability better than the RT-based scheme.
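A minimal sketch of the basic radar-QPE step both schemes build on: inverting a Z-R power law Z = aR^b to map reflectivity (dBZ) to rain rate (mm/h). The Marshall-Palmer coefficients below are a common default, not the paper's; the paper's point is precisely that one fixed (a, b) pair is inadequate over complex terrain, which motivates choosing coefficients per storm cell.

```python
import numpy as np

def rain_rate(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b; a, b are the Marshall-Palmer defaults."""
    z_linear = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)  # dBZ -> mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)

for dbz in (20, 35, 50):
    print(dbz, "dBZ ->", round(float(rain_rate(dbz)), 2), "mm/h")
```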
Macarthur, Roy; Feinberg, Max; Bertheau, Yves
2010-01-01
A method is presented for estimating the size of uncertainty associated with the measurement of products derived from genetically modified organisms (GMOs). The method is based on the uncertainty profile, which is an extension, for the estimation of uncertainty, of a recent graphical statistical tool called an accuracy profile that was developed for the validation of quantitative analytical methods. The application of uncertainty profiles as an aid to decision making and assessment of fitness for purpose is also presented. Results of the measurement of the quantity of GMOs in flour by PCR-based methods collected through a number of interlaboratory studies followed the log-normal distribution. Uncertainty profiles built using the results generally give an expected range for measurement results of 50-200% of reference concentrations for materials that contain at least 1% GMO. This range is consistent with European Network of GM Laboratories and the European Union (EU) Community Reference Laboratory validation criteria and can be used as a fitness for purpose criterion for measurement methods. The effect on the enforcement of EU labeling regulations is that, in general, an individual analytical result needs to be < 0.45% to demonstrate compliance, and > 1.8% to demonstrate noncompliance with a labeling threshold of 0.9%.
Properties of O dwarf stars in 30 Doradus
NASA Astrophysics Data System (ADS)
Sabín-Sanjulián, Carolina; VFTS Collaboration
2017-11-01
We perform a quantitative spectroscopic analysis of 105 presumably single O dwarf stars in 30 Doradus, located within the Large Magellanic Cloud. We use mid-to-high resolution multi-epoch optical spectroscopic data obtained within the VLT-FLAMES Tarantula Survey. Stellar and wind parameters are derived by means of the automatic tool iacob-gbat, which is based on a large grid of fastwind models. We also benefit from the Bayesian tool bonnsai to estimate evolutionary masses. We provide a spectral calibration for the effective temperature of O dwarf stars in the LMC, deal with the mass discrepancy problem and investigate the wind properties of the sample.
Gunning, Yvonne; Watson, Andrew D.; Rigby, Neil M.; Philo, Mark; Peazer, Joshua K.; Kemsley, E. Kate
2016-01-01
We describe a simple protocol for identifying and quantifying the two components in binary mixtures of species possessing one or more similar proteins. Central to the method is the identification of 'corresponding proteins' in the species of interest, in other words proteins that are nominally the same but possess species-specific sequence differences. When subject to proteolysis, corresponding proteins will give rise to some peptides which are likewise similar but with species-specific variants. These are 'corresponding peptides'. Species-specific peptides can be used as markers for species determination, while pairs of corresponding peptides permit relative quantitation of two species in a mixture. The peptides are detected using multiple reaction monitoring (MRM) mass spectrometry, a highly specific technique that enables peptide-based species determination even in complex systems. In addition, the ratio of MRM peak areas deriving from corresponding peptides supports relative quantitation. Since corresponding proteins and peptides will, in the main, behave similarly in both processing and in experimental extraction and sample preparation, the relative quantitation should remain comparatively robust. In addition, this approach does not need the standards and calibrations required by absolute quantitation methods. The protocol is described in the context of red meats, which have convenient corresponding proteins in the form of their respective myoglobins. This application is relevant to food fraud detection: the method can detect 1% weight for weight of horse meat in beef. The corresponding protein, corresponding peptide (CPCP) relative quantitation using MRM peak area ratios gives good estimates of the weight for weight composition of a horse plus beef mixture. PMID:27685654
Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.
Obuchowski, Nancy A; Bullen, Jennifer
2017-01-01
Introduction Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
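A minimal sketch of the interval constructions the simulation evaluates, assuming technical-performance estimates (within-subject precision wSD and fixed bias) are already available from prior studies; the no-bias variant is the one the study finds adequate when fixed bias stays below roughly 12%.

```python
import math

def qib_ci(y, wsd, bias=0.0, z=1.96):
    """95% CI for a new patient's true biomarker value given measurement y."""
    centre = y - bias                # subtract fixed bias if it was estimated
    return centre - z * wsd, centre + z * wsd

def change_ci(y1, y2, wsd, z=1.96):
    """95% CI for true change over time (assumes linearity, slope close to 1)."""
    half = z * wsd * math.sqrt(2.0)  # two independent measurement errors
    return (y2 - y1) - half, (y2 - y1) + half

print(qib_ci(42.0, wsd=1.5))         # e.g. a tumour-volume measurement
print(change_ci(42.0, 38.5, wsd=1.5))
```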
Lin, Huan-Ting; Okumura, Takashi; Yatsuda, Yukinori; Ito, Satoru; Nakauchi, Hiromitsu; Otsu, Makoto
2016-10-01
Stable gene transfer into target cell populations via integrating viral vectors is widely used in stem cell gene therapy (SCGT). Accurate vector copy number (VCN) estimation has become increasingly important. However, existing methods of estimation such as real-time quantitative PCR are more restricted in practicality, especially during clinical trials, given the limited availability of sample materials from patients. This study demonstrates the application of an emerging technology called droplet digital PCR (ddPCR) in estimating VCN states in the context of SCGT. Induced pluripotent stem cells (iPSCs) derived from a patient with X-linked chronic granulomatous disease were used as clonable target cells for transduction with alpharetroviral vectors harboring codon-optimized CYBB cDNA. Precise primer-probe design followed by multiplex analysis conferred assay specificity. Accurate estimation of per-cell VCN values was possible without reliance on a reference standard curve. Sensitivity was high and the dynamic range of detection was wide. Assay reliability was validated by observation of consistent, reproducible, and distinct VCN clustering patterns for clones of transduced iPSCs with varying numbers of transgene copies. Taken together, use of ddPCR appears to offer a practical and robust approach to VCN estimation with a wide range of clinical and research applications.
Lee, Hyunyeol; Jeong, Woo Chul; Kim, Hyung Joong; Woo, Eung Je; Park, Jaeseok
2016-05-01
To develop a novel, current-controlled alternating steady-state free precession (SSFP)-based conductivity imaging method and corresponding MR signal models to estimate current-induced magnetic flux density (Bz) and conductivity distribution. In the proposed method, an SSFP pulse sequence, which is in sync with alternating current pulses, produces dual oscillating steady states while yielding a nonlinear relation between signal phase and Bz. A ratiometric signal model between the states was analytically derived using the Bloch equation, wherein Bz was estimated by solving a nonlinear inverse problem for conductivity estimation. A theoretical analysis of the signal-to-noise ratio of Bz was given. Numerical and experimental studies were performed using SSFP-FID and SSFP-ECHO with current pulses positioned either before or after signal encoding to investigate the feasibility of the proposed method in conductivity estimation. Given all SSFP variants herein, SSFP-FID with alternating current pulses applied before signal encoding exhibits the highest Bz signal-to-noise ratio and conductivity contrast. Additionally, compared with conventional conductivity imaging, the proposed method benefits from rapid SSFP acquisition without apparent loss of conductivity contrast. We successfully demonstrated the feasibility of the proposed method in estimating current-induced Bz and conductivity distribution. It can be a promising, rapid imaging strategy for quantitative conductivity imaging. © 2015 Wiley Periodicals, Inc.
A New Method for Assessing How Sensitivity and Specificity of Linkage Studies Affects Estimation
Moore, Cecilia L.; Amin, Janaki; Gidding, Heather F.; Law, Matthew G.
2014-01-01
Background While the importance of record linkage is widely recognised, few studies have attempted to quantify how linkage errors may have impacted on their own findings and outcomes. Even where authors of linkage studies have attempted to estimate sensitivity and specificity based on subjects with known status, the effects of false negatives and positives on event rates and estimates of effect are not often described. Methods We present quantification of the effect of sensitivity and specificity of the linkage process on event rates and incidence, as well as the resultant effect on relative risks. Formulae to estimate the true number of events and estimated relative risk adjusted for given linkage sensitivity and specificity are then derived and applied to data from a prisoner mortality study. The implications of false positive and false negative matches are also discussed. Discussion Comparisons of the effect of sensitivity and specificity on incidence and relative risks indicate that it is more important for linkages to be highly specific than sensitive, particularly if true incidence rates are low. We would recommend that, where possible, some quantitative estimates of the sensitivity and specificity of the linkage process be performed, allowing the effect of these quantities on observed results to be assessed. PMID:25068293
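A minimal sketch of the misclassification algebra the paper derives: with linkage sensitivity se and specificity sp, the observed linked-event count mixes true links with false positives, so the true count can be recovered algebraically (valid when se + sp > 1). The numbers are illustrative.

```python
def true_events(observed, n, se, sp):
    """Solve observed = se*true + (1 - sp)*(n - true) for the true event count."""
    return (observed - (1.0 - sp) * n) / (se + sp - 1.0)

n = 10_000   # cohort size
obs = 480    # events found by the linkage
print(true_events(obs, n, se=0.95, sp=0.999))  # ~495: specificity matters most
```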
Quantitative photoplethysmography: Lambert-Beer law or inverse function incorporating light scatter.
Cejnar, M; Kobler, H; Hunyor, S N
1993-03-01
Finger blood volume is commonly determined from measurement of infra-red (IR) light transmittance using the Lambert-Beer law of light absorption derived for use in non-scattering media, even when such transmission involves light scatter around the phalangeal bone. Simultaneous IR transmittance and finger volume were measured over the full dynamic range of vascular volumes in seven subjects and outcomes compared with data fitted according to the Lambert-Beer exponential function and an inverse function derived for light attenuation by scattering materials. Curves were fitted by the least-squares method and goodness of fit was compared using standard errors of estimate (SEE). The inverse function gave a better data fit in six of the subjects: mean SEE 1.9 (SD 0.7, range 0.7-2.8) and 4.6 (2.2, 2.0-8.0) respectively (p < 0.02, paired t-test). Thus, when relating IR transmittance to blood volume, as occurs in the finger during measurements of arterial compliance, an inverse function derived from a model of light attenuation by scattering media gives more accurate results than the traditional exponential fit.
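A minimal sketch of the model comparison described, on synthetic data rather than the study's measurements: fit both a Lambert-Beer exponential and an inverse (scatter) function to transmittance-volume data and compare standard errors of estimate (SEE). The exact inverse functional form is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def lambert_beer(v, a, b):
    return a * np.exp(-b * v)        # absorption only, no scatter

def inverse_model(v, a, b):
    return a / (1.0 + b * v)         # attenuation including light scatter

v = np.linspace(0.0, 1.0, 30)        # relative blood volume
t_obs = 1.0 / (1.0 + 2.0 * v) + np.random.default_rng(1).normal(0.0, 0.01, v.size)

for model in (lambert_beer, inverse_model):
    p, _ = curve_fit(model, v, t_obs, p0=(1.0, 1.0))
    see = np.sqrt(np.sum((t_obs - model(v, *p)) ** 2) / (v.size - 2))
    print(f"{model.__name__}: SEE = {see:.4f}")  # smaller SEE = better fit
```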
Saucier, Cedric; Jourdes, Michael; Glories, Yves; Quideau, Stephane
2006-09-20
An extraction procedure and an analytical method have been developed to detect and quantify for the first time a series of ellagitannin derivatives formed in wine during aging in oak barrels. The method involves a preliminary purification step on XAD7 HP resin followed by a second purification step on TSK 40 HW gel. The resulting extract is analyzed for compound identification and quantitative determination by high-performance liquid chromatography-electrospray ionization-mass spectrometry in single ion recording mode. Reference compounds, which are accessible through hemisynthesis from the oak C-glycosidic ellagitannin vescalagin, were used to build calibration curves, and chlorogenic acid was selected as an internal standard. This method enabled us to estimate the content of four flavano-ellagitannins and that of another newly identified wine polyphenol, beta-1-O-ethylvescalagin, in a Bordeaux red wine aged for 18 months in oak barrels. All five ellagitannin derivatives are derived from the nucleophilic substitution reaction of vescalagin with the grape flavan-3-ols catechin and epicatechin or ethanol.
NASA Technical Reports Server (NTRS)
Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.
2011-01-01
Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving space-borne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at ground level. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth-orbiting satellites, such as microwave imagers and dual-wavelength radars like those of the Global Precipitation Measurement (GPM) mission.
Bromaghin, Jeffrey F.; Budge, Suzanne M.; Thiemann, Gregory W.; Rode, Karyn D.
2017-01-01
Knowledge of animal diets provides essential insights into their life history and ecology, although diet estimation is challenging and remains an active area of research. Quantitative fatty acid signature analysis (QFASA) has become a popular method of estimating diet composition, especially for marine species. A primary assumption of QFASA is that constants called calibration coefficients, which account for the differential metabolism of individual fatty acids, are known. In practice, however, calibration coefficients are not known, but rather have been estimated in feeding trials with captive animals of a limited number of model species. The impossibility of verifying the accuracy of feeding trial derived calibration coefficients to estimate the diets of wild animals is a foundational problem with QFASA that has generated considerable criticism. We present a new model that allows simultaneous estimation of diet composition and calibration coefficients based only on fatty acid signature samples from wild predators and potential prey. Our model performed almost flawlessly in four tests with constructed examples, estimating both diet proportions and calibration coefficients with essentially no error. We also applied the model to data from Chukchi Sea polar bears, obtaining diet estimates that were more diverse than estimates conditioned on feeding trial calibration coefficients. Our model avoids bias in diet estimates caused by conditioning on inaccurate calibration coefficients, invalidates the primary criticism of QFASA, eliminates the need to conduct feeding trials solely for diet estimation, and consequently expands the utility of fatty acid data to investigate aspects of ecology linked to animal diets.
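A minimal sketch of the core QFASA step, in one common formulation: with calibration coefficients taken as known, diet proportions are estimated by minimizing the distance between the calibration-adjusted predator signature and a convex mixture of prey signatures. Taking the coefficients as known is exactly the assumption the paper's new model removes by estimating them jointly; all signatures below are invented.

```python
import numpy as np
from scipy.optimize import minimize

prey = np.array([[0.50, 0.30, 0.20],   # rows: prey species; cols: fatty acids
                 [0.10, 0.60, 0.30],
                 [0.25, 0.25, 0.50]])
cal = np.array([1.0, 0.9, 1.1])        # calibration coefficients, assumed known
predator = np.array([0.30, 0.40, 0.30])

def loss(x):
    d = np.abs(x) / np.abs(x).sum()    # map free parameters onto the diet simplex
    adj = predator / cal               # undo differential fatty-acid metabolism
    adj = adj / adj.sum()
    return np.sum((adj - prey.T @ d) ** 2)   # least-squares distance to the mixture

res = minimize(loss, x0=np.full(3, 1.0 / 3.0), method="Nelder-Mead")
diet = np.abs(res.x) / np.abs(res.x).sum()
print("estimated diet proportions:", diet.round(3))
```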
Abortion and mental health: quantitative synthesis and analysis of research published 1995-2009.
Coleman, Priscilla K
2011-09-01
Given the methodological limitations of recently published qualitative reviews of abortion and mental health, a quantitative synthesis was deemed necessary to represent more accurately the published literature and to provide clarity to clinicians. To measure the association between abortion and indicators of adverse mental health, with subgroup effects calculated based on comparison groups (no abortion, unintended pregnancy delivered, pregnancy delivered) and particular outcomes. A secondary objective was to calculate population-attributable risk (PAR) statistics for each outcome. After the application of methodologically based selection criteria and extraction rules to minimise bias, the sample comprised 22 studies, 36 measures of effect and 877 181 participants (163 831 experienced an abortion). Random effects pooled odds ratios were computed using adjusted odds ratios from the original studies and PAR statistics were derived from the pooled odds ratios. Women who had undergone an abortion experienced an 81% increased risk of mental health problems, and nearly 10% of the incidence of mental health problems was shown to be attributable to abortion. The strongest subgroup estimates of increased risk occurred when abortion was compared with term pregnancy and when the outcomes pertained to substance use and suicidal behaviour. This review offers the largest quantitative estimate of mental health risks associated with abortion available in the world literature. Calling into question the conclusions from traditional reviews, the results revealed a moderate to highly increased risk of mental health problems after abortion. Consistent with the tenets of evidence-based medicine, this information should inform the delivery of abortion services.
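A minimal sketch of the attributable-risk arithmetic, treating the pooled odds ratio as an approximate relative risk and applying Levin's formula with the exposure prevalence implied by the pooled sample; this is a reconstruction for illustration, not the review's exact computation.

```python
def levin_par(prevalence, rr):
    """Population-attributable risk: fraction of incidence due to the exposure."""
    return prevalence * (rr - 1.0) / (prevalence * (rr - 1.0) + 1.0)

p_exposed = 163_831 / 877_181   # abortion prevalence in the pooled sample
pooled_or = 1.81                # the reported 81% increased risk
print(round(levin_par(p_exposed, pooled_or), 3))  # ~0.13, same order as reported
```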
NASA Astrophysics Data System (ADS)
Teuho, J.; Johansson, J.; Linden, J.; Saunavaara, V.; Tolvanen, T.; Teräs, M.
2014-01-01
Selection of reconstruction parameters has an effect on the image quantification in PET, with an additional contribution from a scanner-specific attenuation correction method. For achieving comparable results in inter- and intra-center comparisons, any existing quantitative differences should be identified and compensated for. In this study, a comparison between PET, PET/CT and PET/MR is performed by using an anatomical brain phantom, to identify and measure the amount of bias caused due to differences in reconstruction and attenuation correction methods especially in PET/MR. Differences were estimated by using visual, qualitative and quantitative analysis. The qualitative analysis consisted of a line profile analysis for measuring the reproduction of anatomical structures and the contribution of the amount of iterations to image contrast. The quantitative analysis consisted of measurement and comparison of 10 anatomical VOIs, where the HRRT was considered as the reference. All scanners reproduced the main anatomical structures of the phantom adequately, although the image contrast on the PET/MR was inferior when using a default clinical brain protocol. Image contrast was improved by increasing the amount of iterations from 2 to 5 while using 33 subsets. Furthermore, a PET/MR-specific bias was detected, which resulted in underestimation of the activity values in anatomical structures closest to the skull, due to the MR-derived attenuation map that ignores the bone. Thus, further improvements for the PET/MR reconstruction and attenuation correction could be achieved by optimization of RAMLA-specific reconstruction parameters and implementation of bone to the attenuation template.
Approaches to Observe Anthropogenic Aerosol-Cloud Interactions.
Quaas, Johannes
Anthropogenic aerosol particles exert an effective radiative forcing, quantitatively very uncertain, due to aerosol-cloud interactions: via an immediate altering of cloud albedo on the one hand, and via rapid adjustments through alteration of cloud processes and changes in thermodynamic profiles on the other. Large variability in cloud cover and properties, and the consequently low signal-to-noise ratio for aerosol-induced perturbations, hampers the identification of effects in observations. Six approaches are discussed as means to isolate the impact of anthropogenic aerosol on clouds from natural cloud variability in order to estimate or constrain the effective forcing: (i) intentional cloud modification, (ii) ship tracks, (iii) differences between the hemispheres, (iv) trace gases, (v) weekly cycles and (vi) trends. Ship-track analysis is recommended for detailed process understanding, and the analysis of weekly cycles and long-term trends is most promising for deriving estimates of, or constraints on, the effective radiative forcing.
Non-equilibrium thermionic electron emission for metals at high temperatures
NASA Astrophysics Data System (ADS)
Domenech-Garret, J. L.; Tierno, S. P.; Conde, L.
2015-08-01
Stationary thermionic electron emission currents from heated metals are compared against an analytical expression derived using a non-equilibrium quantum kappa energy distribution for the electrons. The latter depends on the parameter κ(T), which decreases with increasing temperature, can be estimated from raw experimental data, and characterizes the departure of the electron energy spectrum from equilibrium Fermi-Dirac statistics. The calculations accurately predict the measured thermionic emission currents for both high and moderate temperature ranges. The Richardson-Dushman law governs electron emission for large values of kappa or, equivalently, moderate metal temperatures. The high-energy tail in the electron energy distribution function that develops at higher temperatures, or lower kappa values, increases the emission currents well over the predictions of the classical expression. This also permits quantitative estimation of the departure of the metal electrons from equilibrium Fermi-Dirac statistics.
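A minimal sketch of the classical limit the abstract references: the Richardson-Dushman current density J = A_G T^2 exp(-W / k_B T), which the kappa-distribution expression approaches at large kappa. The tungsten-like work function is an assumed example value.

```python
import math

A_G = 1.20173e6    # universal Richardson constant, A m^-2 K^-2
K_B = 8.617333e-5  # Boltzmann constant, eV K^-1

def richardson_dushman(T, W=4.5):
    """Thermionic emission current density (A/m^2); W is the work function in eV."""
    return A_G * T * T * math.exp(-W / (K_B * T))

for T in (1500.0, 2000.0, 2500.0):
    print(f"{T:.0f} K -> {richardson_dushman(T):.3e} A/m^2")
```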
Devolatilization Analysis in a Twin Screw Extruder by using the Flow Analysis Network (FAN) Method
NASA Astrophysics Data System (ADS)
Tomiyama, Hideki; Takamoto, Seiji; Shintani, Hiroaki; Inoue, Shigeki
We derived theoretical formulas for three mechanisms of devolatilization in a twin screw extruder: flash, surface refreshment and forced expansion. The method for flash devolatilization is based on the equation of equilibrium concentration, which shows that volatiles break off from the polymer when they are relieved from a high-pressure condition. For surface-refreshment devolatilization, we applied Latinen's model to allow estimation of polymer behavior in the unfilled screw conveying condition. Forced-expansion devolatilization is based on the expansion theory, in which foams are generated under reduced pressure and volatiles are diffused on the exposed surface layer after mixing with the injected devolatilization agent. Based on these models, we developed twin-screw extrusion simulation software using the FAN method, which allows us to quantitatively estimate volatile concentration and polymer temperature with high accuracy in the actual multi-vent extrusion process for LDPE + n-hexane.
Quantitative Tomography for Continuous Variable Quantum Systems
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.
2018-03-01
We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.
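A minimal sketch generating one of the four standard families of Padua points, the sampling grid on which the scheme measures the Q or Wigner function; a degree-n family has (n+1)(n+2)/2 points, far fewer than an equidistant grid of comparable resolution.

```python
import numpy as np

def padua_points(n):
    """One family of degree-n Padua points on [-1, 1]^2."""
    return np.array([(np.cos(j * np.pi / n), np.cos(k * np.pi / (n + 1)))
                     for j in range(n + 1)
                     for k in range(n + 2)
                     if (j + k) % 2 == 1])

pts = padua_points(10)
print(len(pts), "points; expected", (10 + 1) * (10 + 2) // 2)  # 66 and 66
```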
NASA Technical Reports Server (NTRS)
Barker, R. E., Jr.; Campbell, K. W.
1985-01-01
The applicability of classical nucleation theory to second (and higher) order thermodynamic transitions in the Ehrenfest sense has been investigated, and expressions have been derived upon which the qualitative and quantitative success of the basic approach must ultimately depend. The expressions describe the effect of temperature undercooling, hydrostatic pressure, and tensile stress upon the critical parameters (the critical nucleus size and the critical free energy barrier) for nucleation in a thermodynamic transition of any general order. These expressions are then specialized for the case of first and second order transitions. The expressions for the case of undercooling are then used in conjunction with literature data to estimate values for the critical quantities in a system undergoing a pseudo-second order transition (the glass transition in polystyrene). Methods of estimating the interfacial energy gamma in systems undergoing first and second order transitions are also discussed.
Monitoring Crop Yield in USA Using a Satellite-Based Climate-Variability Impact Index
NASA Technical Reports Server (NTRS)
Zhang, Ping; Anderson, Bruce; Tan, Bin; Barlow, Mathew; Myneni, Ranga
2011-01-01
A quantitative index is applied to monitor crop growth and predict agricultural yield in the continental USA. The Climate-Variability Impact Index (CVII), defined as the monthly contribution to overall anomalies in growth during a given year, is derived from 1-km MODIS Leaf Area Index data. The growing-season integrated CVII can provide an estimate of the fractional change in overall growth during a given year. In turn, these estimates can provide fine-scale and aggregated information on yield for various crops. Trained on historical records of crop production, a statistical model is used to produce crop yield estimates during the growing season based upon the strong positive relationship between crop yield and the CVII. By examining the model prediction as a function of time, it is possible to determine when the in-season predictive capability plateaus and which months provide the greatest predictive capacity.
Use of statistical and neural net approaches in predicting toxicity of chemicals.
Basak, S C; Grunwald, G D; Gute, B D; Balasubramanian, K; Opitz, D
2000-01-01
Hierarchical quantitative structure-activity relationships (H-QSAR) have been developed as a new approach in constructing models for estimating physicochemical, biomedicinal, and toxicological properties of interest. This approach uses increasingly more complex molecular descriptors in a graduated approach to model building. In this study, statistical and neural network methods have been applied to the development of H-QSAR models for estimating the acute aquatic toxicity (LC50) of 69 benzene derivatives to Pimephales promelas (fathead minnow). Topostructural, topochemical, geometrical, and quantum chemical indices were used as the four levels of the hierarchical method. It is clear from both the statistical and neural network models that topostructural indices alone cannot adequately model this set of congeneric chemicals. Not surprisingly, topochemical indices greatly increase the predictive power of both statistical and neural network models. Quantum chemical indices also add significantly to the modeling of this set of acute aquatic toxicity data.
NASA Astrophysics Data System (ADS)
Hopkins, J.; Balch, W. M.; Henson, S.; Poulton, A. J.; Drapeau, D.; Bowler, B.; Lubelczyk, L.
2016-02-01
Coccolithophores, the single celled phytoplankton that produce an outer covering of calcium carbonate coccoliths, are considered to be the greatest contributors to the global oceanic particulate inorganic carbon (PIC) pool. The reflective coccoliths scatter light back out from the ocean surface, enabling PIC concentration to be quantitatively estimated from ocean color satellites. Here we use datasets of AQUA MODIS PIC concentration from 2003-2014 (using the recently-revised PIC algorithm), as well as statistics on coccolithophore vertical distribution derived from cruises throughout the world ocean, to estimate the average global (surface and integrated) PIC standing stock and its associated inter-annual variability. In addition, we divide the global ocean into Longhurst biogeochemical provinces, update the PIC biomass statistics and identify those regions that have the greatest inter-annual variability and thus may exert the greatest influence on global PIC standing stock and the alkalinity pump.
Spectral F-test power evaluation in the EEG during intermittent photic stimulation.
de Sá, Antonio Mauricio F L Miranda; Cagy, Mauricio; Lazarev, Vladimir V; Infantosi, Antonio Fernando C
2006-06-01
Intermittent photic stimulation (IPS) is an important functional test that can induce photic driving in the electroencephalogram (EEG). It is capable of enhancing manifestations of latent oscillations not present in the resting EEG. However, for adequate quantitative evaluation of the photic driving, these changes should be assessed on a statistical basis. With this aim, the sampling distribution of the spectral F-test (SFT) was investigated. On this basis, confidence limits of the SFT estimate could be obtained for different practical situations, in which the signal-to-noise ratio and the number of epochs used in the estimation may vary. The technique was applied to the EEG of 10 normal subjects during IPS and allowed detection of responses not only at the fundamental IPS frequency but also at higher harmonics. It also permitted assessment of the strength of the photic driving responses and comparison of them across different derivations and subjects.
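A minimal sketch, on a synthetic record, of one common form of the spectral F-test: power at the stimulation frequency is compared with the mean power of L neighbouring bins, and under the no-response null the ratio follows an F(2, 2L) distribution. All simulation parameters are illustrative.

```python
import numpy as np
from scipy.stats import f as f_dist

fs, n, f_stim, L = 256, 2048, 8.0, 12   # sampling rate, samples, IPS freq, neighbours
t = np.arange(n) / fs
rng = np.random.default_rng(3)
eeg = rng.normal(0.0, 1.0, n) + 0.2 * np.sin(2 * np.pi * f_stim * t)  # weak driving

spec = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(n, 1.0 / fs)
k = int(np.argmin(np.abs(freqs - f_stim)))               # bin of the IPS frequency

neigh = np.r_[spec[k - L // 2:k], spec[k + 1:k + L // 2 + 1]]  # L surrounding bins
sft = spec[k] / neigh.mean()                             # ~ F(2, 2L) under the null
print(f"SFT = {sft:.1f}, p = {f_dist.sf(sft, 2, 2 * L):.4f}")
```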
TOXNET: Toxicology Data Network
Quantitative estimate of commercial fish enhancement by seagrass habitat in southern Australia
NASA Astrophysics Data System (ADS)
Blandon, Abigayil; zu Ermgassen, Philine S. E.
2014-03-01
Seagrass provides many ecosystem services that are of considerable value to humans, including the provision of nursery habitat for commercial fish stock. Yet few studies have sought to quantify these benefits. As seagrass habitat continues to suffer a high rate of loss globally and with the growing emphasis on compensatory restoration, valuation of the ecosystem services associated with seagrass habitat is increasingly important. We undertook a meta-analysis of juvenile fish abundance at seagrass and control sites to derive a quantitative estimate of the enhancement of juvenile fish by seagrass habitats in southern Australia. Thirteen fish of commercial importance were identified as being recruitment enhanced in seagrass habitat, twelve of which were associated with sufficient life history data to allow for estimation of total biomass enhancement. We applied von Bertalanffy growth models and species-specific mortality rates to the determined values of juvenile enhancement to estimate the contribution of seagrass to commercial fish biomass. The identified species were enhanced in seagrass by 0.98 kg m-2 y-1, equivalent to ~$A230,000 ha-1 y-1. These values represent the stock enhancement where all fish species are present, as opposed to realized catches. Having accounted for the time lag between fish recruiting to a seagrass site and entering the fishery and for a 3% annual discount rate, we find that seagrass restoration efforts costing $A10,000 ha-1 have a potential payback time of less than five years, and that restoration costing $A629,000 ha-1 can be justified on the basis of enhanced commercial fish recruitment where these twelve fish species are present.
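A schematic sketch, with invented placeholder parameters rather than the paper's species data, of the valuation chain applied: von Bertalanffy growth converted to weight via a length-weight relation, and a discounted payback calculation for a restoration cost given an annual enhancement value.

```python
import math

def weight_at_age(t, L_inf=60.0, k=0.3, t0=-0.5, lw_a=1e-5, lw_b=3.0):
    """von Bertalanffy length (cm) converted to weight (kg) via W = a * L**b."""
    L = L_inf * (1.0 - math.exp(-k * (t - t0)))
    return lw_a * L ** lw_b

def payback_years(cost, annual_value, lag=2, r=0.03):
    """First year in which discounted cumulative value covers the cost."""
    total = 0.0
    for t in range(lag, 200):          # lag: years before fish enter the fishery
        total += annual_value / (1.0 + r) ** t
        if total >= cost:
            return t
    return None

print(round(weight_at_age(3.0), 2), "kg at age 3")   # ~0.59 kg with placeholders
print(payback_years(10_000, 2_300), "years")         # 6 with these placeholders
```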
Quantitative health impact of indoor radon in France.
Ajrouche, Roula; Roudier, Candice; Cléro, Enora; Ielsch, Géraldine; Gay, Didier; Guillevic, Jérôme; Marant Micallef, Claire; Vacquier, Blandine; Le Tertre, Alain; Laurier, Dominique
2018-05-08
Radon is the second leading cause of lung cancer after smoking. Since the previous quantitative risk assessment of indoor radon conducted in France, input data such as estimates of indoor radon concentrations, lung cancer rates and the prevalence of tobacco consumption have changed. The aim of this work was to update the risk assessment of lung cancer mortality attributable to indoor radon in France using recent risk models and data, improving the consideration of smoking, and providing results at a fine geographical scale. The data used were population data (2012), vital statistics on death from lung cancer (2008-2012), domestic radon exposure from a recent database that combines measurement results of indoor radon concentration and the geogenic radon potential map for France (2015), and smoking prevalence (2010). The risk model used was derived from a European epidemiological study, considering that lung cancer risk increases by 16% per 100 becquerels per cubic meter (Bq/m3) of indoor radon concentration. The estimated number of lung cancer deaths attributable to indoor radon exposure is about 3000 (1000; 5000), which corresponds to about 10% of all lung cancer deaths each year in France. About 33% of lung cancer deaths attributable to radon are due to exposure levels above 100 Bq/m3. Considering the combined effect of tobacco and radon, the study shows that 75% of estimated radon-attributable lung cancer deaths occur among current smokers, 20% among ex-smokers and 5% among never-smokers. It is concluded that the results of this study, which are based on precise estimates of indoor radon concentrations at the finest geographical scale, can serve as a basis for defining French policy against radon risk.
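A minimal sketch of the attributable-fraction arithmetic underlying the assessment: a relative risk of 1 + 0.16 per 100 Bq/m3, from the European pooled study, applied to a distribution of long-term indoor exposures. The log-normal exposure distribution below is invented for illustration, not the French exposure database.

```python
import numpy as np

def lung_cancer_paf(concentrations, err_per_100bq=0.16):
    """Population attributable fraction from per-home radon concentrations."""
    rr = 1.0 + err_per_100bq * np.asarray(concentrations) / 100.0
    return np.mean(rr - 1.0) / np.mean(rr)

homes = np.random.default_rng(2).lognormal(np.log(60.0), 0.8, 10_000)  # Bq/m3
print(f"PAF ~ {lung_cancer_paf(homes):.1%}")   # same order as the ~10% reported
```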
Boe, Shaun G; Rice, Charles L; Doherty, Timothy J
2008-04-01
To assess the utility of the surface electromyographic signal as a means of estimating the level of muscle force during quantitative electromyography studies by examining the relationship between muscle force and the amplitude of the surface electromyographic activity signal; and to determine the impact of a reduction in the number of motor units on this relationship, through inclusion of a sample of patients with neuromuscular disease. Cross-sectional, cohort study design. Tertiary care, ambulatory, electromyography laboratory. A volunteer, convenience sample of healthy control subjects (n=10), patients with amyotrophic lateral sclerosis (n=9), and patients with Charcot-Marie-Tooth disease type X (n=5). Not applicable. The first dorsal interosseous (FDI) and biceps brachii muscles were examined. Force values (at 10% increments) were calculated from two 4-second maximal voluntary contractions (MVCs). Surface electromyographic activity was recorded during separate 4-second voluntary contractions at 9 force increments (10%-90% of MVC). Additionally, a motor unit number estimate was derived for each subject to quantify the degree of motor unit loss in patients relative to control subjects. The relationships between force and surface electromyographic activity for both muscles (controls and patients) were best fit by a linear function. The variability about the grouped regression lines was quantified by 95% confidence intervals and found to be +/-6.7% (controls) and +/-8.5% (patients) for the FDI and +/-5% (controls) and +/-6.1% (patients) for the biceps brachii. These results suggest that the amplitude of the surface electromyographic activity signal may be used as a means of estimating the level of muscle force during quantitative electromyography studies. Future studies should be directed at examining if the variability associated with these force and surface electromyographic activity relationships is acceptable in replacing previous methods of measuring muscle force.
Holeski, Liza M; Monnahan, Patrick; Koseva, Boryana; McCool, Nick; Lindroth, Richard L; Kelly, John K
2014-03-13
Genotyping-by-sequencing methods have vastly improved the resolution and accuracy of genetic linkage maps by increasing both the number of marker loci as well as the number of individuals genotyped at these loci. Using restriction-associated DNA sequencing, we construct a dense linkage map for a panel of recombinant inbred lines derived from a cross between divergent ecotypes of Mimulus guttatus. We used this map to estimate recombination rate across the genome and to identify quantitative trait loci for the production of several secondary compounds (PPGs) of the phenylpropanoid pathway implicated in defense against herbivores. Levels of different PPGs are correlated across recombinant inbred lines suggesting joint regulation of the phenylpropanoid pathway. However, the three quantitative trait loci identified in this study each act on a distinct PPG. Finally, we map three putative genomic inversions differentiating the two parental populations, including a previously characterized inversion that contributes to life-history differences between the annual/perennial ecotypes. Copyright © 2014 Holeski et al.
A Unified Theory of Impact Crises and Mass Extinctions: Quantitative Tests
NASA Technical Reports Server (NTRS)
Rampino, Michael R.; Haggerty, Bruce M.; Pagano, Thomas C.
1997-01-01
Several quantitative tests of a general hypothesis linking impacts of large asteroids and comets with mass extinctions of life are possible based on astronomical data, impact dynamics, and geological information. The waiting times of large-body impacts on the Earth, derived from the flux of Earth-crossing asteroids and comets and the estimated size of impacts capable of causing large-scale environmental disasters, predict that impacts of objects greater than or equal to 5 km in diameter (greater than or equal to 10^7 Mt TNT equivalent) could be sufficient to explain the record of approximately 25 extinction pulses in the last 540 Myr, with the 5 recorded major mass extinctions related to impacts of the largest objects of greater than or equal to 10 km in diameter (greater than or equal to 10^8 Mt events). Smaller impacts (approximately 10^6 Mt), with significant regional environmental effects, could be responsible for the lesser boundaries in the geologic record.
Quantitative Characterization of Tissue Microstructure with Temporal Diffusion Spectroscopy
Xu, Junzhong; Does, Mark D.; Gore, John C.
2009-01-01
The signals recorded by diffusion-weighted magnetic resonance imaging (DWI) are dependent on the micro-structural properties of biological tissues, so it is possible to obtain quantitative structural information non-invasively from such measurements. Oscillating gradient spin echo (OGSE) methods have the ability to probe the behavior of water diffusion over different time scales and the potential to detect variations in intracellular structure. To assist in the interpretation of OGSE data, analytical expressions have been derived for diffusion-weighted signals with OGSE methods for restricted diffusion in some typical structures, including parallel planes, cylinders and spheres, using the theory of temporal diffusion spectroscopy. These analytical predictions have been confirmed with computer simulations. These expressions suggest how OGSE signals from biological tissues should be analyzed to characterize tissue microstructure, including how to estimate cell nuclear sizes. This approach provides a model to interpret diffusion data obtained from OGSE measurements that can be used for applications such as monitoring tumor response to treatment in vivo. PMID:19616979
Gilmore, Kevin J; Allen, Matti D; Doherty, Timothy J; Kimpinski, Kurt; Rice, Charles L
2017-09-01
We assessed motor unit (MU) properties and neuromuscular stability in the tibialis anterior (TA) of chronic inflammatory demyelinating polyneuropathy (CIDP) patients using decomposition-based quantitative electromyography. Dorsiflexion strength was assessed, and surface and concentric needle electromyography were sampled from the TA. Estimates of MU numbers were derived using decomposition-based quantitative electromyography and spike-triggered averaging. Neuromuscular transmission stability was assessed from concentric needle-detected MU potentials. CIDP patients had 43% lower compound muscle action potential amplitude than controls, and despite near-maximum voluntary activation, were 37% weaker. CIDP had 27% fewer functioning MUs in the TA, and had 90% and 44% higher jiggle and jitter values, respectively compared with controls. CIDP had lower strength and compound muscle action potential values, moderately fewer numbers of MUs, and significant neuromuscular instability compared with controls. Thus, in addition to muscle atrophy, voluntary weakness is also due to limitations of peripheral neural transmission consistent with demyelination. Muscle Nerve 56: 413-420, 2017. © 2016 Wiley Periodicals, Inc.
Bertacche, Vittorio; Pini, Elena; Stradi, Riccardo; Stratta, Fabio
2006-01-01
The purpose of this study is the development of a quantification method to detect the amount of amorphous cyclosporine using Fourier transform infrared (FTIR) spectroscopy. A set of standards was obtained by mixing different percentages of crystalline and amorphous cyclosporine, yielding samples characterized by known percentages of amorphous cyclosporine. Using the 450-4,000 cm(-1) spectral range, FTIR spectra were obtained from samples in potassium bromide pellets, and a partial least squares (PLS) model was then used to correlate the features of the FTIR spectra with the percentage of amorphous cyclosporine in the samples. This model gave a standard error of estimate (SEE) of 0.3562, with an r value of 0.9971, and a standard error of prediction (SEP) of 0.4168, the latter derived from the cross-validation function used to check the precision of the model. These statistical values demonstrate the applicability of the method to the quantitative determination of amorphous cyclosporine in crystalline cyclosporine samples.
Osago, Harumi; Shibata, Tomoko; Hara, Nobumasa; Kuwata, Suguru; Kono, Michihaya; Uchio, Yuji; Tsuchiya, Mikako
2014-12-15
We developed a method using liquid chromatography-electrospray ionization-tandem mass spectrometry (LC-ESI-MS/MS) with a selected reaction monitoring (SRM) mode for simultaneous quantitative analysis of glycosaminoglycans (GAGs). Using one-shot analysis with our MS/MS method, we demonstrated the simultaneous quantification of a total of 23 variously sulfated disaccharides of four GAG classes (8 chondroitin/dermatan sulfates, 1 hyaluronic acid, 12 heparan sulfates, and 2 keratan sulfates) with a sensitivity of less than 0.5 pmol within 20 min. We showed the differences in the composition of GAG classes and the sulfation patterns between porcine articular cartilage and yellow ligament. In addition to the internal disaccharides described above, some saccharides derived from the nonreducing terminal were detected simultaneously. The simultaneous quantification of both internal and nonreducing terminal saccharides could be useful to estimate the chain length of GAGs. This method would help to establish comprehensive "GAGomic" analysis of biological tissues. Copyright © 2014 Elsevier Inc. All rights reserved.
A novel 3D imaging system for strawberry phenotyping.
He, Joe Q; Harrison, Richard J; Li, Bo
2017-01-01
Accurate, quantitative phenotypic data are vital in plant breeding programmes for assessing the performance of genotypes and making selections. Traditional strawberry phenotyping relies on the human eye to assess most external fruit quality attributes, which is time-consuming and subjective. 3D imaging is a promising high-throughput technique that allows multiple external fruit quality attributes to be measured simultaneously. A low-cost multi-view stereo (MVS) imaging system was developed, which captured data from 360° around a target strawberry fruit. A 3D point cloud of the sample was derived and analysed with custom-developed software to estimate berry height, length, width, volume, calyx size, colour and achene number. Analysis of these traits in 100 fruits showed good concordance with manual assessment methods. This study demonstrates the feasibility of an MVS-based 3D imaging system for the rapid and quantitative phenotyping of seven agronomically important external strawberry traits. With further improvement, this method could be applied in strawberry breeding programmes as a cost-effective phenotyping technique.
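As one illustration of deriving a size trait from such a point cloud, a berry's volume can be approximated by its convex hull; a minimal sketch with SciPy, assuming points is an N x 3 array from the MVS reconstruction (the hull slightly overestimates volume near concave regions such as the calyx):

```python
import numpy as np
from scipy.spatial import ConvexHull

# points: N x 3 array of (x, y, z) coordinates from the MVS reconstruction (synthetic here)
rng = np.random.default_rng(1)
points = rng.normal(size=(2000, 3))  # stand-in for a strawberry point cloud

hull = ConvexHull(points)
volume = hull.volume                                # convex-hull volume, in the cloud's units cubed
height = points[:, 2].max() - points[:, 2].min()    # berry height along the vertical axis
print(volume, height)
```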
NASA Astrophysics Data System (ADS)
Lange, Rense
2015-02-01
An extension of concurrent validity is proposed that uses qualitative data for the purpose of validating quantitative measures. The approach relies on Latent Semantic Analysis (LSA), which places verbal (written) statements in a high-dimensional semantic space. Using data from a medical/psychiatric domain as a case study - Near-Death Experiences, or NDE - we established concurrent validity by connecting NDErs' qualitative (written) experiential accounts with their locations on a Rasch-scalable measure of NDE intensity. Concurrent validity received strong empirical support, since the variance in the Rasch measures could be predicted reliably from the coordinates of the accounts in the LSA-derived semantic space (R2 = 0.33). These coordinates also predicted NDErs' age with considerable precision (R2 = 0.25). Both estimates are probably artificially low due to the small available data sample (n = 588). It appears that Rasch scalability of NDE intensity is a prerequisite for these findings, as each intensity level is associated (at least probabilistically) with a well-defined pattern of item endorsements.
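A minimal sketch of this style of analysis, assuming a list of written accounts and a vector of Rasch-scaled intensity measures; the accounts, measures, and dimensionality below are toy stand-ins:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression

# written NDE accounts and Rasch-scaled intensity measures (illustrative only)
accounts = [
    "I saw a bright light at the end of a tunnel",
    "I floated above my body and watched the doctors",
    "I felt deep peace and met a deceased relative",
    "Everything went dark and then I woke up",
    "Time stopped and my whole life replayed before me",
    "I heard voices calling me back to my body",
]
rasch_measures = [1.8, 1.2, 2.1, -0.9, 1.5, 0.3]

# LSA: project the accounts into a low-dimensional semantic space
X = TfidfVectorizer().fit_transform(accounts)
coords = TruncatedSVD(n_components=3).fit_transform(X)   # use ~100-300 dims on real data

# concurrent validity: how well semantic coordinates predict the quantitative measure
r2 = LinearRegression().fit(coords, rasch_measures).score(coords, rasch_measures)
print(r2)   # in-sample R^2; cross-validate on real data
```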
Freitas, Mirlaine R; Matias, Stella V B G; Macedo, Renato L G; Freitas, Matheus P; Venturin, Nelson
2013-09-11
Two of the major weeds affecting cereal crops worldwide are Avena fatua L. (wild oat) and Lolium rigidum Gaud. (rigid ryegrass). Thus, the development of new herbicides against these weeds is required; in line with this, benzoxazinones, their degradation products, and analogues have been shown to be important allelochemicals and natural herbicides. Despite earlier structure-activity studies demonstrating that the hydrophobicity (log P) of aminophenoxazines correlates with phytotoxicity, our findings for a series of benzoxazinone derivatives show no relationship between phytotoxicity and log P, nor with two other commonly used molecular descriptors. On the other hand, a quantitative structure-activity relationship (QSAR) analysis based on molecular graphs representing structural shape, with atomic sizes and colors encoding other atomic properties, performed very accurately for the prediction of the phytotoxicities of these compounds against wild oat and rigid ryegrass. Therefore, these QSAR models can be used to estimate the phytotoxicity of new congeners of benzoxazinone herbicides toward A. fatua L. and L. rigidum Gaud.
Ma, Ruoshui; Zhang, Xiumei; Wang, Yi; Zhang, Xiao
2018-04-27
The heterogeneous and complex structural characteristics of lignin present a significant challenge to predicting its processability (e.g., depolymerization, modification) into valuable products. This study provides a detailed characterization and comparison of the structural properties of seven representative biorefinery lignin samples derived from forest and agricultural residues, which were subjected to representative pretreatment methods. A range of wet chemistry and spectroscopy methods were applied to determine specific lignin structural characteristics such as functional groups, inter-unit linkages and peak molecular weight. In parallel, oxidative depolymerization of these lignin samples to either monomeric phenolic compounds or dicarboxylic acids was conducted, and the product yields were quantified. Based on these results (lignin structural characteristics and monomer yields), we demonstrated for the first time the application of a multiple-variable linear estimation (MVLE) approach, using R statistics, to gain insight into a quantitative correlation between lignin structural properties and their conversion reactivity toward oxidative depolymerization to monomers. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Stern, K I; Malkova, T L
The objective of the present study was the development and validation of a method for the quantitative determination of the demethylated derivatives of sibutramine, desmethyl sibutramine and didesmethyl sibutramine, in dietary supplements. Gas-liquid chromatography with flame ionization detection was used for the quantitative determination of the above substances. Conditions were proposed for the chromatographic determination of the analytes in the presence of the reference standard, methyl stearate, allowing efficient separation to be achieved. The method has the necessary sensitivity, specificity, linearity, accuracy, and precision (on an intra-day and inter-day basis), indicating good validation characteristics. The proposed method can be employed in analytical laboratories for the quantitative determination of sibutramine derivatives in biologically active dietary supplements.
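A sketch of internal-standard quantitation as it is commonly done with a reference standard such as methyl stearate; treating methyl stearate as an internal standard is an assumption here, and all numbers are illustrative:

```python
# single-point internal-standard quantitation from GLC-FID peak areas (illustrative values)
area_analyte, area_is = 15234.0, 20110.0   # peak areas: desmethyl sibutramine, methyl stearate
conc_is = 50.0                             # internal standard concentration (ug/mL)

# relative response factor from a calibration run: RRF = (A_a / A_is) * (C_is / C_a)
rrf = 1.12                                 # assumed value determined during method validation

conc_analyte = (area_analyte / area_is) * conc_is / rrf
print(f"{conc_analyte:.1f} ug/mL")
```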
Lidar arc scan uncertainty reduction through scanning geometry optimization
NASA Astrophysics Data System (ADS)
Wang, Hui; Barthelmie, Rebecca J.; Pryor, Sara C.; Brown, Gareth.
2016-04-01
Doppler lidars are frequently operated in a mode referred to as arc scans, wherein the lidar beam scans across a sector with a fixed elevation angle and the resulting measurements are used to derive an estimate of the n-minute horizontal mean wind velocity (speed and direction). Previous studies have shown that the uncertainty in the measured wind speed originates from turbulent wind fluctuations and depends on the scan geometry (the arc span and the arc orientation). This paper provides guidance on optimal scan geometries for two key applications in the wind energy industry: wind turbine power performance analysis and annual energy production prediction. We present a quantitative analysis of the retrieved wind speed uncertainty, derived using a theoretical model under the assumption of isotropic and frozen turbulence, together with observations from three sites: onshore with flat terrain, onshore with complex terrain, and offshore. The results from both the theoretical model and the observations show that the uncertainty scales with the turbulence intensity, such that the relative standard error on the 10 min mean wind speed is about 30 % of the turbulence intensity. The uncertainty in both retrieved wind speeds and derived wind energy production estimates can be reduced by aligning lidar beams with the dominant wind direction, increasing the arc span and lowering the number of beams per arc scan. Large arc spans should be used at sites with high turbulence intensity and/or large wind direction variation.
Quantitative Ultrasound: Transition from the Laboratory to the Clinic
NASA Astrophysics Data System (ADS)
Hall, Timothy
2014-03-01
There is a long history of development and testing of quantitative methods in medical ultrasound. From the initial attempts to scan breasts with ultrasound in the early 1950s, there was a simultaneous attempt to classify tissue as benign or malignant based on the appearance of the echo signal on an oscilloscope. Since that time, there has been substantial improvement in the ultrasound systems used, the models describing wave propagation in random media, the methods of signal detection theory, and the combination of those models and methods into parameter estimation techniques. One particularly useful measure in ultrasonics is the acoustic differential scattering cross section per unit volume in the special case of 180° scattering (as occurs in pulse-echo ultrasound imaging), which is known as the backscatter coefficient. The backscatter coefficient, and parameters derived from it, can be used to objectively measure quantities that are used clinically to subjectively describe ultrasound images. For example, the "echogenicity" (relative ultrasound image brightness) of the renal cortex is commonly compared to that of the liver: when investigating possible liver disease, the renal cortex echogenicity is assumed to be normal, and when investigating the kidney, the liver echogenicity is assumed to be normal. Objective measures of backscatter remove these assumptions. There is a 30-year history of accurate estimates of acoustic backscatter coefficients with laboratory systems. Twenty years ago that ability was extended to clinical imaging systems with array transducers. Recent studies involving multiple laboratories and a variety of clinical imaging systems have demonstrated system-independent estimates of acoustic backscatter coefficients in well-characterized media (agreement within about 1.5 dB over roughly a one-decade frequency range). The advancements that made this possible, the transition of this and similar capabilities into medical practice, and the prospects for quantitative image-based biomarkers will be discussed. This work was supported, in part, by NIH grants R01CA140271 and R01HD072077.
Temporal Lobe Epilepsy: Quantitative MR Volumetry in Detection of Hippocampal Atrophy
Farid, Nikdokht; Girard, Holly M.; Kemmotsu, Nobuko; Smith, Michael E.; Magda, Sebastian W.; Lim, Wei Y.; Lee, Roland R.
2012-01-01
Purpose: To determine the ability of fully automated volumetric magnetic resonance (MR) imaging to depict hippocampal atrophy (HA) and to help correctly lateralize the seizure focus in patients with temporal lobe epilepsy (TLE). Materials and Methods: This study was conducted with institutional review board approval and in compliance with HIPAA regulations. Volumetric MR imaging data were analyzed for 34 patients with TLE and 116 control subjects. Structural volumes were calculated by using U.S. Food and Drug Administration–cleared software for automated quantitative MR imaging analysis (NeuroQuant). Results of quantitative MR imaging were compared with visual detection of atrophy, and, when available, with histologic specimens. Receiver operating characteristic analyses were performed to determine the optimal sensitivity and specificity of quantitative MR imaging for detecting HA and asymmetry. A linear classifier with cross validation was used to estimate the ability of quantitative MR imaging to help lateralize the seizure focus. Results: Quantitative MR imaging–derived hippocampal asymmetries discriminated patients with TLE from control subjects with high sensitivity (86.7%–89.5%) and specificity (92.2%–94.1%). When a linear classifier was used to discriminate left versus right TLE, hippocampal asymmetry achieved 94% classification accuracy. Volumetric asymmetries of other subcortical structures did not improve classification. Compared with invasive video electroencephalographic recordings, lateralization accuracy was 88% with quantitative MR imaging and 85% with visual inspection of volumetric MR imaging studies but only 76% with visual inspection of clinical MR imaging studies. Conclusion: Quantitative MR imaging can depict the presence and laterality of HA in TLE with accuracy rates that may exceed those achieved with visual inspection of clinical MR imaging studies. Thus, quantitative MR imaging may enhance standard visual analysis, providing a useful and viable means for translating volumetric analysis into clinical practice. © RSNA, 2012 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12112638/-/DC1 PMID:22723496
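A sketch of the asymmetry-based discrimination step, assuming left/right hippocampal volumes from the volumetric pipeline; the absolute asymmetry index used here is a common convention, not necessarily the study's exact formula, and the values are synthetic:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# left/right hippocampal volumes (mm^3) and labels (1 = TLE, 0 = control); synthetic values
vol_l = np.array([3100.0, 2500.0, 3300.0, 2400.0])
vol_r = np.array([3200.0, 3100.0, 3250.0, 3000.0])
y     = np.array([0, 1, 0, 1])

# absolute asymmetry index: |L - R| / mean(L, R)
asym = np.abs(vol_l - vol_r) / ((vol_l + vol_r) / 2)

auc = roc_auc_score(y, asym)
fpr, tpr, thresholds = roc_curve(y, asym)
best = np.argmax(tpr - fpr)        # Youden's J picks a sensitivity/specificity cutoff
print(auc, thresholds[best])
```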
On sweat analysis for quantitative estimation of dehydration during physical exercise.
Ring, Matthias; Lohmueller, Clemens; Rauh, Manfred; Eskofier, Bjoern M
2015-08-01
Quantitative estimation of water loss during physical exercise is of importance because dehydration can impair both muscular strength and aerobic endurance. A physiological indicator for a deficit of total body water (TBW) might be the concentration of electrolytes in sweat. It has been shown that concentrations differ after physical exercise depending on whether water loss was replaced by fluid intake or not. However, to the best of our knowledge, this fact has not been examined for its potential to quantitatively estimate TBW loss. Therefore, we conducted a study in which sweat samples were collected continuously during two hours of physical exercise without fluid intake. A statistical analysis of these sweat samples revealed significant correlations between chloride concentration in sweat and TBW loss (r = 0.41, p < 0.01), and between sweat osmolality and TBW loss (r = 0.43, p < 0.01). A quantitative estimation of TBW loss resulted in a mean absolute error of 0.49 L per estimation. Although the precision has to be improved for practical applications, the present results suggest that TBW loss estimation could be realizable using sweat samples.
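A sketch of the estimation step, assuming paired measurements of sweat chloride concentration and measured TBW loss; the leave-one-out evaluation mirrors the reported per-estimation mean absolute error, though the study's exact protocol may differ, and the values are synthetic:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

chloride = np.array([[20.], [35.], [28.], [40.], [25.], [33.]])  # sweat [Cl-] (mmol/L), synthetic
tbw_loss = np.array([0.8, 1.6, 1.1, 1.9, 0.9, 1.4])              # total body water loss (L)

pred = cross_val_predict(LinearRegression(), chloride, tbw_loss, cv=LeaveOneOut())
mae = np.mean(np.abs(pred - tbw_loss))   # mean absolute error per estimation
print(mae)
```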
Validation of a Global Ionospheric Data Assimilation Model
NASA Astrophysics Data System (ADS)
Wilson, B.; Hajj, G.; Wang, C.; Pi, X.; Rosen, I.
2003-04-01
As the number of ground- and space-based receivers tracking the Global Positioning System (GPS) steadily increases, and the quantity of other ionospheric remote sensing data, such as measurements of airglow, also increases, it is becoming possible to monitor changes in the ionosphere continuously and on a global scale with unprecedented accuracy and reliability. However, in order to make effective use of such a large volume of data for both ionospheric specification and forecast, it is important to develop a data-driven ionospheric model that is consistent with the underlying physical principles governing ionosphere dynamics. A fully 3-dimensional Global Assimilative Ionosphere Model (GAIM) is currently being developed by a joint University of Southern California and Jet Propulsion Laboratory team. GAIM uses a first-principles ionospheric physics model (“forward” model) and Kalman filtering and 4DVAR techniques not only to solve for densities on a 3D grid but also to estimate key driving forces which are inputs to the theoretical model, such as the ExB drift, neutral wind, and production terms. The driving forces are estimated using an “adjoint equation” to compute the required partial derivatives, thereby greatly reducing the computational demands compared with other techniques. For estimation of the grid densities, GAIM uses an approximate Kalman filter implementation in which the portions of the covariance matrix that are retained (the off-diagonal elements) are determined by assumed but physical correlation lengths in the ionosphere. By selecting how sparse or full the covariance matrix is over repeated Kalman filter runs, one can fully investigate the tradeoff between estimation accuracy and computational speed. Although GAIM will ultimately use multiple data types and many data sources, we have performed a first study of quantitative accuracy by ingesting GPS-derived TEC observations from ground- and space-based receivers and nighttime UV radiance data from the LORAAS limb scanner on ARGOS, and then comparing the retrieved density field to independent ionospheric observations. A series of such GAIM retrievals will be presented and validated by comparisons to vertical TEC data from the TOPEX altimeter, slant TEC data from ground GPS sites that were not included in the assimilation runs, and global ionosonde data (foF2, hmF2, and bottom-side profiles where available). By presenting animated movies of the GAIM densities and vertical TEC maps, and their errors computed as differences from the independent observations, we will demonstrate the reasonableness and physicality of the climatology derived from the GAIM forward model, examine the consistency of the GPS and UV data types, and characterize the quantitative accuracy of the ionospheric “weather” specification provided by the assimilation retrievals.
NASA Astrophysics Data System (ADS)
Zib, B.; Dong, X.; Xi, B.; Kennedy, A. D.
2010-12-01
Reanalysis datasets can be an essential tool for investigating numerous climate parameters, especially in data-sparse regions like the Arctic. Where long-term continuous data are limited, reanalyses offer a resource for the recognition and analysis of change in a sensitive and complex coupled Arctic climate system. A study focused on the evaluation and intercomparison of four relatively new global reanalysis datasets will be conducted. The four new reanalyses being investigated are: (i) NASA-MERRA, (ii) NCEP-CFS, (iii) NOAA-20CR, and (iv) NCEP-DOE II. In this study, the cloud fraction and TOA radiative fluxes simulated by the four reanalyses over the entire Arctic region will be compared with those derived from the NASA MODIS and CERES sensors during the period 2000-2008. The surface radiative fluxes derived in each reanalysis will also be compared with, and validated against, BSRN surface observations during the period 1994-2008. The high-latitude BSRN sites used in this study are Barrow, Alaska (BAR) and Ny Alesund, Svalbard, Norway (NYA). BSRN offers high-time-resolution solar and atmospheric radiation measurements from high-accuracy instruments that provide a baseline for validating reanalysis estimates of surface radiation. In addition to downwelling radiation fluxes, cloud fraction from the reanalyses will also be evaluated against the Vaisala ceilometer-derived cloud fraction at the Barrow, AK site. The ultimate goal of this study is to quantitatively estimate the uncertainties or biases of cloud fraction and TOA and surface radiative fluxes derived from the four recent reanalyses, using high-quality long-term surface and satellite observations as ground truth over the Arctic region.
Low, See-Wei; Pasha, Ahmed K; Howe, Carol L; Lee, Kwan S; Suryanarayana, Prakash G
2018-01-01
Background: Accurate determination of right ventricular ejection fraction (RVEF) is challenging because of the unique geometry of the right ventricle. Tricuspid annular plane systolic excursion (TAPSE) and fractional area change (FAC) are commonly used echocardiographic quantitative estimates of RV function. Cardiac MRI (CMRI) has emerged as the gold standard for assessment of RVEF. We sought to summarise the available data on the correlation of TAPSE and FAC with CMRI-derived RVEF and to compare their accuracy. Methods: We searched PubMed, EMBASE, Web of Science, CINAHL, ClinicalTrials.gov and the Cochrane Library databases for studies that assessed the correlation of TAPSE or FAC with CMRI-derived RVEF. Data from each selected study were pooled and analysed to compare the correlation coefficients of TAPSE and FAC with CMRI-derived RVEF. A subgroup analysis was performed on patients with pulmonary hypertension. Results: Analysis of data from 17 studies with a total of 1280 patients revealed that FAC had a higher correlation with CMRI-derived RVEF than TAPSE (0.56 vs 0.40, P=0.018). In patients with pulmonary hypertension, there was no statistical difference between the mean correlation coefficients of FAC and TAPSE with CMRI-derived RVEF (0.57 vs 0.46, P=0.16). Conclusions: FAC provides a more accurate estimate of RV systolic function (RVSF) than TAPSE. Adoption of FAC as a routine tool for the assessment of RVSF should be considered, especially since it is also an independent predictor of morbidity and mortality. Further studies will be needed to compare other methods of echocardiographic measurement of RV function. PMID:29387425
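A sketch of pooling correlation coefficients across studies via the Fisher z-transform, a standard approach for this kind of meta-analysis; the inverse-variance weighting by n - 3 is the usual large-sample choice, and the paper's exact pooling model may differ:

```python
import numpy as np

# per-study correlations of FAC with CMRI-derived RVEF and study sizes (illustrative values)
r = np.array([0.60, 0.48, 0.55, 0.62])
n = np.array([85, 120, 60, 95])

z = np.arctanh(r)                      # Fisher z-transform
w = n - 3                              # inverse-variance weights, Var(z) ~ 1/(n-3)
z_pooled = np.sum(w * z) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
ci = np.tanh(z_pooled + np.array([-1.96, 1.96]) * se)   # back-transformed 95% CI
print(np.tanh(z_pooled), ci)
```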
Nishino, Jo; Kochi, Yuta; Shigemizu, Daichi; Kato, Mamoru; Ikari, Katsunori; Ochi, Hidenori; Noma, Hisashi; Matsui, Kota; Morizono, Takashi; Boroevich, Keith A.; Tsunoda, Tatsuhiko; Matsui, Shigeyuki
2018-01-01
Genome-wide association studies (GWAS) suggest that the genetic architecture of complex diseases consists of unexpectedly numerous variants with small effect sizes. However, the polygenic architectures of many diseases have not been well characterized, owing to the lack of simple and fast methods for unbiased estimation of the underlying proportion of disease-associated variants and their effect-size distribution. Applying empirical Bayes estimation of semi-parametric hierarchical mixture models to GWAS summary statistics, we confirmed that schizophrenia is extremely polygenic [~40% of independent genome-wide SNPs are risk variants, most with odds ratios (ORs) no greater than 1.03], whereas rheumatoid arthritis is less polygenic (~4 to 8% risk variants, with a significant portion reaching ORs of 1.05 to 1.1). For rheumatoid arthritis, stratified estimations revealed that expression quantitative trait loci in blood explained a large share of the genetic variance, and that low- and high-frequency derived alleles were prone to be risk and protective, respectively, suggesting a predominance of deleterious-risk and advantageous-protective mutations. Despite their genetic correlation, the effect-size distributions for schizophrenia and bipolar disorder differed across allele frequency. These analyses distinguished disease polygenic architectures and provided clues to etiological differences among complex diseases. PMID:29740473
Quantitative Doppler Analysis Using Conventional Color Flow Imaging Acquisitions.
Karabiyik, Yucel; Ekroll, Ingvild Kinn; Eik-Nes, Sturla H; Lovstakken, Lasse
2018-05-01
Interleaved acquisitions used in conventional triplex mode result in a tradeoff between the frame rate and the quality of the velocity estimates. Workflow also becomes inefficient when the user has to switch between different modes, and measurement variability is increased. This paper investigates the use of the power spectral Capon estimator for quantitative Doppler analysis using data acquired with conventional color flow imaging (CFI) schemes. To preserve the number of samples used for velocity estimation, only spatial averaging was utilized, and clutter rejection was performed after spectral estimation. The resulting velocity spectra were evaluated in terms of spectral width using a recently proposed spectral envelope estimator. The spectral envelopes were also used for Doppler index calculations using in vivo and string phantom acquisitions. The in vivo results demonstrated that the Capon estimator can provide spectral estimates of sufficient quality for quantitative analysis using packet-based CFI acquisitions. The calculated Doppler indices were similar to the values calculated using spectrograms estimated on a commercial ultrasound scanner.
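A sketch of a power spectral Capon (minimum-variance) estimate for one range gate's slow-time packet; the subarray averaging, diagonal loading, and packet size are illustrative choices, and clutter rejection after spectral estimation is omitted here:

```python
import numpy as np

def capon_spectrum(x, n_freqs=128, subarray=8, diag_load=1e-2):
    """Capon spectrum P(f) = 1 / (a(f)^H R^-1 a(f)) from a slow-time ensemble x."""
    # covariance from sliding subarrays of the packet (temporal smoothing)
    segs = np.array([x[i:i + subarray] for i in range(len(x) - subarray + 1)])
    R = segs.T @ segs.conj() / len(segs)                  # R[i,k] = E[x_i conj(x_k)]
    R += diag_load * (np.trace(R).real / subarray) * np.eye(subarray)  # diagonal loading
    Rinv = np.linalg.inv(R)
    f = np.linspace(-0.5, 0.5, n_freqs, endpoint=False)   # normalized Doppler frequencies
    a = np.exp(2j * np.pi * np.outer(np.arange(subarray), f))          # steering vectors
    return f, 1.0 / np.real(np.einsum("ij,ik,kj->j", a.conj(), Rinv, a))

# synthetic packet: single Doppler tone at f = 0.2 plus noise
rng = np.random.default_rng(2)
n = 16
x = np.exp(2j * np.pi * 0.2 * np.arange(n)) + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))
f, P = capon_spectrum(x)
print(f[np.argmax(P)])   # should be near the simulated Doppler frequency of 0.2
```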
Quantitative FLASH MRI at 3T using a rational approximation of the Ernst equation.
Helms, Gunther; Dathe, Henning; Dechent, Peter
2008-03-01
From the half-angle substitution of the trigonometric terms in the Ernst equation, rational approximations of the flip angle dependence of the FLASH signal can be derived. Even the rational function of lowest order was in good agreement with experiment for flip angles up to 20°. Three-dimensional maps of the signal amplitude and longitudinal relaxation rates in human brain were obtained from eight subjects by dual-angle measurements at 3T (nonselective 3D-FLASH, 7° and 20° flip angles, TR = 30 ms, isotropic resolution of 0.95 mm, 7:09 min each). The corresponding estimates of T1 and signal amplitude are simple algebraic expressions and deviated by about 1% from the exact solution. They are ill-conditioned for estimating the local flip angle deviation, but can be corrected post hoc by division by squared RF maps obtained from independent measurements. Local deviations from the nominal flip angles strongly affected the relaxation estimates and caused considerable blurring of the T1 histograms. (c) 2008 Wiley-Liss, Inc.
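A sketch of the dual-angle estimates implied by the lowest-order rational approximation, S ≈ A·α·(TR/T1)/(α²/2 + TR/T1), with flip angles in radians; the closed forms below follow algebraically from that approximation, and the numbers are illustrative:

```python
import numpy as np

def dual_angle_t1_amplitude(s1, s2, a1, a2, tr):
    """T1 and amplitude from two FLASH signals s1, s2 at flip angles a1, a2 (radians)."""
    t1 = 2.0 * tr * (s1 / a1 - s2 / a2) / (s2 * a2 - s1 * a1)
    amp = s1 * s2 * (a2 / a1 - a1 / a2) / (s2 * a2 - s1 * a1)
    return t1, amp

# simulate signals with the exact Ernst equation, then invert with the approximation
tr, t1_true, amp_true = 0.030, 1.2, 100.0                 # TR (s), T1 (s), amplitude (a.u.)
e1 = np.exp(-tr / t1_true)
ernst = lambda a: amp_true * np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))
a1, a2 = np.deg2rad(7.0), np.deg2rad(20.0)
print(dual_angle_t1_amplitude(ernst(a1), ernst(a2), a1, a2, tr))  # ~1% from (1.2, 100)
```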
Variability of Kelvin wave momentum flux from high-resolution radiosonde and radio occultation data
NASA Astrophysics Data System (ADS)
Sjoberg, J. P.; Zeng, Z.; Ho, S. P.; Birner, T.; Anthes, R. A.; Johnson, R. H.
2017-12-01
Direct measurement of momentum flux from Kelvin waves in the stratosphere remains challenging. Constraining this flux from observations is an important step towards constraining the flux in models. Here we present results from analyses using linear theory to estimate Kelvin wave amplitudes and momentum fluxes from both high-resolution radiosondes and radio occultation (RO) data. The radiosonde data are from a contiguous 11-year span of soundings performed at two Department of Energy Atmospheric Radiation Measurement sites, while the RO data span 14 years from multiple satellite missions. Daily time series of the flux from both sources are found to be in quantitative agreement with previous studies. Climatological analyses of these data reveal the expected seasonal cycle and variability associated with the quasi-biennial oscillation. Though the two data sets provide measurements on distinct spatial and temporal scales, the estimated flux from each provides insight into separate but complementary aspects of how Kelvin waves affect the stratosphere. Namely, flux derived from the radiosonde sites provides details on regional Kelvin wave variability, while the flux from RO data gives zonal mean estimates.
Direct Parametric Reconstruction With Joint Motion Estimation/Correction for Dynamic Brain PET Data.
Jiao, Jieqing; Bousse, Alexandre; Thielemans, Kris; Burgos, Ninon; Weston, Philip S J; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Markiewicz, Pawel; Ourselin, Sebastien
2017-01-01
Direct reconstruction of parametric images from raw photon counts has been shown to improve the quantitative analysis of dynamic positron emission tomography (PET) data. However, it suffers from subject motion, which is inevitable during the typical acquisition time of 1-2 hours. In this work we propose a framework to jointly estimate subject head motion and reconstruct the motion-corrected parametric images directly from raw PET data, so that the effects of distorted tissue-to-voxel mapping due to subject motion can be reduced in reconstructing the parametric images with motion-compensated attenuation correction and spatially aligned temporal PET data. The proposed approach is formulated within the maximum likelihood framework, and efficient solutions are derived for estimating subject motion and kinetic parameters from raw PET photon count data. Results from evaluations on simulated [11C]raclopride data using the Zubal brain phantom and real clinical [18F]florbetapir data of a patient with Alzheimer's disease show that the proposed joint direct parametric reconstruction and motion correction approach can improve the accuracy of quantifying dynamic PET data with large subject motion.
Lebensohn, Ricardo A.; Zecevic, Miroslav; Knezevic, Marko; ...
2015-12-15
Here, this work presents estimations of average intragranular fluctuations of lattice rotation rates in polycrystalline materials, obtained by means of the viscoplastic self-consistent (VPSC) model. These fluctuations give a tensorial measure of the trend of misorientation developing inside each single crystal grain representing a polycrystalline aggregate. We first report details of the algorithm implemented in the VPSC code to estimate these fluctuations, which are then validated by comparison with corresponding full-field calculations. Next, we present predictions of average intragranular fluctuations of lattice rotation rates for cubic aggregates, which are rationalized by comparison with experimental evidence on annealing textures of fcc and bcc polycrystals deformed in tension and compression, respectively, as well as with measured intragranular misorientation distributions in a Cu polycrystal deformed in tension. The orientation-dependent and micromechanically-based estimations of intragranular misorientations that can be derived from the present implementation are necessary to formulate sound sub-models for the prediction of quantitatively accurate deformation textures, grain fragmentation, and recrystallization textures using the VPSC approach.
Napolitano, Assunta; Akay, Seref; Mari, Angela; Bedir, Erdal; Pizza, Cosimo; Piacente, Sonia
2013-11-01
Astragalus species are widely used as health foods and dietary supplements, as well as drugs in traditional medicine. To rapidly evaluate metabolite similarities and differences among the EtOH extracts of the roots of eight commercial Astragalus spp., an approach based on direct analysis by ESI-MS followed by PCA of the ESI-MS data was carried out. Subsequently, quali-quantitative analyses of cycloartane derivatives in the eight Astragalus spp. by LC-ESI-MS(n) and PCA of the LC-ESI-MS data were performed. This approach allowed metabolite similarities and differences among the various Astragalus spp. to be promptly highlighted. The PCA results from the LC-ESI-MS data of the Astragalus samples were in reasonable agreement with both the PCA results from the ESI-MS data and the quantitative results. This study affords an analytical method for the quali-quantitative determination of cycloartane derivatives in herbal preparations used as health and food supplements. Copyright © 2013 Elsevier B.V. All rights reserved.
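A minimal sketch of the chemometric step, assuming a matrix of ESI-MS intensities (samples x m/z bins); the scaling choice, component count, and data are illustrative:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# rows: extracts of the eight Astragalus spp. (with replicates); columns: m/z bins
rng = np.random.default_rng(3)
intensities = rng.gamma(2.0, 1.0, size=(24, 400))   # stand-in for ESI-MS data

X = StandardScaler().fit_transform(intensities)     # autoscaling; Pareto scaling is also common
pca = PCA(n_components=2)
scores = pca.fit_transform(X)                       # sample similarities/differences in PC space
print(pca.explained_variance_ratio_, scores[:3])
```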
NASA Astrophysics Data System (ADS)
Hincks, Ian; Granade, Christopher; Cory, David G.
2018-01-01
The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications toward gaining better and more rigorous error bars for tasks such as parameter estimation (e.g. magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramér-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where quantum Hamiltonian learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.
Application of geostatistics to risk assessment.
Thayer, William C; Griffith, Daniel A; Goodrum, Philip E; Diamond, Gary L; Hassett, James M
2003-10-01
Geostatistics offers two fundamental contributions to environmental contaminant exposure assessment: (1) a group of methods to quantitatively describe the spatial distribution of a pollutant and (2) the ability to improve estimates of the exposure point concentration by exploiting the geospatial information present in the data. The second contribution is particularly valuable when exposure estimates must be derived from small data sets, which is often the case in environmental risk assessment. This article addresses two topics related to the use of geostatistics in human and ecological risk assessments performed at hazardous waste sites: (1) the importance of assessing model assumptions when using geostatistics and (2) the use of geostatistics to improve estimates of the exposure point concentration (EPC) in the limited data scenario. The latter topic is approached here by comparing design-based estimators that are familiar to environmental risk assessors (e.g., Land's method) with geostatistics, a model-based estimator. In this report, we summarize the basics of spatial weighting of sample data, kriging, and geostatistical simulation. We then explore the two topics identified above in a case study, using soil lead concentration data from a Superfund site (a skeet and trap range). We also describe several areas where research is needed to advance the use of geostatistics in environmental risk assessment.
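A minimal sketch of the spatial-weighting idea behind ordinary kriging, with an assumed exponential covariance model; a real application would first fit the variogram to the site data, and all values here are illustrative:

```python
import numpy as np

def ordinary_kriging(coords, values, target, sill=1.0, corr_range=50.0):
    """Ordinary kriging prediction and variance at one target location."""
    cov = lambda h: sill * np.exp(-h / corr_range)        # exponential covariance model
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    n = len(values)
    # kriging system: [[C, 1], [1', 0]] [w, mu] = [c0, 1]
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = cov(d)
    A[:n, n] = A[n, :n] = 1.0
    b = np.append(cov(np.linalg.norm(coords - target, axis=1)), 1.0)
    sol = np.linalg.solve(A, b)
    w = sol[:n]                                           # spatial weights summing to 1
    pred = w @ values
    var = sill - b[:n] @ w - sol[n]                       # kriging (prediction) variance
    return pred, var

# soil lead concentrations (mg/kg) at sampled locations (illustrative)
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
values = np.array([120.0, 300.0, 150.0, 280.0])
print(ordinary_kriging(coords, values, np.array([5.0, 5.0])))
```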
An Innovative Method for Estimating Soil Retention at a ...
Planning for a sustainable future should include an accounting of services currently provided by ecosystems, such as erosion control. Retention of soil improves fertility, increases water retention, and decreases sedimentation in streams and rivers. Landscape patterns that facilitate these services could help reduce costs for flood control and dredging of reservoirs and waterways, while maintaining habitat for fish and other species important to the recreational and tourism industries. Landscape-scale geospatial data available for the continental United States were leveraged to estimate sediment erosion (RUSLE-based; Renard et al., 1997), employing recent geospatial techniques of sediment delivery ratio (SDR) estimation (Cavalli et al., 2013). The approach was designed to derive a quantitative approximation of the ecological services provided by vegetative cover, management practices, and other surface features with respect to protecting soils from the erosion processes of detachment, transport, and deposition. Quantities of soil retained on the landscape and potential erosion for multiple land cover scenarios relative to current (NLCD 2011) conditions were calculated for each calendar month and summed to yield annual estimates at a 30-meter grid cell. Continental-scale data used included MODIS NDVI data (2000-2014) to estimate monthly USLE C-factors, gridded soil survey geographic (gSSURGO) soils data (annual USLE K factor), PRISM rainfall data (monthly USLE
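A sketch of the per-cell computation described, using the standard RUSLE product A = R·K·LS·C·P summed over months; the grids are synthetic stand-ins, the SDR step (Cavalli et al., 2013) is omitted, and the bare-soil retention definition is an assumption:

```python
import numpy as np

# monthly factor grids for one tile (rows x cols); synthetic stand-ins for the 30 m data
rng = np.random.default_rng(4)
shape = (100, 100)
R = rng.uniform(20, 80, size=(12, *shape))      # monthly rainfall erosivity (PRISM-derived)
C = rng.uniform(0.001, 0.3, size=(12, *shape))  # monthly cover factor (MODIS NDVI-derived)
K = rng.uniform(0.1, 0.5, size=shape)           # soil erodibility (gSSURGO), annual
LS = rng.uniform(0.2, 5.0, size=shape)          # slope length/steepness factor
P = np.ones(shape)                              # support practice factor

# monthly RUSLE soil loss, summed to an annual estimate per grid cell
A_annual = (R * C).sum(axis=0) * K * LS * P     # units follow the factor conventions

# soil retained: erosion avoided relative to bare soil (C = 1), one common definition
retention = (R * (1.0 - C)).sum(axis=0) * K * LS * P
print(A_annual.mean(), retention.mean())
```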
Smile line assessment comparing quantitative measurement and visual estimation.
Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie
2011-02-01
Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
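A sketch of the reliability computation, using weighted kappa as commonly applied to ordinal grades; whether the study used linear weights is an assumption, and the grades are illustrative:

```python
from sklearn.metrics import cohen_kappa_score

# smile-line grades per tooth from two raters on a 3-grade scale (1 low, 2 average, 3 high)
rater_a = [1, 2, 2, 3, 1, 2, 3, 3, 2, 1]
rater_b = [1, 2, 3, 3, 1, 2, 3, 2, 2, 1]

kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")  # interexaminer reliability
print(kappa)
```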
Fantini, Marco; Pandolfini, Luca; Lisi, Simonetta; Chirichella, Michele; Arisi, Ivan; Terrigno, Marco; Goracci, Martina; Cremisi, Federico; Cattaneo, Antonino
2017-01-01
Antibody libraries are important resources to derive antibodies for a wide range of applications, from structural and functional studies to intracellular protein interference studies to the development of new diagnostics and therapeutics. Whatever the goal, the key parameter of an antibody library is its complexity (also known as diversity), i.e. the number of distinct elements in the collection, which directly reflects the probability of finding in the library an antibody against a given antigen of sufficiently high affinity. Quantitative evaluation of antibody library complexity and quality has long been inadequately addressed, owing to the high similarity and length of the sequences in the library. Complexity was usually inferred from the transformation efficiency and tested by fingerprinting and/or sequencing of a few hundred random library elements. Inferring complexity from such a small sample is, however, very rudimentary and gives limited information about the real diversity, because complexity does not scale linearly with sample size. Next-generation sequencing (NGS) has opened new ways to tackle antibody library complexity assessment. However, much remains to be done to fully exploit the potential of NGS for the quantitative analysis of antibody repertoires and to overcome current limitations. To obtain a more reliable estimate of antibody library complexity, here we present a new, PCR-free NGS approach for sequencing antibody libraries on the Illumina platform, coupled to a new bioinformatic analysis and software (Diversity Estimator of Antibody Library, DEAL) that allows the complexity to be reliably estimated, taking the sequencing error into consideration.
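Because complexity does not scale linearly with sample size, richness estimators from ecology are often applied to sequence-abundance data; a sketch of one standard choice (Chao1), which is an assumption here and not the DEAL software's actual algorithm:

```python
from collections import Counter

def chao1(counts):
    """Chao1 lower-bound estimate of library complexity from per-clone read counts."""
    s_obs = len(counts)                       # distinct sequences observed
    f1 = sum(1 for c in counts if c == 1)     # singletons
    f2 = sum(1 for c in counts if c == 2)     # doubletons
    if f2 == 0:                               # bias-corrected form when no doubletons
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

# reads collapsed to distinct antibody sequences -> read count per sequence (illustrative)
reads = ["scFv_A", "scFv_B", "scFv_A", "scFv_C", "scFv_D", "scFv_B", "scFv_E"]
counts = list(Counter(reads).values())
print(chao1(counts))
```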
NASA Astrophysics Data System (ADS)
Parrish, D. D.; Lamarque, J.-F.; Naik, V.; Horowitz, L.; Shindell, D. T.; Staehelin, J.; Derwent, R.; Cooper, O. R.; Tanimoto, H.; Volz-Thomas, A.; Gilge, S.; Scheel, H.-E.; Steinbacher, M.; Fröhlich, M.
2014-05-01
Two recent papers have quantified long-term ozone (O3) changes observed at northern midlatitude sites that are believed to represent baseline (here understood as representative of continental to hemispheric scales) conditions. Three chemistry-climate models (NCAR CAM-chem, GFDL-CM3, and GISS-E2-R) have calculated retrospective tropospheric O3 concentrations as part of the Atmospheric Chemistry and Climate Model Intercomparison Project and Coupled Model Intercomparison Project Phase 5 model intercomparisons. We present an approach for quantitative comparisons of model results with measurements for seasonally averaged O3 concentrations. There is considerable qualitative agreement between the measurements and the models, but there are also substantial and consistent quantitative disagreements. Most notably, models (1) overestimate absolute O3 mixing ratios, on average by 5 to 17 ppbv in the year 2000; (2) capture only 50% of the O3 changes observed over the past five to six decades, and little of the observed seasonal differences; and (3) capture only 25 to 45% of the observed rate of long-term change. These disagreements are significant enough to indicate that only limited confidence can be placed on estimates of present-day radiative forcing of tropospheric O3 derived from modeled historic concentration changes and on predicted future O3 concentrations. Evidently our understanding of tropospheric O3, or the incorporation of chemistry and transport processes into current chemical climate models, is incomplete. Modeled O3 trends approximately parallel estimated trends in anthropogenic emissions of NOx, an important O3 precursor, while measured O3 changes increase more rapidly than these emission estimates.
Stable Isotope Quantitative N-Glycan Analysis by Liquid Separation Techniques and Mass Spectrometry.
Mittermayr, Stefan; Albrecht, Simone; Váradi, Csaba; Millán-Martín, Silvia; Bones, Jonathan
2017-01-01
Liquid phase separation analysis and subsequent quantitation remains a challenging task for protein-derived oligosaccharides due to their inherent structural complexity and diversity. Incomplete resolution or co-detection of multiple glycan species complicates peak area-based quantitation and associated statistical analysis when optical detection methods are used. The approach outlined herein describes the utilization of stable isotope variants of commonly used fluorescent tags that allow for mass-based glycan identification and relative quantitation following separation by liquid chromatography (LC) or capillary electrophoresis (CE). Comparability assessment of glycoprotein-derived oligosaccharides is performed by derivatization with commercially available isotope variants of 2-aminobenzoic acid or aniline and analysis by LC- and CE-mass spectrometry. Quantitative information is attained from the extracted ion chromatogram/electropherogram ratios generated from the light and heavy isotope clusters.
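A sketch of the relative quantitation step, assuming extracted ion chromatogram (XIC) areas for each glycan's light- and heavy-tagged forms; the light/heavy ratio directly compares the two samples, and the glycan names and areas are illustrative:

```python
# XIC peak areas per glycan for light vs heavy isotope-tagged samples (illustrative)
xic_areas = {
    "FA2G2":  {"light": 8.4e6, "heavy": 7.9e6},
    "A2G2S1": {"light": 2.1e6, "heavy": 3.0e6},
}

for glycan, a in xic_areas.items():
    ratio = a["light"] / a["heavy"]           # relative abundance, sample A vs sample B
    print(f"{glycan}: light/heavy = {ratio:.2f}")
```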
Schober, Eva; Werndl, Michael; Laakso, Kati; Korschineck, Irina; Sivonen, Kaarina; Kurmayer, Rainer
2011-01-01
The application of quantitative real-time PCR has been proposed for the quantification of toxic genotypes of cyanobacteria. We compared the Taq Nuclease Assay (TNA) for quantifying the toxic cyanobacterium Microcystis sp. via the intergenic spacer region of the phycocyanin operon (PC) and mcyB, indicative of the production of the toxic heptapeptide microcystin, between three research groups employing three instruments (ABI7300, GeneAmp5700, ABI7500). The estimates of mcyB genotypes were compared using (i) DNA of a mcyB-containing strain and a non-mcyB-containing strain supplied in different mixtures, across a low range of variation (0-10% mcyB) and across a high range of variation (20-100%), and (ii) DNA from field samples containing Microcystis sp. For all three instruments, highly significant linear regression curves between the proportion of the mcyB-containing strain and the percentage of mcyB genotypes were obtained, both within the low range and within the high range of mcyB variation. The regression curves derived from the three instruments differed in slope, and within the high range of mcyB variation the mcyB proportions were either underestimated (0-50%) or overestimated (0-72%). For field samples, cell numbers estimated via both TNAs as well as mcyB proportions showed significant linear relationships between the instruments. For all instruments, a linear relationship between the cell numbers estimated as PC genotypes and the cell numbers estimated as mcyB genotypes was observed. The proportions of mcyB varied from 2-28% and did not differ between the instruments. It is concluded that the TNA can provide quantitative estimates of mcyB genotype numbers that are reproducible between research groups and is useful for following variation in mcyB genotype proportion occurring within weeks to months. PMID:17258828
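A sketch of the genotype-proportion calculation, assuming log10 standard curves (slope, intercept) for the PC and mcyB targets from the TNA runs; all parameter values and Ct readings are illustrative:

```python
def copies_from_ct(ct, slope, intercept):
    """Copy number from a log10 standard curve: Ct = slope * log10(N) + intercept."""
    return 10 ** ((ct - intercept) / slope)

# standard-curve parameters per target (slope ~ -3.32 implies ~100% PCR efficiency)
pc_curve  = dict(slope=-3.35, intercept=38.0)   # phycocyanin operon: all Microcystis
mcy_curve = dict(slope=-3.30, intercept=37.5)   # mcyB: toxic genotypes only

ct_pc, ct_mcy = 22.1, 25.6                      # measured Ct values for one field sample
n_pc  = copies_from_ct(ct_pc, **pc_curve)
n_mcy = copies_from_ct(ct_mcy, **mcy_curve)
print(f"mcyB genotypes: {100 * n_mcy / n_pc:.1f}% of Microcystis")
```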
NASA Technical Reports Server (NTRS)
Bungo, M. W.; Johnson, P. C., Jr.
1983-01-01
During the first four flights of the Space Shuttle, cardiovascular data were obtained on each crewmember as part of the operational medicine requirements for crew health and safety. From monitoring blood pressure and electrocardiographic data, it was possible to estimate the degree of deconditioning imposed by exposure to the microgravity environment. For this purpose, a quantitative cardiovascular index of deconditioning (CID) was derived to aid the clinician in his assessment. Isotonic saline was then investigated as a countermeasure against orthostatic intolerance and found to be effective in partially reversing the hemodynamic consequences. It was observed that the space flight environment of reentry might potentially be arrhythmogenic in at least one individual.
Assessing the robustness of quantitative fatty acid signature analysis to assumption violations
Bromaghin, Jeffrey F.; Budge, Suzanne M.; Thiemann, Gregory W.; Rode, Karyn D.
2016-01-01
In most QFASA applications, investigators will generally have some knowledge of the prey available to predators and be able to assess the completeness of prey signature data and sample additional prey as necessary. Conversely, because calibration coefficients are derived from feeding trials with captive animals and their values may be sensitive to consumer physiology and nutritional status, their applicability to free-ranging animals is difficult to establish. We therefore recommend that investigators first make any improvements to the prey signature data that seem warranted and then base estimation on the Aitchison distance measure, as it appears to minimize risk from violations of the assumption that is most difficult to verify.
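A sketch of the Aitchison distance between two fatty acid signatures (compositions summing to 1), as recommended above; the zero-replacement step is a common practical assumption, and the signatures are illustrative:

```python
import numpy as np

def aitchison_distance(x, y, eps=1e-6):
    """Aitchison distance between two compositional vectors (fatty acid signatures)."""
    x = np.maximum(np.asarray(x, float), eps); x = x / x.sum()   # replace zeros, re-close
    y = np.maximum(np.asarray(y, float), eps); y = y / y.sum()
    clr_x = np.log(x) - np.log(x).mean()        # centered log-ratio transform
    clr_y = np.log(y) - np.log(y).mean()
    return np.linalg.norm(clr_x - clr_y)

predator = [0.30, 0.25, 0.20, 0.15, 0.10]       # illustrative signatures
prey     = [0.28, 0.22, 0.24, 0.16, 0.10]
print(aitchison_distance(predator, prey))
```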
NASA Astrophysics Data System (ADS)
Roman, H. E.; Porto, M.; Dose, C.
2008-10-01
We analyze daily log-return data for a set of 1200 stocks, taken from US stock markets, over a period of 2481 trading days (January 1996-November 2005). We estimate the degree of non-stationarity in daily market volatility by employing a polynomial fit, used as a detrending function. We find that the autocorrelation function of absolute detrended log-returns departs strongly from the corresponding autocorrelation function of the original data, while the observed leverage effect depends only weakly on trends. This effect is shown to occur when both skewness and long-time memory are simultaneously present. A fractional derivative random walk model is discussed, yielding quantitative agreement with the empirical results.
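A sketch of the detrending and autocorrelation steps, assuming a daily log-return series; the polynomial order and the ratio-based detrending convention are illustrative choices, not necessarily the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(5)
log_returns = rng.standard_t(df=4, size=2481) * 0.01      # stand-in for daily log-returns

# detrend the volatility proxy |r_t| with a polynomial fit in time
t = np.arange(len(log_returns))
trend = np.polyval(np.polyfit(t, np.abs(log_returns), deg=5), t)
trend = np.maximum(trend, 1e-12)                          # guard against nonpositive fit values
detrended = np.abs(log_returns) / trend

def autocorr(x, max_lag=100):
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:-k], x[k:]) / len(x) / c0 for k in range(1, max_lag + 1)])

print(autocorr(np.abs(log_returns))[:5])   # raw volatility autocorrelation
print(autocorr(detrended)[:5])             # detrended volatility autocorrelation
```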
NASA Astrophysics Data System (ADS)
Böttger, U.; Waser, R.
2017-07-01
The existence of non-ferroelectric regions in ferroelectric thin films evokes depolarization effects leading to a tilt of the P(E) hysteresis loop. The analysis of measured hysteresis of lead zirconate titanate (PZT) thin films is used to determine a depolarization factor which contains quantitative information about interfacial layers as well as ferroelectrically passive zones in the bulk. The derived interfacial capacitance is smaller than that estimated from conventional extrapolation techniques. In addition, the concept of depolarization is used for the investigation of fatigue behavior of PZT thin films indicating that the mechanism of seed inhibition, which is responsible for the effect, occurs in the entire film.
Estimating Dynamical Systems: Derivative Estimation Hints From Sir Ronald A. Fisher.
Deboeck, Pascal R
2010-08-06
The fitting of dynamical systems to psychological data offers the promise of addressing new and innovative questions about how people change over time. One method of fitting dynamical systems is to estimate the derivatives of a time series and then examine the relationships between derivatives using a differential equation model. One common approach for estimating derivatives, Local Linear Approximation (LLA), produces estimates with correlated errors. Depending on the specific differential equation model used, such correlated errors can lead to severely biased estimates of differential equation model parameters. This article shows that the fitting of dynamical systems can be improved by estimating derivatives in a manner similar to that used to fit orthogonal polynomials. Two applications using simulated data compare the proposed method and a generalized form of LLA when used to estimate derivatives and when used to estimate differential equation model parameters. A third application estimates the frequency of oscillation in observations of the monthly deaths from bronchitis, emphysema, and asthma in the United Kingdom. These data are publicly available in the statistical program R, and functions in R for the method presented are provided.
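A sketch contrasting the two ideas on a toy series: an LLA-style central difference versus a local polynomial (Savitzky-Golay) derivative, which implements the kind of local polynomial fitting the article advocates; window length and order are tuning choices:

```python
import numpy as np
from scipy.signal import savgol_filter

dt = 0.1
t = np.arange(0, 20, dt)
x = np.sin(t) + 0.05 * np.random.default_rng(6).normal(size=t.size)  # noisy oscillation

# LLA-style derivative: central difference over neighbouring observations
dx_lla = np.gradient(x, dt)

# local polynomial derivative over an 11-point window, cubic order
dx_poly = savgol_filter(x, window_length=11, polyorder=3, deriv=1, delta=dt)

# compare against the true derivative cos(t)
print(np.mean((dx_lla - np.cos(t)) ** 2), np.mean((dx_poly - np.cos(t)) ** 2))
```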
Statistically Self-Consistent and Accurate Errors for SuperDARN Data
NASA Astrophysics Data System (ADS)
Reimer, A. S.; Hussey, G. C.; McWilliams, K. A.
2018-01-01
The Super Dual Auroral Radar Network (SuperDARN) fitted data products (e.g., spectral width and velocity) are produced using weighted least squares fitting. We present a new First-Principles Fitting Methodology (FPFM) that utilizes the first-principles approach of Reimer et al. (2016) to estimate the variance of the real and imaginary components of the mean autocorrelation function (ACF) lags. SuperDARN ACFs fitted by the FPFM do not rely on ad hoc or empirical criteria. Currently, the weighting used to fit the ACF lags is derived from ad hoc estimates of the ACF lag variance. Additionally, an overcautious lag filtering criterion is used that sometimes discards data containing useful information. In low signal-to-noise (SNR) and/or low signal-to-clutter regimes, the ad hoc variance and the empirical criterion lead to underestimated errors for the fitted parameters, because the relative contributions of signal, noise, and clutter to the ACF variance are not taken into consideration. The FPFM variance expressions include the contributions of signal, noise, and clutter. The clutter is estimated using the maximal-power-based self-clutter estimator derived by Reimer and Hussey (2015). The FPFM was successfully implemented and tested using synthetic ACFs generated with the radar data simulator of Ribeiro, Ponomarenko, et al. (2013). The fitted parameters and the fitted-parameter errors produced by the FPFM are compared with those of the current SuperDARN fitting software, FITACF. Using self-consistent statistical analysis, the FPFM produces reliable, trustworthy quantitative measures of the errors of the fitted parameters. For an SNR in excess of 3 dB and velocity errors below 100 m/s, the FPFM produces 52% more data points than FITACF.
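A sketch of the weighted least squares step common to both fitting approaches, fitting an exponential (Lorentzian-spectrum) decay to ACF lag magnitudes with per-lag standard deviations as weights; the variance values here are placeholders for the first-principles estimates, and the lag spacing is illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def acf_model(lag_time, power, width):
    """Exponential ACF magnitude decay: |R(tau)| = P * exp(-W * tau)."""
    return power * np.exp(-width * lag_time)

tau = np.arange(1, 18) * 2.4e-3                 # lag times (s), typical SuperDARN spacing
rng = np.random.default_rng(7)
sigma = 0.05 * np.ones_like(tau)                # per-lag std devs (from the FPFM in practice)
acf_mag = acf_model(tau, 1.0, 40.0) + rng.normal(0.0, sigma)

popt, pcov = curve_fit(acf_model, tau, acf_mag, p0=[1.0, 30.0],
                       sigma=sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))                   # fitted-parameter errors
print(popt, perr)
```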
Plant leaf chlorophyll content retrieval based on a field imaging spectroscopy system.
Liu, Bo; Yue, Yue-Min; Li, Ru; Shen, Wen-Jing; Wang, Ke-Lin
2014-10-23
A field imaging spectrometer system (FISS; 380-870 nm and 344 bands) was designed for agricultural applications. In this study, FISS was used to gather spectral information from soybean leaves. The chlorophyll content was retrieved using multiple linear regression (MLR), partial least squares (PLS) regression and support vector machine (SVM) regression. Our objective was to verify the performance of FISS in quantitative spectral analysis through the estimation of chlorophyll content and to determine a proper quantitative spectral analysis method for processing FISS data. The results revealed that the derivative reflectance was a more sensitive indicator of chlorophyll content and could extract content information more efficiently than the spectral reflectance, an effect that was more pronounced for FISS data than for ASD (analytical spectral devices) data, reducing the corresponding RMSE (root mean squared error) by 3.3%-35.6%. Compared with the spectral features, the regression methods had smaller effects on the retrieval accuracy. A multivariate linear model could be the ideal model to retrieve chlorophyll information with a small number of significant wavelengths. The smallest RMSE of the chlorophyll content retrieved using FISS data was 0.201 mg/g, a relative reduction of more than 30% compared with the RMSE based on a non-imaging ASD spectrometer, which represents a high estimation accuracy relative to the mean chlorophyll content of the sampled leaves (4.05 mg/g). Our study indicates that FISS can obtain detailed spectral and spatial information of high quality. Its image-and-spectrum-in-one design underpins the good performance of FISS in quantitative spectral analyses, and it can potentially be widely used in the agricultural sector.
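A sketch of the derivative-reflectance preprocessing plus multivariate linear retrieval, using a simple finite-difference first derivative; the band indices and data are placeholders, and the study's exact derivative estimator and band selection are not specified here:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
wavelengths = np.linspace(380, 870, 344)                 # FISS band centers (nm)
reflectance = rng.uniform(0.05, 0.6, size=(30, 344))     # stand-in leaf spectra
chlorophyll = rng.uniform(2.0, 6.0, size=30)             # mg/g, lab-measured reference

# first-derivative reflectance along the wavelength axis
dr = np.gradient(reflectance, wavelengths, axis=1)

# MLR on a few sensitive derivative bands (indices here are placeholders)
bands = [120, 180, 240]
model = LinearRegression().fit(dr[:, bands], chlorophyll)
pred = model.predict(dr[:, bands])
rmse = np.sqrt(np.mean((pred - chlorophyll) ** 2))
print(rmse)
```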
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-24
... this location in 2008. No quantitative estimate of the size of this remaining population is available... observed in 1998. No quantitative estimates of the size of the extant populations are available. Howarth...
Prediction of Environmental Impact of High-Energy Materials with Atomistic Computer Simulations
2010-11-01
from a training set of compounds. Other methods include Quantitative Structure-Activity Relationship (QSAR) and Quantitative Structure-Property... the development of QSPR/QSAR models, in contrast to boiling points and critical parameters derived from empirical correlations, to improve... Abbreviations: QCISD, Quadratic Configuration Interaction Singles Doubles; QSAR, Quantitative Structure-Activity Relationship; QSPR, Quantitative Structure-Property Relationship.
EasyFRAP-web: a web-based tool for the analysis of fluorescence recovery after photobleaching data.
Koulouras, Grigorios; Panagopoulos, Andreas; Rapsomaniki, Maria A; Giakoumakis, Nickolaos N; Taraviras, Stavros; Lygerou, Zoi
2018-06-13
Understanding protein dynamics is crucial in order to elucidate protein function and interactions. Advances in modern microscopy facilitate the exploration of the mobility of fluorescently tagged proteins within living cells. Fluorescence recovery after photobleaching (FRAP) is an increasingly popular functional live-cell imaging technique which enables the study of the dynamic properties of proteins at a single-cell level. As an increasing number of labs generate FRAP datasets, there is a need for fast, interactive and user-friendly applications that analyze the resulting data. Here we present easyFRAP-web, a web application that simplifies the qualitative and quantitative analysis of FRAP datasets. EasyFRAP-web permits quick analysis of FRAP datasets through an intuitive web interface with interconnected analysis steps (experimental data assessment, different types of normalization and estimation of curve-derived quantitative parameters). In addition, easyFRAP-web provides dynamic and interactive data visualization and data and figure export for further analysis after every step. We test easyFRAP-web by analyzing FRAP datasets capturing the mobility of the cell cycle regulator Cdt2 in the presence and absence of DNA damage in cultured cells. We show that easyFRAP-web yields results consistent with previous studies and highlights cell-to-cell heterogeneity in the estimated kinetic parameters. EasyFRAP-web is platform-independent and is freely accessible at: https://easyfrap.vmnet.upatras.gr/.
Bromaghin, Jeffrey F.; Lance, Monique M.; Elliott, Elizabeth W.; Jeffries, Steven J.; Acevedo-Gutiérrez, Alejandro; Kennish, John M.
2013-01-01
Harbor seals (Phoca vitulina) are an abundant predator along the west coast of North America, and there is considerable interest in their diet composition, especially in regard to predation on valued fish stocks. Available information on harbor seal diets, primarily derived from scat analysis, suggests that adult salmon (Oncorhynchus spp.), Pacific Herring (Clupea pallasii), and gadids predominate. Because diet assessments based on scat analysis may be biased, we investigated diet composition through quantitative analysis of fatty acid signatures. Blubber samples from 49 harbor seals captured in western North America from haul-outs within the area of the San Juan Islands and southern Strait of Georgia in the Salish Sea were analyzed for fatty acid composition, along with 269 fish and squid specimens representing 27 potential prey classes. Diet estimates varied spatially, demographically, and among individual harbor seals. Findings confirmed the prevalence of previously identified prey species in harbor seal diets, but other species also contributed significantly. In particular, Black (Sebastes melanops) and Yellowtail (S. flavidus) Rockfish were estimated to compose up to 50% of some individual seal diets. Specialization and high predation rates on Black and Yellowtail Rockfish by a subset of harbor seals may play a role in the population dynamics of these regional rockfish stocks that is greater than previously realized.
Wallace, Jack
2010-05-01
While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
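A minimal Python sketch of the two proficiency-test approaches described above; the article gives no code, and all numbers here are hypothetical:

```python
import numpy as np

# Approach (i): interlaboratory precision as a direct uncertainty measure
# (used when few proficiency tests are available). Hypothetical breath-alcohol
# results (g/210 L) reported by participating laboratories on one test.
participant_results = np.array([0.081, 0.079, 0.083, 0.080, 0.078])
u_interlab = np.std(participant_results, ddof=1)  # standard uncertainty

# Approach (ii): long-run differences between this laboratory's proficiency
# results and the participant consensus means (many tests available).
lab_results = np.array([0.102, 0.151, 0.080, 0.198, 0.121])      # g/100 mL
consensus_means = np.array([0.100, 0.149, 0.082, 0.195, 0.123])
differences = lab_results - consensus_means
u_bias = np.sqrt(np.mean(differences ** 2))  # RMS difference as uncertainty

print(f"u (interlab precision): {u_interlab:.4f}")
print(f"u (RMS vs consensus):   {u_bias:.4f}")
```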
Nagasaka, Tatsuhiro; Kunishi, Tomohiro; Sotome, Hikaru; Koga, Masafumi; Morimoto, Masakazu; Irie, Masahiro; Miyasaka, Hiroshi
2018-06-07
The one- and two-photon cycloreversion reactions of a fluorescent diarylethene derivative with oxidized benzothiophene moieties were investigated by means of ultrafast laser spectroscopy. Femtosecond transient absorption spectroscopy under the one-photon excitation condition revealed that the excited closed-ring isomer is simply deactivated into the initial ground state with a time constant of 2.6 ns without remarkable cycloreversion, the results of which are consistent with the very low cycloreversion reaction yield (<10⁻⁵) under steady-state light irradiation. On the other hand, an efficient cycloreversion reaction was observed under irradiation with a picosecond laser pulse at 532 nm. The excitation intensity dependence of the cycloreversion reaction indicates that a highly excited state attained by the stepwise two-photon absorption is responsible for the marked increase of the cycloreversion reaction, and the quantum yield at the highly excited state was estimated to be 0.018 from quantitative analysis, indicating that the reaction is enhanced by a factor of >1800.
Confidence estimation for quantitative photoacoustic imaging
NASA Astrophysics Data System (ADS)
Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena
2018-02-01
Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.
Padoan, Andrea; Antonelli, Giorgia; Aita, Ada; Sciacovelli, Laura; Plebani, Mario
2017-10-26
The present study was prompted by the ISO 15189 requirements that medical laboratories should estimate measurement uncertainty (MU). The method used to estimate MU included: a) the identification of quantitative tests, b) the classification of tests in relation to their clinical purpose, and c) the identification of criteria to estimate the different MU components. Imprecision was estimated using long-term internal quality control (IQC) results of the year 2016, while external quality assessment scheme (EQA) results obtained in the period 2015-2016 were used to estimate bias and bias uncertainty. A total of 263 measurement procedures (MPs) were analyzed. On the basis of test purpose, in 51 MPs imprecision only was used to estimate MU; in the remaining MPs, the bias component was not estimable for 22 MPs because EQA results did not provide reliable statistics. For a total of 28 MPs, two or more MU values were calculated on the basis of analyte concentration levels. Overall, results showed that uncertainty of bias is a minor factor contributing to MU, the bias component being the most relevant contributor to all the studied sample matrices. The model chosen for MU estimation allowed us to derive a standardized approach for bias calculation, with respect to the fitness-for-purpose of test results. Measurement uncertainty estimation could readily be implemented in medical laboratories as a useful tool in monitoring the analytical quality of test results since they are calculated using a combination of both the long-term imprecision IQC results and bias, on the basis of EQA results.
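As an illustration of the top-down MU model described above, a minimal sketch combining long-term IQC imprecision with EQA-derived bias and bias uncertainty; the combination rule shown is one common convention and all input values are assumptions, not the authors':

```python
import math

# Hypothetical inputs for one measurement procedure
u_imp = 0.042    # long-term imprecision from IQC results (relative SD)
bias = 0.015     # mean relative deviation from EQA target values
u_bias = 0.008   # standard uncertainty of the bias estimate

# Combined standard uncertainty (top-down model); the bias term is included
# here as a component rather than corrected for, one of several conventions.
u_combined = math.sqrt(u_imp**2 + bias**2 + u_bias**2)
U_expanded = 2.0 * u_combined  # coverage factor k = 2 (~95% confidence)

print(f"combined standard MU: {u_combined:.4f}")
print(f"expanded MU (k=2):    {U_expanded:.4f}")
```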
Niwa, Masahiro; Hiraishi, Yasuhiro
2014-01-30
Tablets are the most common form of solid oral dosage produced by pharmaceutical industries. There are several challenges to successful and consistent tablet manufacturing. One well-known quality issue is visible surface defects, which generally occur due to insufficient physical strength, causing breakage or abrasion during processing, packaging, or shipping. Techniques that allow quantitative evaluation of surface strength and the risk of surface defect would greatly aid in quality control. Here terahertz pulsed imaging (TPI) was employed to evaluate the surface properties of core tablets with visible surface defects of varying severity after film coating. Other analytical methods, such as tensile strength measurements, friability testing, and scanning electron microscopy (SEM), were used to validate TPI results. Tensile strength and friability provided no information on visible surface defect risk, whereas the TPI-derived unique parameter terahertz electric field peak strength (TEFPS) provided spatial distribution of surface density/roughness information on core tablets, which helped in estimating tablet abrasion risk prior to film coating and predicting the location of the defects. TPI also revealed the relationship between surface strength and blending condition and is a nondestructive, quantitative approach to aid formulation development and quality control that can reduce visible surface defect risk in tablets. Copyright © 2013 Elsevier B.V. All rights reserved.
Harrison, J D; Muirhead, C R
2003-01-01
To compare quantitative estimates of lifetime cancer risk in humans for exposures to internally deposited radionuclides and external radiation. To assess the possibility that risks from radionuclide exposures may be underestimated. Risk estimates following internal exposures can be made for a small number of alpha-particle-emitting nuclides. (1) Lung cancer in underground miners exposed by inhalation to radon-222 gas and its short-lived progeny. Studies of residential (222)Rn exposure are generally consistent with predictions from the miner studies. (2) Liver cancer and leukaemia in patients given intravascular injections of Thorotrast, a thorium-232 oxide preparation that concentrates in liver, spleen and bone marrow. (3) Bone cancer in patients given injections of radium-224, and in workers exposed occupationally to (226)Ra and (228)Ra, mainly by ingestion. (4) Lung cancer in Mayak workers exposed to plutonium-239, mainly by inhalation. Liver and bone cancers were also seen, but the dosimetry is not yet good enough to provide quantitative estimates of risks. Comparisons can be made between risk estimates for radiation-induced cancer derived for radionuclide exposure and those derived for the A-bomb survivors, exposed mainly to low-LET (linear energy transfer) external radiation. Data from animal studies, using dogs and rodents, allow comparisons of cancer induction by a range of alpha- and beta-/gamma-emitting radionuclides. They provide information on relative biological effectiveness (RBE), dose-response relationships, dose-rate effects and the location of target cells for different malignancies. For lung and liver cancer, the estimated values of risk per Sv for internal exposure, assuming an RBE for alpha-particles of 20, are reasonably consistent with estimates for external exposure to low-LET radiation. This also applies to bone cancer when risk is calculated on the basis of average bone dose, but consideration of dose to target cells on bone surfaces suggests a low RBE for alpha-particles. Similarly, for leukaemia, the comparison of risks from alpha-irradiation ((232)Th and progeny) and external radiation suggests a low alpha RBE; this conclusion is supported by animal data. Risk estimates for internal exposure are dependent on the assumptions made in calculating dose. Account is taken of the distribution of radionuclides within tissues and the distribution of target cells for cancer induction. For the lungs and liver, the available human and animal data provide support for current assumptions. However, for bone cancer and leukaemia, it may be that changes are required. Bone cancer risk may be best assessed by calculating dose to a 50 µm layer of marrow adjacent to endosteal (inner) bone surfaces rather than to a single 10 µm cell layer as currently assumed. Target cells for leukaemia may be concentrated towards the centre of marrow cavities so that the risk of leukaemia from bone-seeking radionuclides, particularly alpha emitters, may be overestimated by the current assumption of uniform distribution of target cells throughout red bone marrow. The lifetime risk estimates considered here for exposure to internally deposited radionuclides and to external radiation are subject to uncertainties, arising from the dosimetric assumptions made, from the quality of cancer incidence and mortality data and from aspects of risk modelling, including variations in baseline rates between populations for some cancer types.
Bearing in mind such uncertainties, comparisons of risk estimates for internal emitters and external radiation show good agreement for lung and liver cancers. For leukaemia, the available data suggest that the assumption of an alpha-particle RBE of 20 can result in overestimates of risk. For bone cancer, it also appears that current assumptions will overestimate risks from alpha-particle-emitting nuclides, particularly at low doses.
Improving Marine Ecosystem Models with Biochemical Tracers
NASA Astrophysics Data System (ADS)
Pethybridge, Heidi R.; Choy, C. Anela; Polovina, Jeffrey J.; Fulton, Elizabeth A.
2018-01-01
Empirical data on food web dynamics and predator-prey interactions underpin ecosystem models, which are increasingly used to support strategic management of marine resources. These data have traditionally derived from stomach content analysis, but new and complementary forms of ecological data are increasingly available from biochemical tracer techniques. Extensive opportunities exist to improve the empirical robustness of ecosystem models through the incorporation of biochemical tracer data and derived indices, an area that is rapidly expanding because of advances in analytical developments and sophisticated statistical techniques. Here, we explore the trophic information required by ecosystem model frameworks (species, individual, and size based) and match them to the most commonly used biochemical tracers (bulk tissue and compound-specific stable isotopes, fatty acids, and trace elements). Key quantitative parameters derived from biochemical tracers include estimates of diet composition, niche width, and trophic position. Biochemical tracers also provide powerful insight into the spatial and temporal variability of food web structure and the characterization of dominant basal and microbial food web groups. A major challenge in incorporating biochemical tracer data into ecosystem models is scale and data type mismatches, which can be overcome with greater knowledge exchange and numerical approaches that transform, integrate, and visualize data.
Liberato, D J; Byers, V S; Dennick, R G; Castagnoli, N
1981-01-01
Attempts to characterize potential biologically important covalent interactions between electrophilic quinones derived from catechols present in poison oak/ivy (urushiol) and biomacromolecules have led to the analysis of model reactions involving sulfur and amino nucleophiles with 3-heptadecylbenzoquinone. Characterization of the reaction products indicates that this quinone undergoes regiospecific attack by (S)-N-acetylcysteine at C-6 and by 1-aminopentane at C-5. The red solid obtained with 1-aminopentane proved to be 3-heptadecyl-5-(pentylamino)-1,2-benzoquinone. Analogous aminobenzoquinones were obtained with the quinones derived from the 4- and 6-methyl analogues of 3-pentadecylcatechol. All three adducts absorbed visible light at different wavelengths. When the starting catechols were incubated with human serum albumin almost identical chromophores were formed. These results establish that catechols responsible for the production of the poison oak/ivy contact dermatitis in humans undergo a sequence of reactions in the presence of human serum albumin that lead to covalent attachment of the catechols to the protein via carbon-nitrogen bonds. Estimations of the extent of this binding indicate that, at least with human serum albumin, the reaction is quantitative.
Pharmacokinetic Steady-States Highlight Interesting Target-Mediated Disposition Properties.
Gabrielsson, Johan; Peletier, Lambertus A
2017-05-01
In this paper, we derive explicit expressions for the concentrations of ligand L, target R and ligand-target complex RL at steady state for the classical model describing target-mediated drug disposition (TMDD), in the presence of a constant-rate infusion of ligand. We demonstrate that graphing the steady-state values of ligand, target and ligand-target complex, we obtain striking and often singular patterns, which yield a great deal of insight and understanding about the underlying processes. Deriving explicit expressions for the dependence of L, R and RL on the infusion rate, and displaying graphs of the relations between L, R and RL, we give qualitative and quantitative information for the experimentalist about the processes involved. Understanding target turnover is pivotal for optimising these processes when TMDD prevails. By a combination of mathematical analysis and simulations, we also show that the evolution of the three concentration profiles towards their respective steady states can be quite complex, especially for lower infusion rates. We also show how parameter estimates obtained from iv bolus studies can be used to derive steady-state concentrations of ligand, target and complex. The latter may serve as a template for future experimental designs.
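A hedged numerical companion to the closed-form analysis: the standard TMDD rate equations under constant-rate infusion can also be solved for their steady state directly; all rate constants below are hypothetical, not the paper's:

```python
import numpy as np
from scipy.optimize import fsolve

# Standard TMDD model with constant-rate ligand infusion (parameters are
# illustrative; the paper derives closed-form expressions for this system).
k_infusion = 1.0          # ligand infusion rate
k_el = 0.1                # first-order ligand elimination
k_on, k_off = 1.0, 0.05   # binding/unbinding rates
k_syn, k_deg = 0.5, 0.2   # target synthesis/degradation
k_int = 0.3               # complex internalization

def steady_state(y):
    L, R, RL = y
    return [
        k_infusion - k_el * L - k_on * L * R + k_off * RL,  # dL/dt = 0
        k_syn - k_deg * R - k_on * L * R + k_off * RL,      # dR/dt = 0
        k_on * L * R - (k_off + k_int) * RL,                # dRL/dt = 0
    ]

L, R, RL = fsolve(steady_state, x0=[1.0, 1.0, 1.0])
print(f"steady state: L={L:.3f}, R={R:.3f}, RL={RL:.3f}")
```

Sweeping k_infusion over a range and plotting L, R and RL against it would reproduce the kind of steady-state patterns the paper examines.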
Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine
1999-01-01
Single marker regression and single marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine using a ((longleaf pine x slash pine) x slash pine) BC1 population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could...
NASA Astrophysics Data System (ADS)
Bartholomeus, H.; Kooistra, L.
2012-04-01
For quantitative estimation of soil properties by means of remote sensing, often hyperspectral data are used. But these data are scarce and expensive, which prohibits wider implementation of the developed techniques in agricultural management. For precision agriculture, observations at a high spatial resolution are required. Colour aerial photographs at this scale are widely available, and can be acquired at no or very low cost. Therefore, we investigated whether publicly available aerial photographs can be used to a) automatically delineate management zones and b) estimate levels of organic carbon spatially. We selected three study areas within the Netherlands that cover a large variance in soil type (peat, sand, and clay). For the fields of interest, RGB aerial photographs with a spatial resolution of 50 cm were extracted from a publicly available data provider. Further pre-processing consisted of geo-referencing only. Since the images originate from different sources and are potentially acquired under unknown illumination conditions, the exact radiometric properties of the data are unknown. Therefore, we used spectral indices to emphasize the differences in reflectance and normalize for differences in radiometry. To delineate management zones we used image segmentation techniques, using the derived indices as input. Comparison with management zone maps as used by the farmers shows that there is good correspondence. Regression analysis between a number of soil properties and the derived indices shows that organic carbon is the major explanatory variable for differences in index values within the fields. However, relations do not hold for large regions, indicating that local models will have to be used, a problem that also remains relevant for hyperspectral remote sensing data. With this research, we show that low-cost aerial photographs can be a valuable tool for quantitative analysis of organic carbon and automatic delineation of management zones. Since a lot of data are publicly available this offers great possibilities for implementing remote sensing techniques in agricultural management.
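The abstract does not name the indices used; as one plausible example, a sketch of the excess-green index on chromatic coordinates, a common choice for normalizing uncalibrated RGB photographs:

```python
import numpy as np

def excess_green(rgb):
    """Excess-green index (2g - r - b) on chromatic coordinates, a common
    illumination-normalizing index for RGB imagery; illustrative only, as
    the study does not specify which indices were derived."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1, keepdims=True) + 1e-9  # avoid division by zero
    r, g, b = np.moveaxis(rgb / total, -1, 0)       # chromatic coordinates
    return 2.0 * g - r - b

# Hypothetical 2 x 2 RGB patch from a 50 cm aerial photograph
patch = np.array([[[80, 120, 60], [90, 110, 70]],
                  [[60, 140, 50], [70, 130, 55]]], dtype=np.uint8)
print(excess_green(patch))
```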
NASA Astrophysics Data System (ADS)
Xiong, Guoming; Cumming, Paul; Todica, Andrei; Hacker, Marcus; Bartenstein, Peter; Böning, Guido
2012-12-01
Absolute quantitation of the cerebral metabolic rate for glucose (CMRglc) can be obtained in positron emission tomography (PET) studies when serial measurements of the arterial [18F]-fluoro-deoxyglucose (FDG) input are available. Since this is not always practical in PET studies of rodents, there has been considerable interest in defining an image-derived input function (IDIF) by placing a volume of interest (VOI) within the left ventricle of the heart. However, spill-in arising from trapping of FDG in the myocardium often leads to progressive contamination of the IDIF, which propagates to underestimation of the magnitude of CMRglc. We therefore developed a novel, non-invasive method for correcting the IDIF without scaling to a blood sample. To this end, we first obtained serial arterial samples and dynamic FDG-PET data of the head and heart in a group of eight anaesthetized rats. We fitted a bi-exponential function to the serial measurements of the IDIF, and then used the linear graphical Gjedde-Patlak method to describe the accumulation in myocardium. We next estimated the magnitude of myocardial spill-in reaching the left ventricle VOI by assuming a Gaussian point-spread function, and corrected the measured IDIF for this estimated spill-in. Finally, we calculated parametric maps of CMRglc using the corrected IDIF, and for the sake of comparison, relative to serial blood sampling from the femoral artery. The uncorrected IDIF resulted in 20% underestimation of the magnitude of CMRglc relative to the gold standard arterial input method. However, there was no bias with the corrected IDIF, which was robust to the variable extent of myocardial tracer uptake, such that there was a very high correlation between individual CMRglc measurements using the corrected IDIF with gold-standard arterial input results. Based on simulation, we furthermore find that electrocardiogram (ECG) gating is not necessary for IDIF quantitation using our approach.
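A minimal sketch of the bi-exponential IDIF fit mentioned above, on synthetic data; the Gaussian spill-in correction and the Patlak step are omitted, and all values are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    """Bi-exponential model used here to describe the IDIF."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

# Synthetic left-ventricle time-activity curve (minutes, arbitrary units);
# real data would come from a VOI in the dynamic FDG-PET image.
t = np.linspace(0.5, 60, 40)
measured = biexp(t, 120.0, 0.9, 15.0, 0.02) + np.random.normal(0, 1.0, t.size)

popt, _ = curve_fit(biexp, t, measured, p0=[100, 1.0, 10, 0.01], maxfev=5000)
print("fitted (a1, k1, a2, k2):", np.round(popt, 3))
```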
Pirat, Bahar; Little, Stephen H.; Igo, Stephen R.; McCulloch, Marti; Nosé, Yukihiko; Hartley, Craig J.; Zoghbi, William A.
2012-01-01
Objective The proximal isovelocity surface area (PISA) method is useful in the quantitation of aortic regurgitation (AR). We hypothesized that actual measurement of PISA provided with real-time 3-dimensional (3D) color Doppler yields more accurate regurgitant volumes than those estimated by 2-dimensional (2D) color Doppler PISA. Methods We developed a pulsatile flow model for AR with an imaging chamber in which interchangeable regurgitant orifices with defined shapes and areas were incorporated. An ultrasonic flow meter was used to calculate the reference regurgitant volumes. A total of 29 different flow conditions for 5 orifices with different shapes were tested at a rate of 72 beats/min. 2D PISA was calculated as 2πr², and 3D PISA was measured from 8 equidistant radial planes of the 3D PISA. Regurgitant volume was derived as PISA × aliasing velocity × time-velocity integral of AR/peak AR velocity. Results Regurgitant volumes by flow meter ranged between 12.6 and 30.6 mL/beat (mean 21.4 ± 5.5 mL/beat). Regurgitant volumes estimated by 2D PISA correlated well with volumes measured by flow meter (r = 0.69); however, a significant underestimation was observed (y = 0.5x + 0.6). Correlation with flow meter volumes was stronger for 3D PISA-derived regurgitant volumes (r = 0.83); significantly less underestimation of regurgitant volumes was seen, with a regression line close to identity (y = 0.9x + 3.9). Conclusion Direct measurement of PISA is feasible, without geometric assumptions, using real-time 3D color Doppler. Calculation of aortic regurgitant volumes with 3D color Doppler using this methodology is more accurate than conventional 2D method with hemispheric PISA assumption. PMID:19168322
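A worked example of the regurgitant-volume formula from the abstract, with hypothetical measurements; the 3D PISA value stands in for one measured from 8 radial planes:

```python
import math

# Regurgitant volume = PISA * aliasing velocity * (VTI of AR / peak AR velocity)
# Hypothetical measurements:
r = 0.4            # cm, PISA radius from 2D colour Doppler
v_alias = 40.0     # cm/s, aliasing velocity
vti_ar = 250.0     # cm, time-velocity integral of the AR jet
v_peak = 450.0     # cm/s, peak AR velocity

pisa_2d = 2.0 * math.pi * r**2   # hemispheric assumption, cm^2
pisa_3d = 1.3                    # cm^2, directly measured surface area

for label, pisa in [("2D", pisa_2d), ("3D", pisa_3d)]:
    rvol = pisa * v_alias * vti_ar / v_peak   # mL per beat (cm^3)
    print(f"{label} PISA {pisa:.2f} cm^2 -> regurgitant volume {rvol:.1f} mL")
```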
Analytical study to define a helicopter stability derivative extraction method, volume 1
NASA Technical Reports Server (NTRS)
Molusis, J. A.
1973-01-01
A method is developed for extracting six degree-of-freedom stability and control derivatives from helicopter flight data. Different combinations of filtering and derivative estimation are investigated and used with a Bayesian approach for derivative identification. The combination of filtering and estimation found to yield the most accurate time response match to flight test data is determined and applied to CH-53A and CH-54B flight data. The method found to be most accurate consists of (1) filtering flight test data with a digital filter, followed by an extended Kalman filter, (2) identifying a derivative estimate with a least-squares estimator, and (3) obtaining derivatives with the Bayesian derivative extraction method.
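A minimal sketch of step (2), the least-squares derivative estimate, on synthetic data: regress measured state derivatives on states and controls to recover the stability (A) and control (B) matrices. The filtering and Bayesian steps are omitted, and the state/control choices are illustrative:

```python
import numpy as np

# Least-squares estimate of stability/control derivatives from (filtered)
# flight data, assuming the linear model x_dot = A x + B u.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))          # e.g. [u, w, q] perturbation states
U = rng.normal(size=(n, 1))          # e.g. longitudinal stick input
A_true = np.array([[-0.02, 0.05, -0.3],
                   [-0.10, -0.8, 9.0],
                   [0.01, -0.05, -1.2]])
B_true = np.array([[0.1], [-6.0], [4.0]])
Xdot = X @ A_true.T + U @ B_true.T + 0.01 * rng.normal(size=(n, 3))

# Stack regressors and solve each state equation by linear least squares
Phi = np.hstack([X, U])                            # (n, 4)
theta, *_ = np.linalg.lstsq(Phi, Xdot, rcond=None)
A_hat, B_hat = theta[:3].T, theta[3:].T
print("A estimate:\n", np.round(A_hat, 3))
print("B estimate:\n", np.round(B_hat, 3))
```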
Lidar arc scan uncertainty reduction through scanning geometry optimization
Wang, Hui; Barthelmie, Rebecca J.; Pryor, Sara C.; ...
2016-04-13
Doppler lidars are frequently operated in a mode referred to as arc scans, wherein the lidar beam scans across a sector with a fixed elevation angle and the resulting measurements are used to derive an estimate of the n minute horizontal mean wind velocity (speed and direction). Previous studies have shown that the uncertainty in the measured wind speed originates from turbulent wind fluctuations and depends on the scan geometry (the arc span and the arc orientation). This paper is designed to provide guidance on optimal scan geometries for two key applications in the wind energy industry: wind turbine power performance analysis and annual energy production prediction. We present a quantitative analysis of the retrieved wind speed uncertainty derived using a theoretical model with the assumption of isotropic and frozen turbulence, and observations from three sites that are onshore with flat terrain, onshore with complex terrain and offshore, respectively. The results from both the theoretical model and observations show that the uncertainty is scaled with the turbulence intensity such that the relative standard error on the 10 min mean wind speed is about 30% of the turbulence intensity. The uncertainty in both retrieved wind speeds and derived wind energy production estimates can be reduced by aligning lidar beams with the dominant wind direction, increasing the arc span and lowering the number of beams per arc scan. As a result, large arc spans should be used at sites with high turbulence intensity and/or large wind direction variation.
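A minimal sketch of the underlying arc-scan retrieval, assuming the standard least-squares fit of horizontal wind components to radial velocities across azimuths at a fixed elevation (synthetic data; not the authors' code):

```python
import numpy as np

# Retrieve the horizontal mean wind from arc-scan radial velocities by
# least squares: v_r = (u sin(az) + v cos(az)) cos(el).
el = np.deg2rad(5.0)                          # fixed elevation angle
az = np.deg2rad(np.linspace(150, 210, 13))    # 60 degree arc span
u_true, v_true = 3.0, -8.0                    # wind components, m/s

v_r = (u_true * np.sin(az) + v_true * np.cos(az)) * np.cos(el)
v_r += np.random.normal(0, 0.3, az.size)      # turbulent + instrument noise

G = np.column_stack([np.sin(az), np.cos(az)]) * np.cos(el)
(u_hat, v_hat), *_ = np.linalg.lstsq(G, v_r, rcond=None)
speed = np.hypot(u_hat, v_hat)
print(f"retrieved wind: u={u_hat:.2f}, v={v_hat:.2f}, speed={speed:.2f} m/s")
```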
Natsch, Andreas; Emter, Roger; Haupt, Tina; Ellis, Graham
2018-06-01
Cosmetic regulations prohibit animal testing for the purpose of safety assessment and recent REACH guidance states that the local lymph node assay (LLNA) in mice shall only be conducted if in vitro data cannot give sufficient information for classification and labelling. However, Quantitative Risk Assessment (QRA) for fragrance ingredients requires a NESIL, a dose not expected to cause induction of skin sensitization in humans. In absence of human data, this is derived from the LLNA and it remains a key challenge for risk assessors to derive this value from non-animal data. Here we present a workflow using structural information, reactivity data and KeratinoSens results to predict a LLNA result as a point of departure. Specific additional tests (metabolic activation, complementary reactivity tests) are applied in selected cases depending on the chemical domain of a molecule. Finally, in vitro and in vivo data on close analogues are used to estimate uncertainty of the prediction in the specific chemical domain. This approach was applied to three molecules which were subsequently tested in the LLNA and 22 molecules with available and sometimes discordant human and LLNA data. Four additional case studies illustrate how this approach is being applied to recently developed molecules in the absence of animal data. Estimation of uncertainty and how this can be applied to determine a final NESIL for risk assessment is discussed. We conclude that, in the data-rich domain of fragrance ingredients, sensitization risk assessment without animal testing is possible in most cases by this integrated approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sirenko, Oksana, E-mail: oksana.sirenko@moldev.com; Cromwell, Evan F., E-mail: evan.cromwell@moldev.com; Crittenden, Carole
2013-12-15
Human induced pluripotent stem cell (iPSC)-derived cardiomyocytes show promise for screening during early drug development. Here, we tested a hypothesis that in vitro assessment of multiple cardiomyocyte physiological parameters enables predictive and mechanistically-interpretable evaluation of cardiotoxicity in a high-throughput format. Human iPSC-derived cardiomyocytes were exposed for 30 min or 24 h to 131 drugs, positive (107) and negative (24) for in vivo cardiotoxicity, in up to 6 concentrations (3 nM to 30 µM) in 384-well plates. Fast kinetic imaging was used to monitor changes in cardiomyocyte function using intracellular Ca²⁺ flux readouts synchronous with beating, and cell viability. A number of physiological parameters of cardiomyocyte beating, such as beat rate, peak shape (amplitude, width, raise, decay, etc.) and regularity were collected using automated data analysis. Concentration–response profiles were evaluated using logistic modeling to derive a benchmark concentration (BMC) point-of-departure value, based on one standard deviation departure from the estimated baseline in vehicle (0.3% dimethyl sulfoxide)-treated cells. BMC values were used for cardiotoxicity classification and ranking of compounds. Beat rate and several peak shape parameters were found to be good predictors, while cell viability had poor classification accuracy. In addition, we applied the Toxicological Prioritization Index (ToxPi) approach to integrate and display data across many collected parameters, to derive “cardiosafety” ranking of tested compounds. Multi-parameter screening of beating profiles allows for cardiotoxicity risk assessment and identification of specific patterns defining mechanism-specific effects. These data and analysis methods may be used widely for compound screening and early safety evaluation in drug development. - Highlights: • Induced pluripotent stem cell-derived cardiomyocytes are promising in vitro models. • We tested if evaluation of cardiotoxicity is possible in a high-throughput format. • The assay shows benefits of automated data integration across multiple parameters. • Quantitative assessment of concentration–response is possible using iPSCs. • Multi-parametric screening allows for cardiotoxicity risk assessment.
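A hedged sketch of the BMC point-of-departure logic described above: fit a logistic concentration-response curve and locate the concentration at one standard deviation departure from the vehicle baseline. All data and the specific logistic form are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

# Hypothetical beat-rate data (percent of baseline) across 6 concentrations
conc = np.array([0.003, 0.03, 0.3, 3.0, 10.0, 30.0])   # uM
resp = np.array([99.0, 98.0, 95.0, 80.0, 55.0, 30.0])
base_mean, base_sd = 100.0, 3.0     # vehicle (0.3% DMSO) wells

def logistic(c, top, bottom, ec50, hill):
    return bottom + (top - bottom) / (1.0 + (c / ec50) ** hill)

popt, _ = curve_fit(logistic, conc, resp, p0=[100, 20, 3, 1], maxfev=5000)

# BMC: concentration where the fitted curve departs one SD from baseline
target = base_mean - base_sd
bmc = brentq(lambda c: logistic(c, *popt) - target, conc[0], conc[-1])
print(f"benchmark concentration: {bmc:.3f} uM")
```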
NASA Astrophysics Data System (ADS)
Healy, R. M.; Sciare, J.; Poulain, L.; Crippa, M.; Wiedensohler, A.; Prévôt, A. S. H.; Baltensperger, U.; Sarda-Estève, R.; McGuire, M. L.; Jeong, C.-H.; McGillicuddy, E.; O'Connor, I. P.; Sodeau, J. R.; Evans, G. J.; Wenger, J. C.
2013-04-01
Single particle mixing state information can be a powerful tool for assessing the relative impact of local and regional sources of ambient particulate matter in urban environments. However, quantitative mixing state data are challenging to obtain using single particle mass spectrometers. In this study, the quantitative chemical composition of carbonaceous single particles has been estimated using an aerosol time-of-flight mass spectrometer (ATOFMS) as part of the MEGAPOLI 2010 winter campaign in Paris, France. Relative peak areas of marker ions for elemental carbon (EC), organic aerosol (OA), ammonium, nitrate, sulphate and potassium were compared with concurrent measurements from an Aerodyne high resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS), a thermal/optical OCEC analyser and a particle into liquid sampler coupled with ion chromatography (PILS-IC). ATOFMS-derived mass concentrations reproduced the variability of these species well (R2 = 0.67-0.78), and ten discrete mixing states for carbonaceous particles were identified and quantified. Potassium content was used to identify particles associated with biomass combustion. The chemical mixing state of HR-ToF-AMS organic aerosol factors, resolved using positive matrix factorization, was also investigated through comparison with the ATOFMS dataset. The results indicate that hydrocarbon-like OA (HOA) detected in Paris is associated with two EC-rich mixing states which differ in their relative sulphate content, while fresh biomass burning OA (BBOA) is associated with two mixing states which differ significantly in their OA/EC ratios. Aged biomass burning OA (OOA2-BBOA) was found to be significantly internally mixed with nitrate, while secondary, oxidized OA (OOA) was associated with five particle mixing states, each exhibiting different relative secondary inorganic ion content. Externally mixed secondary organic aerosol was not observed. These findings demonstrate the heterogeneity of primary and secondary organic aerosol mixing states in Paris. Examination of the temporal behaviour and chemical composition of the ATOFMS classes also enabled estimation of the relative contribution of transported emissions of each chemical species and total particle mass in the size range investigated. Only 22% of the total ATOFMS-derived particle mass was apportioned to fresh, local emissions, with 78% apportioned to regional/continental scale emissions.
NASA Astrophysics Data System (ADS)
Bösmeier, Annette; Glaser, Rüdiger; Stahl, Kerstin; Himmelsbach, Iso; Schönbein, Johannes
2017-04-01
Future estimations of flood hazard and risk for developing optimal coping and adaptation strategies inevitably include considerations of the frequency and magnitude of past events. Methods of historical climatology represent one way of assessing flood occurrences beyond the period of instrumental measurements and can thereby substantially help to extend the view into the past and to improve modern risk analysis. Such historical information can be of additional value and has been used in statistical approaches like Bayesian flood frequency analyses during recent years. However, the derivation of quantitative values from vague descriptive information of historical sources remains a crucial challenge. We explored possibilities of parametrization of descriptive flood-related data specifically for the assessment of historical floods in a framework that combines a hermeneutical approach with mathematical and statistical methods. This study forms part of the transnational, Franco-German research project TRANSRISK2 (2014-2017), funded by ANR and DFG, with the focus on exploring the flood history of the last 300 years for the regions of the Upper and Middle Rhine. A broad database of flood events had been compiled, dating back to AD 1500. The events had been classified based on hermeneutical methods, depending on intensity, spatial dimension, temporal structure, damages and mitigation measures associated with the specific events. This indexed database allowed the exploration of a link between descriptive data and quantitative information for the overlapping time period of classified floods and instrumental measurements since the end of the 19th century. Thereby, flood peak discharges, as a quantitative measure of the severity of a flood, were used to assess the discharge intervals for flood classes (upper and lower thresholds) within different time intervals, validating the flood classification and examining the trend in the perception threshold over time. Furthermore, within a suitable time period, flood classes and other quantifiable indicators of flood intensity (number of damaged locations mentioned in historical sources, general availability of reports associated with a specific event) were combined with available peak discharge measurements. We argue that this information can be considered 'expert knowledge', and we used it to develop a fuzzy rule-based model for deriving peak discharge estimates of pre-instrumental events that can finally be introduced into a flood frequency analysis.
Optimizing the Terzaghi Estimator of the 3D Distribution of Rock Fracture Orientations
NASA Astrophysics Data System (ADS)
Tang, Huiming; Huang, Lei; Juang, C. Hsein; Zhang, Junrong
2017-08-01
Orientation statistics are prone to bias when surveyed with the scanline mapping technique in which the observed probabilities differ, depending on the intersection angle between the fracture and the scanline. This bias leads to 1D frequency statistical data that are poorly representative of the 3D distribution. A widely accessible estimator named after Terzaghi was developed to estimate 3D frequencies from 1D biased observations, but the estimation accuracy is limited for fractures at narrow intersection angles to scanlines (termed the blind zone). Although numerous works have concentrated on accuracy with respect to the blind zone, accuracy outside the blind zone has rarely been studied. This work contributes to the limited investigations of accuracy outside the blind zone through a qualitative assessment that deploys a mathematical derivation of the Terzaghi equation in conjunction with a quantitative evaluation that uses fracture simulation and verification of natural fractures. The results show that the estimator does not provide a precise estimate of 3D distributions and that the estimation accuracy is correlated with the grid size adopted by the estimator. To explore the potential for improving accuracy, the particular grid size producing maximum accuracy is identified from 168 combinations of grid sizes and two other parameters. The results demonstrate that the 2° × 2° grid size provides maximum accuracy for the estimator in most cases when applied outside the blind zone. However, if the global sample density exceeds 0.5°⁻², then maximum accuracy occurs at a grid size of 1° × 1°.
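For reference, a minimal sketch of the Terzaghi weighting itself, with a cap near the blind zone; the cap value and the pole-vector input format are assumptions, not the paper's recommendations:

```python
import numpy as np

def terzaghi_weights(poles, scanline, max_weight=10.0):
    """Terzaghi bias correction: weight each observed fracture by
    1/|cos(theta)|, where theta is the angle between the scanline
    direction and the fracture pole (normal). Weights are capped for
    fractures near the blind zone, a common practical safeguard."""
    scanline = scanline / np.linalg.norm(scanline)
    poles = poles / np.linalg.norm(poles, axis=1, keepdims=True)
    cos_theta = np.abs(poles @ scanline)
    return 1.0 / np.maximum(cos_theta, 1.0 / max_weight)

# Hypothetical unit poles (x, y, z) of fractures intersected by a
# horizontal scanline pointing north (0, 1, 0)
poles = np.array([[0.0, 0.9, 0.44], [0.7, 0.1, 0.71], [0.0, 0.1, 0.99]])
print(terzaghi_weights(poles, np.array([0.0, 1.0, 0.0])))
```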
NASA Astrophysics Data System (ADS)
Sepúlveda, J.; Hoyos Ortiz, C. D.
2017-12-01
An adequate quantification of precipitation over land is critical for many societal applications including agriculture, hydroelectricity generation, water supply, and risk management associated with extreme events. Rain gauges, the traditional method for precipitation estimation, are excellent for estimating the volume of liquid water during a particular precipitation event, but they cannot fully capture the high spatial variability of the phenomenon, which is a requirement for almost all practical applications. On the other hand, the weather radar, an active remote sensing sensor, provides a proxy for rainfall with fine spatial resolution and adequate temporal sampling; however, it does not measure surface precipitation. In order to fully exploit the capabilities of the weather radar, it is necessary to develop quantitative precipitation estimation (QPE) techniques combining radar information with in-situ measurements. Different QPE methodologies are explored and adapted to local observations in a highly complex terrain region in tropical Colombia using a C-band radar and a relatively dense network of rain gauges and disdrometers. One important result is that the expressions reported in the literature for extratropical locations are not representative of the conditions found in the tropical region studied. In addition to reproducing the state-of-the-art techniques, a new multi-stage methodology based on radar-derived variables and disdrometer data is proposed in order to achieve the best QPE possible. The main motivation for this new methodology is that most traditional QPE methods do not directly take into account the different uncertainty sources involved in the process. The main advantage of the multi-stage model compared to traditional models is that it allows assessing and quantifying the uncertainty in the surface rain rate estimation. The sub-hourly rainfall estimations using the multi-stage methodology are realistic compared to observed data in spite of the many sources of uncertainty, including the sampling volume, the different physical principles of the sensors, the incomplete understanding of the microphysics of precipitation and, most importantly, the rapidly varying droplet size distribution.
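For context, the conventional single-stage radar QPE is a power-law Z-R conversion; a minimal sketch with Marshall-Palmer coefficients, which, per the abstract, fit tropical conditions poorly and should be refit locally (the paper's multi-stage model is more elaborate):

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Classical power-law Z-R conversion, Z = a * R**b, solved for R (mm/h).
    The Marshall-Palmer coefficients (a=200, b=1.6) are mid-latitude values;
    a and b should be refit from local disdrometer observations."""
    z_linear = 10.0 ** (dbz / 10.0)   # dBZ -> linear reflectivity, mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)

print(rain_rate_from_dbz(np.array([20.0, 35.0, 50.0])))  # light to intense rain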
Quantitative Assessment of Cancer Risk from Exposure to Diesel Engine Emissions
Quantitative estimates of lung cancer risk from exposure to diesel engine emissions were developed using data from three chronic bioassays with Fischer 344 rats. Human target organ dose was estimated with the aid of a comprehensive dosimetry model. This model accounted for rat-hum...
A QUANTITATIVE APPROACH FOR ESTIMATING EXPOSURE TO PESTICIDES IN THE AGRICULTURAL HEALTH STUDY
We developed a quantitative method to estimate chemical-specific pesticide exposures in a large prospective cohort study of over 58,000 pesticide applicators in North Carolina and Iowa. An enrollment questionnaire was administered to applicators to collect basic time- and inten...
Quantification of Microbial Phenotypes
Martínez, Verónica S.; Krömer, Jens O.
2016-01-01
Metabolite profiling technologies have improved to generate close to quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and the current challenges to generate fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. Here we explain how to estimate Gibbs energy under physiological conditions, including examples of the estimations, and the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) these two methods are complementary to each other; and (2) there is a need to take into account Gibbs energy errors. Better estimations of metabolic phenotypes will be obtained when further constraints are included in the analysis. PMID:27941694
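A worked example of the Gibbs energy estimate under physiological conditions, using the standard relation dG = dG0' + RT ln Q; the reaction and all concentration values are illustrative only:

```python
import math

R = 8.314e-3    # kJ mol^-1 K^-1
T = 310.15      # K, physiological temperature

# Transformed standard Gibbs energy and metabolite concentrations (molar)
# for a generic reaction A -> B; values are illustrative only.
dG0_prime = 5.0            # kJ/mol
conc_A, conc_B = 2.0e-3, 1.0e-5

# dG = dG0' + RT ln([B]/[A]); a negative value constrains the reaction
# to the forward direction under these conditions.
dG = dG0_prime + R * T * math.log(conc_B / conc_A)
print(f"dG = {dG:.2f} kJ/mol -> {'forward feasible' if dG < 0 else 'infeasible'}")
```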
Well-to-refinery emissions and net-energy analysis of China's crude-oil supply
NASA Astrophysics Data System (ADS)
Masnadi, Mohammad S.; El-Houjeiri, Hassan M.; Schunack, Dominik; Li, Yunpo; Roberts, Samori O.; Przesmitzki, Steven; Brandt, Adam R.; Wang, Michael
2018-03-01
Oil is China's second-largest energy source, so it is essential to understand the country's greenhouse gas emissions from crude-oil production. Chinese crude supply is sourced from numerous major global petroleum producers. Here, we use a per-barrel well-to-refinery life-cycle analysis model with data derived from hundreds of public and commercial sources to model the Chinese crude mix and the upstream carbon intensities and energetic productivity of China's crude supply. We generate a carbon-denominated supply curve representing Chinese crude-oil supply from 146 oilfields in 20 countries. The selected fields are estimated to emit between 1.5 and 46.9 g CO2eq MJ⁻¹ of oil, with volume-weighted average emissions of 8.4 g CO2eq MJ⁻¹. These estimates are higher than some existing databases, illustrating the importance of bottom-up models to support life-cycle analysis databases. This study provides quantitative insight into China's energy policy and the economic and environmental implications of China's oil consumption.
Dewaraja, Yuni K.; Frey, Eric C.; Sgouros, George; Brill, A. Bertrand; Roberson, Peter; Zanzonico, Pat B.; Ljungberg, Michael
2012-01-01
In internal radionuclide therapy, a growing interest in voxel-level estimates of tissue-absorbed dose has been driven by the desire to report radiobiologic quantities that account for the biologic consequences of both spatial and temporal nonuniformities in these dose estimates. This report presents an overview of 3-dimensional SPECT methods and requirements for internal dosimetry at both regional and voxel levels. Combined SPECT/CT image-based methods are emphasized, because the CT-derived anatomic information allows one to address multiple technical factors that affect SPECT quantification while facilitating the patient-specific voxel-level dosimetry calculation itself. SPECT imaging and reconstruction techniques for quantification in radionuclide therapy are not necessarily the same as those designed to optimize diagnostic imaging quality. The current overview is intended as an introduction to an upcoming series of MIRD pamphlets with detailed radionuclide-specific recommendations intended to provide best-practice SPECT quantification–based guidance for radionuclide dosimetry. PMID:22743252
Braun, Sabine; Schindler, Christian; Leuzinger, Sebastian
2010-09-01
For a quantitative estimate of the ozone effect on vegetation, reliable models for ozone uptake through the stomata are needed. Because of the analogy between ozone uptake and transpiration, it is possible to utilize measurements of water loss such as sap flow for quantification of ozone uptake. This technique was applied in three beech (Fagus sylvatica) stands in Switzerland. A canopy conductance was calculated from sap flow velocity and normalized to values between 0 and 1. It represents mainly stomatal conductance, as the boundary layer resistance in forests is usually small. Based on this relative conductance, stomatal functions describing the dependence on light, temperature, vapour pressure deficit and soil moisture were derived using multivariate nonlinear regression. These functions were validated by comparison with conductance values directly estimated from sap flow. The results corroborate the current flux parameterization for beech used in the DO3SE model. Copyright © 2010 Elsevier Ltd. All rights reserved.
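A hedged sketch of a Jarvis-type multiplicative conductance model of the kind used in DO3SE-style flux parameterizations; the functional forms and all constants below are illustrative, not those fitted in the study:

```python
import math

def relative_conductance(par, temp_c, vpd_kpa, swc_rel):
    """Jarvis-type multiplicative model: g_rel = f_light * f_temp * f_vpd * f_swc,
    each factor bounded in [0, 1]. Parameter values are illustrative."""
    f_light = 1.0 - math.exp(-0.006 * par)        # PAR in umol m-2 s-1
    t_min, t_opt, t_max = 5.0, 21.0, 35.0         # deg C
    f_temp = 0.0
    if t_min < temp_c < t_max:
        bt = (t_max - t_opt) / (t_opt - t_min)
        f_temp = (((temp_c - t_min) / (t_opt - t_min))
                  * ((t_max - temp_c) / (t_max - t_opt)) ** bt)
    f_vpd = min(1.0, max(0.1, 1.0 - 0.3 * (vpd_kpa - 1.0)))  # linear VPD decline
    f_swc = min(1.0, max(0.0, swc_rel))           # relative plant-available water
    return f_light * f_temp * f_vpd * f_swc

print(relative_conductance(par=800.0, temp_c=22.0, vpd_kpa=1.5, swc_rel=0.9))
```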
Marker-Assisted Introgression in Backcross Breeding Programs
Visscher, P. M.; Haley, C. S.; Thompson, R.
1996-01-01
The efficiency of marker-assisted introgression in backcross populations derived from inbred lines was investigated by simulation. Background genotypes were simulated assuming that a genetic model of many genes of small effects in coupling phase explains the observed breed difference and variance in backcross populations. Markers were efficient in introgression backcross programs for simultaneously introgressing an allele and selecting for the desired genomic background. Using a marker spacing of 10-20 cM gave an advantage of one to two backcross generations selection relative to random or phenotypic selection. When the position of the gene to be introgressed is uncertain, for example because its position was estimated from a trait gene mapping experiment, a chromosome segment should be introgressed that is likely to include the allele of interest. Even for relatively precisely mapped quantitative trait loci, flanking markers or marker haplotypes should cover ~10-20 cM around the estimated position of the gene, to ensure that the allele frequency does not decline in later backcross generations. PMID:8978075
NASA Astrophysics Data System (ADS)
Kim, Ho Sung
2013-12-01
A quantitative method for estimating an expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables, viz examiner's expertise, examinee's expertise achieved, assessment task difficulty and examinee's performance, was developed for the complex assessment applicable to final year project thesis assessment including peer assessment. A guide map can be generated by the method for finding expected uncertainties prior to the assessment implementation with a given set of variables. It employs a scale for visualisation of expertise levels, derivation of which is based on quantified clarities of mental images for levels of the examiner's expertise and the examinee's expertise achieved. To identify the relevant expertise areas that depend on the complexity in assessment format, a graphical continuum model was developed. The continuum model consists of assessment task, assessment standards and criterion for the transition towards the complex assessment owing to the relativity between implicitness and explicitness and is capable of identifying areas of expertise required for scale development.
Wu, Pei-Hsin; Cheng, Cheng-Chieh; Wu, Ming-Long; Chao, Tzu-Cheng; Chung, Hsiao-Wen; Huang, Teng-Yi
2014-01-01
The dual echo steady-state (DESS) sequence has been shown successful in achieving fast T2 mapping with good precision. Under-estimation of T2, however, becomes increasingly prominent as the flip angle decreases. In 3D DESS imaging, therefore, the derived T2 values would become a function of the slice location in the presence of non-ideal slice profile of the excitation RF pulse. Furthermore, the pattern of slice-dependent variation in T2 estimates is dependent on the RF pulse waveform. Multi-slice 2D DESS imaging provides better inter-slice consistency, but the signal intensity is subject to integrated effects of within-slice distribution of the actual flip angle. Consequently, T2 measured using 2D DESS is prone to inaccuracy even at the designated flip angle of 90°. In this study, both phantom and human experiments demonstrate the above phenomena in good agreement with model prediction. © 2013.
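For orientation, a commonly used high-flip-angle approximation relates the two DESS echoes to T2; it is this relation that degrades at low flip angles, consistent with the underestimation described above. A sketch under that assumption, not the study's fitting procedure:

```python
import math

def dess_t2_estimate(s1, s2, tr_ms, te_ms):
    """High-flip-angle approximation for DESS T2 mapping:
    S2/S1 ~ exp(-2 (TR - TE) / T2). At low flip angles this relation no
    longer holds and T2 is underestimated."""
    return -2.0 * (tr_ms - te_ms) / math.log(s2 / s1)

# Hypothetical echo intensities and timing (ms)
print(f"T2 ~ {dess_t2_estimate(s1=100.0, s2=35.0, tr_ms=20.0, te_ms=5.0):.1f} ms")
```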
Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory
NASA Astrophysics Data System (ADS)
Deyi, Feng; Ichikawa, M.
1989-11-01
In this paper, the various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of the time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, similarities in temporal variation of seismic activity and seismic gaps can be examined and, on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and another is based on the fuzzy equivalent relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards in different terms can be estimated. This paper mainly deals with medium-short-term precursors observed in Japan and China.
[Exploring novel hyperspectral band and key index for leaf nitrogen accumulation in wheat].
Yao, Xia; Zhu, Yan; Feng, Wei; Tian, Yong-Chao; Cao, Wei-Xing
2009-08-01
The objectives of the present study were to explore new sensitive spectral bands and ratio spectral indices based on precise analysis of ground-based hyperspectral information, and then develop a regression model for estimating leaf N accumulation per unit soil area (LNA) in winter wheat (Triticum aestivum L.). Three field experiments were conducted with different N rates and cultivar types in three consecutive growing seasons, and time-course measurements were taken on canopy hyperspectral reflectance and LNA under the various treatments. By adopting the method of reduced precise sampling, the detailed ratio spectral indices (RSI) within the range of 350-2500 nm were constructed, and the quantitative relationships between LNA (g N m⁻²) and RSI(i, j) were analyzed. It was found that several key spectral bands and spectral indices were suitable for estimating LNA in wheat, and the spectral parameter RSI(990, 720) was the most reliable indicator for LNA in wheat. The regression model based on the best RSI was formulated as y = 5.095x - 6.040, with R2 of 0.814. From testing of the derived equations with independent experiment data, the model on RSI(990, 720) had R2 of 0.847 and RRMSE of 24.7%. Thus, it is concluded that the present hyperspectral parameter RSI(990, 720) and the derived regression model can be reliably used for estimating LNA in winter wheat. These results provide the feasible key bands and a technical basis for developing a portable instrument for monitoring wheat nitrogen status and for extracting useful spectral information from remote sensing images.
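The reported index and regression can be applied directly; a minimal sketch assuming the conventional ratio definition RSI(i, j) = Ri/Rj, with hypothetical reflectances:

```python
import numpy as np

def lna_from_reflectance(r990, r720):
    """Leaf N accumulation (g N m-2) from the ratio spectral index
    RSI(990, 720) = R990 / R720, using the regression reported in the
    abstract: LNA = 5.095 * RSI - 6.040 (R2 = 0.814)."""
    rsi = np.asarray(r990) / np.asarray(r720)
    return 5.095 * rsi - 6.040

# Hypothetical canopy reflectances at 990 nm and 720 nm
print(lna_from_reflectance(r990=[0.46, 0.50], r720=[0.28, 0.25]))
```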
APPLICATION OF RADIOISOTOPES TO THE QUANTITATIVE CHROMATOGRAPHY OF FATTY ACIDS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budzynski, A.Z.; Zubrzycki, Z.J.; Campbell, I.G.
1959-10-31
The paper reports work done on the use of I-131, Zn-65, Sr-90, Zr-95, and Ce-144 for the quantitative estimation of fatty acids on paper chromatograms, and for determination of the degree of unsaturation of components of resolved fatty acid mixtures. I-131 is used to iodinate unsaturated fatty acids, and the amount of such acids is determined from the radiochromatogram. The degree of unsaturation of fatty acids is determined by estimation of the specific activity of spots. The other isotopes have been examined from the point of view of their suitability for estimation of total amounts of fatty acids by formation of insoluble radioactive soaps held on the chromatogram. In particular, work is reported on the quantitative estimation of saturated fatty acids by measurement of the activity of their insoluble soaps with radioactive metals. Various quantitative relationships are described between the amount of fatty acid in a spot and such parameters as radiometrically estimated spot length, width, maximum intensity, and integrated spot activity. A convenient detection apparatus for taking radiochromatograms is also described. In conjunction with conventional chromatographic methods for resolving fatty acids, the method permits estimation of the composition of fatty acid mixtures obtained from biological material.
Feared consequences of panic attacks in panic disorder: a qualitative and quantitative analysis.
Raffa, Susan D; White, Kamila S; Barlow, David H
2004-01-01
Cognitions are hypothesized to play a central role in panic disorder (PD). Previous studies have used questionnaires to assess cognitive content, focusing on prototypical cognitions associated with PD; however, few studies have qualitatively examined cognitions associated with the feared consequences of panic attacks. The purpose of this study was to conduct a qualitative and quantitative analysis of feared consequences of panic attacks. The initial, qualitative analysis resulted in the development of 32 categories of feared consequences. The categories were derived from participant responses to a standardized, semi-structured question (n = 207). Five expert-derived categories were then utilized to quantitatively examine the relationship between cognitions and indicators of PD severity. Cognitions did not predict PD severity; however, correlational analyses indicated some predictive validity to the expert-derived categories. The qualitative analysis identified additional areas of patient-reported concern not included in previous research that may be important in the assessment and treatment of PD.
Lippi, Vittorio; Mergner, Thomas
2017-01-01
The high complexity of the human posture and movement control system represents challenges for diagnosis, therapy, and rehabilitation of neurological patients. We envisage that engineering-inspired, model-based approaches will help to deal with the high complexity of the human posture control system. Since the methods of system identification and parameter estimation are limited to systems with only a few DoF, our laboratory proposes a heuristic approach that increases complexity step by step when creating hypothetical human-derived control systems in humanoid robots. Each system is then compared with the human control in the same test bed, a posture control laboratory. The human-derived control builds upon the identified disturbance estimation and compensation (DEC) mechanism, whose main principle is to support execution of commanded poses or movements by compensating for external or self-produced disturbances such as gravity effects. In previous robotic implementations, up to 3 interconnected DEC control modules were used in modular control architectures, separately for the sagittal plane or the frontal body plane, and successfully passed balancing and movement tests. In this study we hypothesized that conflict-free movement coordination between the robot's sagittal and frontal body planes emerges simply from the physical embodiment, not necessarily requiring full-body control. Experiments were performed in the 14 DoF robot Lucy Posturob (i) demonstrating that the mechanical coupling from the robot's body suffices to coordinate the controls in the two planes when the robot produces movements and balancing responses in the intermediate plane, (ii) providing quantitative characterization of the interaction dynamics between body planes, including frequency response functions (FRFs) as used in human postural control analysis, and (iii) witnessing postural and control stability when all DoFs are challenged together, with the emergence of inter-segmental coordination in squatting movements. These findings represent an important step toward controlling more complex sensorimotor functions, such as walking, in the robot in the future.
NASA Astrophysics Data System (ADS)
Carrer, Dominique; Ceamanos, Xavier; Moparthy, Suman; Six, Bruno; Roujean, Jean-Louis; Descloitres, Jacques
2017-04-01
The major difficulty in properly detecting the aerosol signal from remote sensing observations in the visible range lies in cleanly separating the scattering components of the atmospheric layer and the ground surface. This turns out to be quite challenging over bright targets like deserts. We propose a method that combines the directional and temporal dimensions of the satellite signal through the use of a semi-empirical BRDF kernel-driven model of the surface/atmosphere coupled system. As a result, a simultaneous retrieval of surface albedo and aerosol properties (optical thickness) is performed. The method proves useful for tracking anthropogenic aerosol emissions in the troposphere, monitoring volcanic ash release, and above all estimating dust events over bright targets. The proposed method is applied to MSG/SEVIRI slots in three spectral bands (VIS, NIR, SWIR) at a frequency of 15 min and for a geographic coverage that encompasses Europe, Africa, and South America. The SEVIRI-derived aerosol optical depth (AOD) estimates compare favourably with measurements from well-distributed AERONET stations. The comparison with state-of-the-art MODIS-derived (Moderate Resolution Imaging Spectro-radiometer) and MISR-derived (Multi-angle Imaging Spectro-Radiometer) AOD products falls within 20% accuracy, while revealing the capability of AERUS-GEO to depict additional aerosol events quantitatively. The higher temporal frequency of AOD products from GEO offers new insights to better estimate the aerosol radiative forcing (ARF) compared with low Earth orbit (LEO) satellite data. The AERUS-GEO algorithm was implemented at the ICARE/AERIS Data Center based in Lille, France (http://www.icare.univ-lille1.fr), which has operationally disseminated a daily AOD product (AERUS-GEO) at 670 nm over the MSG disk since 2014. In addition to the NRT AOD product, a long-term reprocessing is also available over the last decade.
Hawkins, Charles P
2006-08-01
Water resources managers and conservation biologists need reliable, quantitative, and directly comparable methods for assessing the biological integrity of the world's aquatic ecosystems. Large-scale assessments are constrained by the lack of consistency in the indicators used to assess biological integrity and our current inability to translate between indicators. In theory, assessments based on estimates of taxonomic completeness, i.e., the proportion of expected taxa that were observed (observed/expected, O/E) are directly comparable to one another and should therefore allow regionally and globally consistent summaries of the biological integrity of freshwater ecosystems. However, we know little about the true comparability of O/E assessments derived from different data sets or how well O/E assessments perform relative to other indicators in use. I compared the performance (precision, bias, and sensitivity to stressors) of O/E assessments based on five different data sets with the performance of the indicators previously applied to these data (three multimetric indices, a biotic index, and a hybrid method used by the state of Maine). Analyses were based on data collected from U.S. stream ecosystems in North Carolina, the Mid-Atlantic Highlands, Maine, and Ohio. O/E assessments resulted in very similar estimates of mean regional conditions compared with most other indicators once these indicators' values were standardized relative to reference-site means. However, other indicators tended to be biased estimators of O/E, a consequence of differences in their response to natural environmental gradients and sensitivity to stressors. These results imply that, in some cases, it may be possible to compare assessments derived from different indicators by standardizing their values (a statistical approach to data harmonization). In situations where it is difficult to standardize or otherwise harmonize two or more indicators, O/E values can easily be derived from existing raw sample data. With some caveats, O/E should provide more directly comparable assessments of biological integrity across regions than is possible by harmonizing values of a mix of indicators.
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and by in vivo measurements of the anterior and posterior eye segments as well as in skin imaging. The new estimator shows superior performance and clearer image contrast.
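The core of the approach is a MAP decision over a precomputed likelihood. The sketch below illustrates only the estimator structure: the study precomputes P(measurement | true retardation, SNR) by Monte Carlo from the JM-OCT model, whereas here a simple Gaussian likelihood whose width shrinks with SNR stands in as an assumed placeholder.

```python
import numpy as np

# Hedged sketch of a MAP retardation estimator. The Gaussian likelihood below
# is an assumed stand-in for the paper's Monte-Carlo-precomputed PDF table.

true_grid = np.linspace(0.0, np.pi, 512)          # candidate true retardations (rad)

def likelihood(measured, true_vals, snr):
    sigma = 0.5 / np.sqrt(snr)                    # assumed noise model, not the paper's
    return np.exp(-0.5 * ((measured - true_vals) / sigma) ** 2)

def map_estimate(measured, snr, prior=None):
    post = likelihood(measured, true_grid, snr)
    if prior is not None:                         # optional prior over true retardation
        post = post * prior
    return true_grid[np.argmax(post)]             # MAP = argmax of (likelihood x prior)

print(f"MAP retardation: {map_estimate(measured=0.8, snr=20.0):.3f} rad")
```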
Monte Carlo simulations within avalanche rescue
NASA Astrophysics Data System (ADS)
Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg
2016-04-01
Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
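In the same spirit, the sketch below shows the probing-depth half of the problem: sample burial depths from an assumed distribution (a lognormal here, purely illustrative; the study would use empirically informed inputs) and pick the shallowest probing depth that reaches a target detection probability.

```python
import numpy as np

# Hedged Monte Carlo sketch for probe-line searches: the burial-depth
# distribution and the 95% detection target are illustrative assumptions.

rng = np.random.default_rng(0)
depths = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=100_000)  # burial depth, m

def detection_probability(probe_depth_m):
    # A victim is detectable if buried no deeper than the probing depth.
    return np.mean(depths <= probe_depth_m)

candidate = np.arange(0.5, 4.01, 0.05)
p = np.array([detection_probability(d) for d in candidate])
optimal = candidate[np.argmax(p >= 0.95)]   # first depth reaching 95% detection
print(f"probing depth for 95% detection: {optimal:.2f} m")
```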
Bispectral infrared forest fire detection and analysis using classification techniques
NASA Astrophysics Data System (ADS)
Aranda, Jose M.; Melendez, Juan; de Castro, Antonio J.; Lopez, Fernando
2004-01-01
Infrared cameras are well established as a useful tool for fire detection, but their use for quantitative forest fire measurements faces difficulties due to the complex spatial and spectral structure of fires. In this work it is shown that some of these difficulties can be overcome by applying classification techniques, a standard tool for the analysis of satellite multispectral images, to bispectral images of fires. Images were acquired by two cameras that operate in the medium infrared (MIR) and thermal infrared (TIR) bands. They provide simultaneous and co-registered images, calibrated in brightness temperatures. The MIR-TIR scatterplot of these images can be used to classify the scene into different fire regions (background, ashes, and several ember and flame regions). It is shown that classification makes it possible to obtain quantitative measurements of physical fire parameters such as rate of spread, ember temperature, and radiated power in the MIR and TIR bands. An estimate of total radiated power and heat release per unit area is also made and compared with values derived from heat of combustion and fuel consumption.
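A minimal sketch of the scatterplot-classification step follows: each pixel becomes a (MIR, TIR) brightness-temperature pair and an unsupervised clusterer partitions the scene into fire regions. The synthetic image, the choice of k-means, and the number of classes are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hedged sketch: cluster co-registered MIR/TIR brightness temperatures in their
# 2-D scatterplot. Synthetic data; four classes assumed for illustration.

rng = np.random.default_rng(1)
mir = rng.normal(400, 80, size=(128, 128))     # MIR brightness temperature (K)
tir = rng.normal(350, 40, size=(128, 128))     # TIR brightness temperature (K)

features = np.column_stack([mir.ravel(), tir.ravel()])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
class_map = labels.reshape(mir.shape)          # e.g. background / ashes / embers / flames

for k in range(4):
    sel = class_map == k
    print(f"class {k}: {sel.mean():6.1%} of pixels, "
          f"mean MIR {mir[sel].mean():.0f} K, mean TIR {tir[sel].mean():.0f} K")
```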
Wisnieff, Cynthia; Ramanan, Sriram; Olesik, John; Gauthier, Susan; Wang, Yi; Pitt, David
2014-01-01
Purpose: Within multiple sclerosis (MS) lesions, iron is present in chronically activated microglia. Thus, iron detection with MRI might provide a biomarker for chronic inflammation within lesions. Here, we examine contributions of iron and myelin to the magnetic susceptibility of lesions on quantitative susceptibility mapping (QSM). Methods: Fixed MS brain tissue was assessed with MRI including gradient echo data, which was processed to generate field (phase), R2* and QSM. Five lesions were sectioned and evaluated by immunohistochemistry for the presence of myelin, iron and microglia/macrophages. Two of the lesions had an elemental analysis for iron concentration mapping, and their phospholipid content was estimated from the difference between the iron and QSM data. Results: Three of the five lesions had substantial iron deposition that was associated with microglia and positive susceptibility values. For the two lesions with elemental analysis, the QSM-derived phospholipid content maps were consistent with myelin-labeled histology. Conclusion: Positive susceptibility values with respect to water indicate the presence of iron in MS lesions, though both demyelination and iron deposition contribute to QSM.
Derivation and evaluation of a labeled hedonic scale.
Lim, Juyun; Wood, Alison; Green, Barry G
2009-11-01
The objective of this study was to develop a semantically labeled hedonic scale (LHS) that would yield ratio-level data on the magnitude of liking/disliking of sensation equivalent to that produced by magnitude estimation (ME). The LHS was constructed by having 49 subjects who were trained in ME rate the semantic magnitudes of 10 common hedonic descriptors within a broad context of imagined hedonic experiences that included tastes and flavors. The resulting bipolar scale is statistically symmetrical around neutral and has a unique semantic structure. The LHS was evaluated quantitatively by comparing it with ME and the 9-point hedonic scale. The LHS yielded nearly identical ratings to those obtained using ME, which implies that its semantic labels are valid and that it produces ratio-level data equivalent to ME. Analyses of variance conducted on the hedonic ratings from the LHS and the 9-point scale gave similar results, but the LHS showed much greater resistance to ceiling effects and yielded normally distributed data, whereas the 9-point scale did not. These results indicate that the LHS has significant semantic, quantitative, and statistical advantages over the 9-point hedonic scale.
Analytical-Based Partial Volume Recovery in Mouse Heart Imaging
NASA Astrophysics Data System (ADS)
Dumouchel, Tyler; deKemp, Robert A.
2011-02-01
Positron emission tomography (PET) is a powerful imaging modality that has the ability to yield quantitative images of tracer activity. Physical phenomena such as photon scatter, photon attenuation, random coincidences and spatial resolution limit quantification potential and must be corrected to preserve the accuracy of reconstructed images. This study focuses on correcting the partial volume effects that arise in mouse heart imaging when resolution is insufficient to resolve the true tracer distribution in the myocardium. The correction algorithm is based on fitting 1D profiles through the myocardium in gated PET images to derive myocardial contours along with blood, background and myocardial activity. This information is interpolated onto a 2D grid and convolved with the tomograph's point spread function to derive regional recovery coefficients enabling partial volume correction. The point spread function was measured by placing a line source inside a small animal PET scanner. PET simulations were created based on noise properties measured from a reconstructed PET image and on the digital MOBY phantom. The algorithm can estimate the myocardial activity to within 5% of the truth when different wall thicknesses, backgrounds and noise properties are encountered that are typical of healthy FDG mouse scans. The method also significantly improves partial volume recovery in simulated infarcted tissue. The algorithm offers a practical solution to the partial volume problem without the need for co-registered anatomic images and offers a basis for improved quantitative 3D heart imaging.
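The recovery-coefficient logic can be sketched in one dimension: convolve an idealized wall profile with the scanner PSF and take the ratio of blurred to true wall activity. Wall thickness, activity levels, and PSF width below are assumed values of plausible magnitude, not the study's measurements.

```python
import numpy as np

# Hedged 1-D sketch of partial-volume recovery: a myocardial wall between blood
# and background is blurred by a Gaussian PSF; the peak ratio gives a regional
# recovery coefficient. All dimensions and activities are assumed.

dx = 0.05                                   # mm per sample
x = np.arange(-10, 10, dx)
wall, blood, background = 1.0, 0.3, 0.1     # relative activities (assumed)
thickness = 1.2                             # wall thickness, mm (typical order)

profile = np.where(np.abs(x) < thickness / 2, wall,
                   np.where(x < 0, blood, background))

fwhm = 1.8                                  # assumed PSF FWHM, mm
sigma = fwhm / 2.355
psf = np.exp(-0.5 * (x / sigma) ** 2)
psf /= psf.sum()                            # normalize so counts are preserved

blurred = np.convolve(profile, psf, mode="same")
rc = blurred.max() / wall                   # recovery coefficient
print(f"recovery coefficient: {rc:.2f}; corrected peak = measured / {rc:.2f}")
```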
NASA Astrophysics Data System (ADS)
Hoover, Herbert L.; Marsaud, Serge G.
1986-05-01
Tinted ophthalmic lenses are used primarily for eye comfort in a brightly lit environment. An ancillary benefit is the attenuation of ultraviolet radiation. Some national product standards specify quantitative limits for ultraviolet transmittances. Such limits ought to be founded on quantitative estimates of solar irradiances of ocular tissues, with actinic effectiveness taken into account. We use the equations of Green and coworkers for direct and diffuse solar irradiance at the earth's surface to calculate average sky and ground spectral radiances. We use the geometric factors derived by us for the coupling of radiation from these sources to the human cornea. Actinically weighted corneal spectral irradiances integrated over wavelength and time yield peak irradiances and accumulated exposure doses that are compared with recommended exposure limits. This provides the maximal effective ultraviolet transmittances of tinted ophthalmic lenses such that these exposure limits will not be exceeded in the selected exposure environment. The influences on corneal irradiation of such exposure parameters as solar zenith angle, altitude of the exposure site, characteristics of atmospheric aerosols, and ground reflectances are illustrated. The relationships between the effective transmittance (which is a function of the environmental radiation and any actinic weighting function) and readily determined characteristics of the lens itself, viz., its mean transmittance and a selected spectral transmittance, are derived for three lens transmittance curves. Limits of lens transmittance for the UV-B and UV-A wavelength regions are presented for several representative exposure sites in Europe and the U.S.A.
Myers, Matthew R; Giridhar, Dushyanth
2011-06-01
In the characterization of high-intensity focused ultrasound (HIFU) systems, it is desirable to know the intensity field within a tissue phantom. Infrared (IR) thermography is a potentially useful method for inferring this intensity field from the heating pattern within the phantom. However, IR measurements require an air layer between the phantom and the camera, making inferences about the thermal field in the absence of the air complicated. For example, convection currents can arise in the air layer and distort the measurements relative to the phantom-only situation. Quantitative predictions of intensity fields based upon IR temperature data are also complicated by axial and radial diffusion of heat. In this paper, mathematical expressions are derived for use with IR temperature data acquired at times long enough that noise is a relatively small fraction of the temperature trace, but small enough that convection currents have not yet developed. The relations were applied to simulated IR data sets derived from computed pressure and temperature fields. The simulation was performed in a finite-element geometry involving a HIFU transducer sonicating upward in a phantom toward an air interface, with an IR camera mounted atop an air layer, looking down at the heated interface. It was found that, when compared to the intensity field determined directly from acoustic propagation simulations, intensity profiles could be obtained from the simulated IR temperature data with an accuracy of better than 10%, at pre-focal, focal, and post-focal locations.
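A minimal sketch of the underlying early-time relation: before diffusion and convection matter, the heat equation gives rho * c * dT/dt = 2 * alpha * I, so intensity follows from the initial temperature slope. This is the textbook limiting case, not the paper's full correction expressions; all material constants below are assumed.

```python
import numpy as np

# Hedged sketch: infer local intensity from the early-time slope of an IR
# temperature trace via I = rho * c_p * (dT/dt) / (2 * alpha). Constants and
# the temperature trace are illustrative assumptions.

rho = 1000.0     # phantom density, kg/m^3 (assumed)
c_p = 4180.0     # specific heat, J/(kg K) (assumed, water-like)
alpha = 8.0      # pressure absorption coefficient, Np/m (assumed)

t = np.linspace(0, 0.5, 50)                       # s, early-time window
temperature = 22.0 + 1.6 * t                      # synthetic IR trace, deg C

slope = np.polyfit(t, temperature, 1)[0]          # dT/dt, K/s
intensity = rho * c_p * slope / (2.0 * alpha)     # W/m^2
print(f"inferred intensity: {intensity / 1e4:.1f} W/cm^2")
```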
Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2009-01-01
The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…
Quantitative endoscopy: initial accuracy measurements.
Truitt, T O; Adelman, R A; Kelly, D H; Willging, J P
2000-02-01
The geometric optics of an endoscope can be used to determine the absolute size of an object in an endoscopic field without knowing the actual distance from the object. This study explores the accuracy of a technique that estimates absolute object size from endoscopic images. Quantitative endoscopy involves calibrating a rigid endoscope to produce size estimates from 2 images taken with a known traveled distance between the images. The heights of 12 samples, ranging in size from 0.78 to 11.80 mm, were estimated with this calibrated endoscope. Backup distances of 5 mm and 10 mm were used for comparison. The mean percent error for all estimated measurements when compared with the actual object sizes was 1.12%. The mean errors for 5-mm and 10-mm backup distances were 0.76% and 1.65%, respectively. The mean errors for objects <2 mm and > or =2 mm were 0.94% and 1.18%, respectively. Quantitative endoscopy estimates endoscopic image size to within 5% of the actual object size. This method remains promising for quantitatively evaluating object size from endoscopic images. It does not require knowledge of the absolute distance of the endoscope from the object; rather, only the distance traveled by the endoscope between images.
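One plausible reconstruction of the two-image geometry, assuming a pinhole model: an object of size s at distance z images with height y = f*s/z, so two images separated by a known backup distance d fix both z and s. The pixel-unit focal length is a hypothetical calibration constant; the authors' actual calibration may differ.

```python
# Hedged sketch of size-from-two-images under a pinhole model. The focal length
# f_px is a hypothetical calibration constant, not the paper's value.

def object_size_mm(y1_px: float, y2_px: float, backup_mm: float, f_px: float) -> float:
    """Absolute object size from image heights before (y1) and after (y2) backup."""
    if y2_px >= y1_px:
        raise ValueError("image height must shrink after backing up")
    z_mm = backup_mm * y2_px / (y1_px - y2_px)   # initial working distance
    return y1_px * z_mm / f_px                    # s = y1 * z / f

# Example: 10 mm backup, heights 220 px then 180 px, assumed f = 900 px.
print(f"{object_size_mm(220, 180, 10.0, 900.0):.2f} mm")
```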
Nayan, Nazrul Anuar; Risman, Nur Sabrina; Jaafar, Rosmina
2016-07-27
Among the vital signs of acutely ill hospital patients, respiratory rate (RR) is a highly accurate predictor of health deterioration. This study proposes a system that consists of a passive and non-invasive single-lead electrocardiogram (ECG) acquisition module and an ECG-derived respiration (EDR) algorithm in the working prototype of a mobile application. Before estimating the RR that produces the EDR rate, ECG signals were evaluated based on a signal quality index (SQI). The SQI algorithm was validated quantitatively using the PhysioNet/Computing in Cardiology Challenge 2011 training data set. The RR extraction algorithm was validated on 40 records from the MIT PhysioNet Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC II) data set. The estimated RR showed a mean absolute error (MAE) of 1.4 bpm compared with the "gold standard" RR. The proposed system was used to record 20 ECGs of healthy subjects and obtained the estimated RR with an MAE of 0.7 bpm. Results indicate that the proposed hardware and algorithm could replace the manual counting method, uncomfortable nasal airflow sensors, chest bands, and impedance pneumotachography often used in hospitals. The system also takes advantage of the prevalence of smartphone usage and increases the monitoring frequency of the current ECG of patients with critical illnesses.
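The abstract does not spell out the EDR algorithm; the sketch below implements one common EDR approach (respiratory amplitude modulation of detected R peaks, with the dominant respiratory-band frequency taken as the RR) purely as an illustration of the signal chain.

```python
import numpy as np
from scipy.signal import find_peaks

# Hedged sketch of a common EDR approach, not necessarily the authors':
# R-peak amplitudes form a respiration-modulated series whose dominant
# frequency in the respiratory band gives the RR. The ECG is synthetic.

fs = 250.0                                   # Hz, assumed ECG sampling rate
t = np.arange(0, 60, 1 / fs)
# Synthetic stand-in: narrow "QRS-like" spikes amplitude-modulated at 0.25 Hz
ecg = (np.sin(2 * np.pi * 1.2 * t) ** 20) * (1 + 0.3 * np.sin(2 * np.pi * 0.25 * t))

peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), height=0.3)
amp = ecg[peaks]                             # R-peak amplitude series
beat_rate = len(peaks) / t[-1]               # mean beats per second

spec = np.abs(np.fft.rfft(amp - amp.mean()))
freqs = np.fft.rfftfreq(len(amp), d=1 / beat_rate)
band = (freqs >= 0.1) & (freqs <= 0.5)       # 6-30 breaths/min
rr_hz = freqs[band][np.argmax(spec[band])]
print(f"estimated RR: {rr_hz * 60:.1f} breaths/min")
```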
NASA Astrophysics Data System (ADS)
Castillo-López, Elena; Dominguez, Jose Antonio; Pereda, Raúl; de Luis, Julio Manuel; Pérez, Ruben; Piña, Felipe
2017-10-01
Accurate determination of water depth is indispensable in multiple aspects of civil engineering (dock construction, dikes, submarine outfalls, trench control, etc.). Different accuracies are required to determine the type of atmospheric correction most appropriate for depth estimation. Accuracy in bathymetric information is highly dependent on the atmospheric correction applied to the imagery. The reduction of effects such as glint and cross-track illumination in homogeneous shallow-water areas improves the results of the depth estimations. The aim of this work is to assess the best atmospheric correction method for the estimation of depth in shallow waters, considering that reflectance values cannot be greater than 1.5% because otherwise the bottom would not be seen. This paper addresses the use of hyperspectral imagery for quantitative bathymetric mapping and explores one of the most common problems when attempting to extract depth information in conditions of variable water types and bottom reflectances. The current work assesses the accuracy of some classical bathymetric algorithms (Polcyn-Lyzenga, Philpot, Benny-Dawson, Hamilton, principal component analysis) when four different atmospheric correction methods are applied and water depth is derived. No atmospheric correction is valid for all types of coastal waters, but in heterogeneous shallow water the 6S atmospheric correction model offers good results.
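Of the algorithms named, the Polcyn-Lyzenga family is the easiest to sketch: depth is regressed on log-transformed, deep-water-corrected band reflectances, z = a0 + sum_i a_i * ln(R_i - R_deep,i). The band values and calibration depths below are synthetic stand-ins.

```python
import numpy as np

# Hedged sketch of Lyzenga-style log-linear depth retrieval with two bands.
# Synthetic deep-water-corrected reflectances and calibration depths.

rng = np.random.default_rng(2)
n = 200
true_a = np.array([4.0, -3.0, -2.5])                  # [a0, a1, a2] used to fake data

x = np.exp(rng.uniform(-4, -1, size=(n, 2)))          # R_i - R_deep,i per band (positive)
z = true_a[0] + np.log(x) @ true_a[1:] + rng.normal(0, 0.1, n)  # calibration depths, m

design = np.column_stack([np.ones(n), np.log(x)])     # [1, ln(R1-Rd1), ln(R2-Rd2)]
coef, *_ = np.linalg.lstsq(design, z, rcond=None)
print("fitted [a0, a1, a2]:", np.round(coef, 2))      # approximately recovers true_a
```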
Kim, Hyung Chul; Wallington, Timothy J; Sullivan, John L; Keoleian, Gregory A
2015-08-18
Lightweighting is a key strategy to improve vehicle fuel economy. Assessing the life-cycle benefits of lightweighting requires a quantitative description of the use-phase fuel consumption reduction associated with mass reduction. We present novel methods of estimating mass-induced fuel consumption (MIF) and fuel reduction values (FRVs) from fuel economy and dynamometer test data in the U.S. Environmental Protection Agency (EPA) database. In the past, FRVs have been measured using experimental testing. We demonstrate that FRVs can be mathematically derived from coast down coefficients in the EPA vehicle test database avoiding additional testing. MIF and FRVs calculated for 83 different 2013 MY vehicles are in the ranges 0.22-0.43 and 0.15-0.26 L/(100 km 100 kg), respectively, and increase to 0.27-0.53 L/(100 km 100 kg) with powertrain resizing to retain equivalent vehicle performance. We show how use-phase fuel consumption can be estimated using MIF and FRVs in life cycle assessments (LCAs) of vehicle lightweighting from total vehicle and vehicle component perspectives with, and without, powertrain resizing. The mass-induced fuel consumption model is illustrated by estimating lifecycle greenhouse gas (GHG) emission benefits from lightweighting a grille opening reinforcement component using magnesium or carbon fiber composite for 83 different vehicle models.
NASA Technical Reports Server (NTRS)
Parrish, D. D.; Lamarque, J.-F.; Naik, V.; Horowitz, L.; Shindell, D. T.; Staehelin, J.; Derwent, R.; Cooper, O. R.; Tanimoto, H.; Volz-Thomas, A.;
2014-01-01
Two recent papers have quantified long-term ozone (O3) changes observed at northern midlatitude sites that are believed to represent baseline (here understood as representative of continental to hemispheric scales) conditions. Three chemistry-climate models (NCAR CAM-chem, GFDL-CM3, and GISS-E2-R) have calculated retrospective tropospheric O3 concentrations as part of the Atmospheric Chemistry and Climate Model Intercomparison Project and Coupled Model Intercomparison Project Phase 5 model intercomparisons. We present an approach for quantitative comparisons of model results with measurements for seasonally averaged O3 concentrations. There is considerable qualitative agreement between the measurements and the models, but there are also substantial and consistent quantitative disagreements. Most notably, models (1) overestimate absolute O3 mixing ratios, on average by approximately 5 to 17 ppbv in the year 2000, (2) capture only approximately 50% of the O3 changes observed over the past five to six decades, and little of the observed seasonal differences, and (3) capture approximately 25 to 45% of the rate of change of the long-term changes. These disagreements are significant enough to indicate that only limited confidence can be placed on estimates of present-day radiative forcing of tropospheric O3 derived from modeled historic concentration changes and on predicted future O3 concentrations. Evidently our understanding of tropospheric O3, or the incorporation of chemistry and transport processes into current chemical climate models, is incomplete. Modeled O3 trends approximately parallel estimated trends in anthropogenic emissions of NOx, an important O3 precursor, while measured O3 changes increase more rapidly than these emission estimates.
Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien
2018-01-01
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
NASA Astrophysics Data System (ADS)
Ali, E. S. M.; Spencer, B.; McEwen, M. R.; Rogers, D. W. O.
2015-02-01
In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy—i.e. 100 keV (orthovoltage) to 25 MeV—using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990-6003) for 10-30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ~0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce the cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. Photon cross section uncertainty is shown to be smaller than the current qualitative 'envelope of uncertainty' of the order of 1-2%, as given by Hubbell (1999 Phys. Med. Biol. 44 R1-22).
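The scaling-factor step admits a closed form: minimizing chi^2 = sum_i ((m_i - s*c_i)/sigma_i)^2 over s gives a weighted least-squares estimate. The sketch below uses synthetic numbers of the same size and precision as the first dataset, not the actual measurements.

```python
import numpy as np

# Hedged sketch of the optimum energy-independent scaling factor: weighted
# least squares between measured (m) and calculated (c) transmissions.
# Synthetic data with an injected 0.3% offset, mimicking 140 points at ~0.5%.

rng = np.random.default_rng(3)
c = rng.uniform(0.1, 0.9, size=140)                # calculated transmission values
sigma = 0.005 * c                                  # ~0.5% (k=1) uncertainty
m = 0.997 * c + rng.normal(0, 1, c.size) * sigma   # measurements with a 0.3% bias

w = 1.0 / sigma**2
s = np.sum(w * m * c) / np.sum(w * c**2)           # chi^2-minimizing scale factor
s_err = 1.0 / np.sqrt(np.sum(w * c**2))            # its standard uncertainty
print(f"scale factor s = {s:.4f} +/- {s_err:.4f}; implied bias {abs(s - 1):.2%}")
```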
Improvement of Reliability of Diffusion Tensor Metrics in Thigh Skeletal Muscles.
Keller, Sarah; Chhabra, Avneesh; Ahmed, Shaheen; Kim, Anne C; Chia, Jonathan M; Yamamura, Jin; Wang, Zhiyue J
2018-05-01
Quantitative diffusion tensor imaging (DTI) of skeletal muscles is challenging due to bias in DTI metrics, such as fractional anisotropy (FA) and mean diffusivity (MD), related to insufficient signal-to-noise ratio (SNR). This study compares the bias of DTI metrics in skeletal muscles obtained via pixel-based and region-of-interest (ROI)-based analysis. DTI of the thigh muscles was conducted on a 3.0-T system in N = 11 volunteers using a fat-suppressed single-shot spin-echo echo planar imaging (SS SE-EPI) sequence with eight repetitions (number of signal averages (NSA) = 4 or 8 for each repeat). The SNR was calculated for different NSAs and estimated for the composite images combining all data (effective NSA = 48) as the standard reference. The bias of MD and FA derived by pixel-based and ROI-based quantification was compared at different NSAs. An "intra-ROI diffusion direction dispersion angle (IRDDDA)" was calculated to assess the uniformity of diffusion within the ROI. Using the standard reference image with NSA = 48, the ROI-based and pixel-based measurements agreed for FA and MD. Larger disagreements were observed for the pixel-based quantification at NSA = 4. MD was less sensitive than FA to the noise level. The IRDDDA decreased with higher NSA. At NSA = 4, ROI-based FA showed a lower average bias (0.9% vs. 37.4%) and narrower 95% limits of agreement compared with the pixel-based method. The ROI-based estimation of FA is less prone to bias than the pixel-based estimation when SNR is low. The IRDDDA can be applied as a quantitative quality measure to assess the reliability of ROI-based DTI metrics.
A Molecular Clock Infers Heterogeneous Tissue Age Among Patients with Barrett’s Esophagus
Wong, Chao-Jen; Hazelton, William D.; Kaz, Andrew M.; Willis, Joseph E.; Grady, William M.; Luebeck, E. Georg
2016-01-01
Biomarkers that drift differentially with age between normal and premalignant tissues, such as Barrett’s esophagus (BE), have the potential to improve the assessment of a patient’s cancer risk by providing quantitative information about how long a patient has lived with the precursor (i.e., dwell time). In the case of BE, which is a metaplastic precursor to esophageal adenocarcinoma (EAC), such biomarkers would be particularly useful because EAC risk may change with BE dwell time and it is generally not known how long a patient has lived with BE when a patient is first diagnosed with this condition. In this study we first describe a statistical analysis of DNA methylation data (both cross-sectional and longitudinal) derived from tissue samples from 50 BE patients to identify and validate a set of 67 CpG dinucleotides in 51 CpG islands that undergo age-related methylomic drift. Next, we describe how this information can be used to estimate a patient’s BE dwell time. We introduce a Bayesian model that incorporates longitudinal methylomic drift rates, patient age, and methylation data from individually paired BE and normal squamous tissue samples to estimate patient-specific BE onset times. Our application of the model to 30 sporadic BE patients’ methylomic profiles first exposes a wide heterogeneity in patient-specific BE onset times. Furthermore, independent application of this method to a cohort of 22 familial BE (FBE) patients reveals significantly earlier mean BE onset times. Our analysis supports the conjecture that differential methylomic drift occurs in BE (relative to normal squamous tissue) and hence allows quantitative estimation of the time that a BE patient has lived with BE.
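A stripped-down, point-estimate version of the drift-clock idea (the paper itself fits a full Bayesian model) is sketched below: if methylation at a drift CpG grows roughly linearly at rate r per year after onset, the BE-minus-normal difference divided by r estimates dwell time, and onset is age minus dwell time. All rates and levels are invented stand-ins.

```python
import numpy as np

# Hedged point-estimate sketch of a methylation drift clock. Rates and
# methylation levels are synthetic stand-ins for the 67 validated drift CpGs;
# the paper's Bayesian model is not reproduced here.

rng = np.random.default_rng(4)
n_cpg = 67
rate = rng.uniform(0.002, 0.01, n_cpg)            # drift rate per CpG, /year (assumed)
true_dwell = 18.0                                  # years (unknown in practice)
beta_be = rate * true_dwell + rng.normal(0, 0.01, n_cpg)  # BE methylation gain
beta_normal = np.zeros(n_cpg)                      # matched squamous baseline

dwell_per_cpg = (beta_be - beta_normal) / rate     # per-CpG dwell-time estimates
dwell = np.median(dwell_per_cpg)                   # robust pooled estimate
age = 62.0
print(f"estimated dwell time {dwell:.1f} y; BE onset at age {age - dwell:.1f}")
```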
Extracting and Converting Quantitative Data into Human Error Probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuan Q. Tran; Ronald L. Boring; Jeffrey C. Joe
2007-08-01
This paper discusses a proposed method using a combination of advanced statistical approaches (e.g., meta-analysis, regression, structural equation modeling) that will not only convert different empirical results into a common metric for scaling individual PSF effects, but will also examine the complex interrelationships among PSFs. Furthermore, the paper discusses how the derived statistical estimates (i.e., effect sizes) can be mapped onto an HRA method (e.g., SPAR-H) to generate HEPs that can then be used in probabilistic risk assessment (PRA). The paper concludes with a discussion of the benefits of using the academic literature to assist HRA analysts in generating sound HEPs and HRA developers in validating current HRA models and formulating new HRA models.
NASA Astrophysics Data System (ADS)
Klein, P.; Hirth, M.; Gröber, S.; Kuhn, J.; Müller, A.
2014-07-01
Smartphones and tablets are used as experimental tools and for quantitative measurements in two traditional laboratory experiments for undergraduate physics courses. The Doppler effect is analyzed and the speed of sound is determined with an accuracy of about 5% using ultrasonic frequency and two smartphones, which serve as rotating sound emitter and stationary sound detector. Emphasis is put on the investigation of measurement errors in order to judge experimentally derived results and to sensitize undergraduate students to the methods of error estimates. The distance dependence of the illuminance of a light bulb is investigated using an ambient light sensor of a mobile device. Satisfactory results indicate that the spectrum of possible smartphone experiments goes well beyond those already published for mechanics.
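For the Doppler experiment, the analysis reduces to a simple relation: a source circling at tangential speed v shifts a tone f0 between f0*c/(c-v) and f0*c/(c+v), so for v << c the spread is df ≈ 2*f0*v/c and c ≈ 2*f0*v/df. The sketch below simulates that measurement with assumed, illustrative numbers.

```python
import math

# Hedged sketch of the rotating-emitter Doppler analysis; all numbers are
# illustrative assumptions, not the article's measurements.

f0 = 19_000.0                 # ultrasonic tone, Hz
radius = 0.5                  # m, assumed arm length
rev_per_s = 3.0               # assumed rotation rate
v = 2 * math.pi * radius * rev_per_s          # tangential speed, m/s

c_true = 343.0
f_max = f0 * c_true / (c_true - v)            # simulated observed extremes
f_min = f0 * c_true / (c_true + v)

c_est = 2 * f0 * v / (f_max - f_min)          # first-order estimate of c
print(f"v = {v:.1f} m/s, estimated speed of sound = {c_est:.1f} m/s")
```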
Linear elastic properties derivation from microstructures representative of transport parameters.
Hoang, Minh Tan; Bonnet, Guy; Tuan Luu, Hoang; Perrot, Camille
2014-06-01
It is shown that three-dimensional periodic unit cells (3D PUC) representative of transport parameters involved in the description of long wavelength acoustic wave propagation and dissipation through real foam samples may also be used as a standpoint to estimate their macroscopic linear elastic properties. Application of the model yields quantitative agreement between numerical homogenization results, available literature data, and experiments. Key contributions of this work include recognizing the importance of membranes and properties of the base material for the physics of elasticity. The results of this paper demonstrate that a 3D PUC may be used to understand and predict not only the sound absorbing properties of porous materials but also their transmission loss, which is critical for sound insulation problems.
NASA Technical Reports Server (NTRS)
Pratt, J. R.
1981-01-01
Eight glycidyl amines were prepared by alkylating the parent amine with epichlorohydrin to form the chlorohydrin, followed by cyclization with aqueous NaOH. Three of these compounds contained propargyl groups for postcuring studies. A procedure for quantitatively estimating the epoxy content of these glycidyl amines was employed for purity determination. Two diamine carbonates and several model propargyl compounds were prepared. The synthesis of three new diamines, two of which contain propargyloxy groups and another with a sec-butyl group, is in progress. These materials are at the dinitro stage, ready for the final hydrogenation step. Four aromatic diamines were synthesized for mutagenic testing purposes. One of these compounds rapidly decomposes on exposure to air.
NASA Technical Reports Server (NTRS)
Tomaine, R. L.
1976-01-01
Flight test data from a large 'crane' type helicopter were collected and processed for the purpose of identifying vehicle rigid body stability and control derivatives. The process consisted of using digital and Kalman filtering techniques for state estimation and Extended Kalman filtering for parameter identification, utilizing a least squares algorithm for initial derivative and variance estimates. Data were processed for indicated airspeeds from 0 m/sec to 152 m/sec. Pulse, doublet and step control inputs were investigated. Digital filter frequency did not have a major effect on the identification process, while the initial derivative estimates and the estimated variances had an appreciable effect on many derivative estimates. The major derivatives identified agreed fairly well with analytical predictions and engineering experience. Doublet control inputs provided better results than pulse or step inputs.
Rhee, Chang-Hoon; Shin, Sang Min; Choi, Yong-Seok; Yamaguchi, Tetsutaro; Maki, Koutaro; Kim, Yong-Il; Kim, Seong-Sik; Park, Soo-Byung; Son, Woo-Sung
2015-12-01
From computed tomographic images, the dentocentral synchondrosis can be identified in the second cervical vertebra. It demarcates the border between the odontoid process and the body of the 2nd cervical vertebra and can serve as a good model for the prediction of bone and forensic age. Nevertheless, until now, there has been no age-estimation application of the 2nd cervical vertebra based on the dentocentral synchondrosis. In this study, statistical shape analysis was used to build bone and forensic age estimation regression models. Following the principles of statistical shape analysis and principal components analysis, we used cone-beam computed tomography (CBCT) to evaluate a Japanese population (35 males and 45 females, from 5 to 19 years old). The narrowest prediction intervals among the multivariate regression models were 19.63 for bone age and 2.99 for forensic age. There was no significant difference between form space and shape space in the bone and forensic age estimation models. However, in the gender comparison, the bone and forensic age estimation models for males had the higher explanatory power. This study derived an improved objective and quantitative method for bone and forensic age estimation based on only the 2nd, 3rd and 4th cervical vertebral shapes.
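The pipeline can be sketched generically: landmark configurations are reduced by PCA to shape-space scores, which feed a multivariate regression onto age. Everything below (landmark count, synthetic shapes, ages) is an invented stand-in; the study's landmarking of the vertebrae and its model selection are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Hedged sketch: shape landmarks -> PCA scores -> regression onto age.
# Synthetic landmarks and ages; purely illustrative of the pipeline.

rng = np.random.default_rng(5)
n_subj, n_landmarks = 80, 20
age = rng.uniform(5, 19, n_subj)                         # years, the study's range
shape_signal = np.outer(age, rng.normal(0, 1, 2 * n_landmarks))
landmarks = shape_signal * 0.01 + rng.normal(0, 0.05, (n_subj, 2 * n_landmarks))

scores = PCA(n_components=5).fit_transform(landmarks)    # shape-space coordinates
model = LinearRegression().fit(scores, age)              # multivariate age regression
pred = model.predict(scores)
print(f"in-sample RMSE: {np.sqrt(np.mean((pred - age) ** 2)):.2f} years")
```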
Sjoberg, Jeremiah P.; Birner, Thomas; Johnson, Richard H.
2017-07-26
Observational estimates of Kelvin wave momentum fluxes in the tropical lower stratosphere remain challenging. Here we extend a method based on linear wave theory to estimate daily time series of these momentum fluxes from high-resolution radiosonde data. Daily time series are produced for sounding sites operated by the US Department of Energy (DOE) and from the recent Dynamics of the Madden–Julian Oscillation (DYNAMO) field campaign. Our momentum flux estimates are found to be robust to different data sources and processing and in quantitative agreement with estimates from prior studies. Testing the sensitivity to vertical resolution, the estimated momentum fluxes are found to be most sensitive to vertical resolution coarser than 1 km, largely due to overestimation of the vertical wavelength. Climatological analysis is performed over a selected 11-year span of data from DOE Atmospheric Radiation Measurement (ARM) radiosonde sites. Analyses of this 11-year span of data reveal the expected seasonal cycle of momentum flux maxima in boreal winter and minima in boreal summer, and variability associated with the quasi-biennial oscillation, with maxima during the easterly phase and minima during the westerly phase. Comparison between periods with active convection that is either strongly or weakly associated with the Madden–Julian Oscillation (MJO) suggests that the MJO provides a nontrivial increase in lowermost stratospheric momentum fluxes.
Power-law modeling based on least-squares minimization criteria.
Hernández-Bermejo, B; Fairén, V; Sorribas, A
1999-10-01
The power-law formalism has been successfully used as a modeling tool in many applications. The resulting models, either as Generalized Mass Action or as S-systems models, allow one to characterize the target system and to simulate its dynamical behavior in response to external perturbations and parameter changes. The power-law formalism was first derived as a Taylor series approximation in logarithmic space for kinetic rate-laws. The special characteristics of this approximation produce an extremely useful systemic representation that allows a complete system characterization. Furthermore, its parameters have a precise interpretation as local sensitivities of each of the individual processes and as rate-constants. This facilitates a qualitative discussion and a quantitative estimation of their possible values in relation to the kinetic properties. Following this interpretation, parameter estimation is also possible by relating the systemic behavior to the underlying processes. Without leaving the general formalism, in this paper we suggest deriving the power-law representation in an alternative way that uses least-squares minimization. The resulting power-law mimics the target rate-law over a wider range of concentration values than the classical power-law. Although the implications of this alternative approach remain to be established, our results show that the predicted steady state using the least-squares power-law is closer to the actual steady state of the target system.
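The contrast between the two derivations can be made concrete for a Michaelis-Menten rate law: the classical power-law matches the local log-log slope at one operating point, while the least-squares version fits log v against log C over a whole concentration range. A minimal sketch, with assumed parameter values:

```python
import numpy as np

# Hedged sketch: classical (Taylor, local) vs. least-squares power-law fits
# v ~ k * C^g to a Michaelis-Menten rate law. Parameters are assumed.

Vmax, Km, C0 = 1.0, 0.5, 0.4
C = np.linspace(0.05, 2.0, 200)
v = Vmax * C / (Km + C)

# Classical power-law: kinetic order = local log-log slope at operating point C0
g_taylor = Km / (Km + C0)
k_taylor = (Vmax * C0 / (Km + C0)) / C0 ** g_taylor

# Least-squares power-law fitted over the full concentration range
g_lsq, log_k = np.polyfit(np.log(C), np.log(v), 1)
k_lsq = np.exp(log_k)

for name, k, g in [("Taylor", k_taylor, g_taylor), ("LSQ", k_lsq, g_lsq)]:
    err = np.sqrt(np.mean((k * C ** g - v) ** 2))
    print(f"{name}: v ~ {k:.3f} * C^{g:.3f}, RMS error {err:.4f}")
```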
A methodology to estimate uncertainty for emission projections through sensitivity analysis.
Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación
2015-04-01
Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As uncertainties in emission projections are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, specifically to the 12 highest-emitting sectors for greenhouse gas and air pollutant emissions. Examples of the methodology's application for two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend under a low economic-growth perspective and official figures for 2010, showing very good performance. A solid understanding and quantification of uncertainties related to atmospheric emission inventories and projections provide useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is interest in how they could be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information to derive nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.
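A minimal sketch of the band-construction idea, under invented growth rates and perturbation bounds: project a sector's emissions as activity times emission factor, perturb each driving factor within its assumed range, and take the envelope of the recomputed trajectories as the nonstatistical uncertainty band.

```python
import numpy as np

# Hedged sketch of sensitivity-analysis-derived uncertainty bands for a single
# sector. Growth rates and perturbation bounds below are invented.

years = np.arange(2010, 2021)
base_ef = 0.8 * 0.99 ** (years - years[0])            # emission factor, declining

def projection(growth_delta, ef_delta):
    activity = 100.0 * (1.02 + growth_delta) ** (years - years[0])  # activity index
    return activity * base_ef * (1 + ef_delta)

scenarios = [projection(da, de)
             for da in (-0.01, 0.0, 0.01)             # +/-1 pt growth sensitivity
             for de in (-0.05, 0.0, 0.05)]            # +/-5% emission-factor sensitivity
band_lo, band_hi = np.min(scenarios, axis=0), np.max(scenarios, axis=0)
print(f"2020 emissions: {projection(0, 0)[-1]:.1f} "
      f"(band {band_lo[-1]:.1f}-{band_hi[-1]:.1f})")
```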
Catanuto, Giuseppe; Taher, Wafa; Rocco, Nicola; Catalano, Francesca; Allegra, Dario; Milotta, Filippo Luigi Maria; Stanco, Filippo; Gallo, Giovanni; Nava, Maurizio Bruno
2018-03-20
Breast shape is usually described with qualitative assessments (full, flat, ptotic) or with estimates, such as volume or distances between reference points, that cannot describe it reliably. We quantitatively describe breast shape with two parameters derived from a statistical methodology known as principal component analysis (PCA). We created a heterogeneous dataset of breast shapes acquired with a commercial infrared 3-dimensional scanner on which PCA was performed. We plotted on a Cartesian plane the two highest values of PCA for each breast (principal components 1 and 2). The methodology was tested on a preoperative and postoperative surgical case, and test-retest reliability was assessed by two operators. The first two principal components derived from PCA are able to characterize the shape of the breasts included in the dataset. The test-retest demonstrated that different operators obtain very similar values of PCA. The system is also able to identify major changes in the preoperative and postoperative stages of a two-stage reconstruction. Even minor changes were correctly detected by the system. This methodology can reliably describe the shape of a breast. An expert operator and a newly trained operator can reach similar results in test-retest validation. Once developed and after further validation, this methodology could be employed as a tool for outcome evaluation, auditing, and benchmarking.
Dong, Yi; Wang, Wen-Ping; Lin, Pan; Fan, Peili; Mao, Feng
2016-01-01
We performed a prospective study to evaluate the value of contrast-enhanced ultrasound (CEUS) in the quantitative evaluation of renal cortex perfusion in patients suspected of early diabetic nephropathy (DN), with the estimated GFR (MDRD equation) as the gold standard. The study protocol was approved by the hospital review board; each patient gave written informed consent. Our study included 46 clinically confirmed early DN patients (21 males and 25 females, mean age 55.6 ± 4.14 years). After intravenous bolus injection of 1 ml of sulfur hexafluoride microbubble ultrasound contrast agent, real-time CEUS of the renal cortex was performed using a 2-5 MHz convex probe. Time-intensity curves (TICs) and quantitative indexes were created with Qlab software. Receiver operating characteristic (ROC) curves were used to derive diagnostic criteria for the CEUS quantitative indexes, and their diagnostic efficiencies were compared with the resistance index (RI) and peak systolic velocity (PSV) of renal segmental arteries by chi-square test. The control group included forty-five healthy volunteers. Differences were considered statistically significant at P < 0.05. Changes in the area under the curve (AUC) and derived peak intensity (DPI) were statistically significant (P < 0.05). DPI less than 12 and AUC greater than 1400 had high utility in DN, with 71.7% and 67.3% sensitivity and 77.8% and 80.0% specificity, respectively. These results were significantly better than those obtained with RI and PSV, which showed no significant difference in the early stage of DN (P > 0.05). CEUS might be helpful to improve early diagnosis of DN through quantitative analyses. AUC and DPI might be valuable quantitative indexes.
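The two indexes behind the reported cut-offs are straightforward to compute from a TIC, as sketched below with a synthetic gamma-variate curve; the exact index definitions in the Qlab software may differ, so the cut-offs (DPI < 12, AUC > 1400) are applied here only for illustration.

```python
import numpy as np

# Hedged sketch: compute AUC and peak intensity from a renal-cortex TIC and
# compare against the abstract's cut-offs. The TIC is synthetic, and these
# simple definitions may differ from the vendor software's.

t = np.linspace(0, 120, 600)                        # s after bolus
tic = 10.5 * (t / 18.0) ** 2 * np.exp(-(t - 18.0) / 25.0)  # synthetic intensity curve

dpi = tic.max()                                     # (derived) peak intensity
auc = np.trapz(tic, t)                              # area under the curve

suspicious_for_dn = (dpi < 12) and (auc > 1400)     # abstract's reported cut-offs
print(f"DPI = {dpi:.1f}, AUC = {auc:.0f}, early-DN pattern: {suspicious_for_dn}")
```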
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-07
... the Draft Guidance of Applying Quantitative Data To Develop Data-Derived Extrapolation Factors for.... SUMMARY: EPA is announcing that Eastern Research Group, Inc. (ERG), a contractor to the EPA, will convene an independent panel of experts to review the draft document, ``Guidance for Applying Quantitative...
Quantitative Verse in a Quantity-Insensitive Language: Baif's "vers mesures."
ERIC Educational Resources Information Center
Bullock, Barbara E.
1997-01-01
Analysis of the quantitative metrical verse of French Renaissance poet Jean-Antoine de Baif finds that the metrics, often seen as unscannable and using an incomprehensible phonetic orthography, derive largely from a system that is accentual, with the orthography permitting the poet to encode quantitative distinctions that coincide with the meter.…
Quantitative imaging methods in osteoporosis.
Oei, Ling; Koromani, Fjorda; Rivadeneira, Fernando; Zillikens, M Carola; Oei, Edwin H G
2016-12-01
Osteoporosis is characterized by decreased bone mass and quality resulting in an increased fracture risk. Quantitative imaging methods are critical in the diagnosis and follow-up of treatment effects in osteoporosis. Prior radiographic vertebral fractures and bone mineral density (BMD), as a quantitative parameter derived from dual-energy X-ray absorptiometry (DXA), are among the strongest known predictors of future osteoporotic fractures. Therefore, current clinical decision making relies heavily on accurate assessment of these imaging features. Further, novel quantitative techniques are being developed to appraise additional characteristics of osteoporosis, including three-dimensional bone architecture with quantitative computed tomography (QCT). Dedicated high-resolution (HR) CT equipment is available to enhance image quality. At the other end of the spectrum, by utilizing post-processing techniques such as the trabecular bone score (TBS), information on three-dimensional architecture can be derived from DXA images. Further developments in magnetic resonance imaging (MRI) seem promising to not only capture bone micro-architecture but also characterize processes at the molecular level. This review provides an overview of various quantitative imaging techniques based on different radiological modalities utilized in clinical osteoporosis care and research.
Use of a web-based dietary assessment tool in early pregnancy.
Mullaney, L; O'Higgins, A C; Cawley, S; Kennedy, R; McCartney, D; Turner, M J
2016-05-01
Maternal diet is critical to fetal development and lifelong health outcomes. In this context, dietary quality indices in pregnancy should be explicitly underpinned by data correlating food intake patterns with nutrient intakes known to be important for gestation. Our aim was to assess the correlation between dietary quality scores derived from a novel online dietary assessment tool (DAT) and nutrient intake data derived from the previously validated Willett Food Frequency Questionnaire (WFFQ). In total, 524 women completed the validated semi-quantitative WFFQ and the online DAT questionnaire in their first trimester. Spearman correlation and Kruskal-Wallis tests were used to test associations between energy-adjusted and energy-unadjusted nutrient intakes derived from the WFFQ, and diet and nutrition scores obtained from the DAT. Positive correlations were observed between respondents' diet and nutrition scores derived from the online DAT, and their folate, vitamin B12, iron, calcium, zinc and iodine intakes/MJ of energy consumed derived from the WFFQ (all P < 0.001). Negative correlations were observed between participants' diet and nutrition scores and their total energy intake (P = 0.02), and their percentage energy from fat, saturated fat, and non-milk extrinsic sugars (NMES) (all P ≤ 0.001). Median dietary fibre, beta carotene, folate, vitamin C and vitamin D intakes derived from the WFFQ generally increased across quartiles of diet and nutrition score (all P < 0.001). Scores generated by this web-based DAT correlate with important nutrient intakes in pregnancy, supporting its use in estimating overall dietary quality among obstetric populations.
qPIPSA: Relating enzymatic kinetic parameters and interaction fields
Gabdoulline, Razif R; Stein, Matthias; Wade, Rebecca C
2007-01-01
Background The simulation of metabolic networks in quantitative systems biology requires the assignment of enzymatic kinetic parameters. Experimentally determined values are often not available and therefore computational methods to estimate these parameters are needed. It is possible to use the three-dimensional structure of an enzyme to perform simulations of a reaction and derive kinetic parameters. However, this is computationally demanding and requires detailed knowledge of the enzyme mechanism. We have therefore sought to develop a general, simple and computationally efficient procedure to relate protein structural information to enzymatic kinetic parameters that allows consistency between the kinetic and structural information to be checked and estimation of kinetic constants for structurally and mechanistically similar enzymes. Results We describe qPIPSA: quantitative Protein Interaction Property Similarity Analysis. In this analysis, molecular interaction fields, for example, electrostatic potentials, are computed from the enzyme structures. Differences in molecular interaction fields between enzymes are then related to the ratios of their kinetic parameters. This procedure can be used to estimate unknown kinetic parameters when enzyme structural information is available and kinetic parameters have been measured for related enzymes or were obtained under different conditions. The detailed interaction of the enzyme with substrate or cofactors is not modeled and is assumed to be similar for all the proteins compared. The protein structure modeling protocol employed ensures that differences between models reflect genuine differences between the protein sequences, rather than random fluctuations in protein structure. Conclusion Provided that the experimental conditions and the protein structural models refer to the same protein state or conformation, correlations between interaction fields and kinetic parameters can be established for sets of related enzymes. Outliers may arise due to variation in the importance of different contributions to the kinetic parameters, such as protein stability and conformational changes. The qPIPSA approach can assist in the validation as well as estimation of kinetic parameters, and provide insights into enzyme mechanism. PMID:17919319
Hemming, Victoria; Walshe, Terry V; Hanea, Anca M; Fidler, Fiona; Burgman, Mark A
2018-01-01
Natural resource management uses expert judgement to estimate facts that inform important decisions. Unfortunately, expert judgement is often derived by informal and largely untested protocols, despite evidence that the quality of judgements can be improved with structured approaches. We attribute the lack of uptake of structured protocols to the dearth of illustrative examples that demonstrate how they can be applied within pressing time and resource constraints, while also improving judgements. In this paper, we demonstrate how the IDEA protocol for structured expert elicitation may be deployed to overcome operational challenges while improving the quality of judgements. The protocol was applied to the estimation of 14 future abiotic and biotic events on the Great Barrier Reef, Australia. Seventy-six participants with varying levels of expertise related to the Great Barrier Reef were recruited and allocated randomly to eight groups. Each participant provided their judgements using the four-step question format of the IDEA protocol ('Investigate', 'Discuss', 'Estimate', 'Aggregate') through remote elicitation. When the events were realised, the participant judgements were scored in terms of accuracy, calibration and informativeness. The results demonstrate that the IDEA protocol provides a practical, cost-effective, and repeatable approach to the elicitation of quantitative estimates and uncertainty via remote elicitation. We emphasise that i) the aggregation of diverse individual judgements into pooled group judgements almost always outperformed individuals, and ii) use of a modified Delphi approach helped to remove linguistic ambiguity, and further improved individual and group judgements. Importantly, the protocol encourages review, critical appraisal and replication, each of which is required if judgements are to be used in place of data in a scientific context. The results add to the growing body of literature that demonstrates the merit of using structured elicitation protocols. We urge decision-makers and analysts to use these insights and examples to improve the evidence base of expert judgement in natural resource management.
Estimation of methanogen biomass via quantitation of coenzyme M
Elias, Dwayne A.; Krumholz, Lee R.; Tanner, Ralph S.; Suflita, Joseph M.
1999-01-01
Determination of the role of methanogenic bacteria in an anaerobic ecosystem often requires quantitation of the organisms. Because of the extreme oxygen sensitivity of these organisms and the inherent limitations of cultural techniques, an accurate biomass value is very difficult to obtain. We standardized a simple method for estimating methanogen biomass in a variety of environmental matrices. In this procedure we used the thiol biomarker coenzyme M (CoM) (2-mercaptoethanesulfonic acid), which is known to be present in all methanogenic bacteria. A high-performance liquid chromatography-based method for detecting thiols in pore water (A. Vairavamurthy and M. Mopper, Anal. Chim. Acta 78:363–370, 1990) was modified in order to quantify CoM in pure cultures, sediments, and sewage water samples. The identity of the CoM derivative was verified by using liquid chromatography-mass spectroscopy. The assay was linear for CoM amounts ranging from 2 to 2,000 pmol, and the detection limit was 2 pmol of CoM/ml of sample. CoM was not adsorbed to sediments. The methanogens tested contained an average of 19.5 nmol of CoM/mg of protein and 0.39 ± 0.07 fmol of CoM/cell. Environmental samples contained an average of 0.41 ± 0.17 fmol/cell based on most-probable-number estimates. CoM was extracted by using 1% tri-(N)-butylphosphine in isopropanol. More than 90% of the CoM was recovered from pure cultures and environmental samples. We observed no interference from sediments in the CoM recovery process, and the method could be completed aerobically within 3 h. Freezing sediment samples resulted in 46 to 83% decreases in the amounts of detectable CoM, whereas freezing had no effect on the amounts of CoM determined in pure cultures. The method described here provides a quick and relatively simple way to estimate methanogenic biomass.
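The reported cell quota makes the biomass conversion easy to reproduce. A minimal sketch of that arithmetic, using the study's mean CoM content per cell and an illustrative measured concentration:

```python
# Minimal sketch of the biomass conversion implied by the abstract: a measured
# CoM concentration is turned into a methanogen cell-density estimate using the
# reported mean cell quota (0.39 fmol CoM per cell). The input value is illustrative.
COM_PER_CELL_FMOL = 0.39          # mean CoM content per cell (from the study)
COM_PER_MG_PROTEIN_NMOL = 19.5    # mean CoM per mg protein (from the study)

def cells_per_ml(com_pmol_per_ml: float) -> float:
    """Estimate methanogen cells/ml from CoM measured in pmol/ml."""
    com_fmol_per_ml = com_pmol_per_ml * 1000.0   # 1 pmol = 1000 fmol
    return com_fmol_per_ml / COM_PER_CELL_FMOL

# Example: 20 pmol CoM/ml of pore water (well above the 2 pmol/ml detection limit)
print(f"{cells_per_ml(20):.2e} cells/ml")   # ~5.1e4 cells/ml
```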
Uncertainty of quantitative microbiological methods of pharmaceutical analysis.
Gunar, O V; Sakhno, N G
2015-12-30
The total uncertainty of quantitative microbiological methods used in pharmaceutical analysis consists of several components. Analysis of the most important sources of variability in quantitative microbiological methods demonstrated no effect of culture media or plate-count technique on the estimation of microbial count, while a highly significant effect of other factors (type of microorganism, pharmaceutical product, and individual reading and interpretation errors) was established. The most appropriate method for statistical analysis of such data was ANOVA, which enabled not only the effects of individual factors but also their interactions to be estimated. Considering all the elements of uncertainty and combining them mathematically, the combined relative uncertainty of the test results was estimated both for the quantitative examination of non-sterile pharmaceuticals and for the microbial count technique without any product. These values did not exceed 35%, appropriate for traditional plate-count methods. Copyright © 2015 Elsevier B.V. All rights reserved.
Üstündağ, Özgür; Dinç, Erdal; Özdemir, Nurten; Tilkan, M Günseli
2015-01-01
In the development strategies of new drug products and generic drug products, the in-vitro dissolution behavior of oral dosage formulations is the most important indicator for the quantitative estimation of the efficiency and biopharmaceutical characteristics of drug substances. This compels scientists in the field to develop powerful analytical methods that give more reliable, precise and accurate results in the quantitative analysis and dissolution testing of drug formulations. In this context, two chemometric tools, partial least squares (PLS) and principal component regression (PCR), were developed for the simultaneous quantitative estimation and dissolution testing of zidovudine (ZID) and lamivudine (LAM) in a tablet dosage form. The results obtained in this study strongly encourage the use of these methods for quality control, routine analysis and dissolution testing of marketed tablets containing the ZID and LAM drugs.
Choi, Seo Yeon; Yang, Nuri; Jeon, Soo Kyung; Yoon, Tae Hyun
2014-09-01
In this study, we demonstrated the feasibility of a semi-quantitative approach for the estimation of cellular SiO2 nanoparticles (NPs), based on flow cytometry measurements of their normalized side scattering intensity. In order to improve our understanding of the quantitative aspects of cell-nanoparticle interactions, flow cytometry, transmission electron microscopy, and X-ray fluorescence experiments were carefully performed on HeLa cells exposed to SiO2 NPs with different core diameters, hydrodynamic sizes, and surface charges. Based on the observed relationships among the experimental data, a semi-quantitative method for estimating cellular SiO2 NPs from their normalized side scattering intensity and core diameters was proposed, which can be applied for the determination of cellular SiO2 NPs within their size-dependent linear ranges. © 2014 International Society for Advancement of Cytometry.
Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan
2010-09-01
Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genomes. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real-world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It performed reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvements in the precision of sample processing and qPCR reactions would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator for which a universal assay for fecal sources of that indicator exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
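A schematic sketch of the Monte Carlo idea described above, with assumed (not fitted) distributions for the false-information probabilities and precision error; the inversion below is an illustrative placeholder, not the authors' model:

```python
# Schematic Monte Carlo sketch: sample assay detection probability, a
# false-positive signal, and replicate measurement error from assumed
# distributions, invert a simple total-probability relation, and report the
# distribution of "true" marker concentrations.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
c_obs = 1.0e4                                # observed copies per reaction

p_true_pos = rng.beta(80, 20, n)             # P(signal | marker present), assumed
c_fp = rng.gamma(2.0, 100.0, n)              # false-positive signal (copies), assumed
meas = c_obs * rng.lognormal(0.0, 0.1, n)    # replicate precision error (~0.1 in log)

# Invert: observed ~ true * P(detect) + false-positive contribution
c_true = np.maximum(meas - c_fp, 0.0) / p_true_pos

lo, med, hi = np.percentile(c_true, [2.5, 50, 97.5])
print(f"true concentration: median {med:.0f}, 95% interval ({lo:.0f}, {hi:.0f})")
```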
NASA Astrophysics Data System (ADS)
Cifelli, R.; Chen, H.; Chandrasekar, V.
2017-12-01
A recent study by the State of California's Department of Water Resources has emphasized that the San Francisco Bay Area is at risk of catastrophic flooding. Therefore, accurate quantitative precipitation estimation (QPE) and forecast (QPF) are critical for protecting life and property in this region. Compared to rain gauges and meteorological satellites, ground-based radar has shown great advantages for high-resolution precipitation observations in both the space and time domains. In addition, polarization diversity shows great potential to characterize precipitation microphysics through identification of different hydrometeor types and their size and shape information. Currently, all the radars comprising the U.S. National Weather Service (NWS) Weather Surveillance Radar-1988 Doppler (WSR-88D) network are operating in dual-polarization mode. Enhancement of QPE is one of the main considerations of the dual-polarization upgrade. The San Francisco Bay Area is covered by two S-band WSR-88D radars, namely KMUX and KDAX. However, in complex terrain like the Bay Area, it is still challenging to obtain an optimal rainfall algorithm for a given set of dual-polarization measurements. In addition, the accuracy of rain rate estimates is contingent on additional factors such as bright band contamination, vertical profile of reflectivity (VPR) correction, and partial beam blockages. This presentation aims to improve radar QPE for the Bay Area using advanced dual-polarization rainfall methodologies. The benefit brought by the dual-polarization upgrade of the operational radar network is assessed. In addition, a pilot study of gap-filling X-band radar performance is conducted in support of regional QPE system development. This paper also presents a detailed comparison of the dual-polarization radar-derived rainfall products with various operational products including NSSL's Multi-Radar/Multi-Sensor (MRMS) system. Quantitative evaluation of the various rainfall products is achieved using rainfall measurements from a validation gauge network, which shows that the new dual-polarization methods can produce better QPE, and that the X-band radar has excellent potential to augment WSR-88D for rainfall monitoring in this region.
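To make the dual-polarization QPE idea concrete, here is a minimal blended estimator using common S-band textbook relations, R(Z) with Z = 300R^1.4 and R(Kdp) = 44.0 Kdp^0.822; the coefficients and the blending thresholds are illustrative assumptions, not the study's algorithm:

```python
# Minimal sketch of a blended dual-polarization rainfall estimator: use R(Kdp)
# where the differential-phase signal is strong (heavy rain) and fall back to
# R(Z) elsewhere. Coefficients are common S-band textbook values.
import numpy as np

def rain_rate(z_dbz: np.ndarray, kdp: np.ndarray) -> np.ndarray:
    """Rain rate (mm/h) from reflectivity (dBZ) and specific differential phase (deg/km)."""
    z_lin = 10.0 ** (z_dbz / 10.0)
    r_z = (z_lin / 300.0) ** (1.0 / 1.4)                 # Z = 300 R^1.4 (NEXRAD default)
    r_kdp = 44.0 * np.sign(kdp) * np.abs(kdp) ** 0.822   # common S-band R(Kdp)
    return np.where((kdp > 0.3) & (z_dbz > 40.0), r_kdp, r_z)

print(rain_rate(np.array([30.0, 48.0]), np.array([0.1, 1.5])))  # ~[2.4, 61.4] mm/h
```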
The Effect of Pickling on Blue Borscht Gelatin and Other Interesting Diffusive Phenomena.
ERIC Educational Resources Information Center
Davis, Lawrence C.; Chou, Nancy C.
1998-01-01
Presents some simple demonstrations that students can construct for themselves in class to learn the difference between diffusion and convection rates. Uses cabbage leaves and gelatin and focuses on diffusion in ungelified media, a quantitative diffusion estimate with hydroxyl ions, and a quantitative diffusion estimate with photons. (DDR)
Shope, Christopher L.; Angeroth, Cory E.
2015-01-01
Effective management of surface waters requires a robust understanding of spatiotemporal constituent loadings from upstream sources and the uncertainty associated with these estimates. We compared the total dissolved solids (TDS) loading into the Great Salt Lake (GSL) for water year 2013 with estimates from previously sampled periods in the early 1960s. We also provide updated results on GSL loading, quantitatively bounded by sampling uncertainties, which are useful for current and future management efforts. Our statistical loading results were more accurate than those from simple regression models. Our results indicate that TDS loading to the GSL in water year 2013 was 14.6 million metric tons, with uncertainty ranging from 2.8 to 46.3 million metric tons, which differs greatly from the previous regression estimate for water year 1964 of 2.7 million metric tons. Results also indicate that locations with increased sampling frequency are correlated with decreasing confidence intervals. Because time is incorporated into the LOADEST models, discrepancies are largely expected to be a function of temporally lagged salt storage delivery to the GSL associated with terrestrial and in-stream processes. By incorporating temporally variable estimates and statistically derived uncertainty of these estimates, we have provided quantifiable variability in the annual estimates of dissolved solids loading into the GSL. Further, our results support the need for increased monitoring of dissolved solids loading into saline lakes like the GSL by demonstrating the uncertainty associated with different levels of sampling frequency.
Effects of time-shifted data on flight determined stability and control derivatives
NASA Technical Reports Server (NTRS)
Steers, S. T.; Iliff, K. W.
1975-01-01
Flight data were shifted in time by various increments to assess the effects of time shifts on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there was a considerable time shift in the data. Time shifts degraded the estimates of the derivatives, but the degradation was in a consistent rather than a random pattern. Time shifts in the control variables caused the most degradation, and the lateral-directional rotary derivatives were affected the most by time shifts in any variable.
Dae-Kwan Kim; Daniel M. Spotts; Donald F. Holecek
1998-01-01
This paper compares estimates of pleasure trip volume and expenditures derived from a regional telephone survey to those derived from the TravelScope mail panel survey. Significantly different estimates emerged, suggesting that survey-based estimates of pleasure trip volume and expenditures, at least in the case of the two surveys examined, appear to be affected by...
Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection
Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin
2014-01-01
Purpose To enable fast reconstruction of quantitative susceptibility maps with a Total Variation penalty and automatic regularization parameter selection. Methods ℓ1-regularized susceptibility mapping is accelerated by variable-splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation, compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than with the nonlinear CG approach. The utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
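The variable-splitting idea (a closed-form data-fit update via FFTs plus an elementwise soft threshold) can be illustrated on a simpler problem than QSM dipole inversion. A minimal ADMM sketch for ℓ1-regularized deconvolution under periodic boundary conditions; it parallels the structure of the method described above but is not the paper's pipeline:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_l1_deconv(b, h, lam=0.05, rho=1.0, iters=100):
    """Solve min_x 0.5||h*x - b||^2 + lam*||x||_1 (periodic convolution)."""
    Hf, Bf = np.fft.fftn(h), np.fft.fftn(b)
    z, u = np.zeros_like(b), np.zeros_like(b)
    denom = np.abs(Hf) ** 2 + rho
    for _ in range(iters):
        # x-update: closed form in the Fourier domain (one FFT pair per iteration)
        x = np.real(np.fft.ifftn((np.conj(Hf) * Bf +
                                  rho * np.fft.fftn(z - u)) / denom))
        z = soft(x + u, lam / rho)     # z-update: elementwise soft threshold
        u += x - z                     # dual ascent
    return z

# Tiny demo: recover a sparse 1-D signal blurred by a known kernel.
rng = np.random.default_rng(0)
x_true = np.zeros(128); x_true[[20, 64, 100]] = [1.0, -0.7, 0.5]
h = np.zeros(128); h[:5] = 1.0 / 5.0               # moving-average blur
b = np.real(np.fft.ifftn(np.fft.fftn(h) * np.fft.fftn(x_true)))
b += 0.01 * rng.standard_normal(128)
print(np.round(admm_l1_deconv(b, h)[[20, 64, 100]], 2))
```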
Magotra, Asmita; Sharma, Anjna; Gupta, Ajai Prakash; Wazir, Priya; Sharma, Shweta; Singh, Parvinder Pal; Tikoo, Manoj Kumar; Vishwakarma, Ram A; Singh, Gurdarshan; Nandi, Utpal
2017-08-15
In the present study, a simple, sensitive, specific and rapid liquid chromatography (LC) tandem mass spectrometry (MS/MS) method was developed and validated according to the Food and Drug Administration (FDA) guidelines for the estimation of IIIM-MCD-211 (a potent oral candidate with promising action against tuberculosis) in mouse plasma using carbamazepine as internal standard (IS). The bioanalytical method consisted of one-step protein precipitation for sample preparation followed by quantitation in LC-MS/MS using the positive electrospray ionization (ESI) technique operating in multiple reaction monitoring (MRM) mode. Elution was achieved in gradient mode on a High Resolution Chromolith RP-18e column with a mobile phase comprised of acetonitrile and 0.1% (v/v) formic acid in water at a flow rate of 0.4 mL/min. Precursor-to-product ion transitions (m/z 344.5/218.4 and m/z 237.3/194.2) were used to measure the analyte and IS, respectively. All validation parameters were well within the limits of the acceptance criteria. The method was successfully applied to assess the pharmacokinetics of the candidate in mice following oral (10 mg/kg) and intravenous (IV; 2.5 mg/kg) administration. It was also effectively used to quantitate the metabolic stability of the compound in mouse liver microsomes (MLM) and human liver microsomes (HLM), followed by its in vitro-in vivo extrapolation. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Prat, O. P.; Nelson, B. R.; Nickl, E.; Ferraro, R. R.
2017-12-01
This study evaluates the ability of different satellite-based precipitation products to capture daily precipitation extremes over the entire globe. The satellite products considered are the datasets belonging to the Reference Environmental Data Records (REDRs) program (PERSIANN-CDR, GPCP, CMORPH, AMSU-A,B Hydrologic bundle). These products provide long-term global records of daily adjusted Quantitative Precipitation Estimates (QPEs) that range from a 20-year (CMORPH-CDR) to a 35-year (PERSIANN-CDR, GPCP) record of daily adjusted global precipitation. The AMSU-A,B Hydro-bundle is an 11-year record of daily rain rate over land and ocean, snow cover and surface temperature over land, and sea ice concentration, cloud liquid water, and total precipitable water over ocean, among others. The aim of this work is to evaluate the ability of the different satellite QPE products to capture daily precipitation extremes. This evaluation also includes comparison with in-situ datasets at the daily scale from the Global Historical Climatology Network (GHCN-Daily), the Global Precipitation Climatology Centre (GPCC) gridded full data daily product, and the US Climate Reference Network (USCRN). In addition, while the products mentioned above only provide QPEs, the AMSU-A,B Hydro-bundle provides additional hydrological information (precipitable water, cloud liquid water, snow cover, sea ice concentration). We also present an analysis of those additional variables available from global satellite measurements and their relevance and complementarity in the context of long-term hydrological and climate studies.
NASA Astrophysics Data System (ADS)
Inoue, Yoshio; Kiyono, Yoshiyuki; Asai, Hidetoshi; Ochiai, Yukihito; Qi, Jiaguo; Olioso, Albert; Shiraiwa, Tatsuhiko; Horie, Takeshi; Saito, Kazuki; Dounagsavanh, Linkham
2010-08-01
In the tropical mountains of Southeast Asia, slash-and-burn (S/B) agriculture is a widely practiced and important food production system. The ecosystem carbon stock in this land-use is linked not only to the carbon exchange with the atmosphere but also to food and resource security. The objective of this study was to provide quantitative information on the land-use and ecosystem carbon stock in the region, as well as to infer the impacts of alternative land-use and ecosystem management scenarios on the carbon sequestration potential at a regional scale. The study area was selected in a typical slash-and-burn region in the northern part of Laos. The chrono-sequential changes of land-use, such as the relative areas of community age and cropping (C) + fallow (F) patterns, were derived from the analysis of time-series satellite images. The chrono-sequential analysis showed a consistent increase of S/B area during the past three decades and a rapid increase after 1990. In 2004, approximately 37% of the whole area had a community age of 1-5 years, whereas 10% had an age of 6-10 years. The ecosystem carbon stock at a regional scale was estimated by synthesizing the land-use patterns and a semi-empirical carbon stock model derived from in situ measurements, with the community age used as the link between the two. The ecosystem carbon stock in the region was strongly affected by the land-use patterns; the temporal average of carbon stock in 1C + 10F cycles, for example, was greater by 33 MgC ha(-1) compared to that in the 1C + 2F land-use pattern. The amount of carbon lost from the regional ecosystems during the 1990-2004 period was estimated to be 42 MgC ha(-1). The study approach proved to be useful especially in such regions with low data availability and accessibility. This study revealed the dynamic change of land-use and ecosystem carbon stock in the tropical mountains of Laos as affected by land-use. Results suggest a significant potential for carbon sequestration through changing land-use and ecosystem management scenarios. These quantitative estimates would be useful to better understand and manage the land-use and ecosystem carbon stock towards higher sustainability and food security in similar ecosystems.
We developed a technique for assessing the accuracy of sub-pixel derived estimates of impervious surface extracted from LANDSAT TM imagery. We utilized spatially coincident sub-pixel derived impervious surface estimates, high-resolution planimetric GIS data, vector-to-r...
Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie
2012-06-01
Soil loss prediction models such as the universal soil loss equation (USLE) and the revised universal soil loss equation (RUSLE) are useful tools for risk assessment of soil erosion and planning of soil conservation at the regional scale. A rational estimation of the vegetation cover and management factor, one of the most important parameters in USLE and RUSLE, is particularly important for the accurate prediction of soil erosion. Traditional estimation based on field survey and measurement is time-consuming, laborious and costly, and cannot rapidly extract the vegetation cover and management factor at the macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for the estimation of the vegetation cover and management factor over broad geographic areas. This paper summarizes research findings on the quantitative estimation of the vegetation cover and management factor using remote sensing data and analyzes the advantages and disadvantages of the various methods (one widely used NDVI-based formulation is sketched below), with the aim of providing a reference for further research and for quantitative estimation of the vegetation cover and management factor at large scales.
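One widely cited remote-sensing formulation, the exponential NDVI scaling attributed to van der Knijff et al., illustrates the kind of method the review covers. A minimal sketch; the parameter values a = 2 and b = 1 are the commonly quoted defaults, used here as assumptions:

```python
# Illustrative sketch of an NDVI-based cover and management (C) factor:
# C = exp(-a * NDVI / (b - NDVI)). This is one of several published methods,
# not a universal formula; a and b are region-dependent.
import numpy as np

def c_factor_from_ndvi(ndvi: np.ndarray, a: float = 2.0, b: float = 1.0) -> np.ndarray:
    """USLE/RUSLE C factor estimated from NDVI, clipped to the valid [0, 1] range."""
    c = np.exp(-a * ndvi / (b - ndvi))
    return np.clip(c, 0.0, 1.0)

ndvi = np.array([0.1, 0.3, 0.5, 0.7])          # bare soil ... dense vegetation
print(np.round(c_factor_from_ndvi(ndvi), 3))   # C decreases as cover increases
```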
Estimating the extent of impervious surfaces and turf grass across large regions
Claggett, Peter; Irani, Frederick M.; Thompson, Renee L.
2013-01-01
The ability of researchers to accurately assess the extent of impervious and pervious developed surfaces, e.g., turf grass, using land-cover data derived from Landsat satellite imagery in the Chesapeake Bay watershed is limited due to the resolution of the data and systematic discrepancies between developed land-cover classes, surface mines, forests, and farmlands. Estimates of impervious surface and turf grass area in the Mid-Atlantic, United States that were based on 2006 Landsat-derived land-cover data were substantially lower than estimates based on more authoritative and independent sources. New estimates of impervious surfaces and turf grass area derived using land-cover data combined with ancillary information on roads, housing units, surface mines, and sampled estimates of road width and residential impervious area were up to 57 and 45% higher than estimates based strictly on land-cover data. These new estimates closely approximate estimates derived from authoritative and independent sources in developed counties.
Leung, Gary N W; Ho, Emmie N M; Kwok, W Him; Leung, David K K; Tang, Francis P W; Wan, Terence S M; Wong, April S Y; Wong, Colton H F; Wong, Jenny K Y; Yu, Nola H
2007-09-07
Quantitative determination, particularly for threshold substances in biological samples, is much more demanding than qualitative identification. A proper assessment of any quantitative determination is the measurement uncertainty (MU) associated with the determined value. The International Standard ISO/IEC 17025, "General requirements for the competence of testing and calibration laboratories", has more prescriptive requirements on the MU than its superseded document, ISO/IEC Guide 25. Under the 2005 or 1999 versions of the new standard, an estimation of the MU is mandatory for all quantitative determinations. To comply with the new requirement, a protocol was established in the authors' laboratory in 2001. The protocol has since evolved based on our practical experience, and a refined version was adopted in 2004. This paper describes our approach in establishing the MU, as well as some other important considerations, for the quantification of threshold substances in biological samples as applied in the area of doping control for horses. The testing of threshold substances can be viewed as a compliance test (or testing to a specified limit). As such, it should only be necessary to establish the MU at the threshold level. The steps in a "Bottom-Up" approach adopted by us are similar to those described in the EURACHEM/CITAC guide, "Quantifying Uncertainty in Analytical Measurement". They involve first specifying the measurand, including the relationship between the measurand and the input quantities upon which it depends. This is followed by identifying all applicable uncertainty contributions using a "cause and effect" diagram. The magnitude of each uncertainty component is then calculated and converted to a standard uncertainty. A recovery study is also conducted to determine if the method bias is significant and whether a recovery (or correction) factor needs to be applied. All standard uncertainties with values greater than 30% of the largest one are then used to derive the combined standard uncertainty. Finally, an expanded uncertainty is calculated at 99% one-tailed confidence level by multiplying the standard uncertainty with an appropriate coverage factor (k). A sample is considered positive if the determined concentration of the threshold substance exceeds its threshold by the expanded uncertainty. In addition, other important considerations, which can have a significant impact on quantitative analyses, will be presented.
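The combination and decision steps described above reduce to a few lines of arithmetic. A minimal sketch with illustrative uncertainty components; the 30% screening rule, quadrature combination, 99% one-tailed expansion, and compliance rule follow the description in the abstract, but the numbers are not from the laboratory's protocol:

```python
# Minimal sketch of the "Bottom-Up" combination step: keep the standard
# uncertainties larger than 30% of the biggest one, combine them in quadrature,
# expand at ~99% one-tailed confidence, and apply the compliance rule.
import math

components = {"calibration": 4.0, "recovery": 3.1, "precision": 2.6,
              "purity": 0.9}                      # standard uncertainties (ng/ml), assumed

largest = max(components.values())
kept = [u for u in components.values() if u > 0.3 * largest]
u_combined = math.sqrt(sum(u * u for u in kept))  # root-sum-of-squares

k = 2.33                                          # ~99% one-tailed coverage factor (normal)
U_expanded = k * u_combined

threshold, measured = 100.0, 115.0                # ng/ml, illustrative
print(f"U = {U_expanded:.1f} ng/ml; "
      f"positive: {measured > threshold + U_expanded}")
```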
NASA Technical Reports Server (NTRS)
Freilich, M. H.; Pawka, S. S.
1987-01-01
The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on the results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.
Otazú, Ivone B; Tavares, Rita de Cassia B; Hassan, Rocío; Zalcberg, Ilana; Tabak, Daniel G; Seuánez, Héctor N
2002-02-01
Serial assays of qualitative (multiplex and nested) and quantitative PCR were carried out to detect and estimate the level of BCR-ABL transcripts in 39 CML patients following bone marrow transplantation (BMT). Seven of these patients, who received donor lymphocyte infusions (DLIs) following relapse, were also monitored. Quantitative estimates of BCR-ABL transcripts were obtained by co-amplification with a competitor sequence. Estimates of ABL transcripts were used as an internal control, and the BCR-ABL/ABL ratio was thus estimated to evaluate the kinetics of residual clones. Twenty-four patients were followed shortly after BMT; two of these patients were in cytogenetic relapse coexisting with very high BCR-ABL levels, while the other 22 were in clinical, haematologic and cytogenetic remission 2-42 months after BMT. In this latter group, seven patients showed a favourable clinical-haematological progression in association with molecular remission, while in 14 patients quantitative PCR assays indicated molecular relapse that was not associated with an early cytogenetic-haematologic relapse. BCR-ABL/ABL levels could not be correlated with the presence of GVHD in the 24 patients after BMT. In all seven patients treated with DLI, high levels of transcripts were detected at least 4 months before the appearance of clinical haematological relapse. Following DLI, five of these patients showed transcript levels decreasing by 2 to 5 logs between 4 and 12 months. In eight other patients studied long after BMT, five showed molecular relapse up to 117 months post-BMT and only one showed cytogenetic relapse. Our findings indicate that quantitative estimates of BCR-ABL transcripts are valuable for monitoring minimal residual disease in each patient.
Hofland, J; Tenbrinck, R; van Eijck, C H J; Eggermont, A M M; Gommers, D; Erdmann, W
2003-04-01
Agreement between continuously measured oxygen consumption during quantitative closed-system anaesthesia and intermittently Fick-derived calculated oxygen consumption was assessed in 11 patients undergoing simultaneous occlusion of the aorta and inferior vena cava for hypoxic treatment of pancreatic cancer. All patients were mechanically ventilated using a quantitative closed-system anaesthesia machine (PhysioFlex) and had pulmonary and radial artery catheters inserted. During the varying haemodynamic conditions that accompany this procedure, 73 paired measurements were obtained. A significant correlation between Fick-derived and closed-system-derived oxygen consumption was found (r = 0.78, p = 0.006). Linear regression showed that the Fick-derived measure = (1.19 × closed-system-derived measure) − 72, with the overall closed-circuit-derived values being higher. However, the level of agreement between the two techniques was poor. Bland-Altman analysis found that the bias was 36 ml.min(-1), the precision 39 ml.min(-1), and the difference between the 95% limits of agreement 153 ml.min(-1). Therefore, we conclude that the two measurement techniques are not interchangeable in a clinical setting.
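The quoted Bland-Altman statistics are straightforward to reproduce. A minimal sketch on synthetic paired measurements; the regression relation from the abstract is used only to generate plausible fake data:

```python
# Minimal sketch of Bland-Altman agreement statistics (bias, precision, and
# the spread of the 95% limits of agreement) on synthetic paired measurements.
import numpy as np

rng = np.random.default_rng(2)
closed_system = rng.normal(250.0, 40.0, 73)              # ml/min, synthetic
fick = 1.19 * closed_system - 72 + rng.normal(0, 39, 73) # synthetic counterpart

diff = closed_system - fick           # closed-circuit minus Fick-derived
bias = diff.mean()                    # systematic difference between methods
precision = diff.std(ddof=1)          # SD of the differences
loa_low, loa_high = bias - 1.96 * precision, bias + 1.96 * precision
print(f"bias={bias:.0f}, precision={precision:.0f}, "
      f"LoA width={loa_high - loa_low:.0f} ml/min")
```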
Nishiyama, K K; Macdonald, H M; Hanley, D A; Boyd, S K
2013-05-01
High-resolution peripheral quantitative computed tomography (HR-pQCT) measurements of distal radius and tibia bone microarchitecture and finite element (FE) estimates of bone strength performed well at classifying postmenopausal women with and without previous fracture. The HR-pQCT measurements outperformed dual-energy X-ray absorptiometry (DXA) at classifying forearm fractures and fractures at other skeletal sites. Areal bone mineral density (aBMD) is the primary measurement used to assess osteoporosis and fracture risk; however, it does not take into account bone microarchitecture, which also contributes to bone strength. Thus, our objective was to determine if bone microarchitecture measured with HR-pQCT and FE estimates of bone strength could classify women with and without low-trauma fractures. We used HR-pQCT to assess bone microarchitecture at the distal radius and tibia in 44 postmenopausal women with a history of low-trauma fracture and 88 age-matched controls from the Calgary cohort of the Canadian Multicentre Osteoporosis Study (CaMos). We estimated bone strength using FE analysis and simulated distal radius aBMD from the HR-pQCT scans. Femoral neck (FN) and lumbar spine (LS) aBMD were measured with DXA. We used support vector machines (SVM) and a tenfold cross-validation to classify the fracture cases and controls and to determine accuracy. The combination of HR-pQCT measures of microarchitecture and FE estimates of bone strength had the highest area under the receiver operating characteristic (ROC) curve, 0.82, when classifying forearm fractures, compared to an area under the curve (AUC) of 0.71 from DXA-derived aBMD of the forearm and 0.63 from FN and spine DXA. For all fracture types, FE estimates of bone strength at the forearm alone resulted in an AUC of 0.69. Models based on HR-pQCT measurements of bone microarchitecture and estimates of bone strength performed better than DXA-derived aBMD at classifying women with and without prior fracture. In future, these models may improve prediction of individuals at risk of low-trauma fracture.
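A minimal sketch of an SVM-with-tenfold-cross-validation analysis scored by ROC AUC, using synthetic stand-ins for the HR-pQCT and FE features (scikit-learn assumed; this mirrors the study design, not its data or tuning):

```python
# Minimal sketch: SVM classifier, 10-fold cross-validation, ROC AUC scoring.
# Features are synthetic placeholders for microarchitecture/strength variables.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_frac, n_ctrl, n_feat = 44, 88, 12
X = np.vstack([rng.normal(-0.4, 1.0, (n_frac, n_feat)),   # fracture cases
               rng.normal(0.0, 1.0, (n_ctrl, n_feat))])   # controls
y = np.array([1] * n_frac + [0] * n_ctrl)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(f"10-fold ROC AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```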
NASA Astrophysics Data System (ADS)
Maggioni, V.; Massari, C.; Ciabatta, L.; Brocca, L.
2016-12-01
Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning, and forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). The recent SM2RAIN approach proposes to estimate rainfall by using satellite soil moisture observations. As opposed to traditional satellite precipitation methods, which sense cloud properties to retrieve instantaneous estimates, this new bottom-up approach makes use of two consecutive soil moisture measurements for obtaining an estimate of the fallen precipitation within the interval between two satellite overpasses. As a result, the nature of the measurement is different and complementary to the one of classical precipitation products and could provide a different valid perspective to substitute or improve current rainfall estimates. However, uncertainties in the SM2RAIN product are still not well known and could represent a limitation in utilizing this dataset for hydrological applications. Therefore, quantifying the uncertainty associated with SM2RAIN is necessary for enabling its use. The study is conducted over the Italian territory for a 5-yr period (2010-2014). A number of satellite precipitation error properties, typically used in error modeling, are investigated and include probability of detection, false alarm rates, missed events, spatial correlation of the error, and hit biases. After this preliminary uncertainty analysis, the potential of applying the stochastic rainfall error model SREM2D to correct SM2RAIN and to improve its performance in hydrologic applications is investigated. The use of SREM2D for characterizing the error in precipitation by SM2RAIN would be highly useful for the merging and the integration steps in its algorithm, i.e., the merging of multiple soil moisture derived products (e.g., SMAP, SMOS, ASCAT) and the integration of soil moisture derived and state of the art satellite precipitation products (e.g., GPM IMERG).
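A schematic sketch of the SM2RAIN inversion described above: rainfall between overpasses is recovered from the soil-water balance as p ≈ Z·ds/dt plus a drainage term a·s^b. The parameter values below are illustrative assumptions; in practice Z, a and b are calibrated against a reference rainfall product:

```python
# Schematic SM2RAIN-style sketch: infer rain rate from the rise in relative
# soil saturation plus a drainage term, with evapotranspiration neglected.
import numpy as np

def sm2rain(s, dt_hours, Z=60.0, a=2.0, b=3.0):
    """Rain rate (mm/h) between overpasses from relative saturation s in [0, 1]."""
    ds_dt = np.diff(s) / dt_hours
    s_mid = 0.5 * (s[1:] + s[:-1])
    p = Z * ds_dt + a * s_mid ** b           # water balance, ET neglected
    return np.maximum(p, 0.0)                # negative estimates clipped to zero

s = np.array([0.30, 0.32, 0.55, 0.50, 0.48])    # relative saturation per overpass
print(np.round(sm2rain(s, dt_hours=12.0), 2))
```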
Simulating realistic predator signatures in quantitative fatty acid signature analysis
Bromaghin, Jeffrey F.
2015-01-01
Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
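A minimal sketch of the pseudo-predator construction described above: bootstrap prey signatures within each prey type, mix them with known diet proportions, and renormalize. The signature libraries and the bootstrap sample size below are synthetic placeholders, and calibration coefficients are omitted:

```python
# Minimal sketch of pseudo-predator signature generation for QFASA simulations.
import numpy as np

rng = np.random.default_rng(4)
n_fa = 10                                   # number of fatty acids in a signature
prey = {"seal": rng.dirichlet(np.ones(n_fa), 50),
        "whale": rng.dirichlet(np.ones(n_fa), 50)}   # prey signature libraries
diet = {"seal": 0.7, "whale": 0.3}          # known diet proportions

def pseudo_predator(prey, diet, n_boot=30):
    """Pseudo-predator signature from bootstrap means of prey signatures."""
    sig = np.zeros(n_fa)
    for sp, pi in diet.items():
        idx = rng.integers(0, len(prey[sp]), n_boot)   # bootstrap sample
        sig += pi * prey[sp][idx].mean(axis=0)
    return sig / sig.sum()                             # renormalize to sum to 1

print(np.round(pseudo_predator(prey, diet), 3))
```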
Chiang, Chia-Wen; Wang, Yong; Sun, Peng; Lin, Tsen-Hsuan; Trinkaus, Kathryn; Cross, Anne H.; Song, Sheng-Kwei
2014-01-01
The effect of extra-fiber structural and pathological components confounding diffusion tensor imaging (DTI) computation was quantitatively investigated using data generated by both Monte-Carlo simulations and tissue phantoms. Increased extent of vasogenic edema, by addition of various amount of gel to fixed normal mouse trigeminal nerves or by increasing non-restricted isotropic diffusion tensor components in Monte-Carlo simulations, significantly decreased fractional anisotropy (FA), increased radial diffusivity, while less significantly increased axial diffusivity derived by DTI. Increased cellularity, mimicked by graded increase of the restricted isotropic diffusion tensor component in Monte-Carlo simulations, significantly decreased FA and axial diffusivity with limited impact on radial diffusivity derived by DTI. The MC simulation and tissue phantom data were also analyzed by the recently developed diffusion basis spectrum imaging (DBSI) to simultaneously distinguish and quantify the axon/myelin integrity and extra-fiber diffusion components. Results showed that increased cellularity or vasogenic edema did not affect the DBSI-derived fiber FA, axial or radial diffusivity. Importantly, the extent of extra-fiber cellularity and edema estimated by DBSI correlated with experimentally added gel and Monte-Carlo simulations. We also examined the feasibility of applying 25-direction diffusion encoding scheme for DBSI analysis on coherent white matter tracts. Results from both phantom experiments and simulations suggested that the 25-direction diffusion scheme provided comparable DBSI estimation of both fiber diffusion parameters and extra-fiber cellularity/edema extent as those by 99-direction scheme. An in vivo 25-direction DBSI analysis was performed on experimental autoimmune encephalomyelitis (EAE, an animal model of human multiple sclerosis) optic nerve as an example to examine the validity of derived DBSI parameters with post-imaging immunohistochemistry verification. Results support that in vivo DBSI using 25-direction diffusion scheme correctly reflect the underlying axonal injury, demyelination, and inflammation of optic nerves in EAE mice. PMID:25017446
NASA Astrophysics Data System (ADS)
Radun, Jenni E.; Virtanen, Toni; Olives, Jean-Luc; Vaahteranoksa, Mikko; Vuori, Tero; Nyman, Göte
2007-01-01
We present an effective method for comparing subjective audiovisual quality and the features related to the quality changes of different video cameras. Both quantitative estimation of overall quality and qualitative description of critical quality features are achieved by the method. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation-Based Quality (IBQ) method, in order to see how they complement each other in audiovisual quality estimation tasks. Twenty-six observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of the cameras' visual video quality than to the features related to sound. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can complement quantitative quality estimations also with audiovisual material. The IBQ approach is valuable especially when the induced quality changes are multidimensional.
Road tests of class 8 tractor trailers were conducted by the US Environmental Protection Agency on new and retreaded tires of varying rolling resistance in order to provide estimates of the quantitative relationship between rolling resistance and fuel consumption.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-27
... indirectly through changes in regional climate; and (b) Quantitative research on the relationship of food...). Population Estimates The most quantitative estimate of the historic size of the African lion population... research conducted by Chardonnet et al., three subpopulations were described as consisting of 18 groups...
Lang, Roman; Bardelmeier, Ina; Weiss, Carola; Rubach, Malte; Somoza, Veronika; Hofmann, Thomas
2010-02-10
A straightforward stable isotope dilution analysis (SIDA) for the reliable quantitative determination of βN-C18:0- to βN-C24:0-alkanoyl-5-hydroxytryptamides (C5HTs) in coffee powder and beverages by means of LC-MS/MS was developed. The developed SIDA, showing accuracy values of 92.6-107% and precision between 0.5 and 7% relative standard deviation for the individual derivatives, allowed the sensitive and selective quantification of the target compounds in coffee beverages. Depending on the type of coffee, quantitation revealed C5HT levels between 65 and 144 microg/L in filtered coffee and up to 3500 microg/L in a French press beverage, thus indicating that about 0.3 or 7.2% of the C5HTs were extracted from the coffee powder into the beverage when using the cellulose filter method or the French press, respectively. To estimate the potential contribution of the C5HTs to the phenomenon of stomach irritation after ingestion of coffee brew, in vitro cell studies were performed with pure individual 5-hydroxytryptamides and a mixture of the predominating derivatives in ratios matching those found in coffee. All substances tested induced a decrease in the intracellular proton index (IPX), coined as an indicator of stomach acid secretion. While the biomimetic C5HT mixture was highest in its inducing effect, the individual stearic acid, oleic acid, and linoleic acid 5-hydroxytryptamides did not differ significantly from each other, but showed a less pronounced effect compared to arachinic acid 5-hydroxytryptamide. In conclusion, not the degree of saturation but rather the fatty acid chain length seems to determine the C5HTs' mode of action in driving stomach acid secretion.
Quantitative photoacoustic assessment of ex-vivo lymph nodes of colorectal cancer patients
NASA Astrophysics Data System (ADS)
Sampathkumar, Ashwin; Mamou, Jonathan; Saegusa-Beercroft, Emi; Chitnis, Parag V.; Machi, Junji; Feleppa, Ernest J.
2015-03-01
Staging of cancers and selection of appropriate treatment requires histological examination of multiple dissected lymph nodes (LNs) per patient, so that a staggering number of nodes require histopathological examination, and the finite resources of pathology facilities create a severe processing bottleneck. Histologically examining the entire 3D volume of every dissected node is not feasible, and therefore, only the central region of each node is examined histologically, which results in severe sampling limitations. In this work, we assess the feasibility of using quantitative photoacoustics (QPA) to overcome the limitations imposed by current procedures and eliminate the resulting undersampling in node assessments. QPA is emerging as a new hybrid modality that assesses tissue properties and classifies tissue type based on multiple estimates derived from spectrum analysis of photoacoustic (PA) radiofrequency (RF) data and from statistical analysis of envelope-signal data derived from the RF signals. Our study seeks to use QPA to distinguish cancerous from non-cancerous regions of dissected LNs and hence serve as a reliable means of imaging and detecting small but clinically significant cancerous foci that would be missed by current methods. Dissected lymph nodes were placed in a water bath and PA signals were generated using a wavelength-tunable (680-950 nm) laser. A 26-MHz, f-2 transducer was used to sense the PA signals. We present an overview of our experimental setup; provide a statistical analysis of multi-wavelength classification parameters (mid-band fit, slope, intercept) obtained from the PA signal spectrum generated in the LNs; and compare QPA performance with our established quantitative ultrasound (QUS) techniques in distinguishing metastatic from non-cancerous tissue in dissected LNs. QPA-QUS methods offer a novel general means of tissue typing and evaluation in a broad range of disease-assessment applications, e.g., cardiac, intravascular, musculoskeletal, and endocrine-gland.
Zheng, Hailiang; Li, Ming; Yin, Pengbin; Peng, Ye; Gao, Yuan; Zhang, Lihai; Tang, Peifu
2015-01-01
Background Calcaneal quantitative ultrasound (QUS), which is used in the evaluation of osteoporosis, is believed to be intimately associated with the characteristics of the proximal femur. However, the specific associations of calcaneal QUS with the characteristics of the hip sub-regions remain unclear. Design A cross-sectional assessment of the skeletal status of the heel and hip was performed in 53 osteoporotic patients. Methods We prospectively enrolled 53 female osteoporotic patients with femoral fractures. Calcaneal QUS, dual-energy X-ray absorptiometry (DXA), and hip structural analysis (HSA) were performed for each patient. Femoral heads were obtained during surgery, and principal compressive trabeculae (PCT) were extracted by a three-dimensional printing technique-assisted method. Pearson's correlations of the QUS measurements with the DXA- and HSA-derived parameters and Young's modulus were calculated in order to evaluate the specific associations of QUS with the parameters of the hip sub-regions, including the femoral neck, trochanteric and Ward's areas, and the femoral shaft. Results Significant correlations were found between estimated BMD (Est.BMD) and the BMD of the different sub-regions of the proximal femur. However, the correlation coefficient for the trochanteric area (r = 0.356, p = 0.009) was higher than that for the neck area (r = 0.297, p = 0.031) and the total proximal femur (r = 0.291, p = 0.034). Furthermore, the quantitative ultrasound index (QUI) was significantly correlated with the HSA-derived parameters of the trochanteric area (r = 0.315-0.356, all p<0.05) as well as with the Young's modulus of the PCT from the femoral head (r = 0.589, p<0.001). Conclusion The calcaneal bone has an intimate association with the trochanteric cancellous bone. To a certain extent, the parameters of calcaneal QUS can reflect the characteristics of the trochanteric area of the proximal hip, although they are not specifically reflective of those of the femoral neck or shaft. PMID:26710123
Estimation of sample size and testing power (part 5).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-02-01
Estimation of sample size and testing power is an important component of research design. This article introduces methods for sample size and testing power estimation of difference tests for quantitative and qualitative data with the single-group design, the paired design or the crossover design. Specifically, this article introduces formulas for sample size and testing power estimation of difference tests for quantitative and qualitative data with the above three designs, their realization based on the formulas and on the POWER procedure of SAS software, and elaborates them with examples, which will help researchers implement the repetition principle.
Zhou, Yun; Sojkova, Jitka; Resnick, Susan M; Wong, Dean F
2012-04-01
Both the standardized uptake value ratio (SUVR) and the Logan plot result in biased distribution volume ratios (DVRs) in ligand-receptor dynamic PET studies. The objective of this study was to use a recently developed relative equilibrium-based graphical (RE) plot method to improve and simplify the 2 commonly used methods for quantification of (11)C-Pittsburgh compound B ((11)C-PiB) PET. The overestimation of DVR in SUVR was analyzed theoretically using the Logan and the RE plots. A bias-corrected SUVR (bcSUVR) was derived from the RE plot. Seventy-eight (11)C-PiB dynamic PET scans (66 from controls and 12 from mildly cognitively impaired (MCI) participants from the Baltimore Longitudinal Study of Aging) were acquired over 90 min. Regions of interest (ROIs) were defined on coregistered MR images. Both the ROI and the pixelwise time-activity curves were used to evaluate the estimates of DVR. DVRs obtained using the Logan plot applied to ROI time-activity curves were used as a reference for comparison of DVR estimates. Results from the theoretic analysis were confirmed by the human studies. ROI estimates from the RE plot and the bcSUVR were nearly identical to those from the Logan plot with ROI time-activity curves. In contrast, ROI estimates from DVR images in frontal, temporal, parietal, and cingulate regions and the striatum were underestimated by the Logan plot (controls, 4%-12%; MCI, 9%-16%) and overestimated by the SUVR (controls, 8%-16%; MCI, 16%-24%). This bias was higher in the MCI group than in controls (P < 0.01) but was not present when data were analyzed using either the RE plot or the bcSUVR. The RE plot improves pixelwise quantification of (11)C-PiB dynamic PET compared with the conventional Logan plot. The bcSUVR results in lower bias and higher consistency of DVR estimates than SUVR. The RE plot and the bcSUVR are practical quantitative approaches that improve the analysis of (11)C-PiB studies.
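The Logan graphical analysis used as the reference above has a compact implementation. A minimal sketch of a simplified reference-tissue Logan plot (the k2' term is ignored, and the time-activity curves are synthetic placeholders):

```python
# Minimal sketch: reference-tissue Logan graphical analysis. After t*, the
# plot of cumulated target activity / target activity against cumulated
# reference activity / target activity becomes linear with slope = DVR.
import numpy as np

def logan_dvr(t, ct, cref, t_star=30.0):
    """DVR from target TAC ct and reference TAC cref (simplified, k2' ignored)."""
    int_ct = np.concatenate([[0.0], np.cumsum(0.5 * (ct[1:] + ct[:-1]) * np.diff(t))])
    int_cr = np.concatenate([[0.0], np.cumsum(0.5 * (cref[1:] + cref[:-1]) * np.diff(t))])
    m = t >= t_star                       # use only the late, linear portion
    x, y = int_cr[m] / ct[m], int_ct[m] / ct[m]
    slope, _ = np.polyfit(x, y, 1)
    return slope

t = np.linspace(0.0, 90.0, 90)            # minutes
cref = np.exp(-t / 40.0) * t              # synthetic reference TAC
ct = 1.5 * cref                           # target ~1.5x reference
print(f"DVR ~ {logan_dvr(t, ct, cref):.2f}")   # ~1.5 for this toy example
```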
Estimating Dynamical Systems: Derivative Estimation Hints from Sir Ronald A. Fisher
ERIC Educational Resources Information Center
Deboeck, Pascal R.
2010-01-01
The fitting of dynamical systems to psychological data offers the promise of addressing new and innovative questions about how people change over time. One method of fitting dynamical systems is to estimate the derivatives of a time series and then examine the relationships between derivatives using a differential equation model. One common…
Precision Orbit Derived Atmospheric Density: Development and Performance
NASA Astrophysics Data System (ADS)
McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.
2012-09-01
Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities, taking ballistic coefficient estimation results into account. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-00 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities; in particular, catalog maintenance. Generally, density is the largest error source in satellite drag calculations, and current empirical density models such as Jacchia 71 and NRLMSISE-00 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models, and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy, and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available from Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.
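A hedged sketch of the linear weighted blending used to stitch overlapping density solutions into a continuous record; the overlap length and the linear ramp below are illustrative choices, not necessarily the exact scheme applied to the 14-hour solutions.

```python
import numpy as np

def blend_segments(seg_a, seg_b, n_overlap):
    """Stitch two density segments that share n_overlap samples by
    linearly ramping the weight from segment A to segment B."""
    w = np.linspace(1.0, 0.0, n_overlap)  # weight on segment A
    blended = w * seg_a[-n_overlap:] + (1.0 - w) * seg_b[:n_overlap]
    return np.concatenate([seg_a[:-n_overlap], blended, seg_b[n_overlap:]])
```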
Lin, Long-Ze; Harnly, James M.
2013-01-01
A general method was developed for the systematic quantitation of flavanols, proanthocyanidins, isoflavones, flavanones, dihydrochalcones, stilbenes, and hydroxybenzoic acid derivatives (mainly hydrolyzable tannins) based on UV band II absorbance arising from the benzoyl structure. The compound structures and the wavelength maximum were well correlated and were divided into four groups: the flavanols and proanthocyanidins at 278 nm, hydrolyzable tannins at 274 nm, flavanones at 288 nm, and isoflavones at 260 nm. Within each group, molar relative response factors (MRRFs) were computed for each compound based on the absorbance ratio of the compound and the group reference standard. Response factors were computed for the compounds as purchased (MRRF), after drying (MRRFD), and as the best predicted value (MRRFP). Concentrations for each compound were computed based on calibration with the group reference standard and the MRRFP. The quantitation of catechins, proanthocyanidins, and gallic acid derivatives in white tea was used as an example. PMID:22577798
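A minimal sketch of the MRRF bookkeeping described above, under the assumption that quantitation is linear in absorbance: the response factor is the compound's absorbance-per-mole relative to its group reference standard, and concentrations follow from the reference calibration divided by the predicted factor (MRRFP). Function and variable names are hypothetical.

```python
def mrrf(a_compound, c_compound, a_reference, c_reference):
    """Molar relative response factor: molar absorptivity of the compound
    relative to the group reference standard at the group wavelength."""
    return (a_compound / c_compound) / (a_reference / c_reference)

def concentration(peak_area, reference_slope, mrrf_p):
    """Concentration from the group reference calibration, corrected by
    the predicted molar relative response factor (MRRFP)."""
    apparent = peak_area / reference_slope  # as if it were the reference
    return apparent / mrrf_p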
Fincel, Mark J.; James, Daniel A.; Chipps, Steven R.; Davis, Blake A.
2014-01-01
Diet studies have traditionally been used to determine prey use and food web dynamics, while stable isotope analysis provides for a time-integrated approach to evaluate food web dynamics and characterize energy flow in aquatic systems. Direct comparison of the two techniques is rare and difficult to conduct in large, species rich systems. We compared changes in walleye Sander vitreus trophic position (TP) derived from paired diet content and stable isotope analysis. Individual diet-derived TP estimates were dissimilar to stable isotope-derived TP estimates. However, cumulative diet-derived TP estimates integrated from May 2001 to May 2002 corresponded to May 2002 isotope-derived estimates of TP. Average walleye TP estimates from the spring season appear representative of feeding throughout the entire previous year.
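For readers unfamiliar with isotope-derived TP, the conventional calculation is sketched below; the baseline TP of 2 (primary consumer) and the 3.4‰ per-trophic-level nitrogen enrichment are standard literature values and may differ from the exact parameters used in this study.

```python
def trophic_position(d15n_consumer, d15n_baseline,
                     tp_baseline=2.0, enrichment=3.4):
    """Stable-isotope trophic position using the conventional
    per-trophic-level 15N enrichment of ~3.4 permil."""
    return tp_baseline + (d15n_consumer - d15n_baseline) / enrichment
```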
Computing eddy-driven effective diffusivity using Lagrangian particles
Wolfram, Phillip J.; Ringler, Todd D.
2017-08-14
A novel method to derive effective diffusivity from Lagrangian particle trajectory data sets is developed and then analyzed relative to particle-derived meridional diffusivity for eddy-driven mixing in an idealized circumpolar current. Quantitative standard dispersion- and transport-based mixing diagnostics are defined, compared and contrasted to motivate the computation and use of effective diffusivity derived from Lagrangian particles. We compute the effective diffusivity by first performing scalar transport on Lagrangian control areas using stored trajectories computed from online Lagrangian In-situ Global High-performance particle Tracking (LIGHT) using the Model for Prediction Across Scales Ocean (MPAS-O). Furthermore, the Lagrangian scalar transport scheme is compared against an Eulerian scalar transport scheme. Spatially-variable effective diffusivities are computed from resulting time-varying cumulative concentrations that vary as a function of cumulative area. The transport-based Eulerian and Lagrangian effective diffusivity diagnostics are found to be qualitatively consistent with the dispersion-based diffusivity. All diffusivity estimates show a region of increased subsurface diffusivity within the core of an idealized circumpolar current and results are within a factor of two of each other. The Eulerian and Lagrangian effective diffusivities are most similar; smaller and more spatially diffused values are obtained with the dispersion-based diffusivity computed with particle clusters.
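As a simplified stand-in for the dispersion-based diagnostic mentioned above, the sketch below computes the classical estimate of meridional diffusivity as half the time rate of change of the particles' displacement variance. The effective-diffusivity calculation itself (scalar transport on Lagrangian control areas) is considerably more involved and is not reproduced here.

```python
import numpy as np

def meridional_diffusivity(y, t):
    """Dispersion-based (Taylor) diffusivity from particle trajectories:
    K(t) = 0.5 * d/dt var(y'), with y' the displacement anomaly.
    y: array (n_particles, n_times) of meridional positions; t: times."""
    disp = y - y[:, [0]]             # displacement from release point
    anom = disp - disp.mean(axis=0)  # remove the mean drift
    var = (anom ** 2).mean(axis=0)   # displacement variance vs. time
    return 0.5 * np.gradient(var, t)
```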
Gallignani, Máximo; Rondón, Rebeca A.; Ovalles, José F.; Brunetto, María R.
2014-01-01
A Fourier transform infrared derivative spectroscopy (FTIR-DS) method has been developed for determining furosemide (FUR) in pharmaceutical solid dosage form. The method involves the extraction of FUR from tablets with N,N-dimethylformamide by sonication and direct measurement in liquid phase mode using a reduced path length cell. In general, the spectra were measured in transmission mode and the equipment was configured to collect a spectrum at 4 cm−1 resolution and a 13 s collection time (10 scans co-added). The spectra were collected between 1400 cm−1 and 450 cm−1. Derivative spectroscopy was used for data processing and quantitative measurement using the peak area of the second-order derivative spectrum of the major spectral band found at 1165 cm−1 (SO2 stretching of FUR) with baseline correction. The method fulfilled most validation requirements over the 2–20 mg/mL range, with a coefficient of determination of 0.9998 obtained with a simple calibration model and a general coefficient of variation of <2%. The mean recovery for the proposed assay method was within (100±3)% over the 80%–120% range of the target concentration. The results agree with those of a pharmacopoeial method and, therefore, the two methods could be considered interchangeable. PMID:26579407
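A minimal sketch of the derivative-spectroscopy measurement step: a Savitzky-Golay second-order derivative of the absorbance spectrum followed by integration of the analytical band around 1165 cm−1. The smoothing window, polynomial order, and band limits are illustrative assumptions, and the simple handling below stands in for the paper's baseline correction.

```python
import numpy as np
from scipy.signal import savgol_filter

def second_derivative_band_area(wavenumber, absorbance,
                                band=(1140.0, 1190.0)):
    """Second-order derivative spectrum (Savitzky-Golay) and the area of
    the analytical band near 1165 cm-1 (SO2 stretching of furosemide)."""
    step = abs(wavenumber[1] - wavenumber[0])  # assumes a uniform axis
    d2 = savgol_filter(absorbance, window_length=15, polyorder=3,
                       deriv=2, delta=step)
    m = (wavenumber >= band[0]) & (wavenumber <= band[1])
    # absolute value: the sign depends on axis direction and band shape
    return abs(np.trapz(d2[m], wavenumber[m]))
```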
NASA Technical Reports Server (NTRS)
Ranaudo, R. J.; Batterson, J. G.; Reehorst, A. L.; Bond, T. H.; Omara, T. M.
1989-01-01
A flight test was performed with the NASA Lewis Research Center's DHC-6 icing research aircraft. The purpose was to employ a flight test procedure and data analysis method to determine the accuracy with which the effects of ice on aircraft stability and control could be measured. For simplicity, flight testing was restricted to the short-period longitudinal mode. Two flights were flown in a clean (baseline) configuration, and two flights were flown with simulated horizontal tail ice. Forty-five repeat doublet maneuvers were performed in each of four test configurations, at a given trim speed, to determine the ensemble variation of the estimated stability and control derivatives. Additional maneuvers were also performed in each configuration to determine the variation in the longitudinal derivative estimates over a wide range of trim speeds. Stability and control derivatives were estimated by a Modified Stepwise Regression (MSR) technique. A measure of the confidence in the derivative estimates was obtained by comparing the standard error for the ensemble of repeat maneuvers to the average of the estimated standard errors predicted by the MSR program. A multiplicative relationship was determined between the ensemble standard error and the averaged program standard errors. In addition, a 95 percent confidence interval analysis was performed for the elevator effectiveness estimates, $C_{m_{\delta e}}$. This analysis identified the speed range where changes in $C_{m_{\delta e}}$ could be attributed to icing effects. The magnitude of icing effects on the derivative estimates was strongly dependent on flight speed and aircraft wing flap configuration. With wing flaps up, the estimated derivatives were degraded most at the lower speeds corresponding to that configuration. With wing flaps extended to 10 degrees, the estimated derivatives were degraded most at the higher corresponding speeds. The effects of icing on the changes in longitudinal stability and control derivatives were adequately determined by the flight test procedure and the MSR analysis method discussed herein.
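The MSR algorithm itself is not reproduced here; as a simplified stand-in, the sketch below fits a linear short-period pitching-moment model to one maneuver by ordinary least squares and reports per-parameter standard errors, the per-maneuver quantity that is averaged over repeat doublets and compared against the ensemble scatter of the estimates. The model form and variable names are illustrative assumptions.

```python
import numpy as np

def estimate_derivatives(cm, alpha, qhat, delta_e):
    """Least-squares estimate of longitudinal derivatives from one
    maneuver: Cm = Cm0 + Cm_alpha*alpha + Cm_q*qhat + Cm_de*delta_e.
    Returns the parameter estimates and their standard errors."""
    X = np.column_stack([np.ones_like(alpha), alpha, qhat, delta_e])
    beta, *_ = np.linalg.lstsq(X, cm, rcond=None)
    resid = cm - X @ beta
    dof = len(cm) - X.shape[1]
    s2 = resid @ resid / dof                  # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)         # parameter covariance
    return beta, np.sqrt(np.diag(cov))        # estimates, standard errors
```

Averaging the returned standard errors over the 45 repeat maneuvers and comparing with the standard deviation of the estimates across those maneuvers gives the multiplicative relationship discussed above.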
NASA Astrophysics Data System (ADS)
Thiem, Christina; Sun, Liya; Müller, Benjamin; Bernhardt, Matthias; Schulz, Karsten
2014-05-01
Despite the importance of evapotranspiration for meteorology, hydrology, and agronomy, obtaining area-averaged evapotranspiration estimates is both cost- and maintenance-intensive: usually such estimates are obtained from distributed sensor networks or remotely sensed with a scintillometer. A low-cost alternative is satellite imagery, much of which is freely available. This approach has proven worthwhile above homogeneous terrain, with evapotranspiration data obtained by scintillometry typically applied for validation. We will extend this approach to heterogeneous terrain: evapotranspiration estimates from ASTER 2013 images will be compared to scintillometer-derived evapotranspiration estimates. The goodness of the correlation will be presented, as well as uncertainty estimates for both the ASTER-derived and the scintillometer-derived evapotranspiration.
NASA Astrophysics Data System (ADS)
Hazenberg, P.; Torfs, P. J. J. F.; Leijnse, H.; Delrieu, G.; Uijlenhoet, R.
2013-09-01
This paper presents a novel approach to estimate the vertical profile of reflectivity (VPR) from volumetric weather radar data using both a traditional Eulerian and a newly proposed Lagrangian implementation. For the latter, the recently developed Rotational Carpenter Square Cluster Algorithm (RoCaSCA) is used to delineate precipitation regions at different reflectivity levels. A piecewise-linear VPR is estimated separately for stratiform precipitation and for precipitation classified as neither stratiform nor convective. As a second aspect of this paper, a novel approach is presented that accounts for the impact of VPR uncertainty on the estimated radar rainfall variability. Results show that implementation of the VPR identification and correction procedure has a positive impact on quantitative precipitation estimates from radar. Unfortunately, visibility problems severely limit the impact of the Lagrangian implementation beyond distances of 100 km. However, by combining this procedure with the global Eulerian VPR estimation procedure for a given rainfall type (stratiform, and neither stratiform nor convective), the quality of the quantitative precipitation estimates increases up to a distance of 150 km. Analysis of the impact of VPR uncertainty shows that it accounts for a large fraction of the differences between weather radar rainfall estimates and rain gauge measurements.
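A sketch of the correction step only: once a piecewise-linear VPR has been estimated, reflectivity measured aloft is projected to ground level by subtracting the profile value at the measurement height, here with the VPR expressed in dB relative to its ground value. The estimation of the VPR itself (Eulerian or Lagrangian) is not reproduced, and the representation below is an illustrative simplification.

```python
import numpy as np

def correct_reflectivity(z_dbz, height_km, vpr_heights, vpr_db):
    """Project reflectivity measured aloft to ground level using a
    piecewise-linear VPR given in dB relative to the ground value:
    subtract the profile value interpolated at the measurement height."""
    vpr_at_h = np.interp(height_km, vpr_heights, vpr_db)
    return z_dbz - vpr_at_h
```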
Comparison of several methods for estimating low speed stability derivatives
NASA Technical Reports Server (NTRS)
Fletcher, H. S.
1971-01-01
Methods presented in five different publications have been used to estimate the low-speed stability derivatives of two unpowered airplane configurations. One configuration had unswept lifting surfaces; the other was the D-558-II swept-wing research airplane. The results of the computations were compared with each other, with existing wind-tunnel data, and with flight-test data for the D-558-II configuration to assess the relative merits of the methods for estimating derivatives. The results of the study indicated that, in general, for low subsonic speeds, no single publication's methods appeared consistently better for estimating all derivatives.
Fang, Jiansong; Pang, Xiaocong; Wu, Ping; Yan, Rong; Gao, Li; Li, Chao; Lian, Wenwen; Wang, Qi; Liu, Ai-lin; Du, Guan-hua
2016-05-01
A dataset of 67 berberine derivatives for the inhibition of butyrylcholinesterase (BuChE) was studied using a combination of quantitative structure-activity relationship (QSAR) models, molecular docking, and molecular dynamics methods. First, a series of berberine derivatives were reported, and their inhibitory activities toward BuChE were evaluated. In the 2D-QSAR studies, the best model, built by partial least squares, had a training-set correlation coefficient (R²) of 0.883, a cross-validation correlation coefficient (Q²cv) of 0.777, and a test-set correlation coefficient (R²pred) of 0.775. The model was also confirmed by Y-randomization examination. In addition, molecular docking and molecular dynamics simulation were performed to better elucidate the inhibitory mechanism of three typical berberine derivatives (berberine, C2, and C55) toward BuChE. The predicted binding free energy results were consistent with the experimental data and showed that the van der Waals energy term (ΔEvdw) difference played the most important role in differentiating the activity among the three inhibitors (berberine, C2, and C55). The developed QSAR models provide details on the fine relationship linking structure and activity and offer clues for structural modifications, and the molecular simulation helps in understanding the inhibitory mechanism of the three typical inhibitors. In conclusion, the results of this study provide useful clues for new drug design and discovery of BuChE inhibitors from berberine derivatives. © 2015 John Wiley & Sons A/S.
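A minimal scikit-learn sketch of the PLS validation statistics quoted above: R² on the training set and a cross-validated Q² computed from out-of-fold predictions. The number of latent variables and the fold count are illustrative; descriptor generation, Y-randomization, docking, and molecular dynamics are outside this sketch.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def pls_r2_q2(X, y, n_components=3, cv=5):
    """Fit a PLS regression model; return training R2 and
    cross-validated Q2 = 1 - PRESS / SS_total."""
    pls = PLSRegression(n_components=n_components)
    pls.fit(X, y)
    r2 = pls.score(X, y)
    y_cv = cross_val_predict(pls, X, y, cv=cv).ravel()
    press = np.sum((y - y_cv) ** 2)
    ss_total = np.sum((y - y.mean()) ** 2)
    return r2, 1.0 - press / ss_total
```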
Antoch, Marina P; Wrobel, Michelle; Kuropatwinski, Karen K; Gitlin, Ilya; Leonova, Katerina I; Toshkov, Ilia; Gleiberman, Anatoli S; Hutson, Alan D; Chernova, Olga B; Gudkov, Andrei V
2017-03-19
The development of healthspan-extending pharmaceuticals requires quantitative estimation of age-related progressive physiological decline. In humans, individual health status can be quantitatively assessed by means of a frailty index (FI), a parameter which reflects the scale of accumulation of age-related deficits. However, adaptation of this methodology to animal models is a challenging task, since it includes multiple subjective parameters. Here we report the development of a quantitative, non-invasive procedure to estimate the biological age of an individual animal by creating a physiological frailty index (PFI). We demonstrated the dynamics of PFI increase during chronological aging of male and female NIH Swiss mice. We also demonstrated accelerated growth of the PFI in animals placed on a high-fat diet, reflecting aging acceleration by obesity, and provide a tool for its quantitative assessment. Additionally, we showed that the PFI could reveal the anti-aging effect of the mTOR inhibitor rapatar (a bioavailable formulation of rapamycin) prior to registration of its effects on longevity. The PFI revealed substantial sex-related differences in normal chronological aging and in the efficacy of detrimental (high-fat diet) or beneficial (rapatar) aging modulatory factors. Together, these data introduce the PFI as a reliable, non-invasive, quantitative tool suitable for testing potential anti-aging pharmaceuticals in pre-clinical studies.
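The published PFI construction is not reproduced here; the sketch below shows only the generic deficit-accumulation arithmetic behind frailty indices, scoring each physiological parameter by its deviation from young-adult reference values and averaging the normalized deficits. The normalization by four standard deviations is an illustrative assumption.

```python
import numpy as np

def frailty_index(measurements, reference_means, reference_sds):
    """A simplified deficit-accumulation index: each physiological
    parameter is scored by its absolute deviation from young-adult
    reference values, clipped to [0, 1], and the scores are averaged."""
    z = np.abs((measurements - reference_means) / reference_sds)
    deficits = np.clip(z / 4.0, 0.0, 1.0)  # 4 SD treated as maximal deficit
    return deficits.mean()
```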
Imaging Cerebral Microhemorrhages in Military Service Members with Chronic Traumatic Brain Injury
Liu, Wei; Soderlund, Karl; Senseney, Justin S.; Joy, David; Yeh, Ping-Hong; Ollinger, John; Sham, Elyssa B.; Liu, Tian; Wang, Yi; Oakes, Terrence R.; Riedy, Gerard
2017-01-01
Purpose: To detect cerebral microhemorrhages in military service members with chronic traumatic brain injury by using susceptibility-weighted magnetic resonance (MR) imaging. The longitudinal evolution of microhemorrhages was monitored in a subset of patients by using quantitative susceptibility mapping. Materials and Methods: The study was approved by the Walter Reed National Military Medical Center institutional review board and is compliant with HIPAA guidelines. All participants underwent two-dimensional conventional gradient-recalled-echo MR imaging and three-dimensional flow-compensated multi-echo gradient-recalled-echo MR imaging (processed to generate susceptibility-weighted images and quantitative susceptibility maps), and a subset of patients underwent follow-up imaging. Microhemorrhages were identified by two radiologists independently. Comparisons of microhemorrhage number, size, and magnetic susceptibility derived from quantitative susceptibility maps between baseline and follow-up imaging examinations were performed by using the paired t test. Results: Among the 603 patients, cerebral microhemorrhages were identified in 43 patients, with six excluded for further analysis owing to artifacts. Seventy-seven percent (451 of 585) of the microhemorrhages on susceptibility-weighted images had a more conspicuous appearance than on gradient-recalled-echo images. Thirteen of the 37 patients underwent follow-up imaging examinations. In these patients, a smaller number of microhemorrhages were identified at follow-up imaging compared with baseline on quantitative susceptibility maps (mean ± standard deviation, 9.8 microhemorrhages ± 12.8 vs 13.7 microhemorrhages ± 16.6; P = .019). Quantitative susceptibility mapping–derived quantitative measures of microhemorrhages also decreased over time: −0.85 mm3 per day ± 1.59 for total volume (P = .039) and −0.10 parts per billion per day ± 0.14 for mean magnetic susceptibility (P = .016). Conclusion: The number of microhemorrhages and quantitative susceptibility mapping–derived quantitative measures of microhemorrhages all decreased over time, suggesting that hemosiderin products undergo continued, subtle evolution in the chronic stage. PMID:26371749
Budischak, Sarah A; Hoberg, Eric P; Abrams, Art; Jolles, Anna E; Ezenwa, Vanessa O
2015-09-01
Most hosts are concurrently or sequentially infected with multiple parasites; thus, fully understanding interactions between individual parasite species and their hosts depends on accurate characterization of the parasite community. For parasitic nematodes, noninvasive methods for obtaining quantitative, species-specific infection data in wildlife are often unreliable. Consequently, characterization of gastrointestinal nematode communities of wild hosts has largely relied on lethal sampling to isolate and enumerate adult worms directly from the tissues of dead hosts. The necessity of lethal sampling severely restricts the host species that can be studied, the adequacy of sample sizes to assess diversity, the geographic scope of collections and the research questions that can be addressed. Focusing on gastrointestinal nematodes of wild African buffalo, we evaluated whether accurate characterization of nematode communities could be made using a noninvasive technique that combined conventional parasitological approaches with molecular barcoding. To establish the reliability of this new method, we compared estimates of gastrointestinal nematode abundance, prevalence, richness and community composition derived from lethal sampling with estimates derived from our noninvasive approach. Our noninvasive technique accurately estimated total and species-specific worm abundances, as well as worm prevalence and community composition when compared to the lethal sampling method. Importantly, the rate of parasite species discovery was similar for both methods, and only a modest number of barcoded larvae (n = 10) were needed to capture key aspects of parasite community composition. Overall, this new noninvasive strategy offers numerous advantages over lethal sampling methods for studying nematode-host interactions in wildlife and can readily be applied to a range of study systems. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Huesca Martinez, M.; Garcia, M.; Roth, K. L.; Casas, A.; Ustin, S.
2015-12-01
There is a well-established need within the remote sensing community for improved estimation of canopy structure and understanding of its influence on the retrieval of leaf biochemical properties. The aim of this project was to evaluate the estimation of structural properties directly from hyperspectral data, with the broader goal that these might be used to constrain retrievals of canopy chemistry. We used NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) to discriminate different canopy structural types, defined in terms of biomass, canopy height, and vegetation complexity, and compared them to estimates of these properties measured by LiDAR data. We tested a large number of optical metrics, including single narrow-band reflectance and first-derivative values, sub-pixel cover fractions, narrow-band indices, spectral absorption features, and Principal Component Analysis components. Canopy structural types were identified and classified from different forest types by integrating the structural traits measured by optical metrics using the Random Forest (RF) classifier. The classification accuracy was above 70% in most of the vegetation scenarios. The best overall accuracy was achieved for hardwood forest (>80%) and the lowest accuracy was found in mixed forest (~70%). Furthermore, similarly high accuracy was found when the RF classifier was applied to a spatially independent dataset, demonstrating significant portability of the method. Results show that all spectral regions played a role in canopy structure assessment; thus the whole spectrum is required. Furthermore, optical metrics derived from AVIRIS proved to be a powerful technique for structural attribute mapping. This research illustrates the potential for using optical properties to distinguish several canopy structural types in different forest types, and these may be used to constrain quantitative measurements of absorbing properties in future research.
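A minimal scikit-learn sketch of the classification step: per-pixel optical metrics stacked as feature columns, LiDAR-defined structural types as labels, and a Random Forest evaluated on a held-out split. Hyperparameters and the split are illustrative assumptions, not the study's configuration.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def classify_structural_types(X, y):
    """X: (n_pixels, n_metrics) array of optical metrics (reflectances,
    derivatives, indices, PCA scores); y: LiDAR-defined structural types.
    Returns held-out accuracy and per-metric feature importances."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rf.fit(X_tr, y_tr)
    return accuracy_score(y_te, rf.predict(X_te)), rf.feature_importances_
```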
Colborne, Scott F.; Rush, Scott A.; Paterson, Gordon; Johnson, Timothy B.; Lantry, Brian F.; Fisk, Aaron T.
2016-01-01
The recent development of multi-dimensional stable isotope models for estimating both foraging patterns and niches has provided the analytical tools to further assess the food webs of freshwater populations. One approach to refine predictions from these analyses is to add a third isotope to the more common two-isotope carbon and nitrogen mixing models, increasing the power to resolve different prey sources. We compared predictions made with two-isotope carbon and nitrogen mixing models and three-isotope models that also included sulphur (δ34S) for the diets of Lake Ontario lake trout (Salvelinus namaycush). We determined the isotopic compositions of lake trout and potential prey fishes sampled from Lake Ontario and then used quantitative estimates of resource use generated by two- and three-isotope Bayesian mixing models (SIAR) to infer feeding patterns of lake trout. Both two- and three-isotope models indicated that alewife (Alosa pseudoharengus) and round goby (Neogobius melanostomus) were the primary prey items, but the three-isotope models were more consistent with recent measures of prey fish abundances and lake trout diets. The lake trout sampled directly from the hatcheries had isotopic compositions derived from the hatchery food, which were distinctly different from those derived from the natural prey sources. Those hatchery signals were retained for months after release, raising the possibility of distinguishing hatchery-reared yearlings from similarly sized, naturally reproduced lake trout based on isotopic compositions. Addition of a third isotope resulted in mixing model results confirming that round goby have become an important component of lake trout diet and may be overtaking alewife as a prey resource.
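SIAR fits the mixing model in a Bayesian framework; as a deterministic illustration of the underlying algebra, the sketch below solves for source proportions that reproduce a consumer's (δ13C, δ15N, δ34S) signature after trophic discrimination, with a mass-balance row appended. Unlike SIAR, this least-squares solve does not enforce non-negativity or propagate uncertainty, and the discrimination factors are user-supplied assumptions.

```python
import numpy as np

def mixing_proportions(consumer, sources, tdf):
    """Deterministic analogue of a three-isotope mixing model.
    consumer: (3,) array of d13C, d15N, d34S for the consumer;
    sources: (n_sources, 3) array of source signatures;
    tdf: (3,) trophic discrimination factors added to each source.
    Returns proportions p solving the tracer + mass-balance system."""
    A = (sources + tdf).T                          # 3 x n_sources
    A = np.vstack([A, np.ones(sources.shape[0])])  # sum(p) = 1 row
    b = np.append(consumer, 1.0)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```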
Briggs, Brandi N; Stender, Michael E; Muljadi, Patrick M; Donnelly, Meghan A; Winn, Virginia D; Ferguson, Virginia L
2015-06-25
Clinical practice requires improved techniques to assess human cervical tissue properties, especially at the internal os, or orifice, of the uterine cervix. Ultrasound elastography (UE) holds promise for non-invasively monitoring cervical stiffness throughout pregnancy. However, this technique provides qualitative strain images that cannot be linked to a material property (e.g., Young's modulus) without knowledge of the contact pressure under a rounded transvaginal transducer probe and correction for the resulting non-uniform strain dissipation. One technique to standardize elastogram images incorporates a material of known properties and uses one-dimensional, uniaxial Hooke's law to calculate Young's modulus within the compressed material half-space. However, this method does not account for strain dissipation and the strains that evolve in three-dimensional space. We demonstrate that an analytical approach based on 3D Hertzian contact mechanics provides a reasonable first approximation to correct for UE strain dissipation underneath a round transvaginal transducer probe and thus improves UE-derived estimates of tissue modulus. We validate the proposed analytical solution and evaluate sources of error using a finite element model. As compared to 1D uniaxial Hooke's law, the Hertzian contact-based solution yields significantly improved Young's modulus predictions in three homogeneous gelatin tissue phantoms possessing different moduli. We also demonstrate the feasibility of using this technique to image human cervical tissue, where UE-derived moduli estimations for the uterine cervix anterior lip agreed well with published, experimentally obtained values. Overall, UE with an attached reference standard and a Hertzian contact-based correction holds promise for improving quantitative estimates of cervical tissue modulus. Copyright © 2015 Elsevier Ltd. All rights reserved.
A European-wide ²²²radon and ²²²radon progeny comparison study
NASA Astrophysics Data System (ADS)
Schmithüsen, Dominik; Chambers, Scott; Fischer, Bernd; Gilge, Stefan; Hatakka, Juha; Kazan, Victor; Neubert, Rolf; Paatero, Jussi; Ramonet, Michel; Schlosser, Clemens; Schmid, Sabine; Vermeulen, Alex; Levin, Ingeborg
2017-04-01
Although atmospheric ²²²radon (²²²Rn) activity concentration measurements are currently performed worldwide, they are made by many different laboratories with fundamentally different measurement principles, so compatibility issues can limit their utility for regional-to-global applications. Consequently, we conducted a European-wide ²²²Rn / ²²²Rn progeny comparison study in order to evaluate the different measurement systems in use, determine potential systematic biases between them, and estimate correction factors that could be applied to harmonize data for their use as a tracer in atmospheric applications. Two compact portable Heidelberg radon monitors (HRM) were circulated to run for at least 1 month at each of the nine European measurement stations included in this comparison. Linear regressions between parallel data sets were calculated, yielding correction factors relative to the HRM ranging from 0.68 to 1.45. A calibration bias between the ANSTO (Australian Nuclear Science and Technology Organisation) two-filter radon monitors and the HRM of ANSTO / HRM = 1.11 ± 0.05 was found. Moreover, for the continental stations using one-filter systems, which derive atmospheric ²²²Rn activity concentrations from measured atmospheric progeny activity concentrations, preliminary ²¹⁴Po / ²²²Rn disequilibrium values were also estimated. Mean station-specific disequilibrium values between 0.8 at mountain sites (e.g. Schauinsland) and 0.9 at non-mountain sites were determined for sampling heights of around 20 to 30 m above ground level. The respective corrections for calibration biases and disequilibrium derived in this study need to be applied to obtain a compatible European atmospheric ²²²Rn data set for use in quantitative applications, such as regional model intercomparison and validation or trace gas flux estimates with the radon tracer method.
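A minimal sketch of how a station-specific correction factor can be derived from a parallel-run period with the travelling monitor; the study reports linear regressions between parallel data sets, and the zero-intercept form below is a simplification of that approach.

```python
import numpy as np

def correction_factor(station, hrm):
    """Zero-intercept regression slope of the travelling HRM record on
    the station's parallel radon record; multiplying the station data by
    this factor harmonizes it with the HRM scale."""
    station, hrm = np.asarray(station), np.asarray(hrm)
    m = np.isfinite(station) & np.isfinite(hrm)  # drop gaps in either record
    return (station[m] @ hrm[m]) / (station[m] @ station[m])
```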