Science.gov

Sample records for activity quantitative estimates

  1. Quantitative estimation of activity and quality for collections of functional genetic elements.

    PubMed

    Mutalik, Vivek K; Guimaraes, Joao C; Cambray, Guillaume; Mai, Quynh-Anh; Christoffersen, Marc Juul; Martin, Lance; Yu, Ayumi; Lam, Colin; Rodriguez, Cesar; Bennett, Gaymon; Keasling, Jay D; Endy, Drew; Arkin, Adam P

    2013-04-01

    The practice of engineering biology now depends on the ad hoc reuse of genetic elements whose precise activities vary across changing contexts. Methods are lacking for researchers to affordably coordinate the quantification and analysis of part performance across varied environments, as needed to identify, evaluate and improve problematic part types. We developed an easy-to-use analysis of variance (ANOVA) framework for quantifying the performance of genetic elements. For proof of concept, we assembled and analyzed combinations of prokaryotic transcription and translation initiation elements in Escherichia coli. We determined how estimation of part activity relates to the number of unique element combinations tested, and we show how to estimate expected ensemble-wide part activity from just one or two measurements. We propose a new statistic, biomolecular part 'quality', for tracking quantitative variation in part performance across changing contexts.
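    The two-way variance partitioning behind such an ANOVA framework can be sketched for a balanced design. A minimal illustration in Python; the promoter x RBS expression matrix below is invented for demonstration, not data from the study:

```python
import numpy as np

# rows: promoters, cols: ribosome binding sites (log expression; invented values)
Y = np.array([[2.1, 3.0, 1.2],
              [2.9, 3.8, 2.0],
              [1.5, 2.4, 0.7]])

grand = Y.mean()
row_eff = Y.mean(axis=1) - grand   # promoter main effects
col_eff = Y.mean(axis=0) - grand   # RBS main effects
resid = Y - grand - row_eff[:, None] - col_eff[None, :]

# sums of squares: total = rows + cols + residual for a balanced design
ss_row = Y.shape[1] * np.sum(row_eff ** 2)
ss_col = Y.shape[0] * np.sum(col_eff ** 2)
ss_res = np.sum(resid ** 2)
ss_tot = np.sum((Y - grand) ** 2)

# fraction of expression variance attributable to each element type
fractions = (ss_row / ss_tot, ss_col / ss_tot, ss_res / ss_tot)
print([round(f, 3) for f in fractions])
```

    The residual fraction is one crude proxy for how context-dependent ("low-quality") the parts are: the larger it is, the less element activities combine additively across contexts.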

  2. Health Impacts of Increased Physical Activity from Changes in Transportation Infrastructure: Quantitative Estimates for Three Communities

    PubMed Central

    Mansfield, Theodore J.; MacDonald Gibson, Jacqueline

    2015-01-01

    Recently, two quantitative tools have emerged for predicting the health impacts of projects that change population physical activity: the Health Economic Assessment Tool (HEAT) and Dynamic Modeling for Health Impact Assessment (DYNAMO-HIA). HEAT has been used to support health impact assessments of transportation infrastructure projects, but DYNAMO-HIA has not been previously employed for this purpose nor have the two tools been compared. To demonstrate the use of DYNAMO-HIA for supporting health impact assessments of transportation infrastructure projects, we employed the model in three communities (urban, suburban, and rural) in North Carolina. We also compared DYNAMO-HIA and HEAT predictions in the urban community. Using DYNAMO-HIA, we estimated benefit-cost ratios of 20.2 (95% C.I.: 8.7–30.6), 0.6 (0.3–0.9), and 4.7 (2.1–7.1) for the urban, suburban, and rural projects, respectively. For a 40-year time period, the HEAT predictions of deaths avoided by the urban infrastructure project were three times as high as DYNAMO-HIA's predictions due to HEAT's inability to account for changing population health characteristics over time. Quantitative health impact assessment coupled with economic valuation is a powerful tool for integrating health considerations into transportation decision-making. However, to avoid overestimating benefits, such quantitative HIAs should use dynamic, rather than static, approaches. PMID:26504832

  3. Health Impacts of Increased Physical Activity from Changes in Transportation Infrastructure: Quantitative Estimates for Three Communities.

    PubMed

    Mansfield, Theodore J; MacDonald Gibson, Jacqueline

    2015-01-01

    Recently, two quantitative tools have emerged for predicting the health impacts of projects that change population physical activity: the Health Economic Assessment Tool (HEAT) and Dynamic Modeling for Health Impact Assessment (DYNAMO-HIA). HEAT has been used to support health impact assessments of transportation infrastructure projects, but DYNAMO-HIA has not been previously employed for this purpose nor have the two tools been compared. To demonstrate the use of DYNAMO-HIA for supporting health impact assessments of transportation infrastructure projects, we employed the model in three communities (urban, suburban, and rural) in North Carolina. We also compared DYNAMO-HIA and HEAT predictions in the urban community. Using DYNAMO-HIA, we estimated benefit-cost ratios of 20.2 (95% C.I.: 8.7-30.6), 0.6 (0.3-0.9), and 4.7 (2.1-7.1) for the urban, suburban, and rural projects, respectively. For a 40-year time period, the HEAT predictions of deaths avoided by the urban infrastructure project were three times as high as DYNAMO-HIA's predictions due to HEAT's inability to account for changing population health characteristics over time. Quantitative health impact assessment coupled with economic valuation is a powerful tool for integrating health considerations into transportation decision-making. However, to avoid overestimating benefits, such quantitative HIAs should use dynamic, rather than static, approaches.

  4. [Quantitative estimation of connection of the heart rate rhythm with motor activity in rat fetuses].

    PubMed

    Vdovichenko, N D; Timofeeva, O P; Bursian, A V

    2014-01-01

    In rat fetuses at E17-20 with preserved placental circulation, mathematical analysis was used to determine the magnitude and character of the coupling between slow-wave oscillations of the heart rhythm and motor activity over 30 min of observation. In the software "PowerGraph 3.3.8", the studied signals were normalized and filtered in three frequency bands: D1, 0.02-0.2 Hz (5-50 s); D2, 0.0083-0.02 Hz (50 s-2 min); and D3, 0.0017-0.0083 Hz (2-10 min). The band-filtered EMG curves or piezograms were compared with periodograms of heart rhythm variation in the corresponding bands. In the software "Origin 8.0", the degree of intersystemic interrelation in each frequency band was quantified by the Pearson correlation coefficient, by the strength of the correlation, and by the time shift of the maximum of the cross-correlation function. In band D1, regardless of age, the coupling between heart rhythm oscillations and motor activity was weak. In band D2, the coupling in most cases fell in the zone of weak to moderate correlations. In the multiminute band D3, the coupling was more pronounced, and the number of animals showing a significant correlation increased. In all age groups, fetal motor activity bursts in the decasecond band were accompanied by short-lasting decelerations of the heart rhythm. In the minute band, a transition was observed from positive coupling at E17 and E18 to negative coupling at E19-20. The results are discussed in relation to age-related changes in the ratio of positive and negative heart rhythm oscillations depending on the character of motor activity.
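    The two measures used above, a Pearson coefficient plus the time shift at which the cross-correlation function peaks, can be illustrated on synthetic signals. A sketch with invented signal parameters, not the study's recordings:

```python
import numpy as np

fs = 1.0                         # sampling rate, Hz (assumed)
t = np.arange(0, 600, 1 / fs)    # 10 min of samples
# minute-band motor activity and a delayed, inverted heart-rate response
motor = np.sin(2 * np.pi * 0.01 * t)
heart = -0.6 * np.sin(2 * np.pi * 0.01 * (t - 5))

r = np.corrcoef(motor, heart)[0, 1]      # Pearson correlation coefficient

# lag (in samples) at which the cross-correlation magnitude peaks
m = motor - motor.mean()
h = heart - heart.mean()
xc = np.correlate(h, m, mode="full")
lag = int(np.argmax(np.abs(xc))) - (len(m) - 1)
print(round(r, 2), lag)
```

    A strongly negative r with a small positive lag would correspond to the pattern described above: motor bursts followed shortly by heart-rate decelerations.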

  5. Environmental Influences on Children's Physical Activity: Quantitative Estimates Using a Twin Design

    PubMed Central

    Fisher, Abigail; van Jaarsveld, Cornelia H. M.; Llewellyn, Clare H.; Wardle, Jane

    2010-01-01

    Background Twin studies offer a ‘natural experiment’ that can estimate the magnitude of environmental and genetic effects on a target phenotype. We hypothesised that fidgetiness and enjoyment of activity would be heritable but that objectively-measured daily activity would show a strong shared environmental effect. Methodology/Principal Findings In a sample of 9–12 year-old same-sex twin pairs (234 individuals; 57 MZ, 60 DZ pairs) we assessed three dimensions of physical activity: i) objectively-measured physical activity using accelerometry, ii) ‘fidgetiness’ using a standard psychometric scale, and iii) enjoyment of physical activity from both parent ratings and children's self-reports. Shared environment effects explained the majority (73%) of the variance in objectively-measured total physical activity (95% confidence intervals (CI): 0.63–0.81) with a smaller unshared environmental effect (27%; CI: 0.19–0.37) and no significant genetic effect. In contrast, fidgetiness was primarily under genetic control, with additive genetic effects explaining 75% (CI: 62–84%) of the variance, as were parents' reports of children's enjoyment of low-impact (74%; CI: 61–82%), medium-impact (80%; CI: 71–86%), and high-impact activity (85%; CI: 78–90%), and children's expressed activity preferences (60%; CI: 42–72%). Conclusions Consistent with our hypothesis, the shared environment was the dominant influence on children's day-to-day activity levels. This finding gives a strong impetus to research into the specific environmental characteristics influencing children's activity, and supports the value of interventions focused on home or school environments. PMID:20422046

  6. Estimating the Potential Toxicity of Chemicals Associated with Hydraulic Fracturing Operations Using Quantitative Structure-Activity Relationship Modeling.

    PubMed

    Yost, Erin E; Stanek, John; DeWoskin, Robert S; Burgoon, Lyle D

    2016-07-19

    The United States Environmental Protection Agency (EPA) identified 1173 chemicals associated with hydraulic fracturing fluids, flowback, or produced water, of which 1026 (87%) lack chronic oral toxicity values for human health assessments. To facilitate the ranking and prioritization of chemicals that lack toxicity values, it may be useful to employ toxicity estimates from quantitative structure-activity relationship (QSAR) models. Here we describe an approach for applying the results of a QSAR model from the TOPKAT program suite, which provides estimates of the rat chronic oral lowest-observed-adverse-effect level (LOAEL). Of the 1173 chemicals, TOPKAT was able to generate LOAEL estimates for 515 (44%). To address the uncertainty associated with these estimates, we assigned qualitative confidence scores (high, medium, or low) to each TOPKAT LOAEL estimate, and found 481 to be high-confidence. For 48 chemicals that had both a high-confidence TOPKAT LOAEL estimate and a chronic oral reference dose from EPA's Integrated Risk Information System (IRIS) database, Spearman rank correlation identified 68% agreement between the two values (permutation p-value = 1 × 10⁻¹¹). These results provide support for the use of TOPKAT LOAEL estimates in identifying and prioritizing potentially hazardous chemicals. High-confidence TOPKAT LOAEL estimates were available for 389 of 1026 hydraulic fracturing-related chemicals that lack chronic oral RfVs and OSFs from EPA-identified sources, including a subset of chemicals that are frequently used in hydraulic fracturing fluids.
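    A Spearman rank correlation with a permutation p-value, the comparison used above, can be sketched in a few lines. The paired values below are invented placeholders, not the TOPKAT or IRIS data:

```python
import numpy as np

def rankdata(x):
    # ranks 1..n (assumes no ties, which holds for this toy data)
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    return ranks

def spearman(x, y):
    # Spearman rho = Pearson correlation of the ranks
    return np.corrcoef(rankdata(x), rankdata(y))[0, 1]

rng = np.random.default_rng(0)
qsar = np.array([0.5, 1.2, 3.4, 0.9, 2.2, 5.0, 0.1, 4.1])  # invented
iris = np.array([0.7, 1.0, 2.8, 1.4, 2.0, 6.2, 0.2, 3.5])  # invented

rho = spearman(qsar, iris)
# permutation null: shuffle one variable, recompute rho each time
null = [spearman(qsar, rng.permutation(iris)) for _ in range(2000)]
p = (1 + sum(abs(r) >= abs(rho) for r in null)) / 2001
print(round(rho, 2), p)
```

    The +1 in numerator and denominator is the standard correction that keeps a permutation p-value from being exactly zero.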

  7. CORAL: quantitative structure-activity relationship models for estimating toxicity of organic compounds in rats.

    PubMed

    Toropova, A P; Toropov, A A; Benfenati, E; Gini, G; Leszczynska, D; Leszczynski, J

    2011-09-01

    For six random splits, one-variable models of rat toxicity (minus decimal logarithm of the 50% lethal dose [pLD50], oral exposure) have been calculated with CORAL software (http://www.insilico.eu/coral/). The total number of considered compounds is 689. New additional global attributes of the simplified molecular input line entry system (SMILES) have been examined for improvement of the optimal SMILES-based descriptors. These global SMILES attributes represent the presence of some chemical elements and different kinds of chemical bonds (double, triple, and stereochemical). The "classic" scheme of building up quantitative structure-property/activity relationships and the balance of correlations (BC) with the ideal slopes were compared. For all six random splits, the best prediction takes place if the aforementioned BC along with the global SMILES attributes are included in the modeling process. The average statistical characteristics for the external test set are the following: n = 119 ± 6.4, R² = 0.7371 ± 0.013, and root mean square error = 0.360 ± 0.037. Copyright © 2011 Wiley Periodicals, Inc.

  8. Quantitative estimation of minimum offset for multichannel surface-wave survey with actively exciting source

    USGS Publications Warehouse

    Xu, Y.; Xia, J.; Miller, R.D.

    2006-01-01

    Multichannel analysis of surface waves is a developing method widely used in shallow subsurface investigations. The field procedures and related parameters are very important for successful applications. Among these parameters, the source-receiver offset range is seldom discussed in theory and is normally determined by empirical or semi-quantitative methods in current practice. This paper discusses the problem from a theoretical perspective. A formula for quantitatively evaluating the minimum offset in a layered homogeneous elastic model was developed. The analytical results based on simple models and experimental data demonstrate that the formula is correct for surface-wave surveys in near-surface applications. © 2005 Elsevier B.V. All rights reserved.

  9. Estimating the persistence of organic contaminants in indirect potable reuse systems using quantitative structure activity relationship (QSAR).

    PubMed

    Lim, Seung Joo; Fox, Peter

    2012-09-01

    Predictions from the quantitative structure activity relationship (QSAR) model EPI Suite were modified to estimate the persistence of organic contaminants in indirect potable reuse systems. The modified prediction included the effects of sorption, biodegradation, and oxidation that may occur during sub-surface transport. A retardation factor was used to simulate the mobility of adsorbed compounds during sub-surface transport to a recovery well. A set of compounds with measured persistence properties during sub-surface transport was used to validate the modified EPI Suite predictions. Predicted and measured values were compared, and the residual sum of squares showed the importance of including oxidation and sorption. Sorption was the most important factor to include when predicting the fates of organic chemicals in the sub-surface environment.
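    The retardation factor mentioned above has a standard form in sub-surface transport modeling, R = 1 + (ρb/θ)·Kd, which slows a sorbing compound relative to the groundwater. A minimal sketch with assumed, illustrative parameter values (not values from the study):

```python
# Retardation factor for a sorbing compound in porous media:
#   R = 1 + (rho_b / theta) * Kd
rho_b = 1.6    # soil bulk density, g/cm^3 (assumed)
theta = 0.35   # effective porosity, dimensionless (assumed)
Kd = 0.8       # sorption distribution coefficient, cm^3/g (assumed)

R = 1 + (rho_b / theta) * Kd

# the compound travels R times slower than the groundwater itself
groundwater_velocity = 0.5            # m/day (assumed)
compound_velocity = groundwater_velocity / R
print(round(R, 2), round(compound_velocity, 3))
```

    A strongly sorbing compound (large Kd) thus spends far longer in transit to a recovery well, giving biodegradation and oxidation more time to act.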

  10. ESTIMATION OF MICROBIAL REDUCTIVE TRANSFORMATION RATES FOR CHLORINATED BENZENES AND PHENOLS USING A QUANTITATIVE STRUCTURE-ACTIVITY RELATIONSHIP APPROACH

    EPA Science Inventory

    A set of literature data was used to derive several quantitative structure-activity relationships (QSARs) to predict the rate constants for the microbial reductive dehalogenation of chlorinated aromatics. Dechlorination rate constants for 25 chloroaromatics were corrected for th...

  11. ESTIMATION OF MICROBIAL REDUCTIVE TRANSFORMATION RATES FOR CHLORINATED BENZENES AND PHENOLS USING A QUANTITATIVE STRUCTURE-ACTIVITY RELATIONSHIP APPROACH

    EPA Science Inventory

    A set of literature data was used to derive several quantitative structure-activity relationships (QSARs) to predict the rate constants for the microbial reductive dehalogenation of chlorinated aromatics. Dechlorination rate constants for 25 chloroaromatics were corrected for th...

  12. Quantitative estimates of relationships between geomagnetic activity and equatorial spread-F as determined by TID occurrence levels

    NASA Astrophysics Data System (ADS)

    Bowman, G. G.; Mortimer, I. K.

    2000-06-01

    Using a world-wide set of stations for 15 years, quantitative estimates of changes to equatorial spread-F (ESF) occurrence rates obtained from ionogram scalings have been determined for a range of geomagnetic activity (GA) levels, as well as for four different levels of solar activity. Average occurrence rates were used as a reference. The percentage changes vary significantly depending on these subdivisions. For example, for very high GA the inverse association is recorded by a change of -33% for Rz ≥ 150, and -10% for Rz < 50. Using data for 9 years for the equatorial station Huancayo, these measurements of ESF, which indicate the presence of TIDs, have also been investigated by somewhat similar analyses. Additional parameters were used which involved the local times of GA, with the ESF being examined separately for occurrence pre-midnight (PM) and after-midnight (AM). Again the negative changes were most pronounced for high GA in Rz-max years (-21%). This result is for PM ESF for GA at a local time of 1700. There were increased ESF levels (+31%) for AM ESF in Rz-min years for high GA around 2300 LT. This additional knowledge of the influence of GA on ESF occurrence, involving not only percentage changes but these values for a range of parameter levels, may be useful should short-term forecasts be needed. There is some discussion of comparisons that can be made between ESF results obtained by coherent scatter from incoherent-scatter equipment and those obtained by ionosondes.

  13. A Comparison of Three Quantitative Methods to Estimate G6PD Activity in the Chittagong Hill Tracts, Bangladesh

    PubMed Central

    Ley, Benedikt; Alam, Mohammad Shafiul; O’Donnell, James J.; Hossain, Mohammad Sharif; Kibria, Mohammad Golam; Jahan, Nusrat; Khan, Wasif A.; Thriemer, Kamala; Chatfield, Mark D.; Price, Ric N.; Richards, Jack S.

    2017-01-01

    Background Glucose-6-phosphate dehydrogenase deficiency (G6PDd) is a major risk factor for primaquine-induced haemolysis. There is a need for improved point-of-care and laboratory-based G6PD diagnostics to ensure safe use of primaquine. Methods G6PD activities of participants in a cross-sectional survey in Bangladesh were assessed using two novel quantitative assays, the modified WST-8 test and the CareStart™ G6PD Biosensor (Access Bio). The results were compared with a gold standard UV spectrophotometry assay (Randox). The handheld CareStart™ Hb instrument (Access Bio) is designed to be a companion instrument to the CareStart™ G6PD Biosensor, and its performance was compared to the well-validated HemoCue™ method. All quantitative G6PD results were normalized with the HemoCue™ result. Results A total of 1002 individuals were enrolled. The adjusted male median (AMM) derived by spectrophotometry was 7.03 U/g Hb (interquartile range (IQR): 5.38–8.69), by WST-8 was 7.03 U/g Hb (IQR: 5.22–8.16) and by Biosensor was 8.61 U/g Hb (IQR: 6.71–10.08). The AMM did not differ between spectrophotometry and WST-8 (p = 1.0) but differed significantly between spectrophotometry and Biosensor (p<0.01). Both WST-8 and Biosensor were correlated with spectrophotometry (rs = 0.5 and rs = 0.4, both p<0.001). The mean difference in G6PD activity was -0.12 U/g Hb (95% limit of agreement (95% LoA): -5.45 to 5.20) between spectrophotometry and WST-8 and -1.74 U/g Hb (95% LoA: -7.63 to 4.23) between spectrophotometry and Biosensor. The WST-8 identified 55.1% (49/89) and the Biosensor 19.1% (17/89) of individuals with G6PD activity <30% by spectrophotometry. Areas under the ROC curve did not differ significantly for the WST-8 and Biosensor irrespective of the cut-off activity applied (all p>0.05). Sensitivity and specificity for detecting G6PD activity <30% were 0.55 (95% confidence interval (95%CI): 0.44–0.66) and 0.98 (95%CI: 0.97–0.99) respectively for the WST-8 and 0

  14. A Comparison of Three Quantitative Methods to Estimate G6PD Activity in the Chittagong Hill Tracts, Bangladesh.

    PubMed

    Ley, Benedikt; Alam, Mohammad Shafiul; O'Donnell, James J; Hossain, Mohammad Sharif; Kibria, Mohammad Golam; Jahan, Nusrat; Khan, Wasif A; Thriemer, Kamala; Chatfield, Mark D; Price, Ric N; Richards, Jack S

    2017-01-01

    Glucose-6-phosphate dehydrogenase deficiency (G6PDd) is a major risk factor for primaquine-induced haemolysis. There is a need for improved point-of-care and laboratory-based G6PD diagnostics to ensure safe use of primaquine. G6PD activities of participants in a cross-sectional survey in Bangladesh were assessed using two novel quantitative assays, the modified WST-8 test and the CareStart™ G6PD Biosensor (Access Bio). The results were compared with a gold standard UV spectrophotometry assay (Randox). The handheld CareStart™ Hb instrument (Access Bio) is designed to be a companion instrument to the CareStart™ G6PD Biosensor, and its performance was compared to the well-validated HemoCue™ method. All quantitative G6PD results were normalized with the HemoCue™ result. A total of 1002 individuals were enrolled. The adjusted male median (AMM) derived by spectrophotometry was 7.03 U/g Hb (interquartile range (IQR): 5.38-8.69), by WST-8 was 7.03 U/g Hb (IQR: 5.22-8.16) and by Biosensor was 8.61 U/g Hb (IQR: 6.71-10.08). The AMM did not differ between spectrophotometry and WST-8 (p = 1.0) but differed significantly between spectrophotometry and Biosensor (p<0.01). Both WST-8 and Biosensor were correlated with spectrophotometry (rs = 0.5 and rs = 0.4, both p<0.001). The mean difference in G6PD activity was -0.12 U/g Hb (95% limit of agreement (95% LoA): -5.45 to 5.20) between spectrophotometry and WST-8 and -1.74 U/g Hb (95% LoA: -7.63 to 4.23) between spectrophotometry and Biosensor. The WST-8 identified 55.1% (49/89) and the Biosensor 19.1% (17/89) of individuals with G6PD activity <30% by spectrophotometry. Areas under the ROC curve did not differ significantly for the WST-8 and Biosensor irrespective of the cut-off activity applied (all p>0.05). Sensitivity and specificity for detecting G6PD activity <30% were 0.55 (95% confidence interval (95%CI): 0.44-0.66) and 0.98 (95%CI: 0.97-0.99) respectively for the WST-8 and 0.19 (95%CI: 0.12-0.29) and 0.99 (95%CI: 0
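    The mean difference with 95% limits of agreement reported above follows the standard Bland-Altman construction: the mean of the paired differences ± 1.96 times their standard deviation. A sketch on simulated paired assay values (invented, not the study data):

```python
import numpy as np

rng = np.random.default_rng(1)
# simulated paired measurements, U/g Hb (invented): a reference assay and
# a second assay with a systematic bias and extra noise
spectro = rng.normal(7.0, 2.0, 200)
biosensor = spectro + rng.normal(1.7, 3.0, 200)

diff = spectro - biosensor
mean_diff = diff.mean()                  # systematic bias between methods
sd = diff.std(ddof=1)
loa = (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)  # 95% limits of agreement
print(round(mean_diff, 2), tuple(round(v, 2) for v in loa))
```

    Wide limits of agreement, as in the Biosensor comparison above, mean individual measurements can disagree substantially with the reference even when the average bias looks modest.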

  15. A quantitative structure-activity approach for lipophilicity estimation of antitumor complexes of different metals using microemulsion electrokinetic chromatography.

    PubMed

    Foteeva, Lidia S; Trofimov, Denis A; Kuznetsova, Olga V; Kowol, Christian R; Arion, Vladimir B; Keppler, Bernhard K; Timerbaev, Andrei R

    2011-06-01

    Microemulsion electrokinetic chromatography (MEEKC) offers a valuable tool for the rapid and highly productive determination of lipophilicity for metal-based anticancer agents. In this investigation, the MEEKC technique was applied to estimate the n-octanol-water partition coefficients (logP(oct)) of a series of antiproliferative complexes of gallium(III) and iron(III) with (4)N-substituted α-N-heterocyclic thiosemicarbazones. Analysis of the relationships between the experimental logP(oct) values and the retention factors of the compounds showed satisfactory consistency for single-metal sets as well as for both metals combined. Since none of the available calculation programs can evaluate the contribution of the central metal ion to the logP(oct) (i.e. ΔlogP(oct)) of complexes of different metals, this parameter was measured experimentally by the standard 'shake-flask' method. Extending the logP(oct) programs with the ΔlogP(oct) data resulted in good lipophilicity predictions for the complexes of gallium(III) and iron(III) included in one regression set. Comparing the logP(oct) values of the metal thiosemicarbazonates under examination with their antiproliferative activities (i.e. 50% inhibitory concentrations in cancer cells) provided evidence that their cytotoxic potency is associated with the ability to cross the lipid bilayer of the cell membrane via passive diffusion. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Evaluation of quantitative imaging methods for organ activity and residence time estimation using a population of phantoms having realistic variations in anatomy and uptake

    SciTech Connect

    He Bin; Du Yong; Segars, W. Paul; Wahl, Richard L.; Sgouros, George; Jacene, Heather; Frey, Eric C.

    2009-02-15

    Estimating organ residence times is an essential part of patient-specific dosimetry for radioimmunotherapy (RIT). Quantitative imaging methods for RIT are often evaluated using a single physical or simulated phantom but are intended to be applied clinically where there is variability in patient anatomy, biodistribution, and biokinetics. To provide a more relevant evaluation, the authors have thus developed a population of phantoms with realistic variations in these factors and applied it to the evaluation of quantitative imaging methods both to find the best method and to demonstrate the effects of these variations. Using whole body scans and SPECT/CT images, organ shapes and time-activity curves of 111In ibritumomab tiuxetan were measured in dosimetrically important organs in seven patients undergoing a high dose therapy regimen. Based on these measurements, we created a 3D NURBS-based cardiac-torso (NCAT)-based phantom population. SPECT and planar data at realistic count levels were then simulated using previously validated Monte Carlo simulation tools. The projections from the population were used to evaluate the accuracy and variation in accuracy of residence time estimation methods that used a time series of SPECT and planar scans. Quantitative SPECT (QSPECT) reconstruction methods were used that compensated for attenuation, scatter, and the collimator-detector response. Planar images were processed with a conventional (CPlanar) method that used geometric mean attenuation and triple-energy window scatter compensation and a quantitative planar (QPlanar) processing method that used model-based compensation for image degrading effects. Residence times were estimated from activity estimates made at each of five time points. The authors also evaluated hybrid methods that used CPlanar or QPlanar time-activity curves rescaled to the activity estimated from a single QSPECT image. 
    The methods were evaluated in terms of mean relative error and standard deviation of the

  17. The Impact of 3D Volume-of-Interest Definition on Accuracy and Precision of Activity Estimation in Quantitative SPECT and Planar Processing Methods

    PubMed Central

    He, Bin; Frey, Eric C.

    2010-01-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise, and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT), and planar (QPlanar) processing. Another important effect impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimations. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in the same transaxial plane in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g., in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from −1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ

  18. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    NASA Astrophysics Data System (ADS)

    He, Bin; Frey, Eric C.

    2010-06-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were

  19. Estimating the Potential Toxicity of Chemicals Associated with Hydraulic Fracturing Operations Using Quantitative Structure-Activity Relationship Modeling

    EPA Pesticide Factsheets

    Researchers used a QSAR model to develop estimates of potential toxicity for chemicals used in hydraulic fracturing (HF) fluids or found in flowback or produced water, facilitating the evaluation of chemicals that lack chronic oral toxicity values.

  20. Two quantitative approaches for estimating content validity.

    PubMed

    Wynd, Christine A; Schmidt, Bruce; Schaefer, Michelle Atkins

    2003-08-01

    Instrument content validity is often established through qualitative expert reviews, yet quantitative analysis of reviewer agreements is also advocated in the literature. Two quantitative approaches to content validity estimations were compared and contrasted using a newly developed instrument called the Osteoporosis Risk Assessment Tool (ORAT). Data obtained from a panel of eight expert judges were analyzed. A Content Validity Index (CVI) initially determined that only one item lacked interrater proportion agreement about its relevance to the instrument as a whole (CVI = 0.57). Concern that higher proportion agreement ratings might be due to random chance stimulated further analysis using a multirater kappa coefficient of agreement. An additional seven items had low kappas, ranging from 0.29 to 0.48 and indicating poor agreement among the experts. The findings supported the elimination or revision of eight items. Pros and cons to using both proportion agreement and kappa coefficient analysis are examined.
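    The two quantities compared above can be sketched directly: the item-level CVI as the proportion of judges rating an item relevant, and a chance-corrected counterpart. The correction below is the modified kappa sometimes paired with the CVI (a binomial chance-agreement adjustment); it is offered as an illustration, not necessarily the exact multirater statistic the authors computed. Ratings are invented:

```python
from math import comb

def item_cvi(ratings):
    # proportion of judges rating the item relevant (1) vs not relevant (0)
    return sum(ratings) / len(ratings)

def modified_kappa(ratings):
    # chance-corrected CVI: pc is the binomial probability that exactly
    # this many of the N judges would say "relevant" by coin-flip chance
    n, a = len(ratings), sum(ratings)
    pc = comb(n, a) * 0.5 ** n
    cvi = a / n
    return (cvi - pc) / (1 - pc)

items = [
    [1, 1, 1, 1, 1, 1, 1, 1],   # unanimous agreement among 8 judges
    [1, 1, 1, 1, 1, 1, 1, 0],   # 7 of 8 judges
    [1, 1, 0, 1, 0, 1, 0, 1],   # 5 of 8: moderate CVI, weaker kappa
]
for row in items:
    print(item_cvi(row), round(modified_kappa(row), 2))
```

    The third row shows the concern raised in the abstract: a proportion-agreement index can look acceptable while the chance-corrected value flags poor agreement.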

  1. Rapid Quantitative Pharmacodynamic Imaging with Bayesian Estimation

    PubMed Central

    Koller, Jonathan M.; Vachon, M. Jonathan; Bretthorst, G. Larry; Black, Kevin J.

    2016-01-01

    We recently described rapid quantitative pharmacodynamic imaging, a novel method for estimating sensitivity of a biological system to a drug. We tested its accuracy in simulated biological signals with varying receptor sensitivity and varying levels of random noise, and presented initial proof-of-concept data from functional MRI (fMRI) studies in primate brain. However, the initial simulation testing used a simple iterative approach to estimate pharmacokinetic-pharmacodynamic (PKPD) parameters, an approach that was computationally efficient but returned parameters only from a small, discrete set of values chosen a priori. Here we revisit the simulation testing using a Bayesian method to estimate the PKPD parameters. This improved accuracy compared to our previous method, and noise without intentional signal was never interpreted as signal. We also reanalyze the fMRI proof-of-concept data. The success with the simulated data, and with the limited fMRI data, is a necessary first step toward further testing of rapid quantitative pharmacodynamic imaging. PMID:27092045

  2. Quantitative Estimation of Tissue Blood Flow Rate.

    PubMed

    Tozer, Gillian M; Prise, Vivien E; Cunningham, Vincent J

    2016-01-01

The rate of blood flow through a tissue (F) is a critical parameter for assessing the functional efficiency of a blood vessel network following angiogenesis. This chapter aims to provide the principles behind the estimation of F, how F relates to other commonly used measures of tissue perfusion, and a practical approach for estimating F in laboratory animals, using small, readily diffusible and metabolically inert radio-tracers. The methods described require relatively nonspecialized equipment. However, the analytical descriptions apply equally to complementary techniques involving more sophisticated noninvasive imaging. Two techniques are described for the quantitative estimation of F based on measuring the rate of tissue uptake following intravenous administration of radioactive iodo-antipyrine (or other suitable tracer). The Tissue Equilibration Technique is the classical approach; the Indicator Fractionation Technique, which is simpler to perform, is a practical alternative in many cases. The experimental procedures and analytical methods for both techniques are given, as well as guidelines for choosing the most appropriate method.

  3. Thermal diffusivity estimation with quantitative pulsed phase thermography

    NASA Astrophysics Data System (ADS)

    Ospina-Borras, J. E.; Florez-Ospina, Juan F.; Benitez-Restrepo, H. D.; Maldague, X.

    2015-05-01

Quantitative Pulsed Phase Thermography (PPT) has so far been used only to estimate defect parameters such as depth and thermal resistance. Here, we propose a thermal-quadrupole-based method that extends quantitative PPT to estimate thermal diffusivity by solving an inverse problem based on non-linear least-squares estimation. The approach is tested with pulsed thermography data acquired from a composite sample, and we compare our results with an established time-domain technique. The proposed quantitative analysis with PPT provides estimates of thermal diffusivity close to those obtained with the time-domain approach, and requires only a priori knowledge of the sample thickness.

  4. Quantitative estimation in Health Impact Assessment: Opportunities and challenges

    SciTech Connect

    Bhatia, Rajiv; Seto, Edmund

    2011-04-15

Health Impact Assessment (HIA) considers multiple effects on health of policies, programs, plans and projects and thus requires the use of diverse analytic tools and sources of evidence. Quantitative estimation has desirable properties for the purposes of HIA, but adequate tools for quantification currently exist for only a limited number of health impacts and decision settings; furthermore, quantitative estimation raises thorny questions about the precision of estimates and the validity of methodological assumptions. In the United States, HIA has only recently emerged as an independent practice apart from integrated EIA, and this article aims to synthesize the experience with quantitative health effects estimation within that practice. We use examples identified through a scan of the U.S. practice experience to illustrate methods applied in different policy settings, along with their strengths and limitations. We then discuss opportunity areas and practical considerations for the use of quantitative estimation in HIA.

  5. Activities: Visualization, Estimation, Computation.

    ERIC Educational Resources Information Center

    Maletsky, Evan M.

    1982-01-01

    The material is designed to help students build a cone model, visualize how its dimensions change as its shape changes, estimate maximum volume position, and develop problem-solving skills. Worksheets designed for duplication for classroom use are included. Part of the activity involves student analysis of a BASIC program. (MP)

  6. The Mapping Model: A Cognitive Theory of Quantitative Estimation

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2008-01-01

    How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…

  7. Quantitative cancer risk estimation for formaldehyde

    SciTech Connect

Starr, T.B.

    1990-03-01

    Of primary concern are irreversible effects, such as cancer induction, that formaldehyde exposure could have on human health. Dose-response data from human exposure situations would provide the most solid foundation for risk assessment, avoiding problematic extrapolations from the health effects seen in nonhuman species. However, epidemiologic studies of human formaldehyde exposure have provided little definitive information regarding dose-response. Reliance must consequently be placed on laboratory animal evidence. An impressive array of data points to significantly nonlinear relationships between rodent tumor incidence and administered dose, and between target tissue dose and administered dose (the latter for both rodents and Rhesus monkeys) following exposure to formaldehyde by inhalation. Disproportionately less formaldehyde binds covalently to the DNA of nasal respiratory epithelium at low than at high airborne concentrations. Use of this internal measure of delivered dose in analyses of rodent bioassay nasal tumor response yields multistage model estimates of low-dose risk, both point and upper bound, that are lower than equivalent estimates based upon airborne formaldehyde concentration. In addition, risk estimates obtained for Rhesus monkeys appear at least 10-fold lower than corresponding estimates for identically exposed Fischer-344 rats. 70 references.

  8. Bayesian inverse modeling for quantitative precipitation estimation

    NASA Astrophysics Data System (ADS)

    Schinagl, Katharina; Rieger, Christian; Simmer, Clemens; Xie, Xinxin; Friederichs, Petra

    2017-04-01

Polarimetric radars provide a richness of precipitation-related measurements. In particular, their high spatial and temporal resolution makes the data an important source of information, e.g., for hydrological modeling. However, uncertainties in the precipitation estimates are large, so their systematic assessment and quantification is of great importance. Polarimetric radar observables like horizontal and vertical reflectivity ZH and ZV, cross-correlation coefficient ρHV and specific differential phase KDP are related to the drop size distribution (DSD) in the scan. This relation is described by forward operators which are integrals over the DSD and scattering terms. Given the polarimetric observables, the respective forward operators and assumptions about the measurement errors, we investigate the uncertainty in the DSD parameter estimation and, based on it, the uncertainty of precipitation estimates. We assume that the DSD follows a gamma model, N(D) = N0 D^μ exp(-ΛD), where all three parameters are variable. This model allows us to account for the high variability of the DSD. We employ the framework of Bayesian inverse methods to derive the posterior distribution of the DSD parameters. The inverse problem is investigated in a simulated environment (SE) using the COSMO-DE numerical weather prediction model. The advantage of the SE is that, unlike in a real-world application, we know the parameters we want to estimate. Thus, building the inverse model into the SE gives us the opportunity to verify our results against the COSMO-simulated DSD values.
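
    As a minimal sketch of the forward-operator idea, the following evaluates the gamma DSD above under a toy Rayleigh-regime reflectivity integral; the study's polarimetric operators involve full scattering terms, which are not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def gamma_dsd(D, N0, mu, lam):
    """Three-parameter gamma DSD: N(D) = N0 * D**mu * exp(-lam * D)."""
    return N0 * D**mu * np.exp(-lam * D)

def reflectivity(N0, mu, lam, Dmax=12.0):
    """Toy Rayleigh-regime forward operator: Z = integral of D**6 N(D) dD,
    a simplified stand-in for the polarimetric scattering integrals."""
    edges = np.linspace(0.0, Dmax, 4001)
    D = 0.5 * (edges[:-1] + edges[1:])           # midpoint rule
    return np.sum(D**6 * gamma_dsd(D, N0, mu, lam)) * (edges[1] - edges[0])

# Analytic check: with N0=1, mu=2, lam=2, Z = Gamma(9) / 2**9 = 78.75
print(reflectivity(1.0, 2.0, 2.0))
```

    In a Bayesian inversion, operators like this map candidate DSD parameters to predicted observables, which are then compared with measurements under the assumed error model.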

  9. Quantitative Activities for Introductory Astronomy

    NASA Astrophysics Data System (ADS)

    Keohane, Jonathan W.; Bartlett, J. L.; Foy, J. P.

    2010-01-01

We present a collection of short lecture-tutorial (or homework) activities, designed to be both quantitative and accessible to the introductory astronomy student. Each of these involves interpreting some real data, solving a problem using ratios and proportionalities, and making a conclusion based on the calculation. Selected titles include: "The Mass of Neptune"; "The Temperature on Titan"; "Rocks in the Early Solar System"; "Comets Hitting Planets"; "Ages of Meteorites"; "How Flat are Saturn's Rings?"; "Tides of the Sun and Moon on the Earth"; "The Gliese 581 Solar System"; "Buckets in the Rain"; "How Hot, Bright and Big is Betelgeuse?"; "Bombs and the Sun"; "What Forms Stars?"; "Lifetimes of Cars and Stars"; "The Mass of the Milky"; "How Old is the Universe?"; "Is The Universe Speeding up or Slowing Down?"
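
    In the spirit of these activities (a hypothetical worked example, not taken from the collection), a "Mass of Neptune"-style calculation applies Kepler's third law to Triton's orbit:

```python
import math

# Kepler's third law gives the central mass from one satellite's orbit:
# M = 4 * pi^2 * a^3 / (G * T^2)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
a = 3.548e8            # Triton's semi-major axis, m
T = 5.877 * 86400      # Triton's orbital period, s

M = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"{M:.3e} kg")   # ~1.0e26 kg, close to the accepted value for Neptune
```

    The exercise illustrates the pattern the abstract describes: real data in, a proportionality-based calculation, and a conclusion checked against a known quantity.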

  10. Quantitative estimation of infarct size by simultaneous dual radionuclide single photon emission computed tomography: comparison with peak serum creatine kinase activity

    SciTech Connect

Kawaguchi, K.; Sone, T.; Tsuboi, H.; Sassa, H.; Okumura, K.; Hashimoto, H.; Ito, T.; Satake, T.

    1991-05-01

To test the hypothesis that simultaneous dual energy single photon emission computed tomography (SPECT) with technetium-99m (99mTc) pyrophosphate and thallium-201 (201Tl) can provide an accurate estimate of the size of myocardial infarction, and to assess the correlation between infarct size and peak serum creatine kinase activity, 165 patients with acute myocardial infarction underwent SPECT 3.2 +/- 1.3 (SD) days after the onset of acute myocardial infarction. In the present study, the difference in the intensity of 99mTc-pyrophosphate accumulation was assumed to be attributable to differences in the volume of infarcted myocardium, and the infarct volume was corrected by the ratio of the myocardial activity to the osseous activity to quantify the intensity of 99mTc-pyrophosphate accumulation. The correlation of measured infarct volume with peak serum creatine kinase activity was significant (r = 0.60, p < 0.01). There was also a significant linear correlation between the corrected infarct volume and peak serum creatine kinase activity (r = 0.71, p < 0.01). Subgroup analysis showed a high correlation between corrected volume and peak creatine kinase activity in patients with anterior infarctions (r = 0.75, p < 0.01) but a poorer correlation in patients with inferior or posterior infarctions (r = 0.50, p < 0.01). In both the early reperfusion and the no reperfusion groups, a good correlation was found between corrected infarct volume and peak serum creatine kinase activity (r = 0.76 and r = 0.76, respectively; p < 0.01).

  11. Quantitative study of single molecule location estimation techniques

    PubMed Central

    Abraham, Anish V.; Ram, Sripad; Chao, Jerry; Ward, E. S.; Ober, Raimund J.

    2010-01-01

    Estimating the location of single molecules from microscopy images is a key step in many quantitative single molecule data analysis techniques. Different algorithms have been advocated for the fitting of single molecule data, particularly the nonlinear least squares and maximum likelihood estimators. Comparisons were carried out to assess the performance of these two algorithms in different scenarios. Our results show that both estimators, on average, are able to recover the true location of the single molecule in all scenarios we examined. However, in the absence of modeling inaccuracies and low noise levels, the maximum likelihood estimator is more accurate than the nonlinear least squares estimator, as measured by the standard deviations of its estimates, and attains the best possible accuracy achievable for the sets of imaging and experimental conditions that were tested. Although neither algorithm is consistently superior to the other in the presence of modeling inaccuracies or misspecifications, the maximum likelihood algorithm emerges as a robust estimator producing results with consistent accuracy across various model mismatches and misspecifications. At high noise levels, relative to the signal from the point source, neither algorithm has a clear accuracy advantage over the other. Comparisons were also carried out for two localization accuracy measures derived previously. Software packages with user-friendly graphical interfaces developed for single molecule location estimation (EstimationTool) and limit of the localization accuracy calculations (FandPLimitTool) are also discussed. PMID:20052043
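
    A minimal sketch of the two estimators being compared, on a simulated single-molecule image. The PSF model, pixel grid, and photon counts are illustrative assumptions, and a brute-force grid search stands in for the iterative optimizers used in practice.

```python
import numpy as np

rng = np.random.default_rng(7)
yy, xx = np.mgrid[0:15, 0:15]

def model(x0, y0, N=2000.0, s=1.3, bg=2.0):
    """Expected pixel counts: 2-D Gaussian PSF plus uniform background."""
    g = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * s**2))
    return N * g / (2 * np.pi * s**2) + bg

true_xy = (7.3, 6.8)
data = rng.poisson(model(*true_xy))          # shot-noise-limited image

def lsq(p):                                  # nonlinear least squares objective
    return np.sum((data - model(*p))**2)

def nll(p):                                  # Poisson negative log-likelihood (MLE)
    mu = model(*p)
    return np.sum(mu - data * np.log(mu))

cand = np.arange(6.0, 8.001, 0.05)           # candidate centers, brute force
def fit(obj):
    return min(((obj((x, y)), x, y) for x in cand for y in cand))[1:]

fit_lsq, fit_mle = fit(lsq), fit(nll)
print(fit_lsq, fit_mle)                      # both near the true (7.3, 6.8)
```

    With ample photons both objectives recover the true center, matching the abstract's finding that the estimators differ mainly in accuracy margins and robustness rather than in average correctness.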

  12. Quantitative study of single molecule location estimation techniques.

    PubMed

    Abraham, Anish V; Ram, Sripad; Chao, Jerry; Ward, E S; Ober, Raimund J

    2009-12-21

    Estimating the location of single molecules from microscopy images is a key step in many quantitative single molecule data analysis techniques. Different algorithms have been advocated for the fitting of single molecule data, particularly the nonlinear least squares and maximum likelihood estimators. Comparisons were carried out to assess the performance of these two algorithms in different scenarios. Our results show that both estimators, on average, are able to recover the true location of the single molecule in all scenarios we examined. However, in the absence of modeling inaccuracies and low noise levels, the maximum likelihood estimator is more accurate than the nonlinear least squares estimator, as measured by the standard deviations of its estimates, and attains the best possible accuracy achievable for the sets of imaging and experimental conditions that were tested. Although neither algorithm is consistently superior to the other in the presence of modeling inaccuracies or misspecifications, the maximum likelihood algorithm emerges as a robust estimator producing results with consistent accuracy across various model mismatches and misspecifications. At high noise levels, relative to the signal from the point source, neither algorithm has a clear accuracy advantage over the other. Comparisons were also carried out for two localization accuracy measures derived previously. Software packages with user-friendly graphical interfaces developed for single molecule location estimation (EstimationTool) and limit of the localization accuracy calculations (FandPLimitTool) are also discussed.

  13. The quantitative estimation of IT-related risk probabilities.

    PubMed

    Herrmann, Andrea

    2013-08-01

    How well can people estimate IT-related risk? Although estimating risk is a fundamental activity in software management and risk is the basis for many decisions, little is known about how well IT-related risk can be estimated at all. Therefore, we executed a risk estimation experiment with 36 participants. They estimated the probabilities of IT-related risks and we investigated the effect of the following factors on the quality of the risk estimation: the estimator's age, work experience in computing, (self-reported) safety awareness and previous experience with this risk, the absolute value of the risk's probability, and the effect of knowing the estimates of the other participants (see: Delphi method). Our main findings are: risk probabilities are difficult to estimate. Younger and inexperienced estimators were not significantly worse than older and more experienced estimators, but the older and more experienced subjects better used the knowledge gained by knowing the other estimators' results. Persons with higher safety awareness tend to overestimate risk probabilities, but can better estimate ordinal ranks of risk probabilities. Previous own experience with a risk leads to an overestimation of its probability (unlike in other fields like medicine or disasters, where experience with a disease leads to more realistic probability estimates and nonexperience to an underestimation).

  14. Quantitative linkage: a statistical procedure for its detection and estimation.

    PubMed

    Hill, A P

    1975-05-01

A new approach for detecting and estimating quantitative linkage from sibship data is presented. Using a nested analysis of variance design (with marker genotype nested within sibship), it is shown that under the null hypothesis of no linkage the expected between-marker-genotype, within-sibship mean square (EMSβ) is equal to the expected within-marker-genotype, within-sibship mean square (EMSe), while under the alternative hypothesis of linkage the first is greater than the second. Thus the regular F-ratio, MSβ/MSe, can be used to test for quantitative linkage. This is true for both backcross and intercross matings, whether or not there is dominance at the marker locus. A second test, comparing the within-marker-genotype, within-sibship variances, is available for intercross matings. A maximum likelihood procedure for the estimation of the recombination frequency is also presented.
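
    The F-ratio test described above can be sketched as follows, assuming trait values grouped by marker genotype within each sibship; the data layout and values are a made-up toy example, not from the paper.

```python
import numpy as np

def linkage_f_ratio(sibships):
    """F-ratio MS_beta / MS_e for the nested design: marker genotype
    nested within sibship. `sibships` is a list of dicts mapping a
    marker genotype -> trait values of the sibs with that genotype."""
    ss_b = ss_w = df_b = df_w = 0.0
    for sib in sibships:
        all_vals = np.concatenate([np.asarray(v, float) for v in sib.values()])
        grand = all_vals.mean()
        for v in sib.values():
            v = np.asarray(v, float)
            ss_b += len(v) * (v.mean() - grand)**2   # between genotype, within sibship
            ss_w += np.sum((v - v.mean())**2)        # within genotype
            df_w += len(v) - 1
        df_b += len(sib) - 1
    return (ss_b / df_b) / (ss_w / df_w)

# One sibship, two marker genotypes; a large F-ratio suggests linkage
print(linkage_f_ratio([{"Mm": [1.0, 2.0], "mm": [3.0, 4.0]}]))
```

    Under no linkage the two mean squares have the same expectation, so F hovers near 1; linkage inflates the between-genotype component.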

  15. Mapping quantitative trait Loci using generalized estimating equations.

    PubMed Central

    Lange, C; Whittaker, J C

    2001-01-01

    A number of statistical methods are now available to map quantitative trait loci (QTL) relative to markers. However, no existing methodology can simultaneously map QTL for multiple nonnormal traits. In this article we rectify this deficiency by developing a QTL-mapping approach based on generalized estimating equations (GEE). Simulation experiments are used to illustrate the application of the GEE-based approach. PMID:11729173

  16. Bayesian parameter estimation in spectral quantitative photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Pulkkinen, Aki; Cox, Ben T.; Arridge, Simon R.; Kaipio, Jari P.; Tarvainen, Tanja

    2016-03-01

Photoacoustic tomography (PAT) is an imaging technique combining the strong contrast of optical imaging with the high spatial resolution of ultrasound imaging. These strengths are achieved via the photoacoustic effect, in which the spatial absorption of a light pulse is converted into a measurable propagating ultrasound wave. The method is seen as a potential tool for small animal imaging, pre-clinical investigations, the study of blood vessels and vasculature, as well as for cancer imaging. The goal in PAT is to form an image of the absorbed optical energy density field from the measured ultrasound data via acoustic inverse problem approaches. Quantitative PAT (QPAT) proceeds from these images and forms quantitative estimates of the optical properties of the target. This optical inverse problem of QPAT is ill-posed. To alleviate the issue, spectral QPAT (SQPAT) utilizes PAT data formed at multiple optical wavelengths simultaneously with optical parameter models of tissue to form quantitative estimates of the parameters of interest. In this work, the inverse problem of SQPAT is investigated. Light propagation is modelled using the diffusion equation. Optical absorption is described by a chromophore-concentration-weighted sum of known chromophore absorption spectra. Scattering is described by Mie scattering theory with an exponential power law. In the inverse problem, the spatially varying unknown parameters of interest are the chromophore concentrations, the Mie scattering parameters (power law factor and exponent), and the Grüneisen parameter. The inverse problem is approached with a Bayesian method. It is demonstrated numerically that estimation of all parameters of interest is possible with this approach.

  17. Quantitative volumetric breast density estimation using phase contrast mammography

    NASA Astrophysics Data System (ADS)

    Wang, Zhentian; Hauser, Nik; Kubik-Huch, Rahel A.; D'Isidoro, Fabio; Stampanoni, Marco

    2015-05-01

Phase contrast mammography using a grating interferometer is an emerging technology for breast imaging. It provides information complementary to conventional absorption-based methods, and additional diagnostic value can be obtained by retrieving quantitative information from the three physical signals (absorption, differential phase and small-angle scattering) yielded simultaneously. We report a non-parametric quantitative volumetric breast density estimation method that exploits the ratio (dubbed the R value) of the absorption signal to the small-angle scattering signal. The R value is used to determine breast composition, and the volumetric breast density (VBD) of the whole breast is obtained analytically by deducing the relationship between the R value and the pixel-wise breast density. The proposed method is tested in a phantom study and on a group of 27 mastectomy samples. In the clinical evaluation, the estimated VBD values from both cranio-caudal (CC) and anterior-posterior (AP) views are compared with the ACR scores given by radiologists to the pre-surgical mammograms. The VBD estimates from the proposed method are consistent with the pre-surgical ACR scores, indicating the effectiveness of the method in breast density estimation. A positive correlation is found between the estimated VBD and the diagnostic ACR score for both the CC view (p = 0.033) and the AP view (p = 0.001). A linear regression between the results of the CC view and AP view showed a correlation coefficient γ = 0.77, which indicates the robustness of the proposed method and the quantitative character of the additional information obtained with our approach.
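
    A pixel-wise sketch of the R-value idea, under an assumed (made-up) linear calibration between R and glandular fraction; the paper derives the R-to-density relationship analytically rather than assuming it, and the endpoint values below are placeholders.

```python
import numpy as np

def vbd_from_signals(absorption, scattering, r_fat=2.0, r_gland=0.5):
    """Map pixel-wise R = absorption / small-angle scattering to a local
    glandular fraction via an assumed linear calibration (r_fat, r_gland
    are hypothetical endpoint R values), then average to a percent VBD."""
    R = absorption / scattering
    frac = np.clip((r_fat - R) / (r_fat - r_gland), 0.0, 1.0)  # 0=fat, 1=gland
    return 100.0 * frac.mean()

# One fully fatty and one fully glandular pixel -> 50.0 percent
print(vbd_from_signals(np.array([[2.0, 0.5]]), np.ones((1, 2))))
```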

  18. Computer Monte Carlo simulation in quantitative resource estimation

    USGS Publications Warehouse

    Root, D.H.; Menzie, W.D.; Scott, W.A.

    1992-01-01

The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska and the second is for lode tin deposits in the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit. © 1992 Oxford University Press.
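
    Step (3) above can be sketched as a small simulation: combine an estimated number-of-deposits distribution with historical grade and tonnage distributions, including a grade-tonnage dependence, to get a probability distribution of contained metal. The deposit-count probabilities and lognormal parameters below are invented placeholders, not USGS data.

```python
import numpy as np

rng = np.random.default_rng(42)

n_prob = {0: 0.3, 1: 0.4, 2: 0.2, 3: 0.1}       # P(number of deposits), step 2
mean = [np.log(5e6), np.log(0.01)]              # log tonnage (t), log grade
cov = [[1.0, -0.3],                             # grade-tonnage dependence
       [-0.3, 0.5]]

totals = []
for _ in range(20000):
    n = rng.choice(list(n_prob), p=list(n_prob.values()))
    ln = rng.multivariate_normal(mean, cov, size=n)
    tons, grade = np.exp(ln[:, 0]), np.exp(ln[:, 1])
    totals.append(np.sum(tons * grade))         # contained metal, tonnes
totals = np.array(totals)
print(np.percentile(totals, [10, 50, 90]))      # distribution of contained metal
```

    Because a 30% chance of zero deposits is assumed, the low percentiles are zero; the upper tail reflects the joint grade-tonnage variability.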

  19. Quantitative estimation of poikilocytosis by the coherent optical method

    NASA Astrophysics Data System (ADS)

    Safonova, Larisa P.; Samorodov, Andrey V.; Spiridonov, Igor N.

    2000-05-01

An investigation of the necessity, and the reliability required, of poikilocytosis determination in hematology has shown that existing techniques suffer from serious shortcomings. To determine the deviation of erythrocyte form from the normal (rounded) one in blood smears, it is expedient to use an integrative estimate. An algorithm is suggested that is based on the correlation of erythrocyte morphological parameters with properties of the spatial-frequency spectrum of the blood smear. During analytical and experimental research, an integrative form parameter (IFP), which characterizes the increase of the relative concentration of cells with changed form above 5% and the predominating type of poikilocytes, was proposed. An algorithm for statistically reliable estimation of the IFP on standard stained blood smears has been developed. To provide a quantitative characterization of the morphological features of cells, a form vector has been proposed, and its validity for poikilocyte differentiation was shown.

  20. Handling uncertainty in quantitative estimates in integrated resource planning

    SciTech Connect

    Tonn, B.E.; Wagner, C.G.

    1995-01-01

This report addresses uncertainty in Integrated Resource Planning (IRP). IRP is a planning and decision-making process employed by utilities, usually at the behest of Public Utility Commissions (PUCs), to develop plans to ensure that utilities have the resources necessary to meet consumer demand at reasonable cost. IRP has been used to assist utilities in developing plans that include not only traditional electricity supply options but also demand-side management (DSM) options. Uncertainty is a major issue for IRP. Future values for numerous important variables (e.g., future fuel prices, future electricity demand, stringency of future environmental regulations) cannot ever be known with certainty. Many economically significant decisions are so unique that statistically-based probabilities cannot even be calculated. The entire utility strategic planning process, including IRP, encompasses different types of decisions that are made with different time horizons and at different points in time. Because of fundamental pressures for change in the industry, including competition in generation, gone is the time when utilities could easily predict increases in demand, enjoy long lead times to bring on new capacity, and bank on steady profits. The purpose of this report is to address in detail one aspect of uncertainty in IRP: dealing with uncertainty in quantitative estimates, such as the future demand for electricity or the cost to produce a megawatt (MW) of power. A theme that runs throughout the report is that every effort must be made to honestly represent what is known about a variable that can be used to estimate its value, what cannot be known, and what is not known due to operational constraints. Applying this philosophy to the representation of uncertainty in quantitative estimates, it is argued that imprecise probabilities are superior to classical probabilities for IRP.

  1. Quantitative Compactness Estimates for Hamilton-Jacobi Equations

    NASA Astrophysics Data System (ADS)

    Ancona, Fabio; Cannarsa, Piermarco; Nguyen, Khai T.

    2016-02-01

We study quantitative compactness estimates in W^{1,1}_{loc} for the map S_t, t > 0, that associates with given initial data u_0 ∈ Lip(R^N) the corresponding solution S_t u_0 of a Hamilton-Jacobi equation u_t + H(∇_x u) = 0, t ≥ 0, x ∈ R^N, with a uniformly convex Hamiltonian H = H(p). We provide upper and lower estimates of order 1/ε^N on the Kolmogorov ε-entropy in W^{1,1} of the image through the map S_t of sets of bounded, compactly supported initial data. Estimates of this type are inspired by a question posed by Lax (Course on Hyperbolic Systems of Conservation Laws. XXVII Scuola Estiva di Fisica Matematica, Ravello, 2002) within the context of conservation laws, and could provide a measure of the order of "resolution" of a numerical method implemented for this equation.

  2. The Evolution of Solar Flux: Quantitative Estimates for Planetary Studies

    NASA Astrophysics Data System (ADS)

    Claire, M.; Sheets, J.; Cohen, M.; Ribas, I.; Meadows, V. S.; Catling, D. C.

    2012-12-01

The Sun has a profound impact on planetary atmospheres, driving such diverse processes as the vertical temperature profile, molecular reaction rates, and atmospheric escape. Understanding the time dependence of the solar flux is therefore essential to understanding the atmospheric evolution of planets and satellites in the solar system. We present numerical models of the solar flux applicable temporally and spatially throughout the solar system (Claire et al., ApJ, 2012, in press). We combine data from the Sun and solar analogs to estimate enhanced FUV and X-ray continuum and strong line fluxes for the young Sun. In addition, we describe a new parameterization for the near UV, where both the chromosphere and photosphere contribute to the flux, and use Kurucz models to estimate variable visible and infrared fluxes. The modeled fluxes are valid at nanometer resolution from 0.1 nm through the infrared, and from 0.6 Gyr through 6.7 Gyr, with extensions from the solar zero age main sequence to 8.0 Gyr (subject to additional uncertainties). This work enables quantitative estimates of the wavelength dependence of solar flux for a range of paleodates that are relevant to studies of the chemical evolution of planetary atmospheres in the solar system (or around other G-type stars). We apply this parameterization to an early Earth photochemical model, which reveals changes in photolysis reaction rates significantly larger than the intrinsic model uncertainties.

  3. Quantitative assessment of growth plate activity

    SciTech Connect

    Harcke, H.T.; Macy, N.J.; Mandell, G.A.; MacEwen, G.D.

    1984-01-01

In the immature skeleton, the physis, or growth plate, is the area of bone least able to withstand external forces and is therefore prone to trauma. Such trauma often leads to premature closure of the plate and results in limb shortening and/or angular deformity (varus or valgus). Active localization of bone-seeking tracers in the physis makes bone scintigraphy an excellent method for assessing growth plate physiology. To be most effective, however, physeal activity should be quantified so that serial evaluations are accurate and comparable. The authors have developed a quantitative method for assessing physeal activity and have applied it to the hip and knee. Using computer-acquired pinhole images of the abnormal and contralateral normal joints, ten regions of interest are placed at key locations around each joint, and comparative ratios are generated to form a growth plate profile. The ratios compare segmental physeal activity to total growth plate activity on both ipsilateral and contralateral sides and to adjacent bone. In 25 patients, ages 2 to 15 years, with angular deformities of the legs secondary to trauma, Blount's disease, or Perthes disease, this technique was able to differentiate abnormal segmental physeal activity. This is important since plate closure does not usually occur uniformly across the physis. The technique may permit the use of scintigraphy in the prediction of early closure through quantitative analysis of serial studies.

  4. Cleaning validation: quantitative estimation of atorvastatin in production area.

    PubMed

    Moradiya, Mehul R; Solanki, Kamlesh P; Shah, Purvi A; Patel, Kalpana G; Thakkar, Vaishali T; Gandhi, Tejal R

    2013-01-01

Carefully designed cleaning validation and its evaluation can ensure that residues of an active pharmaceutical ingredient will not carry over and cross-contaminate the subsequent product. UV spectrophotometric and total organic carbon-solid sample module (TOC-SSM) methods were developed and validated, as per ICH guidelines, for the verification and determination of atorvastatin residues in the production area and to confirm the efficiency of the cleaning procedure. Atorvastatin was selected on the basis of a worst-case rating approach. It exhibited good linearity in the range of 5 to 25 μg/mL for the UV spectrophotometric method and 7300 to 83800 μg for the TOC-SSM method. The limit of detection was 0.419 μg/mL and 4.19 μg in the UV spectrophotometric and TOC-SSM methods, respectively. The limit of quantitation was 1.267 μg/mL and 12.69 μg in the UV spectrophotometric and TOC-SSM methods, respectively. Percentage recovery from spiked stainless steel plates was found to be 95.37% and 92.82% in the UV spectrophotometric and TOC-SSM methods, respectively. The calculated limit of acceptance per swab for atorvastatin (35.65 μg/swab) was not exceeded during three consecutive batches of production after the cleaning procedure. Both proposed methods are suitable for quantitative determination of atorvastatin on manufacturing equipment surfaces well below the limit of contamination. The ease of sample preparation permits fast and efficient application of the proposed methods in quantitation of atorvastatin residue with precision and accuracy. Above all, the methodology is low-cost, simple, and a less time-consuming alternative for confirming the efficiency of the cleaning procedure in pharmaceutical industries.

  5. Quantitative Nanostructure-Activity Relationship (QNAR) Modeling

    PubMed Central

    Fourches, Denis; Pu, Dongqiuye; Tassa, Carlos; Weissleder, Ralph; Shaw, Stanley Y.; Mumper, Russell J.; Tropsha, Alexander

    2010-01-01

    Evaluation of biological effects, both desired and undesired, caused by Manufactured NanoParticles (MNPs) is of critical importance for nanotechnology. Experimental studies, especially toxicological, are time-consuming, costly, and often impractical, calling for the development of efficient computational approaches capable of predicting biological effects of MNPs. To this end, we have investigated the potential of cheminformatics methods such as Quantitative Structure-Activity Relationship (QSAR) modeling to establish statistically significant relationships between measured biological activity profiles of MNPs and their physical, chemical, and geometrical properties, either measured experimentally or computed from the structure of MNPs. To reflect the context of the study, we termed our approach Quantitative Nanostructure-Activity Relationship (QNAR) modeling. We have employed two representative sets of MNPs studied recently using in vitro cell-based assays: (i) 51 various MNPs with diverse metal cores (PNAS, 2008, 105, pp 7387–7392) and (ii) 109 MNPs with similar core but diverse surface modifiers (Nat. Biotechnol., 2005, 23, pp 1418–1423). We have generated QNAR models using machine learning approaches such as Support Vector Machine (SVM)-based classification and k Nearest Neighbors (kNN)-based regression; their external predictive power was as high as 73% accuracy for classification models and an R2 of 0.72 for regression models. Our results suggest that QNAR models can be employed for: (i) predicting biological activity profiles of novel nanomaterials, and (ii) prioritizing the design and manufacturing of nanomaterials towards better and safer products. PMID:20857979
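    As a sketch of the regression side of such a workflow, a minimal k-nearest-neighbors regressor over nanoparticle descriptor vectors. The descriptors and activity values below are purely illustrative, not the study's data, and the plain Euclidean-distance kNN stands in for the tuned models used in the paper:

```python
import math

def knn_regress(train_X, train_y, query, k=3):
    """Predict the activity of a query descriptor vector as the mean
    activity of its k nearest training neighbours (Euclidean distance)."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train_X, train_y))
    return sum(y for _, y in dists[:k]) / k

# Toy descriptor vectors (e.g. normalized size, zeta potential) -> activity
X = [(1.0, 0.2), (1.1, 0.1), (3.0, 0.9), (3.2, 1.0)]
y = [0.1, 0.2, 0.8, 0.9]
print(knn_regress(X, y, (1.05, 0.15), k=2))  # mean of the two nearest
```

    External predictive power would be assessed by holding out MNPs never seen during model building, as done in the study.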

  6. Novel whole brain segmentation and volume estimation using quantitative MRI.

    PubMed

    West, J; Warntjes, J B M; Lundberg, P

    2012-05-01

    Brain segmentation and volume estimation of grey matter (GM), white matter (WM) and cerebrospinal fluid (CSF) are important for many neurological applications. Volumetric changes are observed in multiple sclerosis (MS), Alzheimer's disease and dementia, and in normal aging. A novel method is presented to segment brain tissue based on quantitative magnetic resonance imaging (qMRI) of the longitudinal relaxation rate R1, the transverse relaxation rate R2 and the proton density, PD. Previously reported qMRI values for WM, GM and CSF were used to define tissues and a Bloch simulation performed to investigate R1, R2 and PD for tissue mixtures in the presence of noise. Based on the simulations a lookup grid was constructed to relate tissue partial volume to the R1-R2-PD space. The method was validated in 10 healthy subjects. MRI data were acquired using six resolutions and three geometries. Repeatability for different resolutions was 3.2% for WM, 3.2% for GM, 1.0% for CSF and 2.2% for total brain volume. Repeatability for different geometries was 8.5% for WM, 9.4% for GM, 2.4% for CSF and 2.4% for total brain volume. We propose a new robust qMRI-based approach which we demonstrate in a patient with MS. • A method for segmenting the brain and estimating tissue volume is presented • This method measures white matter, grey matter, cerebrospinal fluid and remaining tissue • The method calculates tissue fractions in voxel, thus accounting for partial volume • Repeatability was 2.2% for total brain volume with imaging resolution <2.0 mm.

  7. Estimation of methanogen biomass via quantitation of coenzyme M

    USGS Publications Warehouse

    Elias, Dwayne A.; Krumholz, Lee R.; Tanner, Ralph S.; Suflita, Joseph M.

    1999-01-01

    Determination of the role of methanogenic bacteria in an anaerobic ecosystem often requires quantitation of the organisms. Because of the extreme oxygen sensitivity of these organisms and the inherent limitations of cultural techniques, an accurate biomass value is very difficult to obtain. We standardized a simple method for estimating methanogen biomass in a variety of environmental matrices. In this procedure we used the thiol biomarker coenzyme M (CoM) (2-mercaptoethanesulfonic acid), which is known to be present in all methanogenic bacteria. A high-performance liquid chromatography-based method for detecting thiols in pore water (A. Vairavamurthy and M. Mopper, Anal. Chim. Acta 78:363–370, 1990) was modified in order to quantify CoM in pure cultures, sediments, and sewage water samples. The identity of the CoM derivative was verified by using liquid chromatography-mass spectrometry. The assay was linear for CoM amounts ranging from 2 to 2,000 pmol, and the detection limit was 2 pmol of CoM/ml of sample. CoM was not adsorbed to sediments. The methanogens tested contained an average of 19.5 nmol of CoM/mg of protein and 0.39 ± 0.07 fmol of CoM/cell. Environmental samples contained an average of 0.41 ± 0.17 fmol/cell based on most-probable-number estimates. CoM was extracted by using 1% tri-n-butylphosphine in isopropanol. More than 90% of the CoM was recovered from pure cultures and environmental samples. We observed no interference from sediments in the CoM recovery process, and the method could be completed aerobically within 3 h. Freezing sediment samples resulted in 46 to 83% decreases in the amounts of detectable CoM, whereas freezing had no effect on the amounts of CoM determined in pure cultures. The method described here provides a quick and relatively simple way to estimate methanogenic biomass.
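    Given the reported mean per-cell CoM content (0.39 fmol/cell), a measured CoM concentration converts directly to an approximate methanogen cell density. A minimal sketch of that conversion (the helper name is ours, not the paper's):

```python
def cells_per_ml(com_pmol_per_ml, com_fmol_per_cell=0.39):
    """Convert a CoM concentration (pmol/ml) to an approximate methanogen
    cell density (cells/ml) using the study's mean per-cell CoM content."""
    return com_pmol_per_ml * 1000.0 / com_fmol_per_cell  # pmol -> fmol

# The assay's 2 pmol/ml detection limit corresponds to roughly:
print(f"{cells_per_ml(2):.2e} cells/ml")
```

    The ±0.07 fmol/cell spread on the conversion factor propagates directly into the biomass estimate, so the result is best quoted to about one significant figure.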

  8. Quantitative estimates of the volatility of ambient organic aerosol

    NASA Astrophysics Data System (ADS)

    Cappa, C. D.; Jimenez, J. L.

    2010-01-01

    Measurements of the sensitivity of organic aerosol (OA, and its components) mass to changes in temperature were recently reported by Huffman et al. (2009) using a tandem thermodenuder-aerosol mass spectrometer (TD-AMS) system in Mexico City and the Los Angeles area. Here, we use these measurements to derive quantitative estimates of aerosol volatility within the framework of absorptive partitioning theory using a kinetic model of aerosol evaporation in the TD. OA volatility distributions (or "basis-sets") are determined using several assumptions as to the enthalpy of vaporization (ΔHvap). We present two definitions of "non-volatile OA," one being a global and one a local definition. Based on these definitions, our analysis indicates that a substantial fraction of the organic aerosol is comprised of non-volatile components that will not evaporate under any atmospheric conditions, on the order of 50-80% when the most realistic ΔHvap assumptions are considered. The sensitivity of the total OA mass to dilution and ambient changes in temperature has been assessed for the various ΔHvap assumptions. The temperature sensitivity is relatively independent of the particular ΔHvap assumptions whereas dilution sensitivity is found to be greatest for the low (ΔHvap = 50 kJ/mol) and lowest for the high (ΔHvap = 150 kJ/mol) assumptions. This difference arises from the high ΔHvap assumptions yielding volatility distributions with a greater fraction of non-volatile material than the low ΔHvap assumptions. If the observations are fit using a 1 or 2-component model the sensitivity of the OA to dilution is unrealistically high. An empirical method introduced by Faulhaber et al. (2009) has also been used to independently estimate a volatility distribution for the ambient OA and is found to give results consistent with the high and variable ΔHvap assumptions. Our results also show that the amount of semivolatile gas-phase organics in equilibrium with the OA could range from ~20
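    The absorptive-partitioning framework behind these basis sets assigns each volatility bin, with saturation concentration C*, a particle-phase fraction Fp = 1/(1 + C*/C_OA). A sketch with a hypothetical three-bin distribution (not the paper's retrieved basis set; in practice C_OA must be solved self-consistently, but it is fixed here for illustration):

```python
def particle_fraction(c_star, c_oa):
    """Fraction of a compound in the particle phase under absorptive
    partitioning theory: Fp = 1 / (1 + C*/C_OA)."""
    return 1.0 / (1.0 + c_star / c_oa)

# Hypothetical volatility basis set: saturation concentrations (ug/m^3)
# and total (gas + particle) mass in each bin (ug/m^3).
c_stars = [0.01, 1.0, 100.0]
masses = [5.0, 3.0, 2.0]
c_oa = 10.0  # assumed total OA loading, ug/m^3
oa_mass = sum(m * particle_fraction(cs, c_oa)
              for m, cs in zip(masses, c_stars))
print(round(oa_mass, 2))
```

    Bins with C* far below C_OA behave as effectively non-volatile (Fp near 1), which is why the retrieved distributions with more low-C* mass show weaker dilution sensitivity.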

  9. Quantitative estimates of the volatility of ambient organic aerosol

    NASA Astrophysics Data System (ADS)

    Cappa, C. D.; Jimenez, J. L.

    2010-06-01

    Measurements of the sensitivity of organic aerosol (OA, and its components) mass to changes in temperature were recently reported by Huffman et al. (2009) using a tandem thermodenuder-aerosol mass spectrometer (TD-AMS) system in Mexico City and the Los Angeles area. Here, we use these measurements to derive quantitative estimates of aerosol volatility within the framework of absorptive partitioning theory using a kinetic model of aerosol evaporation in the TD. OA volatility distributions (or "basis-sets") are determined using several assumptions as to the enthalpy of vaporization (ΔHvap). We present two definitions of "non-volatile OA," one being a global and one a local definition. Based on these definitions, our analysis indicates that a substantial fraction of the organic aerosol is comprised of non-volatile components that will not evaporate under any atmospheric conditions; on the order of 50-80% when the most realistic ΔHvap assumptions are considered. The sensitivity of the total OA mass to dilution and ambient changes in temperature has been assessed for the various ΔHvap assumptions. The temperature sensitivity is relatively independent of the particular ΔHvap assumptions whereas dilution sensitivity is found to be greatest for the low (ΔHvap = 50 kJ/mol) and lowest for the high (ΔHvap = 150 kJ/mol) assumptions. This difference arises from the high ΔHvap assumptions yielding volatility distributions with a greater fraction of non-volatile material than the low ΔHvap assumptions. If the observations are fit using a 1 or 2-component model the sensitivity of the OA to dilution is unrealistically high. An empirical method introduced by Faulhaber et al. (2009) has also been used to independently estimate a volatility distribution for the ambient OA and is found to give results consistent with the high and variable ΔHvap assumptions. Our results also show that the amount of semivolatile gas-phase organics in equilibrium with the OA could range from ~20

  10. Quantitative modeling of multiscale neural activity

    NASA Astrophysics Data System (ADS)

    Robinson, Peter A.; Rennie, Christopher J.

    2007-01-01

    The electrical activity of the brain has been observed for over a century and is widely used to probe brain function and disorders, chiefly through the electroencephalogram (EEG) recorded by electrodes on the scalp. However, the connections between physiology and EEGs have been chiefly qualitative until recently, and most uses of the EEG have been based on phenomenological correlations. A quantitative mean-field model of brain electrical activity is described that spans the range of physiological and anatomical scales from microscopic synapses to the whole brain. Its parameters measure quantities such as synaptic strengths, signal delays, cellular time constants, and neural ranges, and are all constrained by independent physiological measurements. Application of standard techniques from wave physics allows successful predictions to be made of a wide range of EEG phenomena, including time series and spectra, evoked responses to stimuli, dependence on arousal state, seizure dynamics, and relationships to functional magnetic resonance imaging (fMRI). Fitting to experimental data also enables physiological parameters to be inferred, giving a new noninvasive window into brain function, especially when referenced to a standardized database of subjects. Modifications of the core model to treat mm-scale patchy interconnections in the visual cortex are also described, and it is shown that resulting waves obey the Schrödinger equation. This opens the possibility of classical cortical analogs of quantum phenomena.

  11. Transient stochastic downscaling of quantitative precipitation estimates for hydrological applications

    NASA Astrophysics Data System (ADS)

    Nogueira, M.; Barros, A. P.

    2015-10-01

    Rainfall fields are heavily thresholded and highly intermittent resulting in large areas of zero values. This deforms their stochastic spatial scale-invariant behavior, introducing scaling breaks and curvature in the spatial scale spectrum. To address this problem, spatial scaling analysis was performed inside continuous rainfall features (CRFs) delineated via cluster analysis. The results show that CRFs from single realizations of hourly rainfall display ubiquitous multifractal behavior that holds over a wide range of scales (from ≈1 km up to hundreds of km). The results further show that the aggregate scaling behavior of rainfall fields is intrinsically transient with the scaling parameters explicitly dependent on the atmospheric environment. These findings provide a framework for robust stochastic downscaling, bridging the gap between spatial scales of observed and simulated rainfall fields and the high-resolution requirements of hydrometeorological and hydrological studies. Here, a fractal downscaling algorithm adapted to CRFs is presented and applied to generate stochastically downscaled hourly rainfall products from radar derived Stage IV (∼4 km grid resolution) quantitative precipitation estimates (QPE) over the Integrated Precipitation and Hydrology Experiment (IPHEx) domain in the southeast USA. The methodology can produce large ensembles of statistically robust high-resolution fields without additional data or any calibration requirements, conserving the coarse resolution information and generating coherent small-scale variability and field statistics, hence adding value to the original fields. Moreover, it is computationally inexpensive enabling fast production of high-resolution rainfall realizations with latency adequate for forecasting applications. When the transient nature of the scaling behavior is considered, the results show a better ability to reproduce the statistical structure of observed rainfall compared to using fixed scaling parameters
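    The core of such stochastic downscaling is a multiplicative cascade: each coarse cell is repeatedly split into children whose random weights average to one, so the coarse-cell mean is conserved while small-scale variability is generated. A minimal one-dimensional sketch (lognormal weights and the parameter values are our illustrative choices, not the paper's calibrated generator):

```python
import random

def cascade_downscale(coarse, levels=2, sigma=0.3, seed=1):
    """Downscale a 1-D rainfall field by repeated cell splitting with
    multiplicative lognormal weights, conserving each parent-cell mean."""
    random.seed(seed)
    field = list(coarse)
    for _ in range(levels):
        new = []
        for value in field:
            w = min(random.lognormvariate(0.0, sigma), 2.0)  # keep w in (0, 2)
            # two children whose mean equals the parent value
            new.extend([value * w, value * (2.0 - w)])
        field = new
    return field

coarse = [4.0, 0.0, 2.0]  # e.g. Stage IV cells, mm/h
fine = cascade_downscale(coarse)
print(len(fine), round(sum(fine) / len(fine), 6))
```

    Note that zero cells stay zero, which mimics the intermittency that motivates restricting the scaling analysis to continuous rainfall features.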

  12. Quantitative CT: technique dependence of volume estimation on pulmonary nodules.

    PubMed

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan

    2012-03-07

    Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.

  13. Quantitative CT: technique dependence of volume estimation on pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan

    2012-03-01

    Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.

  14. Quantitative estimation of sampling uncertainties for mycotoxins in cereal shipments.

    PubMed

    Bourgeois, F S; Lyman, G J

    2012-01-01

    Many countries receive shipments of bulk cereals from primary producers. A substantial body of ongoing work seeks to arrive at appropriate standards for the quality of the shipments and the means to assess the shipments as they are out-loaded. Of concern are mycotoxin and heavy metal levels, pesticide and herbicide residue levels, and contamination by genetically modified organisms (GMOs). As the ability to quantify these contaminants improves through improved analytical techniques, the sampling methodologies applied to the shipments must also keep pace to ensure that the uncertainties attached to the sampling procedures do not overwhelm the analytical uncertainties. There is a need to understand and quantify sampling uncertainties under varying conditions of contamination. The analysis required is statistical and is challenging as the nature of the distribution of contaminants within a shipment is not well understood; very limited data exist. Limited work has been undertaken to quantify the variability of the contaminant concentrations in the flow of grain coming from a ship and the impact that this has on the variance of sampling. Relatively recent work by Paoletti et al. in 2006 [Paoletti C, Heissenberger A, Mazzara M, Larcher S, Grazioli E, Corbisier P, Hess N, Berben G, Lübeck PS, De Loose M, et al. 2006. Kernel lot distribution assessment (KeLDA): a study on the distribution of GMO in large soybean shipments. Eur Food Res Tech. 224:129-139] provides some insight into the variation in GMO concentrations in soybeans on cargo out-turn. Paoletti et al. analysed the data using correlogram analysis with the objective of quantifying the sampling uncertainty (variance) that attaches to the final cargo analysis, but this is only one possible means of quantifying sampling uncertainty. It is possible that in many cases the levels of contamination passing the sampler on out-loading are essentially random, negating the value of variographic quantitation of
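    The correlogram analysis mentioned here amounts to computing the sample autocorrelation of the contaminant-concentration stream passing the sampler; strong positive autocorrelation inflates the variance of systematic sampling, while a near-flat correlogram supports treating the stream as random. A minimal sketch over a hypothetical concentration series (not the KeLDA data):

```python
def correlogram(series, max_lag):
    """Biased sample autocorrelation r(lag) of a 1-D concentration stream,
    normalized so that r(0) = 1 and |r| <= 1."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    acf = []
    for lag in range(1, max_lag + 1):
        cov = sum((series[i] - mean) * (series[i + lag] - mean)
                  for i in range(n - lag)) / n
        acf.append(cov / var)
    return acf

# Hypothetical contaminant concentrations (ppb) in successive increments.
stream = [2.1, 2.3, 2.2, 5.0, 4.8, 2.0, 2.1, 2.2, 4.9, 5.1]
print([round(r, 2) for r in correlogram(stream, 3)])
```

    With the biased (divide-by-n) estimator the correlogram is positive semidefinite, which keeps downstream variance formulas well behaved.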

  15. Methodology significantly affects genome size estimates: quantitative evidence using bryophytes.

    PubMed

    Bainard, Jillian D; Fazekas, Aron J; Newmaster, Steven G

    2010-08-01

    Flow cytometry (FCM) is commonly used to determine plant genome size estimates. Methodology has improved and changed during the past three decades, and researchers are encouraged to optimize protocols for their specific application. However, this step is typically omitted or undescribed in the current plant genome size literature, and this omission could have serious consequences for the genome size estimates obtained. Using four bryophyte species (Brachythecium velutinum, Fissidens taxifolius, Hedwigia ciliata, and Thuidium minutulum), three methodological approaches to the use of FCM in plant genome size estimation were tested. These included nine different buffers (Baranyi's, de Laat's, Galbraith's, General Purpose, LB01, MgSO4, Otto's, Tris.MgCl2, and Woody Plant), seven propidium iodide (PI) staining periods (5, 10, 15, 20, 45, 60, and 120 min), and six PI concentrations (10, 25, 50, 100, 150, and 200 μg/ml). Buffer, staining period and staining concentration all had a statistically significant effect (P = 0.05) on the genome size estimates obtained for all four species. Buffer choice and PI concentration had the greatest effect, altering the 1C-values by as much as 8% and 14%, respectively. As well, the quality of the data varied with the different methodology used. Using the methodology determined to be the most accurate in this study (LB01 buffer and PI staining for 20 min at 150 μg/ml), three new genome size estimates were obtained: B. velutinum: 0.46 pg, H. ciliata: 0.30 pg, and T. minutulum: 0.46 pg. While the peak quality of flow cytometry histograms is important, researchers must consider that changes in methodology can also affect the relative peak positions and therefore the genome size estimates obtained for plants using FCM.
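    The 1C-value itself comes from a simple ratio of the sample's mean peak fluorescence to that of an internal standard of known genome size, which is exactly why shifts in relative peak position translate directly into shifted estimates. A sketch with hypothetical peak means and an illustrative standard:

```python
def genome_size_pg(sample_fluor, standard_fluor, standard_1c_pg):
    """Estimate a 1C genome size (pg) from relative propidium-iodide
    fluorescence against an internal standard of known genome size."""
    return sample_fluor / standard_fluor * standard_1c_pg

# Hypothetical G1-peak means; the standard's 1C-value is illustrative only.
print(round(genome_size_pg(46.0, 980.0, 9.79), 2))
```

    A buffer or stain condition that shifts the sample peak relative to the standard by a few percent moves the estimate by the same few percent, which is the effect size reported above.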

  16. Quantitative genetic tools for insecticide resistance risk assessment: estimating the heritability of resistance

    Treesearch

    Michael J. Firko; Jane Leslie Hayes

    1990-01-01

    Quantitative genetic studies of resistance can provide estimates of genetic parameters not available with other types of genetic analyses. Three methods are discussed for estimating the amount of additive genetic variation in resistance to individual insecticides and subsequent estimation of heritability (h2) of resistance. Sibling analysis and...
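    One standard estimator in this family is offspring-on-midparent regression, whose slope directly estimates narrow-sense heritability (h² = b for midparent values). A minimal sketch with hypothetical resistance phenotypes (e.g. log-LC50 values), not data from the chapter:

```python
def heritability_midparent(midparent, offspring):
    """Narrow-sense heritability as the least-squares slope of mean
    offspring phenotype regressed on midparent phenotype (h^2 = b)."""
    n = len(midparent)
    mx = sum(midparent) / n
    my = sum(offspring) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(midparent, offspring))
    var = sum((x - mx) ** 2 for x in midparent)
    return cov / var

# Hypothetical midparent and mean-brood log-LC50 values.
mid = [1.0, 1.2, 1.5, 1.8, 2.0]
off = [1.1, 1.2, 1.4, 1.7, 1.9]
print(round(heritability_midparent(mid, off), 2))
```

    With single-parent rather than midparent values the slope estimates h²/2, so the regression design determines the factor applied to the slope.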

  17. The Centiloid Project: standardizing quantitative amyloid plaque estimation by PET.

    PubMed

    Klunk, William E; Koeppe, Robert A; Price, Julie C; Benzinger, Tammie L; Devous, Michael D; Jagust, William J; Johnson, Keith A; Mathis, Chester A; Minhas, Davneet; Pontecorvo, Michael J; Rowe, Christopher C; Skovronsky, Daniel M; Mintun, Mark A

    2015-01-01

    Although amyloid imaging with PiB-PET ([C-11]Pittsburgh Compound-B positron emission tomography), and now with F-18-labeled tracers, has produced remarkably consistent qualitative findings across a large number of centers, there has been considerable variability in the exact numbers reported as quantitative outcome measures of tracer retention. In some cases this is as trivial as the choice of units, in some cases it is scanner dependent, and of course, different tracers yield different numbers. Our working group was formed to standardize quantitative amyloid imaging measures by scaling the outcome of each particular analysis method or tracer to a 0 to 100 scale, anchored by young controls (≤ 45 years) and typical Alzheimer's disease patients. The units of this scale have been named "Centiloids." Basically, we describe a "standard" method of analyzing PiB PET data and then a method for scaling any "nonstandard" method of PiB PET analysis (or any other tracer) to the Centiloid scale. Copyright © 2015 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
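    The Centiloid transform itself is a two-anchor linear rescaling of whatever outcome measure a given tracer/pipeline produces. A sketch with hypothetical anchor means (the real anchors come from the standard young-control and typical-AD calibration datasets defined by the working group):

```python
def centiloid(suvr, suvr_yc, suvr_ad):
    """Rescale a tracer/pipeline-specific uptake value so that young
    controls average 0 and typical AD patients average 100 Centiloids."""
    return 100.0 * (suvr - suvr_yc) / (suvr_ad - suvr_yc)

# Hypothetical anchor means for one analysis pipeline, for illustration.
print(round(centiloid(1.60, 1.10, 2.08), 1))
```

    Because the transform is linear, a "nonstandard" method need only be regressed against the standard PiB analysis on the calibration scans to obtain its own anchor values.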

  18. The Centiloid Project: Standardizing Quantitative Amyloid Plaque Estimation by PET

    PubMed Central

    Klunk, William E.; Koeppe, Robert A.; Price, Julie C.; Benzinger, Tammie; Devous, Michael D.; Jagust, William; Johnson, Keith; Mathis, Chester A.; Minhas, Davneet; Pontecorvo, Michael J.; Rowe, Christopher C.; Skovronsky, Daniel; Mintun, Mark

    2014-01-01

    Although amyloid imaging with PiB-PET, and now with F-18-labelled tracers, has produced remarkably consistent qualitative findings across a large number of centers, there has been considerable variability in the exact numbers reported as quantitative outcome measures of tracer retention. In some cases this is as trivial as the choice of units, in some cases it is scanner dependent, and of course, different tracers yield different numbers. Our working group was formed to standardize quantitative amyloid imaging measures by scaling the outcome of each particular analysis method or tracer to a 0 to 100 scale, anchored by young controls (≤45 years) and typical Alzheimer’s disease patients. The units of this scale have been named “Centiloids.” Basically, we describe a “standard” method of analyzing PiB PET data and then a method for scaling any “non-standard” method of PiB PET analysis (or any other tracer) to the Centiloid scale. PMID:25443857

  19. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    NASA Astrophysics Data System (ADS)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less
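    A cumulative DVH reduces a voxelized dose (or dose-rate) map to the fraction of an organ's volume receiving at least each threshold dose, which is the summary being compared between phantom and QSPECT reconstructions here. A minimal sketch over hypothetical voxel doses:

```python
def cumulative_dvh(dose_values, thresholds):
    """Cumulative dose-volume histogram: fraction of voxels receiving
    at least each threshold dose."""
    n = len(dose_values)
    return [sum(1 for d in dose_values if d >= t) / n for t in thresholds]

# Hypothetical voxel doses (Gy) inside one organ mask.
doses = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
print(cumulative_dvh(doses, [0.0, 1.0, 2.0, 3.0, 4.0]))
# -> [1.0, 0.875, 0.625, 0.375, 0.125]
```

    Noise and partial volume effects perturb the individual voxel doses, which is why they flatten or shift the reconstructed curve relative to the phantom truth.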

  20. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters.

    PubMed

    Cheng, Lishui; Hobbs, Robert F; Segars, Paul W; Sgouros, George; Frey, Eric C

    2013-06-07

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less

  1. Distance estimation from acceleration for quantitative evaluation of Parkinson tremor.

    PubMed

    Jeon, Hyoseon; Kim, Sang Kyong; Jeon, BeomSeok; Park, Kwang Suk

    2011-01-01

    The purpose of this paper is to assess Parkinson tremor by estimating its actual distance amplitude. We propose a practical, useful and simple method for evaluating Parkinson tremor as a distance value. We measured resting tremor in 7 Parkinson's disease (PD) patients with a triaxial accelerometer. Participants' resting tremor was rated on the Unified Parkinson's Disease Rating Scale (UPDRS) by a neurologist. First, we segmented a 7 s acceleration signal from the recorded data. To estimate tremor displacement, we performed double integration of the acceleration. Prior to double integration, a moving-average method was used to reduce the error of the integration constant. After estimating displacement, we calculated the tremor distance traveled during 1 s from the segmented signal using Euclidean distance. We evaluated the distance values against the UPDRS. The average moving distance during 1 s was 11.52 mm for UPDRS 1, 33.58 mm for UPDRS 2 and 382.22 mm for UPDRS 3. The estimated moving distance during 1 s was proportional to the clinical rating scale (UPDRS).
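    The pipeline described (detrend, integrate acceleration to velocity, detrend again, integrate to displacement, then sum the path length) can be sketched in a few lines. The window size, sampling rate, and the synthetic 5 Hz sine standing in for tremor acceleration are all illustrative assumptions, not the paper's settings:

```python
import math

def moving_average_detrend(signal, window):
    """Subtract a centered moving average to suppress slow drift and
    reduce integration-constant error before numerical integration."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(signal[i] - sum(signal[lo:hi]) / (hi - lo))
    return out

def integrate(signal, dt):
    """Cumulative trapezoidal integration."""
    out = [0.0]
    for a, b in zip(signal, signal[1:]):
        out.append(out[-1] + 0.5 * (a + b) * dt)
    return out

fs = 100.0  # Hz, hypothetical sampling rate
# Synthetic 7 s of 5 Hz "tremor" acceleration along one axis.
acc = [math.sin(2 * math.pi * 5 * t / fs) for t in range(700)]
vel = integrate(moving_average_detrend(acc, 21), 1 / fs)
disp = integrate(moving_average_detrend(vel, 21), 1 / fs)
path = sum(abs(b - a) for a, b in zip(disp, disp[1:]))  # distance over 7 s
print(round(path, 4))
```

    With triaxial data the per-sample step would be the Euclidean norm of the three displacement increments rather than a single-axis absolute difference.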

  2. Quantitative evaluation of activation state in functional brain imaging.

    PubMed

    Hu, Zhenghui; Ni, Pengyu; Liu, Cong; Zhao, Xiaohu; Liu, Huafeng; Shi, Pengcheng

    2012-10-01

    Neuronal activity can evoke the hemodynamic change that gives rise to the observed functional magnetic resonance imaging (fMRI) signal. These increases are also regulated by the resting blood volume fraction (V0) associated with regional vasculature. The activation locus detected by means of the change in the blood-oxygen-level-dependent (BOLD) signal intensity thereby may deviate from the actual active site due to varied vascular density in the cortex. Furthermore, conventional detection techniques evaluate the statistical significance of the hemodynamic observations. In this sense, the significance level relies not only upon the intensity of the BOLD signal change, but also upon the spatially inhomogeneous fMRI noise distribution that complicates the expression of the results. In this paper, we propose a quantitative strategy for the calibration of activation states to address these challenging problems. The quantitative assessment is based on the estimated neuronal efficacy parameter [Formula: see text] of the hemodynamic model in a voxel-by-voxel way. It is partly immune to the inhomogeneous fMRI noise by virtue of the strength of the optimization strategy. Moreover, it is easy to incorporate regional vascular information into the activation detection procedure. By combining MR angiography images, this approach can remove large vessel contamination in fMRI signals, and provide more accurate functional localization than classical statistical techniques for clinical applications. It is also helpful to investigate the nonlinear nature of the coupling between synaptic activity and the evoked BOLD response. The proposed method might be considered as a potentially useful complement to existing statistical approaches.

  3. Quantitative estimation of source complexity in tsunami-source inversion

    NASA Astrophysics Data System (ADS)

    Dettmer, Jan; Cummins, Phil R.; Hawkins, Rhys; Jakir Hossen, M.

    2016-04-01

    This work analyses tsunami waveforms to infer the spatiotemporal evolution of sea-surface displacement (the tsunami source) caused by earthquakes or other sources. Since the method considers sea-surface displacement directly, no assumptions about the fault or seafloor deformation are required. While this approach cannot address seismic aspects of the rupture, it greatly simplifies tsunami-source estimation, making it much less dependent on subjective fault and deformation assumptions. This results in a more accurate sea-surface displacement evolution in the source region. The spatial discretization is by wavelet decomposition represented by a trans-dimensional (trans-D) Bayesian tree structure. Wavelet coefficients are sampled by a reversible-jump algorithm, and additional coefficients are included only when required by the data. Therefore, source complexity is consistent with data information (parsimonious), and the method can adapt locally in both time and space. Since the source complexity is unknown and adapts locally, no regularization is required, resulting in more meaningful displacement magnitudes. By estimating displacement uncertainties in a Bayesian framework we can study the effect of parametrization choice on the source estimate. Uncertainty arises from observation errors and from limitations in the parametrization's ability to fully explain the observations. As a result, parametrization choice is closely related to uncertainty estimation and profoundly affects inversion results; parametrization selection should therefore be included in the inference process. Our inversion method is based on Bayesian model selection, a process which includes the choice of parametrization in the inference and makes it data driven. A trans-D model for the spatiotemporal discretization is applied here to include model selection naturally and efficiently in the inference by sampling probabilistically over parametrizations. The trans-D process results in better

  4. Quantitative estimates of precision for molecular isotopic measurements.

    PubMed

    Jasper, J P

    2001-01-01

    At least three methods of calculating the random errors or variance of molecular isotopic data are presently in use. The major components of variance are differentiated and quantified here into three to four individual components. The measurement error of the analyte relative to a working (whether internal or external) standard is quantified via the statistical pooled estimate of error. A statistical method is given for calculating the total variance associated with the difference of two individual isotopic compositions from two isotope laboratories, including the variances of the laboratory (secondary) and working standards as well as those of the analytes. An abbreviated method for estimating the error typical of chromatographic/isotope mass spectrometric methods is also presented. Copyright 2001 John Wiley & Sons, Ltd.
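
    The pooled estimate of error mentioned above combines replicate scatter across several analysis groups, and the variance of a difference between two laboratories' results adds the individual variances in quadrature. A minimal sketch (all replicate values below are invented, not data from the paper):

```python
import math

def pooled_std(groups):
    """Pooled standard deviation: s_p = sqrt(sum_i (n_i-1) s_i^2 / sum_i (n_i-1))."""
    num, dof = 0.0, 0
    for g in groups:
        n = len(g)
        mean = sum(g) / n
        num += sum((x - mean) ** 2 for x in g)  # equals (n_i - 1) * s_i^2
        dof += n - 1
    return math.sqrt(num / dof)

# Invented replicate delta-13C values (per mil) from three analysis sessions
# of an analyte measured against a working standard.
groups = [
    [-25.1, -25.3, -25.2],
    [-25.0, -25.4],
    [-25.2, -25.2, -25.1, -25.3],
]
sp = pooled_std(groups)  # ~0.141 per mil

# Variance of the difference of two independent lab results adds in quadrature.
sigma_diff = math.sqrt(0.14**2 + 0.10**2)
```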

  5. A NOVEL TECHNIQUE FOR QUANTITATIVE ESTIMATION OF UPTAKE OF DIESEL EXHAUST PARTICLES BY LUNG CELLS

    EPA Science Inventory

    While airborne particulates like diesel exhaust particulates (DEP) exert significant toxicological effects on lungs, quantitative estimation of accumulation of DEP inside lung cells has not been reported due to a lack of an accurate and quantitative technique for this purpose. I...

  7. The APEX Quantitative Proteomics Tool: Generating protein quantitation estimates from LC-MS/MS proteomics results

    PubMed Central

    Braisted, John C; Kuntumalla, Srilatha; Vogel, Christine; Marcotte, Edward M; Rodrigues, Alan R; Wang, Rong; Huang, Shih-Ting; Ferlanti, Erik S; Saeed, Alexander I; Fleischmann, Robert D; Peterson, Scott N; Pieper, Rembert

    2008-01-01

    Background Mass spectrometry (MS) based label-free protein quantitation has mainly focused on analysis of ion peak heights and peptide spectral counts. Most analyses of tandem mass spectrometry (MS/MS) data begin with an enzymatic digestion of a complex protein mixture to generate smaller peptides that can be separated and identified by an MS/MS instrument. Peptide spectral counting techniques attempt to quantify protein abundance by counting the number of detected tryptic peptides and their corresponding MS spectra. However, spectral counting is confounded by the fact that peptide physicochemical properties severely affect MS detection, resulting in each peptide having a different detection probability. Lu et al. (2007) described a modified spectral counting technique, Absolute Protein Expression (APEX), which improves on basic spectral counting methods by including a correction factor for each protein (called the Oi value) that accounts for variable peptide detection by MS techniques. The technique uses machine learning classification to derive peptide detection probabilities that are used to predict the number of tryptic peptides expected to be detected for one molecule of a particular protein (Oi). This predicted spectral count is compared to the protein's observed MS total spectral count during APEX computation of protein abundances. Results The APEX Quantitative Proteomics Tool, introduced here, is a free open source Java application that supports the APEX protein quantitation technique. The APEX tool uses data from standard tandem mass spectrometry proteomics experiments and provides computational support for APEX protein abundance quantitation through a set of graphical user interfaces that partition the parameter controls for the various processing tasks.
The tool also provides a Z-score analysis for identification of significant differential protein expression, a utility to assess APEX classifier performance via cross validation, and a utility to merge multiple
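
    The core APEX correction described above — dividing each protein's observed spectral count by its expected detectable peptide count per molecule (Oi) and normalizing over all proteins — can be sketched as follows. The protein names, counts, and Oi values are invented for illustration; the APEX tool itself performs a fuller computation.

```python
def apex_abundances(spectral_counts, oi, total_molecules=1e6):
    """APEX-style abundance: observed MS/MS spectral counts corrected by the
    expected number of detected peptides per molecule (Oi), normalized over
    all proteins, and scaled to an assumed total molecule count."""
    corrected = {p: spectral_counts[p] / oi[p] for p in spectral_counts}
    norm = sum(corrected.values())
    return {p: total_molecules * c / norm for p, c in corrected.items()}

# Invented example: protA has 4x the raw counts of protB, but after the Oi
# correction it is only 2x more abundant.
counts = {"protA": 40, "protB": 10}
oi = {"protA": 4.0, "protB": 2.0}
ab = apex_abundances(counts, oi, total_molecules=100.0)
# ab["protA"] ~ 66.7, ab["protB"] ~ 33.3
```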

  8. A Quantitative Model to Estimate Drug Resistance in Pathogens

    PubMed Central

    Baker, Frazier N.; Cushion, Melanie T.; Porollo, Aleksey

    2016-01-01

    Pneumocystis pneumonia (PCP) is an opportunistic infection that occurs in humans and other mammals with debilitated immune systems. These infections are caused by fungi in the genus Pneumocystis, which are not susceptible to standard antifungal agents. Despite decades of research and drug development, the primary treatment and prophylaxis for PCP remains a combination of trimethoprim (TMP) and sulfamethoxazole (SMX) that targets two enzymes in folic acid biosynthesis, dihydrofolate reductase (DHFR) and dihydropteroate synthase (DHPS), respectively. There is growing evidence of emerging resistance by Pneumocystis jirovecii (the species that infects humans) to TMP-SMX associated with mutations in the targeted enzymes. In the present study, we report the development of an accurate quantitative model to predict changes in the binding affinity of inhibitors (Ki, IC50) to the mutated proteins. The model is based on evolutionary information and amino acid covariance analysis. Predicted changes in binding affinity upon mutations correlate highly with the experimentally measured data. While trained on Pneumocystis jirovecii DHFR/TMP data, the model shows similar or better performance when evaluated on the resistance data for a different inhibitor of PjDHFR, another drug/target pair (PjDHPS/SMX) and another organism (Staphylococcus aureus DHFR/TMP). Therefore, we anticipate that the developed prediction model will be useful in the evaluation of possible resistance of newly sequenced variants of the pathogen and can be extended to other drug targets and organisms. PMID:28018911

  9. [Quantitative estimation of evapotranspiration from Tahe forest ecosystem, Northeast China].

    PubMed

    Qu, Di; Fan, Wen-Yi; Yang, Jin-Ming; Wang, Xu-Peng

    2014-06-01

    Evapotranspiration (ET) is an important parameter in agriculture, meteorology and hydrology research, and an important part of the global hydrological cycle. This paper applied the improved DHSVM distributed hydrological model to estimate the daily ET of the Tahe area in 2007, using leaf area index and other surface data extracted from TM remote sensing data, together with slope, aspect and other topographic indices obtained from the digital elevation model. The relationship between daily ET and daily watershed outlet flow was modelled with a BP neural network, and a water balance equation was established for the studied watershed; together these were used to test the accuracy of the estimation. The results showed that the model could be applied in the study area. The annual total ET of the Tahe watershed was 234.01 mm, with significant seasonal variation: ET was highest in summer, with an average daily value of 1.56 mm; the average daily ET in autumn and spring was 0.30 and 0.29 mm, respectively; and winter had the lowest values. Land cover type had a strong effect on ET: broadleaf forest had higher ET than mixed forest, followed by needle-leaf forest.
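
    A watershed water balance of the kind used above to check the ET estimate closes ET against precipitation, outlet flow and storage change. A minimal sketch; the annual totals below are illustrative assumptions, not the study's measurements:

```python
def water_balance_et(precip_mm, outflow_mm, storage_change_mm):
    """Annual watershed ET from the water balance: ET = P - Q - dS."""
    return precip_mm - outflow_mm - storage_change_mm

# Hypothetical annual totals in mm.
et = water_balance_et(precip_mm=520.0, outflow_mm=270.0, storage_change_mm=16.0)
# et == 234.0
```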

  10. Quantitative Method of Measuring Metastatic Activity

    NASA Technical Reports Server (NTRS)

    Morrison, Dennis R. (Inventor)

    1999-01-01

    The metastatic potential of tumors can be evaluated by the quantitative detection of urokinase and DNA. The cell sample selected for examination is analyzed for the presence of high levels of urokinase and abnormal DNA using analytical flow cytometry and digital image analysis. Other factors, such as membrane-associated urokinase, increased DNA synthesis rates and certain receptors, can also be used in the method for the detection of potentially invasive tumors.

  11. Possibility of quantitative estimation of blood cell forms by the spatial-frequency spectrum analysis

    NASA Astrophysics Data System (ADS)

    Spiridonov, Igor N.; Safonova, Larisa P.; Samorodov, Andrey V.

    2000-05-01

    At present, hematology lacks quantitative estimates of two parameters important for cell classification: cell shape and nuclear shape. Because morphological parameters do not correlate with the parameters measured by hemoanalyzers, neither flow cytometers nor computer recognition systems provide a complete clinical blood analysis. Analysis of the spatial-frequency spectra (SFS) of blood samples (smears and liquid probes) permits these shapes to be estimated quantitatively. Based on theoretical and experimental research, an algorithm for quantitative shape estimation from SFS parameters has been created, and criteria for the quality of these estimates have been proposed. A test bench based on coherent optical and digital processors was built. The results could be applied to the automated classification of either normal or pathological blood cells in standard blood smears.

  12. Quantitative estimates of the surface habitability of Kepler-452b

    NASA Astrophysics Data System (ADS)

    Silva, Laura; Vladilo, Giovanni; Murante, Giuseppe; Provenzale, Antonello

    2017-09-01

    Kepler-452b is currently the best example of an Earth-size planet in the habitable zone of a sun-like star, a type of planet whose number of detections is expected to increase in the future. Searching for biosignatures in the supposedly thin atmospheres of these planets is a challenging goal that requires a careful selection of the targets. Under the assumption of a rocky-dominated nature for Kepler-452b, we considered it as a test case to calculate a temperature-dependent habitability index, h050, designed to maximize the potential presence of biosignature-producing activity. The surface temperature has been computed for a broad range of climate factors using a climate model designed for terrestrial-type exoplanets. After fixing the planetary data according to the experimental results, we changed the surface gravity, CO2 abundance, surface pressure, orbital eccentricity, rotation period, axis obliquity and ocean fraction within the range of validity of our model. For most choices of parameters, we find habitable solutions with h050 > 0.2 only for CO2 partial pressure pCO2 ≲ 0.04 bar. At this limiting value of CO2 abundance, the planet is still habitable if the total pressure is p ≲ 2 bar. In all cases, the habitability drops for eccentricity e ≳ 0.3. Changes of rotation period and obliquity affect the habitability through their impact on the equator-pole temperature difference rather than on the mean global temperature. We calculated the variation of h050 resulting from the luminosity evolution of the host star for a wide range of input parameters. Only a small combination of parameters yields habitability-weighted lifetimes ≳2 Gyr, sufficiently long to develop atmospheric biosignatures still detectable at the present time.

  13. Quantitative ultrasound estimates from populations of scatterers with continuous size distributions - Effects of the size estimator algorithm

    PubMed Central

    Lavarello, Roberto; Oelze, Michael

    2012-01-01

    Quantitative ultrasonic techniques using backscatter coefficients (BSCs) may fail to produce physically meaningful estimates of effective scatterer diameter (ESD) when the analysis media contains scatterers of different sizes. In this work, three different estimator algorithms were used to produce estimates of ESD. The performance of the three estimators was compared over different frequency bands using simulations and experiments with physical phantoms. All estimators produced ESD estimates by comparing the estimated BSCs with a scattering model based on the backscattering cross-section of a single spherical fluid scatterer. The first estimator consisted of minimizing the average square deviation of the ratio between the estimated BSCs and the scattering model with both expressed in decibels. The second and third estimators consisted of minimizing the mean square error between the estimated BSCs and a linear transformation of the scattering model with and without considering an intercept, respectively. Simulations were conducted over several analysis bandwidths between 1 and 40 MHz from populations of scatterers with either a uniform size distribution or a distribution based on the inverse cubic of the size. Diameters of the distributions ranged between [25, 100], [25, 50], [50, 100], and [50, 75] μm. Experimental results were obtained from two gelatin phantoms containing Sephadex spheres ranging in diameter from 28 to 130 μm and 70 to 130 μm, respectively, and 5, 7.5, 10, and 13 MHz focused transducers. Significant differences in the performances of the ESD estimator algorithms as a function of the analysis frequency were observed. Specifically, the third estimator exhibited potential to produce physically meaningful ESD estimates even for large ka values when using a single-size scattering model if sufficient analysis bandwidth was available. PMID:23007782

  14. Quantitative ultrasound estimates from populations of scatterers with continuous size distributions: effects of the size estimator algorithm.

    PubMed

    Lavarello, Roberto; Oelze, Michael

    2012-09-01

    Quantitative ultrasonic techniques using backscatter coefficients (BSCs) may fail to produce physically meaningful estimates of effective scatterer diameter (ESD) when the analysis media contains scatterers of different sizes. In this work, three different estimator algorithms were used to produce estimates of ESD. The performance of the three estimators was compared over different frequency bands using simulations and experiments with physical phantoms. All estimators produced ESD estimates by comparing the estimated BSCs with a scattering model based on the backscattering cross section of a single spherical fluid scatterer. The first estimator consisted of minimizing the average square deviation of the logarithmically compressed ratio between the estimated BSCs and the scattering model. The second and third estimators consisted of minimizing the mean square error between the estimated BSCs and a linear transformation of the scattering model with and without considering an intercept, respectively. Simulations were conducted over several analysis bandwidths between 1 and 40 MHz from populations of scatterers with either a uniform size distribution or a distribution based on the inverse cubic of the size. Diameters of the distributions ranged between [25, 100], [25, 50], [50, 100], and [50, 75] μm. Experimental results were obtained from two gelatin phantoms containing cross-linked dextran gel spheres ranging in diameter from 28 to 130 μm and 70 to 130 μm, respectively, and 5-, 7.5-, 10-, and 13-MHz focused transducers. Significant differences in the performances of the ESD estimator algorithms as a function of the analysis frequency were observed. Specifically, the third estimator exhibited potential to produce physically meaningful ESD estimates even for large ka values when using a single-size scattering model if sufficient analysis bandwidth was available.
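
    The third estimator described in these records — fitting a linear transformation (scale plus intercept) of a single-size scattering model to the measured BSCs and keeping the candidate diameter with the smallest residual — can be sketched as follows. The Gaussian form factor used here is a simplified stand-in for the fluid-sphere cross-section model of the papers, and all numbers are synthetic:

```python
import numpy as np

def fit_esd(freqs, bsc_meas, model, diameters):
    """For each candidate diameter, least-squares fit a*model + b to the
    measured BSC and return the diameter with minimum residual error."""
    best_d, best_err = None, np.inf
    for d in diameters:
        m = model(freqs, d)
        A = np.column_stack([m, np.ones_like(m)])
        coef, *_ = np.linalg.lstsq(A, bsc_meas, rcond=None)
        err = np.sum((A @ coef - bsc_meas) ** 2)
        if err < best_err:
            best_d, best_err = d, err
    return best_d

def gaussian_bsc(f, d):
    """Stand-in scattering model (Gaussian form factor), not the paper's
    fluid-sphere cross-section; c = 1540 m/s assumed."""
    k = 2 * np.pi * f / 1540.0
    a = d / 2
    return k**4 * np.exp(-0.827 * (k * a) ** 2)

f = np.linspace(5e6, 15e6, 50)           # analysis band, Hz
true_d = 60e-6                           # true scatterer diameter, m
meas = 3.0 * gaussian_bsc(f, true_d) + 1e-3 * gaussian_bsc(f, true_d).max()
cands = np.arange(20e-6, 101e-6, 5e-6)   # candidate diameters
est = fit_esd(f, meas, gaussian_bsc, cands)
```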

  15. "Help Wanted, Inquire Within": Estimation. Activities and Thoughts That Emphasize Dealing Sensibly with Numbers through the Processes of Estimation. (Grades 1-6). Title I Elementary Mathematics Program.

    ERIC Educational Resources Information Center

    Gronert, Joie; Marshall, Sally

    Developed for elementary teachers, this activity unit is designed to teach students the importance of estimation in developing quantitative thinking. Nine ways in which estimation is useful to students are listed, and five general guidelines are offered to the teacher for planning estimation activities. Specific guidelines are provided for…

  17. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes the stochastic property of SNR into account. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  18. Designing a Quantitative Structure-Activity Relationship for the ...

    EPA Pesticide Factsheets

    Toxicokinetic models serve a vital role in risk assessment by bridging the gap between chemical exposure and potentially toxic endpoints. While intrinsic metabolic clearance rates have a strong impact on toxicokinetics, limited data is available for environmentally relevant chemicals including nearly 8000 chemicals tested for in vitro bioactivity in the Tox21 program. To address this gap, a quantitative structure-activity relationship (QSAR) for intrinsic metabolic clearance rate was developed to offer reliable in silico predictions for a diverse array of chemicals. Models were constructed with curated in vitro assay data for both pharmaceutical-like chemicals (ChEMBL database) and environmentally relevant chemicals (ToxCast screening) from human liver microsomes (2176 from ChEMBL) and human hepatocytes (757 from ChEMBL and 332 from ToxCast). Due to variability in the experimental data, a binned approach was utilized to classify metabolic rates. Machine learning algorithms, such as random forest and k-nearest neighbor, were coupled with open source molecular descriptors and fingerprints to provide reasonable estimates of intrinsic metabolic clearance rates. Applicability domains defined the optimal chemical space for predictions, which covered environmental chemicals well. A reduced set of informative descriptors (including relative charge and lipophilicity) and a mixed training set of pharmaceuticals and environmentally relevant chemicals provided the best intr

  20. Quantitative Estimates of the Social Benefits of Learning, 1: Crime. Wider Benefits of Learning Research Report.

    ERIC Educational Resources Information Center

    Feinstein, Leon

    The cost benefits of lifelong learning in the United Kingdom were estimated, based on quantitative evidence. Between 1975-1996, 43 police force areas in England and Wales were studied to determine the effect of wages on crime. It was found that a 10 percent rise in the average pay of those on low pay reduces the overall area property crime rate by…

  1. A QUANTITATIVE APPROACH FOR ESTIMATING EXPOSURE TO PESTICIDES IN THE AGRICULTURAL HEALTH STUDY

    EPA Science Inventory

    We developed a quantitative method to estimate chemical-specific pesticide exposures in a large prospective cohort study of over 58,000 pesticide applicators in North Carolina and Iowa. An enrollment questionnaire was administered to applicators to collect basic time- and inten...

  3. On sweat analysis for quantitative estimation of dehydration during physical exercise.

    PubMed

    Ring, Matthias; Lohmueller, Clemens; Rauh, Manfred; Eskofier, Bjoern M

    2015-08-01

    Quantitative estimation of water loss during physical exercise is of importance because dehydration can impair both muscular strength and aerobic endurance. A physiological indicator for deficit of total body water (TBW) might be the concentration of electrolytes in sweat. It has been shown that concentrations differ after physical exercise depending on whether water loss was replaced by fluid intake or not. However, to the best of our knowledge, this fact has not been examined for its potential to quantitatively estimate TBW loss. Therefore, we conducted a study in which sweat samples were collected continuously during two hours of physical exercise without fluid intake. A statistical analysis of these sweat samples revealed significant correlations between chloride concentration in sweat and TBW loss (r = 0.41, p < 0.01), and between sweat osmolality and TBW loss (r = 0.43, p < 0.01). A quantitative estimation of TBW loss resulted in a mean absolute error of 0.49 L per estimation. Although the precision has to be improved for practical applications, the present results suggest that TBW loss estimation could be realizable using sweat samples.
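
    The kind of analysis reported above — correlating an electrolyte concentration with TBW loss and quantifying the estimation error — can be reproduced in structure (not in values) with an ordinary least-squares fit. The data below are synthetic placeholders, not measurements from the study:

```python
import numpy as np

# Synthetic illustrative data: sweat chloride (mmol/L) vs. TBW loss (L).
chloride = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0])
tbw_loss = np.array([0.8, 1.0, 1.3, 1.4, 1.7, 1.9])

# Ordinary least squares: tbw = a * chloride + b.
a, b = np.polyfit(chloride, tbw_loss, 1)
pred = a * chloride + b
mae = float(np.mean(np.abs(pred - tbw_loss)))     # mean absolute error (L)
r = float(np.corrcoef(chloride, tbw_loss)[0, 1])  # Pearson correlation
```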

  4. A method for estimating and removing streaking artifacts in quantitative susceptibility mapping.

    PubMed

    Li, Wei; Wang, Nian; Yu, Fang; Han, Hui; Cao, Wei; Romero, Rebecca; Tantiwongkosi, Bundhit; Duong, Timothy Q; Liu, Chunlei

    2015-03-01

    Quantitative susceptibility mapping (QSM) is a novel MRI method for quantifying tissue magnetic property. In the brain, it reflects the molecular composition and microstructure of the local tissue. However, susceptibility maps reconstructed from single-orientation data still suffer from streaking artifacts which obscure structural details and small lesions. We propose and have developed a general method for estimating streaking artifacts and subtracting them from susceptibility maps. Specifically, this method uses a sparse linear equation and least-squares (LSQR)-algorithm-based method to derive an initial estimation of magnetic susceptibility, a fast quantitative susceptibility mapping method to estimate the susceptibility boundaries, and an iterative approach to estimate the susceptibility artifact from ill-conditioned k-space regions only. With a fixed set of parameters for the initial susceptibility estimation and subsequent streaking artifact estimation and removal, the method provides an unbiased estimate of tissue susceptibility with negligible streaking artifacts, as compared to multi-orientation QSM reconstruction. This method allows for improved delineation of white matter lesions in patients with multiple sclerosis and small structures of the human brain with excellent anatomical details. The proposed methodology can be extended to other existing QSM algorithms. Copyright © 2014. Published by Elsevier Inc.
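
    The ill-conditioning that produces streaking artifacts lives near the zero cone of the dipole kernel in k-space. The sketch below uses a simple thresholded k-space division (TKD) as a stand-in for the paper's LSQR-based pipeline, only to show where the ill-conditioned region sits and how restricting the inversion there behaves; it is not the authors' algorithm, and all parameters are assumptions.

```python
import numpy as np

def dipole_kernel(shape):
    """Dipole kernel D(k) = 1/3 - kz^2 / |k|^2 (B0 along z)."""
    kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - kz**2 / k2
    D[0, 0, 0] = 0.0  # undefined at the k-space origin
    return D

def tkd_qsm(field, threshold=0.1):
    """Invert the dipole kernel only where it is well conditioned
    (|D| >= threshold); elsewhere clamp to the threshold. The streaking
    artifact originates in the clamped (ill-conditioned) region."""
    D = dipole_kernel(field.shape)
    D_safe = np.where(np.abs(D) < threshold, threshold * np.sign(D + 1e-12), D)
    chi_k = np.fft.fftn(field) / D_safe
    return np.real(np.fft.ifftn(chi_k))

# Forward-simulate a field from a point susceptibility source, then invert.
shape = (16, 16, 16)
chi = np.zeros(shape)
chi[8, 8, 8] = 1.0
field = np.real(np.fft.ifftn(dipole_kernel(shape) * np.fft.fftn(chi)))
chi_est = tkd_qsm(field, threshold=0.1)
```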

  6. Software Size Estimation Using Activity Point

    NASA Astrophysics Data System (ADS)

    Densumite, S.; Muenchaisri, P.

    2017-03-01

    Software size is widely recognized as an important parameter for effort and cost estimation. Currently there are many methods for measuring software size, including Source Lines of Code (SLOC), Function Points (FP), Netherlands Software Metrics Users Association (NESMA), Common Software Measurement International Consortium (COSMIC), and Use Case Points (UCP). SLOC is physically counted after the software is developed; the other methods compute size from functional, technical, and/or environmental aspects at an early phase of software development. In this research, an activity-point approach is proposed as another software size estimation method. Activity points are computed from the activity diagram and adjusted with technical complexity factors (TCF), environment complexity factors (ECF), and people risk factors (PRF). An evaluation of the approach is presented.
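
    The abstract gives the structure of the metric (an activity count adjusted by TCF, ECF and PRF) but not its formula. The sketch below is a hypothetical reading modelled on how Use Case Points applies its adjustment factors; every weight and constant here is an assumption for illustration, not the paper's definition.

```python
def activity_points(n_activities, tcf_factors, ecf_factors, prf_factors):
    """Hypothetical activity-point size: the unadjusted activity count scaled
    by technical (TCF), environment (ECF) and people-risk (PRF) adjustments,
    each built UCP-style from weighted factor ratings (0-5)."""
    def adjustment(base, scale, factors):
        # factors: iterable of (weight, rating) pairs.
        return base + scale * sum(w * r for w, r in factors)
    tcf = adjustment(0.6, 0.01, tcf_factors)    # constants invented
    ecf = adjustment(1.4, -0.03, ecf_factors)   # constants invented
    prf = adjustment(1.0, 0.02, prf_factors)    # constants invented
    return n_activities * tcf * ecf * prf

ap = activity_points(
    30,
    tcf_factors=[(2, 3), (1, 4)],
    ecf_factors=[(1.5, 2), (2, 1)],
    prf_factors=[(1, 2)],
)
# ap ~ 27.3 for these illustrative ratings
```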

  7. Quantitative Risk reduction estimation Tool For Control Systems, Suggested Approach and Research Needs

    SciTech Connect

    Miles McQueen; Wayne Boyer; Mark Flynn; Sam Alessi

    2006-03-01

    For the past year we have applied a variety of risk assessment technologies to evaluate the risk to critical infrastructure from cyber attacks on control systems. More recently, we identified the need for a stand-alone control system risk reduction estimation tool to provide owners and operators of control systems with a more usable, reliable, and credible method for managing the risks from cyber attack. Risk is defined as the probability of a successful attack times the value of the resulting loss, typically measured in lives and dollars. Qualitative and ad hoc techniques for measuring risk do not provide sufficient support for the cost-benefit analyses associated with cyber security mitigation actions. To address the need for better quantitative risk reduction models, we surveyed previous quantitative risk assessment research; evaluated currently available tools; developed new quantitative techniques [17] [18]; implemented a prototype analysis tool to demonstrate how such a tool might be used; used the prototype to test a variety of underlying risk calculation engines (e.g. attack tree, attack graph); and identified technical and research needs. We concluded that significant gaps still exist and difficult research problems remain for quantitatively assessing the risk to control system components and networks, but that a usable quantitative risk reduction estimation tool is not beyond reach.
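
    The risk definition above (probability of a successful attack times the resulting loss) combines naturally with the attack-tree engines the report mentions. A minimal sketch of risk-reduction estimation with invented probabilities and loss values:

```python
def or_node(probs):
    """OR node: the attack succeeds if any child path succeeds (independent)."""
    p = 1.0
    for q in probs:
        p *= 1.0 - q
    return 1.0 - p

def and_node(probs):
    """AND node: every child step must succeed."""
    p = 1.0
    for q in probs:
        p *= q
    return p

loss = 10e6  # assumed loss of a successful attack, in dollars

# Two attack paths: a two-step path (AND) and a direct path.
p_attack = or_node([and_node([0.3, 0.5]), 0.1])
risk_before = p_attack * loss

# A mitigation halves the success probability of the second step.
p_mitigated = or_node([and_node([0.3, 0.25]), 0.1])
risk_reduction = risk_before - p_mitigated * loss
# risk_reduction == 675000.0 dollars
```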

  8. Rapid Detection and Quantitative Estimation of Type A Botulinum Toxin by Electroimmunodiffusion

    PubMed Central

    Miller, Carol A.; Anderson, Arthur W.

    1971-01-01

    An experimental system is described for the detection and quantitative estimation of type A botulinum toxin by electroimmunodiffusion. The method is shown to be rapid, specific, and quantitative. As little as 14 mouse LD50 per 0.1 ml of type A toxin was detected within 2 hr. When applied to experimentally contaminated foods such as canned tuna, pumpkin, spinach, green beans, and sausage, the technique detected botulinum toxin rapidly and identified it as to type and quantity. A specific rabbit type A antitoxin was produced for this in vitro system since the equine antitoxin (Center for Disease Control) tested in this experiment was found to be unsuitable. PMID:5005291

  9. Estimating bioerosion rate on fossil corals: a quantitative approach from Oligocene reefs (NW Italy)

    NASA Astrophysics Data System (ADS)

    Silvestri, Giulia

    2010-05-01

    Bioerosion of coral reefs, especially when related to the activity of macroborers, is considered to be one of the major processes influencing framework development in present-day reefs. Macroboring communities affecting both living and dead corals are widely distributed also in the fossil record and their role is supposed to be analogously important in determining flourishing vs demise of coral bioconstructions. Nevertheless, many aspects concerning environmental factors controlling the incidence of bioerosion, shifting in composition of macroboring communities and estimation of bioerosion rate in different contexts are still poorly documented and understood. This study presents an attempt to quantify bioerosion rate on reef limestones characteristic of some Oligocene outcrops of the Tertiary Piedmont Basin (NW Italy) and deposited under terrigenous sedimentation within prodelta and delta fan systems. Branching coral rubble-dominated facies have been recognized as prevailing in this context. Depositional patterns, textures, and the generally low incidence of taphonomic features, such as fragmentation and abrasion, suggest relatively quiet waters where coral remains were deposited almost in situ. Thus taphonomic signatures occurring on corals can be reliably used to reconstruct environmental parameters affecting these particular branching coral assemblages during their life and to compare them with those typical of classical clear-water reefs. Bioerosion is sparsely distributed within coral facies and consists of a limited suite of traces, mostly referred to clionid sponges and polychaete and sipunculid worms. The incidence of boring bivalves seems to be generally lower. Together with semi-quantitative analysis of bioerosion rate along vertical logs and horizontal levels, two quantitative methods have been assessed and compared. These consist in the elaboration of high resolution scanned thin sections through software for image analysis (Photoshop CS3) and point

  10. A new TLC bioautographic assay for qualitative and quantitative estimation of lipase inhibitors.

    PubMed

    Tang, Jihe; Zhou, Jinge; Tang, Qingjiu; Wu, Tao; Cheng, Zhihong

    2016-01-01

    Lipase inhibitory assays based on TLC bioautography have made recent progress; however, an assay with greater substrate specificity and quantitative capabilities would advance the efficacy of this particular bioassay. To address these limitations, a new TLC bioautographic assay for detecting lipase inhibitors was developed and validated in this study. The new TLC bioautographic assay was based on the reaction of lipase with β-naphthyl myristate and the subsequent formation of a purple dye between β-naphthol and Fast Blue B salt (FBB). The relative lipase inhibitory capacity (RLIC) was determined by TLC densitometry with fluorescence detection, expressed as orlistat equivalents in millimoles on a per sample weight basis. Six pure compounds and three natural extracts were evaluated for their potential lipase inhibitory activities by this TLC bioautographic assay. Using β-naphthyl myristate as the substrate improved the detection sensitivity and specificity significantly. The limit of detection (LOD) of this assay was 0.01 ng for orlistat, the current treatment for obesity. This assay has acceptable accuracy (92.07-105.39%), intra-day and inter-day precision [relative standard deviation (RSD), 2.64-4.40%], as well as intra-plate and inter-plate precision (RSD, 1.8-4.9%). The developed method is rapid, simple, stable, and specific for the screening and estimation of potential lipase inhibitors. Copyright © 2015 John Wiley & Sons, Ltd.
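The precision figures above are relative standard deviations (RSD); as a quick reference, a minimal sketch of that statistic on made-up recovery values:

```python
import statistics

# Minimal sketch of the relative standard deviation (RSD) statistic used
# in the precision figures above; the recovery values (%) are made up.

def rsd_percent(values):
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

print(round(rsd_percent([98.0, 102.0, 100.0, 101.0, 99.0]), 2))  # 1.58
```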

  11. Data depth, data completeness, and their influence on quantitative genetic estimation in two contrasting bird populations.

    PubMed

    Quinn, J L; Charmantier, A; Garant, D; Sheldon, B C

    2006-05-01

    Evolutionary biologists increasingly use pedigree-based quantitative genetic methods to address questions about the evolutionary dynamics of traits in wild populations. In many cases, phenotypic data may have been collected only for recent parts of the study. How does this influence the performance of the models used to analyse these data? Here we explore how data depth (number of years) and completeness (number of observations) influence estimates of genetic variance and covariance within the context of an existing pedigree. Using long-term data from the great tit Parus major and the mute swan Cygnus olor, species with different life-histories, we examined the effect of manipulating the amount of data included on quantitative genetic parameter estimates. Manipulating data depth and completeness had little influence on estimated genetic variances, heritabilities, or genetic correlations, but (as expected) did influence confidence in these estimates. Estimated breeding values in the great tit were not influenced by data depth but were in the mute swan, probably because of differences in pedigree structure. Our analyses suggest the 'rule of thumb' that data from 3 years and a minimum of 100 individuals per year are needed to estimate genetic parameters with acceptable confidence, and that using pedigree data is worthwhile, even if phenotypes are only available toward the tips of the pedigree.

  12. Quantitative structure-activity relationships for fluoroelastomer/chlorofluorocarbon systems

    SciTech Connect

    Paciorek, K.J.L.; Masuda, S.R.; Nakahara, J.H.; Snyder, C.E. Jr.; Warner, W.M.

    1991-12-01

    This paper reports on swell, tensile, and modulus data that were determined for a fluoroelastomer after exposure to a series of chlorofluorocarbon model fluids. Quantitative structure-activity relationships (QSAR) were developed for the swell as a function of the number of carbons and chlorines and for tensile strength as a function of carbon number and chlorine positions in the chlorofluorocarbons.

  13. Estimating quantitative genetic parameters in wild populations: a comparison of pedigree and genomic approaches

    PubMed Central

    Bérénos, Camillo; Ellis, Philip A; Pilkington, Jill G; Pemberton, Josephine M

    2014-01-01

    The estimation of quantitative genetic parameters in wild populations is generally limited by the accuracy and completeness of the available pedigree information. Using relatedness at genomewide markers can potentially remove this limitation and lead to less biased and more precise estimates. We estimated heritability, maternal genetic effects and genetic correlations for body size traits in an unmanaged long-term study population of Soay sheep on St Kilda using three increasingly complete and accurate estimates of relatedness: (i) Pedigree 1, using observation-derived maternal links and microsatellite-derived paternal links; (ii) Pedigree 2, using SNP-derived assignment of both maternity and paternity; and (iii) whole-genome relatedness at 37 037 autosomal SNPs. In initial analyses, heritability estimates were strikingly similar for all three methods, while standard errors were systematically lower in analyses based on Pedigree 2 and genomic relatedness. Genetic correlations were generally strong, differed little between the three estimates of relatedness and the standard errors declined only very slightly with improved relatedness information. When partitioning maternal effects into separate genetic and environmental components, maternal genetic effects found in juvenile traits increased substantially across the three relatedness estimates. Heritability declined compared to parallel models where only a maternal environment effect was fitted, suggesting that maternal genetic effects are confounded with direct genetic effects and that more accurate estimates of relatedness were better able to separate maternal genetic effects from direct genetic effects. We found that the heritability captured by SNP markers asymptoted at about half the SNPs available, suggesting that denser marker panels are not necessarily required for precise and unbiased heritability estimates. Finally, we present guidelines for the use of genomic relatedness in future quantitative genetics
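A hedged sketch of the kind of genomic relatedness matrix (GRM) used above in place of pedigree links. This uses the common VanRaden-style estimator, which is an assumption here; the study's exact estimator is not specified in the abstract.

```python
import numpy as np

# Hedged sketch of a genomic relatedness matrix (GRM) built from SNP
# allele counts, in the common VanRaden style (an assumption, not
# necessarily the study's exact estimator).

def grm(genotypes):
    """genotypes: (n_individuals, n_snps) array of 0/1/2 allele counts."""
    p = genotypes.mean(axis=0) / 2.0        # per-SNP allele frequency
    z = genotypes - 2.0 * p                 # centre each SNP column
    denom = 2.0 * np.sum(p * (1.0 - p))     # standard scaling factor
    return z @ z.T / denom

rng = np.random.default_rng(0)
g = rng.integers(0, 3, size=(5, 200))       # 5 individuals, 200 SNPs
k = grm(g)
print(k.shape)  # (5, 5); symmetric relatedness matrix
```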

  14. Estimating quantitative genetic parameters in wild populations: a comparison of pedigree and genomic approaches.

    PubMed

    Bérénos, Camillo; Ellis, Philip A; Pilkington, Jill G; Pemberton, Josephine M

    2014-07-01

    The estimation of quantitative genetic parameters in wild populations is generally limited by the accuracy and completeness of the available pedigree information. Using relatedness at genomewide markers can potentially remove this limitation and lead to less biased and more precise estimates. We estimated heritability, maternal genetic effects and genetic correlations for body size traits in an unmanaged long-term study population of Soay sheep on St Kilda using three increasingly complete and accurate estimates of relatedness: (i) Pedigree 1, using observation-derived maternal links and microsatellite-derived paternal links; (ii) Pedigree 2, using SNP-derived assignment of both maternity and paternity; and (iii) whole-genome relatedness at 37 037 autosomal SNPs. In initial analyses, heritability estimates were strikingly similar for all three methods, while standard errors were systematically lower in analyses based on Pedigree 2 and genomic relatedness. Genetic correlations were generally strong, differed little between the three estimates of relatedness and the standard errors declined only very slightly with improved relatedness information. When partitioning maternal effects into separate genetic and environmental components, maternal genetic effects found in juvenile traits increased substantially across the three relatedness estimates. Heritability declined compared to parallel models where only a maternal environment effect was fitted, suggesting that maternal genetic effects are confounded with direct genetic effects and that more accurate estimates of relatedness were better able to separate maternal genetic effects from direct genetic effects. We found that the heritability captured by SNP markers asymptoted at about half the SNPs available, suggesting that denser marker panels are not necessarily required for precise and unbiased heritability estimates. Finally, we present guidelines for the use of genomic relatedness in future quantitative genetics

  15. Estimating the effect of SNP genotype on quantitative traits from pooled DNA samples

    PubMed Central

    2012-01-01

    Background Studies to detect associations between DNA markers and traits of interest in humans and livestock benefit from increasing the number of individuals genotyped. Performing association studies on pooled DNA samples can provide greater power for a given cost. For quantitative traits, the effect of an SNP is measured in the units of the trait and here we propose and demonstrate a method to estimate SNP effects on quantitative traits from pooled DNA data. Methods To obtain estimates of SNP effects from pooled DNA samples, we used logistic regression of estimated allele frequencies in pools on phenotype. The method was tested on a simulated dataset, and a beef cattle dataset using a model that included principal components from a genomic correlation matrix derived from the allele frequencies estimated from the pooled samples. The performance of the obtained estimates was evaluated by comparison with estimates obtained using regression of phenotype on genotype from individual samples of DNA. Results For the simulated data, the estimates of SNP effects from pooled DNA are similar but asymptotically different to those from individual DNA data. Error in estimating allele frequencies had a large effect on the accuracy of estimated SNP effects. For the beef cattle dataset, the principal components of the genomic correlation matrix from pooled DNA were consistent with known breed groups, and could be used to account for population stratification. Correctly modeling the contemporary group structure was essential to achieve estimates similar to those from individual DNA data, and pooling DNA from individuals within groups was superior to pooling DNA across groups. For a fixed number of assays, pooled DNA samples produced results that were more correlated with results from individual genotyping data than were results from one random individual assayed from each pool. 
Conclusions Use of logistic regression of allele frequency on phenotype makes it possible to estimate SNP
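The core idea above, regressing pool allele frequencies on phenotype through a logistic link to recover a SNP effect, can be sketched on simulated data. The simulation and the simple least-squares fit on the logit scale are illustrative assumptions, not the paper's exact estimation procedure.

```python
import numpy as np

# Illustrative simulation: model the pool allele frequency as a logistic
# function of phenotype and recover the SNP effect from the slope on the
# logit scale. This simple fit is an assumption, not the paper's method.

rng = np.random.default_rng(1)
true_beta = 0.4
phenotype = rng.normal(size=200)

# simulate noisy pool allele-frequency estimates under a logistic model
logit = -0.2 + true_beta * phenotype
freq = 1.0 / (1.0 + np.exp(-logit)) + rng.normal(0.0, 0.01, size=200)
freq = np.clip(freq, 1e-3, 1.0 - 1e-3)

# regress logit(frequency) on phenotype to recover the effect
y = np.log(freq / (1.0 - freq))
X = np.column_stack([np.ones_like(phenotype), phenotype])
intercept, beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(round(beta_hat, 2))  # close to the simulated effect of 0.4
```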

  16. Optimal Quantitative Estimates in Stochastic Homogenization for Elliptic Equations in Nondivergence Form

    NASA Astrophysics Data System (ADS)

    Armstrong, Scott; Lin, Jessica

    2017-08-01

    We prove quantitative estimates for the stochastic homogenization of linear uniformly elliptic equations in nondivergence form. Under strong independence assumptions on the coefficients, we obtain optimal estimates on the subquadratic growth of the correctors with stretched exponential-type bounds in probability. Like the theory of Gloria and Otto (Ann Probab 39(3):779-856, 2011; Ann Appl Probab 22(1):1-28, 2012) for divergence form equations, the arguments rely on nonlinear concentration inequalities combined with certain estimates on the Green's functions and derivative bounds on the correctors. We obtain these analytic estimates by developing a C^{1,1} regularity theory down to the microscopic scale, which is of independent interest and is inspired by the C^{0,1} theory introduced in the divergence form case by the first author and Smart (Ann Sci Éc Norm Supér (4) 49(2):423-481, 2016).

  17. Quantitative Estimation of Trace Chemicals in Industrial Effluents with the Sticklet Transform Method

    SciTech Connect

    Mehta, N C; Scharlemann, E T; Stevens, C G

    2001-04-02

    Application of a novel transform operator, the sticklet transform, to the quantitative estimation of trace chemicals in industrial effluent plumes is reported. The sticklet transform is a superset of the well-known derivative operator and the Haar wavelet, and is characterized by independently adjustable lobe width and separation. Computer simulations demonstrate that the transform can make accurate and robust concentration estimates of multiple chemical species in industrial effluent plumes in the presence of strong clutter background, interferent chemicals and random noise. In this paper the authors address the application of the sticklet transform in estimating chemical concentrations in effluent plumes in the presence of atmospheric transmission effects. They show that this transform retains the ability to yield accurate estimates using on-plume/off-plume measurements that represent atmospheric differentials up to 10% of the full atmospheric attenuation.
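A hedged sketch of a sticklet-style kernel as described above: two opposite-signed lobes whose width and separation are set independently, reducing to the two-point derivative stencil or a Haar-like step detector in special cases. The unit-area normalization of each lobe is an assumption.

```python
import numpy as np

# Hedged sketch of a "sticklet"-style kernel: two opposite-signed lobes
# with independently adjustable width and separation. The per-lobe
# normalization here is an assumption, not the paper's exact definition.

def sticklet(lobe_width, separation):
    """Positive lobe, `separation` zeros, then a negative lobe."""
    lobe = np.ones(lobe_width) / lobe_width
    gap = np.zeros(separation)
    return np.concatenate([lobe, gap, -lobe])

# width 1, separation 0 recovers the two-point derivative stencil;
# wider lobes with no gap give a Haar-like step detector
print(sticklet(1, 0))  # [ 1. -1.]
print(sticklet(2, 0))  # [ 0.5  0.5 -0.5 -0.5]
```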

  18. Poisson Parameters of Antimicrobial Activity: A Quantitative Structure-Activity Approach

    PubMed Central

    Sestraş, Radu E.; Jäntschi, Lorentz; Bolboacă, Sorana D.

    2012-01-01

    A contingency of observed antimicrobial activities measured for several compounds vs. a series of bacteria was analyzed. A factor analysis revealed the existence of a certain probability distribution function of the antimicrobial activity. A quantitative structure-activity relationship analysis for the overall antimicrobial ability was conducted using the population statistics associated with identified probability distribution function. The antimicrobial activity proved to follow the Poisson distribution if just one factor varies (such as chemical compound or bacteria). The Poisson parameter estimating antimicrobial effect, giving both mean and variance of the antimicrobial activity, was used to develop structure-activity models describing the effect of compounds on bacteria and fungi species. Two approaches were employed to obtain the models, and for every approach, a model was selected, further investigated and found to be statistically significant. The best predictive model for antimicrobial effect on bacteria and fungi species was identified using graphical representation of observed vs. calculated values as well as several predictive power parameters. PMID:22606039
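A small numerical illustration of the distributional claim above: for a Poisson-distributed activity measure, the single parameter lambda gives both the mean and the variance, and its maximum-likelihood estimate is simply the sample mean.

```python
import numpy as np

# Numerical illustration of the Poisson property used above: one
# parameter lambda is both the mean and the variance, and the MLE of
# lambda is the sample mean. Data are simulated, not from the study.

rng = np.random.default_rng(2)
counts = rng.poisson(lam=3.0, size=10_000)

lam_hat = counts.mean()  # maximum-likelihood estimate of lambda
print(round(lam_hat, 1), round(float(counts.var()), 1))  # both near 3.0
```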

  19. Stroke onset time estimation from multispectral quantitative magnetic resonance imaging in a rat model of focal permanent cerebral ischemia.

    PubMed

    McGarry, Bryony L; Rogers, Harriet J; Knight, Michael J; Jokivarsi, Kimmo T; Sierra, Alejandra; Gröhn, Olli Hj; Kauppinen, Risto A

    2016-08-01

    Quantitative T2 relaxation magnetic resonance imaging allows estimation of stroke onset time. We aimed to examine the accuracy of quantitative T1 and quantitative T2 relaxation times alone and in combination to provide estimates of stroke onset time in a rat model of permanent focal cerebral ischemia, and to map the spatial distribution of elevated quantitative T1 and quantitative T2 to assess tissue status. Permanent middle cerebral artery occlusion was induced in Wistar rats. Animals were scanned at 9.4T for quantitative T1, quantitative T2, and Trace of Diffusion Tensor (Dav) up to 4 h post-middle cerebral artery occlusion. Time courses of differentials of quantitative T1 and quantitative T2 in ischemic and non-ischemic contralateral brain tissue (ΔT1, ΔT2) and volumes of tissue with elevated T1 and T2 relaxation times (f1, f2) were determined. TTC staining was used to highlight permanent ischemic damage. ΔT1, ΔT2, f1, f2, and the volume of tissue with both elevated quantitative T1 and quantitative T2 (V(Overlap)) increased with time post-middle cerebral artery occlusion, allowing stroke onset time to be estimated. V(Overlap) provided the most accurate estimate, with an uncertainty of ±25 min. At all time points, regions with elevated relaxation times were smaller than areas with Dav-defined ischemia. Stroke onset time can be determined from quantitative T1 and quantitative T2 relaxation times and tissue volumes. Combining quantitative T1 and quantitative T2 provides the most accurate estimate and potentially identifies irreversibly damaged brain tissue. © 2016 World Stroke Organization.

  20. Quantitative observations of cavitation activity in a viscoelastic medium.

    PubMed

    Collin, Jamie R T; Coussios, Constantin C

    2011-11-01

    Quantitative experimental observations of single-bubble cavitation in viscoelastic media that would enable validation of existing models are presently lacking. In the present work, single bubble cavitation is induced in an agar gel using a 1.15 MHz high intensity focused ultrasound transducer, and observed using a focused single-element passive cavitation detection (PCD) transducer. To enable quantitative observations, a full receive calibration is carried out of a spherically focused PCD system by a bistatic scattering substitution technique that uses an embedded spherical scatterer and a hydrophone. Adjusting the simulated pressure received by the PCD by the transfer function on receive and the frequency-dependent attenuation of agar gel enables direct comparison of the measured acoustic emissions with those predicted by numerical modeling of single-bubble cavitation using a modified Keller-Miksis approach that accounts for viscoelasticity of the surrounding medium. At an incident peak rarefactional pressure near the cavitation threshold, period multiplying is observed in both experiment and numerical model. By comparing the two sets of results, an estimate of the equilibrium bubble radius in the experimental observations can be made, with potential for extension to material parameter estimation. Use of these estimates yields good agreement between model and experiment.

  1. Estimating activity energy expenditure: how valid are physical activity questionnaires?

    PubMed

    Neilson, Heather K; Robson, Paula J; Friedenreich, Christine M; Csizmadi, Ilona

    2008-02-01

    Activity energy expenditure (AEE) is the modifiable component of total energy expenditure (TEE) derived from all activities, both volitional and nonvolitional. Because AEE may affect health, there is interest in its estimation in free-living people. Physical activity questionnaires (PAQs) could be a feasible approach to AEE estimation in large populations, but it is unclear whether or not any PAQ is valid for this purpose. Our aim was to explore the validity of existing PAQs for estimating usual AEE in adults, using doubly labeled water (DLW) as a criterion measure. We reviewed 20 publications that described PAQ-to-DLW comparisons, summarized study design factors, and appraised criterion validity using mean differences (AEE(PAQ) - AEE(DLW), or TEE(PAQ) - TEE(DLW)), 95% limits of agreement, and correlation coefficients (AEE(PAQ) versus AEE(DLW) or TEE(PAQ) versus TEE(DLW)). Only 2 of 23 PAQs assessed most types of activity over the past year and indicated acceptable criterion validity, with mean differences (TEE(PAQ) - TEE(DLW)) of 10% and 2% and correlation coefficients of 0.62 and 0.63, respectively. At the group level, neither overreporting nor underreporting was more prevalent across studies. We speculate that, aside from reporting error, discrepancies between PAQ and DLW estimates may be partly attributable to 1) PAQs not including key activities related to AEE, 2) PAQs and DLW ascertaining different time periods, or 3) inaccurate assignment of metabolic equivalents to self-reported activities. Small sample sizes, use of correlation coefficients, and limited information on individual validity were problematic. Future research should address these issues to clarify the true validity of PAQs for estimating AEE.
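The agreement statistics used in the review (mean difference and 95% limits of agreement) can be sketched as a Bland-Altman-style computation on simulated PAQ and DLW estimates; all values below are invented for illustration.

```python
import numpy as np

# Sketch of the agreement statistics used above: mean difference and 95%
# limits of agreement (Bland-Altman) between questionnaire (PAQ) and
# doubly labeled water (DLW) estimates. All values are simulated.

rng = np.random.default_rng(3)
aee_dlw = rng.normal(900.0, 150.0, size=50)            # kcal/day
aee_paq = aee_dlw + rng.normal(50.0, 100.0, size=50)   # biased, noisy PAQ

diff = aee_paq - aee_dlw
mean_diff = diff.mean()
sd = diff.std(ddof=1)
limits = (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)
print(f"bias={mean_diff:.0f} kcal/day, LoA=({limits[0]:.0f}, {limits[1]:.0f})")
```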

  2. Quantitative estimation of the parameters for self-motion driven by difference in surface tension.

    PubMed

    Suematsu, Nobuhiko J; Sasaki, Tomohiro; Nakata, Satoshi; Kitahata, Hiroyuki

    2014-07-15

    Quantitative information on the parameters associated with self-propelled objects would enhance the potential of this research field, for example, by suggesting realistic ways to develop functional self-propelled objects and by deepening quantitative understanding of the mechanism of self-motion. We therefore estimated five main parameters, including the driving force, of a camphor boat, a simple self-propelled object that spontaneously moves on water due to a difference in surface tension. The experimental results and a mathematical model indicated that the camphor boat generated a driving force of 4.2 μN, which corresponds to a difference in surface tension of 1.1 mN m(-1). The methods used in this study are not restricted to evaluating the parameters of self-motion of a camphor boat, but can be applied to other self-propelled objects driven by a difference in surface tension. Thus, our investigation provides a novel method to quantitatively estimate the parameters of self-propelled objects driven by an interfacial tension difference.
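As a consistency check on the quoted numbers, a driving force F produced by a surface-tension difference acting along an effective contact length L satisfies F = Δγ · L; the contact length computed below is inferred from the abstract's figures, not stated in it.

```python
# Consistency check on the figures quoted above: a driving force F from
# a surface-tension difference d_gamma acting along an effective contact
# length L satisfies F = d_gamma * L. The length is inferred here, not
# reported in the abstract.

force = 4.2e-6      # N, reported driving force
d_gamma = 1.1e-3    # N/m, reported surface-tension difference
length = force / d_gamma
print(round(length * 1000, 1), "mm")  # implied contact length, ~3.8 mm
```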

  3. Estimation of undiscovered deposits in quantitative mineral resource assessments-examples from Venezuela and Puerto Rico

    USGS Publications Warehouse

    Cox, D.P.

    1993-01-01

    Quantitative mineral resource assessments used by the United States Geological Survey are based on deposit models. These assessments consist of three parts: (1) selecting appropriate deposit models and delineating on maps areas permissive for each type of deposit; (2) constructing a grade-tonnage model for each deposit model; and (3) estimating the number of undiscovered deposits of each type. In this article, I focus on the estimation of undiscovered deposits using two methods: the deposit density method and the target counting method. In the deposit density method, estimates are made by analogy with well-explored areas that are geologically similar to the study area and that contain a known density of deposits per unit area. The deposit density method is useful for regions where there is little or no data. This method was used to estimate undiscovered low-sulfide gold-quartz vein deposits in Venezuela. Estimates can also be made by counting targets such as mineral occurrences, geophysical or geochemical anomalies, or exploration "plays" and by assigning to each target a probability that it represents an undiscovered deposit that is a member of the grade-tonnage distribution. This method is useful in areas where detailed geological, geophysical, geochemical, and mineral occurrence data exist. Using this method, porphyry copper-gold deposits were estimated in Puerto Rico. © 1993 Oxford University Press.
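The deposit density method described above reduces to a simple proportion: expected undiscovered deposits equal the deposit density of a well-explored, geologically similar analogue region times the permissive area of the study region. The numbers below are illustrative assumptions.

```python
# Minimal sketch of the deposit density method: scale the deposit
# density of a well-explored, geologically similar analogue region by
# the permissive area of the study region. Numbers are illustrative.

def expected_deposits(analog_deposits, analog_area_km2, study_area_km2):
    density = analog_deposits / analog_area_km2   # deposits per km^2
    return density * study_area_km2

n = expected_deposits(analog_deposits=12,
                      analog_area_km2=30_000,
                      study_area_km2=5_000)
print(n)  # about 2 undiscovered deposits expected
```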

  4. Quantitative Structure-Antifungal Activity Relationships for cinnamate derivatives.

    PubMed

    Saavedra, Laura M; Ruiz, Diego; Romanelli, Gustavo P; Duchowicz, Pablo R

    2015-12-01

    Quantitative Structure-Activity Relationships (QSAR) are established with the aim of analyzing the fungicidal activities of a set of 27 active cinnamate derivatives. The exploration of more than a thousand constitutional, topological, geometrical and electronic molecular descriptors, calculated with Dragon software, leads to predictions of the growth inhibition of the fungal species Pythium sp. and Corticium rolfsii, in close agreement with the experimental values extracted from the literature. A set containing 21 new structurally related cinnamate compounds is prepared. The developed QSAR models are applied to predict the unknown fungicidal activity of this set, showing that cinnamates like 38, 28 and 42 are expected to be highly active against Pythium sp., while this is also predicted for 28 and 34 against C. rolfsii.

  5. Improvement and quantitative performance estimation of the back support muscle suit.

    PubMed

    Muramatsu, Y; Umehara, H; Kobayashi, H

    2013-01-01

    We have been developing a wearable muscle suit for direct and physical motion support. The use of the McKibben artificial muscle has opened the way to the introduction of "muscle suits": compact, lightweight, reliable, wearable "assist-bots" enabling manual workers to lift and carry weights. Since back pain is the most serious problem for manual workers, this paper presents improvements to the back support muscle suit made during a feasibility study, together with a quantitative performance estimation. The improvements cover the structure of the upper-body frame, the method of attachment to the body, and the addition of axes. In the experiments, we investigated the quantitative performance and efficiency of the back support muscle suit in terms of vertical lifting of heavy weights by employing integral electromyography (IEMG). The results indicated that the values of IEMG were reduced by about 40% by using the muscle suit.

  6. The new approach of polarimetric attenuation correction for improving radar quantitative precipitation estimation(QPE)

    NASA Astrophysics Data System (ADS)

    Gu, Ji-Young; Suk, Mi-Kyung; Nam, Kyung-Yeub; Ko, Jeong-Seok; Ryzhkov, Alexander

    2016-04-01

    To obtain high-quality radar quantitative precipitation estimation data, reliable radar calibration and efficient attenuation correction are very important. Because microwave radiation at shorter wavelengths experiences strong attenuation in precipitation, accounting for this attenuation is essential for shorter-wavelength radars. In this study, the performance of different attenuation/differential attenuation correction schemes at C band is tested for two strong rain events which occurred in central Oklahoma. A new attenuation correction scheme (a combination of self-consistency and hot-spot methodologies) that separates the relative contributions of strong convective cells and the rest of the storm to the path-integrated total and differential attenuation is also among the algorithms explored. Quantitative use of weather radar measurements, such as rainfall estimation, relies on reliable attenuation correction. We examined the impact of attenuation correction on estimates of rainfall in heavy rain events by cross-checking with S-band radar measurements, which are much less affected by attenuation, and compared the storm rain totals obtained from the corrected Z and KDP with rain gauges in these cases. This new approach can be applied efficiently at shorter-wavelength radars, and is therefore very useful to the Weather Radar Center of the Korea Meteorological Administration, which is preparing an X-band research dual-polarization radar network.
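For context on how corrected radar moments become the rain-rate estimates compared against gauges above, here is a minimal sketch using common textbook relations (a Marshall-Palmer-style Z-R law and a typical power-law R(KDP) form); the coefficients are assumptions, not values from this study.

```python
# Sketch of turning radar measurements into rain rates, as compared with
# gauges above. Coefficients are common textbook values (Marshall-Palmer
# Z-R; a typical R(KDP) power law), assumed rather than from this study.

def rain_from_z(z_dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b, with Z converted from dBZ to mm^6 m^-3."""
    z_linear = 10.0 ** (z_dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)

def rain_from_kdp(kdp_deg_per_km, c=29.0, d=0.85):
    """Power-law rain rate from specific differential phase KDP."""
    return c * kdp_deg_per_km ** d

print(round(rain_from_z(40.0), 1), "mm/h at 40 dBZ")  # ~11.5 mm/h
```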

  7. Direct Estimation of Optical Parameters From Photoacoustic Time Series in Quantitative Photoacoustic Tomography.

    PubMed

    Pulkkinen, Aki; Cox, Ben T; Arridge, Simon R; Goh, Hwan; Kaipio, Jari P; Tarvainen, Tanja

    2016-11-01

    Estimation of the optical absorption and scattering of a target is an inverse problem associated with quantitative photoacoustic tomography. Conventionally, the problem is treated in two stages. First, images of the initial pressure distribution created by absorption of a light pulse are formed based on acoustic boundary measurements. Then, the optical properties are determined based on these photoacoustic images. The optical stage of the inverse problem can thus suffer from, for example, artefacts caused by the acoustic stage. These could be caused by imperfections in the acoustic measurement setting, an example of which is a limited-view acoustic measurement geometry. In this work, the forward model of quantitative photoacoustic tomography is treated as a coupled acoustic and optical model, and the inverse problem is solved using a Bayesian approach. The spatial distributions of the optical properties of the imaged target are estimated directly from the photoacoustic time series in varying acoustic detection and optical illumination configurations. It is numerically demonstrated that estimation of the optical properties of the imaged target is feasible in a limited-view acoustic detection setting.

  8. Toward quantitative estimation of material properties with dynamic mode atomic force microscopy: a comparative study

    NASA Astrophysics Data System (ADS)

    Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti

    2017-08-01

    In this article, we explore methods that enable estimation of material properties with the dynamic mode atomic force microscopy suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising of a flexure probe interacting with the sample, as an equivalent cantilever system and compares a steady-state analysis based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement, however, slower compared to the recursive technique. The parameters of the equivalent system are utilized to interpret storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided toward the quantitative estimation of material properties.

  9. Multipoint linkage mapping using sibpairs: non-parametric estimation of trait effects with quantitative covariates.

    PubMed

    Chiou, Jeng-Min; Liang, Kung-Yee; Chiu, Yen-Feng

    2005-01-01

    Multipoint linkage analysis using sibpair designs remains a common approach to help investigators narrow chromosomal regions for traits (either qualitative or quantitative) of interest. Despite its popularity, the success of this approach depends heavily on how issues such as genetic heterogeneity, gene-gene, and gene-environment interactions are handled. If addressed properly, the likelihood of detecting genetic linkage and of efficiently estimating the location of the trait locus would be enhanced, sometimes drastically. Previously, we have proposed an approach to deal with these issues by modeling the genetic effect of the target trait locus as a function of covariates pertaining to the sibpairs. Here the genetic effect is simply the probability that a sibpair shares the same allele at the trait locus from their parents. Such modeling helps to divide the sibpairs into more homogeneous subgroups, which in turn helps to enhance the chance of detecting linkage. One limitation of this approach is the need to categorize the covariates so that a small and fixed number of genetic effect parameters are introduced. In this report, we take advantage of the fact that nowadays multiple markers are readily available for genotyping simultaneously. This suggests that one could estimate the dependence of the genetic effect on the covariates nonparametrically. We present an iterative procedure to estimate (1) the genetic effect nonparametrically and (2) the location of the trait locus through estimating functions developed by Liang et al. ([2001a] Hum Hered 51:67-76). We apply this new method to a linkage study of schizophrenia to illustrate how the onset ages of each sibpair may help to address the issue of genetic heterogeneity. This analysis sheds new light on the dependence of the trait effect on the onset ages of affected sibpairs, an observation not revealed previously.
In addition, we have carried out some simulation work, which suggests that this method provides

  10. Dansyl glutathione as a trapping agent for the quantitative estimation and identification of reactive metabolites.

    PubMed

    Gan, Jinping; Harper, Timothy W; Hsueh, Mei-Mann; Qu, Qinling; Humphreys, W Griffith

    2005-05-01

    A sensitive and quantitative method was developed for the estimation of reactive metabolite formation in vitro. The method utilizes reduced glutathione (GSH) labeled with a fluorescence tag as a trapping agent and fluorescent detection for quantitation. The derivatization of GSH was accomplished by reaction of oxidized glutathione (GSSG) with dansyl chloride to form dansylated GSSG. Subsequent reduction of the disulfide bond yielded dansylated GSH (dGSH). Test compounds were incubated with human liver microsomes in the presence of dGSH and NADPH, and the resulting mixtures were analyzed by HPLC coupled with a fluorescence detector and a mass spectrometer for the quantitation and mass determination of the resulting dGSH adducts. The comparative chemical reactivity of dGSH vs GSH was investigated by monitoring the reaction of each with 1-chloro-2,4-dinitrobenzene or R-(+)-pulegone after bioactivation. dGSH was found to be equivalent to GSH in chemical reactivity toward both thiol reactive molecules. dGSH did not serve as a cofactor for glutathione S-transferase (GST)-mediated conjugation of 3,4-dichloronitrobenzene in incubations with either human liver S9 fractions or a recombinant GST, GSTM1-1. Reference compounds were tested in this assay, including seven compounds that have been reported to form GSH adducts along with seven drugs that are among the most prescribed in the current U.S. market and have not been reported to form GSH adducts. dGSH adducts were detected and quantitated in incubations with all seven positive reference compounds; however, there were no dGSH adducts observed with any of the widely prescribed drugs. In comparison with existing methods, this method is sensitive, quantitative, cost effective, and easy to implement.

  11. [Quantitative estimation of the dynamics of adventive flora (by the example of the Tula region)].

    PubMed

    Khorun, L V; Zakharov, V G; Sokolov, D D

    2006-01-01

    The rate of enrichment of the Tula region flora with adventive species was quantitatively estimated, taking into account the changes in their degree of naturalization during the last 200 years. A numerical score of the degree of naturalization of each species was used to compile the initial database: "0", species absent from the territory; "1", ephemerophyte; "2", colonophyte; "3", epecophyte; "4", agriophyte; "?", lack of data. The non-interpolated integral index of the dynamics of the adventive flora, NI(t), was calculated from this database. This index gives the sum of the degrees of naturalization of all the adventive species in the flora in a given year. The interpolation of the initial database, aimed at minimizing the influence of random factors (e.g., gaps in observations or different activity of the researchers in different years), was performed by substituting the "?" symbol with a series of intermediate values based on studies of data for adjacent territories. Interpolated integral indices I(t) were calculated from the interpolated database. These indices were then smoothed with Morlet wavelets, in order to separate random spikes (lasting less than 50 years) from the analyzed signal, and thus approximate the index dynamics to the objective trend that represents the dynamics of the flora rather than the activity of the researchers. The dynamics of the adventive flora of the Tula region revealed with this method show the following: 1) the average rate of enrichment of the adventive flora with new species has been constant over these 200 years, amounting to 15 species per decade; 2) the average rate of naturalization was relatively low and constant, amounting to 5 species per decade; 3) fluctuations in the composition and naturalization degree of the Tula region adventive flora were not shown to depend directly on changes in the territory's economic development during the last two centuries; 4) no periodicity was
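    The non-interpolated integral index described above is simply a per-year sum of naturalization scores over all adventive species. A minimal sketch, with hypothetical species names and scores (the real database spans 200 years of records):

```python
# Sketch of the non-interpolated integral index NI(t): for each year, sum
# the naturalization scores (0-4) of all adventive species; "?" entries
# (no data) are represented as None and skipped. Species and scores are
# hypothetical illustrations, not the Tula region database.

def integral_index(scores_by_year):
    """scores_by_year: dict year -> dict species -> score (0-4, or None for '?')."""
    return {
        year: sum(s for s in species.values() if s is not None)
        for year, species in scores_by_year.items()
    }

data = {
    1900: {"sp_A": 1, "sp_B": 0, "sp_C": None},  # '?': no data for sp_C
    1950: {"sp_A": 2, "sp_B": 1, "sp_C": 1},
    2000: {"sp_A": 3, "sp_B": 2, "sp_C": 2},
}

ni = integral_index(data)
print(ni)  # {1900: 1, 1950: 4, 2000: 7}
```

    Interpolation of the "?" entries and wavelet smoothing would then operate on the resulting NI(t) series.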

  12. Quantitative estimation of hemorrhage in chronic subdural hematoma using the 51Cr erythrocyte labeling method

    SciTech Connect

    Ito, H.; Yamamoto, S.; Saito, K.; Ikeda, K.; Hisada, K.

    1987-06-01

    Red cell survival studies using an infusion of chromium-51-labeled erythrocytes were performed to quantitatively estimate hemorrhage in the chronic subdural hematoma cavity of 50 patients. The amount of hemorrhage was determined during craniotomy. Between 6 and 24 hours after infusion of the labeled red cells, hemorrhage accounted for a mean of 6.7% of the hematoma content, indicating continuous or intermittent hemorrhage into the cavity. The clinical state of the patients and the density of the chronic subdural hematoma on computerized tomography scans were related to the amount of hemorrhage. Chronic subdural hematomas with a greater amount of hemorrhage frequently consisted of clots rather than fluid.

  13. Novel Sessile Drop Software for Quantitative Estimation of Slag Foaming in Carbon/Slag Interactions

    NASA Astrophysics Data System (ADS)

    Khanna, Rita; Rahman, Mahfuzur; Leow, Richard; Sahajwalla, Veena

    2007-08-01

    Novel video-processing software has been developed for the sessile drop technique for rapid and quantitative estimation of slag foaming. The data processing was carried out in two stages: the first stage involved the initial transformation of digital video/audio signals into a format compatible with the computing software, and the second stage involved the computation of slag droplet volume and area of contact in a chosen video frame. Experimental results are presented on slag foaming in a synthetic graphite/slag system at 1550 °C. This technique can be used to determine the extent and stability of foam as a function of time.

  14. High throughput, quantitative analysis of human osteoclast differentiation and activity.

    PubMed

    Diepenhorst, Natalie A; Nowell, Cameron J; Rueda, Patricia; Henriksen, Kim; Pierce, Tracie; Cook, Anna E; Pastoureau, Philippe; Sabatini, Massimo; Charman, William N; Christopoulos, Arthur; Summers, Roger J; Sexton, Patrick M; Langmead, Christopher J

    2017-02-15

    Osteoclasts are multinuclear cells that degrade bone under both physiological and pathophysiological conditions. Osteoclasts are therefore a major target of osteoporosis therapeutics aimed at preserving bone. Consequently, analytical methods for osteoclast activity are useful for the development of novel biomarkers and/or pharmacological agents for the treatment of osteoporosis. The nucleation state of an osteoclast is indicative of its maturation and activity. To date, activity is routinely measured at the population level with only approximate consideration of the nucleation state (an 'osteoclast population' is typically defined as cells with ≥3 nuclei). Using a fluorescent substrate for tartrate-resistant acid phosphatase (TRAP), a routinely used marker of osteoclast activity, we developed a multi-labelled imaging method for quantitative measurement of osteoclast TRAP activity at the single cell level. Automated image analysis enables interrogation of large osteoclast populations in a high throughput manner using open source software. Using this methodology, we investigated the effects of receptor activator of nuclear factor kappa-B ligand (RANK-L) on osteoclast maturation and activity and demonstrated that TRAP activity directly correlates with osteoclast maturity (i.e. nuclei number). This method can be applied to high throughput screening of osteoclast-targeting compounds to determine changes in maturation and activity.

  15. Quantitative Cyber Risk Reduction Estimation Methodology for a Small Scada Control System

    SciTech Connect

    Miles A. McQueen; Wayne F. Boyer; Mark A. Flynn; George A. Beitel

    2006-01-01

    We propose a new methodology for obtaining a quick quantitative measurement of the risk reduction achieved when a control system is modified with the intent to improve cyber security defense against external attackers. The proposed methodology employs a directed graph called a compromise graph, in which the nodes represent stages of a potential attack and the edges represent the expected time-to-compromise for differing attacker skill levels. Time-to-compromise is modeled as a function of known vulnerabilities and attacker skill level. The methodology was used to calculate risk reduction estimates for a specific SCADA system and a specific set of control system security remedial actions. Despite an 86% reduction in the total number of vulnerabilities, the estimated time-to-compromise increased by only about 3% to 30%, depending on the target and attacker skill level.
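    The core quantity in such a methodology, the minimum expected time-to-compromise over paths in the compromise graph, can be sketched with Dijkstra's algorithm. The stage names and time values below are hypothetical illustrations, not the paper's SCADA data:

```python
import heapq

# Sketch of the compromise-graph idea: nodes are attack stages, directed
# edge weights are expected time-to-compromise (TTC, in days) for a given
# attacker skill level. Graph values are hypothetical.

def shortest_ttc(graph, start, target):
    """Dijkstra: minimum total expected time-to-compromise from start to target."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, ttc in graph.get(node, {}).items():
            nd = d + ttc
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")

before = {"internet": {"dmz": 2.0}, "dmz": {"hmi": 5.0}, "hmi": {"plc": 1.0}}
after = {"internet": {"dmz": 4.0}, "dmz": {"hmi": 9.0}, "hmi": {"plc": 1.5}}

t0 = shortest_ttc(before, "internet", "plc")  # 8.0 days before remediation
t1 = shortest_ttc(after, "internet", "plc")   # 14.5 days after remediation
print(f"TTC increase: {100 * (t1 - t0) / t0:.0f}%")
```

    Comparing the shortest TTC before and after remediation yields the risk-reduction figure of merit.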

  16. Target identification with quantitative activity based protein profiling (ABPP).

    PubMed

    Chen, Xiao; Wong, Yin Kwan; Wang, Jigang; Zhang, Jianbin; Lee, Yew-Mun; Shen, Han-Ming; Lin, Qingsong; Hua, Zi-Chun

    2017-02-01

    As many small bioactive molecules fulfill their functions through interacting with protein targets, the identification of such targets is crucial in understanding their mechanisms of action (MOA) and side effects. With technological advancements in target identification, it has become possible to accurately and comprehensively study the MOA and side effects of small molecules. While small molecules with therapeutic potential were derived solely from nature in the past, the remodeling and synthesis of such molecules have now been made possible. Presently, while some small molecules have seen successful application as drugs, the majority remain undeveloped, requiring further understanding of their MOA and side effects to fully tap into their potential. Given the typical promiscuity of many small molecules and the complexity of the cellular proteome, a high-throughput and high-accuracy method is necessary. While affinity chromatography approaches combined with MS have had successes in target identification, limitations associated with nonspecific results remain. To overcome these complications, quantitative chemical proteomics approaches have been developed, including metabolic labeling, chemical labeling, and label-free methods. These new approaches are adopted in conjunction with activity-based protein profiling (ABPP), allowing for a rapid process and accurate results. This review will briefly introduce the principles involved in ABPP, then summarize current advances in quantitative chemical proteomics approaches as well as illustrate with examples how ABPP coupled with quantitative chemical proteomics has been used to detect the targets of drugs and other bioactive small molecules including natural products.

  17. Partitioning and lipophilicity in quantitative structure-activity relationships.

    PubMed Central

    Dearden, J C

    1985-01-01

    The history of the relationship of biological activity to partition coefficient and related properties is briefly reviewed. The dominance of partition coefficient in quantitation of structure-activity relationships is emphasized, although the importance of other factors is also demonstrated. Various mathematical models of in vivo transport and binding are discussed; most of these involve partitioning as the primary mechanism of transport. The models describe observed quantitative structure-activity relationships (QSARs) well on the whole, confirming that partitioning is of key importance in in vivo behavior of a xenobiotic. The partition coefficient is shown to correlate with numerous other parameters representing bulk, such as molecular weight, volume and surface area, parachor and calculated indices such as molecular connectivity; this is especially so for apolar molecules, because for polar molecules lipophilicity factors into both bulk and polar or hydrogen bonding components. The relationship of partition coefficient to chromatographic parameters is discussed, and it is shown that such parameters, which are often readily obtainable experimentally, can successfully supplant partition coefficient in QSARs. The relationship of aqueous solubility with partition coefficient is examined in detail. Correlations are observed, even with solid compounds, and these can be used to predict solubility. The additive/constitutive nature of partition coefficient is discussed extensively, as are the available schemes for the calculation of partition coefficient. Finally the use of partition coefficient to provide structural information is considered. It is shown that partition coefficient can be a valuable structural tool, especially if the enthalpy and entropy of partitioning are available. PMID:3905374
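    A classic way partitioning enters QSARs of the kind reviewed above is the parabolic Hansch-type model, in which activity rises and then falls with log P. A minimal fitting sketch on synthetic compound data (all coefficients and values are illustrative, not taken from the review):

```python
import numpy as np

# Sketch of a parabolic lipophilicity-activity QSAR: fit
# log(1/C) = a*(logP)^2 + b*logP + c and read off the optimum log P.
# The synthetic "compounds" below are illustrations only.

rng = np.random.default_rng(3)
logP = rng.uniform(-1, 5, 40)                 # hypothetical partition coefficients
activity = -0.2 * logP**2 + 1.3 * logP + 2.0 + rng.normal(0, 0.1, 40)

a, b, c = np.polyfit(logP, activity, 2)       # quadratic least-squares fit
logP_opt = -b / (2 * a)                       # optimum lipophilicity of the parabola
print(f"optimal logP ~ {logP_opt:.1f}")
```

    The fitted optimum is the kind of quantity such models use to predict the most active member of a congeneric series.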

  18. [Quantitative estimation of sources of urban atmospheric CO2 by carbon isotope composition].

    PubMed

    Liu, Wei; Wei, Nan-Nan; Wang, Guang-Hua; Yao, Jian; Zeng, You-Shi; Fan, Xue-Bo; Geng, Yan-Hong; Li, Yan

    2012-04-01

    To effectively reduce urban carbon emissions and to verify the effectiveness of current urban carbon emission reduction projects, the sources of urban atmospheric CO2 must be estimated quantitatively and correctly. Since little carbon isotope fractionation occurs during transport from pollution sources to the receptor, the carbon isotope composition can be used for source apportionment. In the present study, a method was established to quantitatively estimate the sources of urban atmospheric CO2 from the carbon isotope composition. Both diurnal and height variations of the concentrations of CO2 derived from biomass, vehicle exhaust and coal burning were further determined for atmospheric CO2 in the Jiading district of Shanghai. Biomass-derived CO2 accounts for the largest portion of atmospheric CO2. The concentrations of CO2 derived from coal burning are larger in the night-time (00:00, 04:00 and 20:00) than in the daytime (08:00, 12:00 and 16:00), and increase with height. Those derived from vehicle exhaust decrease with height. The diurnal and height variations of the sources reflect the emission and transport characteristics of atmospheric CO2 in the Jiading district of Shanghai.
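    The mass-balance idea behind such isotope source apportionment can be sketched as a small linear system: one equation forcing the source fractions to sum to one, plus one balance equation per measured isotope signature. All end-member and mixture values below are hypothetical illustrations, not measurements from the study:

```python
import numpy as np

# Sketch of isotope mass-balance source apportionment: with n sources and
# n-1 independent isotope signatures, the source fractions solve a linear
# system A @ f = observed. End-member values are hypothetical.

sources = ["biomass", "vehicle", "coal"]
A = np.array([
    [1.0,   1.0,    1.0],    # f_biomass + f_vehicle + f_coal = 1
    [-25.0, -28.5,  -23.5],  # hypothetical delta13C end-members (per mil)
    [50.0,  -980.0, -995.0], # hypothetical second-tracer end-members
])
observed = np.array([1.0, -26.0, -300.0])  # mixture constraints (hypothetical)

fractions = np.linalg.solve(A, observed)
for name, f in zip(sources, fractions):
    print(f"{name}: {f:.2f}")
```

    With these illustrative numbers the biomass fraction comes out largest, mirroring the qualitative finding reported for Jiading.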

  19. The quantitative estimation of the vulnerability of brick and concrete wall impacted by an experimental boulder

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Guo, Z. X.; Wang, D.; Qian, H.

    2016-02-01

    There is little historical data on the vulnerability of elements damaged by debris flow events in China. It is therefore difficult to quantitatively estimate the vulnerability of elements exposed to debris flows. This paper is devoted to the vulnerability of brick and concrete walls impacted by debris flows. An experimental boulder (an iron sphere) was used as a substitute for debris flow, since it produces an impulse load on elements of a shape similar to that of debris flow. Several walls made of brick and concrete were constructed at prototype dimensions to physically simulate structures damaged by debris flows. The maximum impact force was measured, and the damage conditions of the elements (including cracks and displacements) were collected, described and compared. A failure criterion for brick and concrete walls was proposed with reference to the structural characteristics as well as the damage patterns caused by debris flows. A quantitative estimation of the vulnerability of brick and concrete walls was finally established based on fuzzy mathematics and the proposed failure criterion. Momentum, maximum impact force and maximum impact bending moment were compared as candidates for the disaster intensity index. The results show that the maximum impact bending moment is the most suitable disaster intensity index for establishing the vulnerability curve and formula.

  20. Sleep Period Time Estimation Based on Electrodermal Activity.

    PubMed

    Hwang, Su Hwan; Seo, Sangwon; Yoon, Hee Nam; Jung, Da Woon; Baek, Hyun Jae; Cho, Jaegeol; Choi, Jae Won; Lee, Yu Jin; Jeong, Do-Un; Park, Kwang Suk

    2017-01-01

    We proposed and tested a method to estimate sleep period time (SPT) using electrodermal activity (EDA) signals. Eight healthy subjects and six obstructive sleep apnea patients participated in the experiments. Each subject's EDA signals were measured at the middle and ring fingers of the dominant hand during polysomnography (PSG). For nine of the 17 participants, wrist actigraphy was also measured for a quantitative comparison of EDA- and actigraphy-based methods. Based on the training data, we observed that sleep onset was accompanied by a gradual reduction of amplitude of the EDA signals, whereas sleep offset was accompanied by a rapid increase in amplitude of EDA signals. We developed a method based on these EDA fluctuations during sleep-wake transitions, and applied it to a test dataset. The performance of the method was assessed by comparing its results with those from a physician's sleep stage scores. The mean absolute errors in the obtained values for sleep onset, offset, and period time between the proposed method, and the results of the PSG were 4.1, 3.0, and 6.1 min, respectively. Furthermore, there were no significant differences in the corresponding values between the methods. We compared these results with those obtained by applying actigraphic methods, and found that our algorithm outperformed these in terms of each estimated parameter of interest in SPT estimation. Long awakening periods were also detected based on sympathetic responses reflected in the EDA signals. The proposed method can be applied to a daily sleep monitoring system.
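    The onset/offset rule described above (EDA amplitude falls gradually at sleep onset and rises sharply at offset) can be sketched as threshold crossings on a smoothed amplitude envelope. The signal, sampling rate and threshold here are synthetic, chosen only to illustrate the detection logic:

```python
import numpy as np

# Sketch of SPT estimation from an EDA amplitude envelope: take sleep
# onset as the first sample where the envelope drops below a threshold
# and offset as the sample after the last sub-threshold sample.
# Envelope and threshold are synthetic illustrations.

def estimate_spt(envelope, threshold):
    """Return (onset_idx, offset_idx) from a 1-D amplitude envelope."""
    below = envelope < threshold
    onset = int(np.argmax(below))                       # first sample below threshold
    offset = len(below) - int(np.argmax(below[::-1]))   # one past the last such sample
    return onset, offset

t = np.arange(600)  # one sample per minute over a 10 h recording
env = np.where((t > 60) & (t < 480), 0.2, 1.0)  # low amplitude while asleep
onset, offset = estimate_spt(env, 0.5)
print(onset, offset, offset - onset)  # 61 480 419 (SPT in minutes)
```

    A real implementation would smooth the raw EDA signal first and handle awakenings (brief amplitude rises) inside the sleep period, as the study does for long awakening detection.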

  1. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling

    PubMed Central

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-01-01

    Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desirable to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse and uncertain, and are frequently only available in the form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB™-based tool for guaranteed model invalidation and state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270

  2. Quantitative genetic activity graphical profiles for use in chemical evaluation

    SciTech Connect

    Waters, M.D.; Stack, H.F.; Garrett, N.E.; Jackson, M.A.

    1990-12-31

    A graphical approach, termed a Genetic Activity Profile (GAP), was developed to display a matrix of data on the genetic and related effects of selected chemical agents. The profiles provide a visual overview of the quantitative (doses) and qualitative (test results) data for each chemical. Either the lowest effective dose or the highest ineffective dose is recorded for each agent and bioassay. Up to 200 different test systems are represented across the GAP. Bioassay systems are organized according to the phylogeny of the test organisms and the end points of genetic activity. The methodology for producing and evaluating genetic activity profiles was developed in collaboration with the International Agency for Research on Cancer (IARC). Data on individual chemicals were compiled by IARC and by the US Environmental Protection Agency (EPA). Data are available on 343 compounds selected from volumes 1-53 of the IARC Monographs and on 115 compounds identified as Superfund Priority Substances. Software to display the GAPs on an IBM-compatible personal computer is available from the authors. Structurally similar compounds frequently display qualitatively and quantitatively similar profiles of genetic activity. Through examination of the patterns of GAPs of pairs and groups of chemicals, it is possible to make more informed decisions regarding the selection of test batteries to be used in the evaluation of chemical analogs. GAPs provide useful data for the development of weight-of-evidence hazard ranking schemes. Also, some knowledge of the potential genetic activity of complex environmental mixtures may be gained from an assessment of the genetic activity profiles of component chemicals. The fundamental techniques and computer programs devised for the GAP database may be used to develop similar databases in other disciplines. 36 refs., 2 figs.

  3. The consistency of quantitative genetic estimates in field and laboratory in the yellow dung fly.

    PubMed

    Blanckenhorn, Wolf U

    2002-03-01

    How consistent quantitative genetic estimates are across environments is unclear and under discussion. Heritability (h2) estimates of hind tibia length (body size), development time and diapause induction in the yellow dung fly, Scathophaga stercoraria, generated with various methods in various environments are reported and compared. Estimates varied considerably within and among studies, but yielded good overall averages. The genetic correlations between the sexes for body size and development time were expectedly high (r(sex) = 0.57-0.78) but clearly less than unity, implying independent evolution of both traits in males and females of this sexually dimorphic species. Genetic and environmental variance components increased in proportion at variable field relative to constant laboratory conditions, resulting in overall similar h2. Heritabilities for males and females were also similar, and the h2 of the morphological trait hind tibia length was not necessarily greater than that of the two life history traits. Full-sib (broad-sense) estimates (h2 = 0.7-1.1) were 2-3 times greater than half-sib and parent/offspring (narrow-sense) estimates (h2 = 0-0.6). Common environment (i.e., among-container) variance averaged 38.3% (body size) and 16.8% (development time) of the broad-sense genetic variance in two laboratory studies. The broad-sense h2, therefore, may contain substantial amounts (12-50%) of dominance variance and/or variance due to maternal effects. A general conclusion emerging from this and similar studies appears to be that whether field and laboratory genetic estimates differ depends on the environment, trait and species under consideration.
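    One of the narrow-sense methods compared above, parent/offspring regression, estimates h2 as the slope of offspring value on midparent value. A minimal sketch on synthetic data (not the dung-fly measurements):

```python
import numpy as np

# Sketch of narrow-sense heritability estimation via midparent-offspring
# regression: the regression slope of offspring trait value on midparent
# trait value estimates h2. Data are synthetic.

rng = np.random.default_rng(0)
n = 500
h2_true = 0.4
midparent = rng.normal(0.0, 1.0, n)  # standardized midparent trait values
# offspring resemble midparents in proportion to h2, plus environmental noise
offspring = h2_true * midparent + rng.normal(0.0, np.sqrt(1 - h2_true**2), n)

slope, intercept = np.polyfit(midparent, offspring, 1)
print(f"estimated h2 = {slope:.2f}")  # close to the true value of 0.4
```

    Full-sib designs instead estimate a broad-sense h2 from among-family variance, which is why, as the abstract notes, they can absorb dominance and common-environment variance.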

  4. Visual estimation versus different quantitative coronary angiography methods to assess lesion severity in bifurcation lesions.

    PubMed

    Grundeken, Maik J; Collet, Carlos; Ishibashi, Yuki; Généreux, Philippe; Muramatsu, Takashi; LaSalle, Laura; Kaplan, Aaron V; Wykrzykowska, Joanna J; Morel, Marie-Angèle; Tijssen, Jan G; de Winter, Robbert J; Onuma, Yoshinobu; Leon, Martin B; Serruys, Patrick W

    2017-08-24

    To compare visual estimation with different quantitative coronary angiography (QCA) methods (single-vessel versus bifurcation software) to assess coronary bifurcation lesions. QCA was developed to overcome the limitations of visual estimation. Conventional QCA, however, developed in "straight vessels," has proved to be inaccurate in bifurcation lesions. Therefore, bifurcation QCA was developed. However, the impact of these different modalities on bifurcation lesion severity classification was previously unknown. METHODS: From a randomized controlled trial investigating a novel bifurcation stent (Clinicaltrials.gov NCT01258972), patients with baseline assessment of lesion severity by means of visual estimation, single-vessel QCA, 2D bifurcation QCA and 3D bifurcation QCA were included. We included 113 bifurcation lesions in which all 5 modalities were assessed. The primary end-point was to evaluate how the different modalities affected the classification of bifurcation lesion severity and extent of disease. On visual estimation, 100% of lesions had a side-branch diameter stenosis (%DS) >50%, whereas a side-branch %DS >50% was found in 83% with single-vessel QCA, 27% with 2D bifurcation QCA and 26% with 3D bifurcation QCA (P < 0.0001). With regard to the percentage of "true" bifurcation lesions, there was a significant difference between visual estimation (100%), single-vessel QCA (75%) and bifurcation QCA (17% with 2D bifurcation software and 13% with 3D bifurcation software, P < 0.0001). Our study showed that bifurcation lesion complexity was significantly affected when more advanced bifurcation QCA software was used. The "true" bifurcation lesion rate was 100% on visual estimation, but as low as 13% when analyzed with dedicated bifurcation QCA software. © 2017 Wiley Periodicals, Inc.
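    The %DS figures above follow the standard QCA definition of percent diameter stenosis, computed from the minimal lumen diameter and a reference vessel diameter; a minimal sketch with illustrative measurements:

```python
# Sketch of the standard percent diameter stenosis computation used in QCA:
# %DS = (1 - MLD / RVD) * 100, where MLD is the minimal lumen diameter and
# RVD the reference vessel diameter. (Bifurcation software differs mainly in
# how RVD is derived near the carina.) Values below are illustrative.

def percent_ds(mld_mm, rvd_mm):
    """Percent diameter stenosis from MLD and RVD, both in mm."""
    return (1.0 - mld_mm / rvd_mm) * 100.0

# A side branch with MLD = 1.2 mm against a 2.5 mm reference diameter:
ds = percent_ds(1.2, 2.5)
print(f"{ds:.0f}% DS")  # 52% DS, so classified as a >50% stenosis
```

    Because bifurcation software derives the reference diameter differently from straight-vessel software, the same MLD can yield a %DS on either side of the 50% cutoff, which drives the reclassification reported above.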

  5. Estimating effects of a single gene and polygenes on quantitative traits from a diallel design.

    PubMed

    Lou, Xiang-Yang; Yang, Mark C K

    2006-01-01

    A genetic model is developed with additive and dominance effects of a single gene and polygenes as well as general and specific reciprocal effects for the progeny from a diallel mating design. The methods of ANOVA, minimum norm quadratic unbiased estimation (MINQUE), restricted maximum likelihood estimation (REML), and maximum likelihood estimation (ML) are suggested for estimating variance components, and the methods of generalized least squares (GLS) and ordinary least squares (OLS) for estimating fixed effects, while best linear unbiased prediction, linear unbiased prediction (LUP), and adjusted unbiased prediction are suggested for analyzing random effects. Monte Carlo simulations were conducted to evaluate the unbiasedness and efficiency of the statistical methods for two diallel designs with commonly used sample sizes, 6 and 8 parents, with no and missing crosses, respectively. Simulation results show that GLS and OLS are almost equally efficient for estimation of fixed effects, while MINQUE (1) and REML are better estimators of the variance components and LUP is the most practical method for prediction of random effects. Data from a Drosophila melanogaster experiment (Gilbert 1985a, Theor Appl Genet 69:625-629) were used as a working example to demonstrate the statistical analysis. The new methodology is also applicable to screening candidate gene(s) and to other mating designs with multiple parents, such as nested (NC Design I) and factorial (NC Design II) designs. Moreover, this methodology can serve as a guide to develop new methods for detecting indiscernible major genes and mapping quantitative trait loci based on mixture distribution theory. The computer program for the methods suggested in this article is freely available from the authors.

  6. Estimation of Exercise Intensity in “Exercise and Physical Activity Reference for Health Promotion”

    NASA Astrophysics Data System (ADS)

    Ohkubo, Tomoyuki; Kurihara, Yosuke; Kobayashi, Kazuyuki; Watanabe, Kajiro

    Maintaining and promoting the health of elderly citizens is quite important for Japan. Given these circumstances, the Ministry of Health, Labour and Welfare has established standards for activities and exercises that promote health, and has quantitatively determined the exercise intensity of 107 activities. Calculating this exercise intensity, however, requires recording the type and duration of each activity. In this paper, the exercise intensities of 25 daily activities are estimated using a 3D accelerometer. As a result, the exercise intensities were estimated with a root mean square error of 0.83 METs over all 25 activities.

  7. A Novel Method of Quantitative Anterior Chamber Depth Estimation Using Temporal Perpendicular Digital Photography

    PubMed Central

    Zamir, Ehud; Kong, George Y.X.; Kowalski, Tanya; Coote, Michael; Ang, Ghee Soon

    2016-01-01

    Purpose We hypothesize that: (1) Anterior chamber depth (ACD) is correlated with the relative anteroposterior position of the pupillary image, as viewed from the temporal side. (2) Such a correlation may be used as a simple quantitative tool for estimation of ACD. Methods Two hundred sixty-six phakic eyes had lateral digital photographs taken from the temporal side, perpendicular to the visual axis, and underwent optical biometry (Nidek AL scanner). The relative anteroposterior position of the pupillary image was expressed using the ratio between: (1) lateral photographic temporal limbus to pupil distance ("E") and (2) lateral photographic temporal limbus to cornea distance ("Z"). In the first chronological half of patients (Correlation Series), the E:Z ratio (EZR) was correlated with optical biometric ACD. The correlation equation was then used to predict ACD in the second half of patients (Prediction Series) and compared to their biometric ACD for agreement analysis. Results A strong linear correlation was found between EZR and ACD, R = −0.91, R2 = 0.81. Bland-Altman analysis showed good agreement between predicted ACD using this method and the optical biometric ACD. The mean error was −0.013 mm (range −0.377 to 0.336 mm), standard deviation 0.166 mm. The 95% limits of agreement were ±0.33 mm. Conclusions Lateral digital photography and EZR calculation is a novel method to quantitatively estimate ACD, requiring minimal equipment and training. Translational Relevance The EZ ratio may be employed in screening for angle closure glaucoma. It may also be helpful in outpatient medical clinic settings, where doctors need to judge the safety of topical or systemic pupil-dilating medications versus their risk of triggering acute angle closure glaucoma. Similarly, non-ophthalmologists may use it to estimate the likelihood of acute angle closure glaucoma in emergency presentations. PMID:27540496
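    The two-series design above (fit a linear EZR-to-ACD model on one cohort, then predict in a second) can be sketched as a simple linear fit. The data here are synthetic, chosen only to mimic the reported negative correlation between EZR and ACD:

```python
import numpy as np

# Sketch of the Correlation/Prediction-series workflow: fit ACD as a linear
# function of the E:Z ratio, then use the fitted line to predict ACD for new
# eyes. All EZR/ACD numbers are synthetic illustrations (the study reports
# R = -0.91, i.e. a negative slope).

rng = np.random.default_rng(1)
ezr_fit = rng.uniform(0.2, 0.8, 100)                        # "Correlation Series"
acd_fit = 4.2 - 2.0 * ezr_fit + rng.normal(0, 0.15, 100)    # deeper AC, smaller EZR

slope, intercept = np.polyfit(ezr_fit, acd_fit, 1)

def predict_acd(ezr):
    """Predicted ACD (mm) for a new eye from its measured EZR."""
    return slope * ezr + intercept

print(f"ACD at EZR=0.5: {predict_acd(0.5):.2f} mm")
```

    Agreement between such predictions and biometric ACD is then what the Bland-Altman analysis in the paper quantifies.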

  8. Application of short-wave infrared (SWIR) spectroscopy in quantitative estimation of clay mineral contents

    NASA Astrophysics Data System (ADS)

    You, Jinfeng; Xing, Lixin; Liang, Liheng; Pan, Jun; Meng, Tao

    2014-03-01

    Clay minerals are significant constituents of soil and are necessary for life. This paper studied three types of clay minerals, kaolinite, illite, and montmorillonite, as they are not only the most common soil-forming materials but also important indicators of soil expansion and shrinkage potential. These clay minerals show diagnostic absorption bands resulting from vibrations of hydroxyl groups and structural water molecules in the SWIR wavelength region. The short-wave infrared reflectance spectra of the soils were obtained from a Portable Near Infrared Spectrometer (PNIS, spectral range: 1300~2500 nm, interval: 2 nm). Owing to its simplicity, speed, and non-destructive analysis, SWIR spectroscopy has been widely used in geological prospecting, chemical engineering, and many other fields. The aim of this study was to use multiple linear regression (MLR) and partial least squares (PLS) regression to establish optimized quantitative estimation models of the kaolinite, illite, and montmorillonite contents from soil reflectance spectra. Here, the soil reflectance spectra mainly refer to the spectral reflectivity of soil (SRS) corresponding to the absorption-band positions (AP) of representative kaolinite, illite, and montmorillonite spectra from the USGS spectral library, the SRS corresponding to the AP of the soil spectra, and the overall soil spectrum reflectance values. The optimal estimation models of the three clay mineral contents showed satisfactory retrieval accuracy (kaolinite content: a Root Mean Square Error of Calibration (RMSEC) of 1.671 with a coefficient of determination (R2) of 0.791; illite content: an RMSEC of 1.126 with an R2 of 0.616; montmorillonite content: an RMSEC of 1.814 with an R2 of 0.707). Thus, the reflectance spectra of soil obtained from the PNIS could be used for quantitative estimation of kaolinite, illite, and montmorillonite contents in soil.
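
A minimal sketch of the MLR calibration step under stated assumptions: the band reflectances, spectral sensitivities, and clay contents below are synthetic, and the fit is ordinary least squares via NumPy rather than the authors' software.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = rng.uniform(0.1, 0.6, size=(n, 3))              # SRS at three hypothetical band positions
true_coef = np.array([-20.0, 15.0, -8.0])           # invented spectral sensitivities
y = 30.0 + X @ true_coef + rng.normal(0.0, 0.1, n)  # synthetic clay content (%)

A = np.column_stack([np.ones(n), X])                # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)        # ordinary least squares fit
rmsec = float(np.sqrt(np.mean((y - A @ coef) ** 2)))  # Root Mean Square Error of Calibration
```

The RMSEC computed this way is the same calibration statistic quoted in the abstract for each mineral's model.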

  9. Quantitative and Dynamic Imaging of ATM Kinase Activity.

    PubMed

    Nyati, Shyam; Young, Grant; Ross, Brian Dale; Rehemtulla, Alnawaz

    2017-01-01

    Ataxia telangiectasia mutated (ATM) is a serine/threonine kinase critical to the cellular DNA-damage response, including DNA double-strand breaks (DSBs). ATM activation results in the initiation of a complex cascade of events facilitating DNA damage repair, cell cycle checkpoint control, and survival. Traditionally, protein kinases have been analyzed in vitro using biochemical methods (kinase assays using purified proteins or immunological assays) requiring a large number of cells and cell lysis. Genetically encoded biosensors based on optical molecular imaging such as fluorescence or bioluminescence have been developed to enable interrogation of kinase activities in live cells with a high signal to background. We have genetically engineered a hybrid protein whose bioluminescent activity is dependent on the ATM-mediated phosphorylation of a substrate. The engineered protein consists of the split luciferase-based protein complementation pair with a CHK2 (a substrate for ATM kinase activity) target sequence and a phospho-serine/threonine-binding domain, FHA2, derived from yeast Rad53. Phosphorylation of the serine residue within the target sequence by ATM would lead to its interaction with the phospho-serine-binding domain, thereby preventing complementation of the split luciferase pair and loss of reporter activity. Bioluminescence imaging of reporter expressing cells in cultured plates or as mouse xenografts provides a quantitative surrogate for ATM kinase activity and therefore the cellular DNA damage response in a noninvasive, dynamic fashion.

  10. Quantitative ultrasound characterization of locally advanced breast cancer by estimation of its scatterer properties

    SciTech Connect

    Tadayyon, Hadi; Sadeghi-Naini, Ali; Czarnota, Gregory; Wirtzfeld, Lauren; Wright, Frances C.

    2014-01-15

    Purpose: Tumor grading is an important part of breast cancer diagnosis and currently requires biopsy as its standard. Here, the authors investigate quantitative ultrasound parameters in locally advanced breast cancers that can potentially separate tumors from normal breast tissue and differentiate tumor grades. Methods: Ultrasound images and radiofrequency data from 42 locally advanced breast cancer patients were acquired and analyzed. Parameters related to the linear regression of the power spectrum—midband fit, slope, and 0-MHz-intercept—were determined from breast tumors and normal breast tissues. Mean scatterer spacing was estimated from the spectral autocorrelation, and the effective scatterer diameter and effective acoustic concentration were estimated from the Gaussian form factor. Parametric maps of each quantitative ultrasound parameter were constructed from the gated radiofrequency segments in tumor and normal tissue regions of interest. In addition to the mean values of the parametric maps, higher order statistical features, computed from gray-level co-occurrence matrices were also determined and used for characterization. Finally, linear and quadratic discriminant analyses were performed using combinations of quantitative ultrasound parameters to classify breast tissues. Results: Quantitative ultrasound parameters were found to be statistically different between tumor and normal tissue (p < 0.05). The combination of effective acoustic concentration and mean scatterer spacing could separate tumor from normal tissue with 82% accuracy, while the addition of effective scatterer diameter to the combination did not provide significant improvement (83% accuracy). Furthermore, the two advanced parameters, including effective scatterer diameter and mean scatterer spacing, were found to be statistically differentiating among grade I, II, and III tumors (p = 0.014 for scatterer spacing, p = 0.035 for effective scatterer diameter). The separation of the tumor
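
The three linear-regression spectral parameters named above (midband fit, slope, 0-MHz intercept) can be sketched as a straight-line fit to a log power spectrum; the bandwidth, slope, and noise level below are invented for illustration, not taken from the study's radiofrequency data.

```python
import numpy as np

rng = np.random.default_rng(1)
freq = np.linspace(3.0, 8.0, 60)        # MHz; hypothetical usable bandwidth
true_slope, true_intercept = 2.5, -40.0
spectrum = true_intercept + true_slope * freq + rng.normal(0.0, 0.5, freq.size)  # dB

slope, intercept = np.polyfit(freq, spectrum, 1)  # slope (dB/MHz), 0-MHz intercept (dB)
midband_fit = slope * freq.mean() + intercept     # regression line evaluated at band centre
```

Because an ordinary least-squares line passes through the point of means, the midband fit equals the mean spectral level over the analysis band.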

  11. An Ensemble Generator for Quantitative Precipitation Estimation Based on Censored Shifted Gamma Distributions

    NASA Astrophysics Data System (ADS)

    Wright, D.; Kirschbaum, D.; Yatheendradas, S.

    2016-12-01

    The considerable uncertainties associated with quantitative precipitation estimates (QPE), whether from satellite platforms, ground-based weather radar, or numerical weather models, suggest that such QPE should be expressed as distributions or ensembles of possible values, rather than as single values. In this research, we borrow a framework from the weather forecast verification community, to "correct" satellite precipitation and generate ensemble QPE. This approach is based on the censored shifted gamma distribution (CSGD). The probability of precipitation, central tendency (i.e. mean), and the uncertainty can be captured by the three parameters of the CSGD. The CSGD can then be applied for simulation of rainfall ensembles using a flexible nonlinear regression framework, whereby the CSGD parameters can be conditioned on one or more reference rainfall datasets and on other time-varying covariates such as modeled or measured estimates of precipitable water and relative humidity. We present the framework and initial results by generating precipitation ensembles based on the Tropical Rainfall Measuring Mission Multi-satellite Precipitation Analysis (TMPA) dataset, using both NLDAS and PERSIANN-CDR precipitation datasets as references. We also incorporate a number of covariates from MERRA2 reanalysis including model-estimated precipitation, precipitable water, relative humidity, and lifting condensation level. We explore the prospects for applying the framework and other ensemble error models globally, including in regions where high-quality "ground truth" rainfall estimates are lacking. We compare the ensemble outputs against those of an independent rain gage-based ensemble rainfall dataset. "Pooling" of regional rainfall observations is explored as one option for improving ensemble estimates of rainfall extremes. The approach has potential applications in near-realtime, retrospective, and scenario modeling of rainfall-driven hazards such as floods and landslides
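
A sketch of how a censored shifted gamma distribution can generate ensemble members: a gamma variate is shifted (typically leftward) and censored at zero, so the point mass at zero carries the probability of no precipitation. The three CSGD parameters here are illustrative, not values fitted to TMPA or any reference dataset.

```python
import numpy as np

def sample_csgd(shape, scale, shift, size, rng):
    """Censored shifted gamma draws: gamma, shifted left, censored at zero.

    The censoring mass at zero models the probability of no precipitation;
    shape/scale/shift here are illustrative, not fitted CSGD parameters.
    """
    return np.maximum(rng.gamma(shape, scale, size) + shift, 0.0)

rng = np.random.default_rng(42)
ens = sample_csgd(shape=0.8, scale=5.0, shift=-2.0, size=10_000, rng=rng)
p_dry = float(np.mean(ens == 0.0))   # fraction of ensemble members with zero rain
```

In the regression framework described above, the shape, scale, and shift would be conditioned on the reference rainfall and covariates rather than held fixed.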

  12. Using extended genealogy to estimate components of heritability for 23 quantitative and dichotomous traits.

    PubMed

    Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L

    2013-05-01

    Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays.
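
For intuition only (this is not the paper's relatedness-based method): in classical quantitative genetics, narrow-sense heritability equals the slope of offspring phenotype regressed on the midparent value, which a small simulation with a known h2 can verify.

```python
import numpy as np

rng = np.random.default_rng(0)
n, h2 = 20_000, 0.5
e_scale = np.sqrt((1.0 - h2) / h2)   # environmental SD so that Var(G)/Var(P) = h2

g_mum, g_dad = rng.standard_normal(n), rng.standard_normal(n)
p_mum = g_mum + e_scale * rng.standard_normal(n)
p_dad = g_dad + e_scale * rng.standard_normal(n)

# child genotype: parental average plus Mendelian segregation variance
g_child = 0.5 * (g_mum + g_dad) + np.sqrt(0.5) * rng.standard_normal(n)
p_child = g_child + e_scale * rng.standard_normal(n)

midparent = 0.5 * (p_mum + p_dad)
slope = np.polyfit(midparent, p_child, 1)[0]   # should approximate h2 = 0.5
```

The paper's point is precisely that such family-based estimates can be inflated by shared environment, which this idealized simulation omits.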

  13. Spectral Feature Analysis for Quantitative Estimation of Cyanobacteria Chlorophyll-A

    NASA Astrophysics Data System (ADS)

    Lin, Yi; Ye, Zhanglin; Zhang, Yugan; Yu, Jie

    2016-06-01

    In recent years, lake eutrophication has caused large cyanobacteria blooms that not only bring serious ecological disaster but also restrict the sustainable development of the regional economy. Chlorophyll-a is a very important environmental factor for monitoring water quality, especially for lake eutrophication. Remote sensing techniques have been widely utilized to estimate the concentration of chlorophyll-a with different kinds of vegetation indices and to monitor its distribution in lakes, rivers, or along coastlines. For each vegetation index, the quantitative estimation accuracy may change across satellite data because of discrepancies in spectral resolution and channel centers between satellites. The purpose of this paper is to analyze the spectral features of chlorophyll-a with hyperspectral data (totally 651 bands) and to use the result to choose the optimal band combination for different satellites. The analysis method developed in this study could be useful for recognizing and monitoring cyanobacteria blooms automatically and accurately. In our experiment, the reflectance (from 350 nm to 1000 nm) of wild cyanobacteria at different concentrations (from 0 to 1362.11 ug/L) and the corresponding chlorophyll-a concentrations were measured simultaneously. Two kinds of hyperspectral vegetation indices were applied in this study: the simple ratio (SR) and the narrow-band normalized difference vegetation index (NDVI), each of which consists of any two bands in the entire 651 narrow bands. Multivariate statistical analysis was then used to construct linear, power, and exponential models. After analyzing the correlation between chlorophyll-a and single-band reflectance, SR, and NDVI respectively, the optimal spectral index for quantitative estimation of cyanobacteria chlorophyll-a, as well as the corresponding central wavelength and band width, were extracted.
    Results show that, under the condition of water disturbance, SR and NDVI are both suitable for quantitative
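
The two index families can be sketched as follows; the band reflectances are made-up values, and the ~700/675 nm pair is only an example of one candidate combination among the 651 narrow bands.

```python
import numpy as np

def simple_ratio(r_a, r_b):
    """SR index from reflectance at two narrow bands."""
    return r_a / r_b

def ndvi(r_a, r_b):
    """Narrow-band NDVI from any two bands; bounded in [-1, 1]."""
    return (r_a - r_b) / (r_a + r_b)

# invented reflectances at one candidate band pair (e.g. near 700 and 675 nm)
r700 = np.array([0.35, 0.30])
r675 = np.array([0.12, 0.20])
sr = simple_ratio(r700, r675)
vi = ndvi(r700, r675)
```

A band-selection search of the kind described would evaluate such indices for every band pair and keep the pair most correlated with measured chlorophyll-a.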

  14. Estimation of humoral activity of Eleutherococcus senticosus.

    PubMed

    Drozd, Janina; Sawicka, Teresa; Prosińska, Joanna

    2002-01-01

    The aim of the present work was to estimate the influence of two plant pharmaceutical preparations containing an extract from the root of Eleutherococcus senticosus, Argoeleuter tablets and Immuplant tablets, on the humoral response of the immunological system. Experiments were performed on six-week-old female Balb/c mice. To reveal the influence of preparations containing an extract of Eleutherococcus senticosus on elements of the immunological system, three modes of administration were compared: before illness, during illness, and a combination of both. The results obtained allow the following conclusions: the pharmaceutical preparations containing the extract of Eleutherococcus senticosus, administered orally, increase the level of immunoglobulins in the mice's blood serum; the preparations act with different potency, not fully dependent on the content of the marker of the active substance, eleutheroside E; the dosage of preparations containing the extract of Eleutherococcus senticosus should not be established based only on the extract content; and the best curative results, measured as stimulation of the humoral response of the organism, were obtained when a given preparation was administered therapeutically, although combined administration, prophylactic followed by continued administration during illness, is also valid.

  15. New service interface for River Forecasting Center derived quantitative precipitation estimates

    USGS Publications Warehouse

    Blodgett, David L.

    2013-01-01

    For more than a decade, the National Weather Service (NWS) River Forecast Centers (RFCs) have been estimating spatially distributed rainfall by applying quality-control procedures to radar-indicated rainfall estimates in the eastern United States and other best practices in the western United States to produce a national Quantitative Precipitation Estimate (QPE) (National Weather Service, 2013). The availability of archives of QPE information for analytical purposes has been limited to manual requests for access to raw binary file formats that are difficult for scientists who are not in the climatic sciences to work with. The NWS provided the QPE archives to the U.S. Geological Survey (USGS), and the contents of the real-time feed from the RFCs are being saved by the USGS for incorporation into the archives. The USGS has applied time-series aggregation and added latitude-longitude coordinate variables to publish the RFC QPE data. Web services provide users with direct (index-based) data access, rendered visualizations of the data, and resampled raster representations of the source data in common geographic information formats.

  16. A test for Improvement of high resolution Quantitative Precipitation Estimation for localized heavy precipitation events

    NASA Astrophysics Data System (ADS)

    Lee, Jung-Hoon; Roh, Joon-Woo; Park, Jeong-Gyun

    2017-04-01

    Accurate estimation of precipitation is one of the most difficult and significant tasks in weather diagnosis and forecasting. On the Korean Peninsula, heavy precipitation is caused by various physical mechanisms, including shortwave troughs, quasi-stationary moisture convergence zones among varying air masses, and direct/indirect effects of tropical cyclones. In addition, various geographical and topographical elements make the temporal and spatial distribution of precipitation very complicated. In particular, localized heavy rainfall events in South Korea generally arise from mesoscale convective systems embedded in these synoptic-scale disturbances. Even with weather radar data of high temporal and spatial resolution, accurate estimation of rain rate from radar reflectivity is difficult. The Z-R relationship (Marshall and Palmer 1948) has been the representative approach. In addition, several methods such as support vector machines (SVM), neural networks, fuzzy logic, and kriging have been utilized to improve the accuracy of rain-rate estimates. These methods yield different quantitative precipitation estimation (QPE) results, and their accuracy differs across heavy precipitation cases. In this study, to improve the accuracy of QPE for localized heavy precipitation, an ensemble method combining the Z-R relationship and these various techniques was tested. The QPE ensemble method was developed on the concept of utilizing the respective advantages of the precipitation calibration methods. Ensemble members were produced from combinations of different Z-R coefficients and calibration methods.
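
A sketch of inverting the Marshall-Palmer Z-R relation to obtain rain rate from reflectivity; a = 200 and b = 1.6 are the classic coefficients, which an ensemble approach like the one described would vary across members.

```python
import math

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b (Marshall-Palmer) for rain rate R in mm/h.

    dbz is reflectivity in dBZ; Z = 10**(dBZ/10) in mm^6/m^3.
    """
    z = 10.0 ** (dbz / 10.0)
    return (z / a) ** (1.0 / b)
```

For example, 46 dBZ maps to a rain rate of roughly 27 mm/h under the default coefficients, while different (a, b) pairs shift that value, which is exactly the spread an ensemble of Z-R members exploits.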

  17. Reef-associated crustacean fauna: biodiversity estimates using semi-quantitative sampling and DNA barcoding

    NASA Astrophysics Data System (ADS)

    Plaisance, L.; Knowlton, N.; Paulay, G.; Meyer, C.

    2009-12-01

    The cryptofauna associated with coral reefs accounts for a major part of the biodiversity in these ecosystems but has been largely overlooked in biodiversity estimates because the organisms are hard to collect and identify. We combine a semi-quantitative sampling design and a DNA barcoding approach to provide metrics for the diversity of reef-associated crustaceans. Twenty-two similar-sized dead heads of Pocillopora were sampled at 10 m depth from five central Pacific Ocean localities (four atolls in the Northern Line Islands and in Moorea, French Polynesia). All crustaceans were removed, and partial cytochrome oxidase subunit I was sequenced from 403 individuals, yielding 135 distinct taxa using a species-level criterion of 5% similarity. Most crustacean species were rare; 44% of the OTUs were represented by a single individual, and an additional 33% were represented by several specimens found only in one of the five localities. The Northern Line Islands and Moorea shared only 11 OTUs. Total numbers estimated by species richness statistics (Chao1 and ACE) suggest at least 90 species of crustaceans in Moorea and 150 in the Northern Line Islands for this habitat type. However, rarefaction curves for each region failed to approach an asymptote, and Chao1 and ACE estimators did not stabilize after sampling eight heads in Moorea, so even these diversity figures are underestimates. Nevertheless, even this modest sampling effort from a very limited habitat resulted in surprisingly high species numbers.
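
The Chao1 estimator used above has a simple closed form; this sketch implements the standard formula (with the bias-corrected fallback when doubletons are absent), not code from the study.

```python
def chao1(counts):
    """Chao1 richness: S_obs + F1**2 / (2*F2), with bias-corrected fallback.

    counts: individuals per observed OTU; F1 = singletons, F2 = doubletons.
    """
    s_obs = len(counts)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0
```

With many singletons, the estimate rises well above the observed richness, which is why the 44% singleton rate reported above pushes the regional totals past the raw OTU counts.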

  18. Quantitative Functional Imaging Using Dynamic Positron Computed Tomography and Rapid Parameter Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Koeppe, Robert Allen

    Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert, freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements.
The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations
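
A minimal sketch of the kind of two-compartment exchange model described above, with a hypothetical arterial input function and illustrative rate constants (K1 for uptake, k2 for clearance), integrated by forward Euler rather than by the dissertation's rapid estimation schemes.

```python
import numpy as np

dt, t_end = 0.1, 60.0
t = np.arange(0.0, t_end, dt)
ca = np.exp(-t / 20.0)     # hypothetical arterial input concentration Ca(t)
K1, k2 = 0.5, 0.1          # illustrative uptake and clearance rate constants

# dCt/dt = K1*Ca(t) - k2*Ct(t), forward-Euler integration
ct = np.zeros_like(t)
for i in range(1, t.size):
    ct[i] = ct[i - 1] + dt * (K1 * ca[i - 1] - k2 * ct[i - 1])
```

Parameter estimation then runs this model in reverse: given a measured tissue curve for each pixel, find the K1 and k2 that best reproduce it.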

  19. Quantitative estimation of surface ocean productivity and bottom water oxygen concentration using benthic foraminifera

    NASA Astrophysics Data System (ADS)

    Loubere, Paul

    1994-10-01

    An electronic supplement of this material may be obtained on a diskette or by Anonymous FTP from KOSMOS.AGU.ORG. (LOGIN to AGU's FTP account using ANONYMOUS as the username and GUEST as the password. Go to the right directory by typing CD APEND. Type LS to see what files are available. Type GET and the name of the file to get it. Finally, type EXIT to leave the system.) (Paper 94PA01624, Quantitative estimation of surface ocean productivity and bottom water concentration using benthic foraminifera, by P. Loubere). The diskette may be ordered from American Geophysical Union, 2000 Florida Avenue, N.W., Washington, DC 20009; $15.00. Payment must accompany order. Quantitative estimation of surface ocean productivity and bottom water oxygen concentration with benthic foraminifera was attempted using 70 samples from equatorial and North Pacific surface sediments. These samples come from a well defined depth range in the ocean, between 2200 and 3200 m, so that depth related factors do not interfere with the estimation. Samples were selected so that foraminifera were well preserved in the sediments and temperature and salinity were nearly uniform (T = 1.5° C; S = 34.6‰). The sample set was also assembled so as to minimize the correlation often seen between surface ocean productivity and bottom water oxygen values (r² = 0.23 for prediction purposes in this case). This procedure reduced the chances of spurious results due to correlations between the environmental variables. The samples encompass a range of productivities from about 25 to >300 gC m-2 yr-1, and a bottom water oxygen range from 1.8 to 3.5 ml/L. Benthic foraminiferal assemblages were quantified using the >62 µm fraction of the sediments and 46 taxon categories. MANOVA multivariate regression was used to project the faunal matrix onto the two environmental dimensions using published values for productivity and bottom water oxygen to calibrate this operation. The success of this regression was measured with the multivariate r

  20. Quantitative analysis of axonal fiber activation evoked by deep brain stimulation via activation density heat maps

    PubMed Central

    Hartmann, Christian J.; Chaturvedi, Ashutosh; Lujan, J. Luis

    2015-01-01

    Background: Cortical modulation is likely to be involved in the various therapeutic effects of deep brain stimulation (DBS). However, it is currently difficult to predict the changes of cortical modulation during clinical adjustment of DBS. Therefore, we present a novel quantitative approach to estimate anatomical regions of DBS-evoked cortical modulation. Methods: Four different models of the subthalamic nucleus (STN) DBS were created to represent variable electrode placements (model I: dorsal border of the posterolateral STN; model II: central posterolateral STN; model III: central anteromedial STN; model IV: dorsal border of the anteromedial STN). Axonal fibers of passage near each electrode location were reconstructed using probabilistic tractography and modeled using multi-compartment cable models. Stimulation-evoked activation of local axon fibers and corresponding cortical projections were modeled and quantified. Results: Stimulation at the border of the STN (models I and IV) led to a higher degree of fiber activation and associated cortical modulation than stimulation deeply inside the STN (models II and III). A posterolateral target (models I and II) was highly connected to cortical areas representing motor function. Additionally, model I was also associated with strong activation of fibers projecting to the cerebellum. Finally, models III and IV showed a dorsoventral difference of preferentially targeted prefrontal areas (model III: middle frontal gyrus; model IV: inferior frontal gyrus). Discussion: The method described herein allows characterization of cortical modulation across different electrode placements and stimulation parameters. Furthermore, knowledge of anatomical distribution of stimulation-evoked activation targeting cortical regions may help predict efficacy and potential side effects, and therefore can be used to improve the therapeutic effectiveness of individual adjustments in DBS patients. PMID:25713510

  1. Quantitative structure-activity relationship studies on nitrofuranyl antitubercular agents

    PubMed Central

    Hevener, Kirk E.; Ball, David M.; Buolamwini, John K.

    2008-01-01

    A series of nitrofuranylamide and related aromatic compounds displaying potent activity against M. tuberculosis has been investigated utilizing 3-Dimensional Quantitative Structure-Activity Relationship (3D-QSAR) techniques. Comparative Molecular Field Analysis (CoMFA) and Comparative Molecular Similarity Indices Analysis (CoMSIA) methods were used to produce 3D-QSAR models that correlated the Minimum Inhibitory Concentration (MIC) values against M. tuberculosis with the molecular structures of the active compounds. A training set of 95 active compounds was used to develop the models, which were then evaluated by a series of internal and external cross-validation techniques. A test set of 15 compounds was used for the external validation. Different alignment and ionization rules were investigated as well as the effect of global molecular descriptors including lipophilicity (cLogP, LogD), Polar Surface Area (PSA), and steric bulk (CMR), on model predictivity. Models with greater than 70% predictive ability, as determined by external validation, and high internal validity (cross validated r2 > .5) have been developed. Incorporation of lipophilicity descriptors into the models had negligible effects on model predictivity. The models developed will be used to predict the activity of proposed new structures and advance the development of next generation nitrofuranyl and related nitroaromatic anti-tuberculosis agents. PMID:18701298

  2. Quantitative ultraviolet skin exposure in children during selected outdoor activities.

    PubMed

    Melville, S K; Rosenthal, F S; Luckmann, R; Lew, R A

    1991-06-01

    We determined the cumulative exposure of 3 body sites to ultraviolet radiation from sunlight for 126 children observed from 1-3 d during a variety of common recreational activities at a girl scout camp, baseball camp and community baseball field. Median arm exposure to children playing baseball at a camp ranged from 27.6% to 33.2% of the possible ambient exposure. These exposures are similar to adult exposures reported for comparable activities. Median exposure to the arm at the girl scout camp during mixed activities ranged from 9.0% to 26.5% of possible ambient exposure. At the girl scout camp, exposure both within and between activity groups varied substantially and were more variable than the baseball players' exposure. Arm exposure was greater than cheek and forehead exposure for all subject groups, with an arm-to-cheek exposure ratio ranging from 1.7 to 2.3. For organized sports, such as baseball, it may be possible to assign a single exposure estimate for use in epidemiologic studies or risk estimates. However, for less uniform outdoor activities, wide variability in exposure makes it more difficult to predict an individual's exposure.

  3. Quantitative Structure-Activity Relationships for Organophosphate Enzyme Inhibition (Briefing Charts)

    DTIC Science & Technology

    2011-09-22

    AFRL-RH-WP-TR-2012-0089. Quantitative Structure-Activity Relationships for Organophosphate Enzyme Inhibition (Briefing Charts). Interim report, September 10 – September 12. ...difficult to quickly obtain. To address this concern, quantitative structure-activity relationship (QSAR) models were developed to predict

  4. Centennial increase in geomagnetic activity: Latitudinal differences and global estimates

    NASA Astrophysics Data System (ADS)

    Mursula, K.; Martini, D.

    2006-08-01

    We study here the centennial change in geomagnetic activity using the newly proposed Inter-Hour Variability (IHV) index. We correct the earlier estimates of the centennial increase by taking into account the effect of the change of the sampling of the magnetic field from one sample per hour to hourly means in the first years of the previous century. Since the IHV index is a variability index, the larger variability in the case of hourly sampling leads, without due correction, to excessively large values in the beginning of the century and an underestimated centennial increase. We discuss two ways to extract the necessary sampling calibration factors and show that they agree very well with each other. The effect of calibration is especially large at the midlatitude Cheltenham/Fredricksburg (CLH/FRD) station where the centennial increase changes from only 6% to 24% caused by calibration. Sampling calibration also leads to a larger centennial increase of global geomagnetic activity based on the IHV index. The results verify a significant centennial increase in global geomagnetic activity, in a qualitative agreement with the aa index, although a quantitative comparison is not warranted. We also find that the centennial increase has a rather strong and curious latitudinal dependence. It is largest at high latitudes. Quite unexpectedly, it is larger at low latitudes than at midlatitudes. These new findings indicate interesting long-term changes in near-Earth space. We also discuss possible internal and external causes for these observed differences. The centennial change of geomagnetic activity may be partly affected by changes in external conditions, partly by the secular decrease of the Earth's magnetic moment whose effect in near-Earth space may be larger than estimated so far.
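
The sampling effect being calibrated can be demonstrated on synthetic data: a variability index computed from one-per-hour spot samples is systematically larger than the same index computed from hourly means of the same field. The index and field below are simplified stand-ins, not the actual IHV definition or geomagnetic data.

```python
import numpy as np

def variability_index(hourly_values):
    """Mean absolute difference between successive hourly values (IHV-like)."""
    return float(np.mean(np.abs(np.diff(hourly_values))))

rng = np.random.default_rng(3)
minute_field = rng.standard_normal(24 * 60)   # synthetic 1-minute field (white noise)
by_hour = minute_field.reshape(24, 60)
spot = by_hour[:, 0]           # one sample per hour (early 20th-century practice)
means = by_hour.mean(axis=1)   # hourly means (modern practice)

ihv_spot = variability_index(spot)    # larger: spot samples keep full variance
ihv_means = variability_index(means)  # smaller: averaging suppresses variability
```

Without a correction factor between the two sampling modes, early-century index values would be inflated relative to modern ones, masking part of the centennial increase.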

  5. Immunohistochemical quantitation of oestrogen receptors and proliferative activity in oestrogen receptor positive breast cancer.

    PubMed Central

    Jensen, V; Ladekarl, M

    1995-01-01

    AIM--To evaluate the effect of the duration of formalin fixation and of tumour heterogeneity on quantitative estimates of oestrogen receptor content (oestrogen receptor index) and proliferative activity (MIB-1 index) in breast cancer. METHODS--Two monoclonal antibodies, MIB-1 and oestrogen receptor, were applied to formalin fixed, paraffin wax embedded tissue from 25 prospectively collected oestrogen receptor positive breast carcinomas, using a microwave antigen retrieval method. Tumour tissue was allocated systematically to different periods of fixation to ensure minimal intraspecimen variation. The percentages of MIB-1 positive and oestrogen receptor positive nuclei were estimated in fields of vision sampled systematically from the entire specimen and from the whole tumour area of one "representative" cross-section. RESULTS--No correlation was found between the oestrogen receptor and MIB-1 indices and the duration of formalin fixation. The estimated MIB-1 and oestrogen receptor indices in tissue sampled systematically from the entire tumour were closely correlated with estimates obtained in a "representative" section. The intra- and interobserver correlation of the MIB-1 index was good, although a slight systematic error at the second assessment of the intraobserver study was noted. CONCLUSION--Quantitative estimates of oestrogen receptor content and proliferative activity are not significantly influenced by the period of fixation in formalin, varying from less than four hours to more than 48 hours. The MIB-1 and the oestrogen receptor indices obtained in a "representative" section do not deviate significantly from average indices determined in tissue samples from the entire tumour. Finally, the estimation of MIB-1 index is reproducible, justifying its routine use. PMID:7629289

  6. Comparison of Maximum Likelihood Estimation Approach and Regression Approach in Detecting Quantitative Trait Loci Using RAPD Markers

    Treesearch

    Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine

    1999-01-01

    Single marker regression and single marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine using a ((longleaf pine x slash pine) x slash pine) BC1 population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could...

  7. Toward quantitative forecasts of volcanic ash dispersal: Using satellite retrievals for optimal estimation of source terms

    NASA Astrophysics Data System (ADS)

    Zidikheri, Meelis J.; Lucas, Christopher; Potts, Rodney J.

    2017-08-01

    Airborne volcanic ash is a hazard to aviation. There is an increasing demand for quantitative forecasts of ash properties such as ash mass load to allow airline operators to better manage the risks of flying through airspace likely to be contaminated by ash. In this paper we show how satellite-derived mass load information at times prior to the issuance of the latest forecast can be used to estimate various model parameters that are not easily obtained by other means such as the distribution of mass of the ash column at the volcano. This in turn leads to better forecasts of ash mass load. We demonstrate the efficacy of this approach using several case studies.

  8. Estimation of the patient monitor alarm rate for a quantitative analysis of new alarm settings.

    PubMed

    de Waele, Stijn; Nielsen, Larry; Frassica, Joseph

    2014-01-01

    In many critical care units, default patient monitor alarm settings are not fine-tuned to the vital signs of the patient population. As a consequence there are many alarms. A large fraction of the alarms are not clinically actionable, thus contributing to alarm fatigue. Recent attention to this phenomenon has resulted in attempts in many institutions to decrease the overall alarm load of clinicians by altering the trigger thresholds for monitored parameters. Typically, new alarm settings are defined based on clinical knowledge and patient population norms and tried empirically on new patients without quantitative knowledge about the potential impact of these new settings. We introduce alarm regeneration as a method to estimate the alarm rate of new alarm settings using recorded patient monitor data. This method enables evaluation of several alarm setting scenarios prior to using these settings in the clinical setting. An expression for the alarm rate variance is derived for the calculation of statistical confidence intervals on the results.
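The alarm-regeneration idea in record 8 can be sketched as a replay of recorded waveforms against candidate limit settings. A minimal sketch, assuming a uniformly sampled vital-sign trace and a Poisson approximation for the confidence interval (the paper derives the variance expression more carefully); all function and parameter names here are illustrative, not the authors' implementation:

```python
import math

def regenerate_alarms(samples, low, high, sample_period_s=1.0):
    """Count alarm onsets that hypothetical limits [low, high] would have
    produced on a recorded vital-sign trace, and return the alarm rate per
    hour with an approximate 95% confidence interval (Poisson assumption)."""
    alarms = 0
    in_alarm = False
    for x in samples:
        out = x < low or x > high
        if out and not in_alarm:   # a new alarm triggers on leaving the band
            alarms += 1
        in_alarm = out
    hours = len(samples) * sample_period_s / 3600.0
    rate = alarms / hours
    half_width = 1.96 * math.sqrt(alarms) / hours
    return rate, max(rate - half_width, 0.0), rate + half_width
```

Replaying the same trace under several candidate threshold scenarios lets the expected alarm load be compared before any setting reaches the bedside.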

  9. Method for quantitative estimation of position perception using a joystick during linear movement.

    PubMed

    Wada, Y; Tanaka, M; Mori, S; Chen, Y; Sumigama, S; Naito, H; Maeda, M; Yamamoto, M; Watanabe, S; Kajitani, N

    1996-12-01

    We designed a method for quantitatively estimating self-motion perceptions during passive body movement on a sled. The subjects were instructed to tilt a joystick in proportion to perceived displacement from a given starting position during linear movement with varying displacements of 4 m, 10 m and 16 m induced by constant acceleration of 0.02 g, 0.05 g and 0.08 g along the antero-posterior axis. With this method, we could monitor not only subjective position perceptions but also response latencies for the beginning (RLbgn) and end (RLend) of the linear movement. Perceived body position fitted Stevens' power law, R = kS^n (where R is the output of the joystick, k is a constant, S is the displacement of the linear movement and n is an exponent). RLbgn decreased as linear acceleration increased. We conclude that this method is useful in analyzing the features and sensitivities of self-motion perceptions during movement.
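Because perceived position follows Stevens' power law R = kS^n, the constant and exponent can be recovered by an ordinary least-squares fit in log-log space. A minimal sketch; the joystick readings below are hypothetical illustrations, not the study's measurements:

```python
import math

def fit_power_law(S, R):
    """Fit R = k * S**n by least squares in log-log space.
    S: displacements, R: joystick outputs (all values positive)."""
    xs = [math.log(s) for s in S]
    ys = [math.log(r) for r in R]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    n = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))   # exponent of the power law
    k = math.exp(mean_y - n * mean_x)             # scale constant
    return k, n

# Hypothetical readings for the displacements used in the study (4, 10, 16 m).
k, n = fit_power_law([4.0, 10.0, 16.0], [2.1, 4.6, 6.8])
```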

  10. Accuracy in the estimation of quantitative minimal area from the diversity/area curve.

    PubMed

    Vives, Sergi; Salicrú, Miquel

    2005-05-01

    The problem of representativity is fundamental in ecological studies. A qualitative minimal area that gives a good representation of the species pool [C.M. Bouderesque, Methodes d'etude qualitative et quantitative du benthos (en particulier du phytobenthos), Tethys 3(1) (1971) 79] can be distinguished from a quantitative minimal area which reflects the structural complexity of the community [F.X. Niell, Sobre la biologia de Ascophyllum nodosum (L.) Le Jolis en Galicia, Invest. Pesq. 43 (1979) 501]. This suggests that the populational diversity can be considered as the value of the horizontal asymptote corresponding to the sample diversity/biomass curve [F.X. Niell, Les applications de l'index de Shannon a l'etude de la vegetation interdidale, Soc. Phycol. Fr. Bull. 19 (1974) 238]. In this study we develop an expression to determine minimal areas and use it to obtain certain information about the community structure based on diversity/area curve graphs. This expression is based on the functional relationship between the expected value of the diversity and the sample size used to estimate it. In order to establish the quality of the estimation process, we obtained confidence intervals as a particularization of the (h,phi)-entropies proposed in [M. Salicru, M.L. Menendez, D. Morales, L. Pardo, Asymptotic distribution of (h,phi)-entropies, Commun. Stat. (Theory Methods) 22 (7) (1993) 2015]. As an example to demonstrate the possibilities of this method, and only for illustrative purposes, data from a study on the rocky intertidal seaweed populations in the Ria of Vigo (N.W. Spain) are analyzed [F.X. Niell, Estudios sobre la estructura, dinamica y produccion del Fitobentos intermareal (Facies rocosa) de la Ria de Vigo. Ph.D. Mem. University of Barcelona, Barcelona, 1979].

  11. Quantitative cytochemical measurement of glyceraldehyde 3-phosphate dehydrogenase activity.

    PubMed

    Henderson, B

    1976-08-25

    A system has been developed for the quantitative measurement of glyceraldehyde 3-phosphate dehydrogenase activity in tissue sections. An obstacle to the histochemical study of this enzyme has been the fact that the substrate, glyceraldehyde 3-phosphate, is very unstable. In the present system a stable compound, fructose 1,6-diphosphate, is used as the primary substrate and the demonstration of the glyceraldehyde 3-phosphate dehydrogenase activity depends on the conversion of this compound into the specific substrate by the aldolase present in the tissue. The characteristics of the dehydrogenase activity resulting from the addition of fructose 1,6-diphosphate closely resemble the known properties of purified glyceraldehyde 3-phosphate dehydrogenase. Use of polyvinyl alcohol in the reaction medium prevents release of enzymes from the sections, as occurs in aqueous media. Although in this study intrinsic aldolase activity was found to be adequate for the rapid conversion of fructose 1,6-diphosphate into the specific substrate for the dehydrogenase, the use of exogenous aldolase may be of particular advantage in assessing the integrity of the Embden-Meyerhof pathway.

  12. Quantitative Structure-Activity Relationship Modeling of Kinase Selectivity Profiles.

    PubMed

    Kothiwale, Sandeepkumar; Borza, Corina; Pozzi, Ambra; Meiler, Jens

    2017-09-19

    The discovery of selective inhibitors of biological target proteins is the primary goal of many drug discovery campaigns. However, this goal has proven elusive, especially for inhibitors targeting the well-conserved orthosteric adenosine triphosphate (ATP) binding pocket of kinase enzymes. The human kinome is large, and it is rather difficult to profile early lead compounds against around 500 targets to gain upfront knowledge of selectivity. Further, selectivity can change drastically during derivatization of an initial lead compound. Here, we have introduced a computational model to support the profiling of compounds early in the drug discovery pipeline. On the basis of the extensively profiled activity of 70 kinase inhibitors against 379 kinases, including 81 tyrosine kinases, we developed a quantitative structure-activity relationship (QSAR) model using artificial neural networks to predict the activity of these kinase inhibitors against the panel of 379 kinases. The model's performance, measured as the area under the curve (AUC) of the receiver operating characteristic (ROC), ranges from 0.6 to 0.8 depending on the kinase. The profiler is available online at http://www.meilerlab.org/index.php/servers/show?s_id=23.

  13. Estimation of multipath transmission parameters for quantitative ultrasound measurements of bone.

    PubMed

    Dencks, Stefanie; Schmitz, Georg

    2013-09-01

    When applying quantitative ultrasound (QUS) measurements to bone for predicting osteoporotic fracture risk, multipath transmission of sound waves frequently occurs. Over the last 10 years, interest has grown in separating multipath QUS signals for analysis, leading to the introduction of several approaches. Here, we compare the performances of the two fastest algorithms proposed for QUS measurements of bone: the modified least-squares Prony method (MLSP), and the space alternating generalized expectation maximization algorithm (SAGE) applied in the frequency domain. In both approaches, the parameters of the transfer functions of the sound propagation paths are estimated. To provide an objective measure, we also analytically derive the Cramér-Rao lower bound of variances for any estimator and arbitrary transmit signals. In comparison with results of Monte Carlo simulations, this measure is used to evaluate both approaches regarding their accuracy and precision. Additionally, with simulations using typical QUS measurement settings, we illustrate the limitations of separating two superimposed waves for varying parameters, with a focus on their temporal separation. It is shown that for good SNRs around 100 dB, MLSP yields better results when two waves are very close, and the parameters of the smaller wave are more reliably estimated. If the SNR decreases, the parameter estimation with MLSP becomes biased and inefficient; the noise robustness of SAGE then clearly prevails. Because a clear influence of the interrelation between the wavelength of the ultrasound signals and their temporal separation is observable in the results, these findings can be transferred to QUS measurements at other sites. The choice of the suitable algorithm thus depends on the measurement conditions.

  14. Robust quantitative parameter estimation by advanced CMP measurements for vadose zone hydrological studies

    NASA Astrophysics Data System (ADS)

    Koyama, C.; Wang, H.; Khuut, T.; Kawai, T.; Sato, M.

    2015-12-01

    Soil moisture plays a crucial role in understanding processes in vadose zone hydrology. In the last two decades, ground penetrating radar (GPR) has been widely discussed as a nondestructive measurement technique for soil moisture data. The common mid-point (CMP) technique in particular, which has been used in both seismic and GPR surveys to investigate vertical velocity profiles, has very high potential for quantitative observations from the root zone to the groundwater aquifer. However, its use is still rather limited today, and algorithms for robust quantitative parameter estimation are lacking. In this study we develop an advanced processing scheme for operational soil moisture retrieval at various depths. Using improved signal processing, together with a semblance - non-normalized cross-correlation sum combined stacking approach and the Dix formula, the interval velocities for multiple soil layers are obtained from the RMS velocities, allowing for more accurate estimation of the permittivity at the reflecting point. The presence of a water-saturated layer, such as a groundwater aquifer, can be easily identified by its RMS velocity due to the high contrast compared to the unsaturated zone. By using a new semi-automated measurement technique, the acquisition time for a full CMP gather with 1 cm intervals along a 10 m profile can be reduced significantly to under 2 minutes. The method is tested and validated under laboratory conditions in a sand-pit as well as on agricultural fields and beach sand in the Sendai city area. Comparison between CMP estimates and TDR measurements yields very good agreement with an RMSE of 1.5 Vol.-%. The accuracy of depth estimation is validated with errors smaller than 2%. Finally, we demonstrate application of the method in a test site in semi-arid Mongolia, namely the Orkhon River catchment in Bulgan, using commercial 100 MHz and 500 MHz RAMAC GPR antennas. The results demonstrate the suitability of the proposed method for
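Record 14's use of the Dix formula to convert RMS velocities from CMP stacking into layer interval velocities, and then into permittivity at the reflecting point, can be sketched as follows. A minimal sketch using the low-loss GPR approximation eps_r = (c/v)^2; function names are illustrative:

```python
def dix_interval_velocities(v_rms, t0):
    """Interval velocities from RMS velocities via the Dix equation
    v_int = sqrt((v2^2*t2 - v1^2*t1) / (t2 - t1)).

    v_rms: RMS velocities to the base of each layer (e.g. m/ns for GPR)
    t0:    zero-offset two-way travel times to the base of each layer
    """
    v_int = [v_rms[0]]   # top layer: interval velocity equals RMS velocity
    for i in range(1, len(v_rms)):
        num = v_rms[i] ** 2 * t0[i] - v_rms[i - 1] ** 2 * t0[i - 1]
        v_int.append((num / (t0[i] - t0[i - 1])) ** 0.5)
    return v_int

def relative_permittivity(v):
    """Low-loss approximation: eps_r = (c / v)^2, with v in m/ns."""
    return (0.3 / v) ** 2   # 0.3 m/ns is the speed of light in vacuum
```

A water-saturated layer shows up as a sharp drop in interval velocity (and thus a jump in apparent permittivity) relative to the unsaturated layers above it.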

  15. Fatalities in high altitude mountaineering: a review of quantitative risk estimates.

    PubMed

    Weinbruch, Stephan; Nordby, Karl-Christian

    2013-12-01

    Quantitative estimates for mortality in high altitude mountaineering are reviewed. Special emphasis is placed on the heterogeneity of the risk estimates and on confounding. Crude estimates for mortality are on the order of 1/1000 to 40/1000 persons above base camp, for both expedition members and high altitude porters. High altitude porters have mostly a lower risk than expedition members (risk ratio for all Nepalese peaks requiring an expedition permit: 0.73; 95 % confidence interval 0.59-0.89). The summit bid is generally the most dangerous part of an expedition for members, whereas most high altitude porters die during route preparation. On 8000 m peaks, the mortality during descent from summit varies between 4/1000 and 134/1000 summiteers (members plus porters). The risk estimates are confounded by human and environmental factors. Information on confounding by gender and age is contradictory and requires further work. There are indications for safety segregation of men and women, with women being more risk averse than men. Citizenship appears to be a significant confounder. Prior high altitude mountaineering experience in Nepal has no protective effect. Commercial expeditions in the Nepalese Himalayas have a lower mortality than traditional expeditions, though after controlling for confounding, the difference is not statistically significant. The overall mortality is increasing with increasing peak altitude for expedition members but not for high altitude porters. In the Nepalese Himalayas and in Alaska, a significant decrease of mortality with calendar year was observed. A few suggestions for further work are made at the end of the article.
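Risk ratios with confidence intervals like those quoted above (e.g. 0.73, 95% CI 0.59-0.89) are conventionally computed on the log scale. A minimal sketch of the standard Wald interval, with illustrative counts rather than the review's data:

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A versus group B with a standard Wald 95%
    confidence interval computed on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    return (rr,
            math.exp(math.log(rr) - z * se_log),
            math.exp(math.log(rr) + z * se_log))
```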

  16. Comparison of quantitative structure-activity relationship model performances on carboquinone derivatives.

    PubMed

    Bolboacă, Sorana-Daniela; Jäntschi, Lorentz

    2009-10-14

    Quantitative structure-activity relationship (QSAR) models are used to understand how the structure and activity of chemical compounds relate. In the present study, 37 carboquinone derivatives were evaluated and two different QSAR models were developed using members of the Molecular Descriptors Family (MDF) and the Molecular Descriptors Family on Vertices (MDFV). The usual parameters of regression models and the following estimators were defined and calculated in order to analyze the validity of and to compare the models: Akaike's information criteria (three parameters), the Schwarz (or Bayesian) information criterion, the Amemiya prediction criterion, the Hannan-Quinn criterion, the Kubinyi function, Steiger's Z test, and Akaike's weights. The MDF and MDFV models proved to have the same goodness-of-fit estimation ability according to Steiger's Z test. The MDFV model proved to be the best model for the considered carboquinone derivatives according to the defined information and prediction criteria, the Kubinyi function, and Akaike's weights.
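Two of the model-comparison estimators named in record 16, Akaike's information criterion and the Schwarz (Bayesian) information criterion, can be computed directly from a regression's residual sum of squares. A minimal sketch under a Gaussian likelihood assumption:

```python
import math

def aic_bic(rss, n, k):
    """Akaike and Schwarz (Bayesian) information criteria for a
    least-squares model: rss is the residual sum of squares, n the number
    of observations, k the number of fitted parameters."""
    log_lik = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)  # maximized Gaussian log-likelihood
    aic = 2 * k - 2 * log_lik
    bic = k * math.log(n) - 2 * log_lik
    return aic, bic
```

Both criteria reward a lower residual sum of squares and penalize extra parameters; BIC penalizes model size more heavily once n exceeds about 8 observations.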

  17. Quantitative structure activity relationship studies of mushroom tyrosinase inhibitors

    NASA Astrophysics Data System (ADS)

    Xue, Chao-Bin; Luo, Wan-Chun; Ding, Qi; Liu, Shou-Zhu; Gao, Xing-Xiang

    2008-05-01

    Here, we report our results from quantitative structure-activity relationship studies on tyrosinase inhibitors. Interactions between benzoic acid derivatives and tyrosinase active sites were also studied using a molecular docking method. These studies indicated that one possible mechanism for the interaction between benzoic acid derivatives and the tyrosinase active site is the formation of a hydrogen-bond between the hydroxyl (aOH) and carbonyl oxygen atoms of Tyr98, which stabilized the position of Tyr98 and prevented Tyr98 from participating in the interaction between tyrosinase and ORF378. Tyrosinase, also known as phenoloxidase, is a key enzyme in animals, plants and insects that is responsible for catalyzing the hydroxylation of tyrosine into o-diphenols and the oxidation of o-diphenols into o-quinones. In the present study, the bioactivities of 48 derivatives of benzaldehyde, benzoic acid, and cinnamic acid compounds were used to construct three-dimensional quantitative structure-activity relationship (3D-QSAR) models using comparative molecular field (CoMFA) and comparative molecular similarity indices (CoMSIA) analyses. After superimposition using common substructure-based alignments, robust and predictive 3D-QSAR models were obtained from CoMFA (q² = 0.855, r² = 0.978) and CoMSIA (q² = 0.841, r² = 0.946), with 6 optimum components. Chemical descriptors, including electronic (Hammett σ), hydrophobic (π), and steric (MR) parameters, hydrogen bond acceptor (H-acc), and indicator variable (I), were used to construct a 2D-QSAR model. The results of this QSAR indicated that π, MR, and H-acc account for 34.9, 31.6, and 26.7% of the calculated biological variance, respectively. The molecular interactions between ligand and target were studied using a flexible docking method (FlexX). The best scored candidates were docked flexibly, and the interaction between the benzoic acid derivatives and the tyrosinase active site was elucidated in detail. We believe

  18. Modeling the nucleophilic reactivity of small organochlorine electrophiles: A mechanistically based quantitative structure-activity relationship

    SciTech Connect

    Verhaar, H.J.M.; Seinen, W.; Hermens, J.L.M.; Rorije, E.; Borkent, H.

    1996-06-01

    Environmental pollutants can be divided into four broad categories, narcosis-type chemicals, less inert (polar narcosis) chemicals, reactive chemicals, and specifically acting chemicals. For narcosis-type, or baseline, chemicals and for less inert chemicals, adequate quantitative structure-activity relationships (QSARs) are available for estimation of toxicity to aquatic species. This is not the case for reactive chemicals and specifically acting chemicals. A possible approach to develop aquatic toxicity QSARs for reactive chemicals based on simple considerations regarding their reactivity is given. It is shown that quantum chemical calculations on reaction transition states can be used to quantitatively predict the reactivity of sets of reactive chemicals. These predictions can then be used to develop aquatic toxicity QSARs.

  19. Sensitivity of quantitative groundwater recharge estimates to volumetric and distribution uncertainty in rainfall forcing products

    NASA Astrophysics Data System (ADS)

    Werner, Micha; Westerhoff, Rogier; Moore, Catherine

    2017-04-01

    Quantitative estimates of recharge due to precipitation excess are an important input to determining sustainable abstraction of groundwater resources, as well as providing one of the boundary conditions required for numerical groundwater modelling. Simple water balance models are widely applied for calculating recharge. In these models, precipitation is partitioned between different processes and stores; including surface runoff and infiltration, storage in the unsaturated zone, evaporation, capillary processes, and recharge to groundwater. Clearly the estimation of recharge amounts will depend on the estimation of precipitation volumes, which may vary depending on the source of precipitation data used. However, the partitioning between the different processes is in many cases governed by (variable) intensity thresholds. This means that the estimates of recharge will be sensitive not only to input parameters such as soil type, texture, land use, and potential evaporation, but mainly to the precipitation volume and intensity distribution. In this paper we explore the sensitivity of recharge estimates to differences in precipitation volumes and intensity distribution in the rainfall forcing over the Canterbury region in New Zealand. We compare recharge rates and volumes using a simple water balance model that is forced using rainfall and evaporation data from: the NIWA Virtual Climate Station Network (VCSN) data (which is considered the reference dataset); the ERA-Interim/WATCH dataset at 0.25 degrees and 0.5 degrees resolution; the TRMM-3B42 dataset; the CHIRPS dataset; and the recently released MSWEP dataset. Recharge rates are calculated at a daily time step over the 14-year period from 2000 to 2013 for the full Canterbury region, as well as at eight selected points distributed over the region. Lysimeter data with observed estimates of recharge are available at four of these points, as well as recharge estimates from the NGRM model, an independent model

  20. Quantitative assessment of Mycoplasma hemadsorption activity by flow cytometry.

    PubMed

    García-Morales, Luis; González-González, Luis; Costa, Manuela; Querol, Enrique; Piñol, Jaume

    2014-01-01

    A number of adherent mycoplasmas have developed highly complex polar structures that are involved in diverse aspects of the biology of these microorganisms and play a key role as virulence factors by promoting adhesion to host cells in the first stages of infection. Attachment activity of mycoplasma cells has traditionally been investigated by determining their hemadsorption ability to red blood cells, and it is a distinctive trait widely examined when characterizing the different mycoplasma species. Although protocols to qualitatively determine the hemadsorption or hemagglutination of mycoplasmas are straightforward, current methods for investigating hemadsorption at the quantitative level are expensive and poorly reproducible. Using flow cytometry, we have developed a procedure to quantify rapidly and accurately the hemadsorption activity of mycoplasmas in the presence of SYBR Green I, a vital fluorochrome that stains nucleic acids, allowing erythrocytes and mycoplasma cells to be resolved by their different size and fluorescence. This method is very reproducible and permits kinetic analysis of the obtained data and a precise hemadsorption quantification based on standard binding parameters such as the dissociation constant K_d. The procedure we developed could easily be implemented in a standardized assay to test the hemadsorption activity of the growing number of clinical isolates and mutant strains of different mycoplasma species, providing valuable data about the virulence of these microorganisms.
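Binding parameters such as the dissociation constant K_d are obtained by fitting a one-site saturation model to titration data. The paper does not specify its fitting procedure; the sketch below uses the classical double-reciprocal linearization for illustration (direct nonlinear fitting is statistically preferable in practice):

```python
def fit_binding(concs, bound):
    """Estimate Bmax and Kd of a one-site binding curve
    B = Bmax * c / (Kd + c) via the double-reciprocal linearization
    1/B = 1/Bmax + (Kd/Bmax) * (1/c), fitted by ordinary least squares."""
    xs = [1.0 / c for c in concs]
    ys = [1.0 / b for b in bound]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    bmax = 1.0 / intercept
    return bmax, slope * bmax   # (Bmax, Kd)
```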

  1. Application of quantitative structure-property relationship analysis to estimate the vapor pressure of pesticides.

    PubMed

    Goodarzi, Mohammad; Coelho, Leandro dos Santos; Honarparvar, Bahareh; Ortiz, Erlinda V; Duchowicz, Pablo R

    2016-06-01

    The application of molecular descriptors in describing Quantitative Structure Property Relationships (QSPR) for the estimation of vapor pressure (VP) of pesticides is of ongoing interest. In this study, QSPR models were developed using multiple linear regression (MLR) methods to predict the vapor pressure values of 162 pesticides. Several feature selection methods, namely the replacement method (RM), genetic algorithms (GA), stepwise regression (SR) and forward selection (FS), were used to select the most relevant molecular descriptors from a pool of variables. The optimum subset of molecular descriptors was used to build a QSPR model to estimate the vapor pressures of the selected pesticides. The replacement method gave the best predictive ability for vapor pressure and was the most reliable feature selection approach for these pesticides. The results provided MLR models with a satisfactory predictive ability, which will be important for predicting vapor pressure values for compounds with unknown values. This study may open new opportunities for designing and developing new pesticides. Copyright © 2016 Elsevier Inc. All rights reserved.
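Forward selection, one of the feature selection methods compared in record 1, can be sketched as a greedy loop over candidate descriptors. A minimal, self-contained illustration with ordinary least squares solved via the normal equations; this is a generic sketch, not the authors' implementation:

```python
def ols_rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit with
    intercept, solved via the normal equations and Gaussian elimination."""
    rows = [[1.0] + list(r) for r in X]
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for col in range(p):                      # elimination with partial pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):            # back-substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]
    pred = [sum(bi * xi for bi, xi in zip(beta, r)) for r in rows]
    return sum((yi - pi) ** 2 for yi, pi in zip(y, pred))

def forward_select(X, y, n_keep):
    """Greedy forward selection: at each step, add the descriptor that most
    reduces the residual sum of squares of the multiple linear regression."""
    chosen = []
    while len(chosen) < n_keep:
        candidates = [d for d in range(len(X[0])) if d not in chosen]
        best = min(candidates,
                   key=lambda d: ols_rss([[row[i] for i in chosen + [d]]
                                          for row in X], y))
        chosen.append(best)
    return chosen
```

The replacement method favoured by the authors differs in that it also revisits and swaps out previously chosen descriptors instead of only adding new ones.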

  2. Improved radar data processing algorithms for quantitative rainfall estimation in real time.

    PubMed

    Krämer, S; Verworn, H R

    2009-01-01

    This paper describes a new methodology to process C-band radar data for direct use as rainfall input to hydrologic and hydrodynamic models and in real-time control of urban drainage systems. In contrast to the adjustment of radar data with the help of rain gauges, the new approach accounts for the microphysical properties of current rainfall. In a first step, radar data are corrected for attenuation. This phenomenon has been identified as the main cause of the general underestimation of radar rainfall. Systematic variation of the attenuation coefficients within predefined bounds allows robust reflectivity profiling. Secondly, event-specific R-Z relations are applied to the corrected radar reflectivity data in order to generate quantitatively reliable radar rainfall estimates. The results of the methodology are validated by a network of 37 rain gauges located in the Emscher and Lippe river basins. Finally, the relevance of the correction methodology for radar rainfall forecasts is demonstrated. It has become clear that the new methodology significantly improves radar rainfall estimation and rainfall forecasts. The algorithms are applicable in real time.
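Event-specific R-Z relations like those in record 2 take the power-law form Z = aR^b. A minimal sketch of inverting such a relation to turn corrected reflectivity into rain rate, using the classic Marshall-Palmer coefficients as stand-in defaults (the paper derives event-specific values instead):

```python
import math

def rain_rate(dbz, a=200.0, b=1.6):
    """Invert a Z = a * R**b reflectivity-rain-rate relation.
    dbz: corrected radar reflectivity in dBZ; defaults a=200, b=1.6 are the
    Marshall-Palmer coefficients, used here only as illustrative values."""
    z = 10.0 ** (dbz / 10.0)        # dBZ -> linear reflectivity, mm^6 m^-3
    return (z / a) ** (1.0 / b)     # rain rate, mm/h
```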

  3. Quantitative estimation of transmembrane ion transport in rat renal collecting duct principal cells.

    PubMed

    Ilyaskin, Alexander V; Karpov, Denis I; Medvedev, Dmitriy A; Ershov, Alexander P; Baturina, Galina S; Katkova, Liubov E; Solenov, Evgeniy I

    2014-01-01

    Kidney collecting duct principal cells play a key role in regulated tubular reabsorption of water and sodium and secretion of potassium. The importance of this function for the maintenance of the osmotic homeostasis of the whole organism motivates extensive study of the ion transport properties of collecting duct principal cells. We performed experimental measurements of cell volume and intracellular sodium concentration in rat renal collecting duct principal cells from the outer medulla (OMCD) and used a mathematical model describing transmembrane ion fluxes to analyze the experimental data. The sodium and chloride concentrations ([Na+]in = 37.3 ± 3.3 mM, [Cl-]in = 32.2 ± 4.0 mM) in OMCD cells were quantitatively estimated. Correspondence between the experimentally measured cell physiological characteristics and the values of model permeability parameters was established. Plasma membrane permeabilities and the rates of transmembrane fluxes for sodium, potassium and chloride ions were estimated on the basis of ion substitution experiments and model predictions. In particular, calculated sodium (PNa), potassium (PK) and chloride (PCl) permeabilities were equal to 3.2 × 10⁻⁶ cm/s, 1.0 × 10⁻⁵ cm/s and 3.0 × 10⁻⁶ cm/s, respectively. This approach lays the groundwork for using experimental measurements of intracellular sodium concentration and cell volume to quantify the ion permeabilities of OMCD principal cells, and aids our understanding of the physiology of the adjustment of renal sodium and potassium excretion.

  4. Quantitative estimation of 21st-century urban greenspace changes in Chinese populous cities.

    PubMed

    Chen, Bin; Nie, Zhen; Chen, Ziyue; Xu, Bing

    2017-12-31

    Understanding the spatiotemporal changes of urban greenspace is a critical requirement for supporting urban planning and maintaining the function of cities. Although plenty of previous studies have attempted to estimate urban greenspace changes in China, there still remain shortcomings such as inconsistent surveying procedures and insufficient spatial resolution and city samples. Using cloud-free Landsat image composites in circa years 2000 and 2014, and the Defense Meteorological Satellite Program's Operational Linescan System (DMSP/OLS) nighttime lights dataset, we quantitatively estimated the urban greenspace changes regarding both administrative divisions and urban core boundaries across 98 Chinese populous cities. Results showed that a consistent decline of urban greenspace coverage was identified in both old and new urban areas in the majority of analyzed cities (i.e., 81.63% of cities regarding the administrative boundaries, and 86.73% of cities regarding the urban core boundaries). Partial correlation analysis also revealed that total urban greenspace area shrank as a linear function of the core urban expansion (R² = 0.28, P < 0.001), and a significant correlation was confirmed between population change and urban greenspace change across those Chinese populous cities included in this study (R² = 0.11, P < 0.001). Copyright © 2017. Published by Elsevier B.V.

  5. The quantitative precipitation estimation system for Dallas-Fort Worth (DFW) urban remote sensing network

    NASA Astrophysics Data System (ADS)

    Chen, Haonan; Chandrasekar, V.

    2015-12-01

    The Dallas-Fort Worth (DFW) urban radar network consists of a combination of high resolution X band radars and a standard National Weather Service (NWS) Next-Generation Radar (NEXRAD) system operating at S band frequency. High spatiotemporal-resolution quantitative precipitation estimation (QPE) is one of the important applications of such a network. This paper presents a real-time QPE system developed by the Collaborative Adaptive Sensing of the Atmosphere (CASA) Engineering Research Center for the DFW urban region using both the high resolution X band radar network and the NWS S band radar observations. The specific dual-polarization radar rainfall algorithms at different frequencies (i.e., S- and X-band) and the fusion methodology combining observations at different temporal resolutions are described. Radar and rain gauge observations from four rainfall events in 2013 that are characterized by different meteorological phenomena are used to compare the rainfall estimation products of the CASA DFW QPE system to conventional radar products from the national radar network provided by NWS. This high-resolution QPE system is used for urban flash flood mitigation when coupled with hydrological models.

  6. Lake Number, a quantitative indicator of mixing used to estimate changes in dissolved oxygen

    USGS Publications Warehouse

    Robertson, Dale M.; Imberger, Jorg

    1994-01-01

    Lake Number, LN, values are shown to be quantitative indicators of deep mixing in lakes and reservoirs that can be used to estimate changes in deep water dissolved oxygen (DO) concentrations. LN is a dimensionless parameter defined as the ratio of the moments about the center of volume of the water body, of the stabilizing force of gravity associated with density stratification to the destabilizing forces supplied by wind, cooling, inflow, outflow, and other artificial mixing devices. To demonstrate the universality of this parameter, LN values are used to describe the extent of deep mixing and are compared with changes in DO concentrations in three reservoirs in Australia and four lakes in the U.S.A., which vary in productivity and mixing regimes. A simple model is developed which relates changes in LN values, i.e., the extent of mixing, to changes in near bottom DO concentrations. After calibrating the model for a specific system, it is possible to use real-time LN values, calculated using water temperature profiles and surface wind velocities, to estimate changes in DO concentrations (assuming unchanged trophic conditions).

  7. Indirect enzyme-linked immunosorbent assay for the quantitative estimation of lysergic acid diethylamide in urine.

    PubMed

    Kerrigan, S; Brooks, D E

    1998-05-01

    A new antibody to lysergic acid diethylamide (LSD) was used to develop a novel indirect ELISA for the quantification of drug in urine. Evaluation of the new assay with the commercially available LSD ELISA (STC Diagnostics) shows improved performance. The test requires 50 microL of urine, which is used to measure concentrations of drug in the microg/L to ng/L range. The limit of detection was 8 ng/L compared with 85 ng/L in the commercial assay, and analytical recoveries were 98-106%. Our test detected 0.1 microg/L of LSD in urine with an intraassay CV of 2.4% (n = 8) compared with 6.0% for a 0.5 microg/L sample in the commercial assay (n = 20). The upper and lower limits of quantification were estimated to be 7 microg/L and 50 ng/L, respectively. Specificity was evaluated by measuring the extent of cross-reactivity with 24 related substances. Drug determination using the new assay offers both improved sensitivity and precision compared with existing methods, thus facilitating the preliminary quantitative estimation of LSD in urine at lower concentrations with a greater degree of certainty.
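The calibration arithmetic behind quantitative ELISA readouts like the one above is commonly handled with a four-parameter logistic (4PL) curve. The sketch below shows the forward curve and its analytic inverse (back-calculation of concentration from absorbance); the parameter values are hypothetical, not those of the published LSD assay.

```python
# Four-parameter logistic (4PL) calibration, the de facto standard for
# immunoassays. Parameters are illustrative placeholders.

def fourpl(x, a, d, c, b):
    """4PL response: a = zero-dose asymptote, d = high-dose asymptote,
    c = inflection concentration (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def fourpl_inverse(y, a, d, c, b):
    """Back-calculate concentration from a measured response."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Hypothetical calibration (response in absorbance units, conc in ng/L)
a, d, c, b = 2.0, 0.05, 100.0, 1.2

y = fourpl(250.0, a, d, c, b)        # simulated sample response
x = fourpl_inverse(y, a, d, c, b)    # recovered concentration, ng/L
assert abs(x - 250.0) < 1e-6
```

Quantification limits like the 50 ng/L - 7 µg/L range reported above correspond to the region where this inverse is stable, away from the two asymptotes.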

  8. SHAVE: shrinkage estimator measured for multiple visits increases power in GWAS of quantitative traits

    PubMed Central

    Meirelles, Osorio D; Ding, Jun; Tanaka, Toshiko; Sanna, Serena; Yang, Hsih-Te; Dudekula, Dawood B; Cucca, Francesco; Ferrucci, Luigi; Abecasis, Goncalo; Schlessinger, David

    2013-01-01

    Measurement error and biological variability generate distortions in quantitative phenotypic data. In longitudinal studies with repeated measurements, the multiple measurements provide a route to reduce noise and correspondingly increase the strength of signals in genome-wide association studies (GWAS). To optimize noise correction, we have developed Shrunken Average (SHAVE), an approach using a Bayesian shrinkage estimator. This estimator uses regression toward the mean for every individual as a function of (1) their average across visits; (2) their number of visits; and (3) the correlation between visits. Computer simulations support an increase in power, with results very similar to those expected under the assumptions of the model. The method was applied to a real data set for 14 anthropometric traits in ∼6000 individuals enrolled in the SardiNIA project, with up to three visits (measurements) for each participant. Results show that additional measurements have a large impact on the strength of GWAS signals, especially when participants have different numbers of visits, with SHAVE showing a clear increase in power relative to single visits. In addition, we have derived a relation to assess the improvement in power as a function of the number of visits and the correlation between visits. It can also be applied in the optimization of experimental designs or the usage of measuring devices. SHAVE is fast and easy to run, written in R and freely available online. PMID:23092954
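The regression-toward-the-mean step described above can be sketched in a few lines. This is a generic empirical-Bayes shrinkage using a Spearman-Brown-style reliability weight for an average of n visits; the published SHAVE estimator may differ in detail, and the data are hypothetical.

```python
import numpy as np

def shrunken_average(visit_means, n_visits, r, pop_mean):
    """Shrink each individual's visit average toward the population mean.

    visit_means : per-individual average across visits
    n_visits    : number of visits per individual
    r           : between-visit correlation (stable fraction of variance)
    """
    n = np.asarray(n_visits, dtype=float)
    # reliability of a mean of n visits (Spearman-Brown form, an assumption here)
    reliability = n * r / (1.0 + (n - 1.0) * r)
    return pop_mean + reliability * (np.asarray(visit_means) - pop_mean)

means = np.array([170.0, 180.0, 160.0])   # hypothetical trait averages
n = np.array([1, 3, 3])                   # unequal numbers of visits
shrunk = shrunken_average(means, n, r=0.8, pop_mean=170.0)
# Individuals with more visits keep more of their own signal; the
# single-visit individual is pulled hardest toward the population mean.
assert 170.0 < shrunk[1] < 180.0 and 160.0 < shrunk[2] < 170.0
```

This illustrates why, as the abstract notes, the gain is largest when participants have different numbers of visits: the weight adapts per individual.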

  9. Estimation of transgene copy number in transformed citrus plants by quantitative multiplex real-time PCR.

    PubMed

    Omar, Ahmad A; Dekkers, Marty G H; Graham, James H; Grosser, Jude W

    2008-01-01

    Quantitative real-time PCR (qRT-PCR) was adapted to estimate the transgene copy number of the rice Xa21 gene in transgenic citrus plants. This system used TaqMan qRT-PCR and the endogenous citrus gene encoding for lipid transfer protein (LTP). Transgenic "Hamlin" sweet orange plants were generated using two different protoplast-GFP transformation systems: cotransformation and single plasmid transformation. A dilution series of genomic DNA from one of the transgenic lines was used to generate a standard curve for the endogenous LTP and the transgene Xa21. This standard curve was used for relative quantification of the endogenous gene and the transgene. Copy numbers of the transgene Xa21 detected from qRT-PCR analysis correlated with those from Southern blot analysis (r = 0.834). Thus, qRT-PCR is an efficient means of estimating copy number in transgenic citrus plants. This analysis can be performed at much earlier stages of transgenic plant development than Southern blot analysis, which expedites investigation of transgenes in slow-growing woody plants.
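The standard-curve arithmetic behind this kind of copy-number estimate can be sketched as follows. Ct values and curve parameters below are hypothetical; the factor of two assumes a diploid genome with a single-copy endogenous reference such as LTP.

```python
# Standard-curve relative quantification: a serial dilution gives
# Ct = intercept + slope * log10(quantity); invert both curves and take the
# ratio. A slope near -3.32 corresponds to ~100% PCR efficiency.

def quantity_from_ct(ct, slope, intercept):
    """Invert a standard curve Ct = intercept + slope*log10(quantity)."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical standard-curve fits for the two assays
ltp = dict(slope=-3.32, intercept=21.0)    # endogenous reference (2 copies/genome)
xa21 = dict(slope=-3.35, intercept=20.5)   # transgene

ct_ltp, ct_xa21 = 24.32, 22.18             # hypothetical sample Ct values
q_ref = quantity_from_ct(ct_ltp, **ltp)
q_tg = quantity_from_ct(ct_xa21, **xa21)
copies = round(2 * q_tg / q_ref)           # copies per diploid genome
```

With these made-up numbers the sample would be called a six-copy event; in practice the estimate is validated against Southern blots, as in the study above.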

  10. Quantitative estimation of gymnemagenin in Gymnema sylvestre extract and its marketed formulations using the HPLC-ESI-MS/MS method.

    PubMed

    Kamble, Bhagyashree; Gupta, Ankur; Patil, Dada; Janrao, Shirish; Khatal, Laxman; Duraiswamy, B

    2013-02-01

    Gymnema sylvestre, with gymnemic acids as active pharmacological constituents, is a popular ayurvedic herb and has been used to treat diabetes, as a remedy for cough and as a diuretic. However, very few analytical methods are available for quality control of this herb and its marketed formulations. To develop and validate a new, rapid, sensitive and selective HPLC-ESI (electrospray ionisation)-MS/MS method for quantitative estimation of gymnemagenin in G. sylvestre and its marketed formulations. An HPLC-ESI-MS/MS method using a multiple reaction monitoring mode was used for quantitation of gymnemagenin. Separation was carried out on a Luna C-18 column using gradient elution of water and methanol (with 0.1% formic acid and 0.3% ammonia). The developed method was validated as per the International Conference on Harmonisation guideline ICH-Q2B and found to be accurate, precise and linear over a relatively wide range of concentrations (5.280-305.920 ng/mL). Gymnemagenin contents ranged from 0.056 ± 0.002 to 4.77 ± 0.59% w/w in G. sylvestre and its marketed formulations. The method established is simple and rapid, with high sample throughput, and can be used as a tool for quality control of G. sylvestre and its formulations. Copyright © 2012 John Wiley & Sons, Ltd.

  11. A method for estimating the effective number of loci affecting a quantitative character.

    PubMed

    Slatkin, Montgomery

    2013-11-01

    A likelihood method is introduced that jointly estimates the number of loci and the additive effect of alleles that account for the genetic variance of a normally distributed quantitative character in a randomly mating population. The method assumes that measurements of the character are available from one or both parents and an arbitrary number of full siblings. The method uses the fact, first recognized by Karl Pearson in 1904, that the variance of a character among offspring depends on both the parental phenotypes and the number of loci. Simulations show that the method performs well provided that data from a sufficient number of families (on the order of thousands) are available. The method assumes that the loci are in Hardy-Weinberg and linkage equilibrium but does not assume anything about the linkage relationships. It performs equally well if all loci are on the same non-recombining chromosome, provided they are in linkage equilibrium. The method can be adapted to take account of loci already identified as being associated with the character of interest. In that case, the method estimates the number of loci not already known to affect the character. Applied to measurements of crown-rump length in 281 family trios in a captive colony of African green monkeys (Chlorocebus aethiops sabaeus), the method estimates the number of loci to be 112 and the additive effect to be 0.26 cm. A parametric bootstrap analysis shows that a rough confidence interval has a lower bound of 14 loci. Copyright © 2013 Elsevier Inc. All rights reserved.
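The Pearson (1904) observation the abstract relies on can be demonstrated with a toy simulation: in a purely additive model with the per-allele effect scaled so the population additive variance is fixed, the *mean* within-family segregation variance is the same regardless of the number of loci, but its spread across families shrinks as loci become more numerous, and that spread is what the likelihood exploits. All model choices below (p = q = 0.5, no environmental variance) are simplifying assumptions, not the paper's full model.

```python
import numpy as np

rng = np.random.default_rng(0)

def within_family_variances(n_loci, n_families, vg=1.0, rng=rng):
    """Segregation variance of offspring phenotype for random families."""
    a = np.sqrt(2.0 * vg / n_loci)       # keeps Vg = 2*n*p*q*a^2 constant
    # under Hardy-Weinberg with p = 0.5, each parent is heterozygous at a
    # locus with probability 1/2; count heterozygous parent-loci per family
    het = rng.binomial(2, 0.5, size=(n_families, n_loci)).sum(axis=1)
    # each heterozygous parent contributes a^2/4 of segregation variance
    return (a ** 2 / 4.0) * het

v_few = within_family_variances(n_loci=5, n_families=20_000)
v_many = within_family_variances(n_loci=500, n_families=20_000)
# Mean within-family variance is ~Vg/2 either way...
assert abs(v_few.mean() - 0.5) < 0.02 and abs(v_many.mean() - 0.5) < 0.02
# ...but its variation across families is much larger with few loci,
# which is the signal that makes the number of loci estimable.
assert v_few.std() > 5 * v_many.std()
```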

  12. Improved quantitative visualization of hypervelocity flow through wavefront estimation based on shadow casting of sinusoidal gratings.

    PubMed

    Medhi, Biswajit; Hegde, Gopalakrishna M; Gorthi, Sai Siva; Reddy, Kalidevapura Jagannath; Roy, Debasish; Vasu, Ram Mohan

    2016-08-01

    A simple noninterferometric optical probe is developed to estimate wavefront distortion suffered by a plane wave in its passage through density variations in a hypersonic flow obstructed by a test model in a typical shock tunnel. The probe has a plane light wave trans-illuminating the flow and casting a shadow of a continuous-tone sinusoidal grating. Through a geometrical optics, eikonal approximation to the distorted wavefront, a bilinear approximation to it is related to the location-dependent shift (distortion) suffered by the grating, which can be read out space-continuously from the projected grating image. The processing of the grating shadow is done through an efficient Fourier fringe analysis scheme, either with a windowed or global Fourier transform (WFT and FT). For comparison, wavefront slopes are also estimated from shadows of random-dot patterns, processed through cross correlation. The measured slopes are suitably unwrapped by using a discrete cosine transform (DCT)-based phase unwrapping procedure, and also through iterative procedures. The unwrapped phase information is used in an iterative scheme, for a full quantitative recovery of density distribution in the shock around the model, through refraction tomographic inversion. Hypersonic flow field parameters around a missile-shaped body at a free-stream Mach number of ∼8 measured using this technique are compared with the numerically estimated values. It is shown that, while processing a wavefront with small space-bandwidth product (SBP) the FT inversion gave accurate results with computational efficiency; computation-intensive WFT was needed for similar results when dealing with larger SBP wavefronts.

  13. Quantitative estimate of commercial fish enhancement by seagrass habitat in southern Australia

    NASA Astrophysics Data System (ADS)

    Blandon, Abigayil; zu Ermgassen, Philine S. E.

    2014-03-01

    Seagrass provides many ecosystem services that are of considerable value to humans, including the provision of nursery habitat for commercial fish stock. Yet few studies have sought to quantify these benefits. As seagrass habitat continues to suffer a high rate of loss globally, and with the growing emphasis on compensatory restoration, valuation of the ecosystem services associated with seagrass habitat is increasingly important. We undertook a meta-analysis of juvenile fish abundance at seagrass and control sites to derive a quantitative estimate of the enhancement of juvenile fish by seagrass habitats in southern Australia. Thirteen fish of commercial importance were identified as being recruitment enhanced in seagrass habitat, twelve of which were associated with sufficient life history data to allow for estimation of total biomass enhancement. We applied von Bertalanffy growth models and species-specific mortality rates to the determined values of juvenile enhancement to estimate the contribution of seagrass to commercial fish biomass. The identified species were enhanced in seagrass by 0.98 kg m^-2 y^-1, equivalent to ~$A230,000 ha^-1 y^-1. These values represent the stock enhancement where all fish species are present, as opposed to realized catches. Having accounted for the time lag between fish recruiting to a seagrass site and entering the fishery, and for a 3% annual discount rate, we find that seagrass restoration efforts costing $A10,000 ha^-1 have a potential payback time of less than five years, and that restoration costing $A629,000 ha^-1 can be justified on the basis of enhanced commercial fish recruitment where these twelve fish species are present.
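The valuation machinery described above (von Bertalanffy growth, exponential mortality, a recruitment lag, and a 3% discount rate) can be sketched as follows. All parameter values are hypothetical placeholders except the ~$A230,000 ha⁻¹ y⁻¹ enhancement value and 3% discount rate taken from the abstract.

```python
import math

def cohort_biomass_kg(n0, age_yr, m, w_inf_kg, k, t0):
    """Biomass of n0 recruits at a given age: exponential survival
    N(t) = n0*exp(-M*t) times von Bertalanffy weight
    W(t) = W_inf*(1 - exp(-K*(t - t0)))**3."""
    survivors = n0 * math.exp(-m * age_yr)
    weight = w_inf_kg * (1.0 - math.exp(-k * (age_yr - t0))) ** 3
    return survivors * weight

def payback_years(annual_value, cost, lag_yr, discount=0.03):
    """First year in which discounted cumulative enhancement value covers
    the restoration cost; lag_yr is the delay before recruits enter the
    fishery (no value accrues before it)."""
    total = 0.0
    for year in range(1, 101):
        if year >= lag_yr:
            total += annual_value / (1.0 + discount) ** year
        if total >= cost:
            return year
    return None

# Abstract's enhancement value against the $A629,000/ha restoration cost,
# with a hypothetical 2-year recruitment lag:
payback = payback_years(annual_value=230_000, cost=629_000, lag_yr=2)
assert payback is not None and payback <= 5   # consistent with the abstract
```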

  14. SU-F-I-33: Estimating Radiation Dose in Abdominal Fat Quantitative CT

    SciTech Connect

    Li, X; Yang, K; Liu, B

    2016-06-15

    Purpose: To compare size-specific dose estimate (SSDE) in abdominal fat quantitative CT with another dose estimate, D_size,L, that also takes into account scan length. Methods: This study complied with the requirements of the Health Insurance Portability and Accountability Act. At our institution, abdominal fat CT is performed with scan length = 1 cm and CTDIvol = 4.66 mGy (referenced to the body CTDI phantom). A previously developed CT simulation program was used to simulate single-rotation axial scans of 6-55 cm diameter water cylinders, and the integral of the longitudinal dose profile over the central 1 cm length was used to predict the dose at the center of the one-cm scan range. SSDE and D_size,L were assessed for 182 consecutive abdominal fat CT examinations with a mean water-equivalent diameter (WED) of 27.8 ± 6.0 cm (range, 17.9 - 42.2 cm). Patient age ranged from 18 to 75 years, and weight ranged from 39 to 163 kg. Results: Mean SSDE was 6.37 ± 1.33 mGy (range, 3.67 - 8.95 mGy); mean D_size,L was 2.99 ± 0.85 mGy (range, 1.48 - 4.88 mGy); and the mean D_size,L/SSDE ratio was 0.46 ± 0.04 (range, 0.40 - 0.55). Conclusion: The conversion factors for the size-specific dose estimate in AAPM Report No. 204 were generated using 15 - 30 cm scan lengths. One needs to be cautious in applying SSDE to short-scan-length CT examinations. For abdominal fat CT, SSDE was 80 - 150% higher than the dose of the 1 cm scan length.
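The SSDE arithmetic the abstract refers to is a one-line calculation from AAPM Report No. 204: a size-dependent conversion factor, fit as a decaying exponential of water-equivalent diameter (WED), scales the 32-cm-phantom CTDIvol. The coefficients below are the commonly quoted Report 204 body-phantom fit; verify against the report before any real use.

```python
import math

A204, B204 = 3.704369, 0.03671937   # AAPM 204 exponential fit, 32-cm phantom

def ssde_mgy(ctdi_vol_mgy, wed_cm):
    """Size-specific dose estimate from CTDIvol and water-equivalent diameter."""
    return ctdi_vol_mgy * A204 * math.exp(-B204 * wed_cm)

# The abstract's protocol (CTDIvol = 4.66 mGy) at the cohort mean WED of
# 27.8 cm lands near the reported 6.37 mGy population-mean SSDE:
ssde = ssde_mgy(4.66, 27.8)
assert 6.0 < ssde < 6.5
```

The study's point is that this factor, derived for 15 - 30 cm scan lengths, does not account for the drastically reduced scatter of a 1 cm scan, hence the roughly twofold overestimate relative to D_size,L.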

  15. Improving satellite quantitative precipitation estimates by incorporating deep convective cloud optical depth

    NASA Astrophysics Data System (ADS)

    Stenz, Ronald D.

    As Deep Convective Systems (DCSs) are responsible for most severe weather events, increased understanding of these systems along with more accurate satellite precipitation estimates will improve NWS (National Weather Service) warnings and monitoring of hazardous weather conditions. A DCS can be classified into convective core (CC) regions (heavy rain), stratiform (SR) regions (moderate-light rain), and anvil (AC) regions (no rain). These regions share similar infrared (IR) brightness temperatures (BT), which can create large errors for many existing rain detection algorithms. This study assesses the performance of the National Mosaic and Multi-sensor Quantitative Precipitation Estimation System (NMQ) Q2, and a simplified version of the GOES-R Rainfall Rate algorithm (also known as the Self-Calibrating Multivariate Precipitation Retrieval, or SCaMPR), over the state of Oklahoma (OK) using OK MESONET observations as ground truth. While the average annual Q2 precipitation estimates were about 35% higher than MESONET observations, there were very strong correlations between these two data sets for multiple temporal and spatial scales. Additionally, the Q2 estimated precipitation distributions over the CC, SR, and AC regions of DCSs strongly resembled the MESONET observed ones, indicating that Q2 can accurately capture the precipitation characteristics of DCSs although it has a wet bias. SCaMPR retrievals were typically three to four times higher than the collocated MESONET observations, with relatively weak correlations during a year of comparisons in 2012. Overestimates from SCaMPR retrievals that produced a high false alarm rate were primarily caused by precipitation retrievals from the anvil regions of DCSs when collocated MESONET stations recorded no precipitation. A modified SCaMPR retrieval algorithm, employing both cloud optical depth and IR temperature, has the potential to make significant improvements to reduce the SCaMPR false alarm rate of retrieved

  16. Anticancer Activity of Estradiol Derivatives: A Quantitative Structure--Activity Relationship Approach

    NASA Astrophysics Data System (ADS)

    Muranaka, Ken

    2001-10-01

    Commercial packages to implement modern QSAR (quantitative structure-activity relationship) techniques are highly priced; however, the essence of QSAR can be taught without them. Microsoft Excel was used to analyze published data on anticancer activities of estradiol analogs by a QSAR approach. The resulting QSAR equations highly correlate the structural features and physicochemical properties of the analogs with the observed biological activities by multiple linear regression.
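The multiple linear regression at the heart of such a spreadsheet QSAR exercise is easy to reproduce outside Excel. The sketch below uses NumPy's least-squares solver in place of Excel's LINEST/Regression tool; the descriptor matrix and activities are synthetic placeholders, not the published estradiol data.

```python
import numpy as np

# Synthetic two-descriptor QSAR data set (e.g. a lipophilicity-like and a
# steric-like descriptor vs. a pIC50-like activity) - hypothetical values.
X = np.array([[1.2, 0.3],
              [2.1, 0.5],
              [2.9, 0.4],
              [3.8, 0.9],
              [4.4, 1.1]])
y = np.array([4.1, 5.0, 5.7, 6.9, 7.6])

A = np.column_stack([np.ones(len(X)), X])     # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # [intercept, b1, b2]
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
assert r2 > 0.95   # the synthetic data are nearly linear in X by design
```

The fitted coefficients play the same interpretive role as in the article: their signs and magnitudes link physicochemical descriptors to the modeled activity.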

  17. Quantitative phase imaging technologies to assess neuronal activity (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Thouvenin, Olivier; Fink, Mathias; Boccara, Claude

    2016-03-01

    Active neurons tend to exhibit dynamical behavior different from that of resting ones: for example, vesicular transport toward the synapses increases while axonal growth slows. Previous studies have also reported small phase variations occurring simultaneously with the action potential. Such changes exhibit time scales ranging from milliseconds to several seconds on spatial scales smaller than the optical diffraction limit. Therefore, QPI systems are of particular interest for measuring neuronal activity without labels. Here, we report the development of two new QPI systems that should enable the detection of such activity. Both systems can acquire full-field phase images with sub-nanometer sensitivity at a few hundred frames per second. The first setup is a synchronous combination of Full Field Optical Coherence Tomography (FF-OCT) and wide-field fluorescence imaging; the latter modality enables the measurement of neurons' electrical activity using calcium indicators. In cultures, FF-OCT exhibits features similar to those of Digital Holographic Microscopy (DHM), without the complex computational reconstruction. Moreover, FF-OCT is of particular interest for measuring phase variations in tissues. The second setup is based on a quantitative differential interference contrast arrangement mounted in an epi-illumination configuration with spectrally incoherent illumination. Such a common-path interferometer has very good mechanical stability and thus enables the measurement of phase images over hours. In addition, this setup can measure not only a height change but also an optical index change for each polarization. Hence, one can measure a phase change and a birefringence change simultaneously.

  18. Quantitative coronary angiography using image recovery techniques for background estimation in unsubtracted images

    SciTech Connect

    Wong, Jerry T.; Kamyar, Farzad; Molloi, Sabee

    2007-10-15

    Densitometry measurements have been performed previously using subtracted images. However, digital subtraction angiography (DSA) in coronary angiography is highly susceptible to misregistration artifacts due to the temporal separation of background and target images. Misregistration artifacts due to respiration and patient motion occur frequently, and organ motion is unavoidable. Quantitative densitometric techniques would be more clinically feasible if they could be implemented using unsubtracted images. The goal of this study is to evaluate image recovery techniques for densitometry measurements using unsubtracted images. A humanoid phantom and eight swine (25-35 kg) were used to evaluate the accuracy and precision of the following image recovery techniques: Local averaging (LA), morphological filtering (MF), linear interpolation (LI), and curvature-driven diffusion image inpainting (CDD). Images of iodinated vessel phantoms placed over the heart of the humanoid phantom or swine were acquired. In addition, coronary angiograms were obtained after power injections of a nonionic iodinated contrast solution in an in vivo swine study. Background signals were estimated and removed with LA, MF, LI, and CDD. Iodine masses in the vessel phantoms were quantified and compared to known amounts. Moreover, the total iodine in left anterior descending arteries was measured and compared with DSA measurements. In the humanoid phantom study, the average root mean square errors associated with quantifying iodine mass using LA and MF were approximately 6% and 9%, respectively. The corresponding average root mean square errors associated with quantifying iodine mass using LI and CDD were both approximately 3%. In the in vivo swine study, the root mean square errors associated with quantifying iodine in the vessel phantoms with LA and MF were approximately 5% and 12%, respectively. The corresponding average root mean square errors using LI and CDD were both 3%. 
The standard deviations

  19. Improved activity estimation with MC-JOSEM versus TEW-JOSEM in 111In SPECT.

    PubMed

    Ouyang, Jinsong; El Fakhri, Georges; Moore, Stephen C

    2008-05-01

    We have previously developed a fast Monte Carlo (MC)-based joint ordered-subset expectation maximization (JOSEM) iterative reconstruction algorithm, MC-JOSEM. A phantom study was performed to compare quantitative imaging performance of MC-JOSEM with that of a triple-energy-window approach (TEW) in which estimated scatter was also included additively within JOSEM, TEW-JOSEM. We acquired high-count projections of a 5.5 cm3 sphere of 111In at different locations in the water-filled torso phantom; high-count projections were then obtained with 111In only in the liver or only in the soft-tissue background compartment, so that we could generate synthetic projections for spheres surrounded by various activity distributions. MC scatter estimates used by MC-JOSEM were computed once after five iterations of TEW-JOSEM. Images of different combinations of liver/background and sphere/background activity concentration ratios were reconstructed by both TEW-JOSEM and MC-JOSEM for 40 iterations. For activity estimation in the sphere, MC-JOSEM always produced better relative bias and relative standard deviation than TEW-JOSEM for each sphere location, iteration number, and activity combination. The average relative bias of activity estimates in the sphere for MC-JOSEM after 40 iterations was -6.9%, versus -15.8% for TEW-JOSEM, while the average relative standard deviation of the sphere activity estimates was 16.1% for MC-JOSEM, versus 27.4% for TEW-JOSEM. Additionally, the average relative bias of activity concentration estimates in the liver and the background for MC-JOSEM after 40 iterations was -3.9%, versus -12.2% for TEW-JOSEM, while the average relative standard deviation of these estimates was 2.5% for MC-JOSEM, versus 3.4% for TEW-JOSEM. MC-JOSEM is a promising approach for quantitative activity estimation in 111In SPECT.
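The triple-energy-window (TEW) scatter estimate that the TEW-JOSEM variant feeds into reconstruction is a simple trapezoidal interpolation: counts in two narrow windows flanking the photopeak approximate the scatter inside the photopeak window. The window widths and counts below are hypothetical.

```python
# Standard TEW scatter estimate (Ogawa-style trapezoidal interpolation).

def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Estimated scatter counts inside the photopeak window."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

# Hypothetical counts: 3-keV flanking windows around a wide photopeak window
s = tew_scatter(c_lower=120.0, c_upper=40.0, w_lower=3.0, w_upper=3.0, w_peak=34.0)
primary = 2000.0 - s     # scatter-corrected photopeak counts
assert 0.0 < s < 2000.0
```

The abstract's comparison shows why replacing this window-based estimate with a Monte Carlo scatter model (MC-JOSEM) roughly halves the bias of the recovered ¹¹¹In activity.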

  20. Improving high-resolution quantitative precipitation estimation via fusion of multiple radar-based precipitation products

    NASA Astrophysics Data System (ADS)

    Rafieeinasab, Arezoo; Norouzi, Amir; Seo, Dong-Jun; Nelson, Brian

    2015-12-01

    For monitoring and prediction of water-related hazards in urban areas such as flash flooding, high-resolution hydrologic and hydraulic modeling is necessary. Because of large sensitivity and scale dependence of rainfall-runoff models to errors in quantitative precipitation estimates (QPE), it is very important that the accuracy of QPE be improved in high-resolution hydrologic modeling to the greatest extent possible. With the availability of multiple radar-based precipitation products in many areas, one may now consider fusing them to produce more accurate high-resolution QPE for a wide spectrum of applications. In this work, we formulate and comparatively evaluate four relatively simple procedures for such fusion based on Fisher estimation and its conditional bias-penalized variant: Direct Estimation (DE), Bias Correction (BC), Reduced-Dimension Bias Correction (RBC) and Simple Estimation (SE). They are applied to fuse the Multisensor Precipitation Estimator (MPE) and radar-only Next Generation QPE (Q2) products at the 15-min 1-km resolution (Experiment 1), and the MPE and Collaborative Adaptive Sensing of the Atmosphere (CASA) QPE products at the 15-min 500-m resolution (Experiment 2). The resulting fused estimates are evaluated using the 15-min rain gauge observations from the City of Grand Prairie in the Dallas-Fort Worth Metroplex (DFW) in north Texas. The main criterion used for evaluation is that the fused QPE improves over the ingredient QPEs at their native spatial resolutions, and that, at the higher resolution, the fused QPE improves not only over the ingredient higher-resolution QPE but also over the ingredient lower-resolution QPE trivially disaggregated using the ingredient high-resolution QPE. All four procedures assume that the ingredient QPEs are unbiased, which is not likely to hold true in reality even if real-time bias correction is in operation. To test robustness under more realistic conditions, the fusion procedures were evaluated with and
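The Fisher-estimation idea underlying these fusion procedures can be reduced to an inverse-variance weighted merge of collocated, assumed-unbiased estimates. The sketch below is only that minimal core; the actual DE/BC/RBC/SE procedures add bias handling and a conditional-bias penalty not shown here, and the error variances are hypothetical.

```python
import numpy as np

def fuse(qpe1, var1, qpe2, var2):
    """Inverse-variance weighted merge of two precipitation fields (mm)."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * qpe1 + w2 * qpe2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)       # always below either input variance
    return fused, fused_var

mpe = np.array([4.0, 10.0, 0.5])    # coarser product, assumed error var 1.0 mm^2
casa = np.array([5.0, 8.0, 0.9])    # finer product, assumed error var 0.25 mm^2
fused, fvar = fuse(mpe, 1.0, casa, 0.25)
assert np.allclose(fused, (mpe + 4.0 * casa) / 5.0)   # weights 1:4
assert fvar < 0.25   # fused estimate beats either ingredient alone
```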

  1. Quantitative structure-activity relationships for organophosphates binding to acetylcholinesterase.

    PubMed

    Ruark, Christopher D; Hack, C Eric; Robinson, Peter J; Anderson, Paul E; Gearhart, Jeffery M

    2013-02-01

    Organophosphates are a group of pesticides and chemical warfare nerve agents that inhibit acetylcholinesterase, the enzyme responsible for hydrolysis of the excitatory neurotransmitter acetylcholine. Numerous structural variants exist for this chemical class, and data regarding their toxicity can be difficult to obtain in a timely fashion. At the same time, their use as pesticides and military weapons is widespread, which presents a major concern and challenge in evaluating human toxicity. To address this concern, a quantitative structure-activity relationship (QSAR) was developed to predict pentavalent organophosphate oxon human acetylcholinesterase bimolecular rate constants. A database of 278 three-dimensional structures and their bimolecular rates was developed from 15 peer-reviewed publications. A database of simplified molecular input line entry notations and their respective acetylcholinesterase bimolecular rate constants are listed in Supplementary Material, Table I. The database was quite diverse, spanning 7 log units of activity. In order to describe their structure, 675 molecular descriptors were calculated using AMPAC 8.0 and CODESSA 2.7.10. Orthogonal projection to latent structures regression, bootstrap leave-random-many-out cross-validation and y-randomization were used to develop an externally validated consensus QSAR model. The domain of applicability was assessed by the Williams plot. Six external compounds were outside the warning leverage, indicating potential model extrapolation. A number of compounds had residuals >2 or <-2, indicating potential outliers or activity cliffs. The results show that the HOMO-LUMO energy gap contributed most significantly to the binding affinity. A mean training R^2 of 0.80, a mean test set R^2 of 0.76 and a consensus external test set R^2 of 0.66 were achieved using the QSAR. The training and external test set RMSE values were found to be 0.76 and 0.88. 
The results suggest that this QSAR model can be used in

  2. Quantitative estimation of undiscovered mineral resources - a case study of US Forest Service Wilderness tracts in the Pacific Mountain system.

    USGS Publications Warehouse

    Drew, L.J.

    1986-01-01

    The need by land managers and planners for more quantitative measures of mineral values has prompted scientists at the U.S. Geological Survey to test a probabilistic method of mineral resource assessment on a portion of the wilderness lands that have been studied during the past 20 years. A quantitative estimate of undiscovered mineral resources is made by linking the techniques of subjective estimation, geologic mineral deposit models, and Monte Carlo simulation. The study considers 91 U.S. Forest Service wilderness tracts in California, Nevada, Oregon, and Washington. -from Authors
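The linkage of subjective estimation and Monte Carlo simulation described above can be sketched with a toy tract model: a subjectively elicited distribution for the number of undiscovered deposits is combined with a tonnage model (here a lognormal, purely illustrative) to simulate the total undiscovered resource. All distributions and probabilities are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_tract(n_draws=20_000):
    """Monte Carlo draws of total undiscovered tonnage for one tract."""
    # elicited probabilities of 0, 1, 2, or 3 undiscovered deposits (made up)
    n_deposits = rng.choice([0, 1, 2, 3], size=n_draws, p=[0.5, 0.3, 0.15, 0.05])
    # lognormal tonnage per deposit, median ~1 Mt (made up)
    return np.array([rng.lognormal(0.0, 1.0, size=k).sum() for k in n_deposits])

totals = simulate_tract()
p90, p50, p10 = np.percentile(totals, [10, 50, 90])   # P90 = low-end estimate
assert p90 <= p50 <= p10
assert (totals == 0).mean() > 0.4    # ~half the draws contain no deposit
```

Reporting the result as P90/P50/P10 quantiles, rather than a single number, is what gives land managers the probabilistic mineral-value measure the abstract calls for.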

  3. The ACCE method: an approach for obtaining quantitative or qualitative estimates of residual confounding that includes unmeasured confounding.

    PubMed

    Smith, Eric G

    2014-01-01

    Nonrandomized studies typically cannot account for confounding from unmeasured factors. A method is presented that exploits the recently identified phenomenon of "confounding amplification" to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors. Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure. Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method's requirements and assumptions are met. Previously published data are used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, it appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method's estimates (although bootstrapping is one plausible approach). To this author's knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is straightforward. The method's routine usefulness, however, has
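The central division step of the method above reduces to simple arithmetic. The sketch below uses hypothetical numbers and omits the adjustment for any outcome association of the introduced variable(s), which the full method requires.

```python
# ACCE-style arithmetic: the shift in the treatment-effect estimate between
# the base and amplified propensity models, divided by the extra amplification,
# backs out the residual confounding present in the base model.

def residual_confounding(effect_base, effect_amplified, amplification):
    """Estimated residual confounding in the base model.

    amplification : estimated factor by which residual confounding is
                    amplified in the second (nested) propensity model.
    """
    return (effect_amplified - effect_base) / (amplification - 1.0)

# Hypothetical example: the effect estimate moves from 1.30 to 1.45 when a
# strong treatment predictor is added, with amplification estimated at 1.5.
bias = residual_confounding(1.30, 1.45, 1.5)
corrected = 1.30 - bias   # base estimate with residual confounding removed
```

With these made-up inputs, the inferred residual confounding is 0.30 and the corrected effect estimate is 1.00, i.e., the apparent effect would be attributed entirely to confounding.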

  4. Quantitative estimation of the viability of Toxoplasma gondii oocysts in soil.

    PubMed

    Lélu, Maud; Villena, Isabelle; Dardé, Marie-Laure; Aubert, Dominique; Geers, Régine; Dupuis, Emilie; Marnef, Francine; Poulle, Marie-Lazarine; Gotteland, Cécile; Dumètre, Aurélien; Gilot-Fromont, Emmanuelle

    2012-08-01

    Toxoplasma gondii oocysts spread in the environment are an important source of toxoplasmosis for humans and animal species. Although the life expectancy of oocysts has been studied through the infectivity of inoculated soil samples, the survival dynamics of oocysts in the environment are poorly documented. The aim of this study was to quantify oocyst viability in soil over time under two rain conditions. Oocysts were placed in 54 sentinel chambers containing soil and 18 sealed water tubes, all settled in two containers filled with soil. Containers were watered to simulate rain levels of arid and wet climates and kept at stable temperature for 21.5 months. At nine sampling dates during this period, we sampled six chambers and two water tubes. Three methods were used to measure oocyst viability: microscopic counting, quantitative PCR (qPCR), and mouse inoculation. In parallel, oocysts were kept refrigerated during the same period to analyze their detectability over time. Microscopic counting, qPCR, and mouse inoculation all showed decreasing values over time and highly significant differences between the decreases under dry and damp conditions. The proportion of oocysts surviving after 100 days was estimated to be 7.4% (95% confidence interval [95% CI] = 5.1, 10.8) under dry conditions and 43.7% (95% CI = 35.6, 53.5) under damp conditions. The detectability of oocysts by qPCR over time decreased by 0.5 cycle threshold per 100 days. Finally, a strong correlation between qPCR results and the dose infecting 50% of mice was found; thus, qPCR results may be used as an estimate of the infectivity of soil samples.
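The survival fractions reported above imply decay rates under a simple first-order assumption; the exponential-loss model itself is an assumption for this back-of-envelope sketch, not a claim of the study.

```python
import math

def decay_constant(surviving_fraction, days):
    """Per-day first-order decay rate implied by a survival fraction."""
    return -math.log(surviving_fraction) / days

def half_life_days(k):
    """Half-life of a first-order decay process."""
    return math.log(2.0) / k

k_dry = decay_constant(0.074, 100)    # 7.4% survival at day 100, dry conditions
k_damp = decay_constant(0.437, 100)   # 43.7% survival at day 100, damp conditions
hl_dry = half_life_days(k_dry)        # ~27 days
hl_damp = half_life_days(k_damp)      # ~84 days
assert hl_damp > 3 * hl_dry           # moisture roughly triples persistence
```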

  5. Quantitative Estimation of the Viability of Toxoplasma gondii Oocysts in Soil

    PubMed Central

    Villena, Isabelle; Dardé, Marie-Laure; Aubert, Dominique; Geers, Régine; Dupuis, Emilie; Marnef, Francine; Poulle, Marie-Lazarine; Gotteland, Cécile; Dumètre, Aurélien

    2012-01-01

    Toxoplasma gondii oocysts spread in the environment are an important source of toxoplasmosis for humans and animal species. Although the life expectancy of oocysts has been studied through the infectivity of inoculated soil samples, the survival dynamics of oocysts in the environment are poorly documented. The aim of this study was to quantify oocyst viability in soil over time under two rain conditions. Oocysts were placed in 54 sentinel chambers containing soil and 18 sealed water tubes, all settled in two containers filled with soil. Containers were watered to simulate rain levels of arid and wet climates and kept at stable temperature for 21.5 months. At nine sampling dates during this period, we sampled six chambers and two water tubes. Three methods were used to measure oocyst viability: microscopic counting, quantitative PCR (qPCR), and mouse inoculation. In parallel, oocysts were kept refrigerated during the same period to analyze their detectability over time. Microscopic counting, qPCR, and mouse inoculation all showed decreasing values over time and highly significant differences between the decreases under dry and damp conditions. The proportion of oocysts surviving after 100 days was estimated to be 7.4% (95% confidence interval [95% CI] = 5.1, 10.8) under dry conditions and 43.7% (95% CI = 35.6, 53.5) under damp conditions. The detectability of oocysts by qPCR over time decreased by 0.5 cycle threshold per 100 days. Finally, a strong correlation between qPCR results and the dose infecting 50% of mice was found; thus, qPCR results may be used as an estimate of the infectivity of soil samples. PMID:22582074

  6. Improving Satellite Quantitative Precipitation Estimation Using GOES-Retrieved Cloud Optical Depth

    SciTech Connect

    Stenz, Ronald; Dong, Xiquan; Xi, Baike; Feng, Zhe; Kuligowski, Robert J.

    2016-02-01

    To address significant gaps in ground-based radar coverage and rain gauge networks in the U.S., geostationary satellite quantitative precipitation estimates (QPEs) such as the Self-Calibrating Multivariate Precipitation Retrievals (SCaMPR) can be used to fill in both the spatial and temporal gaps of ground-based measurements. Additionally, with the launch of GOES-R, the temporal resolution of satellite QPEs may be comparable to that of Weather Service Radar-1988 Doppler (WSR-88D) volume scans as GOES images will be available every five minutes. However, while satellite QPEs have strengths in spatial coverage and temporal resolution, they face limitations particularly during convective events. Deep Convective Systems (DCSs) have large cloud shields with similar brightness temperatures (BTs) over nearly the entire system, but widely varying precipitation rates beneath these clouds. Geostationary satellite QPEs relying on the indirect relationship between BTs and precipitation rates often suffer from large errors because anvil regions (little/no precipitation) cannot be distinguished from rain-cores (heavy precipitation) using only BTs. However, a combination of BTs and optical depth (τ) has been found to reduce overestimates of precipitation in anvil regions (Stenz et al. 2014). A new rain mask algorithm incorporating both τ and BTs has been developed, and its application to the existing SCaMPR algorithm was evaluated. The performance of the modified SCaMPR was evaluated using traditional skill scores and a more detailed analysis of performance in individual DCS components by utilizing the Feng et al. (2012) classification algorithm. SCaMPR estimates with the new rain mask applied benefited from significantly reduced overestimates of precipitation in anvil regions and overall improvements in skill scores.
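
The BT-plus-optical-depth rain mask can be pictured as a simple per-pixel classifier: a pixel is flagged as raining only when the cloud top is cold AND the cloud is optically thick, which screens out anvil cloud. The thresholds below are illustrative placeholders, not the values used by Stenz et al.:

```python
def rain_mask(bt_k, tau, bt_thresh=220.0, tau_thresh=20.0):
    """Flag a pixel as raining: cold cloud top (low BT) AND optically thick (high tau).
    Threshold values are illustrative, not those of the SCaMPR rain mask."""
    return bt_k < bt_thresh and tau > tau_thresh

# Anvil pixel: cold but optically thin -> masked out (no rain)
print(rain_mask(210.0, 5.0))   # False
# Convective core: cold and optically thick -> rain
print(rain_mask(210.0, 60.0))  # True
```

A BT-only scheme would assign rain to both pixels above, which is precisely the anvil overestimation problem the combined mask addresses.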

  7. Estimating the Quantitative Demand of NOAC Antidote Doses on Stroke Units.

    PubMed

    Pfeilschifter, Waltraud; Farahmand, Dana; Niemann, Daniela; Ikenberg, Benno; Hohmann, Carina; Abruscato, Mario; Thonke, Sven; Strzelczyk, Adam; Hedtmann, Günther; Neumann-Haefelin, Tobias; Kollmar, Rainer; Singer, Oliver C; Ferbert, Andreas; Steiner, Thorsten; Steinmetz, Helmuth; Reihs, Anke; Misselwitz, Björn; Foerch, Christian

    2016-01-01

    The first specific antidote for non-vitamin K antagonist oral anticoagulants (NOAC) has recently been approved. NOAC antidotes will allow specific treatment for 2 hitherto problematic patient groups: patients with oral anticoagulant therapy (OAT)-associated intracerebral hemorrhage (ICH) and possibly also thrombolysis candidates presenting on OAT. We aimed to estimate the frequency of these events and hence the quantitative demand for antidote doses on a stroke unit. We extracted data on patients with acute ischemic stroke and ICH (<24 h after symptom onset) in the years 2012-2015 from a state-wide prospective stroke inpatient registry. We selected 8 stroke units and determined the mode of OAT upon admission in 2012-2013. In 2015, the mode of OAT became a mandatory item of the inpatient registry. From the number of anticoagulated patients and the NOAC share, we estimated the current and future demand for NOAC antidote doses on stroke units. Eighteen percent of ICH patients presenting within 6 h of symptom onset or with an unknown symptom onset were on OAT. Given a NOAC share at admission of 40%, about 7% of all ICH patients may qualify for NOAC reversal therapy. Thirteen percent of ischemic stroke patients admitted within 4 h presented on anticoagulation. Given the availability of an appropriate antidote, a NOAC share of 50% could lead to a 6.1% increase in thrombolysis rate. Stroke units serving populations with a comparable demographic structure should prepare to treat up to 1% of all acute ischemic stroke patients and 7% of all acute ICH patients with NOAC antidotes. These numbers may increase with the mounting prevalence of atrial fibrillation and an increasing use of NOAC. © 2016 S. Karger AG, Basel.
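
The headline ICH figure follows from a simple product: the fraction of patients on any oral anticoagulation times the NOAC share of that anticoagulation. A minimal sketch of the arithmetic:

```python
def antidote_candidates(on_oat_fraction, noac_share):
    """Fraction of patients qualifying for NOAC reversal therapy:
    fraction on oral anticoagulation times the NOAC share of OAT."""
    return on_oat_fraction * noac_share

# 18% of ICH patients on OAT, 40% of OAT being NOAC -> about 7% (as reported)
ich = antidote_candidates(0.18, 0.40)
print(f"{ich:.1%} of ICH patients")  # prints 7.2% of ICH patients
```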

  8. Logarithmic quantitation model using serum ferritin to estimate iron overload in secondary haemochromatosis.

    PubMed

    Güngör, T; Rohrbach, E; Solem, E; Kaltwasser, J P; Kornhuber, B

    1996-04-01

    Nineteen children and adolescents receiving repeated transfusions and subcutaneous desferrioxamine treatment were investigated in an attempt to quantitate iron overload non-invasively. Before patients were started on desferrioxamine, individual relationships between transfused iron, estimated gastrointestinally absorbed iron, and increasing serum ferritin concentrations were correlated over 12 to 36 months. Patients with inflammation, increased liver enzymes, or haemolysis were excluded from analysis. For each individual patient, the relationship between the variables could be described by a logarithmic regression curve, y = a + b log(x), where y is the iron overload (transfused iron plus, where applicable, gastrointestinally absorbed iron) and x is the serum ferritin concentration. All patients showed a close correlation between x and y (median R2 of 0.909, 0.98, and 0.92 in thalassaemia, aplastic anaemia, and sickle cell anaemia patients, respectively). When patients were started on desferrioxamine, current serum ferritin concentrations were used to derive the iron overload from each individual regression curve. The derived estimated iron overload ranged from 0.6 g to 31 g. Left ventricular dilatation was observed in three patients with beta thalassaemia and in one patient with aplastic anaemia, with median iron overload of 20.7 (14.1-31.3) g and 24.0 g respectively. Hypothyroidism was found in four patients with beta thalassaemia and one patient with aplastic anaemia, with iron overload of 14.7 (6.8-26.1) g and 15.1 g respectively. Human growth hormone deficiency was detected in three patients with beta thalassaemia with an iron overload of 4.2 (3.5-6.8) g; all three patients had excellent desferrioxamine compliance.
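
A per-patient logarithmic curve of the form iron overload = a + b·log(serum ferritin) can be fitted by ordinary least squares after log-transforming ferritin. The sketch below is an illustrative reimplementation on synthetic data, not the authors' code:

```python
import math

def fit_log_model(ferritin, iron_overload):
    """Least-squares fit of iron_overload = a + b * log10(ferritin).
    Illustrative: the paper fits one such curve per patient."""
    xs = [math.log10(f) for f in ferritin]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(iron_overload) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, iron_overload)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Synthetic patient whose overload follows -2 + 4 * log10(ferritin) exactly
ferritin = [200, 500, 1000, 2000, 4000]
overload = [-2 + 4 * math.log10(f) for f in ferritin]
a, b = fit_log_model(ferritin, overload)
print(round(a, 3), round(b, 3))  # prints -2.0 4.0
```

Given a fitted (a, b) for a patient, a later ferritin measurement x yields the estimated overload directly as a + b·log10(x), which mirrors how the study read iron overload off each individual regression curve.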

  9. Revised activation estimates for silicon carbide

    SciTech Connect

    Heinisch, H.L.; Cheng, E.T.; Mann, F.M.

    1996-10-01

    Recent progress in nuclear data development for fusion energy systems includes a reevaluation of neutron activation cross sections for silicon and aluminum. Activation calculations using the newly compiled Fusion Evaluated Nuclear Data Library result in calculated levels of ²⁶Al in irradiated silicon that are about an order of magnitude lower than the earlier calculated values. Thus, according to the latest internationally accepted nuclear data, SiC is much more attractive as a low activation material, even in first wall applications.

  10. Quantitative estimation of localization errors of 3d transition metal pseudopotentials in diffusion Monte Carlo

    DOE PAGES

    Dzubak, Allison L.; Krogel, Jaron T.; Reboredo, Fernando A.

    2017-07-10

    The necessarily approximate evaluation of non-local pseudopotentials in diffusion Monte Carlo (DMC) introduces localization errors. In this paper, we estimate these errors for two families of non-local pseudopotentials for the first-row transition metal atoms Sc–Zn using an extrapolation scheme and multideterminant wavefunctions. Sensitivities of the error in the DMC energies to the Jastrow factor are used to estimate the quality of two sets of pseudopotentials with respect to locality error reduction. The locality approximation and T-moves scheme are also compared for accuracy of total energies. After estimating the removal of the locality and T-moves errors, we present the range of fixed-node energies between a single determinant description and a full valence multideterminant complete active space expansion. The results for these pseudopotentials agree with previous findings that the locality approximation is less sensitive to changes in the Jastrow than T-moves, yielding more accurate total energies, though not necessarily more accurate energy differences. For both the locality approximation and T-moves, we find decreasing Jastrow sensitivity moving left to right across the series Sc–Zn. The recently generated pseudopotentials of Krogel et al. reduce the magnitude of the locality error compared with the pseudopotentials of Burkatzki et al. by an average estimated 40% using the locality approximation. The estimated locality error is equivalent for both sets of pseudopotentials when T-moves is used. Finally, for the Sc–Zn atomic series with these pseudopotentials, and using up to three-body Jastrow factors, our results suggest that the fixed-node error is dominant over the locality error when a single determinant is used.

  11. Quantitative estimation of localization errors of 3d transition metal pseudopotentials in diffusion Monte Carlo

    NASA Astrophysics Data System (ADS)

    Dzubak, Allison L.; Krogel, Jaron T.; Reboredo, Fernando A.

    2017-07-01

    The necessarily approximate evaluation of non-local pseudopotentials in diffusion Monte Carlo (DMC) introduces localization errors. We estimate these errors for two families of non-local pseudopotentials for the first-row transition metal atoms Sc-Zn using an extrapolation scheme and multideterminant wavefunctions. Sensitivities of the error in the DMC energies to the Jastrow factor are used to estimate the quality of two sets of pseudopotentials with respect to locality error reduction. The locality approximation and T-moves scheme are also compared for accuracy of total energies. After estimating the removal of the locality and T-moves errors, we present the range of fixed-node energies between a single determinant description and a full valence multideterminant complete active space expansion. The results for these pseudopotentials agree with previous findings that the locality approximation is less sensitive to changes in the Jastrow than T-moves yielding more accurate total energies, however not necessarily more accurate energy differences. For both the locality approximation and T-moves, we find decreasing Jastrow sensitivity moving left to right across the series Sc-Zn. The recently generated pseudopotentials of Krogel et al. [Phys. Rev. B 93, 075143 (2016)] reduce the magnitude of the locality error compared with the pseudopotentials of Burkatzki et al. [J. Chem. Phys. 129, 164115 (2008)] by an average estimated 40% using the locality approximation. The estimated locality error is equivalent for both sets of pseudopotentials when T-moves is used. For the Sc-Zn atomic series with these pseudopotentials, and using up to three-body Jastrow factors, our results suggest that the fixed-node error is dominant over the locality error when a single determinant is used.

  12. Quantitative Estimation of the Climatic Effects of Carbon Transferred by International Trade.

    PubMed

    Wei, Ting; Dong, Wenjie; Moore, John; Yan, Qing; Song, Yi; Yang, Zhiyong; Yuan, Wenping; Chou, Jieming; Cui, Xuefeng; Yan, Xiaodong; Wei, Zhigang; Guo, Yan; Yang, Shili; Tian, Di; Lin, Pengfei; Yang, Song; Wen, Zhiping; Lin, Hui; Chen, Min; Feng, Guolin; Jiang, Yundi; Zhu, Xian; Chen, Juan; Wei, Xin; Shi, Wen; Zhang, Zhiguo; Dong, Juan; Li, Yexin; Chen, Deliang

    2016-06-22

    Carbon transfer via international trade affects the spatial pattern of global carbon emissions by redistributing emissions related to production of goods and services. It has potential impacts on attribution of the responsibility of various countries for climate change and formulation of carbon-reduction policies. However, the effect of carbon transfer on climate change has not been quantified. Here, we present a quantitative estimate of climatic impacts of carbon transfer based on a simple CO2 Impulse Response Function and three Earth System Models. The results suggest that carbon transfer leads to a migration of CO2 by 0.1-3.9 ppm or 3-9% of the rise in the global atmospheric concentrations from developed countries to developing countries during 1990-2005 and potentially reduces the effectiveness of the Kyoto Protocol by up to 5.3%. However, the induced atmospheric CO2 concentration and climate changes (e.g., in temperature, ocean heat content, and sea-ice) are very small and lie within observed interannual variability. Given continuous growth of transferred carbon emissions and their proportion in global total carbon emissions, the climatic effect of traded carbon is likely to become more significant in the future, highlighting the need to consider carbon transfer in future climate negotiations.
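
The impulse-response approach can be sketched compactly: an IRF gives the airborne fraction of a CO2 pulse t years after emission, and the concentration anomaly from an emission history is the history convolved with the IRF. The coefficients below are illustrative placeholders, not those of the study's IRF or Earth System Models:

```python
import math

def irf(t, a0=0.22, terms=((0.28, 330.0), (0.28, 37.0), (0.22, 4.1))):
    """Airborne fraction of a CO2 pulse after t years.
    Sum-of-exponentials form with illustrative coefficients (a0 + sum of ai = 1)."""
    return a0 + sum(a * math.exp(-t / tau) for a, tau in terms)

def concentration_anomaly(emissions_ppm, year):
    """Convolve yearly emissions (pre-converted to ppm-equivalents) with the IRF
    to get the atmospheric concentration anomaly at `year`."""
    return sum(e * irf(year - y) for y, e in enumerate(emissions_ppm))

# 1 ppm-equivalent emitted each year for 10 years; anomaly at year 10
print(round(concentration_anomaly([1.0] * 10, 10), 2))
```

Transferred emissions can be run through the same convolution with opposite signs for the exporting and importing regions, which is how a migration of CO2 burden between country groups translates into a (small) concentration difference.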

  13. Simultaneous estimation of multiple quantitative trait loci and growth curve parameters through hierarchical Bayesian modeling

    PubMed Central

    Sillanpää, M J; Pikkuhookana, P; Abrahamsson, S; Knürr, T; Fries, A; Lerceteau, E; Waldmann, P; García-Gil, M R

    2012-01-01

    A novel hierarchical quantitative trait locus (QTL) mapping method using a polynomial growth function and a multiple-QTL model (with no dependence in time) in a multitrait framework is presented. The method considers a population-based sample where individuals have been phenotyped (over time) with respect to some dynamic trait and genotyped at a given set of loci. A specific feature of the proposed approach is that, instead of an average functional curve, each individual has its own functional curve. Moreover, each QTL can modify the dynamic characteristics of the trait value of an individual through its influence on one or more growth curve parameters. Apparent advantages of the approach include: (1) assumption of time-independent QTL and environmental effects, (2) alleviating the necessity for an autoregressive covariance structure for residuals and (3) the flexibility to use variable selection methods. As a by-product of the method, heritabilities and genetic correlations can also be estimated for individual growth curve parameters, which are considered as latent traits. For selecting trait-associated loci in the model, we use a modified version of the well-known Bayesian adaptive shrinkage technique. We illustrate our approach by analysing a subsample of 500 individuals from the simulated QTLMAS 2009 data set, as well as simulation replicates and a real Scots pine (Pinus sylvestris) data set, using temporal measurements of height as dynamic trait of interest. PMID:21792229

  14. Estimating background-subtracted fluorescence transients in calcium imaging experiments: a quantitative approach.

    PubMed

    Joucla, Sébastien; Franconville, Romain; Pippow, Andreas; Kloppenburg, Peter; Pouzat, Christophe

    2013-08-01

    Calcium imaging has become a routine technique in neuroscience for subcellular to network level investigations. The fast progress in the development of new indicators and imaging techniques calls for dedicated, reliable analysis methods. In particular, efficient and quantitative background fluorescence subtraction routines would benefit most of the calcium imaging research field. A method for estimating background-subtracted fluorescence transients that does not require any independent background measurement is therefore developed. This method is based on a fluorescence model fitted to single-trial data using a classical nonlinear regression approach. The model includes an appropriate probabilistic description of the acquisition system's noise, leading to accurate confidence intervals on all quantities of interest (background fluorescence, normalized background-subtracted fluorescence time course) when background fluorescence is homogeneous. An automatic procedure detecting background inhomogeneities inside the region of interest is also developed and is shown to be efficient on simulated data. The implementation and performance of the proposed method on experimental recordings from the mouse hypothalamus are presented in detail. This method, which applies to both single-cell and bulk-stained tissue recordings, should help improve the statistical comparison of fluorescence calcium signals between experiments and studies. Copyright © 2013 Elsevier Ltd. All rights reserved.
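
The core idea, fitting a parametric fluorescence model to a single trial by regression, can be sketched with a toy mono-exponential transient. This simplified stand-in fits F(t) = B + A·exp(-t/τ) by scanning τ over a grid and solving for (B, A) linearly at each candidate; the names, data, and Gaussian noise are assumptions, and the paper's actual model includes a full probabilistic description of camera noise:

```python
import math
import random

def fit_transient(times, fluo, tau_grid):
    """Fit F(t) = B + A * exp(-t / tau): scan tau, solve (B, A) by
    linear least squares at each tau, keep the lowest-SSE triple."""
    best = None
    for tau in tau_grid:
        basis = [math.exp(-t / tau) for t in times]
        n = len(times)
        mb, mf = sum(basis) / n, sum(fluo) / n
        A = sum((p - mb) * (f - mf) for p, f in zip(basis, fluo)) \
            / sum((p - mb) ** 2 for p in basis)
        B = mf - A * mb
        sse = sum((B + A * p - f) ** 2 for p, f in zip(basis, fluo))
        if best is None or sse < best[0]:
            best = (sse, B, A, tau)
    return best[1], best[2], best[3]  # B (background), A (amplitude), tau

random.seed(0)
ts = [0.1 * i for i in range(100)]
# Simulated trial: background 10, amplitude 5, decay constant 2.0, small noise
data = [10 + 5 * math.exp(-t / 2.0) + random.gauss(0, 0.05) for t in ts]
B, A, tau = fit_transient(ts, data, [0.5 * k for k in range(1, 21)])
print(round(B, 1), round(A, 1), tau)
```

Here B plays the role of the background fluorescence estimated jointly with the transient, so no separate background measurement is needed, which is the point the abstract makes.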

  15. Quantitative Estimation of the Climatic Effects of Carbon Transferred by International Trade

    PubMed Central

    Wei, Ting; Dong, Wenjie; Moore, John; Yan, Qing; Song, Yi; Yang, Zhiyong; Yuan, Wenping; Chou, Jieming; Cui, Xuefeng; Yan, Xiaodong; Wei, Zhigang; Guo, Yan; Yang, Shili; Tian, Di; Lin, Pengfei; Yang, Song; Wen, Zhiping; Lin, Hui; Chen, Min; Feng, Guolin; Jiang, Yundi; Zhu, Xian; Chen, Juan; Wei, Xin; Shi, Wen; Zhang, Zhiguo; Dong, Juan; Li, Yexin; Chen, Deliang

    2016-01-01

    Carbon transfer via international trade affects the spatial pattern of global carbon emissions by redistributing emissions related to production of goods and services. It has potential impacts on attribution of the responsibility of various countries for climate change and formulation of carbon-reduction policies. However, the effect of carbon transfer on climate change has not been quantified. Here, we present a quantitative estimate of climatic impacts of carbon transfer based on a simple CO2 Impulse Response Function and three Earth System Models. The results suggest that carbon transfer leads to a migration of CO2 by 0.1–3.9 ppm or 3–9% of the rise in the global atmospheric concentrations from developed countries to developing countries during 1990–2005 and potentially reduces the effectiveness of the Kyoto Protocol by up to 5.3%. However, the induced atmospheric CO2 concentration and climate changes (e.g., in temperature, ocean heat content, and sea-ice) are very small and lie within observed interannual variability. Given continuous growth of transferred carbon emissions and their proportion in global total carbon emissions, the climatic effect of traded carbon is likely to become more significant in the future, highlighting the need to consider carbon transfer in future climate negotiations. PMID:27329411

  16. The estimation of quantitative parameters of oligonucleotides immobilization on mica surface

    NASA Astrophysics Data System (ADS)

    Sharipov, T. I.; Bakhtizin, R. Z.

    2017-05-01

    Immobilization of nucleic acids on the surfaces of various materials is increasingly used in research and in some practical applications, and DNA chip technology is developing rapidly. The immobilization process can rely on either physical adsorption or chemisorption. Atomic force microscopy is a useful way to monitor the immobilization of nucleic acids on a surface, as it allows the surface topography to be investigated by direct, high-resolution imaging. Usually, DNA is fixed on mica using cations that mediate the interaction between the mica surface and the DNA molecules. In our work, we developed a method for estimating a quantitative parameter of oligonucleotide immobilization, namely the degree of aggregation as a function of the fixation conditions on the mica surface. Results on the aggregation of oligonucleotides immobilized on mica are presented: single oligonucleotide molecules were imaged clearly, their surface areas were calculated, and a calibration curve was plotted.

  17. Quantitative Simulations of MST Visual Receptive Field Properties Using a Template Model of Heading Estimation

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, J. A.

    1997-01-01

    We previously developed a template model of primate visual self-motion processing that proposes a specific set of projections from MT-like local motion sensors onto output units to estimate heading and relative depth from optic flow. At the time, we showed that the model output units have emergent properties similar to those of MSTd neurons, although there was little physiological evidence to test the model more directly. We have now systematically examined the properties of the model using stimulus paradigms used by others in recent single-unit studies of MST: 1) 2-D bell-shaped heading tuning. Most MSTd neurons and model output units show bell-shaped heading tuning. Furthermore, we found that most model output units and the finely-sampled example neuron in the Duffy-Wurtz study are well fit by a 2D gaussian (sigma approx. 35deg, r approx. 0.9). The bandwidth of model and real units can explain why Lappe et al. found apparent sigmoidal tuning using a restricted range of stimuli (+/-40deg). 2) Spiral Tuning and Invariance. Graziano et al. found that many MST neurons appear tuned to a specific combination of rotation and expansion (spiral flow) and that this tuning changes little for approx. 10deg shifts in stimulus placement. Simulations of model output units under the same conditions quantitatively replicate this result. We conclude that a template architecture may underlie MT inputs to MST.
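
The 2-D Gaussian heading-tuning fit (sigma approx. 35 deg) mentioned above has a direct closed form. A minimal sketch of such a tuning function, with preferred heading and sigma as illustrative parameters:

```python
import math

def heading_tuning(az, el, pref_az=0.0, pref_el=0.0, sigma=35.0):
    """2-D Gaussian heading tuning over azimuth/elevation (degrees).
    sigma ~35 deg reflects the fits reported for MSTd-like units."""
    d2 = (az - pref_az) ** 2 + (el - pref_el) ** 2
    return math.exp(-d2 / (2 * sigma ** 2))

# Peak response at the preferred heading; one sigma away the response
# falls to exp(-0.5) of the peak
print(round(heading_tuning(0, 0), 3), round(heading_tuning(35, 0), 3))  # prints 1.0 0.607
```

Sampling such a bell only over a restricted range (e.g. +/-40 deg around the flank) yields a monotonic, sigmoid-looking slice, which is the explanation offered for the apparent sigmoidal tuning in the Lappe et al. data.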

  18. Quantitative Estimation of the Climatic Effects of Carbon Transferred by International Trade

    NASA Astrophysics Data System (ADS)

    Wei, Ting; Dong, Wenjie; Moore, John; Yan, Qing; Song, Yi; Yang, Zhiyong; Yuan, Wenping; Chou, Jieming; Cui, Xuefeng; Yan, Xiaodong; Wei, Zhigang; Guo, Yan; Yang, Shili; Tian, Di; Lin, Pengfei; Yang, Song; Wen, Zhiping; Lin, Hui; Chen, Min; Feng, Guolin; Jiang, Yundi; Zhu, Xian; Chen, Juan; Wei, Xin; Shi, Wen; Zhang, Zhiguo; Dong, Juan; Li, Yexin; Chen, Deliang

    2016-06-01

    Carbon transfer via international trade affects the spatial pattern of global carbon emissions by redistributing emissions related to production of goods and services. It has potential impacts on attribution of the responsibility of various countries for climate change and formulation of carbon-reduction policies. However, the effect of carbon transfer on climate change has not been quantified. Here, we present a quantitative estimate of climatic impacts of carbon transfer based on a simple CO2 Impulse Response Function and three Earth System Models. The results suggest that carbon transfer leads to a migration of CO2 by 0.1-3.9 ppm or 3-9% of the rise in the global atmospheric concentrations from developed countries to developing countries during 1990-2005 and potentially reduces the effectiveness of the Kyoto Protocol by up to 5.3%. However, the induced atmospheric CO2 concentration and climate changes (e.g., in temperature, ocean heat content, and sea-ice) are very small and lie within observed interannual variability. Given continuous growth of transferred carbon emissions and their proportion in global total carbon emissions, the climatic effect of traded carbon is likely to become more significant in the future, highlighting the need to consider carbon transfer in future climate negotiations.

  19. A Novel Two-Step Hierarchical Quantitative Structure-Activity ...

    EPA Pesticide Factsheets

    Background: Accurate prediction of in vivo toxicity from in vitro testing is a challenging problem. Large public–private consortia have been formed with the goal of improving chemical safety assessment by means of high-throughput screening. Methods and results: A database containing experimental cytotoxicity values for in vitro half-maximal inhibitory concentration (IC50) and in vivo rodent median lethal dose (LD50) for more than 300 chemicals was compiled by Zentralstelle zur Erfassung und Bewertung von Ersatz- und Ergaenzungsmethoden zum Tierversuch (ZEBET; National Center for Documentation and Evaluation of Alternative Methods to Animal Experiments). The application of conventional quantitative structure–activity relationship (QSAR) modeling approaches to predict mouse or rat acute LD50 values from chemical descriptors of ZEBET compounds yielded no statistically significant models. The analysis of these data showed no significant correlation between IC50 and LD50. However, a linear IC50 versus LD50 correlation could be established for a fraction of compounds. To capitalize on this observation, we developed a novel two-step modeling approach as follows. First, all chemicals are partitioned into two groups based on the relationship between IC50 and LD50 values: One group comprises compounds with linear IC50 versus LD50 relationships, and another group comprises the remaining compounds. Second, we built conventional binary classification QSAR models t

  20. Improving quantitative structure-activity relationships through multiobjective optimization.

    PubMed

    Nicolotti, Orazio; Giangreco, Ilenia; Miscioscia, Teresa Fabiola; Carotti, Angelo

    2009-10-01

    A multiobjective optimization algorithm was proposed for the automated integration of structure- and ligand-based molecular design. Driven by a genetic algorithm, the herein proposed approach enabled the detection of a number of trade-off QSAR models accounting simultaneously for two independent objectives. The first was biased toward best regressions among docking scores and biological affinities; the second minimized the atom displacements from a properly established crystal-based binding topology. Based on the concept of dominance, 3D QSAR equivalent models profiled the Pareto frontier and were, thus, designated as nondominated solutions of the search space. K-means clustering was, then, operated to select a representative subset of the available trade-off models. These were effectively subjected to GRID/GOLPE analyses for quantitatively featuring molecular determinants of ligand binding affinity. More specifically, it was demonstrated that a) diverse binding conformations occurred on the basis of the ligand ability to profitably contact different part of protein binding site; b) enzyme selectivity was better approached and interpreted by combining diverse equivalent models; and c) trade-off models were successful and even better than docking virtual screening, in retrieving at high sensitivity active hits from a large pool of chemically similar decoys. The approach was tested on a large series, very well-known to QSAR practitioners, of 3-amidinophenylalanine inhibitors of thrombin and trypsin, two serine proteases having rather different biological actions despite a high sequence similarity.
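
Pareto dominance, the concept used above to identify the nondominated trade-off models, has a compact definition. A minimal sketch for two minimization objectives; the objective values below are invented for illustration:

```python
def dominates(p, q):
    """p dominates q if p is no worse in every objective and strictly
    better in at least one (minimization)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(points):
    """Nondominated subset: the trade-off solutions profiling the Pareto frontier."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical models scored on (regression error, atom displacement)
models = [(0.2, 1.5), (0.3, 0.9), (0.5, 0.4), (0.6, 1.0)]
print(pareto_front(models))  # prints [(0.2, 1.5), (0.3, 0.9), (0.5, 0.4)]
```

The last model is dominated ((0.3, 0.9) beats it on both objectives) and is dropped; the remaining three are equivalent trade-offs, analogous to the equivalent 3D QSAR models the genetic algorithm retains for clustering.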

  1. A note on estimating the posterior density of a quantitative trait locus from a Markov chain Monte Carlo sample.

    PubMed

    Hoti, Fabian J; Sillanpää, Mikko J; Holmström, Lasse

    2002-04-01

    We provide an overview of the use of kernel smoothing to summarize the quantitative trait locus posterior distribution from a Markov chain Monte Carlo sample. More traditional distributional summary statistics based on the histogram depend both on the bin width and on the sideways shift of the bin grid used. These factors influence both the overall mapping accuracy and the estimated location of the mode of the distribution. Replacing the histogram by kernel smoothing helps to alleviate these problems. Using simulated data, we performed numerical comparisons between the two approaches. The results clearly illustrate the superiority of the kernel method. The kernel approach is particularly efficient when one needs to point out the best putative quantitative trait locus position on the marker map. In such situations, the smoothness of the posterior estimate is especially important because rough posterior estimates easily produce biased mode estimates. Different kernel implementations are available from Rolf Nevanlinna Institute's web page (http://www.rni.helsinki.fi/~fjh).
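
Replacing the histogram by a Gaussian kernel estimate is straightforward: the density at any point is an average of kernels centred on the MCMC draws, with no bin grid to shift. The sketch below locates the posterior mode of a simulated sample of QTL position; the sample, grid, and bandwidth are illustrative:

```python
import math
import random

def kernel_density(sample, x, bandwidth):
    """Gaussian kernel estimate of the posterior density at x."""
    n = len(sample)
    return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in sample) \
        / (n * bandwidth * math.sqrt(2 * math.pi))

random.seed(1)
# Stand-in for an MCMC sample of QTL position (cM), concentrated near 50
sample = [random.gauss(50.0, 4.0) for _ in range(5000)]

# Mode estimate: maximize the smoothed density over a fine grid
grid = [40 + 0.1 * i for i in range(201)]
mode = max(grid, key=lambda x: kernel_density(sample, x, bandwidth=1.0))
print(round(mode, 1))
```

Unlike a histogram mode, this estimate does not jump when the evaluation grid is shifted by a fraction of a bin width, which is the robustness property the abstract emphasizes.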

  2. Mycobactericidal activity of selected disinfectants using a quantitative suspension test.

    PubMed

    Griffiths, P A; Babb, J R; Fraise, A P

    1999-02-01

    In this study, a quantitative suspension test carried out under both clean and dirty conditions was used to assess the activity of various instrument and environmental disinfectants against the type strain NCTC 946 and an endoscope washer disinfector isolate of Mycobacterium chelonae, Mycobacterium fortuitum NCTC 10,394, Mycobacterium tuberculosis H37 Rv NCTC 7416 and a clinical isolate of Mycobacterium avium-intracellulare (MAI). The disinfectants tested were: a chlorine-releasing agent, sodium dichloroisocyanurate (NaDCC) at 1000 ppm and 10,000 ppm av Cl; chlorine dioxide at 1100 ppm av ClO2 (Tristel, MediChem International Limited); 70% industrial methylated spirits (IMS); 2% alkaline glutaraldehyde (Asep, Galan); 10% succinedialdehyde and formaldehyde mixture (Gigasept, Schulke & Mayr); 0.35% peracetic acid (Nu-Cidex, Johnson & Johnson); and a peroxygen compound at 1% and 3% (Virkon, Antec International). Results showed that the clinical isolate of MAI was much more resistant than M. tuberculosis to all the disinfectants, while the type strains of M. chelonae and M. fortuitum were far more sensitive. The washer disinfector isolate of M. chelonae was extremely resistant to 2% alkaline activated glutaraldehyde and appeared to be slightly more resistant than the type strain to Nu-Cidex, Gigasept, Virkon and the lower concentration of NaDCC. This study has shown that peracetic acid (Nu-Cidex), chlorine dioxide (Tristel), alcohol (IMS) and high concentrations of a chlorine-releasing agent (NaDCC) are rapidly mycobactericidal. Glutaraldehyde, although effective, is a slow mycobactericide. Gigasept and Virkon are poor mycobactericidal agents and are not therefore recommended for instruments or spillage if mycobacteria are likely to be present.
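
Quantitative suspension tests are conventionally summarized as log10 reduction factors comparing viable counts before and after disinfectant exposure. A minimal sketch of that calculation, with illustrative CFU counts:

```python
import math

def log_reduction(initial_cfu, surviving_cfu):
    """log10 reduction factor from a quantitative suspension test."""
    return math.log10(initial_cfu / surviving_cfu)

# A disinfectant reducing 10^6 CFU/mL to 10^2 CFU/mL achieves a 4-log kill
print(log_reduction(1e6, 1e2))  # prints 4.0
```

On this scale, "rapidly mycobactericidal" agents reach a large reduction within a short contact time, whereas a slow mycobactericide such as glutaraldehyde needs a much longer exposure to achieve the same factor.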

  3. Predicting urban stormwater runoff with quantitative precipitation estimates from commercial microwave links

    NASA Astrophysics Data System (ADS)

    Pastorek, Jaroslav; Fencl, Martin; Stránský, David; Rieckermann, Jörg; Bareš, Vojtěch

    2017-04-01

    Reliable and representative rainfall data are crucial for urban runoff modelling. However, traditional precipitation measurement devices often fail to provide sufficient information about the spatial variability of rainfall, especially when heavy storm events (determining design of urban stormwater systems) are considered. Commercial microwave links (CMLs), typically very dense in urban areas, allow for indirect precipitation detection with the desired spatial and temporal resolution. Fencl et al. (2016) recognised the high bias in quantitative precipitation estimates (QPEs) from CMLs, which significantly limits their usability, and, in order to reduce the bias, suggested a novel method for adjusting the QPEs to existing rain gauge networks. Studies evaluating the potential of CMLs for rainfall detection have so far focused primarily on direct comparison of the QPEs from CMLs to ground observations. In contrast, this investigation evaluates the suitability of these innovative rainfall data for stormwater runoff modelling in a case study of a small urban catchment (ungauged over the long term) in Prague-Letňany, Czech Republic (Fencl et al., 2016). We compare the runoff measured at the outlet from the catchment with the outputs of a rainfall-runoff model operated using (i) CML data adjusted by distant rain gauges, (ii) rainfall data from the distant gauges alone and (iii) data from a single temporary rain gauge located directly in the catchment, as is common practice in drainage engineering. Uncertainties of the simulated runoff are analysed using the Bayesian method for uncertainty evaluation incorporating a statistical bias description as formulated by Del Giudice et al. (2013). Our results show that adjusted CML data are able to yield reliable runoff modelling results, primarily for rainfall events with convective character. Performance statistics, most significantly the timing of maximal discharge, reach better (less uncertain) values with the adjusted CML data.
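
One simple form of gauge adjustment is a multiplicative factor that matches the CML event total to the gauge event total. This is a simplified stand-in for the Fencl et al. (2016) method, with invented rainfall values:

```python
def adjust_cml(cml_rain, gauge_rain):
    """Scale a CML rainfall series so its event total matches the gauge total.
    Simplified illustration of bias adjustment, not the published algorithm."""
    factor = sum(gauge_rain) / sum(cml_rain)
    return [r * factor for r in cml_rain]

cml = [2.0, 6.0, 4.0]    # mm/h from the microwave link, biased high
gauge = [1.5, 4.5, 3.0]  # mm/h from nearby rain gauges
print(adjust_cml(cml, gauge))  # prints [1.5, 4.5, 3.0]
```

The adjustment preserves the CML's fine temporal pattern (which the distant gauges lack) while anchoring the total to the gauges, which is why the adjusted series can outperform either data source alone as rainfall-runoff model input.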

  4. Improved quantitative precipitation estimation over complex terrain using cloud-to-ground lightning data

    NASA Astrophysics Data System (ADS)

    Minjarez-Sosa, Carlos Manuel

    Thunderstorms that occur in areas of complex terrain are a major severe weather hazard in the intermountain western U.S. Short-term quantitative precipitation estimation (QPE) in complex terrain is a pressing need for better forecasting of flash flooding. Currently available techniques for QPE, which utilize a combination of rain gauge and weather radar information, may underestimate precipitation in areas where gauges do not exist or where there is radar beam blockage. These are typically very mountainous and remote areas that are quite vulnerable to flash flooding because of the steep topography. Lightning has been one of the novel ways suggested by the scientific community as an alternative for estimating precipitation over regions that experience convective precipitation, especially continental areas with complex topography where precipitation sensor measurements are scarce. This dissertation investigates the relationship between cloud-to-ground lightning and precipitation associated with convection, with the purpose of estimating precipitation, mainly over areas of complex terrain which have precipitation sensor coverage problems (e.g. Southern Arizona). The results of this research are presented in two papers. The first, entitled Toward Development of Improved QPE in Complex Terrain Using Cloud-to-Ground Lightning Data: A Case Study for the 2005 Monsoon in Southern Arizona, was published in the Journal of Hydrometeorology in December 2012. This initial study explores the relationship between cloud-to-ground lightning occurrences and multi-sensor gridded precipitation over southern Arizona. QPE is performed using a least squares approach for several time resolutions (seasonal---June, July and August---24 hourly and hourly) and for an 8 km grid size. The paper also presents problems that arise when the time resolution is increased, such as the spatial misplacement of discrete lightning events relative to gridded precipitation and the need to define a "diurnal day" that is
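The least squares approach mentioned above, relating gridded precipitation to flash counts, can be sketched for a single grid cell. This is generic ordinary least squares, not the dissertation's exact formulation; the function name is hypothetical.

```python
def fit_rain_per_flash(flash_counts, precip_mm):
    """Ordinary least-squares fit precip ≈ a + b * flashes for one grid
    cell and time resolution. Returns intercept a and slope b (the
    'rain per flash' rate). Illustrative; names are not from the paper."""
    n = len(flash_counts)
    mx = sum(flash_counts) / n
    my = sum(precip_mm) / n
    sxx = sum((x - mx) ** 2 for x in flash_counts)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(flash_counts, precip_mm))
    b = sxy / sxx
    return my - b * mx, b

# four time steps: each flash adds ~2 mm on top of a 1 mm baseline
a, b = fit_rain_per_flash([0, 5, 10, 20], [1.0, 11.0, 21.0, 41.0])
```

The slope b is the quantity a lightning-based QPE would apply to observed flash densities where radar and gauges are unavailable.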

  5. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    PubMed

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
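The calibration step that QMMs rely on can be illustrated with a deliberately simplified standard-curve fit on log10 density. This corresponds to the "traditional" calibration the abstract contrasts with the Bayesian mixed model, and all names and numbers are hypothetical.

```python
def fit_calibration(log10_density, signal):
    """Fit a linear standard curve, signal ≈ a + b*log10(density), from
    serial dilutions of known pathogen density (traditional calibration)."""
    n = len(signal)
    mx = sum(log10_density) / n
    my = sum(signal) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(log10_density, signal))
         / sum((x - mx) ** 2 for x in log10_density))
    return my - b * mx, b

def estimate_density(signal_obs, a, b):
    """Invert the standard curve to estimate an unknown sample's density."""
    return 10 ** ((signal_obs - a) / b)

# synthetic dilution series: signal falls by 2 units per decade of density
a, b = fit_calibration([2, 3, 4, 5, 6], [26.0, 24.0, 22.0, 20.0, 18.0])
density = estimate_density(22.0, a, b)
```

The paper's point is that this single-curve inversion ignores inter-assay variability and density-dependent precision, which the Bayesian mixed model handles explicitly.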

  6. Quantitative estimates of past changes in ITCZ position and cross-equatorial atmospheric heat transport

    NASA Astrophysics Data System (ADS)

    McGee, D.; Donohoe, A.; Marshall, J.; Ferreira, D.

    2012-12-01

    The mean position and seasonal migration of the Intertropical Convergence Zone (ITCZ) govern the intensity, spatial distribution and seasonality of precipitation throughout the tropics as well as the magnitude and direction of interhemispheric atmospheric heat transport (AHT). As a result of these links to global tropical precipitation and hemispheric heat budgets, paleoclimate studies have commonly sought to use reconstructions of local precipitation and surface winds to identify past shifts in the ITCZ's mean position or seasonal extent. Records indicate close ties between ITCZ position and interhemispheric surface temperature gradients in past climates, with the ITCZ shifting toward the warmer hemisphere. This shift would increase AHT into the cooler hemisphere to at least partially compensate for cooling there. Despite widespread qualitative evidence consistent with ITCZ shifts, few proxy records offer quantitative estimates of the distance of these shifts or of the associated changes in AHT. Here we present a strategy for placing quantitative limits on past changes in mean annual ITCZ position and interhemispheric AHT based on explorations of the modern seasonal cycle and models of present and past climates. We use reconstructions of tropical sea surface temperature gradients to place bounds on globally averaged ITCZ position and interhemispheric AHT during the Last Glacial Maximum, Heinrich Stadial 1, and the Mid-Holocene (6 ka). Though limited by the small number of SST records available, our results suggest that past shifts in the global mean ITCZ were small, typically less than 1 degree of latitude. Past changes in interhemispheric AHT may have been substantial, with anomalies approximately equal to the magnitude of modern interhemispheric AHT. Using constraints on the invariance of the total (ocean+atmosphere) heat transport we suggest possible bounds on fluctuations of the OHT and AMOC during Heinrich Stadial 1. 
We also explore ITCZ shifts in models and

  7. Using Modified Contour Deformable Model to Quantitatively Estimate Ultrasound Parameters for Osteoporosis Assessment

    NASA Astrophysics Data System (ADS)

    Chen, Yung-Fu; Du, Yi-Chun; Tsai, Yi-Ting; Chen, Tainsong

    Osteoporosis is a systemic skeletal disease, which is characterized by low bone mass and micro-architectural deterioration of bone tissue, leading to bone fragility. Finding an effective method for prevention and early diagnosis of the disease is very important. Several parameters, including broadband ultrasound attenuation (BUA), speed of sound (SOS), and stiffness index (STI), have been used to measure the characteristics of bone tissues. In this paper, we proposed a method, namely the modified contour deformable model (MCDM), based on the active contour model (ACM) and the active shape model (ASM), for automatically detecting the calcaneus contour from quantitative ultrasound (QUS) parametric images. The results show that the difference between the contours detected by the MCDM and the true boundary for the phantom is less than one pixel. By comparing the phantom ROIs, a significant relationship was found between contour mean and bone mineral density (BMD), with R=0.99. The influence of selecting different ROI diameters (12, 14, 16 and 18 mm) and different region-selecting methods, including fixed region (ROI_fix), automatic circular region (ROI_cir) and calcaneal contour region (ROI_anat), was evaluated for testing human subjects. Measurements with large ROI diameters, especially using a fixed region, result in high position errors (10-45%). The precision errors of the measured ultrasonic parameters for ROI_anat are smaller than those for ROI_fix and ROI_cir. In conclusion, ROI_anat provides more accurate measurement of ultrasonic parameters for the evaluation of osteoporosis and is useful for clinical application.

  8. Comparative Application of PLS and PCR Methods to Simultaneous Quantitative Estimation and Simultaneous Dissolution Test of Zidovudine - Lamivudine Tablets.

    PubMed

    Üstündağ, Özgür; Dinç, Erdal; Özdemir, Nurten; Tilkan, M Günseli

    2015-01-01

    In the development strategies of new drug products and generic drug products, the simultaneous in-vitro dissolution behavior of oral dosage formulations is the most important indicator of the efficiency and biopharmaceutical characteristics of drug substances. This compels scientists in the field to develop powerful analytical methods that yield more reliable, precise and accurate results in the quantitative analysis and dissolution testing of drug formulations. In this context, two chemometric tools, partial least squares (PLS) and principal component regression (PCR), were developed for the simultaneous quantitative estimation and dissolution testing of zidovudine (ZID) and lamivudine (LAM) in a tablet dosage form. The results obtained in this study strongly encourage their use for quality control, routine analysis and dissolution testing of marketed tablets containing the ZID and LAM drugs.
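As a sketch of how PCR resolves a two-component mixture such as ZID-LAM, the snippet below projects synthetic, noise-free "spectra" onto the leading principal components and regresses the concentration of one analyte on the scores. It assumes NumPy and is a generic PCR, not the authors' validated procedure.

```python
import numpy as np

def pcr_fit_predict(X, y, X_new, n_components=2):
    """Principal component regression: centre the spectra, project onto
    the leading right singular vectors, regress y on the scores, then
    apply the same projection to new spectra."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T          # loadings, p x k
    scores = Xc @ V                  # n x k
    coef, *_ = np.linalg.lstsq(scores, y - y_mean, rcond=None)
    return (X_new - x_mean) @ V @ coef + y_mean

# two analytes with orthogonal absorbance profiles; y = conc. of analyte 1
X = np.array([[1, 1, 1, 1], [2, 1, 2, 1], [1, 2, 1, 2], [3, 2, 3, 2]], float)
y = np.array([1.0, 2.0, 1.0, 3.0])
pred = pcr_fit_predict(X, y, np.array([[2.0, 3.0, 2.0, 3.0]]))
```

With two components the synthetic mixture is resolved exactly; real spectra would need component selection and validation as described in the paper.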

  9. MIRD pamphlet no. 16: Techniques for quantitative radiopharmaceutical biodistribution data acquisition and analysis for use in human radiation dose estimates.

    PubMed

    Siegel, J A; Thomas, S R; Stubbs, J B; Stabin, M G; Hays, M T; Koral, K F; Robertson, J S; Howell, R W; Wessels, B W; Fisher, D R; Weber, D A; Brill, A B

    1999-02-01

    This report describes recommended techniques for radiopharmaceutical biodistribution data acquisition and analysis in human subjects to estimate radiation absorbed dose using the Medical Internal Radiation Dose (MIRD) schema. The document has been prepared in a format to address two audiences: individuals with a primary interest in designing clinical trials who are not experts in dosimetry and individuals with extensive experience with dosimetry-based protocols and calculational methodology. For the first group, the general concepts involved in biodistribution data acquisition are presented, with guidance provided for the number of measurements (data points) required. For those with expertise in dosimetry, highlighted sections, examples and appendices have been included to provide calculational details, as well as references, for the techniques involved. This document is intended also to serve as a guide for the investigator in choosing the appropriate methodologies when acquiring and preparing product data for review by national regulatory agencies. The emphasis is on planar imaging techniques commonly available in most nuclear medicine departments and laboratories. The measurement of the biodistribution of radiopharmaceuticals is an important aspect in calculating absorbed dose from internally deposited radionuclides. Three phases are presented: data collection, data analysis and data processing. In the first phase, data collection, the identification of source regions, the determination of their appropriate temporal sampling and the acquisition of data are discussed. In the second phase, quantitative measurement techniques involving imaging by planar scintillation camera, SPECT and PET for the calculation of activity in source regions as a function of time are discussed. In addition, nonimaging measurement techniques, including external radiation monitoring, tissue-sample counting (blood and biopsy) and excreta counting are also considered. 
The third phase, data
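A core data-processing step described above, integrating a measured time-activity curve to obtain cumulated activity, can be sketched as below. Trapezoids over the sampled points plus a single-exponential tail is one common choice; the function name and numbers are illustrative, not prescribed by the pamphlet.

```python
import math

def cumulated_activity(times_h, activity_mbq):
    """Integrate a time-activity curve (MBq vs h): trapezoids over the
    measured points, plus an analytic exponential tail whose decay
    constant comes from the last two samples."""
    area = 0.0
    for k in range(len(times_h) - 1):
        area += 0.5 * (activity_mbq[k] + activity_mbq[k + 1]) * \
                (times_h[k + 1] - times_h[k])
    lam = (math.log(activity_mbq[-2] / activity_mbq[-1])
           / (times_h[-1] - times_h[-2]))       # terminal decay constant
    return area + activity_mbq[-1] / lam        # MBq·h

# two samples from a curve decaying with lambda = 0.1 / h
total = cumulated_activity([1.0, 2.0], [90.48374180359595, 81.87307530779818])
```

The cumulated activity (MBq·h) is what the MIRD schema multiplies by S values to obtain absorbed dose.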

  10. Quantitative analysis of wrist electrodermal activity during sleep

    PubMed Central

    Sano, Akane; Picard, Rosalind W.; Stickgold, Robert

    2015-01-01

    We present the first quantitative characterization of electrodermal activity (EDA) patterns on the wrists of healthy adults during sleep using dry electrodes. We compare the new results on the wrist to prior findings on palmar or finger EDA by characterizing data measured from 80 nights of sleep consisting of 9 nights of wrist and palm EDA from 9 healthy adults sleeping at home, 56 nights of wrist and palm EDA from one healthy adult sleeping at home, and 15 nights of wrist EDA from 15 healthy adults in a sleep laboratory, with the latter compared to concurrent polysomnography. While high frequency patterns of EDA called “storms” were identified by eye in the 1960’s, we systematically compare thresholds for automatically detecting EDA peaks and establish criteria for EDA storms. We found that more than 80% of EDA peaks occurred in non-REM sleep, specifically during slow-wave sleep (SWS) and non-REM stage 2 sleep (NREM2). Also, EDA amplitude is higher in SWS than in other sleep stages. Longer EDA storms were more likely in the first two quarters of sleep and during SWS and NREM2. We also found from the home studies (65 nights) that EDA levels were higher and the skin conductance peaks were larger and more frequent when measured on the wrist than when measured on the palm. These EDA high frequency peaks and high amplitude were sometimes associated with higher skin temperature, but more work is needed looking at neurological and other EDA elicitors in order to elucidate their complete behavior. PMID:25286449
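The peak-and-storm detection described above can be sketched with a toy rule: count upward threshold crossings and group those falling within a short span. The thresholds below are placeholders, not the criteria established in the paper.

```python
def count_storms(eda_us, rise_thresh=0.05, min_peaks=5, window=60):
    """Count EDA 'storms': runs of at least `min_peaks` upward threshold
    crossings (peaks) within a `window`-sample span. All thresholds here
    are illustrative assumptions."""
    peaks = [i for i in range(1, len(eda_us))
             if eda_us[i] - eda_us[i - 1] > rise_thresh]
    storms, i = 0, 0
    while i < len(peaks):
        j = i
        while j + 1 < len(peaks) and peaks[j + 1] - peaks[i] <= window:
            j += 1
        if j - i + 1 >= min_peaks:
            storms += 1
        i = j + 1
    return storms

# synthetic record: five quick rises around samples 10-18 -> one storm
eda = [0.0]
for i in range(1, 100):
    eda.append(eda[-1] + (0.1 if i in (10, 12, 14, 16, 18) else 0.0))
n_storms = count_storms(eda)
```

Real criteria would be set per sampling rate and validated against sleep staging, which is exactly the comparison the study performs.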

  12. Rapid and Quantitative Assay of Amyloid-Seeding Activity in Human Brains Affected with Prion Diseases

    PubMed Central

    Takatsuki, Hanae; Satoh, Katsuya; Sano, Kazunori; Fuse, Takayuki; Nakagaki, Takehiro; Mori, Tsuyoshi; Ishibashi, Daisuke; Mihara, Ban; Takao, Masaki; Iwasaki, Yasushi; Yoshida, Mari; Atarashi, Ryuichiro; Nishida, Noriyuki

    2015-01-01

    The infectious agents of the transmissible spongiform encephalopathies are composed of amyloidogenic prion protein, PrPSc. Real-time quaking-induced conversion can amplify very small amounts of PrPSc seeds in tissues/body fluids of patients or animals. Using this in vitro PrP-amyloid amplification assay, we quantitated the seeding activity of affected human brains. End-point assay using serially diluted brain homogenates of sporadic Creutzfeldt–Jakob disease patients demonstrated that the 50% seeding dose (SD50) reaches approximately 10^10/g brain (values vary between 10^8.79 and 10^10.63/g). A genetic case (GSS-P102L) yielded a similar level of seeding activity in an autopsy brain sample. The range of PrPSc concentrations in the samples, determined by dot-blot assay, was 0.6–5.4 μg/g brain; therefore, we estimated that 1 SD50 unit was equivalent to 0.06–0.27 fg of PrPSc. The SD50 values of the affected brains dropped by more than three orders of magnitude after autoclaving at 121°C. This new method for quantitation of human prion activity provides a new way to reduce the risk of iatrogenic prion transmission. PMID:26070208
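The end-point dilution arithmetic behind an SD50 can be sketched with the Spearman-Kärber estimator, a standard choice for 50% end-point titres; whether this exact variant matches the study's calculation is an assumption.

```python
def log10_sd50(log10_doses, prop_positive):
    """Spearman-Kärber 50% end-point estimate. `log10_doses` must descend
    by a constant log step and span fully positive to fully negative
    dilutions; `prop_positive` is the fraction of replicate reactions
    seeded at each dose."""
    d = log10_doses[0] - log10_doses[1]       # log10 dilution step
    return log10_doses[0] + d / 2 - d * sum(prop_positive)
```

For example, a 10-fold series in which the two highest doses seed every replicate and the two lowest seed none gives an end point midway between the second and third dilutions.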

  13. Quantitative precipitation estimation in complex orography using quasi-vertical profiles of dual polarization radar variables

    NASA Astrophysics Data System (ADS)

    Montopoli, Mario; Roberto, Nicoletta; Adirosi, Elisa; Gorgucci, Eugenio; Baldini, Luca

    2017-04-01

    Weather radars are nowadays a unique tool for quantitatively estimating rain precipitation near the surface. This is an important task for a host of applications, for example, feeding hydrological models, mitigating the impact of severe storms at the ground using radar information in modern warning tools, and aiding validation studies of satellite-based rain products. With respect to the latter application, several ground validation studies of the Global Precipitation Measurement (GPM) products have recently highlighted the importance of accurate QPE from ground-based weather radars. To date, many works have analyzed the performance of various QPE algorithms making use of actual and synthetic experiments, possibly trained by measurements of particle size distributions and electromagnetic models. Most of these studies support the use of dual polarization variables not only to ensure a good level of radar data quality but also as a direct input to the rain estimation equations. Among others, one of the most important limiting factors in radar QPE accuracy is the vertical variability of the particle size distribution, which affects, at different levels, all the acquired radar variables as well as rain rates. This is particularly impactful in mountainous areas, where the altitude of the radar sampling is likely several hundred meters above the surface. In this work, we analyze the impact of vertical profile variations of rain precipitation on several dual polarization radar QPE algorithms when they are tested in a complex orography scenario. So far, in weather radar studies, more emphasis has been given to extrapolation strategies that make use of the signature of the vertical profiles in terms of radar co-polar reflectivity.
This may limit the use of the radar vertical profiles when dual polarization QPE algorithms are considered because in that case all the radar variables used in the rain estimation process should be consistently extrapolated at the surface
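For contrast with the dual polarization algorithms discussed above, the simplest single-polarization QPE is the inversion of a Z-R power law; the Marshall-Palmer coefficients used as defaults here are purely an example, and operational choices vary by climate and radar.

```python
def rain_rate_from_z(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b (Z in mm^6/m^3, R in mm/h). Default a, b are
    the classic Marshall-Palmer values; operational choices vary."""
    z_linear = 10.0 ** (dbz / 10.0)      # dBZ -> linear reflectivity
    return (z_linear / a) ** (1.0 / b)
```

The vertical-profile problem the paper addresses applies to this estimator too: dbz measured aloft must first be extrapolated to the surface, and for dual-pol algorithms every input variable needs the same treatment.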

  14. Revisiting borehole strain, typhoons, and slow earthquakes using quantitative estimates of precipitation-induced strain changes

    NASA Astrophysics Data System (ADS)

    Hsu, Ya-Ju; Chang, Yuan-Shu; Liu, Chi-Ching; Lee, Hsin-Ming; Linde, Alan T.; Sacks, Selwyn I.; Kitagawa, Genshio; Chen, Yue-Gau

    2015-06-01

    Taiwan experiences high deformation rates, particularly along its eastern margin, where a shortening rate of about 30 mm/yr is observed across the Longitudinal Valley and the Coastal Range. Four Sacks-Evertson borehole strainmeters have been installed in this area since 2003. Liu et al. (2009) proposed that a number of strain transient events, primarily coincident with low barometric pressure during passages of typhoons, were due to deep-triggered slow slip. Here we extend that investigation with a quantitative analysis of the strain responses to precipitation as well as barometric pressure and the Earth tides, in order to isolate tectonic source effects. Estimates of the strain responses to barometric pressure and groundwater level changes for the different stations vary over the ranges -1 to -3 nanostrain/millibar (hPa) and -0.3 to -1.0 nanostrain/hPa, respectively, consistent with theoretical values derived using Hooke's law. Liu et al. (2009) noted that during some typhoons, including at least one with very heavy rainfall, the observed strain changes were consistent with only barometric forcing. By considering a more extensive data set, we now find that the strain response to rainfall is about -5.1 nanostrain/hPa. A larger strain response to rainfall compared to that to air pressure and water level may be associated with additional strain from fluid pressure changes that take place due to infiltration of precipitation. Using a state-space model, we remove the strain response to rainfall, in addition to those due to air pressure changes and the Earth tides, and investigate whether the corrected strain changes are related to environmental disturbances or to motions of tectonic origin. The majority of strain changes attributed to slow earthquakes seem instead to be associated with environmental factors. However, some events show remaining strain changes after all corrections.
These events include strain polarity changes during passages of typhoons (a characteristic that is
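The response-removal step can be illustrated with an ordinary least-squares analogue of the study's state-space correction. The snippet assumes NumPy, and the coefficients and series are synthetic; a real analysis would use the state-space formulation cited in the abstract.

```python
import numpy as np

def remove_responses(strain, pressure, rainfall, tide):
    """Regress strain on barometric pressure, rainfall and a tidal series,
    and return (residual strain, fitted coefficients). A least-squares
    stand-in for the state-space correction used in the study."""
    X = np.column_stack([np.ones_like(strain), pressure, rainfall, tide])
    coef, *_ = np.linalg.lstsq(X, strain, rcond=None)
    return strain - X @ coef, coef

# synthetic record with known responses (-2 nanostrain/hPa pressure,
# -5.1 nanostrain/hPa rainfall) and no tectonic signal
t = np.arange(50.0)
pressure = np.sin(0.3 * t)
rainfall = np.where(t % 7 == 0, 2.0, 0.0)
tide = np.cos(0.5 * t)
strain = 5.0 - 2.0 * pressure - 5.1 * rainfall + 0.5 * tide
residual, coef = remove_responses(strain, pressure, rainfall, tide)
```

After correction the residual is what would be scrutinised for slow-slip signatures; in this synthetic case it is zero by construction.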

  15. Quantitative structure-activity relationship models of chemical transformations from matched pairs analyses.

    PubMed

    Beck, Jeremy M; Springer, Clayton

    2014-04-28

    The concepts of activity cliffs and matched molecular pairs (MMP) are recent paradigms for the analysis of data sets to identify structural changes that may be used to modify the potency of lead molecules in drug discovery projects. Analysis of MMPs was recently demonstrated as a feasible technique for quantitative structure-activity relationship (QSAR) modeling of prospective compounds. However, within a small data set, the lack of matched pairs and the lack of knowledge about specific chemical transformations limit prospective applications. Here we present an alternative technique that determines pairwise descriptors for each matched pair and then uses a QSAR model to estimate the activity change associated with a chemical transformation. The descriptors effectively group similar transformations and incorporate information about the transformation and its local environment. Use of a transformation QSAR model allows one to estimate the activity change for novel transformations and therefore returns predictions for a larger fraction of test set compounds. Application of the proposed methodology to four public data sets results in increased model performance over a benchmark random forest and direct application of chemical transformations using QSAR-by-matched molecular pairs analysis (QSAR-by-MMPA).

  16. Quantitative shape analysis with weighted covariance estimates for increased statistical efficiency

    PubMed Central

    2013-01-01

    Background The introduction and statistical formalisation of landmark-based methods for analysing biological shape has made a major impact on comparative morphometric analyses. However, a satisfactory solution for including information from 2D/3D shapes represented by ‘semi-landmarks’ alongside well-defined landmarks into the analyses is still missing. In addition, current approaches do not integrate a statistical treatment of measurement error. Results We propose a procedure based upon the description of landmarks with measurement covariance, which extends statistical linear modelling processes to semi-landmarks for further analysis. Our formulation is based upon a self-consistent approach to the construction of likelihood-based parameter estimation and includes corrections for parameter bias, induced by the degrees of freedom within the linear model. The method has been implemented and tested on measurements from 2D fly wing, 2D mouse mandible and 3D mouse skull data. We use these data to explore possible advantages and disadvantages over the use of standard Procrustes/PCA analysis via a combination of Monte Carlo studies and quantitative statistical tests. In the process we show how appropriate weighting provides not only greater stability but also more efficient use of the available landmark data. The set of new landmarks generated in our procedure (‘ghost points’) can then be used in any further downstream statistical analysis. Conclusions Our approach provides a consistent way of including different forms of landmarks into an analysis and reduces instabilities due to poorly defined points. Our results suggest that the method has the potential to be utilised for the analysis of 2D/3D data, and in particular, for the inclusion of information from surfaces represented by multiple landmark points. PMID:23548043

  17. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    NASA Astrophysics Data System (ADS)

    Song, N.; He, B.; Wahl, R. L.; Frey, E. C.

    2011-09-01

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volume of interests (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric mean-based planar quantification using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a
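The ML-EM update underlying the QPlanar family of methods can be sketched on a toy system matrix. This assumes NumPy and omits everything that makes the real method work (3D VOIs, registration, models of image-degrading factors); it only shows the iterative expectation-maximization core.

```python
import numpy as np

def mlem_activities(A, proj, n_iter=1000):
    """Iterative ML-EM estimate of organ activities x from measured
    projection counts, given a (projection bin x organ) system matrix A."""
    x = np.ones(A.shape[1])
    sensitivity = A.sum(axis=0)          # column sums of A
    for _ in range(n_iter):
        expected = A @ x                 # forward projection
        ratio = np.where(expected > 0, proj / expected, 0.0)
        x = x * (A.T @ ratio) / sensitivity
    return x

# toy example: two "organs" seen by three projection bins
A = np.array([[1.0, 0.2], [0.1, 1.0], [0.5, 0.5]])
true_x = np.array([4.0, 2.0])
est = mlem_activities(A, A @ true_x)
```

With noiseless, consistent projections the iteration converges toward the true activities; with real data, the stopping point and the fidelity of A dominate accuracy.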

  19. Tracking of EEG activity using motion estimation to understand brain wiring.

    PubMed

    Nisar, Humaira; Malik, Aamir Saeed; Ullah, Rafi; Shim, Seong-O; Bawakid, Abdullah; Khan, Muhammad Burhan; Subhani, Ahmad Rauf

    2015-01-01

    The fundamental step in brain research deals with recording electroencephalogram (EEG) signals and then investigating the recorded signals quantitatively. A topographic EEG (a visual spatial representation of the EEG signal) is commonly referred to as a brain topomap or brain EEG map. In this chapter, a full-search block motion estimation algorithm has been employed to track brain activity in brain topomaps in order to understand the mechanism of brain wiring. The behavior of EEG topomaps is examined throughout a particular brain activation with respect to time. Motion vectors are used to track the brain activation over the scalp during the activation period. Using motion estimation, it is possible to track the path from the starting point of activation to the final point of activation. Thus it is possible to track the path of a signal across various lobes.
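A full-search block motion estimation pass, of the kind applied to successive topomap frames here, can be sketched as follows. Pure Python over 2D lists; the block size, search radius and sum-of-absolute-differences cost are standard choices, not details taken from the chapter.

```python
def full_search(prev, curr, block=(2, 2), radius=2):
    """Exhaustive block matching: for each block of `curr`, find the
    displacement (dy, dx) within ±radius that minimises the sum of
    absolute differences against `prev`. Returns one vector per block."""
    H, W = len(prev), len(prev[0])
    bh, bw = block
    vectors = {}
    for by in range(0, H - bh + 1, bh):
        for bx in range(0, W - bw + 1, bw):
            best_v, best_sad = None, float("inf")
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    sy, sx = by + dy, bx + dx
                    if not (0 <= sy <= H - bh and 0 <= sx <= W - bw):
                        continue
                    sad = sum(abs(curr[by + i][bx + j] - prev[sy + i][sx + j])
                              for i in range(bh) for j in range(bw))
                    if sad < best_sad:
                        best_v, best_sad = (dy, dx), sad
            vectors[(by, bx)] = best_v
    return vectors

# a bright 2x2 "activation" moves from the top-left to the centre-right
prev = [[9, 9, 0, 0], [9, 9, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
curr = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 9, 9], [0, 0, 9, 9]]
vectors = full_search(prev, curr)
```

The vector at the block containing the activation points back to its previous location, which is exactly the trajectory information used to trace activation paths across lobes.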

  20. Identification and uncertainty estimation of vertical reflectivity profiles using a Lagrangian approach to support quantitative precipitation measurements by weather radar

    NASA Astrophysics Data System (ADS)

    Hazenberg, P.; Torfs, P. J. J. F.; Leijnse, H.; Delrieu, G.; Uijlenhoet, R.

    2013-09-01

    This paper presents a novel approach to estimate the vertical profile of reflectivity (VPR) from volumetric weather radar data using both a traditional Eulerian and a newly proposed Lagrangian implementation. For the latter implementation, the recently developed Rotational Carpenter Square Cluster Algorithm (RoCaSCA) is used to delineate precipitation regions at different reflectivity levels. A piecewise linear VPR is estimated for stratiform precipitation as well as for precipitation classified as neither stratiform nor convective. As a second aspect of this paper, a novel approach is presented which is able to account for the impact of VPR uncertainty on the estimated radar rainfall variability. Results show that implementation of the VPR identification and correction procedure has a positive impact on quantitative precipitation estimates from radar. Unfortunately, visibility problems severely limit the impact of the Lagrangian implementation beyond distances of 100 km. However, by combining this procedure with the global Eulerian VPR estimation procedure for a given rainfall type (stratiform, and neither stratiform nor convective), the quality of the quantitative precipitation estimates increases up to a distance of 150 km. Analysis of the impact of VPR uncertainty shows that this aspect accounts for a large fraction of the differences between weather radar rainfall estimates and rain gauge measurements.
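The correction that a VPR enables can be sketched as a lookup of the profile's dB offset at beam height. The piecewise-linear profile relative to ground mirrors the paper's parameterisation in spirit only; the function, heights and offsets below are illustrative, not RoCaSCA output.

```python
def vpr_correct(z_dbz, beam_height_km, heights_km, offsets_db):
    """Correct reflectivity measured aloft down to the surface using a
    piecewise-linear VPR given as dB offsets relative to ground level
    (offset 0 dB at height 0)."""
    for h0, h1, o0, o1 in zip(heights_km, heights_km[1:],
                              offsets_db, offsets_db[1:]):
        if h0 <= beam_height_km <= h1:
            w = (beam_height_km - h0) / (h1 - h0)
            return z_dbz - (o0 + w * (o1 - o0))   # subtract profile offset
    raise ValueError("beam height outside the profile")

# beam samples at 3 km, where the profile says reflectivity is 6.5 dB
# weaker than at the ground -> corrected value is 6.5 dB higher
corrected = vpr_correct(20.0, 3.0, [0.0, 2.0, 4.0], [0.0, -3.0, -10.0])
```

Propagating uncertainty in the offsets through this correction is precisely the second contribution the abstract describes.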

  1. Quantitative estimation of pulegone in Mentha longifolia growing in Saudi Arabia. Is it safe to use?

    PubMed

    Alam, Prawez; Saleh, Mahmoud Fayez; Abdel-Kader, Maged Saad

    2016-03-01

    Our TLC study of the volatile oil isolated from Mentha longifolia showed a major UV-active spot with a higher Rf value than menthol. Because the components of an oil from the same plant species can differ quantitatively with environmental conditions, the major spot was isolated using different chromatographic techniques and identified by spectroscopic means as pulegone. The presence of pulegone in M. longifolia, a plant widely used in Saudi Arabia, raised concern because of its known toxicity; the Scientific Committee on Food of the European Commission's Health & Consumer Protection Directorate-General has set limits on the presence of pulegone in foodstuffs and beverages. In this paper we determined the amount of pulegone in different extracts, in the volatile oil, and in tea flavoured with M. longifolia (Habak) by validated densitometric HPTLC methods using normal-phase (Method I) and reverse-phase (Method II) TLC plates. The study indicated that the way Habak is customarily prepared in Saudi Arabia results in pulegone levels well below the allowed limit.

  2. Estimating phytoplankton photosynthesis by active fluorescence

    SciTech Connect

    Falkowski, P.G.; Kolber, Z.

    1992-01-01

    Photosynthesis can be described by target theory. At low photon flux densities, photosynthesis is a linear function of irradiance (I), the number of reaction centers (n), their effective absorption cross section ({sigma}), and a quantum yield ({phi}). As photosynthesis becomes increasingly light saturated, an increasing fraction of reaction centers close. At light saturation, the maximum photosynthetic rate is given by the product of the number of reaction centers (n) and their maximum electron transport rate (1/{tau}). Using active fluorometry it is possible to measure, non-destructively and in real time, the fraction of open or closed reaction centers under ambient irradiance conditions in situ, as well as {sigma} and {phi}. {tau} can be readily calculated from {sigma} and the light saturation parameter I{sub k} (which can be deduced in situ from active fluorescence measurements). We built a pump-and-probe fluorometer interfaced with a CTD. The instrument measures the fluorescence yield of a weak probe flash preceding (f{sub 0}) and succeeding (f{sub s}) a saturating pump flash. Profiles of these fluorescence yields are used to derive the instantaneous rate of gross photosynthesis in natural phytoplankton communities without any incubation. Correlations with short-term simulated in situ radiocarbon measurements are extremely high. The average slope between photosynthesis derived from fluorescence and that measured by radiocarbon is 1.15, corresponding to the average photosynthetic quotient. The intercept is about 15% of the maximum radiocarbon uptake and corresponds to the average net community respiration. Profiles of photosynthesis, and sections showing the variability in its composite parameters, reveal a significant effect of nutrient availability on biomass-specific rates of photosynthesis in the ocean.
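
The light-saturation behaviour described above can be written as a saturating curve whose initial slope is n*{sigma}*{phi} and whose plateau is n/{tau}. A hedged sketch using the abstract's symbols (the cumulative one-hit exponential form is an assumption of this illustration, not stated in the record):

```python
import numpy as np

def gross_photosynthesis(I, n, sigma, phi, tau):
    """Target-theory style light-saturation curve.
    Initial slope alpha = n*sigma*phi; plateau P_max = n/tau;
    light saturation parameter I_k = P_max / alpha = 1/(sigma*phi*tau)."""
    p_max = n / tau
    i_k = p_max / (n * sigma * phi)
    return p_max * (1.0 - np.exp(-np.asarray(I, dtype=float) / i_k))
```

At irradiances well below I_k the curve reduces to alpha*I (the linear regime of the abstract); well above I_k it approaches n/{tau}.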

  3. Estimating phytoplankton photosynthesis by active fluorescence

    SciTech Connect

    Falkowski, P.G.; Kolber, Z.

    1992-10-01

    Photosynthesis can be described by target theory. At low photon flux densities, photosynthesis is a linear function of irradiance (I), the number of reaction centers (n), their effective absorption cross section ({sigma}), and a quantum yield ({phi}). As photosynthesis becomes increasingly light saturated, an increasing fraction of reaction centers close. At light saturation, the maximum photosynthetic rate is given by the product of the number of reaction centers (n) and their maximum electron transport rate (1/{tau}). Using active fluorometry it is possible to measure, non-destructively and in real time, the fraction of open or closed reaction centers under ambient irradiance conditions in situ, as well as {sigma} and {phi}. {tau} can be readily calculated from {sigma} and the light saturation parameter I{sub k} (which can be deduced in situ from active fluorescence measurements). We built a pump-and-probe fluorometer interfaced with a CTD. The instrument measures the fluorescence yield of a weak probe flash preceding (f{sub 0}) and succeeding (f{sub s}) a saturating pump flash. Profiles of these fluorescence yields are used to derive the instantaneous rate of gross photosynthesis in natural phytoplankton communities without any incubation. Correlations with short-term simulated in situ radiocarbon measurements are extremely high. The average slope between photosynthesis derived from fluorescence and that measured by radiocarbon is 1.15, corresponding to the average photosynthetic quotient. The intercept is about 15% of the maximum radiocarbon uptake and corresponds to the average net community respiration. Profiles of photosynthesis, and sections showing the variability in its composite parameters, reveal a significant effect of nutrient availability on biomass-specific rates of photosynthesis in the ocean.

  4. Automatic activity estimation based on object behaviour signature

    NASA Astrophysics Data System (ADS)

    Martínez-Pérez, F. E.; González-Fraga, J. A.; Tentori, M.

    2010-08-01

    Automatic estimation of human activities is a widely studied topic. However, the process becomes difficult when activities must be estimated from a video stream, because human activities are dynamic and complex. Furthermore, the amount of information that images provide must be taken into account, since it makes modelling and estimating activities hard work. In this paper we propose a method for activity estimation based on object behaviour. Objects are located in a delimited observation area and their handling is recorded with a video camera; activity estimation can then be performed automatically by analyzing the video sequences. The proposed method is called "signature recognition" because it considers a space-time signature of the behaviour of the objects used in particular activities (e.g., patients' care in a healthcare environment for elderly people with restricted mobility). A pulse is produced when an object appears in or disappears from the observation area, i.e., the signal changes from zero to one or vice versa. These changes are produced by identifying the objects with a bank of nonlinear correlation filters. Each object is processed independently and produces its own pulses; hence several objects with different patterns can be recognized at the same time. The method is applied to estimate three healthcare-related activities of elderly people with restricted mobility.
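
The appearance/disappearance pulses described above can be sketched as a binary presence signal with edge events. The detection step (the bank of nonlinear correlation filters) is abstracted away here, and all names are illustrative:

```python
def presence_pulses(frames, obj):
    """Binary presence signal for `obj` over a sequence of per-frame
    detection sets, plus the (frame_index, from, to) edges where the
    signal flips 0->1 (appearance) or 1->0 (disappearance)."""
    signal = [1 if obj in dets else 0 for dets in frames]
    edges = [(i, a, b)
             for i, (a, b) in enumerate(zip(signal, signal[1:]), start=1)
             if a != b]
    return signal, edges
```

Running one such signal per object yields the space-time "signature" from which an activity can be classified; several objects can be tracked concurrently because each signal is independent.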

  5. Inter-rater reliability of motor unit number estimates and quantitative motor unit analysis in the tibialis anterior muscle.

    PubMed

    Boe, S G; Dalton, B H; Harwood, B; Doherty, T J; Rice, C L

    2009-05-01

    To establish the inter-rater reliability of decomposition-based quantitative electromyography (DQEMG) derived motor unit number estimates (MUNEs) and quantitative motor unit (MU) analysis. Using DQEMG, two examiners independently obtained a sample of needle and surface-detected motor unit potentials (MUPs) from the tibialis anterior muscle from 10 subjects. Coupled with a maximal M wave, surface-detected MUPs were used to derive a MUNE for each subject and each examiner. Additionally, size-related parameters of the individual MUs were obtained following quantitative MUP analysis. Test-retest MUNE values were similar with high reliability observed between examiners (ICC=0.87). Additionally, MUNE variability from test-retest as quantified by a 95% confidence interval was relatively low (+/-28 MUs). Lastly, quantitative data pertaining to MU size, complexity and firing rate were similar between examiners. MUNEs and quantitative MU data can be obtained with high reliability by two independent examiners using DQEMG. Establishing the inter-rater reliability of MUNEs and quantitative MU analysis using DQEMG is central to the clinical applicability of the technique. In addition to assessing response to treatments over time, multiple clinicians may be involved in the longitudinal assessment of the MU pool of individuals with disorders of the central or peripheral nervous system.
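
MUNEs of the kind discussed above are conventionally computed as the maximal M-wave size divided by the mean surface-detected MUP size; a minimal sketch of that ratio (amplitude-based sizes are an assumption here, and this is not the authors' exact DQEMG pipeline):

```python
def estimate_mune(m_wave_amplitude, smup_amplitudes):
    """Motor unit number estimate: maximal compound M-wave size divided
    by the mean surface-detected motor unit potential (SMUP) size."""
    mean_smup = sum(smup_amplitudes) / len(smup_amplitudes)
    return m_wave_amplitude / mean_smup
```

With a 10 mV M wave and a representative SMUP sample averaging 0.1 mV, the estimate is 100 motor units; the reliability question studied in the record is how stable this number is across examiners drawing independent SMUP samples.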

  6. Quantitative structure-activity relationships for the in vitro antimycobacterial activity of pyrazinoic acid esters.

    PubMed

    Bergmann, K E; Cynamon, M H; Welch, J T

    1996-08-16

    Substituted pyrazinoic acid esters have previously been reported to have in vitro activity against Mycobacterium avium and Mycobacterium kansasii as well as Mycobacterium tuberculosis. Modification of both the pyrazine nucleus and the ester functionality succeeded in expanding the antimycobacterial activity associated with pyrazinamide to include M. avium and M. kansasii, organisms usually not susceptible to pyrazinamide. In an attempt to understand the relationship between the activity of the esters and the needed biostability, a quantitative structure-activity relationship has been developed. The derived relationship is consistent with the observation that tert-butyl 5-chloropyrazinoate (13) and 2'-(2'-methyldecyl) 5-chloropyrazinoate (25) are both 100-fold more active than pyrazinamide against M. tuberculosis and possess a serum stability 900-1000 times greater than the lead compounds in the series.

  7. Be the Volume: A Classroom Activity to Visualize Volume Estimation

    ERIC Educational Resources Information Center

    Mikhaylov, Jessica

    2011-01-01

    A hands-on activity can help multivariable calculus students visualize surfaces and understand volume estimation. This activity can be extended to include the concepts of Fubini's Theorem and the visualization of the curves resulting from cross-sections of the surface. This activity uses students as pillars and a sheet or tablecloth for the…

  9. The impacts of climatological adjustment of quantitative precipitation estimates on the accuracy of flash flood detection

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Reed, Sean; Gourley, Jonathan J.; Cosgrove, Brian; Kitzmiller, David; Seo, Dong-Jun; Cifelli, Robert

    2016-10-01

    The multisensor Quantitative Precipitation Estimates (MQPEs) created by the US National Weather Service (NWS) are subject to a non-stationary bias. This paper quantifies the impacts of climatological adjustment of MQPEs alone, as well as the compound impacts of adjustment and model calibration, on the accuracy of simulated flood peak magnitude and on the detection of flood events. Our investigation is based on 19 watersheds in the mid-Atlantic region of the US, grouped into small (<500 km2) and large (>500 km2) watersheds. NWS archival MQPEs over 1997-2013 for this region are adjusted to match concurrent gauge-based monthly precipitation accumulations. Raw and adjusted MQPEs then serve as inputs to the NWS distributed hydrologic model-threshold frequency framework (DHM-TF). Two experiments via DHM-TF are performed. The first examines the impacts of adjustment alone through uncalibrated model simulations, whereas the second focuses on the compound effects of adjustment and calibration on the detection of flood events. Uncalibrated model simulations show broad underestimation of flood peaks for small watersheds and overestimation of those for large watersheds. Prior to calibration, adjustment alone tends to reduce the magnitude of simulated flood peaks for small and large basins alike, with 95% of all watersheds experiencing a decline over 2004-2013. A consequence is that a majority of small watersheds experience no improvement, or even deterioration, in bias (0% of basins improving), whereas most (73%) of the larger ones exhibit improved bias. Outcomes of the detection experiment show that the role of adjustment is not diminished by calibration for small watersheds, of which only 25% exhibit reduced bias after adjustment with calibrated parameters. 
Furthermore, it is shown that calibration is relatively effective in reducing false alarms (e.g., false alarm rate is down from 0.28 to 0.19 after calibration for small watersheds with calibrated

  10. Assimilation of radar quantitative precipitation estimations in the Canadian Precipitation Analysis (CaPA)

    NASA Astrophysics Data System (ADS)

    Fortin, Vincent; Roy, Guy; Donaldson, Norman; Mahidjiba, Ahmed

    2015-12-01

    The Canadian Precipitation Analysis (CaPA) is a data analysis system used operationally at the Canadian Meteorological Center (CMC) since April 2011 to produce gridded 6-h and 24-h precipitation accumulations in near real-time on a regular grid covering all of North America. The current resolution of the product is 10-km. Due to the low density of the observational network in most of Canada, the system relies on a background field provided by the Regional Deterministic Prediction System (RDPS) of Environment Canada, which is a short-term weather forecasting system for North America. For this reason, the North American configuration of CaPA is known as the Regional Deterministic Precipitation Analysis (RDPA). Early in the development of the CaPA system, weather radar reflectivity was identified as a very promising additional data source for the precipitation analysis, but necessary quality control procedures and bias-correction algorithms were lacking for the radar data. After three years of development and testing, a new version of CaPA-RDPA system was implemented in November 2014 at CMC. This version is able to assimilate radar quantitative precipitation estimates (QPEs) from all 31 operational Canadian weather radars. The radar QPE is used as an observation source and not as a background field, and is subject to a strict quality control procedure, like any other observation source. The November 2014 upgrade to CaPA-RDPA was implemented at the same time as an upgrade to the RDPS system, which brought minor changes to the skill and bias of CaPA-RDPA. This paper uses the frequency bias indicator (FBI), the equitable threat score (ETS) and the departure from the partial mean (DPM) in order to assess the improvements to CaPA-RDPA brought by the assimilation of radar QPE. 
Verification focuses on the 6-h accumulations, and is done against a network of 65 synoptic stations (approximately two stations per radar) that were withheld from the station data assimilated by Ca
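
The verification scores named above (FBI and ETS) are standard contingency-table statistics; a minimal sketch, assuming the usual hits / misses / false-alarms / correct-negatives layout:

```python
def fbi_ets(hits, misses, false_alarms, correct_negatives):
    """Frequency bias indicator (FBI) and equitable threat score (ETS)
    from a 2x2 forecast/observation contingency table.
    FBI = forecast yes / observed yes (1.0 = unbiased);
    ETS discounts hits expected by random chance."""
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    fbi = (hits + false_alarms) / (hits + misses)
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return fbi, ets
```

ETS ranges up to 1 (perfect) and is 0 for a random forecast, which is why it is preferred over the plain threat score when event frequencies differ between radar-adjusted and unadjusted analyses.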

  11. Recovering the primary geochemistry of Jack Hills zircons through quantitative estimates of chemical alteration

    NASA Astrophysics Data System (ADS)

    Bell, Elizabeth A.; Boehnke, Patrick; Harrison, T. Mark

    2016-10-01

    Despite the robust nature of zircon in most crustal and surface environments, chemical alteration, especially associated with radiation damaged regions, can affect its geochemistry. This consideration is especially important when drawing inferences from the detrital record where the original rock context is missing. Typically, alteration is qualitatively diagnosed through inspection of zircon REE patterns and the style of zoning shown by cathodoluminescence imaging, since fluid-mediated alteration often causes a flat, high LREE pattern. Due to the much lower abundance of LREE in zircon relative both to other crustal materials and to the other REE, disturbance to the LREE pattern is the most likely first sign of disruption to zircon trace element contents. Using a database of 378 (148 new) trace element and 801 (201 new) oxygen isotope measurements on zircons from Jack Hills, Western Australia, we propose a quantitative framework for assessing chemical contamination and exchange with fluids in this population. The Light Rare Earth Element Index is scaled on the relative abundance of light to middle REE, or LREE-I = (Dy/Nd) + (Dy/Sm). LREE-I values vary systematically with other known contaminants (e.g., Fe, P) more faithfully than other suggested proxies for zircon alteration (Sm/La, various absolute concentrations of LREEs) and can be used to distinguish primary compositions when textural evidence for alteration is ambiguous. We find that zircon oxygen isotopes do not vary systematically with placement on or off cracks or with degree of LREE-related chemical alteration, suggesting an essentially primary signature. By omitting zircons affected by LREE-related alteration or contamination by mineral inclusions, we present the best estimate for the primary igneous geochemistry of the Jack Hills zircons. This approach increases the available dataset by allowing for discrimination of on-crack analyses (and analyses with ambiguous or no information on spot placement or
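
The LREE-I defined above is directly computable from trace element concentrations; a minimal sketch (the function name and ppm units are assumptions, the formula is as given in the record):

```python
def lree_index(dy_ppm, nd_ppm, sm_ppm):
    """Light Rare Earth Element Index, LREE-I = (Dy/Nd) + (Dy/Sm).
    Lower values flag the flat, high-LREE patterns associated with
    fluid-mediated alteration of zircon."""
    return dy_ppm / nd_ppm + dy_ppm / sm_ppm
```

Because Dy (a middle REE) sits in the numerator of both ratios, LREE enrichment drives the index down, which is what makes it a screen for altered analyses.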

  12. Recapturing Quantitative Biology.

    ERIC Educational Resources Information Center

    Pernezny, Ken; And Others

    1996-01-01

    Presents a classroom activity on estimating animal populations. Uses shoe boxes and candies to emphasize the importance of mathematics in biology while introducing the methods of quantitative ecology. (JRH)

  13. The ACCE method: an approach for obtaining quantitative or qualitative estimates of residual confounding that includes unmeasured confounding

    PubMed Central

    Smith, Eric G.

    2015-01-01

    Background:  Nonrandomized studies typically cannot account for confounding from unmeasured factors.  Method:  A method is presented that exploits the recently-identified phenomenon of  “confounding amplification” to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors.  Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure.  Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results:  Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method’s requirements and assumptions are met.  Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations:  Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method’s estimates (although bootstrapping is one plausible approach). Conclusions:  To this author’s knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is
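
The core division step of the method — the change in treatment-effect estimate between the nested models divided by the estimated degree of confounding amplification — can be illustrated on an additive scale. This sketch omits the adjustment for the introduced variable's association with outcome that the abstract mentions, and all names are illustrative:

```python
def residual_confounding(effect_base, effect_amplified, amplification_factor):
    """Back out residual confounding bias from the shift in a
    treatment-effect estimate between two nested propensity-score
    models (additive scale). If amplification multiplies the bias by
    `amplification_factor`, the observed shift equals
    bias * (amplification_factor - 1)."""
    return (effect_amplified - effect_base) / (amplification_factor - 1.0)
```

For example, if the true effect is 1.0, the residual bias 0.3, and the added instrument-like variable amplifies confounding 1.5-fold, the estimates move from 1.3 to 1.45 and the method recovers the 0.3 bias.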

  14. Quantitative analysis of low-density SNP data for parentage assignment and estimation of family contributions to pooled samples.

    PubMed

    Henshall, John M; Dierens, Leanne; Sellars, Melony J

    2014-09-02

    While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are
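
A quantitative genotype in the sense used above can be represented as an expected allele dosage under the genotype posterior, and pooled allele frequencies then follow as mean dosages. A simplified sketch (diploid, biallelic; function names are illustrative, not the authors' matrix formulation):

```python
def expected_dosage(genotype_probs):
    """Quantitative genotype: expected count of the B allele given
    posterior probabilities for the (AA, AB, BB) genotypes."""
    p_aa, p_ab, p_bb = genotype_probs
    return 0.0 * p_aa + 1.0 * p_ab + 2.0 * p_bb

def pooled_allele_freq(dosages):
    """B-allele frequency estimate for a pool: mean expected dosage
    over contributing individuals, divided by 2 (diploid)."""
    return sum(dosages) / (2.0 * len(dosages))
```

Working with dosages rather than hard genotype calls is what lets the same likelihood machinery handle uncertain individual calls and pooled DNA alike.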

  15. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006

    PubMed Central

    Chen, Lin; Ray, Shonket; Keller, Brad M.; Pertuz, Said; McDonald, Elizabeth S.; Conant, Emily F.

    2016-01-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88–0.95; weighted κ = 0.83–0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76–0.92, P < .05). Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density

  16. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006.

    PubMed

    Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina

    2016-09-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76-0.92, P < .05). Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation
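
The agreement statistic used in both reports above is a weighted Cohen kappa over ordinal density categories; a minimal quadratic-weights sketch (the study's exact "quartile" weighting is not reproduced here):

```python
import numpy as np

def weighted_kappa(conf_matrix):
    """Weighted Cohen's kappa for ordinal categories (e.g. BI-RADS
    density grades) using quadratic disagreement weights
    w_ij = (i - j)^2 / (k - 1)^2."""
    m = np.asarray(conf_matrix, dtype=float)
    k = m.shape[0]
    idx = np.arange(k)
    w = (idx[:, None] - idx[None, :]) ** 2 / (k - 1) ** 2
    obs = m / m.sum()                      # observed joint proportions
    exp = np.outer(obs.sum(1), obs.sum(0))  # chance-expected proportions
    return 1.0 - (w * obs).sum() / (w * exp).sum()
```

For two categories this reduces to ordinary Cohen's kappa; quadratic weights penalise two-grade disagreements four times as heavily as one-grade disagreements.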

  17. Report: quantitative estimation of beta-sitosterol, lupeol, quercetin and quercetin glycosides from leaflets of Soymida febrifuga using HPTLC technique.

    PubMed

    Attarde, D L; Aurangabadkar, V M; Belsare, D P; Pal, S C

    2008-07-01

    Dried leaflets (10 g) of Soymida febrifuga (Meliaceae) were extracted with petroleum ether; the unsaponifiable matter was used quantitatively for the sample preparation labeled SF-U. Another 10 g of leaflet powder was extracted with methanol and used quantitatively for the sample labeled SF-A. Sample and standard solutions were applied to three different plates and developed in their respective mobile phases; the plates were scanned using a TLC Scanner III and quantified with the integration software CATS 4.05. Percentages were calculated from the standard and sample R(f) values, AUC, and dilution factors. The contents of beta-sitosterol, lupeol, quercetin, quercetin-3-O-galactoside, quercetin-3-O-xyloside and quercetin-3-O-rutinoside were determined as 0.02146% w/w, 0.0377% w/w, 0.4079% w/w, 0.6197% w/w, 2.974% w/w and 3.235% w/w, respectively, by HPTLC.

  18. Estimation of Coefficients of Individual Agreement (CIA’s) for Quantitative and Binary Data using SAS and R

    PubMed Central

    Pan, Yi; Gao, Jingjing; Haber, Michael; Barnhart, Huiman X.

    2010-01-01

    The coefficients of individual agreement (CIA's), which are based on the ratio of the intra- and inter-observer disagreement, provide a general approach for evaluating agreement between two fixed methods of measurement or human observers. In this paper, programs in both SAS and R are presented for estimating the CIA's between two observers with quantitative or binary measurements. A detailed illustration of the computations, macro variable definitions, and input and output for the SAS and R programs is also included in the text. The programs provide estimates of the CIA's, their standard errors, and confidence intervals, for cases with or without a reference method. Data from a carotid stenosis screening study are used as an example of quantitative measurements; data from a study in which ten radiologists evaluated mammograms illustrate the binary case. PMID:20079947
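
As a rough illustration of the disagreement ratio underlying the CIA's (not the exact estimator implemented in the SAS and R programs), one can compare mean squared deviations within and between two observers who each measure every subject twice; all names are illustrative:

```python
import numpy as np

def msd(x, y):
    """Mean squared deviation between paired measurements."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return ((x - y) ** 2).mean()

def cia_no_reference(o1_rep1, o1_rep2, o2_rep1, o2_rep2):
    """Illustrative coefficient of individual agreement without a
    reference method: mean intra-observer MSD (replication error)
    divided by the mean inter-observer MSD (disagreement)."""
    intra = 0.5 * (msd(o1_rep1, o1_rep2) + msd(o2_rep1, o2_rep2))
    inter = 0.25 * (msd(o1_rep1, o2_rep1) + msd(o1_rep1, o2_rep2)
                    + msd(o1_rep2, o2_rep1) + msd(o1_rep2, o2_rep2))
    return intra / inter
```

Values near 1 indicate that the two observers disagree with each other no more than each disagrees with itself, the intuition behind "individual agreement".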

  19. NEXRAD quantitative precipitation estimates, data acquisition, and processing for the DuPage County, Illinois, streamflow-simulation modeling system

    USGS Publications Warehouse

    Ortel, Terry W.; Spies, Ryan R.

    2015-11-19

    Next-Generation Radar (NEXRAD) has become an integral component in the estimation of precipitation (Kitzmiller and others, 2013). The high spatial and temporal resolution of NEXRAD has revolutionized the ability to estimate precipitation across vast regions, which is especially beneficial in areas without a dense rain-gage network. With the improved precipitation estimates, hydrologic models can produce reliable streamflow forecasts for areas across the United States. NEXRAD data from the National Weather Service (NWS) has been an invaluable tool used by the U.S. Geological Survey (USGS) for numerous projects and studies; NEXRAD data processing techniques similar to those discussed in this Fact Sheet have been developed within the USGS, including the NWS Quantitative Precipitation Estimates archive developed by Blodgett (2013).

  20. [Quantitative estimation of vegetation cover and management factor in USLE and RUSLE models by using remote sensing data: a review].

    PubMed

    Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie

    2012-06-01

    Soil loss prediction models such as the universal soil loss equation (USLE) and the revised universal soil loss equation (RUSLE) are useful tools for soil erosion risk assessment and soil conservation planning at the regional scale. A rational estimate of the vegetation cover and management factor, one of the most important parameters in USLE and RUSLE, is essential for accurate prediction of soil erosion. Traditional estimation based on field survey and measurement is time-consuming, laborious, and costly, and cannot rapidly provide the cover and management factor at the macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for estimating this factor over broad geographic areas. This paper summarizes research findings on the quantitative estimation of the vegetation cover and management factor from remote sensing data and analyzes the advantages and disadvantages of the various methods, aiming to provide a reference for further research on, and quantitative estimation of, the factor at large scales.
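
One widely used remote-sensing route to the cover and management (C) factor, of the kind this review surveys, is the empirical NDVI relation C = exp(-alpha * NDVI / (beta - NDVI)) with alpha ~ 2 and beta ~ 1 (van der Knijff et al.). A sketch under that assumption:

```python
import numpy as np

def c_factor_from_ndvi(ndvi, alpha=2.0, beta=1.0):
    """Cover-management (C) factor estimated from NDVI via the
    exponential relation C = exp(-alpha * ndvi / (beta - ndvi)).
    Bare soil (ndvi ~ 0) gives C ~ 1; dense vegetation drives C -> 0."""
    ndvi = np.asarray(ndvi, dtype=float)
    return np.exp(-alpha * ndvi / (beta - ndvi))
```

Applied to an NDVI raster this yields a C-factor map in one vectorised step, which is precisely the macro-scale speed advantage over field survey that the review emphasises; the coefficients should be recalibrated per region.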

  1. Quantitative Estimation of Risks for Production Unit Based on OSHMS and Process Resilience

    NASA Astrophysics Data System (ADS)

    Nyambayar, D.; Koshijima, I.; Eguchi, H.

    2017-06-01

    Three principal elements in the production field of the chemical/petrochemical industry are (i) Production Units, (ii) Production Plant Personnel and (iii) the Production Support System (the computer system introduced to improve productivity). Each principal element has production process resilience, i.e. a capability to restrain disruptive signals occurring in and around the production field, and for each, risk assessment is indispensable. In production facilities, the occupational safety and health management system (hereafter referred to as OSHMS) has been introduced to reduce the risk of accidents and troubles that may occur during production. OSHMS specifies a risk assessment to reduce potential risks in production facilities such as factories, and PDCA activities are required for continual improvement of the safety of the production environment. However, the OSHMS standard contains no clear statement on how it is to be adopted in the production field. This study introduces a metric that estimates the resilience of the production field from the resilience generated by the production plant personnel and the results of the risk assessment in the production field. A method for evaluating how systematically OSHMS functions are installed in the production field is also discussed, based on the resilience of the three principal elements.

  2. Estimating distributions out of qualitative and (semi)quantitative microbiological contamination data for use in risk assessment.

    PubMed

    Busschaert, P; Geeraerd, A H; Uyttendaele, M; Van Impe, J F

    2010-04-15

    A framework using maximum likelihood estimation (MLE) is used to fit a probability distribution to a set of qualitative (e.g., absence in 25 g), semi-quantitative (e.g., presence in 25 g and absence in 1 g) and/or quantitative test results (e.g., 10 CFU/g). Uncertainty about the parameters of the variability distribution is characterized through a non-parametric bootstrapping method. The resulting distribution function can be used as an input for a second-order Monte Carlo simulation in quantitative risk assessment. As an illustration, the method is applied to two sets of in silico generated data. It is demonstrated that correct interpretation of the data results in an accurate representation of the contamination level distribution. Subsequently, two case studies are analyzed, namely (i) quantitative analyses of Campylobacter spp. in food samples with nondetects, and (ii) combined quantitative, qualitative and semiquantitative analyses and nondetects of Listeria monocytogenes in smoked fish samples. The first of these case studies is also used to illustrate the influence of the limit of quantification, measurement error, and the number of samples included in the data set. Application of these techniques offers a way to perform meta-analysis of the many relevant yet diverse data sets that are available in the literature and in (inter)national reports of surveillance or baseline surveys, thereby increasing the information input of a risk assessment and, by consequence, the correctness of its outcome.
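The core of such an MLE framework is a likelihood that mixes exact (quantitative) observations with censored ones: quantitative results contribute the density, nondetects contribute the probability of falling below the limit of quantification. A minimal sketch for a lognormal contamination distribution with left-censored nondetects; the scipy-based routine and the synthetic data are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy import stats, optimize

def fit_lognormal_censored(x_quant, n_nondetect, loq):
    """MLE of (mu, sigma) for the log-concentration: quantitative
    results contribute the normal density of their log, nondetects
    contribute log P(X < loq)."""
    log_x = np.log(np.asarray(x_quant, dtype=float))
    log_loq = np.log(loq)

    def negloglik(p):
        mu, sigma = p
        if sigma <= 0:
            return np.inf
        ll = stats.norm.logpdf(log_x, mu, sigma).sum()
        ll += n_nondetect * stats.norm.logcdf(log_loq, mu, sigma)
        return -ll

    res = optimize.minimize(negloglik, x0=[log_x.mean(), log_x.std() + 0.1],
                            method="Nelder-Mead")
    return res.x  # (mu_hat, sigma_hat)

# Synthetic data: 300 samples from lognormal(mu=1.0, sigma=0.8); anything
# below loq = 1.0 CFU/g is reported only as a nondetect.
rng = np.random.default_rng(42)
conc = rng.lognormal(1.0, 0.8, 300)
loq = 1.0
mu_hat, sigma_hat = fit_lognormal_censored(conc[conc >= loq],
                                           int((conc < loq).sum()), loq)
print(mu_hat, sigma_hat)
```

Interval results such as "presence in 25 g, absence in 1 g" would add CDF-difference terms to the same likelihood.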

  3. Reproducibility of CSF quantitative culture methods for estimating rate of clearance in cryptococcal meningitis.

    PubMed

    Dyal, Jonathan; Akampurira, Andrew; Rhein, Joshua; Morawski, Bozena M; Kiggundu, Reuben; Nabeta, Henry W; Musubire, Abdu K; Bahr, Nathan C; Williams, Darlisha A; Bicanic, Tihana; Larsen, Robert A; Meya, David B; Boulware, David R

    2016-05-01

    Quantitative cerebrospinal fluid (CSF) cultures provide a measure of disease severity in cryptococcal meningitis. The fungal clearance rate by quantitative cultures has become a primary endpoint for phase II clinical trials. This study determined the inter-assay accuracy of three different quantitative culture methodologies. Among 91 participants with meningitis symptoms in Kampala, Uganda, during August-November 2013, 305 CSF samples were prospectively collected from patients at multiple time points during treatment. Samples were simultaneously cultured by three methods: (1) the St. George's method, using a 100 mcl input volume of CSF with five 1:10 serial dilutions; (2) the AIDS Clinical Trials Group (ACTG) method, using 1000, 100, and 10 mcl input volumes and two 1:100 dilutions with 100 and 10 mcl input volume per dilution on seven agar plates; and (3) a 10 mcl calibrated loop of undiluted and 1:100 diluted CSF (loop). Quantitative culture values did not statistically differ between the St. George and ACTG methods (P = .09) but did between the St. George and 10 mcl loop methods (P < .001). Repeated-measures pairwise correlation between any of the methods was high (r ≥ 0.88). For detecting sterility, the ACTG method had the highest negative predictive value of 97% (91% St. George, 60% loop), but the ACTG method had occasional (∼10%) difficulties in quantification due to colony clumping. For CSF clearance rate, the St. George and ACTG methods did not differ overall (mean −0.05 ± 0.07 log10 CFU/ml/day; P = .14) on a group level; however, individual-level clearance varied. The St. George and ACTG quantitative CSF culture methods produced comparable but not identical results. Quantitative cultures can inform treatment management strategies.
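All three methods reduce to the same arithmetic: colony counts from a countable plate are scaled by the dilution factor and the plated volume. A minimal sketch; the countable range of 30–300 colonies is a common microbiological convention assumed here, not a protocol detail from this record:

```python
def cfu_per_ml(colony_counts, dilution_factors, plated_volume_ml,
               countable=(30, 300)):
    """Estimate CFU/ml from a serial-dilution plate series.

    colony_counts[i] is the count observed at dilution_factors[i]
    (e.g. 1, 10, 100 for undiluted, 1:10, 1:100). The least-diluted
    plate within the countable range is used.
    """
    for count, dilution in zip(colony_counts, dilution_factors):
        if countable[0] <= count <= countable[1]:
            return count * dilution / plated_volume_ml
    raise ValueError("no plate in the countable range")

# 0.1 ml plated per plate; the 1:10 plate (152 colonies) is countable:
# 152 * 10 / 0.1 = 15,200 CFU/ml.
print(cfu_per_ml([1200, 152, 14], [1, 10, 100], 0.1))
```

Differences between the three methodologies then come down to input volumes, dilution schemes, and how clumped colonies are resolved.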

  4. Quantitative evaluation of hidden defects in cast iron components using ultrasound activated lock-in vibrothermography

    SciTech Connect

    Montanini, R.; Freni, F.; Rossi, G. L.

    2012-09-15

    This paper reports some of the first experimental results on the application of ultrasound activated lock-in vibrothermography for quantitative assessment of buried flaws in complex cast parts. The use of amplitude modulated ultrasonic heat generation allowed a selective response of defective areas within the part, as the defect itself is turned into a local thermal wave emitter. Quantitative evaluation of hidden damage was accomplished by estimating independently both the area and the depth extension of the buried flaws, while x-ray 3D computed tomography was used as a reference for sizing accuracy assessment. To retrieve the flaw's area, a simple yet effective histogram-based phase image segmentation algorithm with automatic pixel classification was developed. A clear correlation was found between the thermal (phase) signature measured by the infrared camera on the target surface and the actual mean cross-section area of the flaw. Due to the very fast cycle time (<30 s/part), the method could potentially be applied for 100% quality control of cast components.

  5. Teratogenic potency of valproate analogues evaluated by quantitative estimation of cellular morphology in vitro.

    PubMed

    Berezin, V; Kawa, A; Bojic, U; Foley, A; Nau, H; Regan, C; Edvardsen, K; Bock, E

    1996-10-01

    To develop a simple prescreening system for teratogenicity testing, a novel in vitro assay was established using computer-assisted microscopy allowing automatic delineation of the contours of stained cells and thereby quantitative determination of cellular morphology. The effects of valproic acid (VPA) and analogues with high as well as low teratogenic activities (as previously determined in vivo) were used as probes to study the discriminatory power of the in vitro model. VPA, a teratogenic analogue (+/-)-4-en-VPA, and a non-teratogenic analogue (E)-2-en-VPA, as well as the purified (S)- and (R)-enantiomers of 4-yn-VPA (teratogenic and non-teratogenic, respectively), were tested for their effects on the cellular morphology of cloned mouse fibroblastoid L-cell lines, neuroblastoma N2a cells, and rat glioma BT4Cn cells, and were found to induce varying increases in cellular area. Furthermore, it was demonstrated that under the chosen conditions the increase in area correlated statistically significantly with the teratogenic potency of the employed compounds. Setting the cellular area of mouse L-cells to 100% under control conditions, the most pronounced effect was observed for (S)-4-yn-VPA (211%, P < 0.001), followed by VPA (186%, P < 0.001), 4-en-VPA (169%, P < 0.001), and the non-teratogenic 2-en-VPA (137%, P < 0.005) and (R)-4-yn-VPA (105%). This effect was independent of the choice of substrata, since it was observed on L-cells grown on plastic, fibronectin, laminin and Matrigel. However, when VPA-treated cells were exposed to an arginyl-glycyl-aspartate (RGD)-containing peptide to test whether VPA treatment was able to modulate RGD-dependent integrin interactions with components of the extracellular matrix, hardly any effect could be observed, whereas control cells readily detached from the substratum, indicating a changed substrate adhesion of the VPA-treated cells. The data thus indicate that measurement of cellular area may serve as a simple in vitro test in the

  6. Comparison of Accelerometry Methods for Estimating Physical Activity.

    PubMed

    Kerr, Jacqueline; Marinac, Catherine R; Ellis, Katherine; Godbole, Suneeta; Hipp, Aaron; Glanz, Karen; Mitchell, Jonathan; Laden, Francine; James, Peter; Berrigan, David

    2017-03-01

    This study aimed to compare physical activity estimates across different accelerometer wear locations, wear time protocols, and data processing techniques. A convenience sample of middle-aged to older women wore a GT3X+ accelerometer at the wrist and hip for 7 d. Physical activity estimates were calculated using three data processing techniques: single-axis cut points, raw vector magnitude thresholds, and machine learning algorithms applied to the raw data from the three axes. Daily estimates were compared for the 321 women using generalized estimating equations. A total of 1420 d were analyzed. Compliance rates for the hip versus wrist location only varied by 2.7%. All differences between techniques, wear locations, and wear time protocols were statistically significant (P < 0.05). Mean minutes per day in physical activity varied from 22 to 67 depending on location and method. On the hip, the 1952-count cut point found at least 150 min·wk⁻¹ of physical activity in 22% of participants, raw vector magnitude found 32%, and the machine-learned algorithm found 74% of participants with 150 min of walking/running per week. The wrist algorithms found 59% and 60% of participants with 150 min of physical activity per week using the raw vector magnitude and machine-learned techniques, respectively. When the wrist device was worn overnight, up to 4% more participants met guidelines. Estimates varied by 52% across techniques and by as much as 41% across wear locations. Findings suggest that researchers should be cautious when comparing physical activity estimates from different studies. Efforts to standardize accelerometry-based estimates of physical activity are needed. A first step might be to report on multiple procedures until a consensus is achieved.
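The single-axis cut-point technique in this comparison is simple to reproduce: each minute whose activity count meets or exceeds a threshold is classified as moderate-to-vigorous physical activity. A sketch using the 1952-count hip cut point cited above; the synthetic day of counts is purely illustrative:

```python
import numpy as np

def mvpa_minutes(counts_per_min, cut_point=1952):
    """Minutes at or above the cut point (single-axis cut-point method)."""
    counts = np.asarray(counts_per_min)
    return int((counts >= cut_point).sum())

# One synthetic day: 1380 low-count minutes plus 60 high-count minutes.
day = np.concatenate([np.full(1380, 100), np.full(60, 2500)])
print(mvpa_minutes(day))  # 60 minutes meet the 1952-count threshold
```

The vector-magnitude and machine-learned techniques replace this threshold test with, respectively, a threshold on the three-axis magnitude and a trained classifier, which is why the three approaches can disagree so widely.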

  7. Validation and Estimation of Additive Genetic Variation Associated with DNA Tests for Quantitative Beef Cattle Traits

    USDA-ARS?s Scientific Manuscript database

    The U.S. National Beef Cattle Evaluation Consortium (NBCEC) has been involved in the validation of commercial DNA tests for quantitative beef quality traits since their first appearance on the U.S. market in the early 2000s. The NBCEC Advisory Council initially requested that the NBCEC set up a syst...

  8. The Overall Impact of Testing on Medical Student Learning: Quantitative Estimation of Consequential Validity

    ERIC Educational Resources Information Center

    Kreiter, Clarence D.; Green, Joseph; Lenoch, Susan; Saiki, Takuya

    2013-01-01

    Given medical education's longstanding emphasis on assessment, it seems prudent to evaluate whether our current research and development focus on testing makes sense. Since any intervention within medical education must ultimately be evaluated based upon its impact on student learning, this report seeks to provide a quantitative accounting of…

  9. Estimating ROI activity concentration with photon-processing and photon-counting SPECT imaging systems

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Frey, Eric C.

    2015-03-01

    Recently, a new class of imaging systems, referred to as photon-processing (PP) systems, is being developed that uses real-time maximum-likelihood (ML) methods to estimate multiple attributes per detected photon and store these attributes in list format. PP systems could have a number of potential advantages compared to systems that bin photons based on attributes such as energy, projection angle, and position, referred to as photon-counting (PC) systems. For example, PP systems do not suffer from binning-related information loss and offer the potential to extract information from attributes such as the energy deposited by the detected photon. To quantify the effects of this advantage on task performance, objective evaluation studies are required. We performed such a study in the context of quantitative 2-dimensional single-photon emission computed tomography (SPECT) imaging with the end task of estimating the mean activity concentration within a region of interest (ROI). We first theoretically outline the effect of null space on estimating the mean activity concentration, and argue that due to this effect, PP systems could have better estimation performance than PC systems with noise-free data. To evaluate the performance of PP and PC systems with noisy data, we developed a singular value decomposition (SVD)-based analytic method to estimate the activity concentration from PP systems. Using simulations, we studied the accuracy and precision of this technique in estimating the activity concentration. We used this framework to objectively compare PP and PC systems on the activity concentration estimation task. We investigated the effects of varying the size of the ROI and varying the number of bins for the attribute corresponding to the angular orientation of the detector in a continuously rotating SPECT system. The results indicate that in several cases, PP systems offer improved estimation performance compared to PC systems.
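The general idea behind SVD-based activity estimation can be illustrated on a toy linear imaging model g = Hf: a pseudoinverse built from the SVD reconstructs the object, and the ROI activity concentration is the mean over ROI pixels. This is a generic sketch with a random system matrix, not the paper's SPECT model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy linear imaging model: 16-pixel object, 64 noise-free measurements.
n_pix, n_meas = 16, 64
H = rng.uniform(0.1, 1.0, (n_meas, n_pix))   # system matrix
f_true = rng.uniform(1.0, 5.0, n_pix)        # activity per pixel
g = H @ f_true                                # projection data

# SVD-based pseudoinverse reconstruction; rcond truncates tiny
# singular values (the measurement-space null-space components).
f_hat = np.linalg.pinv(H, rcond=1e-10) @ g

roi = np.arange(4)  # hypothetical 4-pixel region of interest
print(f_hat[roi].mean(), f_true[roi].mean())
```

In a real PP-versus-PC comparison, H would encode the (continuous or binned) attribute model, and the null space of H is exactly what limits how well the ROI mean can be recovered.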

  10. Using Active Learning to Teach Concepts and Methods in Quantitative Biology.

    PubMed

    Waldrop, Lindsay D; Adolph, Stephen C; Diniz Behn, Cecilia G; Braley, Emily; Drew, Joshua A; Full, Robert J; Gross, Louis J; Jungck, John A; Kohler, Brynja; Prairie, Jennifer C; Shtylla, Blerta; Miller, Laura A

    2015-11-01

    This article provides a summary of the ideas discussed at the 2015 Annual Meeting of the Society for Integrative and Comparative Biology society-wide symposium on Leading Students and Faculty to Quantitative Biology through Active Learning. It also includes a brief review of the recent advancements in incorporating active learning approaches into quantitative biology classrooms. We begin with an overview of recent literature that shows that active learning can improve students' outcomes in Science, Technology, Engineering and Math Education disciplines. We then discuss how this approach can be particularly useful when teaching topics in quantitative biology. Next, we describe some of the recent initiatives to develop hands-on activities in quantitative biology at both the graduate and the undergraduate levels. Throughout the article we provide resources for educators who wish to integrate active learning and technology into their classrooms.

  11. A Bayesian quantitative nondestructive evaluation (QNDE) approach to estimating remaining life of aging pressure vessels and piping*

    NASA Astrophysics Data System (ADS)

    Fong, J. T.; Filliben, J. J.; Heckert, N. A.; Guthrie, W. F.

    2013-01-01

    In this paper, we use a Bayesian quantitative nondestructive evaluation (QNDE) approach to estimating the remaining life of aging structures and components. Our approach depends on in-situ NDE measurements of detectable crack lengths and crack growth rates in a multi-crack region of an aging component as a basis for estimating the mean and standard deviation of its remaining life. We introduce a general theory of crack growth involving multiple cracks such that the mean and standard deviation of the initial crack lengths can be directly estimated from NDE-measured crack length data over a period of several inspection intervals. A numerical example using synthetic NDE data for high strength steels is presented to illustrate this new methodology.

  12. Quantitative Structure Activity Relationship Models for the Antioxidant Activity of Polysaccharides

    PubMed Central

    Nie, Kaiying; Wang, Zhaojing

    2016-01-01

    In this study, quantitative structure activity relationship (QSAR) models for the antioxidant activity of polysaccharides were developed with the 50% effective concentration (EC50) as the dependent variable. To establish optimum QSAR models, multiple linear regression (MLR), support vector machines (SVM) and artificial neural networks (ANN) were used, and 11 molecular descriptors were selected. The optimum QSAR model for predicting the EC50 of DPPH-scavenging activity consisted of four major descriptors. The MLR model gave EC50 = 0.033Ara - 0.041GalA - 0.03GlcA - 0.025PC + 0.484 and fitted the training set with R = 0.807. The ANN model improved performance on both the training set (R = 0.96, RMSE = 0.018) and the test set (R = 0.933, RMSE = 0.055), indicating that it predicted the DPPH-scavenging activity of polysaccharides more accurately than the SVM and MLR models. A total of 67 compounds were used for predicting the EC50 of the hydroxyl radical scavenging activity of polysaccharides. The MLR model gave EC50 = 0.12PC + 0.083Fuc + 0.013Rha - 0.02UA + 0.372. A comparison of results from the models indicated that the ANN model (R = 0.944, RMSE = 0.119) was also the best one for predicting the hydroxyl radical scavenging activity of polysaccharides. The MLR and ANN models showed that Ara and GalA appeared critical in determining the EC50 of DPPH-scavenging activity, while Fuc, Rha, uronic acid and protein content had a great effect on the hydroxyl radical scavenging activity of polysaccharides. The antioxidant activity of polysaccharides was usually high in the MW range of 4000–100000, and could be affected simultaneously by other polysaccharide properties, such as uronic acid and Ara content. PMID:27685320
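The reported MLR model for DPPH-scavenging activity is a plain linear predictor and can be applied directly; the coefficients below are those quoted in the abstract, while the example composition values are arbitrary placeholders in the model's units:

```python
def ec50_dpph(ara, gal_a, glc_a, pc):
    """EC50 predicted by the MLR model reported above:
    EC50 = 0.033*Ara - 0.041*GalA - 0.03*GlcA - 0.025*PC + 0.484."""
    return 0.033 * ara - 0.041 * gal_a - 0.03 * glc_a - 0.025 * pc + 0.484

# Example with arbitrary composition values (Ara, GalA, GlcA, protein):
print(ec50_dpph(10.0, 5.0, 2.0, 3.0))
```

The signs match the paper's interpretation: higher GalA, GlcA and protein content lower the predicted EC50 (stronger scavenging), while Ara raises it.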

  13. Antitumor activity of 3,4-ethylenedioxythiophene derivatives and quantitative structure-activity relationship analysis

    NASA Astrophysics Data System (ADS)

    Jukić, Marijana; Rastija, Vesna; Opačak-Bernardi, Teuta; Stolić, Ivana; Krstulović, Luka; Bajić, Miroslav; Glavaš-Obrovac, Ljubica

    2017-04-01

    The aim of this study was to evaluate nine newly synthesized amidine derivatives of 3,4-ethylenedioxythiophene (3,4-EDOT) for their cytotoxic activity against a panel of human cancer cell lines and to perform a quantitative structure-activity relationship (QSAR) analysis of the antitumor activity of a total of 27 3,4-ethylenedioxythiophene derivatives. Induction of apoptosis was investigated for the selected compounds, along with delivery options for the optimization of activity. The best QSAR models obtained include the following groups of descriptors: BCUT, WHIM, 2D autocorrelations, 3D-MoRSE, GETAWAY descriptors, 2D frequency fingerprint and information indices. The obtained QSAR models should prove useful in elucidating the important physicochemical and structural requirements for this biological activity. Highly potent molecules have a symmetrical arrangement of substituents along the x axis, a high frequency of distances between N and O atoms at topological distance 9, as well as between C and N atoms at topological distance 10, and more C atoms located at topological distances 6 and 3. Based on the conclusions of the QSAR analysis, a new compound with potentially high activity was proposed.

  14. Active duration estimation of Subur Vallis, a Martian fluvial system

    NASA Astrophysics Data System (ADS)

    Koronczay, David; Kereszturi, Akos

    2017-04-01

    We estimated the age and the active period of a typical, moderately sized fluvial system at Xanthe Terra. Morphology was determined using HRSC and CTX images. Crater size-frequency distribution was used to determine the ages of the main terrain units. Based on the channel bed morphology, we used the Darcy-Weisbach resistance equation to estimate the average water flow velocity and discharge. In the next step, we used various sediment transport rate predictors from the literature to determine the erosion rate and, consequently, the likely timescale of the main erosional process that created the channel. We discuss the main sources of uncertainty in our results.
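The Darcy-Weisbach step can be sketched directly: mean velocity v = sqrt(8 g R S / f) and discharge Q = v * A. Martian gravity is used here, but the hydraulic radius, slope, friction factor and cross-section below are illustrative values, not the channel parameters from this study:

```python
import math

G_MARS = 3.71  # Martian gravitational acceleration, m/s^2

def darcy_weisbach_velocity(hydraulic_radius_m, slope, friction_factor,
                            g=G_MARS):
    """Mean flow velocity v = sqrt(8 g R S / f)."""
    return math.sqrt(8.0 * g * hydraulic_radius_m * slope / friction_factor)

def discharge(velocity_m_s, cross_section_m2):
    """Volumetric discharge Q = v * A."""
    return velocity_m_s * cross_section_m2

# Illustrative channel: R = 2 m, S = 0.001, f = 0.05, A = 100 m^2.
v = darcy_weisbach_velocity(2.0, 1e-3, 0.05)
print(v, discharge(v, 100.0))
```

Dividing an eroded volume by a sediment transport rate computed from such a discharge then gives the order-of-magnitude active duration.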

  15. The quantitative assessment of motor activity in mania and schizophrenia

    PubMed Central

    Minassian, Arpi; Henry, Brook L.; Geyer, Mark A.; Paulus, Martin P.; Young, Jared W.; Perry, William

    2009-01-01

    Background Increased motor activity is a cardinal feature of the mania of Bipolar Disorder (BD), and is thought to reflect dopaminergic dysregulation. Motor activity in BD has been studied almost exclusively with self-report and observer-rated scales, limiting the ability to objectively quantify this behavior. We used an ambulatory monitoring device to quantify motor activity in BD and schizophrenia (SCZ) patients in a novel exploratory paradigm, the human Behavioral Pattern Monitor (BPM). Method 28 patients in the manic phase of BD, 17 SCZ patients, and 21 nonpatient (NC) subjects were tested in the BPM, an unfamiliar room containing novel objects. Motor activity was measured with a wearable ambulatory monitoring device (LifeShirt). Results Manic BD patients exhibited higher levels of motor activity when exploring the novel environment than SCZ and NC groups. Motor activity showed some modest relationships with symptom ratings of mania and psychosis and was not related to smoking or body mass index. Limitations Although motor activity did not appear to be impacted significantly by antipsychotic or mood-stabilizing medications, this was a naturalistic study and medications were not controlled, thus limiting conclusions about potential medication effects on motor activity. Conclusion Manic BD patients exhibit a unique signature of motoric overactivity in a novel exploratory environment. The use of an objective method to quantify exploration and motor activity may help characterize the unique aspects of BD and, because it is amenable to translational research, may further the study of the biological and genetic bases of the disease. PMID:19435640

  16. Estimation of haplotype associated with several quantitative phenotypes based on maximization of area under a receiver operating characteristic (ROC) curve.

    PubMed

    Kamitsuji, Shigeo; Kamatani, Naoyuki

    2006-01-01

    An algorithm for estimating haplotypes associated with several quantitative phenotypes is proposed. The concept of a receiver operating characteristic (ROC) curve was introduced, and a linear combination of the quantitative phenotypic values was considered. This set of values was divided into two parts: values for subjects with and without a particular haplotype. The goodness of this partition was evaluated by the area under the ROC curve (AUC). The AUC value varies from 0 to 1 and is close to 1 when the partition has high accuracy. Therefore, the strength of association between phenotypes and haplotypes was considered to be proportional to the AUC value. In our algorithm, the parameters representing the degree of association between the haplotypes and phenotypes were estimated so as to maximize the AUC value; further, the haplotype with the maximum AUC value was considered to be the best haplotype associated with the phenotypes. This algorithm was implemented in the R language. The effectiveness of our algorithm was evaluated by applying it to real genotype data for the Calpain-10 gene obtained from diabetic patients. The results showed that our algorithm was more reasonable and advantageous for use with several quantitative phenotypes than the generalized linear model or the neural network model.
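The AUC criterion itself is straightforward: it equals the fraction of (carrier, non-carrier) pairs in which the carrier's combined phenotype score is higher (the Mann-Whitney statistic). A sketch that scores a linear combination of two phenotypes and grid-searches the mixing weight; the toy data and the grid search are illustrative, not the authors' R implementation:

```python
import numpy as np

def auc(scores, labels):
    """AUC as the Mann-Whitney probability that a haplotype carrier
    (label 1) outscores a non-carrier (label 0); ties count half."""
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=int)
    pos, neg = s[y == 1], s[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def best_weight(pheno1, pheno2, labels, grid=np.linspace(0, 1, 101)):
    """Weight w maximizing AUC of the combination w*p1 + (1-w)*p2."""
    scores = [auc(w * pheno1 + (1 - w) * pheno2, labels) for w in grid]
    return grid[int(np.argmax(scores))], max(scores)

# Toy example: phenotype 1 separates carriers perfectly, phenotype 2 is noise.
labels = np.array([0] * 5 + [1] * 5)
p1 = np.array([1, 2, 3, 4, 5, 10, 11, 12, 13, 14], dtype=float)
p2 = np.array([5, 1, 4, 2, 3, 3, 1, 5, 2, 4], dtype=float)
w, a = best_weight(p1, p2, labels)
print(w, a)
```

In the full algorithm this maximization runs over the combination parameters for each candidate haplotype, and the haplotype attaining the largest AUC is reported.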

  17. Utilization of quantitative structure-activity relationships (QSARs) in risk assessment: Alkylphenols

    SciTech Connect

    Beck, B.D.; Toole, A.P.; Callahan, B.G.; Siddhanti, S.K.

    1991-12-01

    Alkylphenols are a class of environmentally pervasive compounds, found both in natural (e.g., crude oils) and in anthropogenic (e.g., wood tar, coal gasification waste) materials. Despite the frequent environmental occurrence of these chemicals, there is a limited toxicity database on alkylphenols. The authors have therefore developed a 'toxicity equivalence approach' for alkylphenols which is based on their ability to inhibit, in a specific manner, the enzyme cyclooxygenase. Enzyme-inhibiting ability for individual alkylphenols can be estimated based on the quantitative structure-activity relationship developed by Dewhirst (1980) and is a function of the free hydroxyl group, electron-donating ring substituents, and hydrophobic aromatic ring substituents. The authors evaluated the toxicological significance of cyclooxygenase inhibition by comparing the inhibitory capacity of alkylphenols with that of acetylsalicylic acid, or aspirin, a compound whose low-level effects are due to cyclooxygenase inhibition. Since nearly complete absorption is predicted for both alkylphenols and aspirin, based on estimates of hydrophobicity and the fraction of charged molecules at gastrointestinal pHs, risks from alkylphenols can be expressed directly in terms of 'milligram aspirin equivalence' without correction for absorption differences. They recommend this method for assessing risks of mixtures of alkylphenols, especially for those compounds with no chronic toxicity data. (38 references)

  18. A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data

    NASA Astrophysics Data System (ADS)

    Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.

    2006-06-01

    Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method, before applying the method on actual patient study cases to compare with the conventional fitting approach and other established renal indices. The various parameter estimates obtained using the proposed method were found to be consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.

  19. Methodologies for the quantitative estimation of toxicant dose to cigarette smokers using physical, chemical and bioanalytical data.

    PubMed

    St Charles, Frank Kelley; McAughey, John; Shepperd, Christopher J

    2013-06-01

    Methodologies have been developed, described and demonstrated that convert mouth exposure estimates of cigarette smoke constituents to dose by accounting for smoke spilled from the mouth prior to inhalation (mouth-spill (MS)) and the respiratory retention (RR) during the inhalation cycle. The methodologies are applicable to just about any chemical compound in cigarette smoke that can be measured analytically and can be used with ambulatory population studies. Conversion of exposure to dose improves the relevancy for risk assessment paradigms. Except for urinary nicotine plus metabolites, biomarkers generally do not provide quantitative exposure or dose estimates. In addition, many smoke constituents have no reliable biomarkers. We describe methods to estimate the RR of chemical compounds in smoke based on their vapor pressure (VP) and to estimate the MS for a given subject. Data from two clinical studies were used to demonstrate dose estimation for 13 compounds, of which only 3 have urinary biomarkers. Compounds with VP > 10⁻⁵ Pa generally have RRs of 88% or greater, which do not vary appreciably with inhalation volume (IV). Compounds with VP < 10⁻⁷ Pa generally have RRs dependent on IV and lung exposure time. For MS, mean subject values from both studies were slightly greater than 30%. For constituents with urinary biomarkers, correlations with the calculated dose were significantly improved over correlations with mouth exposure. Of toxicological importance is that the dose correlations provide an estimate of the metabolic conversion of a constituent to its respective biomarker.
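The conversion described amounts to discounting mouth-level exposure by the mouth-spill fraction and multiplying by respiratory retention. A minimal arithmetic sketch; the MS ≈ 30% and RR ≈ 88% values are taken from the summary above, while per-constituent values would come from the VP-based models:

```python
def inhaled_dose(mouth_exposure_mg, mouth_spill_frac, respiratory_retention):
    """Dose = mouth exposure x (1 - MS) x RR."""
    return mouth_exposure_mg * (1.0 - mouth_spill_frac) * respiratory_retention

# 1.0 mg mouth-level exposure, 30% mouth spill, 88% respiratory retention:
print(inhaled_dose(1.0, 0.30, 0.88))  # approximately 0.616 mg retained
```

Because MS is subject-specific and RR is constituent-specific, the same mouth exposure can correspond to quite different doses across smokers and compounds.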

  20. Methodologies for the quantitative estimation of toxicant dose to cigarette smokers using physical, chemical and bioanalytical data

    PubMed Central

    McAughey, John; Shepperd, Christopher J.

    2013-01-01

    Methodologies have been developed, described and demonstrated that convert mouth exposure estimates of cigarette smoke constituents to dose by accounting for smoke spilled from the mouth prior to inhalation (mouth-spill (MS)) and the respiratory retention (RR) during the inhalation cycle. The methodologies are applicable to just about any chemical compound in cigarette smoke that can be measured analytically and can be used with ambulatory population studies. Conversion of exposure to dose improves the relevancy for risk assessment paradigms. Except for urinary nicotine plus metabolites, biomarkers generally do not provide quantitative exposure or dose estimates. In addition, many smoke constituents have no reliable biomarkers. We describe methods to estimate the RR of chemical compounds in smoke based on their vapor pressure (VP) and to estimate the MS for a given subject. Data from two clinical studies were used to demonstrate dose estimation for 13 compounds, of which only 3 have urinary biomarkers. Compounds with VP > 10−5 Pa generally have RRs of 88% or greater, which do not vary appreciably with inhalation volume (IV). Compounds with VP < 10−7 Pa generally have RRs dependent on IV and lung exposure time. For MS, mean subject values from both studies were slightly greater than 30%. For constituents with urinary biomarkers, correlations with the calculated dose were significantly improved over correlations with mouth exposure. Of toxicological importance is that the dose correlations provide an estimate of the metabolic conversion of a constituent to its respective biomarker. PMID:23742081

  1. Estimating base rates of impairment in neuropsychological test batteries: a comparison of quantitative models.

    PubMed

    Decker, Scott L; Schneider, W Joel; Hale, James B

    2012-01-01

    Neuropsychologists frequently rely on a battery of neuropsychological tests whose scores are normally distributed to determine impaired functioning. The statistical likelihood of Type I error in clinical decision-making is in part determined by the base rate at which normative individuals obtain atypical performance on neuropsychological tests. Base rates are most accurately obtained from co-normed measures, but this is rarely accomplished in neuropsychological testing. Several statistical methods have been proposed to estimate base rates for tests that are not co-normed. This study compared two statistical approaches (binomial and Monte Carlo models) used to estimate the base rates for flexible test batteries. The two approaches were compared against empirically derived base rates for a multitest co-normed battery of cognitive measures. Estimates were compared across a variety of conditions including age and different α levels (N = 3,356). Monte Carlo R² estimates ranged from .980 to .997 across five different age groups, indicating a good fit. In contrast, the binomial model fit estimates ranged from 0.387 to 0.646. Results confirm that the binomial model is insufficient for estimating base rates because it does not take into account correlations among measures in a multitest battery. Although the Monte Carlo model produced more accurate results, minor biases occurred that are likely due to skewness and kurtosis of the test variables. Implications for future research and applied practice are discussed. © The Author 2011. Published by Oxford University Press. All rights reserved.
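The gap between the two models can be reproduced directly: the binomial model assumes independent tests, while a Monte Carlo simulation can respect inter-test correlation. A sketch with five equicorrelated z-scored measures; the correlation of 0.6 and α = 0.05 are illustrative choices, not values from this study:

```python
import numpy as np
from scipy import stats

def binomial_base_rate(k_tests, alpha):
    """P(at least one impaired score) assuming independent tests."""
    return 1.0 - (1.0 - alpha) ** k_tests

def monte_carlo_base_rate(k_tests, alpha, corr, n_sim=200_000, seed=0):
    """Same probability when every pair of tests shares correlation `corr`."""
    rng = np.random.default_rng(seed)
    cov = np.full((k_tests, k_tests), corr)
    np.fill_diagonal(cov, 1.0)
    z = rng.multivariate_normal(np.zeros(k_tests), cov, size=n_sim)
    cutoff = stats.norm.ppf(alpha)          # e.g. -1.645 for alpha = .05
    return float(np.mean((z < cutoff).any(axis=1)))

print(binomial_base_rate(5, 0.05))          # independence assumption
print(monte_carlo_base_rate(5, 0.05, 0.6))  # correlated tests overlap, so lower
```

With correlated measures, "impaired" scores cluster in the same individuals, so the population base rate of at least one low score is smaller than the binomial model predicts.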

  2. Ultrasonic 3-D Vector Flow Method for Quantitative In Vivo Peak Velocity and Flow Rate Estimation.

    PubMed

    Holbek, Simon; Ewertsen, Caroline; Bouzari, Hamed; Pihl, Michael Johannes; Hansen, Kristoffer Lindskov; Stuart, Matthias Bo; Thomsen, Carsten; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt

    2017-03-01

    Current clinical ultrasound (US) systems are limited to showing blood flow movement in either 1-D or 2-D. In this paper, a method for estimating 3-D vector velocities in a plane using the transverse oscillation method, a 32×32 element matrix array, and the experimental US scanner SARUS is presented. The aim of this paper is to estimate precise flow rates and peak velocities derived from 3-D vector flow estimates. The emission sequence provides 3-D vector flow estimates at up to 1.145 frames/s in a plane, and was used to estimate 3-D vector flow in a cross-sectional image plane. The method is validated in two phantom studies, where flow rates are measured in a flow-rig, providing a constant parabolic flow, and in a straight-vessel phantom (∅ = 8 mm) connected to a flow pump capable of generating time-varying waveforms. Flow rates are estimated to be 82.1 ± 2.8 L/min in the flow-rig compared with the expected 79.8 L/min, and 2.68 ± 0.04 mL/stroke in the pulsating environment compared with the expected 2.57 ± 0.08 mL/stroke. Flow rates estimated in the common carotid artery of a healthy volunteer are compared with magnetic resonance imaging (MRI) measured flow rates using a 1-D through-plane velocity sequence. Mean flow rates were 333 ± 31 mL/min for the presented method and 346 ± 2 mL/min for the MRI measurements.

  3. Quantitative Structure--Activity Relationship Modeling of Rat Acute Toxicity by Oral Exposure

    EPA Science Inventory

    Background: Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. Objective: In this study, a combinatorial QSAR approach has been employed for the creation of robust and predictive models of acute toxi...

  4. Quantitative Structure--Activity Relationship Modeling of Rat Acute Toxicity by Oral Exposure

    EPA Science Inventory

    Background: Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. Objective: In this study, a combinatorial QSAR approach has been employed for the creation of robust and predictive models of acute toxi...

  5. PREDICTING TOXICOLOGICAL ENDPOINTS OF CHEMICALS USING QUANTITATIVE STRUCTURE-ACTIVITY RELATIONSHIPS (QSARS)

    EPA Science Inventory

    Quantitative structure-activity relationships (QSARs) are being developed to predict the toxicological endpoints for untested chemicals similar in structure to chemicals that have known experimental toxicological data. Based on a very large number of predetermined descriptors, a...

  6. PREDICTING TOXICOLOGICAL ENDPOINTS OF CHEMICALS USING QUANTITATIVE STRUCTURE-ACTIVITY RELATIONSHIPS (QSARS)

    EPA Science Inventory

    Quantitative structure-activity relationships (QSARs) are being developed to predict the toxicological endpoints for untested chemicals similar in structure to chemicals that have known experimental toxicological data. Based on a very large number of predetermined descriptors, a...

  7. Modeling real-time PCR kinetics: Richards reparametrized equation for quantitative estimation of European hake (Merluccius merluccius).

    PubMed

    Sánchez, Ana; Vázquez, José A; Quinteiro, Javier; Sotelo, Carmen G

    2013-04-10

    Real-time PCR is the most sensitive method for the detection and precise quantification of specific DNA sequences, but it is not usually applied as a quantitative method in seafood. In general, benchmark techniques, mainly the cycle threshold (Ct), are the routine method for quantitative estimations, but they are not the most precise approaches for a standard assay. In the present work, amplification data from European hake (Merluccius merluccius) DNA samples were accurately modeled by three reparametrized sigmoid equations, and the lag phase parameter (λc) from the four-parameter Richards equation was demonstrated to be the perfect substitute for Ct in PCR quantification. The concentrations of primers and probes were subsequently optimized by means of that selected kinetic parameter. Finally, the linear correlation between DNA concentration and λc was also confirmed.
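
    A lag-phase parameter of this kind can be read off an amplification curve geometrically: it is where the tangent at the point of maximum slope meets the baseline, and it shifts linearly with the log of the starting concentration. A sketch with simulated logistic curves (the 3.3-cycles-per-decade spacing is the ideal-efficiency assumption, not a value fitted by the paper):

```python
import numpy as np

def lag_phase(t, y):
    # lambda: where the tangent at the point of maximum slope meets the baseline
    dy = np.gradient(y, t)
    i = int(np.argmax(dy))
    return t[i] - (y[i] - y[0]) / dy[i]

cycles = np.linspace(0, 40, 401)
lams = []
for log_dna in [7, 6, 5, 4]:
    lam_true = 30 - 3.3*log_dna      # hypothetical: ~3.3 cycles per 10-fold dilution
    y = 1.0/(1.0 + np.exp(-0.8*(cycles - (lam_true + 2.5))))  # logistic amplification
    lams.append(lag_phase(cycles, y))
# the recovered lag phases decrease ~linearly with log starting concentration
slope = np.polyfit([7, 6, 5, 4], lams, 1)[0]   # close to -3.3
```

    That linearity is what makes the lag phase usable as a standard-curve quantity in place of Ct.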

  8. Rapid End-Point Quantitation of Prion Seeding Activity with Sensitivity Comparable to Bioassays

    PubMed Central

    Wilham, Jason M.; Orrú, Christina D.; Bessen, Richard A.; Atarashi, Ryuichiro; Sano, Kazunori; Race, Brent; Meade-White, Kimberly D.; Taubner, Lara M.; Timmes, Andrew; Caughey, Byron

    2010-01-01

    A major problem for the effective diagnosis and management of prion diseases is the lack of rapid high-throughput assays to measure low levels of prions. Such measurements have typically required prolonged bioassays in animals. Highly sensitive, but generally non-quantitative, prion detection methods have been developed based on prions' ability to seed the conversion of normally soluble protease-sensitive forms of prion protein to protease-resistant and/or amyloid fibrillar forms. Here we describe an approach for estimating the relative amount of prions using a new prion seeding assay called real-time quaking induced conversion assay (RT-QuIC). The underlying reaction blends aspects of the previously described quaking-induced conversion (QuIC) and amyloid seeding assay (ASA) methods and involves prion-seeded conversion of the alpha helix-rich form of bacterially expressed recombinant PrPC to a beta sheet-rich amyloid fibrillar form. The RT-QuIC is as sensitive as the animal bioassay, but can be accomplished in 2 days or less. Analogous to end-point dilution animal bioassays, this approach involves testing of serial dilutions of samples and statistically estimating the seeding dose (SD) giving positive responses in 50% of replicate reactions (SD50). Brain tissue from 263K scrapie-affected hamsters gave SD50 values of 10^11–10^12/g, making the RT-QuIC similar in sensitivity to end-point dilution bioassays. Analysis of bioassay-positive nasal lavages from hamsters affected with transmissible mink encephalopathy gave SD50 values of 10^3.5–10^5.7/ml, showing that nasal cavities release substantial prion infectivity that can be rapidly detected. Cerebral spinal fluid from 263K scrapie-affected hamsters contained prion SD50 values of 10^2.0–10^2.9/ml. RT-QuIC assay also discriminated deer chronic wasting disease and sheep scrapie brain samples from normal control samples. In principle, end-point dilution quantitation can be applied to many types of prion and amyloid seeding activities.
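
    End-point dilution titers like SD50 are classically computed with a Spearman-Kärber estimator over the proportion of positive replicate reactions at each dilution. A minimal sketch with an invented dilution series (4 replicate reactions per dilution; not data from the study):

```python
import numpy as np

def log10_sd50(x, positives, replicates):
    # x: -log10(dilution) for each step (1, 2, 3, ... for a 10-fold series)
    # Spearman-Karber: start at the last dilution with 100% positive reactions
    p = np.asarray(positives, float) / replicates
    i0 = int(np.nonzero(p == 1.0)[0].max())
    d = x[1] - x[0]                 # log10 step between dilutions
    return x[i0] - d/2 + d * p[i0:].sum()

x = np.arange(1.0, 7.0)             # 10^-1 ... 10^-6 dilutions
pos = [4, 4, 3, 1, 0, 0]            # positive wells out of 4 at each dilution
sd50 = log10_sd50(x, pos, 4)        # 50% endpoint midway between 10^-3 and 10^-4
```

    Here the 50% endpoint falls midway between the third and fourth dilutions, so log10(SD50) = 3.5, mirroring how the per-gram and per-ml titers above are reported.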

  9. Extrapolated withdrawal-interval estimator (EWE) algorithm: a quantitative approach to establishing extralabel withdrawal times.

    PubMed

    Martín-Jiménez, Tomás; Baynes, Ronald E; Craigmill, Arthur; Riviere, Jim E

    2002-08-01

    The extralabel use of drugs can be defined as the use of drugs in a manner inconsistent with their FDA-approved labeling. The passage of the Animal Medicinal Drug Use Clarification Act (AMDUCA) in 1994 and its implementation by the FDA Center for Veterinary Medicine in 1996 have allowed food animal veterinarians to use drugs legally in an extralabel manner, as long as an appropriate withdrawal period is established. The present study introduces and validates, with simulated and experimental data, the Extrapolated Withdrawal-Interval Estimator (EWE) Algorithm, a procedure aimed at predicting extralabel withdrawal intervals (WDIs) based on the label and pharmacokinetic literature data contained in the Food Animal Residue Avoidance Databank (FARAD). This is the first attempt at consistently obtaining WDI estimates that encompass a reasonable degree of statistical soundness. Data on the determination of withdrawal times after the extralabel use of the antibiotic oxytetracycline were obtained both with simulated disposition data and from the literature. A withdrawal interval was computed using the EWE Algorithm for an extralabel dose of 25 mg/kg (simulation study) and for a dose of 40 mg/kg (literature data). These estimates were compared with the withdrawal times computed with the simulated data and with the literature data, respectively. The EWE estimate of the WDI for a simulated extralabel dose of 25 mg/kg was 39 days; the withdrawal time (WDT) obtained for this dose in a tissue depletion study was also 39 days. The EWE estimate of the WDI for an extralabel intramuscular dose of 40 mg/kg in cattle, based on the kinetic data contained in the FARAD database, was 48 days; the withdrawal time experimentally obtained for similar use of this drug was 49 days. The EWE Algorithm can obtain WDI estimates that encompass the same degree of statistical soundness as the WDT estimates, provided that the assumptions of the approved dosage regimen hold for the extralabel dosage regimen.
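
    One common recipe behind withdrawal-time estimates of this general kind is a log-linear regression of tissue residues on time, with the withdrawal day set where an upper tolerance limit on the regression falls below the tissue tolerance. The sketch below uses invented depletion data, an invented 0.1-ppm tolerance, and a rounded stand-in tolerance factor; it is not the EWE or FARAD procedure itself.

```python
import math
import numpy as np

# hypothetical tissue-depletion data for an extralabel dose: log-linear decay
t = np.array([5., 10., 15., 20., 25., 30.])           # days post-dose
logc = np.array([1.9, 1.4, 0.9, 0.45, -0.05, -0.55])  # log10 residue, ppm
b, a = np.polyfit(t, logc, 1)                         # slope, intercept
s = (logc - (a + b*t)).std(ddof=2)                    # residual SD
k = 2.6             # rounded stand-in for a 95th-percentile/95%-confidence tolerance factor
tol = np.log10(0.1)                                   # hypothetical tolerance: 0.1 ppm
# first whole day on which the upper tolerance limit a + b*t + k*s is below tol
wdi = math.ceil((tol - a - k*s) / b)
```

    Rounding up to the next whole day is the conservative convention for reporting a withdrawal interval.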

  10. Modeling Bone Surface Morphology: A Fully Quantitative Method for Age-at-Death Estimation Using the Pubic Symphysis.

    PubMed

    Slice, Dennis E; Algee-Hewitt, Bridget F B

    2015-07-01

    The pubic symphysis is widely used in age estimation for the adult skeleton. Standard practice requires the visual comparison of surface morphology against criteria representing predefined phases and the estimation of case-specific age from an age range associated with the chosen phase. Known problems of method and observer error necessitate alternative tools to quantify age-related change in pubic morphology. This paper presents an objective, fully quantitative method for estimating age-at-death from the skeleton, which exploits a variance-based score of surface complexity computed from vertices obtained by scanning the pubic symphysis. For laser scans from 41 modern American male skeletons, this method produces results that are significantly associated with known age-at-death (RMSE = 17.15 years). Chronological age is therefore predicted equally well, if not better, with this robust, objective, and fully quantitative method than with prevailing phase-aging systems. This method contributes to forensic casework by responding to medico-legal expectations for evidence standards.
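
    A variance-based surface complexity score of this flavor can be illustrated directly: project the mesh vertices onto the normal of their least-squares plane and take the variance of those heights, so a billowy surface scores higher than a flattened one. A sketch on synthetic point clouds (the surfaces below are invented, not scan data, and this is only the general idea, not the paper's exact scoring):

```python
import numpy as np

def surface_complexity(vertices):
    # variance of vertex heights about the least-squares plane, found via SVD
    X = vertices - vertices.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return float(np.var(X @ vt[-1]))   # vt[-1] = direction of least variance

rng = np.random.default_rng(0)
xy = rng.uniform(0, 50, size=(2000, 2))
billowy = np.c_[xy, 2.0*np.sin(xy[:, 0]) + rng.normal(0, 0.2, 2000)]  # high relief
flat = np.c_[xy, rng.normal(0, 0.2, 2000)]                            # worn-down face
c_billowy, c_flat = surface_complexity(billowy), surface_complexity(flat)
```

    Because the plane is fitted by SVD rather than assumed to be the xy-plane, the score is invariant to how the bone was oriented in the scanner.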

  11. Ultrasonic backscatter coefficient quantitative estimates from high-concentration Chinese hamster ovary cell pellet biophantoms

    PubMed Central

    Han, Aiguo; Abuhabsah, Rami; Blue, James P.; Sarwate, Sandhya; O’Brien, William D.

    2011-01-01

    Previous work estimated the ultrasonic backscatter coefficient (BSC) from low-concentration (volume density < 3%) Chinese Hamster Ovary (CHO, 6.7-μm cell radius) cell pellets. This study extends the work to higher cell concentrations (volume densities: 9.6% to 63%). At low concentration, BSC magnitude is proportional to the cell concentration and BSC frequency dependency is independent of cell concentration. At high cell concentration, BSC magnitude is not proportional to cell concentration and BSC frequency dependency is dependent on cell concentration. This transition occurs when the volume density reaches between 10% and 30%. Under high cell concentration conditions, the BSC magnitude increases slower than proportionally with the number density at low frequencies (ka < 1), as observed by others. However, what is new is that the BSC magnitude can increase either slower or faster than proportionally with number density at high frequencies (ka > 1). The concentric sphere model least squares estimates show a decrease in estimated cell radius with number density, suggesting that the concentric spheres model is becoming less applicable as concentration increases because the estimated cell radius becomes smaller than that measured. The critical volume density, starting from when the model becomes less applicable, is estimated to be between 10% and 30% cell volume density. PMID:22225068

  12. A Unified Maximum Likelihood Framework for Simultaneous Motion and T1 Estimation in Quantitative MR T1 Mapping.

    PubMed

    Ramos-Llorden, Gabriel; den Dekker, Arnold J; Van Steenkiste, Gwendolyn; Jeurissen, Ben; Vanhevel, Floris; Van Audekerke, Johan; Verhoye, Marleen; Sijbers, Jan

    2017-02-01

    In quantitative MR T1 mapping, the spin-lattice relaxation time T1 of tissues is estimated from a series of T1-weighted images. As the T1 estimation is a voxel-wise estimation procedure, correct spatial alignment of the T1-weighted images is crucial. Conventionally, the T1-weighted images are first registered based on a general-purpose registration metric, after which the T1 map is estimated. However, as demonstrated in this paper, such a two-step approach leads to a bias in the final T1 map. In our work, instead of considering motion correction as a preprocessing step, we recover the motion-free T1 map using a unified estimation approach. In particular, we propose a unified framework where the motion parameters and the T1 map are simultaneously estimated with a Maximum Likelihood (ML) estimator. With our framework, the relaxation model, the motion model, and the data statistics are jointly incorporated to provide substantially more accurate motion and T1 parameter estimates. Experiments with realistic Monte Carlo simulations show that the proposed unified ML framework outperforms the conventional two-step approach as well as state-of-the-art model-based approaches, in terms of both motion and T1 map accuracy and mean-square error. Furthermore, the proposed method was additionally validated in a controlled experiment with real T1-weighted data and with two in vivo human brain T1-weighted data sets, showing its applicability in real-life scenarios.
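
    For context, the voxel-wise model-fitting step that any such pipeline ultimately performs can be sketched as a grid search over the inversion-recovery signal model |M0(1 − 2e^(−TI/T1))|, with M0 solved in closed form per candidate T1. The inversion times and parameter values below are hypothetical, and this noise-free sketch shows only the relaxation fit, not the paper's joint motion/T1 estimator.

```python
import numpy as np

TI = np.array([50., 150., 400., 800., 1600., 3200.])  # inversion times, ms (assumed)
T1_true, M0 = 900.0, 100.0
sig = np.abs(M0 * (1 - 2*np.exp(-TI/T1_true)))        # noise-free magnitude signal

T1_grid = np.arange(200.0, 2000.0, 1.0)
E = np.abs(1 - 2*np.exp(-TI[None, :]/T1_grid[:, None]))
M0_hat = (E*sig).sum(axis=1) / (E*E).sum(axis=1)      # closed-form M0 per candidate T1
sse = ((M0_hat[:, None]*E - sig)**2).sum(axis=1)
T1_hat = T1_grid[np.argmin(sse)]
```

    Profiling out M0 this way reduces the two-parameter fit to a one-dimensional search, which is why grid search is practical per voxel.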

  13. Improved TLC Bioautographic Assay for Qualitative and Quantitative Estimation of Tyrosinase Inhibitors in Natural Products.

    PubMed

    Zhou, Jinge; Tang, Qingjiu; Wu, Tao; Cheng, Zhihong

    2017-03-01

    TLC bioautography for tyrosinase inhibitors has made recent progress; however, an assay with relatively low enzyme consumption and quantitative capability would greatly advance the efficacy of related TLC bioautographic assays. An improved TLC bioautographic assay for detecting tyrosinase inhibitors was developed and validated in this study. L-DOPA, which has better water solubility than L-tyrosine, was used as the substrate instead of the previously reported L-tyrosine. The effects of enzyme and substrate concentrations, reaction temperatures and times, and pH values of the reaction system, as well as different plate types, on the TLC bioautographic assay were optimised. The quantitative analysis was conducted by densitometric scanning of spot areas, and expressed as the relative tyrosinase inhibitory capacity (RTIC) using a positive control (kojic acid) equivalent. The limit of detection (LOD) of this assay was 1.0 ng for kojic acid. This assay has acceptable accuracy (101.73-102.90%), intra- and inter-day and intra- and inter-plate precisions [relative standard deviation (RSD) less than 7.0%], and ruggedness (RSD less than 3.5%). The consumption of enzyme (75 U/mL) is relatively low. Two tyrosinase inhibitory compounds, naringenin and 1-O-β-D-glucopyranosyl-4-allylbenzene, have been isolated from Rhodiola sacra guided by this TLC bioautographic assay. Our improved assay is a relatively low-cost, sensitive, and quantitative method compared with previously reported TLC bioautographic assays. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Statistical estimation of correlated genome associations to a quantitative trait network.

    PubMed

    Kim, Seyoung; Xing, Eric P

    2009-08-01

    Many complex disease syndromes, such as asthma, consist of a large number of highly related, rather than independent, clinical or molecular phenotypes. This raises a new technical challenge in identifying genetic variations associated simultaneously with correlated traits. In this study, we propose a new statistical framework called graph-guided fused lasso (GFlasso) to directly and effectively incorporate the correlation structure of multiple quantitative traits such as clinical metrics and gene expressions in association analysis. Our approach represents correlation information explicitly among the quantitative traits as a quantitative trait network (QTN) and then leverages this network to encode structured regularization functions in a multivariate regression model over the genotypes and traits. The result is that the genetic markers that jointly influence subgroups of highly correlated traits can be detected jointly with high sensitivity and specificity. While most of the traditional methods examined each phenotype independently and combined the results afterwards, our approach analyzes all of the traits jointly in a single statistical framework. This allows our method to borrow information across correlated phenotypes to discover the genetic markers that perturb a subset of the correlated traits synergistically. Using simulated datasets based on the HapMap consortium and an asthma dataset, we compared the performance of our method with other methods based on single-marker analysis and regression-based methods that do not use any of the relational information in the traits. We found that our method showed an increased power in detecting causal variants affecting correlated traits. 
Our results showed that, when correlation patterns among traits in a QTN are considered explicitly and directly during a structured multivariate genome association analysis using our proposed methods, the power of detecting true causal SNPs with possibly pleiotropic effects increased.

  15. Methods for the quantitative comparison of molecular estimates of clade age and the fossil record.

    PubMed

    Clarke, Julia A; Boyd, Clint A

    2015-01-01

    Approaches quantifying the relative congruence, or incongruence, of molecular divergence estimates and the fossil record have been limited. Previously proposed methods are largely node specific, assessing incongruence at particular nodes for which both fossil data and molecular divergence estimates are available. These existing metrics, and other methods that quantify incongruence across topologies including entirely extinct clades, have so far not taken into account uncertainty surrounding both the divergence estimates and the ages of fossils. They have also treated molecular divergence estimates younger than previously assessed fossil minimum estimates of clade age as if they were the same as cases in which they were older. However, these cases are not the same. Recovered divergence dates younger than compared oldest known occurrences require prior hypotheses regarding the phylogenetic position of the compared fossil record and standard assumptions about the relative timing of morphological and molecular change to be incorrect. Older molecular dates, by contrast, are consistent with an incomplete fossil record and do not require prior assessments of the fossil record to be unreliable in some way. Here, we compare previous approaches and introduce two new descriptive metrics. Both metrics explicitly incorporate information on uncertainty by utilizing the 95% confidence intervals on estimated divergence dates and data on stratigraphic uncertainty concerning the age of the compared fossils. Metric scores are maximized when these ranges are overlapping. MDI (minimum divergence incongruence) discriminates between situations where molecular estimates are younger or older than known fossils reporting both absolute fit values and a number score for incompatible nodes. DIG range (divergence implied gap range) allows quantification of the minimum increase in implied missing fossil record induced by enforcing a given set of molecular-based estimates. 
These metrics are used

  16. Satellite estimation of incident photosynthetically active radiation using ultraviolet reflectance

    NASA Technical Reports Server (NTRS)

    Eck, Thomas F.; Dye, Dennis G.

    1991-01-01

    A new satellite remote sensing method for estimating the amount of photosynthetically active radiation (PAR, 400-700 nm) incident at the earth's surface is described and tested. Potential incident PAR for clear sky conditions is computed from an existing spectral model. A major advantage of the UV approach over existing visible band approaches to estimating insolation is the improved ability to discriminate clouds from high-albedo background surfaces. UV spectral reflectance data from the Total Ozone Mapping Spectrometer (TOMS) were used to test the approach for three climatically distinct, midlatitude locations. Estimates of monthly total incident PAR from the satellite technique differed from values computed from ground-based pyranometer measurements by less than 6 percent. This UV remote sensing method can be applied to estimate PAR insolation over ocean and land surfaces which are free of ice and snow.

  17. Comparison between geochemical and biological estimates of subsurface microbial activities.

    PubMed

    Phelps, T J; Murphy, E M; Pfiffner, S M; White, D C

    1994-01-01

    Geochemical and biological estimates of in situ microbial activities were compared from the aerobic and microaerophilic sediments of the Atlantic Coastal Plain. Radioisotope time-course experiments suggested oxidation rates greater than millimolar quantities per year for acetate and glucose. Geochemical analyses assessing oxygen consumption, soluble organic carbon utilization, sulfate reduction, and carbon dioxide production suggested organic oxidation rates of nano- to micromolar quantities per year. Radiotracer time-course experiments thus appeared to overestimate rates of organic carbon oxidation, sulfate reduction, and biomass production by a factor of 10^3–10^6 relative to estimates calculated from groundwater analyses. Based on the geochemical evidence, in situ microbial metabolism was estimated to be in the nano- to micromolar range per year, and the average doubling time for the microbial community was estimated to be centuries.
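
    The geochemical side of such a comparison is essentially mass-balance arithmetic: divide the drawdown of an electron acceptor along a flow path by the groundwater travel time. With invented but typical numbers (none taken from the study):

```python
# first-order in situ respiration rate from groundwater geochemistry
o2_upgradient = 250e-6      # dissolved O2, mol/L (hypothetical)
o2_downgradient = 50e-6     # mol/L after the flow path (hypothetical)
distance_m = 1000.0
velocity_m_per_yr = 5.0
travel_time_yr = distance_m / velocity_m_per_yr            # 200 yr
rate = (o2_upgradient - o2_downgradient) / travel_time_yr  # mol/L/yr
# micromolar-per-year scale, matching the geochemical estimates above
```

    A radiotracer incubation that implies millimolar-per-year turnover would exceed this estimate by roughly three orders of magnitude, which is the discrepancy the abstract reports.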

  18. FPGA-Based Fused Smart-Sensor for Tool-Wear Area Quantitative Estimation in CNC Machine Inserts

    PubMed Central

    Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto

    2010-01-01

    Manufacturing processes are of great relevance nowadays, when there is a constant demand for better productivity with high quality at low cost. The contribution of this work is the development of an FPGA-based fused smart-sensor that improves the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier, and a 3-axis accelerometer. Results from experimentation show that the fusion of both parameters makes it possible to obtain three times better accuracy than that obtained from the current and vibration signals used individually. PMID:22319304

  19. FPGA-based fused smart-sensor for tool-wear area quantitative estimation in CNC machine inserts.

    PubMed

    Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto

    2010-01-01

    Manufacturing processes are of great relevance nowadays, when there is a constant demand for better productivity with high quality at low cost. The contribution of this work is the development of an FPGA-based fused smart-sensor that improves the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier, and a 3-axis accelerometer. Results from experimentation show that the fusion of both parameters makes it possible to obtain three times better accuracy than that obtained from the current and vibration signals used individually.

  20. Human ECG signal parameters estimation during controlled physical activity

    NASA Astrophysics Data System (ADS)

    Maciejewski, Marcin; Surtel, Wojciech; Dzida, Grzegorz

    2015-09-01

    ECG signal parameters are commonly used indicators of human health condition. In most cases the patient should remain stationary during the examination to decrease the influence of muscle artifacts; during physical activity, the noise level increases significantly. The ECG signals were acquired during controlled physical activity on a stationary bicycle and during rest. Afterwards, the signals were processed using a method based on the Pan-Tompkins algorithm to estimate their parameters and to test the method.
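
    The core Pan-Tompkins stages (differentiation, squaring, moving-window integration, thresholding) can be sketched compactly. This simplified version omits the bandpass filtering stage and runs on a synthetic 60-bpm signal with Gaussian "QRS" spikes, so it is an illustration of the pipeline's shape, not the paper's implementation.

```python
import numpy as np

def detect_qrs(ecg, fs):
    # simplified Pan-Tompkins: differentiate, square, moving-window integrate
    deriv = np.gradient(ecg)
    squared = deriv**2
    win = int(0.15 * fs)                       # ~QRS-width integration window
    mwi = np.convolve(squared, np.ones(win)/win, mode="same")
    thr = 0.5 * mwi.max()                      # crude fixed threshold
    peaks = [i for i in range(1, len(mwi)-1)
             if mwi[i] > thr and mwi[i] >= mwi[i-1] and mwi[i] > mwi[i+1]]
    beats = []                                 # enforce a 200-ms refractory period
    for i in peaks:
        if not beats or i - beats[-1] > int(0.2 * fs):
            beats.append(i)
    return beats

fs = 250
t = np.arange(0, 10, 1/fs)
# synthetic 60-bpm ECG: one narrow Gaussian "QRS" per second
ecg = sum(np.exp(-((t - k - 0.5)**2) / (2 * 0.01**2)) for k in range(10))
beats = detect_qrs(ecg, fs)
```

    The real algorithm adapts its thresholds over time, which is what makes it usable on the noisy exercise recordings the abstract describes.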

  1. Soil carbon storage estimation in a forested watershed using quantitative soil-landscape modeling

    Treesearch

    James A. Thompson; Randall K. Kolka

    2005-01-01

    Carbon storage in soils is important to forest ecosystems. Moreover, forest soils may serve as important C sinks for ameliorating excess atmospheric CO2. Spatial estimates of soil organic C (SOC) storage have traditionally relied upon soil survey maps and laboratory characterization data. This approach does not account for inherent variability...

  2. A subagging regression method for estimating the qualitative and quantitative state of groundwater

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young

    2017-08-01

    A subsample aggregating (subagging) regression (SBR) method is proposed for the analysis of groundwater data and the uncertainty associated with trend estimation. The SBR method is validated against synthetic data competitively with other conventional robust and non-robust methods. The results verify that the estimation accuracy of the SBR method is consistent and superior to that of the other methods, and that the uncertainties are reasonably estimated; the other methods offer no uncertainty analysis option. For further validation, actual groundwater data are employed and analyzed comparatively with Gaussian process regression (GPR). In all cases, the trend and the associated uncertainties are reasonably estimated by both SBR and GPR, regardless of whether the data are Gaussian or non-Gaussian skewed. However, GPR is expected to have a limitation in applications to data severely corrupted by outliers, owing to its non-robustness. From these implementations, it is determined that the SBR method has the potential to be further developed as an effective tool for anomaly detection or outlier identification in groundwater state data such as groundwater level and contaminant concentration.
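
    The subagging idea itself is simple: fit the trend on many random subsamples and aggregate, with the spread of subsample fits doubling as an uncertainty estimate. A generic sketch on invented monthly groundwater levels (the 50% subsample fraction and linear trend model are illustrative choices, not the paper's configuration):

```python
import numpy as np

def subagging_trend(t, y, frac=0.5, n_sub=500, seed=0):
    # fit a linear trend on many random subsamples, then aggregate
    rng = np.random.default_rng(seed)
    m = max(2, int(frac * len(t)))
    slopes = np.empty(n_sub)
    for j in range(n_sub):
        idx = rng.choice(len(t), size=m, replace=False)
        slopes[j] = np.polyfit(t[idx], y[idx], 1)[0]
    # median as the point estimate, percentiles as the uncertainty band
    return np.median(slopes), np.percentile(slopes, [2.5, 97.5])

rng = np.random.default_rng(1)
t = np.arange(120.0)                          # e.g. monthly observations
y = 10.0 - 0.02*t + rng.normal(0, 0.3, 120)   # declining level + noise
y[30] += 5.0                                  # one gross outlier
slope, (lo, hi) = subagging_trend(t, y)
```

    Aggregating by the median is what buys the robustness: subsamples that happen to include the outlier pull individual fits, but not the ensemble's center.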

  3. Toward a Quantitative Estimate of Future Heat Wave Mortality under Global Climate Change

    PubMed Central

    Peng, Roger D.; Bobb, Jennifer F.; Tebaldi, Claudia; McDaniel, Larry; Bell, Michelle L.; Dominici, Francesca

    2011-01-01

    Background: Climate change is anticipated to affect human health by changing the distribution of known risk factors. Heat waves have had debilitating effects on human mortality, and global climate models predict an increase in the frequency and severity of heat waves. The extent to which climate change will harm human health through changes in the distribution of heat waves, and the sources of uncertainty in estimating these effects, have not been studied extensively. Objectives: We estimated the future excess mortality attributable to heat waves under global climate change for a major U.S. city. Methods: We used a database comprising daily data from 1987 through 2005 on mortality from all nonaccidental causes, ambient levels of particulate matter and ozone, temperature, and dew point temperature for the city of Chicago, Illinois. We estimated the associations between heat waves and mortality in Chicago using Poisson regression models. Results: Under three different climate change scenarios for 2081–2100 and in the absence of adaptation, the city of Chicago could experience between 166 and 2,217 excess deaths per year attributable to heat waves, based on estimates from seven global climate models. We noted considerable variability in the projections of annual heat wave mortality; the largest source of variation was the choice of climate model. Conclusions: The impact of future heat waves on human health will likely be profound, and significant gains can be expected by lowering future carbon dioxide emissions. PMID:21193384
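
    The statistical core of such an analysis is a Poisson regression of daily death counts on a heat-wave indicator (plus confounders, omitted here). A self-contained sketch fitted by iteratively reweighted least squares on simulated counts; the baseline rate and the 0.15 log-rate-ratio are invented, not the Chicago estimates:

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    # Poisson regression (log link) via iteratively reweighted least squares
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu          # working response
        XtW = (X * mu[:, None]).T             # IRLS weights W = mu
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

rng = np.random.default_rng(0)
n = 2000
heat = (rng.random(n) < 0.05).astype(float)   # hypothetical heat-wave indicator
y = rng.poisson(np.exp(3.0 + 0.15*heat))      # simulated daily death counts
X = np.c_[np.ones(n), heat]
beta = poisson_irls(X, y)
rate_ratio = np.exp(beta[1])                  # estimated heat-wave excess mortality
```

    Multiplying the fitted rate ratio by baseline deaths and projected heat-wave days is then what yields excess-death counts under a given climate scenario.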

  4. A quantitative framework for estimating risk of collision between marine mammals and boats

    USGS Publications Warehouse

    Martin, Julien; Sabatier, Quentin; Gowan, Timothy A.; Giraud, Christophe; Gurarie, Eliezer; Calleson, Scott; Ortega-Ortiz, Joel G.; Deutsch, Charles J.; Rycyk, Athena; Koslovsky, Stacie M.

    2016-01-01

    By applying encounter rate theory to the case of boat collisions with marine mammals, we gained new insights about encounter processes between wildlife and watercraft. Our work emphasizes the importance of considering uncertainty when estimating wildlife mortality. Finally, our findings are relevant to other systems and ecological processes involving the encounter between moving agents.
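
    The basic "ideal gas" result from encounter rate theory can be put in numbers directly: the expected encounter rate scales as 2 r v_rel times the product of the two densities and the area. All densities, speeds, and radii below are hypothetical placeholders, not values from the study, and the relative-speed formula is a common approximation rather than the exact mean over random headings.

```python
import math

boat_density = 5e-3       # boats per km^2 (hypothetical)
animal_density = 2e-2     # animals per km^2 (hypothetical)
v_boat, v_animal = 15.0, 3.0                  # speeds, km/h
v_rel = math.sqrt(v_boat**2 + v_animal**2)    # approximate mean relative speed
radius_km = 0.01                              # 10-m collision-risk encounter radius
area_km2 = 100.0
# ideal-gas encounter model: encounters/hour = 2 r v_rel D_boat D_animal A
rate_per_hour = 2 * radius_km * v_rel * boat_density * animal_density * area_km2
```

    Propagating uncertainty in each input through this product is what the abstract's emphasis on uncertainty amounts to in practice.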

  5. Quantitative estimates of reaction induced pressures: an example from the Norwegian Caledonides.

    NASA Astrophysics Data System (ADS)

    Vrijmoed, Johannes C.; Podladchikov, Yuri Y.

    2013-04-01

    Estimating the pressure and temperature of metamorphic rocks is fundamental to the understanding of geodynamics. It is therefore important to determine the mechanisms that were responsible for the pressure and temperature recorded by metamorphic rocks. Both pressure and temperature increase with depth in the Earth. Whereas temperature can vary due to local heat sources such as magmatic intrusions, percolation of hot fluids, or deformation in shear zones, pressure in petrology is generally assumed to vary homogeneously with depth. However, fluid injection into veins, the development of pressure shadows around porphyroblasts, and the fracturing and folding of rocks all involve variations in stress and therefore also in pressure (mean stress). Volume change during phase transformations or mineral reactions has the potential to build pressure if it proceeds faster than the minerals or rocks can deform to accommodate the volume change. This mechanism of pressure generation does not require the rocks to be under differential stress; it may, however, lead to the development of local differential stress. The Western Gneiss Region (WGR) is a basement window within the Norwegian Caledonides. This area is well known for its occurrences of HP to UHP rocks, mainly found as eclogite boudins and lenses and more rarely within felsic gneisses. Present observations document a regional metamorphic gradient increasing towards the NW, and structures in the field can account for the exhumation of the (U)HP rocks from ~2.5 to 3 GPa. Locally, however, mineralogical and geothermobarometric evidence points to metamorphic pressures up to 4 GPa. These locations present local extreme pressure excursions from the regional and mostly coherent metamorphic gradient that are difficult to account for by present-day structural field observations.
Detailed structural, petrological, mineralogical, geochemical and geochronological study at the Svartberget UHP diamond locality have shown the injection
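
    The pressure-building mechanism invoked above admits an order-of-magnitude estimate: for a volume change confined by an elastic host, the overpressure scales as ΔP ≈ K·ΔV/V. The bulk modulus and volume change below are illustrative round numbers, not values derived for the Svartberget locality.

```python
# order-of-magnitude estimate of reaction-induced overpressure in a confined volume
K = 100e9           # bulk modulus of crustal rock, Pa (~100 GPa, illustrative)
dV_over_V = 0.01    # 1% reaction-induced volume increase (illustrative)
dP = K * dV_over_V  # Pa
dP_GPa = dP / 1e9   # -> 1 GPa
```

    A percent-level confined volume change thus yields GPa-scale overpressure, comparable to the local excursions above the regional gradient discussed in the abstract.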

  6. Commercial Activities in Primary Schools: A Quantitative Study

    ERIC Educational Resources Information Center

    Raine, Gary

    2007-01-01

    The commercialisation of schools is a controversial issue, but very little is known about the actual situation in UK schools. The aim of this study was to investigate, with particular reference to health education and health promotion, commercial activities and their regulation in primary schools in the Yorkshire and Humber region of the UK. A…

  7. Commercial Activities in Primary Schools: A Quantitative Study

    ERIC Educational Resources Information Center

    Raine, Gary

    2007-01-01

    The commercialisation of schools is a controversial issue, but very little is known about the actual situation in UK schools. The aim of this study was to investigate, with particular reference to health education and health promotion, commercial activities and their regulation in primary schools in the Yorkshire and Humber region of the UK. A…

  8. Quantitative determination of effective nibbling activities contaminating restriction endonuclease preparations.

    PubMed

    Hashimoto-Gotoh, T

    1995-10-10

    A simple and sensitive procedure with which to detect residual exonucleolytic nibbling activities contaminating restriction endonuclease preparations is described. The procedure uses the kyosei-plasmid, pKF4, which confers kanamycin resistance and enforces streptomycin sensitivity encoded by the trp promoter/operator-driven rpsL+amber (PO(trp)-rpsL+4(am)) gene onto Escherichia coli streptomycin-resistant, amber-suppressive, trp repressor-negative strains such as TH5. When TH5 cells transformed by pKF4 were selected on agar medium containing kanamycin plus streptomycin, the efficiency of transformation plating was substantially lower than that on agar containing kanamycin alone. However, when pKF4 DNA was digested by restriction enzymes that cut once per molecule within PO(trp)-rpsL+4(am) and religated, the plating efficiency increased depending on the degree of contamination by exonucleolytic nibbling activities in the enzyme preparations, due to deletion mutations at the ligated junction. Plating efficiency was converted to "effective nibbling activity" corresponding to Bal31 nuclease-equivalent units. Using this procedure, effective nibbling activities were detected in 17 of 34 commercial samples of restriction enzymes tested. The method is simple and more sensitive than the procedures used by the commercial suppliers, and it is applicable to the quality control testing of more than 100 restriction enzymes.

  9. EIA Corrects Errors in Its Drilling Activity Estimates Series

    EIA Publications

    1998-01-01

    The Energy Information Administration (EIA) has published monthly and annual estimates of oil and gas drilling activity since 1978. These data are key information for many industry analysts, serving as a leading indicator of trends in the industry and a barometer of general industry status.

  10. EIA Completes Corrections to Drilling Activity Estimates Series

    EIA Publications

    1999-01-01

    The Energy Information Administration (EIA) has published monthly and annual estimates of oil and gas drilling activity since 1978. These data are key information for many industry analysts, serving as a leading indicator of trends in the industry and a barometer of general industry status.

  11. Evaluating spatial patterns of a distributed hydrological model forced by polarimetric radar based quantitative precipitation estimation

    NASA Astrophysics Data System (ADS)

    He, X.; Sonnenborg, T. O.; Koch, J.; Zheng, C.; Jensen, K. H.

    2016-12-01

    Precipitation is the main driver of all hydrological processes in the terrestrial water cycle. Estimation of precipitation at catchment scale usually involves two aspects: the areal mean, which is the averaged amount across the entire catchment, and the spatial pattern, which refers to the internal distribution of precipitation within the catchment. Areal mean precipitation can be derived from rain gauge observations, whereas spatial patterns can be estimated by interpolating point data or by range scanning. Weather radar is a range-scanning instrument with the advantages of high spatial and temporal resolution, full automation and large spatial coverage. While the areal mean is still estimated more reliably from rain gauge measurements, spatial patterns, on the other hand, are clearly better delineated by radar-estimated precipitation. In the present study, we investigate the impact of diverging spatial pattern information in different precipitation estimates on distributed hydrological modeling. Precipitation products are used as forcing data and are retrieved from 1) interpolation of rain gauge data, 2) conventional (single-pol) radar data, and 3) polarimetric (dual-pol) radar data. Four years of continuous hourly precipitation are prepared and incorporated in a coupled MIKE SHE - SWET model for a catchment in western Denmark. One of the innovative contributions of the study is the application of Empirical Orthogonal Function analysis to evaluate spatial pattern similarity, which enables a true pattern comparison of the simulated hydrological variables. The results suggest that all models are able to generate comparable hydrological simulations in terms of stream discharge and groundwater elevation. Similarity of precipitation patterns decreases with increasing rainfall intensity. Analyzing the scale dependency of simulated hydrological components on rainfall patterns reveals that significant variations are observed below 100 km2. Based on the above
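
The Empirical Orthogonal Function analysis mentioned above can be sketched generically: each time step of a gridded field is flattened into a row vector, and an SVD of the anomaly matrix yields the dominant spatial patterns (EOFs) and their loadings, which can then be compared between simulations. The toy data below are synthetic placeholders, not the study's MIKE SHE - SWET output.

```python
# Minimal EOF analysis sketch: rows are time snapshots of a gridded field,
# flattened to vectors; the leading EOFs are the dominant spatial patterns.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.normal(size=(20, 4 * 6))        # 20 time steps, 24 grid cells
anomalies = snapshots - snapshots.mean(axis=0)  # remove the temporal mean

# SVD of the anomaly matrix: rows of vt are the EOFs (spatial patterns),
# u * s gives the principal-component time series (loadings).
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
explained = s**2 / np.sum(s**2)                 # variance fraction per EOF
print(explained[:3])
```

Pattern similarity between two simulated fields can then be quantified by projecting both onto the same leading EOFs and comparing the loadings.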

  12. Quantitative measurements of active Ionian volcanoes in Galileo NIMS data

    NASA Astrophysics Data System (ADS)

    Saballett, Sebastian; Rathbun, Julie A.; Lopes, Rosaly M. C.; Spencer, John R.

    2016-10-01

    Io is the most volcanically active body in our solar system. The spatial distribution of volcanoes on a planetary body's surface gives clues to its basic inner workings (e.g., plate tectonics on Earth). Tidal heating is the major contributor to active surface geology in the outer solar system, and yet its mechanism is not completely understood. Io's volcanoes are the clearest signature of tidal heating, and measurements of the total heat output and how it varies in space and time are useful constraints on tidal heating. Hamilton et al. (2013) showed through a nearest-neighbor analysis that Io's hotspots are globally random, but regionally uniform near the equator. Lopes-Gautier et al. (1999) compared the locations of hotspots detected by NIMS to the spatial variation of heat flow predicted by two end-member tidal heating models. They found that the distribution of hotspots is more consistent with tidal heating occurring in the asthenosphere rather than in the mantle. Hamilton et al. (2013) demonstrated that clustering of hotspots also supports a dominant role for asthenosphere heating. These studies were unable to account for the relative brightness of the hotspots. Furthermore, studies of the temporal variability of Ionian volcanoes have yielded substantial insight into their nature. The Galileo Near Infrared Mapping Spectrometer (NIMS) produced a large dataset from which to observe volcanic activity. NIMS made well over 100 observations of Io over an approximately 10-year time frame. With wavelengths spanning 0.7 to 5.2 microns, it is ideally suited to measure blackbody radiation from surfaces with temperatures over 300 K. Here, we report on our effort to determine the activity level of each hotspot observed in the NIMS data. We chose to use 3.5-micron brightness as a proxy for activity level because it is easy to compare to, and incorporate, ground-based observations. We fit a 1-temperature blackbody to spectra in each grating position and averaged the
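
The single-temperature blackbody fit described above rests on the Planck function. A minimal sketch (with illustrative temperatures, not NIMS-fitted values) shows why 3.5-micron brightness isolates hot volcanic material from Io's cold background:

```python
# Planck spectral radiance for a single-temperature blackbody.
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    prefactor = 2.0 * H * C**2 / wavelength_m**5
    return prefactor / math.expm1(H * C / (wavelength_m * KB * temp_k))

b_hot = planck(3.5e-6, 600.0)    # hypothetical 600 K hotspot
b_cold = planck(3.5e-6, 120.0)   # typical cold background surface
print(b_hot / b_cold)            # hotspot dominates at 3.5 microns
```

In practice the fit adjusts temperature and emitting area to the observed spectrum in each grating position; the ratio above illustrates why the cold background contributes negligibly at 3.5 microns.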

  13. Distribution and Quantitative Estimates of Variant Creutzfeldt-Jakob Disease Prions in Tissues of Clinical and Asymptomatic Patients

    PubMed Central

    Douet, Jean Y.; Lacroux, Caroline; Aron, Naima; Head, Mark W.; Lugan, Séverine; Tillier, Cécile; Huor, Alvina; Cassard, Hervé; Arnold, Mark; Beringue, Vincent; Ironside, James W.

    2017-01-01

    In the United Kingdom, ≈1 of 2,000 persons could be infected with variant Creutzfeldt-Jakob disease (vCJD). Therefore, risk of transmission of vCJD by medical procedures remains a major concern for public health authorities. In this study, we used in vitro amplification of prions by protein misfolding cyclic amplification (PMCA) to estimate the distribution and level of the vCJD agent in 21 tissues from 4 patients who died of clinical vCJD and from 1 asymptomatic person with vCJD. PMCA identified major levels of vCJD prions in a range of tissues, including liver, salivary gland, kidney, lung, and bone marrow. Bioassays confirmed that the quantitative estimates of vCJD prion accumulation provided by PMCA are indicative of vCJD infectivity levels in tissues. Findings provide critical data for the design of measures to minimize the risk for iatrogenic transmission of vCJD. PMID:28518033

  14. FT-IR method development and validation for quantitative estimation of zidovudine in bulk and tablet dosage form.

    PubMed

    Bansal, R; Guleria, A; Acharya, P C

    2013-04-01

    A new, simple and cost-effective infrared spectroscopic method has been developed for the estimation of zidovudine (CAS 30516-87-1) in bulk and tablet dosage form. The quantitative analysis of zidovudine was carried out in solid form using the KBr pellet method and in liquid form using a quartz cuvette. These methods were developed and validated for various parameters according to ICH guidelines. The linearity range was found to be 0.8-1.6% w/w in the KBr pellet method and 250-1500 μg ml-1 in solution. The proposed methods were successfully applied for the determination of zidovudine in a pharmaceutical formulation (tablets). The results demonstrated that the proposed methods are accurate, precise and reproducible (relative standard deviation <2%), while being simple, economical and less time-consuming than other available methods, and can be used for estimation of zidovudine in different dosage forms. © Georg Thieme Verlag KG Stuttgart · New York.

  15. Evaluation of two "integrated" polarimetric Quantitative Precipitation Estimation (QPE) algorithms at C-band

    NASA Astrophysics Data System (ADS)

    Tabary, Pierre; Boumahmoud, Abdel-Amin; Andrieu, Hervé; Thompson, Robert J.; Illingworth, Anthony J.; Le Bouar, Erwan; Testud, Jacques

    2011-08-01

    Two so-called "integrated" polarimetric rate estimation techniques, ZPHI (Testud et al., 2000) and ZZDR (Illingworth and Thompson, 2005), are evaluated using 12 episodes of the year 2005 observed by the French C-band operational Trappes radar, located near Paris. The term "integrated" means that the concentration parameter of the drop size distribution is assumed to be constant over some area and the algorithms retrieve it using the polarimetric variables in that area. The evaluation is carried out in ideal conditions (no partial beam blocking, no ground-clutter contamination, no bright-band contamination, a posteriori calibration of the radar variables ZH and ZDR) using hourly rain gauges located at distances less than 60 km from the radar. Also included in the comparison, for the sake of benchmarking, is a conventional Z = 282R^1.66 estimator, with and without attenuation correction and with and without adjustment by rain gauges as currently done operationally at Météo France. Under those ideal conditions, the two polarimetric algorithms, which rely solely on radar data, appear to perform as well as if not better than the conventional algorithms, depending on the measurement conditions (attenuation, rain rates, …), even when the latter take rain gauges into account through the adjustment scheme. ZZDR with attenuation correction is the best estimator for hourly rain gauge accumulations lower than 5 mm h-1 and ZPHI is the best one above that threshold. A perturbation analysis has been conducted to assess the sensitivity of the various estimators to biases on ZH and ZDR, taking into account the typical accuracy and stability that can reasonably be achieved with modern operational radars (1 dB on ZH and 0.2 dB on ZDR). A +1 dB positive bias on ZH (radar too hot) results in a +14% overestimation of the rain rate with the conventional estimator used in this study (Z = 282R^1.66), a -19% underestimation with ZPHI and a +23
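
The quoted sensitivity of the conventional estimator can be reproduced by inverting the Z-R relation and applying a +1 dB bias, i.e. multiplying the linear reflectivity by 10^0.1; a short sketch:

```python
# Sensitivity of the conventional Z-R estimator Z = 282 R^1.66 to a
# reflectivity bias. A +1 dB bias multiplies the linear reflectivity
# (in mm^6 m^-3) by 10**0.1.

def rain_rate(z_linear, a=282.0, b=1.66):
    """Invert Z = a * R**b for the rain rate R (mm/h)."""
    return (z_linear / a) ** (1.0 / b)

z = 282.0 * 10.0 ** 1.66             # reflectivity for a 10 mm/h rain rate
r_true = rain_rate(z)                # 10.0 mm/h
r_biased = rain_rate(z * 10 ** 0.1)  # same measurement with a +1 dB bias
overestimation = r_biased / r_true - 1.0
print(f"{overestimation:+.1%}")      # ≈ +14.9%, consistent with the +14% quoted
```

Because the bias enters as a fixed multiplicative factor on Z, the relative error 10^(0.1/1.66) - 1 is independent of the rain rate itself.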

  16. Quantitative PCR estimates Angiostrongylus cantonensis (rat lungworm) infection levels in semi-slugs (Parmarion martensi)

    PubMed Central

    Jarvi, Susan I.; Farias, Margaret E.M.; Howe, Kay; Jacquier, Steven; Hollingsworth, Robert; Pitt, William

    2013-01-01

    The life cycle of the nematode Angiostrongylus cantonensis involves rats as the definitive host and slugs and snails as intermediate hosts. Humans can become infected upon ingestion of intermediate or paratenic (passive carrier) hosts containing stage L3 A. cantonensis larvae. Here, we report a quantitative PCR (qPCR) assay that provides a reliable, relative measure of parasite load in intermediate hosts. Quantification of the levels of infection of intermediate hosts is critical for determining A. cantonensis intensity on the Island of Hawaii. The identification of high intensity infection ‘hotspots’ will allow for more effective targeted rat and slug control measures. qPCR appears more efficient and sensitive than microscopy and provides a new tool for quantification of larvae from intermediate hosts, and potentially from other sources as well. PMID:22902292

  17. Improved Direct Viable Count Procedure for Quantitative Estimation of Bacterial Viability in Freshwater Environments

    PubMed Central

    Yokomaku, Daisaku; Yamaguchi, Nobuyasu; Nasu, Masao

    2000-01-01

    A direct viable count (DVC) procedure was developed which clearly and easily discriminates the viability of bacterial cells. In this quantitative DVC (qDVC) procedure, viable cells are selectively lysed by spheroplast formation caused by incubation with antibiotics and glycine. This glycine effect leads to swollen cells with a very loose cell wall. The viable cells then are lysed easily by a single freeze-thaw treatment. The number of viable cells was obtained by subtracting the number of remaining cells after the qDVC procedure from the total cell number before the qDVC incubation. This improved procedure should provide useful information about the metabolic potential of natural bacterial communities. PMID:11097948

  18. Skill Assessment of a Hybrid Technique to Estimate Quantitative Precipitation Forecasts for Galicia (NW Spain)

    NASA Astrophysics Data System (ADS)

    Lage, A.; Taboada, J. J.

    Precipitation is the most obvious of the weather elements in its effects on normal life. Numerical weather prediction (NWP) is generally used to produce quantitative precipitation forecasts (QPF) beyond the 1-3 h time frame. These models often fail to predict small-scale variations of rain because of spin-up problems and their coarse spatial and temporal resolution (Antolik, 2000). Moreover, there are some uncertainties about the behaviour of the NWP models in extreme situations (de Bruijn and Brandsma, 2000). Hybrid techniques, combining the benefits of NWP and statistical approaches in a flexible way, are very useful for achieving a good QPF. In this work, a new QPF technique for Galicia (NW Spain) is presented. This region has a percentage of rainy days per year greater than 50%, with quantities that may cause floods, causing human and economic damage. The technique is composed of a NWP model (ARPS) and a statistical downscaling process based on an automated classification scheme of atmospheric circulation patterns for the Iberian Peninsula (J. Ribalaygua and R. Boren, 1995). Results show that QPF for Galicia is improved using this hybrid technique. [1] Antolik, M.S. 2000. "An Overview of the National Weather Service's centralized statistical quantitative precipitation forecasts". Journal of Hydrology, 239, pp. 306-337. [2] de Bruijn, E.I.F. and T. Brandsma. "Rainfall prediction for a flooding event in Ireland caused by the remnants of Hurricane Charley". Journal of Hydrology, 239, pp. 148-161. [3] Ribalaygua, J. and Boren, R. "Clasificación de patrones espaciales de precipitación diaria sobre la España Peninsular" ["Classification of spatial patterns of daily precipitation over peninsular Spain"]. Informes N 3 y 4 del Servicio de Análisis e Investigación del Clima. Instituto Nacional de Meteorología. Madrid. 53 pp.

  19. Quantitative estimation of itopride hydrochloride and rabeprazole sodium from capsule formulation.

    PubMed

    Pillai, S; Singhvi, I

    2008-09-01

    Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method for simultaneous estimation of the two-component drug mixture of itopride hydrochloride and rabeprazole sodium from a combined capsule dosage form have been developed. The first method involves forming and solving simultaneous equations using 265.2 nm and 290.8 nm as the two wavelengths. The second method is based on two-wavelength calculation; the wavelengths selected for estimation of itopride hydrochloride were 278.0 nm and 298.8 nm, and for rabeprazole sodium, 253.6 nm and 275.2 nm. The developed HPLC method is a reversed-phase chromatographic method using a Phenomenex C(18) column and acetonitrile:phosphate buffer (35:65 v/v), pH 7.0, as mobile phase. All developed methods obey Beer's law in the concentration ranges employed. Results of analysis were validated statistically and by recovery studies.
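
The simultaneous-equation step works by measuring the mixture's absorbance at two wavelengths and, relying on the additivity of Beer's law, solving a 2x2 linear system for the two concentrations. A minimal sketch with hypothetical absorptivity values (not the paper's calibration data):

```python
# Vierordt simultaneous-equation method: A1 = a11*c1 + a12*c2 and
# A2 = a21*c1 + a22*c2, solved for c1, c2 by Cramer's rule.

def two_component(a11, a12, a21, a22, m1, m2):
    """a_ij: absorptivity of component j at wavelength i (per ug/ml);
    m_i: measured absorbance of the mixture at wavelength i."""
    det = a11 * a22 - a12 * a21
    c1 = (m1 * a22 - a12 * m2) / det
    c2 = (a11 * m2 - m1 * a21) / det
    return c1, c2

# Invented absorptivities at, e.g., 265.2 nm and 290.8 nm.
c_ito, c_rab = two_component(0.045, 0.021, 0.012, 0.038, 0.57, 0.50)
print(round(c_ito, 2), round(c_rab, 2))   # concentrations in ug/ml
```

The method fails when the determinant is near zero, i.e. when the two components' spectra are too similar at the chosen wavelengths, which is why well-separated analytical wavelengths are selected.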

  20. Quantitative Estimation of Itopride Hydrochloride and Rabeprazole Sodium from Capsule Formulation

    PubMed Central

    Pillai, S.; Singhvi, I.

    2008-01-01

    Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method for simultaneous estimation of the two-component drug mixture of itopride hydrochloride and rabeprazole sodium from a combined capsule dosage form have been developed. The first method involves forming and solving simultaneous equations using 265.2 nm and 290.8 nm as the two wavelengths. The second method is based on two-wavelength calculation; the wavelengths selected for estimation of itopride hydrochloride were 278.0 nm and 298.8 nm, and for rabeprazole sodium, 253.6 nm and 275.2 nm. The developed HPLC method is a reversed-phase chromatographic method using a Phenomenex C18 column and acetonitrile:phosphate buffer (35:65 v/v), pH 7.0, as mobile phase. All developed methods obey Beer's law in the concentration ranges employed. Results of analysis were validated statistically and by recovery studies. PMID:21394269

  1. Quantitative estimation of parthenolide in Tanacetum parthenium (L.) Schultz-Bip. cultivated in Egypt.

    PubMed

    El-Shamy, Ali M; El-Hawary, Seham S; Rateb, Mostafa E M

    2007-01-01

    Parthenolide, a germacranolide-type sesquiterpene lactone, was estimated in Tanacetum parthenium (L.) cultivated in Egypt by using colorimetric, planar chromatographic, and high-performance liquid chromatographic (HPLC) methods. Parthenolide levels in the open-field herb and aseptically germinated shoots were also compared by using the HPLC method. Parthenolide was produced and estimated for the first time in the callus culture of the plant. In addition, 2 Egyptian market preparations were analyzed for their parthenolide content by using the HPLC method. The relative standard deviations were 0.093, 0.095, and 0.098% (n = 5, 5, and 7, respectively), and the corresponding recoveries were 98.2, 98.9, and 99.4% for the colorimetric, planar chromatographic, and HPLC determinations, respectively.

  2. Interspecies quantitative structure-activity-activity relationships (QSAARs) for prediction of acute aquatic toxicity of aromatic amines and phenols.

    PubMed

    Furuhama, A; Hasunuma, K; Aoki, Y

    2015-01-01

    We propose interspecies quantitative structure-activity-activity relationships (QSAARs), that is, QSARs that include the measured activity in one species among the descriptors, to estimate species-specific acute aquatic toxicity. Using training datasets consisting of more than 100 aromatic amines and phenols, we found that the descriptors that predicted acute toxicities to fish (Oryzias latipes) and algae were daphnia toxicity, molecular weight (an indicator of molecular size and uptake) and selected indicator variables that discriminated between the absence or presence of various substructures. Molecular weight and the selected indicator variables improved the goodness-of-fit of the fish and algae toxicity prediction models. External validations of the QSAARs showed that algae toxicity could be predicted within 1.0 log unit and revealed structural profiles of outlier chemicals with respect to fish toxicity. In addition, applicability domains based on leverage values provided structural alerts for the predicted fish toxicity of chemicals with more than one hydroxyl or amino group attached to an aromatic ring, but not for fluoroanilines, which were not included in the training dataset. Although these simple QSAARs have limitations, their applicability is defined so clearly that they may be practical for screening chemicals with molecular weights of ≤364.9.

  3. Quantitative television fluoroangiography - the optical measurement of dye concentrations and estimation of retinal blood flow

    SciTech Connect

    Greene, M.; Thomas, A.L. Jr.

    1985-06-01

    The development of a system for the measurement of dye concentrations from single retinal vessels during retinal fluorescein angiography is presented and discussed. The system uses a fundus camera modified for TV viewing. Video gating techniques define the areas of the retina to be studied, and video peak detection yields dye concentrations from retinal vessels. The time course of dye concentration is presented and blood flow into the retina is estimated by a time of transit technique.

  4. Methane emission estimation from landfills in Korea (1978-2004): quantitative assessment of a new approach.

    PubMed

    Kim, Hyun-Sun; Yi, Seung-Muk

    2009-01-01

    Quantifying methane emission from landfills is important for evaluating measures to reduce greenhouse gas (GHG) emissions. To quantify GHG emissions and identify parameters to which the estimates are sensitive, a new assessment approach consisting of six different scenarios was developed using the Tier 1 (mass balance) and Tier 2 (first-order decay) methodologies for GHG estimation from landfills suggested by the Intergovernmental Panel on Climate Change (IPCC). Methane emissions using Tier 1 follow trends in the disposed waste amount, whereas emissions from Tier 2 gradually increase as disposed waste decomposes over time. The results indicate that the amount of disposed waste and the decay rate for anaerobic decomposition were the decisive parameters for emission estimation using Tier 1 and Tier 2. As for the different scenarios, methane emissions were highest under Scope 1 (scenarios I and II), in which all landfills in Korea were regarded as one landfill. Methane emissions under scenarios III, IV, and V, which separated the dissimilated fraction of degradable organic carbon (DOC(F)) by waste type and/or revised the methane correction factor (MCF) by waste layer, were underestimated compared with scenarios II and III. This indicates that the methodology of scenario I, which has been used in most previous studies, may lead to an overestimation of methane emissions. Additionally, separate DOC(F) and revised MCF were shown to be important parameters for methane emission estimation from landfills, and revised MCF by waste layer played an important role in emission variations. Therefore, more precise information on each landfill and careful determination of parameter values and characteristics of disposed waste in Korea should be used to accurately estimate methane emissions from landfills.

  5. Coupling radar and lightning data to improve the quantitative estimation of precipitation

    NASA Astrophysics Data System (ADS)

    François, B.; Molinié, G.; Betz, H. D.

    2009-09-01

    Forecasts in hydrology require rainfall intensity estimates at temporal scales of a few tens of minutes and at spatial scales of a few square kilometers. Radars are the most efficient instruments to provide such data. However, estimating the rainfall intensity (R) from the radar reflectivity (Z) is based on empirical Z-R relationships, which are not robust. Indeed, the Z-R relationships depend on hydrometeor types. The role of lightning flashes in thunderclouds is to relax the electrical constraints. Generation of thundercloud electrical charge is due to thermodynamical and microphysical processes. Based on these physical considerations, Blyth et al. (2001) derived a relationship between the product of ascending and descending hydrometeor fluxes and the lightning flash rate. Deierling et al. (2008) successfully applied this relationship to data from the STERAO-A and STEPS field campaigns. We have applied the methodology described in Deierling et al. (2008) to operational radar (Météo-France network) and lightning (LINET) data. As these data do not allow computation of the ascending hydrometeor flux, and as the descending mass flux is highly parameterized, thundercloud simulations (MésoNH) are used to assess the role of ascending fluxes and the estimated precipitating fluxes. In order to assess the budget of the terms in the Blyth et al. (2001) equation, the electrified version of MésoNH, including lightning, is run.

  6. [Non-parametric Bootstrap estimation on the intraclass correlation coefficient generated from quantitative hierarchical data].

    PubMed

    Liang, Rong; Zhou, Shu-dong; Li, Li-xia; Zhang, Jun-guo; Gao, Yan-hui

    2013-09-01

    This paper aims to implement bootstrapping for hierarchical data and to provide a method for estimating the confidence interval (CI) of the intraclass correlation coefficient (ICC). First, we use a mixed-effects model to estimate ICCs from repeated-measurement data and from two-stage sampling data. Then, we use the bootstrap method to estimate the CIs of the corresponding ICCs. Finally, the influence of different bootstrapping strategies on the ICC CIs is compared. The repeated-measurement example shows that the CI from cluster bootstrapping contains the true ICC value, whereas random bootstrapping, which ignores the hierarchical structure of the data, yields an invalid CI. Results from the two-stage example show bias among the cluster-bootstrap ICC means; the ICC of the original sample is the smallest, but has a wide CI. The structure of the data must be taken into account when hierarchical data are resampled, and bootstrapping at higher levels appears to perform better than at lower levels.
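
The cluster bootstrap favored above resamples whole clusters with replacement, keeping each cluster's observations together, rather than resampling individual observations. A minimal sketch on toy data (percentile CI; the statistic here is the mean rather than an ICC, purely for illustration):

```python
# Cluster (hierarchical) bootstrap: resample cluster IDs with replacement
# and keep every observation of a chosen cluster together.
import random

def cluster_bootstrap(data, n_boot, statistic, seed=0):
    """data: dict mapping cluster id -> list of observations.
    Returns a percentile 95% CI for statistic over the pooled sample."""
    rng = random.Random(seed)
    ids = list(data)
    stats = []
    for _ in range(n_boot):
        sample = []
        for cid in rng.choices(ids, k=len(ids)):  # resample clusters
            sample.extend(data[cid])              # clusters stay intact
        stats.append(statistic(sample))
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

data = {1: [4.1, 4.3], 2: [5.0, 5.2], 3: [3.9, 4.0], 4: [6.1, 5.8]}
mean = lambda xs: sum(xs) / len(xs)
ci = cluster_bootstrap(data, 1000, mean)
print(ci)
```

A naive bootstrap would call `rng.choices` on the pooled observations instead, breaking the within-cluster correlation that the ICC is meant to capture.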

  7. Quantitative polyacrylamide gel electrophoresis and specific activities of human somatotropin and its derivatives.

    PubMed

    Skyler, J S; Chrambach, A; Li, C H

    1977-04-25

    A preparation of human pituitary somatotropin examined in quantitative polyacrylamide gel electrophoresis is conformationally compact. The molecular weight derived by quantitative electrophoresis is consistent with the known mass of the hormone and the values obtained by other methods. A plasmin-modified somatotropin is more compact and shows full activity in immunoassay and in lymphocyte-binding and somatotropic assays, with enhanced lactogenic activity. The N-terminal 134-amino acid fragment and a hendekakaihekaton fragment exist in non-monomeric forms. The N-terminal fragment has immunologic and biologic activity, with greatest activity in the in vivo somatotropic assay. The hendekakaihekaton fragment exhibited only marginal activity. All preparations showed heterogeneity of charge in quantitative electrophoresis, with discrete charge isomerism recognizable for the native and plasmin-modified preparations.

  8. Estimating evaporative vapor generation from automobiles based on parking activities.

    PubMed

    Dong, Xinyi; Tschantz, Michael; Fu, Joshua S

    2015-07-01

    A new approach is proposed to quantify evaporative vapor generation based on real parking activity data. Compared to existing methods, two improvements are applied in this new approach to reduce the uncertainties. First, evaporative vapor generation from diurnal parking events is usually calculated from an estimated average parking duration for the whole fleet; in this study, the vapor generation rate is calculated from the distribution of parking activities. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive the hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then combined with Wade-Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates better describe the temporal variations of vapor generation, and that the weighted vapor generation rate is 5-8% less than a calculation that does not consider parking activity.
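
The weighting step can be sketched as a sum over parking-duration bins, each contributing its incremental generation rate times its share of events. All numbers and the simple per-hour rate structure below are invented placeholders, not the study's fleet data or Wade-Reddy parameterization:

```python
# Duration-weighted evaporative vapor generation per parking event.

# Fraction of parking events by duration (hours) -- hypothetical distribution.
parking_dist = {1: 0.40, 3: 0.30, 6: 0.20, 12: 0.10}

# Hypothetical incremental vapor generation rate (g/h) for each duration bin,
# e.g. derived from hourly temperature observations.
rate_g_per_h = {1: 0.50, 3: 0.42, 6: 0.35, 12: 0.28}

weighted = sum(frac * dur * rate_g_per_h[dur]
               for dur, frac in parking_dist.items())
print(f"{weighted:.2f} g per event")
```

Replacing the distribution with a single fleet-average duration collapses this sum to one term, which is the simplification the study's first improvement removes.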

  9. Total myrosinase activity estimates in brassica vegetable produce.

    PubMed

    Dosz, Edward B; Ku, Kang-Mo; Juvik, John A; Jeffery, Elizabeth H

    2014-08-13

    Isothiocyanates, generated from the hydrolysis of glucosinolates in plants of the Brassicaceae family, promote health, including anticancer bioactivity. Hydrolysis requires the plant enzyme myrosinase, giving myrosinase a key role in health promotion by brassica vegetables. Myrosinase measurement typically involves isolating crude protein, potentially underestimating activity in whole foods. Myrosinase activity was estimated using unextracted fresh tissues of five broccoli and three kale cultivars, measuring the formation of allyl isothiocyanate (AITC) and/or glucose from exogenous sinigrin. A correlation between AITC and glucose formation was found, although activity was substantially lower measured as glucose release. Using exogenous sinigrin or endogenous glucoraphanin, concentrations of the hydrolysis products AITC and sulforaphane correlated (r = 0.859; p = 0.006), suggesting that broccoli shows no myrosinase selectivity among sinigrin and glucoraphanin. Measurement of AITC formation provides a novel, reliable estimation of myrosinase-dependent isothiocyanate formation suitable for use with whole vegetable food samples.

  10. Using multiple linear regression model to estimate thunderstorm activity

    NASA Astrophysics Data System (ADS)

    Suparta, W.; Putro, W. S.

    2017-03-01

    This paper aims to develop a numerical model, based on a nonlinear model, to estimate thunderstorm activity. Meteorological data such as pressure (P), temperature (T), relative humidity (H), cloud cover (C), precipitable water vapor (PWV), and precipitation on a daily basis were used in the proposed method. The model was constructed with six input configurations and one target output. The output tested in this work is the thunderstorm event, using one year of data. Results showed that the model works well in estimating thunderstorm activity, with the maximum epoch reaching 1000 iterations and the percent error found to be below 50%. The model also found that thunderstorm activity in May and October is higher than in the other months due to the inter-monsoon season.
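
The regression framing above (six daily meteorological predictors against a thunderstorm-activity target) can be sketched with ordinary least squares on synthetic data; this does not reproduce the paper's actual model, inputs, or iterative training:

```python
# Generic multiple-linear-regression sketch: six standardized predictors,
# one target, fitted by ordinary least squares with an intercept.
import numpy as np

rng = np.random.default_rng(42)
n = 365                                    # one year of daily data
X = rng.normal(size=(n, 6))                # P, T, H, C, PWV, precipitation
true_coef = np.array([0.5, -1.2, 0.8, 0.3, 1.5, 0.2])
y = X @ true_coef + rng.normal(scale=0.5, size=n)  # synthetic target

Xd = np.column_stack([np.ones(n), X])      # prepend intercept column
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
pred = Xd @ coef
pct_error = np.mean(np.abs(pred - y)) / np.mean(np.abs(y)) * 100
print(coef[1:], pct_error)
```

With enough data relative to the noise, the fitted coefficients recover the generating ones, and the mean relative error stays well below the 50% threshold cited in the abstract.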

  11. Quantitative estimates of tropical temperature change in lowland Central America during the last 42 ka

    NASA Astrophysics Data System (ADS)

    Grauel, Anna-Lena; Hodell, David A.; Bernasconi, Stefano M.

    2016-03-01

    Determining the magnitude of tropical temperature change during the last glacial period is a fundamental problem in paleoclimate research. Large discrepancies exist in estimates of tropical cooling inferred from marine and terrestrial archives. Here we present a reconstruction of temperature for the last 42 ka from a lake sediment core from Lake Petén Itzá, Guatemala, located at 17°N in lowland Central America. We compared three independent methods of glacial temperature reconstruction: pollen-based temperature estimates, tandem measurements of δ18O in biogenic carbonate and gypsum hydration water, and clumped isotope thermometry. Pollen provides a near-continuous record of temperature change for most of the glacial period, but the occurrence of a no-analog pollen assemblage during cold, dry stadials renders temperature estimates unreliable for these intervals. In contrast, the gypsum hydration and clumped isotope methods are limited mainly to the stadial periods when gypsum and biogenic carbonate co-occur. The combination of palynological and geochemical methods leads to a continuous record of tropical temperature change in lowland Central America over the last 42 ka. Furthermore, the gypsum hydration water method and clumped isotope thermometry provide independent estimates not only of temperature, but also of the δ18O of lake water, which depends on the hydrologic balance between evaporation and precipitation over the lake surface and its catchment. The results show that average glacial temperature in lowland Central America was cooler by 5-10 °C relative to the Holocene. The coldest and driest times occurred during North Atlantic stadial events, particularly Heinrich stadials (HSs), when temperature decreased by 6 to 10 °C relative to today. This magnitude of cooling is much greater than estimates derived from Caribbean marine records and model simulations. The extreme dry and cold conditions during HSs in lowland Central America were associated

  12. Towards a quantitative kinetic theory of polar active matter

    NASA Astrophysics Data System (ADS)

    Ihle, T.

    2014-06-01

    A recent kinetic approach for Vicsek-like models of active particles is reviewed. The theory is based on an exact Chapman-Kolmogorov equation in phase space. It can handle discrete-time dynamics and "exotic" multi-particle interactions. A nonlocal mean-field theory for the one-particle distribution function is obtained by assuming molecular chaos. The Boltzmann approach of Bertin et al., Phys. Rev. E 74, 022101 (2006) and J. Phys. A 42, 445001 (2009), is critically assessed and compared to the current approach. In Boltzmann theory, a collision starts when two particles enter each other's action spheres and is finished when their distance exceeds the interaction radius. The average duration of such a collision, τ0, is measured for the Vicsek model with continuous time evolution. If the noise is chosen to be close to the flocking threshold, the average time between collisions is found to be roughly equal to τ0 at low densities. Thus, the continuous-time Vicsek model near the flocking threshold cannot be accurately described by a Boltzmann equation, even at very small density, because collisions take so long that typically other particles join in, rendering Boltzmann's binary collision assumption invalid. Hydrodynamic equations for the phase-space approach are derived by means of a Chapman-Enskog expansion. The equations are compared to the Toner-Tu theory of polar active matter. New terms, absent in the Toner-Tu theory, are highlighted. Convergence problems of Chapman-Enskog and similar gradient expansions are discussed.

  13. Quantitative Estimate of the Relation Between Rolling Resistance on Fuel Consumption of Class 8 Tractor Trailers Using Both New and Retreaded Tires (SAE Paper 2014-01-2425)

    EPA Science Inventory

    Road tests of class 8 tractor trailers were conducted by the US Environmental Protection Agency on new and retreaded tires of varying rolling resistance in order to provide estimates of the quantitative relationship between rolling resistance and fuel consumption.

  15. Quantifying the extent of emphysema: factors associated with radiologists' estimations and quantitative indices of emphysema severity using the ECLIPSE cohort.

    PubMed

    Gietema, Hester A; Müller, Nestor L; Fauerbach, Paola V Nasute; Sharma, Sanjay; Edwards, Lisa D; Camp, Pat G; Coxson, Harvey O

    2011-06-01

    This study investigated what factors radiologists take into account when estimating emphysema severity and assessed quantitative computed tomography (CT) measurements of low attenuation areas. CT scans and spirometry were obtained on 1519 chronic obstructive pulmonary disease (COPD) subjects, 269 smoker controls, and 184 nonsmoker controls from the Evaluation of COPD Longitudinally to Identify Surrogate Endpoints (ECLIPSE) study. CT scans were analyzed using the threshold technique (% voxels < -950 HU) and a low attenuation cluster analysis. Two radiologists scored emphysema severity (0 to 5 scale), described the predominant type and distribution of emphysema, and noted the presence of suspected small airways disease. The percent low attenuation area (%LAA) and visual scores of emphysema severity correlated well (r = 0.77, P < .001). %LAA, low attenuation cluster analysis, absence of radiologist-described gas trapping, and the distribution and predominant type of emphysema were predictors of visual scores of emphysema severity (all P < .001). CT scans scored as showing regions of gas trapping had smaller lesions for a similar %LAA than those without (P < .001). Visual estimates of emphysema are determined not only by the extent of LAA, but also by lesion size, predominant type and distribution of emphysema, and the presence/absence of areas of small airways disease. A computer analysis of low attenuation cluster size helps quantitative algorithms discriminate emphysema from gas trapping and image noise. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
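
The threshold technique above reduces, at its core, to counting the fraction of lung voxels below a fixed attenuation cutoff. A minimal sketch with toy values: the -950 HU cutoff is from the abstract, but the voxel data and function name are illustrative, and real pipelines first segment the lung from the rest of the scan.

```python
def percent_laa(hu_values, threshold=-950):
    """Percent low attenuation area: share of voxels below the HU cutoff."""
    if not hu_values:
        raise ValueError("no voxels supplied")
    low = sum(1 for v in hu_values if v < threshold)
    return 100.0 * low / len(hu_values)

# Toy attenuation values for six voxels (HU); three fall below -950 HU.
voxels = [-980, -960, -940, -900, -820, -970]
print(percent_laa(voxels))  # 50.0
```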

  16. Comparison of Myocardial Perfusion Estimates From Dynamic Contrast-Enhanced Magnetic Resonance Imaging With Four Quantitative Analysis Methods

    PubMed Central

    Pack, Nathan A.; DiBella, Edward V. R.

    2012-01-01

    Dynamic contrast-enhanced MRI has been used to quantify myocardial perfusion in recent years. Published results have varied widely, possibly depending on the method used to analyze the dynamic perfusion data. Here, four quantitative analysis methods (two-compartment modeling, Fermi function modeling, model-independent analysis, and Patlak plot analysis) were implemented and compared for quantifying myocardial perfusion. Dynamic contrast-enhanced MRI data were acquired in 20 human subjects at rest with low-dose (0.019 ± 0.005 mmol/kg) bolus injections of gadolinium. Fourteen of these subjects were also imaged at adenosine stress (0.021 ± 0.005 mmol/kg). Aggregate rest perfusion estimates were not significantly different between all four analysis methods. At stress, perfusion estimates were not significantly different between two-compartment modeling, model-independent analysis, and Patlak plot analysis. Stress estimates from the Fermi model were significantly higher (~20%) than the other three methods. Myocardial perfusion reserve values were not significantly different between all four methods. Model-independent analysis resulted in the lowest model curve-fit errors. When more than just the first pass of data was analyzed, perfusion estimates from two-compartment modeling and model-independent analysis did not change significantly, unlike results from Fermi function modeling. PMID:20577976
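
Of the four methods compared, Patlak plot analysis is the most compact to illustrate: the tissue-to-plasma concentration ratio is regressed against the time-integral of the plasma input normalized by the plasma concentration, and the slope gives an uptake/flow-related constant. A hedged sketch on synthetic data; the variable names, toy values, and constant-input simplification are illustrative assumptions, not the study's implementation.

```python
# Patlak analysis: plot Ct/Cp versus integral(Cp)/Cp; slope ~ uptake
# constant Ki, intercept ~ distribution volume vb.

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y over t."""
    out, acc = [0.0], 0.0
    for i in range(1, len(y)):
        acc += 0.5 * (y[i] + y[i - 1]) * (t[i] - t[i - 1])
        out.append(acc)
    return out

def patlak_fit(t, cp, ct):
    """Least-squares line through the Patlak-transformed data."""
    icp = cumtrapz(cp, t)
    x = [icp[i] / cp[i] for i in range(len(t))]
    y = [ct[i] / cp[i] for i in range(len(t))]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# Synthetic check: tissue curve built with known Ki = 0.02, vb = 0.05.
t = [float(i) for i in range(1, 31)]
cp = [5.0 for _ in t]                 # constant plasma input for simplicity
icp = cumtrapz(cp, t)
ct = [0.02 * icp[i] + 0.05 * cp[i] for i in range(len(t))]
ki, vb = patlak_fit(t, cp, ct)
```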

  17. Quantitative assessment of soil parameter (KD and TC) estimation using DGT measurements and the 2D DIFS model.

    PubMed

    Lehto, N J; Sochaczewski, L; Davison, W; Tych, W; Zhang, H

    2008-03-01

    Diffusive gradients in thin films (DGT) is a dynamic, in situ measuring technique that can be used to supply diverse information on concentrations and behaviour of solutes. When deployed in soils and sediments, quantitative interpretation of DGT measurements requires the use of a numerical model. An improved version of the DGT induced fluxes in soils and sediments model (DIFS), working in two dimensions (2D DIFS), was used to investigate the accuracy with which DGT measurements can be used to estimate the distribution coefficient for labile metal (KD) and the response time of the soil to depletion (TC). The 2D DIFS model was used to obtain values of KD and TC for Cd, Zn and Ni in three different soils, which were compared to values determined previously using 1D DIFS for these cases. While the 1D model was shown to provide reasonable estimates of KD, the 2D model refined the estimates of the kinetic parameters. Desorption rate constants were shown to be similar for all three metals and lower than previously thought. Calculation of an error function as KD and TC are systematically varied showed the spread of KD and TC values that fit the experimental data equally well. These automatically generated error maps reflected the quality of the data and provided an appraisal of the accuracy of parameter estimation. They showed that in some cases parameter accuracy could be improved by fitting the model to a sub-set of data.

  18. Quantitative PCR-based genome size estimation of the astigmatid mites Sarcoptes scabiei, Psoroptes ovis and Dermatophagoides pteronyssinus

    PubMed Central

    2012-01-01

    Background The lack of genomic data available for mites limits our understanding of their biology. Evolving high-throughput sequencing technologies promise to deliver rapid advances in this area; however, estimates of genome size are initially required to ensure sufficient coverage. Methods Quantitative real-time PCR was used to estimate the genome sizes of the burrowing ectoparasitic mite Sarcoptes scabiei, the non-burrowing ectoparasitic mite Psoroptes ovis, and the free-living house dust mite Dermatophagoides pteronyssinus. Additionally, the chromosome number of S. scabiei was determined by chromosomal spreads of embryonic cells derived from single eggs. Results S. scabiei cells were shown to contain 17 or 18 small (< 2 μm) chromosomes, suggesting an XO sex-determination mechanism. The average estimated genome sizes of S. scabiei and P. ovis were 96 (± 7) Mb and 86 (± 2) Mb respectively, among the smallest arthropod genomes reported to date. The D. pteronyssinus genome was estimated to be larger than its parasitic counterparts, at 151 Mb in female mites and 218 Mb in male mites. Conclusions These data provide a starting point for understanding the genetic organisation and evolution of these astigmatid mites, informing future sequencing projects. A comparative genomic approach including these three closely related mites is likely to reveal key insights on mite biology, parasitic adaptations and immune evasion. PMID:22214472

  19. Quantitative PCR-based genome size estimation of the astigmatid mites Sarcoptes scabiei, Psoroptes ovis and Dermatophagoides pteronyssinus.

    PubMed

    Mounsey, Kate E; Willis, Charlene; Burgess, Stewart T G; Holt, Deborah C; McCarthy, James; Fischer, Katja

    2012-01-04

    The lack of genomic data available for mites limits our understanding of their biology. Evolving high-throughput sequencing technologies promise to deliver rapid advances in this area; however, estimates of genome size are initially required to ensure sufficient coverage. Quantitative real-time PCR was used to estimate the genome sizes of the burrowing ectoparasitic mite Sarcoptes scabiei, the non-burrowing ectoparasitic mite Psoroptes ovis, and the free-living house dust mite Dermatophagoides pteronyssinus. Additionally, the chromosome number of S. scabiei was determined by chromosomal spreads of embryonic cells derived from single eggs. S. scabiei cells were shown to contain 17 or 18 small (< 2 μm) chromosomes, suggesting an XO sex-determination mechanism. The average estimated genome sizes of S. scabiei and P. ovis were 96 (± 7) Mb and 86 (± 2) Mb respectively, among the smallest arthropod genomes reported to date. The D. pteronyssinus genome was estimated to be larger than its parasitic counterparts, at 151 Mb in female mites and 218 Mb in male mites. These data provide a starting point for understanding the genetic organisation and evolution of these astigmatid mites, informing future sequencing projects. A comparative genomic approach including these three closely related mites is likely to reveal key insights on mite biology, parasitic adaptations and immune evasion. © 2012 Mounsey et al; licensee BioMed Central Ltd.
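
The arithmetic behind qPCR-based genome sizing can be sketched in a few lines: a log-linear standard curve converts the Ct of a single-copy gene into a copy number, and genome size follows from the mass of genomic DNA assayed, via mass × Avogadro / (copies × 660 g/mol per bp). The curve parameters and DNA mass below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of qPCR-based genome size estimation arithmetic.
AVOGADRO = 6.022e23   # molecules per mol
BP_MASS = 660.0       # average g/mol per base pair (double-stranded DNA)

def copies_from_ct(ct, intercept, slope):
    """Copy number from a log-linear qPCR standard curve."""
    return 10 ** ((ct - intercept) / slope)

def genome_size_bp(dna_mass_g, copies):
    """Genome size in bp from DNA mass and single-copy-gene copy number."""
    return dna_mass_g * AVOGADRO / (copies * BP_MASS)

# Illustrative values: Ct of 26.7 on a curve with intercept 40, slope -3.32
# gives roughly 1e4 copies in a 1 ng aliquot of genomic DNA.
copies = copies_from_ct(ct=26.7, intercept=40.0, slope=-3.32)
size = genome_size_bp(1e-9, copies)
print(f"{size/1e6:.0f} Mb")  # prints "90 Mb"
```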

  20. Estimates of the global electric circuit from global thunderstorm activity

    NASA Astrophysics Data System (ADS)

    Hutchins, M. L.; Holzworth, R. H.; Brundell, J. B.

    2013-12-01

    The World Wide Lightning Location Network (WWLLN) has a global detection efficiency around 10%; however, the network has been shown to identify 99% of thunderstorms (Jacobson et al. 2006, using WWLLN data from 2005). To create an estimate of the global electric circuit activity a clustering algorithm is applied to the WWLLN dataset to identify global thunderstorms from 2009 - 2013. The annual, seasonal, and regional thunderstorm activity is investigated with this new WWLLN thunderstorm dataset in order to examine the source behavior of the global electric circuit. From the clustering algorithm the total number of active thunderstorms is found every 30 minutes to create a measure of the global electric circuit source function. The clustering algorithm used is shown to be robust over parameter ranges related to real physical storm sizes and times. The thunderstorm groupings are verified with case study comparisons using satellite and radar data. It is found that there are on average 714 ± 81 thunderstorms active at any given time. Similarly, the highest average number of thunderstorms occurs in July (783 ± 69) with the lowest in January (599 ± 76). The annual and diurnal thunderstorm activity seen with the WWLLN thunderstorms is in contrast with the bimodal stroke activity seen by WWLLN. Through utilizing the global coverage and high time resolution of WWLLN, it is shown that the total active thunderstorm count is less than previous estimates based on compiled climatologies.
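
Grouping located strokes into discrete thunderstorms is essentially space-time clustering. A minimal greedy sketch is below; the 50 km / 30 min thresholds and the stroke coordinates are illustrative assumptions, not WWLLN's actual algorithm or parameters.

```python
import math

MAX_KM, MAX_MIN = 50.0, 30.0  # illustrative storm-scale thresholds

def haversine_km(a, b):
    """Great-circle distance between (lat, lon, ...) tuples, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (a[0], a[1], b[0], b[1]))
    h = (math.sin((lat2 - lat1) / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def cluster_strokes(strokes):
    """Greedily group (lat, lon, minutes) strokes into storm clusters.

    A stroke joins a cluster if it is within MAX_KM and MAX_MIN of any
    existing member; otherwise it seeds a new cluster.
    """
    clusters = []
    for s in strokes:
        placed = False
        for c in clusters:
            if any(haversine_km(s, m) <= MAX_KM and
                   abs(s[2] - m[2]) <= MAX_MIN for m in c):
                c.append(s)
                placed = True
                break
        if not placed:
            clusters.append([s])
    return clusters

# Two nearby early strokes, one distant stroke, and one late stroke that
# falls outside the 30 min window -> three clusters.
storms = cluster_strokes([(17.0, -89.0, 0), (17.1, -89.1, 5),
                          (45.0, 10.0, 2), (17.05, -89.05, 40)])
print(len(storms))  # 3
```

A production version would merge clusters that become connected and index strokes spatially; the greedy pass is enough to show the idea.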

  1. The Use of Multi-Sensor Quantitative Precipitation Estimates for Deriving Extreme Precipitation Frequencies with Application in Louisiana

    NASA Astrophysics Data System (ADS)

    El-Dardiry, Hisham Abd El-Kareem

    Radar-based Quantitative Precipitation Estimates (QPE) are among the NEXRAD products available at high temporal and spatial resolution compared with gauges. Radar-based QPEs have been widely used in many hydrological and meteorological applications; however, few studies have focused on using radar QPE products in deriving Precipitation Frequency Estimates (PFE). Accurate and regionally specific information on PFE is critically needed for various water resources engineering planning and design purposes. This study focused first on examining the data quality of two main radar products, the near real-time Stage IV QPE product and the post real-time RFC/MPE product. Assessment of the Stage IV product showed some alarming data artifacts that contaminate the identification of rainfall maxima. Based on the inter-comparison analysis of the two products, Stage IV and RFC/MPE, the latter was selected for the frequency analysis carried out throughout the study. The precipitation frequency analysis approach used in this study is based on fitting a Generalized Extreme Value (GEV) distribution as a statistical model for the extreme rainfall data, using Annual Maximum Series (AMS) extracted from 11 years (2002-2012) over a domain covering Louisiana. The parameters of the GEV model are estimated using the method of L-moments. Two different approaches are suggested for estimating the precipitation frequencies: a pixel-based approach, in which PFEs are estimated at each individual pixel, and a region-based approach, in which a synthetic sample is generated at each pixel by using observations from surrounding pixels. The region-based technique outperforms the pixel-based estimation when compared with results obtained by NOAA Atlas 14; however, the availability of only a short record of observations and the underestimation of radar QPE for some extremes cause considerable reduction in precipitation frequencies in pixel-based and region
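
Fitting a GEV to an annual maximum series by L-moments is compact enough to sketch. The version below uses Hosking's standard rational approximation for the shape parameter; the toy AMS values are illustrative, and this is a generic sketch rather than the study's implementation.

```python
import math

def sample_l_moments(data):
    """Unbiased sample L-moments (l1, l2) and L-skewness t3."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * x[i] for i in range(n)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * x[i] for i in range(n)) / (n * (n - 1) * (n - 2))
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2

def gev_from_l_moments(l1, l2, t3):
    """GEV location xi, scale alpha, shape k (Hosking's approximation)."""
    c = 2.0 / (3.0 + t3) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c * c
    g = math.gamma(1 + k)
    alpha = l2 * k / ((1 - 2 ** (-k)) * g)
    xi = l1 - alpha * (1 - g) / k
    return xi, alpha, k

def gev_quantile(T, xi, alpha, k):
    """T-year return level (quantile at non-exceedance prob 1 - 1/T)."""
    return xi + alpha / k * (1 - (-math.log(1 - 1.0 / T)) ** k)

ams = [42, 55, 61, 48, 90, 73, 66, 51, 84, 58, 47, 102]  # toy AMS (mm)
xi, a, k = gev_from_l_moments(*sample_l_moments(ams))
```

In practice the 100-year return level from `gev_quantile(100, xi, a, k)` is the kind of PFE value compared against NOAA Atlas 14.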

  2. Estimation of quantitative levels of diesel exhaust exposure and the health impact in the contemporary Australian mining industry.

    PubMed

    Peters, Susan; de Klerk, Nicholas; Reid, Alison; Fritschi, Lin; Musk, Aw Bill; Vermeulen, Roel

    2017-03-01

    To estimate quantitative levels of exposure to diesel exhaust expressed by elemental carbon (EC) in the contemporary mining industry and to describe the excess risk of lung cancer that may result from those levels. EC exposure has been monitored in Western Australian miners since 2003. Mixed-effects models were used to estimate EC levels for five surface and five underground occupation groups (as a fixed effect) and specific jobs within each group (as a random effect). Further fixed effects included sampling year and duration, and mineral mined. On the basis of published risk functions, we estimated excess lifetime risk of lung cancer mortality for several employment scenarios. Personal EC measurements (n=8614) were available for 146 different jobs at 124 mine sites. The mean estimated EC exposure level for surface occupations in 2011 was 14 µg/m³ for 12-hour shifts. Levels for underground occupation groups ranged from 18 to 44 µg/m³. Underground diesel loader operators had the highest-exposed specific job: 59 µg/m³. A lifetime career (45 years) as a surface worker or underground miner, experiencing exposure levels as estimated for 2011 (14 and 44 µg/m³ EC), was associated with 5.5 and 38 extra lung cancer deaths per 1000 males, respectively. EC exposure levels in the contemporary Australian mining industry are still substantial, particularly for underground workers. The estimated excess numbers of lung cancer deaths associated with these exposures support the need for implementation of stringent occupational exposure limits for diesel exhaust. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  3. Comparison of Multiple Quantitative Precipitation Estimates for Warm-Season Flood Forecasting in the Colorado Front Range

    NASA Astrophysics Data System (ADS)

    Moreno, H. A.; Vivoni, E. R.; Gochis, D. J.

    2010-12-01

    Quantitative Precipitation Estimates (QPEs) from ground and satellite platforms can potentially serve as input to hydrologic models used for flood forecasting in mountainous watersheds. This work compares the impact of ten different high-resolution (4-km and hourly) precipitation products on flood forecast skill in a large region of the Colorado Front Range. These products range from radar fields (Level II, Stage III and IV) to satellite estimates (HydroEstimator, AutoEstimator, Blend, GMSRA, PERSIANN-CCS). We examine QPE skill relative to ground rain gauges to detect error characteristics during the 2004 summer season which exhibited above-average precipitation accumulations in the region. We then quantify flood forecast skill by using the TIN-based Real time Integrated Basin Simulator (tRIBS) as an analysis tool in four mountain basins. The structural features of radar and satellite precipitation products determine the timing and magnitude of simulated summer floods in the study basins. Use of ground-based radar and multi-sensor satellite estimates minimize streamflow differences at the outlet locations compared to satellite-only QPEs which tend to underestimate total rainfall volumes, resulting in significant hydrologic response uncertainties. Given the generally low rainfall estimates from satellite-only products, a mean field bias correction is applied to all products and results are compared against non-corrected precipitation products. An exploratory analysis is conducted to assess precipitation volume differences between the bias-corrected and raw satellite products. Probability density functions of the differences allow examining the links between QPE bias, the diurnal precipitation cycle and topographic position. Analysis of the spatiotemporal precipitation and streamflow patterns help identify benefits and shortcomings of high-resolution QPEs for summer storms in mountainous areas.
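
The mean field bias correction mentioned above is a single multiplicative factor, the ratio of gauge to radar storm totals at collocated sites, applied uniformly to the radar or satellite field. A minimal sketch with illustrative numbers:

```python
def mean_field_bias(gauge_totals, radar_totals):
    """Multiplicative bias factor: sum(gauge) / sum(collocated radar)."""
    return sum(gauge_totals) / sum(radar_totals)

def correct(field, factor):
    """Apply a uniform multiplicative correction to a 2-D QPE field."""
    return [[v * factor for v in row] for row in field]

gauges = [12.0, 8.5, 20.1]          # storm-total rain at gauge sites (mm)
radar_at_gauges = [9.0, 7.0, 14.0]  # collocated radar estimates (mm)
f = mean_field_bias(gauges, radar_at_gauges)
qpe = correct([[4.0, 6.0], [10.0, 0.0]], f)
```

Because satellite-only products tended to underestimate totals in the study, a factor `f > 1` is the typical outcome; spatially varying bias fields are the usual refinement.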

  4. Noise estimation from averaged diffusion weighted images: Can unbiased quantitative decay parameters assist cancer evaluation?

    PubMed Central

    Dikaios, Nikolaos; Punwani, Shonit; Hamy, Valentin; Purpura, Pierpaolo; Rice, Scott; Forster, Martin; Mendes, Ruheena; Taylor, Stuart; Atkinson, David

    2014-01-01

    Purpose Multiexponential decay parameters are estimated from diffusion-weighted imaging, which generally has an inherently low signal-to-noise ratio and non-normal noise distributions, especially at high b-values. Conventional nonlinear regression algorithms assume normally distributed noise, introducing bias into the calculated decay parameters and potentially affecting their ability to classify tumors. This study aims to accurately estimate the noise of averaged diffusion-weighted imaging, to correct the noise-induced bias, and to assess the effect upon cancer classification. Methods A new adaptation of the median-absolute-deviation technique in the wavelet domain, using a closed-form approximation of convolved probability distribution functions, is proposed to estimate noise. Nonlinear regression algorithms that account for the underlying noise (maximum probability) fit the biexponential/stretched exponential decay models to the diffusion-weighted signal. A logistic regression model was built from the decay parameters to discriminate benign from metastatic neck lymph nodes in 40 patients. Results The adapted median-absolute-deviation method accurately predicted the noise of simulated (R2 = 0.96) and neck diffusion-weighted imaging (averaged once or four times). Maximum probability recovers the true apparent diffusion coefficient of the simulated data better than nonlinear regression (by up to 40%), whereas no apparent differences were found for the other decay parameters. Conclusions Perfusion-related parameters were best at cancer classification. Noise-corrected decay parameters did not significantly improve classification for the clinical data set, though simulations show benefit for lower signal-to-noise ratio acquisitions. PMID:23913479
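
The classical form of the technique the paper adapts is Donoho's rule: estimate the noise standard deviation as the median absolute deviation of the finest-scale wavelet detail coefficients divided by 0.6745. The toy version below assumes Gaussian noise and a Haar wavelet; the paper's adaptation handles the non-Gaussian noise of averaged MR data, which this sketch does not.

```python
import math
import random
import statistics

def haar_details(x):
    """Finest-level Haar detail coefficients of an even-length signal."""
    return [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]

def mad_sigma(x):
    """Noise sigma estimate: MAD of detail coefficients / 0.6745."""
    d = haar_details(x)
    return statistics.median(abs(v) for v in d) / 0.6745

# Slowly varying signal plus Gaussian noise with sigma = 0.3; the detail
# coefficients are dominated by the noise, so the estimate recovers ~0.3.
random.seed(0)
signal = [math.sin(i / 50.0) + random.gauss(0, 0.3) for i in range(8192)]
sigma_hat = mad_sigma(signal)
```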

  5. Quantitative estimation of IgE and IgD by laser nephelometry.

    PubMed

    Bergmann, K C; Crisci, C D; Jinnouchi, H; Oehling, A

    1979-01-01

    The advantages and disadvantages of Laser Nephelometry (LN) in the determination of IgD and IgE are reported. Two laser nephelometer models (Behringwerke/Marburg), different batches of LN cuvettes, WHO reference standard sera, rabbit anti-human antisera and randomly selected allergic patients' sera were used for the standardization of the method. Cuvette blank values were significantly lower in the new model of laser nephelometer and the precision of these measurements was very high when two different cuvette charges were compared. In the determination of IgE by LN, it was possible to detect levels down to 125 IU/ml, the accuracy of the estimations varying between 4.8 and 8.2% and the repeatability between 3.2 and 24.4%, the highest variation coefficient being obtained in low level samples. The overall agreement between LN and RIST in 55 serum samples was 71%, and at concentrations below 200 IU/ml (normal) and above 400 IU/ml (increased) 80% and 85% respectively. In the determination of IgD by LN, the accuracy of the estimations was also very good (2.4 to 7.8%) and the variation coefficient varied between 2.8 and 13.3%. In the comparison of IgD estimations with LN and radial immunodiffusion in 27 samples a correlation coefficient of r = 0.82 was obtained. Although normal adult IgE values cannot be analysed, the clinically important increased IgE levels are correctly determined by LN. The method is more sensitive than the Mancini technique for IgE determination and in comparison with RIST, though low values are not obtained, LN is quicker, simpler and cheaper.

  6. Long-term accounting for raindrop size distribution variations improves quantitative precipitation estimation by weather radar

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2016-04-01

    Weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources. The current study is focused on the impact of variations of the raindrop size distribution on radar rainfall estimates. Such variations lead to errors in the estimated rainfall intensity (R) and specific attenuation (k) when using fixed relations for the conversion of the observed reflectivity (Z) into R and k. For non-polarimetric radar, this error source has received relatively little attention compared to other error sources. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from a Parsivel and a Thies LPM disdrometer, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed in The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product as compared to applying climatological Z-R and Z-k relations. This especially holds for situations where widespread stratiform precipitation is observed. The best results are obtained when the DSD parameters are optimized. However, the optimized Z-R and Z-k relations show an unrealistic variability that arises from uncorrected error sources. As such, the optimization approach does not result in a realistic DSD shape but instead also accounts for uncorrected error sources, resulting in the best radar rainfall adjustment. Therefore, to further improve the quality of precipitation estimates by weather radar, use should be made either of polarimetric radar or of an extended network of disdrometers.
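
The fixed Z-R conversion that such work seeks to improve is a one-line power law, Z = a R^b, inverted to get rain rate from reflectivity. The sketch below uses the classic Marshall-Palmer coefficients (a = 200, b = 1.6) as defaults; in the approach described above these would be replaced by coefficients tied to the observed DSD.

```python
def rain_rate(dbz, a=200.0, b=1.6):
    """Rain rate R (mm/h) from reflectivity in dBZ via Z = a * R**b."""
    z = 10 ** (dbz / 10.0)       # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z / a) ** (1.0 / b)

print(round(rain_rate(40.0), 1))  # 11.5 (mm/h for 40 dBZ)
```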

  7. Quantitative estimation of the high-intensity zone in the lumbar spine: comparison between the symptomatic and asymptomatic population.

    PubMed

    Liu, Chao; Cai, Hong-Xin; Zhang, Jian-Feng; Ma, Jian-Jun; Lu, Yin-Jiang; Fan, Shun-Wu

    2014-03-01

    The high-intensity zone (HIZ) on magnetic resonance imaging (MRI) has been studied for more than 20 years, but its diagnostic value in low back pain (LBP) is limited by the high incidence in asymptomatic subjects. Little effort has been made to improve the objective assessment of HIZ. To develop quantitative measurements for HIZ, estimate intra- and interobserver reliability, and compare the signal intensity of HIZ in patients with or without LBP. A measurement reliability and prospective comparative study. A consecutive series of patients with LBP between June 2010 and May 2011 (group A) and a successive series of asymptomatic controls during the same period (group B). Incidence of HIZ; quantitative measures, including area of disc, area and signal intensity of HIZ, and magnetic resonance imaging index; and intraclass correlation coefficients (ICCs) for intra- and interobserver reliability. On the basis of HIZ criteria, a series of quantitative dimension and signal intensity measures was developed for assessing HIZ. Two experienced spine surgeons traced the region of interest twice within 4 weeks for assessment of the intra- and interobserver reliability. The quantitative variables were compared between groups A and B. There were 72 patients with LBP and 79 asymptomatic controls enrolled in this study. The prevalence of HIZ in group A and group B was 45.8% and 20.2%, respectively. The intraobserver agreement was excellent for the quantitative measures (ICC=0.838-0.977), as was interobserver reliability (ICC=0.809-0.935). The mean signal of HIZ in group A was significantly brighter than in group B (57.55±14.04% vs. 45.61±7.22%, p=.000). There was no statistical difference in the area of disc and HIZ between the two groups. The magnetic resonance imaging index was found to be higher in group A when compared with group B (3.94±1.71 vs. 3.06±1.50), but with a p value of .050. A series of quantitative measurements for HIZ was established and demonstrated

  8. An efficient automatic workload estimation method based on electrodermal activity using pattern classifier combinations.

    PubMed

    Ghaderyan, Peyvand; Abbasi, Ataollah

    2016-12-01

    Automatic workload estimation has received much attention because of its application in error prevention, diagnosis, and treatment of neural system impairment. The development of a simple but reliable method using a minimum number of psychophysiological signals is a challenge in automatic workload estimation. To address this challenge, this paper presented three different decomposition techniques (Fourier, cepstrum, and wavelet transforms) to analyze electrodermal activity (EDA). The efficiency of various statistical and entropic features was investigated and compared. To recognize different levels of an arithmetic task, the features were processed by principal component analysis and machine-learning techniques. These methods have been incorporated into a workload estimation system based on two combination types: feature-level and decision-level. The results indicated the reliability of the method for automatic and real-time inference of psychological states. This method provided a quantitative estimation of the workload levels and a bias-free evaluation approach. The high average accuracy of 90% and cost-effectiveness were the two important attributes of the proposed workload estimation system. New entropic features proved to be more sensitive measures for quantifying time and frequency changes in EDA. The effectiveness of these measures was also compared with conventional tonic EDA measures, demonstrating the superiority of the proposed method in achieving accurate estimation of workload levels.
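
The two combination types mentioned above differ in where fusion happens: feature-level combination concatenates per-domain feature vectors before a single classifier, while decision-level combination lets separately trained classifiers vote. A hedged toy sketch (majority vote standing in for whatever combiner the paper used; features and labels are invented):

```python
from collections import Counter

def feature_level(feature_sets):
    """Concatenate per-domain features (e.g. Fourier, cepstrum, wavelet)."""
    return [v for fs in feature_sets for v in fs]

def decision_level(predictions):
    """Majority vote over individual classifier decisions."""
    return Counter(predictions).most_common(1)[0][0]

# Feature-level: one long vector feeds a single classifier downstream.
combined = feature_level([[0.1, 0.4], [2.2], [0.9, 1.5]])
# Decision-level: three classifiers each emit a workload label.
label = decision_level(["high", "low", "high"])
```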

  9. Quantitative estimation of temperature variations in plantar angiosomes: a study case for diabetic foot.

    PubMed

    Peregrina-Barreto, H; Morales-Hernandez, L A; Rangel-Magdaleno, J J; Avina-Cervantes, J G; Ramirez-Cortes, J M; Morales-Caporal, R

    2014-01-01

    Thermography is a useful tool since it provides information that may help in the diagnosis of several diseases in a noninvasive and fast way. In particular, thermography has been applied in the study of the diabetic foot. However, most of these studies report only qualitative information, making it difficult to measure significant parameters such as temperature variations. These variations are important in the analysis of the diabetic foot since they could bring knowledge, for instance, regarding ulceration risks. The early detection of ulceration risks is considered an important research topic in the medicine field, as its objective is to avoid major complications that might lead to a limb amputation. The absence of symptoms in the early phase of ulceration is the main obstacle to a timely diagnosis in subjects with neuropathy. Since the relation between temperature and ulceration risks is well established in the literature, a methodology that obtains quantitative temperature differences in the plantar area of the diabetic foot to detect ulceration risks is proposed in this work. The methodology is based on the angiosome concept and image processing.

  10. Quantitative Estimation of Temperature Variations in Plantar Angiosomes: A Study Case for Diabetic Foot

    PubMed Central

    Peregrina-Barreto, H.; Morales-Hernandez, L. A.; Rangel-Magdaleno, J. J.; Avina-Cervantes, J. G.; Ramirez-Cortes, J. M.; Morales-Caporal, R.

    2014-01-01

    Thermography is a useful tool since it provides information that may help in the diagnosis of several diseases in a noninvasive and fast way. In particular, thermography has been applied in the study of the diabetic foot. However, most of these studies report only qualitative information, making it difficult to measure significant parameters such as temperature variations. These variations are important in the analysis of the diabetic foot since they could bring knowledge, for instance, regarding ulceration risks. The early detection of ulceration risks is considered an important research topic in the medicine field, as its objective is to avoid major complications that might lead to a limb amputation. The absence of symptoms in the early phase of ulceration is the main obstacle to a timely diagnosis in subjects with neuropathy. Since the relation between temperature and ulceration risks is well established in the literature, a methodology that obtains quantitative temperature differences in the plantar area of the diabetic foot to detect ulceration risks is proposed in this work. The methodology is based on the angiosome concept and image processing. PMID:24688595

  11. Raman spectroscopy of human skin: looking for a quantitative algorithm to reliably estimate human age

    NASA Astrophysics Data System (ADS)

    Pezzotti, Giuseppe; Boffelli, Marco; Miyamori, Daisuke; Uemura, Takeshi; Marunaka, Yoshinori; Zhu, Wenliang; Ikegaya, Hiroshi

    2015-06-01

    The possibility of examining soft tissues by Raman spectroscopy is challenged in an attempt to probe human age for the changes in biochemical composition of skin that accompany aging. We present a proof-of-concept report for explicating the biophysical links between vibrational characteristics and the specific compositional and chemical changes associated with aging. The actual existence of such links is then phenomenologically proved. In an attempt to foster the basics for a quantitative use of Raman spectroscopy in assessing aging from human skin samples, a precise spectral deconvolution is performed as a function of donors' ages on five cadaveric samples, which emphasizes the physical significance and the morphological modifications of the Raman bands. The outputs suggest the presence of spectral markers for age identification from skin samples. Some of them appeared as authentic "biological clocks" for the apparent exactness with which they are related to age. Our spectroscopic approach yields clear compositional information of protein folding and crystallization of lipid structures, which can lead to a precise identification of age from infants to adults. Once statistically validated, these parameters might be used to link vibrational aspects at the molecular scale for practical forensic purposes.

  12. Quantitative estimation of channeling from early glycolytic intermediates to CO2 in intact Escherichia coli.

    PubMed

    Shearer, Georgia; Lee, Jennifer C; Koo, Jia-An; Kohl, Daniel H

    2005-07-01

    A pathway intermediate is said to be 'channeled' when an intermediate just made in a pathway has a higher probability of being a substrate for the next pathway enzyme compared with a molecule of the same species from the aqueous cytoplasm. Channeling is an important phenomenon because it might play a significant role in the regulation of metabolism. Whereas the usual mechanism proposed for channeling is the (often) transient interaction of sequential pathway enzymes, many of the supporting data come from results with pure enzymes and dilute cell extracts. Even when isotope dilution techniques have utilized whole-cell systems, most often only a qualitative assessment of channeling has been reported. Here we develop a method for making a quantitative calculation of the fraction channeled in glycolysis from in vivo isotope dilution experiments. We show that fructose-1,6-bisphosphate, in whole cells of Escherichia coli, was strongly channeled all the way to CO2, whereas fructose-6-phosphate was not. Because the signature of channeling is lost if any downstream intermediate prior to CO2 equilibrates with molecules in the aqueous cytosol, it was not possible to evaluate whether glucose-6-phosphate was channeled in its transformation to fructose-6-phosphate. The data also suggest that, in addition to pathway enzymes being associated with one another, some are free in the aqueous cytosol. How sensitive the degree of channeling is to growth or experimental conditions remains to be determined.

  13. Theoretical framework for quantitatively estimating ultrasound beam intensities using infrared thermography.

    PubMed

    Myers, Matthew R; Giridhar, Dushyanth

    2011-06-01

    In the characterization of high-intensity focused ultrasound (HIFU) systems, it is desirable to know the intensity field within a tissue phantom. Infrared (IR) thermography is a potentially useful method for inferring this intensity field from the heating pattern within the phantom. However, IR measurements require an air layer between the phantom and the camera, making inferences about the thermal field in the absence of the air complicated. For example, convection currents can arise in the air layer and distort the measurements relative to the phantom-only situation. Quantitative predictions of intensity fields based upon IR temperature data are also complicated by axial and radial diffusion of heat. In this paper, mathematical expressions are derived for use with IR temperature data acquired at times long enough that noise is a relatively small fraction of the temperature trace, but small enough that convection currents have not yet developed. The relations were applied to simulated IR data sets derived from computed pressure and temperature fields. The simulation was performed in a finite-element geometry involving a HIFU transducer sonicating upward in a phantom toward an air interface, with an IR camera mounted atop an air layer, looking down at the heated interface. It was found that, when compared to the intensity field determined directly from acoustic propagation simulations, intensity profiles could be obtained from the simulated IR temperature data with an accuracy of better than 10%, at pre-focal, focal, and post-focal locations. © 2011 Acoustical Society of America
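The short-time regime the abstract exploits can be illustrated with the textbook plane-wave relation: before diffusion and convection matter, the local heating rate obeys rho*c*dT/dt = 2*alpha*I. The sketch below inverts that relation on a synthetic temperature trace; the material constants are assumed values, and the paper's actual expressions additionally correct for axial and radial diffusion.

```python
import numpy as np

rho = 1000.0   # phantom density, kg/m^3 (assumed)
c = 4180.0     # specific heat, J/(kg K) (assumed)
alpha = 5.0    # pressure absorption coefficient, Np/m (assumed)

t = np.linspace(0.0, 0.5, 50)      # s, early times only
temp = 20.0 + 0.8 * t              # synthetic linear IR temperature trace, deg C

dT_dt = np.polyfit(t, temp, 1)[0]  # initial heating rate, K/s
intensity = rho * c * dT_dt / (2.0 * alpha)   # W/m^2
print(intensity)
```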

  14. Quantitatively estimating defects in graphene devices using discharge current analysis method

    PubMed Central

    Jung, Ukjin; Lee, Young Gon; Kang, Chang Goo; Lee, Sangchul; Kim, Jin Ju; Hwang, Hyeon June; Lim, Sung Kwan; Ham, Moon-Ho; Lee, Byoung Hun

    2014-01-01

    Defects are the most important concern for successful applications of graphene since they significantly affect device performance. However, once the graphene is integrated in a device structure, the quality of the graphene and its surrounding environment can only be assessed using indirect information such as hysteresis, mobility and drive current. Here we develop a discharge current analysis method to measure the quality of graphene integrated in a field effect transistor structure by analyzing the discharge current, and examine its validity using various device structures. The density of charging sites affecting the performance of the graphene field effect transistor obtained using the discharge current analysis method was on the order of 10^14/cm^2, which closely correlates with the intensity ratio of the D to G bands in Raman spectroscopy. The graphene FETs fabricated on poly(ethylene naphthalate) (PEN) are found to have a lower density of charging sites than those on SiO2/Si substrates, mainly due to reduced interfacial interaction between the graphene and the PEN. This method can be an indispensable means of improving the stability of graphene devices, as it provides an accurate and quantitative way to assess the quality of graphene after device fabrication. PMID:24811431
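One plausible reading of the charge bookkeeping behind such a density estimate (the actual extraction procedure in the paper is more involved): integrate the discharge transient to get the released charge, then normalise by the elementary charge and the gated area. All numbers are illustrative.

```python
import numpy as np

e = 1.602e-19                        # elementary charge, C
area_cm2 = 1e-4                      # gated device area, cm^2 (assumed)

t = np.linspace(0.0, 1.0, 1000)      # s
current = 1.6e-9 * np.exp(-t / 0.1)  # A, synthetic exponential discharge

# Trapezoidal integration of the transient gives the released charge.
dt = t[1] - t[0]
charge = np.sum(0.5 * (current[1:] + current[:-1])) * dt   # C
density = charge / (e * area_cm2)    # charging sites per cm^2
print(f"{density:.2e} /cm^2")        # on the order of 1e13 here
```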

  15. Quantitatively estimating defects in graphene devices using discharge current analysis method.

    PubMed

    Jung, Ukjin; Lee, Young Gon; Kang, Chang Goo; Lee, Sangchul; Kim, Jin Ju; Hwang, Hyeon June; Lim, Sung Kwan; Ham, Moon-Ho; Lee, Byoung Hun

    2014-05-08

    Defects are the most important concern for successful applications of graphene since they significantly affect device performance. However, once the graphene is integrated in a device structure, the quality of the graphene and its surrounding environment can only be assessed using indirect information such as hysteresis, mobility and drive current. Here we develop a discharge current analysis method to measure the quality of graphene integrated in a field effect transistor structure by analyzing the discharge current, and examine its validity using various device structures. The density of charging sites affecting the performance of the graphene field effect transistor obtained using the discharge current analysis method was on the order of 10^14/cm^2, which closely correlates with the intensity ratio of the D to G bands in Raman spectroscopy. The graphene FETs fabricated on poly(ethylene naphthalate) (PEN) are found to have a lower density of charging sites than those on SiO2/Si substrates, mainly due to reduced interfacial interaction between the graphene and the PEN. This method can be an indispensable means of improving the stability of graphene devices, as it provides an accurate and quantitative way to assess the quality of graphene after device fabrication.

  16. Quantitatively estimating defects in graphene devices using discharge current analysis method

    NASA Astrophysics Data System (ADS)

    Jung, Ukjin; Lee, Young Gon; Kang, Chang Goo; Lee, Sangchul; Kim, Jin Ju; Hwang, Hyeon June; Lim, Sung Kwan; Ham, Moon-Ho; Lee, Byoung Hun

    2014-05-01

    Defects are the most important concern for successful applications of graphene since they significantly affect device performance. However, once the graphene is integrated in a device structure, the quality of the graphene and its surrounding environment can only be assessed using indirect information such as hysteresis, mobility and drive current. Here we develop a discharge current analysis method to measure the quality of graphene integrated in a field effect transistor structure by analyzing the discharge current, and examine its validity using various device structures. The density of charging sites affecting the performance of the graphene field effect transistor obtained using the discharge current analysis method was on the order of 10^14/cm^2, which closely correlates with the intensity ratio of the D to G bands in Raman spectroscopy. The graphene FETs fabricated on poly(ethylene naphthalate) (PEN) are found to have a lower density of charging sites than those on SiO2/Si substrates, mainly due to reduced interfacial interaction between the graphene and the PEN. This method can be an indispensable means of improving the stability of graphene devices, as it provides an accurate and quantitative way to assess the quality of graphene after device fabrication.

  17. Quantitative rules for lymphocyte regulation: the cellular calculus and decisions between tolerance and activation.

    PubMed

    Hodgkin, P D

    2005-10-01

    The innovation of fluorescent division-tracking techniques has elevated our understanding of lymphocytes to a new level. These techniques applied in vitro have identified quantitative rules for lymphocyte differentiation, proliferation and survival that had previously been hidden. The many patterns of quantitative response revealed by these analyses provide a sharp contrast to the traditional idea that T cells must make a binary choice between tolerance and activation. Here, evidence for the classic dogma of two-signal theory and T-T help is re-examined in the light of the new quantitative view to show how logical difficulties can be resolved.

  18. Geosynchronous SAR Orbit Estimation Based on Active Radar Calibrators

    NASA Astrophysics Data System (ADS)

    Leanza, Antonio; Monti Guarnieri, Andrea; Boroquets Ibars, Antoni

    2016-08-01

    The Geosynchronous SAR (GEOSAR) is a system designed for continuous monitoring of a fixed region of the Earth. Unlike LEOSAR, the GEOSAR system requires very long times to form its Synthetic Aperture (SA). This entails the onset of several decorrelation sources, such as atmospheric propagation, orbit perturbations and clock drifts, that have to be compensated to avoid defocusing. In particular, this paper proposes a solution to cope with the phase error introduced by orbit perturbations within the SA by means of Active Radar Calibrators (ARCs) deployed at convenient positions in the illuminated area. Each ARC provides two-way, pulse-by-pulse echo delay and carrier phase observations used to track the satellite position. The estimation follows an iterative approach: divide the SA into sub-apertures, perform the estimation for each sub-aperture, apply the estimated orbit correction, and repeat for longer sub-apertures.
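The iterative sub-aperture scheme can be caricatured on synthetic data: estimate a slowly varying phase error from noisy ARC phase observations, first on short sub-apertures, then on longer ones after applying the previous correction. Everything below is schematic, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
true_drift = np.linspace(0.0, 4.0, n)              # rad, unknown phase error over the SA
obs = true_drift + 0.05 * rng.standard_normal(n)   # noisy ARC phase observations

correction = np.zeros(n)
for n_subs in (16, 4, 1):                          # shorter -> longer sub-apertures
    residual = obs - correction                    # apply the previous correction
    for sub in np.array_split(np.arange(n), n_subs):
        # fit a linear phase ramp on each sub-aperture and accumulate it
        coeff = np.polyfit(sub, residual[sub], 1)
        correction[sub] += np.polyval(coeff, sub)

rms = np.sqrt(np.mean((correction - true_drift) ** 2))
print(rms)   # small residual, on the order of the observation noise
```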

  19. Estimating the age of healthy infants from quantitative myelin water fraction maps.

    PubMed

    Dean, Douglas C; O'Muircheartaigh, Jonathan; Dirks, Holly; Waskiewicz, Nicole; Lehman, Katie; Walker, Lindsay; Piryatinsky, Irene; Deoni, Sean C L

    2015-04-01

    The trajectory of the developing brain is characterized by a sequence of complex, nonlinear patterns that occur at systematic stages of maturation. Although significant prior neuroimaging research has shed light on these patterns, the challenge of accurately characterizing brain maturation, and identifying areas of accelerated or delayed development, remains. Altered brain development, particularly during the earliest stages of life, is believed to be associated with many neurological and neuropsychiatric disorders. In this work, we develop a framework to construct voxel-wise estimates of brain age based on magnetic resonance imaging measures sensitive to myelin content. 198 myelin water fraction (VFM) maps were acquired from healthy male and female infants and toddlers, 3 to 48 months of age, and used to train a sigmoidal-based maturational model. The validity of the approach was then established by testing the model on 129 different VFM datasets. Results revealed the approach to have high accuracy, with a mean absolute percent error of 13% in males and 14% in females, and high predictive ability, with correlation coefficients between estimated and true ages of 0.945 in males and 0.94 in females. This work represents a new approach toward mapping brain maturity, and may provide a more faithful staging of brain maturation in infants beyond chronological or gestation-corrected age, allowing earlier identification of atypical regional brain development.
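A toy version of the sigmoidal maturational model: fit myelin water fraction (VFM) versus age, then invert the fitted curve to read a "brain age" off a new VFM value. The functional form and every number are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(age, vmax, t0, tau):
    """Assumed sigmoidal growth of VFM with age (months)."""
    return vmax / (1.0 + np.exp(-(age - t0) / tau))

rng = np.random.default_rng(1)
ages = np.linspace(3, 48, 60)                          # months
vfm = sigmoid(ages, 0.15, 18.0, 6.0) + 0.003 * rng.standard_normal(60)

params, _ = curve_fit(sigmoid, ages, vfm, p0=(0.1, 20.0, 5.0))
vmax, t0, tau = params

def estimate_age(v):
    """Invert the fitted sigmoid: age = t0 - tau * ln(vmax / v - 1)."""
    return t0 - tau * np.log(vmax / v - 1.0)

print(estimate_age(sigmoid(24.0, 0.15, 18.0, 6.0)))    # close to 24 months
```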

  20. Integral quantification accuracy estimation for reporter ion-based quantitative proteomics (iQuARI).

    PubMed

    Vaudel, Marc; Burkhart, Julia M; Radau, Sonja; Zahedi, René P; Martens, Lennart; Sickmann, Albert

    2012-10-05

    With the increasing popularity of comparative studies of complex proteomes, reporter ion-based quantification methods such as iTRAQ and TMT have become commonplace in biological studies. Their appeal derives from simple multiplexing and quantification of several samples at reasonable cost. This advantage comes, however, with a known shortcoming: precursors of different species can interfere, thus reducing quantification accuracy. Recently, two methods were brought to the community that alleviate the amount of interference via novel experimental design. Before setting up a new workflow, tuning the system, optimizing identification and quantification rates, etc., one legitimately asks: is it really worth the effort, time and money? The question is not easy to answer, since the interference is heavily sample and system dependent. Moreover, to date there was no method allowing inline estimation of error rates for reporter quantification. We therefore introduce a method called iQuARI to compute false discovery rates for reporter ion based quantification experiments as easily as target/decoy FDR for identification. With it, scientists can accurately estimate the amount of interference in their sample on their system and then consider removing shadows.

  1. Quantitative modelling to estimate the transfer of pharmaceuticals through the food production system.

    PubMed

    Chiţescu, Carmen Lidia; Nicolau, Anca Ioana; Römkens, Paul; Van Der Fels-Klerx, H J

    2014-01-01

    Use of pharmaceuticals in animal production may create an indirect route of contamination of food products of animal origin. This study aimed to assess, through mathematical modelling, the transfer of pharmaceuticals from contaminated soil, through plant uptake, into the dairy food production chain. The scenarios, model parameters, and values cover contaminant emission in slurry, slurry storage time, immission into soil, plant uptake, bioaccumulation in the animal's body, and transfer to meat and milk. Modelling results confirm the possibility of contamination of dairy cows' meat and milk due to the ingestion of contaminated feed by the cattle. The estimated concentration of pharmaceutical residues obtained for meat ranged from 0 to 6 ng kg(-1) for oxytetracycline, from 0.011 to 0.181 μg kg(-1) for sulfamethoxazole, and from 4.70 to 11.86 μg kg(-1) for ketoconazole. The estimated concentrations for milk were: zero for oxytetracycline, lower than 40 ng L(-1) for sulfamethoxazole, and from 0.98 to 2.48 μg L(-1) for ketoconazole. Results obtained for the three selected pharmaceuticals indicate a minor risk for human health. This study showed that supply chain modelling can be an effective tool for assessing the indirect contamination of feedstuff and animal products by residues of pharmaceuticals. The model can easily be adjusted to other contaminants and supply chains and, in this way, presents a valuable tool to underpin decision making.
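The chain structure of such a model can be sketched as a product of transfer factors, one per step of the pathway in the abstract (soil to plant, plant to feed intake, feed to milk). The factor values below are placeholders, not the study's parameters.

```python
def milk_residue(soil_conc_ug_kg, plant_uptake_factor, feed_intake_kg_day,
                 milk_transfer_factor_day_l):
    """Estimated pharmaceutical residue in milk, ug/L (toy transfer chain)."""
    feed_conc = soil_conc_ug_kg * plant_uptake_factor    # ug per kg feed
    daily_intake = feed_conc * feed_intake_kg_day        # ug ingested per day
    return daily_intake * milk_transfer_factor_day_l     # ug/L in milk

# Illustrative inputs: 10 ug/kg in soil, 5% plant uptake, 20 kg feed/day,
# milk transfer factor 2e-4 day/L.
print(milk_residue(10.0, 0.05, 20.0, 2e-4))   # 0.002 ug/L
```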

  2. Fluorescent polyion complex nanoparticle that incorporates an internal standard for quantitative analysis of protein kinase activity.

    PubMed

    Nobori, Takanobu; Shiosaki, Shujiro; Mori, Takeshi; Toita, Riki; Kim, Chan Woo; Nakamura, Yuta; Kishimura, Akihiro; Niidome, Takuro; Katayama, Yoshiki

    2014-05-21

    We demonstrate a polyion complex (PIC) nanoparticle that contains both a responsive fluorophore and an "internal standard" fluorophore for quantitative measurement of protein kinase (PK) activity. The PK-responsive fluorophore becomes more fluorescent with PK-catalyzed phosphorylation of substrate peptides incorporated in the PIC, while fluorescence from the internal standard remains unchanged during phosphorylation. This new concept will be useful for quantitative PK assays and the discovery of PK inhibitors.

  3. Age estimation during the blow fly intra-puparial period: a qualitative and quantitative approach using micro-computed tomography.

    PubMed

    Martín-Vega, Daniel; Simonsen, Thomas J; Wicklein, Martina; Hall, Martin J R

    2017-05-04

    Minimum post-mortem interval (minPMI) estimates often rely on the use of developmental data from blow flies (Diptera: Calliphoridae), which are generally the first colonisers of cadavers and, therefore, exemplar forensic indicators. Developmental data for the intra-puparial period are of particular importance, as this period can account for more than half of the developmental duration of the blow fly life cycle. During this period, the insect undergoes metamorphosis inside the opaque, barrel-shaped puparium, formed by the hardening and darkening of the third instar larval cuticle, which shows virtually no external changes until adult emergence. Regrettably, estimates based on the intra-puparial period are severely limited due to the lack of reliable, non-destructive ageing methods and are frequently based solely on qualitative developmental markers. In this study, we use non-destructive micro-computed tomography (micro-CT) for (i) performing qualitative and quantitative analyses of the morphological changes taking place during the intra-puparial period of two forensically relevant blow fly species, Calliphora vicina and Lucilia sericata, and (ii) developing a novel and reliable method for estimating insect age in forensic practice. We show that micro-CT provides age-diagnostic qualitative characters for most 10% time intervals of the total intra-puparial period, which can be used over a range of temperatures and with a resolution comparable to more invasive and time-consuming traditional imaging techniques. Moreover, micro-CT can be used to yield a quantitative measure of the development of selected organ systems to be used in combination with qualitative markers. Our results confirm micro-CT as an emerging, powerful tool in medico-legal investigations.

  4. On-line, adaptive state estimator for active noise control

    NASA Technical Reports Server (NTRS)

    Lim, Tae W.

    1994-01-01

    Dynamic characteristics of airframe structures are expected to vary as aircraft flight conditions change. Accurate knowledge of the changing dynamic characteristics is crucial to enhancing the performance of the active noise control system using feedback control. This research investigates the development of an adaptive, on-line state estimator using a neural network concept to conduct active noise control. In this research, an algorithm has been developed that can be used to estimate displacement and velocity responses at any location on the structure from a limited number of acceleration measurements and input force information. The algorithm employs band-pass filters to extract from the measurement signal the frequency contents corresponding to a desired mode. The filtered signal is then used to train a neural network which consists of a linear neuron with three weights. The structure of the neural network is designed to be as simple as possible to increase the sampling frequency as much as possible. The weights obtained through neural network training are then used to construct the transfer function of a mode in the z-domain and to identify modal properties of each mode. By using the identified transfer function and interpolating the mode shape obtained at sensor locations, the displacement and velocity responses are estimated with reasonable accuracy at any location on the structure. The accuracy of the response estimates depends on the number of modes incorporated in the estimates and the number of sensors employed to conduct mode shape interpolation. Computer simulation demonstrates that the algorithm is capable of adapting to the varying dynamic characteristics of structural properties. Experimental implementation of the algorithm on a DSP (digital signal processing) board for a plate structure is underway. The algorithm is expected to reach the sampling frequency range of about 10 kHz to 20 kHz, which needs to be maintained for a typical active noise control system.
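The "linear neuron with three weights" can be mimicked with a normalised LMS update identifying a single second-order mode, y[n] = a1*y[n-1] + a2*y[n-2] + b0*u[n]. A real implementation would first band-pass the measured signal around the mode, as the abstract describes; here the data are generated noise-free and the modal parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
a1, a2, b0 = 1.8, -0.9, 0.5          # "true" modal parameters (illustrative)

n = 5000
u = rng.standard_normal(n)           # input force
y = np.zeros(n)
for k in range(2, n):                # simulate the single-mode response
    y[k] = a1 * y[k-1] + a2 * y[k-2] + b0 * u[k]

w = np.zeros(3)                      # the three neuron weights
mu = 0.1                             # normalised LMS step size
for k in range(2, n):
    x = np.array([y[k-1], y[k-2], u[k]])
    err = y[k] - w @ x
    w += mu * err * x / (x @ x + 1e-12)   # NLMS weight update

print(w)   # converges towards (1.8, -0.9, 0.5)
```

The converged weights directly give the z-domain transfer function of the mode, H(z) = b0 / (1 - a1 z^-1 - a2 z^-2), from which modal frequency and damping can be read off.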

  5. Prediction of toxicity of phenols and anilines to algae by quantitative structure-activity relationship.

    PubMed

    Lu, Guang-Hua; Wang, Chao; Guo, Xiao-Ling

    2008-06-01

    AIM: To measure the toxicity of phenol, aniline, and their derivatives to algae and to assess, model, and predict the toxicity using the quantitative structure-activity relationship (QSAR) method. METHODS: Oxygen production was used as the response endpoint for assessing the toxic effects of chemicals on algal photosynthesis. The energy of the lowest unoccupied molecular orbital (E(LUMO)) and the energy of the highest occupied molecular orbital (E(HOMO)) were obtained with the ChemOffice 2004 program using the quantum chemical method MOPAC, and the frontier orbital energy gap (ΔE) was derived. RESULTS: The compounds exhibited a reasonably wide range of algal toxicity. The most toxic compound was alpha-naphthol, whereas the least toxic one was aniline. A two-descriptor model was derived from the algal toxicity and structural parameters: log(1/EC50) = 0.268 logKow - 1.006 ΔE + 11.769 (n = 20, r^2 = 0.946). This model was stable and satisfactory for predicting toxicity. CONCLUSION: Phenol, aniline, and their derivatives are polar narcotics. Their toxicity is greater than estimated by hydrophobicity alone, and adding the frontier orbital energy gap ΔE significantly improves the prediction of logKow-dependent models.
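The fitted two-descriptor model quoted in the abstract is straightforward to apply; the descriptor values in the example are invented, so the printed number is not a literature toxicity.

```python
def log_inverse_ec50(log_kow, delta_e):
    """log(1/EC50) = 0.268 logKow - 1.006 deltaE + 11.769 (model from the abstract)."""
    return 0.268 * log_kow - 1.006 * delta_e + 11.769

# Illustrative descriptors for a hypothetical phenol derivative:
# logKow = 1.46, frontier orbital energy gap deltaE = 9.0 eV.
print(round(log_inverse_ec50(1.46, 9.0), 3))   # 3.106
```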

  6. Quantitative structure-activity relationships for green algae growth inhibition by polymer particles.

    PubMed

    Nolte, Tom M; Peijnenburg, Willie J G M; Hendriks, A Jan; van de Meent, Dik

    2017-03-19

    After use and disposal of chemical products, many types of polymer particles end up in the aquatic environment with potential toxic effects on primary producers like green algae. In this study, we have developed Quantitative Structure-Activity Relationships (QSARs) for a set of highly structurally diverse polymers, capable of estimating green algae growth inhibition (EC50). The model (N = 43, R^2 = 0.73, RMSE = 0.28) is a regression-based decision tree using one structural descriptor for each of three polymer classes separated based on charge. The QSAR is applicable to linear homopolymers as well as copolymers and does not require information on the size of the polymer particle or the underlying core material. Highly branched polymers, non-nitrogen cationic polymers and polymeric surfactants are not included in the model and thus cannot be evaluated. The model works best for cationic and non-ionic polymers, for which cellular adsorption, disruption of the cell wall and photosynthesis inhibition were the mechanisms of action. For anionic polymers, specific properties of the polymer and test characteristics need to be known for detailed assessment. The data and QSAR results for anionic polymers, when combined with molecular dynamics simulations, indicated that nutrient depletion is likely the dominant mode of toxicity. Nutrient depletion, in turn, is determined by the non-linear interplay between polymer charge density and backbone flexibility.

  7. Quantitative estimation of flagellate community structure and diversity in soil samples.

    PubMed

    Ekelund, F; Rønn, R; Griffiths, B S

    2001-12-01

    Heterotrophic flagellates occur in nearly all soils and, in most cases, many different species are present. Nevertheless, quantitative data on their community structure and diversity are sparse, possibly due to a lack of suitable techniques. Previous studies have tended to focus on either total flagellate numbers and biomass, or the identification and description of flagellate species present. With the increased awareness of the role of biodiversity and of food web interactions, the quantification of species within the community and their response to environmental change is likely to become more important. The present paper describes a modification of the most probable number method that allows such a quantification of individual flagellate morphotypes in soil samples. Observations were also made on the biomass of flagellate morphotypes in soil. 20 to 25 morphotypes of heterotrophic flagellates were detectable per gram of two different arable soils, which were treated experimentally to test the technique. One of the soils was fumigated with chloroform vapour for different lengths of time (0, 0.5, 2 or 24 hours); this led to a reduction in the number of morphotypes, in the Shannon diversity index and in the evenness. The other soil was planted with wheat, and while rhizosphere soils contained the same morphotypes as bulk soil, the abundance of individual morphotypes was significantly different and the Shannon diversity index in rhizosphere soils was significantly higher. Soil influenced by an elevated CO2 level likewise differed significantly in morphotype abundance when compared to soil exposed to ambient levels of CO2. The technique recovered more than 80% of the discernible morphotypes and could also be used to quantify amoebal and ciliate communities in a similar way.
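The two community descriptors used in the abstract, the Shannon diversity index H' and evenness J', are computed from morphotype counts as follows (the counts below are made up):

```python
import numpy as np

def shannon(counts):
    """Shannon diversity index H' = -sum(p_i * ln(p_i)) over observed morphotypes."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def evenness(counts):
    """Pielou's evenness J' = H' / ln(S), with S the number of morphotypes."""
    s = np.count_nonzero(counts)
    return shannon(counts) / np.log(s)

counts = [40, 25, 15, 10, 5, 5]    # individuals per morphotype in one sample
print(shannon(counts), evenness(counts))
```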

  8. Noninvasive and quantitative intracranial pressure estimation using ultrasonographic measurement of optic nerve sheath diameter

    PubMed Central

    Wang, Li-juan; Yao, Yan; Feng, Liang-shu; Wang, Yu-zhi; Zheng, Nan-nan; Feng, Jia-chun; Xing, Ying-qi

    2017-01-01

    We aimed to quantitatively assess intracranial pressure (ICP) using optic nerve sheath diameter (ONSD) measurements. We recruited 316 neurology patients in whom ultrasonographic ONSD was measured before lumbar puncture. They were randomly divided into a modeling and a test group at a ratio of 7:3. In the modeling group, we conducted univariate and multivariate analyses to assess associations between ICP and ONSD, age, sex, BMI, mean arterial blood pressure, and diastolic blood pressure. We derived the mathematical function “Xing & Wang” from the modeling group to predict ICP and evaluated the function in the test group. In the modeling group, ICP was strongly correlated with ONSD (r = 0.758, p < 0.001), and this association was independent of other factors. The mathematical function was ICP = −111.92 + 77.36 × ONSD (Durbin-Watson value = 1.94). In the test group, a significant correlation was found between the observed and predicted ICP (r = 0.76, p < 0.001). Bland-Altman analysis yielded a mean difference between measurements of −0.07 ± 41.55 mmH2O. The intraclass correlation coefficient and its 95% CI for noninvasive ICP assessment using our prediction model was 0.86 (0.79–0.90). Ultrasonographic ONSD measurements provide a potential noninvasive method to quantify ICP that can be conducted at the bedside. PMID:28169341
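Applying the fitted "Xing & Wang" function from the abstract is a one-liner; units are assumed to be millimetres for ONSD and mmH2O for ICP, consistent with the reported cohort values.

```python
def predict_icp(onsd_mm):
    """ICP (mmH2O) = -111.92 + 77.36 * ONSD (mm), per the abstract."""
    return -111.92 + 77.36 * onsd_mm

print(predict_icp(4.5))   # about 236 mmH2O for a 4.5 mm sheath
```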

  9. Estimation of spinopelvic muscles' volumes in young asymptomatic subjects: a quantitative analysis.

    PubMed

    Amabile, Celia; Moal, Bertrand; Chtara, Oussama Arous; Pillet, Helene; Raya, Jose G; Iannessi, Antoine; Skalli, Wafa; Lafage, Virginie; Bronsard, Nicolas

    2017-04-01

    Muscles have been proven to be a major component of postural regulation during pathological evolution or aging. In particular, spinopelvic muscles are recruited for compensatory mechanisms such as pelvic retroversion or knee flexion. A change in muscle volume could, therefore, be a marker of greater postural degradation. Yet, it is difficult to interpret spinopelvic muscular degradation, as there are few reported values for young asymptomatic adults to compare against. The objective was to provide such reference values for spinopelvic muscles. A model predicting the muscular volume from a reduced set of MRI segmented images was investigated. A total of 23 asymptomatic subjects younger than 24 years old underwent an MRI acquisition from T12 to the knee. Spinopelvic muscles were segmented to obtain an accurate 3D reconstruction, allowing precise computation of each muscle's volume. A model computing the volume of muscle groups from fewer than six MRI segmented slices was investigated. Baseline values are reported in tables. For all muscles, invariance was found for the shape factor [ratio of volume over (area times length): SD < 0.04] and for the volume ratio over total volume (SD < 1.2%). A model computing the muscular volume from a combination of two to five slices was evaluated. The five-slice model prediction error (in % of the real volume from 3D reconstruction) ranged from 6% (knee flexors and extensors and spine flexors) to 11% (spine extensors). Spinopelvic muscle values for a reference population have been reported. A new model predicting muscle volumes from a reduced set of MRI slices is proposed. While this model still needs to be validated on other populations, the current study appears promising for clinical use to determine, quantitatively, the muscular degradation.
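The reported shape-factor invariance implies a simple predictor: if V / (A × L) is stable across subjects for a muscle group, volume follows from one segmented cross-sectional area and the muscle length. The factor used below is illustrative, not a value from the paper.

```python
def predict_volume(area_cm2, length_cm, shape_factor):
    """Muscle volume (cm^3) from one cross-sectional area, the muscle length,
    and the group's (assumed invariant) shape factor V / (A * L)."""
    return shape_factor * area_cm2 * length_cm

# Illustrative inputs: 12 cm^2 mid-muscle slice, 40 cm length, factor 0.6.
print(predict_volume(12.0, 40.0, 0.6))   # 288.0 cm^3
```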

  10. Quantitative estimate of pion fluctuation and its multiplicity dependence in nuclear collisions

    NASA Astrophysics Data System (ADS)

    Ghosh, Dipak; Deb, Argha; Dutta, Srimonti

    2009-02-01

    This paper presents the results of an investigation of the multiplicity dependence of the fluctuation pattern of pions over the entire accelerator energy range from 2.1 to 200 AGeV. The data set for produced pions is divided into four sets depending on the number of shower tracks ns. The analysis is carried out in two-dimensional η-φ space with the Hurst exponent to take care of the anisotropy of the phase space. The Hurst exponent is extracted by fitting one-dimensional factorial moment saturation curves to Ochs' saturation formula. The values of the effective fluctuation strength α_eff are estimated, and the multiplicity dependence is studied with respect to α_eff and the Hurst exponent H. Interestingly, both the fluctuation strength and the degree of anisotropy (characterized by H) depend on pion multiplicity. The multiplicity dependence is more pronounced at lower projectile energy. The results of the study are discussed in detail.

  11. Quantitative estimation of DNA damage by photon irradiation based on the microdosimetric-kinetic model

    PubMed Central

    Matsuya, Yusuke; Ohtsubo, Yosuke; Tsutsumi, Kaori; Sasaki, Kohei; Yamazaki, Rie; Date, Hiroyuki

    2014-01-01

    The microdosimetric-kinetic (MK) model is one of the models that can describe the fraction of cells surviving after exposure to ionizing radiation. In the MK model, there are specific parameters, k and yD, where k is an inherent parameter to represent the number of potentially lethal lesions (PLLs) and yD indicates the dose-mean lineal energy in keV/μm. Assuming the PLLs to be DNA double-strand breaks (DSBs), the rate equations are derived for evaluating the DSB number in the cell nucleus. In this study, we estimated the ratio of DSBs for two types of photon irradiation (6 MV and 200 kVp X-rays) in Chinese hamster ovary (CHO-K1) cells and human non-small cell lung cancer (H1299) cells by observing the surviving fraction. The estimated ratio was then compared with the ratio of γ-H2AX foci using immunofluorescent staining. For making a comparison of the number of DSBs among a variety of radiation energy cases, we next utilized the survival data in the literature for both cells exposed to other photon types, such as 60Co γ-rays, 137Cs γ-rays and 100 kVp X-rays. The ratio of DSBs based on the MK model with conventional data was consistent with the ratio of γ-H2AX foci numbers, confirming that the γ-H2AX focus is indicative of DSBs. It was also shown that the larger yD is, the larger the DSB number is. These results suggest that k and yD represent the characteristics of the surviving fraction and the biological effects for photon irradiation. PMID:24515253

  12. Parametric estimation of sample entropy for physical activity recognition.

    PubMed

    Aktaruzzaman, Md; Scarabottolo, Nello; Sassi, Roberto

    2015-08-01

    An insufficient amount of physical activity, and hence storage of calories, may lead to depression, obesity, cardiovascular diseases, and diabetes. The number of calories expended depends on the type of activity, so recognizing physical activity is important for estimating the calories a subject spends every day. Several studies on activity recognition using accelerometers (body-worn sensors) have already been published. The accuracy of any recognition system depends on the robustness of the selected features and classifiers. The features typically reported for recognizing physical activities are autoregressive coefficients (ARcoeffs), signal magnitude area (SMA), tilt angle (TA), and standard deviation (STD). In this study, we examined the feasibility of using a single value of sample entropy estimated parametrically (SETH) from an AR model instead of the ARcoeffs. We then compared the recognition accuracies of two popular classifiers, i.e. artificial neural networks (ANN) and support vector machines (SVM), using both a linear structure (where all types of activities are classified by a single classifier) and a hierarchical structure (where activities are first divided into static and dynamic events, and the activities within each event are classified in a second stage). The study showed that SETH provides recognition accuracy (69.82%) similar to that of the ARcoeffs (67.67%) using ANN. The linear SVM performs better (average accuracy: 98.22%) than the linear ANN (average accuracy: 94.78%). The hierarchical ANN increases the average recognition accuracy of static activities to about 100%, whereas no significant change is observed with the hierarchical SVM over the linear one.
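    The paper estimates sample entropy parametrically from a fitted AR model; for reference, the standard nonparametric SampEn(m, r) definition it approximates can be sketched as below. The series and parameters are illustrative only:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Nonparametric sample entropy SampEn(m, r) of a 1-D series x."""
    n = len(x)

    def matches(length):
        # pairs of templates of the given length within tolerance r
        # (Chebyshev distance), self-matches excluded
        c = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    c += 1
        return c

    return -math.log(matches(m + 1) / matches(m))

# A regular series yields low entropy; r is conventionally chosen as
# about 0.2 times the standard deviation of the series.
series = [math.sin(0.5 * i) for i in range(200)]
print(sample_entropy(series, m=2, r=0.2))
```

The parametric (SETH) variant replaces the O(n²) template counting with a closed-form estimate from the AR model coefficients, which is what makes it attractive as a single compact feature.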

  13. Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning

    DTIC Science & Technology

    2008-01-01

    ranging from the income level to age and her preference order over a set of products (e.g. movies in Netflix). The ranking task is to learn a mapping...learners in RankBoost. However, in both cases, the proposed strategy selects the samples which are estimated to produce a faster convergence from the...steps in Section 5. 2. Related Work A number of strategies have been proposed for active learning in the classification framework. Some of those center

  14. Quantitative estimation of 3-D fiber course in gross histological sections of the human brain using polarized light.

    PubMed

    Axer, H; Axer, M; Krings, T; Keyserlingk, D G

    2001-02-15

    Series of polarized light images can be used to achieve quantitative estimates of the angles of inclination (z-direction) and direction (in the xy-plane) of central nervous fibers in histological sections of the human brain. (1) The corpus callosum of a formalin-fixed human brain was sectioned at different angles of inclination of the nerve fibers and at different sample thicknesses. The minimum and maximum intensities, and their differences, revealed a linear relationship with the angle of inclination of the fibers. It was demonstrated that sections with a thickness of 80-120 microm are best suited for estimating the angle of inclination. (2) Afterwards, the optic tracts of eight formalin-fixed human brains were sliced at different angles of fiber inclination at 100 microm. Intensity measurements in 30 pixels in each section were used to calculate a linear calibration function. The maximum intensities, and the differences between maximum and minimum values, measured with two polars only were best suited for estimating fiber inclination. (3) Gross histological brain slices of formalin-fixed human brains were digitized under azimuths from 0 to 80 degrees using two polars only; these sequences were used to estimate the inclination of the fibers (z-direction). The same slices were digitized under azimuths from 0 to 160 degrees in steps of 20 degrees using an additional quarter-wave plate; these sequences were used to estimate the direction of the fibers in the xy-plane. The method can be used to produce maps of fiber orientation in gross histological sections of the human brain similar to the fiber orientation maps derived by diffusion-weighted magnetic resonance imaging.

  15. A quantitative framework to estimate the relative importance of environment, spatial variation and patch connectivity in driving community composition.

    PubMed

    Monteiro, Viviane F; Paiva, Paulo C; Peres-Neto, Pedro R

    2017-03-01

    Perhaps the most widely used quantitative approach in metacommunity ecology is the estimation of the importance of local environment vs. spatial structuring using the variation partitioning framework. Contrary to metapopulation models, however, current empirical studies of metacommunity structure using variation partitioning assume a space-for-dispersal substitution due to the lack of analytical frameworks that incorporate patch connectivity predictors of dispersal dynamics. Here, a method is presented that allows estimating the relative importance of environment, spatial variation and patch connectivity in driving community composition variation within metacommunities. The proposed approach is illustrated by a study designed to understand the factors driving the structure of a soft-bottom marine polychaete metacommunity. Using a standard variation partitioning scheme (i.e. where only environmental and spatial predictors are used), only about 13% of the variation in metacommunity structure was explained. With the connectivity set of predictors, the total amount of explained variation increased up to 51% of the variation. These results highlight the importance of considering predictors of patch connectivity rather than just spatial predictors. Given that information on connectivity can be estimated by commonly available data on species distributions for a number of taxa, the framework presented here can be readily applied to past studies as well, facilitating a more robust evaluation of the factors contributing to metacommunity structure.

  16. Validation of Body Condition Indices and Quantitative Magnetic Resonance in Estimating Body Composition in a Small Lizard

    PubMed Central

    WARNER, DANIEL A.; JOHNSON, MARIA S.; NAGY, TIM R.

    2017-01-01

    Measurements of body condition are typically used to assess an individual’s quality, health, or energetic state. Most indices of body condition are based on linear relationships between body length and mass. Although these indices are simple to obtain, nonlethal, and useful indications of energetic state, their accuracy at predicting constituents of body condition (e.g., fat and lean mass) is often unknown. The objectives of this research were to (1) validate the accuracy of another simple and noninvasive method, quantitative magnetic resonance (QMR), at estimating body composition in a small-bodied lizard, Anolis sagrei, and (2) evaluate the accuracy of two indices of body condition (based on length–mass relationships) at predicting body fat, lean, and water mass. Comparisons of results from QMR scans to those from chemical carcass analysis reveal that QMR measures body fat, lean, and water mass with excellent accuracy in male and female lizards. With minor calibration from regression equations, QMR will be a reliable method of estimating body composition of A. sagrei. Body condition indices were positively related to absolute estimates of each constituent of body composition, but these relationships showed considerable variation around regression lines. In addition, condition indices did not predict fat, lean, or water mass when adjusted for body mass. Thus, our results emphasize the need for caution when interpreting body condition based upon linear measurements of animals. Overall, QMR provides an alternative noninvasive method for accurately measuring fat, lean, and water mass in these small-bodied animals. PMID:28035770

  17. Quantitative estimation of land surface characteristic parameters and evapotranspiration in the Nagqu river basin over the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Zhong, Lei; Ma, Yaoming; Su, Z. Bob; Ma, Weiqiang; Zou, Mijun; Wang, Binbin; Han, Cunbo; Hu, Yuanyuan

    2016-04-01

    Evapotranspiration is an important component of the water cycle in the Tibetan Plateau and is controlled by many hydrological and meteorological factors. It is therefore of great significance to estimate evapotranspiration accurately and continuously, and understanding land surface parameters and land-atmosphere water exchange processes at the small watershed scale has drawn much attention from the scientific community. Based on in-situ conventional meteorological data from the Nagqu river basin and surrounding regions, the point-scale evapotranspiration distribution characteristics in the study area were quantitatively estimated, and the main meteorological factors affecting the evaporation process were analyzed. Both polar-orbiting and geostationary satellite data with different spatial resolutions (such as Landsat, SPOT, MODIS, and FY-2C) were used to derive the surface characteristics of the river basin. Time-series processing was applied to remove cloud cover and reconstruct the data series. Combined with the meteorological observations from the Nagqu river basin and surrounding regions, evapotranspiration in this small alpine watershed was estimated and validated using a remote sensing parameterization scheme, successfully revealing the typical spatio-temporal variation characteristics of evapotranspiration in a small watershed of an alpine region.

  18. Usefulness of the automatic quantitative estimation tool for cerebral blood flow: clinical assessment of the application software tool AQCEL.

    PubMed

    Momose, Mitsuhiro; Takaki, Akihiro; Matsushita, Tsuyoshi; Yanagisawa, Shin; Yano, Kesato; Miyasaka, Tadashi; Ogura, Yuka; Kadoya, Masumi

    2011-01-01

    AQCEL enables automatic reconstruction of single-photon emission computed tomography (SPECT) images without image degradation and quantitative analysis of cerebral blood flow (CBF) after the input of simple parameters. We ascertained the usefulness and quality of images obtained by the application software AQCEL in clinical practice. Twelve patients underwent brain perfusion SPECT using technetium-99m ethyl cysteinate dimer at rest and after acetazolamide (ACZ) loading. Images reconstructed using AQCEL were compared with those reconstructed using the conventional filtered back projection (FBP) method for qualitative estimation. Two experienced nuclear medicine physicians interpreted the image quality using the following visual scores: 0, same; 1, slightly superior; 2, superior. For quantitative estimation, the mean CBF values of the normal hemispheres of the 12 patients under ACZ loading calculated by the AQCEL method were compared with those calculated by the conventional method. The CBF values of the 24 regions of the 3-dimensional stereotaxic region of interest template (3DSRT) calculated by the AQCEL method at rest and after ACZ loading were compared to those calculated by the conventional method. No significant qualitative difference was observed between the AQCEL and conventional FBP methods in the rest study. The average score by the AQCEL method was 0.25 ± 0.45 and that by the conventional method was 0.17 ± 0.39 (P = 0.34). There was a significant qualitative difference between the AQCEL and conventional methods in the ACZ loading study. The average score for AQCEL was 0.83 ± 0.58 and that for the conventional method was 0.08 ± 0.29 (P = 0.003). In the quantitative estimation under ACZ loading, the mean CBF values of the 12 patients calculated by the AQCEL method were 3-8% higher than those calculated by the conventional method. The square of the correlation coefficient between these methods was 0.995. While comparing the 24 3DSRT regions of 12 patients, the squares of the correlation

  19. Quantitative estimation of UV light dose concomitant to irradiation with ionizing radiation

    NASA Astrophysics Data System (ADS)

    Petin, Vladislav G.; Morozov, Ivan I.; Kim, Jin Kyu; Semkina, Maria A.

    2011-01-01

    A simple mathematical model for the biological estimation of the UV light dose concomitant to ionizing radiation is suggested. This approach was applied to determine how the equivalent UV dose accompanying 100 Gy of ionizing radiation depends on the energy of the sparsely ionizing radiation and on the volume of the exposed cell suspension. It was revealed that the relative contribution of excitations to the total lethal effect, and the equivalent UV dose, increase greatly with increasing energy of the ionizing radiation and volume of the irradiated suspension. These observations agree with the supposition that Čerenkov emission is responsible for the production of UV light damage and for the phenomenon of photoreactivation observed after ionizing exposure of bacterial and yeast cells hypersensitive to UV light. A possible synergistic interaction of the damage produced by ionizations and excitations, as well as a probable participation of the UV component of ionizing radiation in the mechanisms of hormesis and adaptive response observed after ionizing radiation exposure, is discussed.

  20. The interannual trend and preliminary quantitative estimation of the oceans condition in the Bohai Sea area

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Yang, Jin-kun; Miao, Qing-sheng; Gao, Xiu-min

    2017-01-01

    Analysis of temperature data observed at different frequencies at Bohai Sea marine observation stations showed that the daily mean sea surface temperature obtained from three observations per day (08h, 14h, 20h) is slightly lower than that obtained from 24 hourly observations. The difference was generally within 0.10°, and within 0.05° for the monthly mean temperature. Daily mean sea surface temperatures based on three observations per day therefore have little effect on either the statistical properties or the accuracy of the statistics, do not compromise the representativeness of the data, and can be used in studying long time series. Analyzing the trend of the SST data for 1960-2012 and the SAT data for 1965-2012 with the linear tendency estimate and the cumulative distance square method shows that SST rose at an annual rate of 0.010°/a, a total increase of 0.53° over the last 53 years, while SAT rose at 0.043°/a, a total increase of 2.06° over the last 48 years. Although the long-term trends of both factors increased significantly, there was a marked change point around 1987: from 1960 to 1987 both showed a downward trend, after 1987 they began to rise, and the upward trend did not diminish until 2009.
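    The linear tendency estimate used for the SST and SAT trends is an ordinary least-squares slope. A minimal sketch on an idealized, noise-free series built with the reported 0.010°/a rate:

```python
def linear_trend(years, values):
    """Ordinary least-squares slope (units per year) and intercept."""
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(years, values))
             / sum((x - mx) ** 2 for x in years))
    return slope, my - slope * mx

# Idealized annual-mean SST series warming at exactly 0.010 deg/yr
# (the 10.0 deg baseline is an assumption for illustration):
years = list(range(1960, 2013))             # 53 annual values
sst = [10.0 + 0.010 * (y - 1960) for y in years]
rate, _ = linear_trend(years, sst)
print(round(rate, 3), round(rate * 53, 2))  # 0.01 deg/yr, 0.53 deg over 53 yr
```

On real station data the same slope estimate is simply applied to the observed annual means, with the change point around 1987 found separately from the cumulative-anomaly curve.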

  1. [Quantitative topographic characterization of the myoelectric activity distribution of the masseter muscle: mapping of spectral EMG parameters].

    PubMed

    Scholle, H C; Schumann, N P; Anders, C; Mey, E

    1992-09-01

    A new method for the quantitative characterization of the myoelectrical activity distribution of the masseter muscle, by mapping of spectral EMG parameters, is described. Surface electromyograms of the masseter were recorded monopolarly (16 channels). On the basis of registered EMG intervals (512 ms), the spectral EMG power of several frequency bands was calculated (fast Fourier transform). The spectral EMG parameters between the 16 electrode positions were estimated by linear interpolation (4-nearest-neighbours algorithm) and then mapped onto a grey-tone or colour scale with 10 intervals. The EMG activity maps ("EMG maps") obtained in this way permit a quantitative topographic characterization of myoelectrical masseter activity during different functional loading procedures. The frequency range to be considered in masseter surface EMG investigations encompasses 15 to 500 Hz. The topography of the EMG activation pattern of the masseter is described comprehensively only when the electrode array consists of 16 or more electrodes. During defined motor tasks, such as clenching with controlled forces, the reproducibility of the EMG maps with respect to the topography of the EMG activity pattern is very high. The absolute values of spectral EMG power, as well as the power changes of selected frequency bands during clenching, correlate with the magnitude of the chewing forces.
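    The spectral step (512-sample epochs, band powers between 15 and 500 Hz) can be sketched with a direct DFT. The sampling rate and test frequency below are assumptions for illustration, not taken from the paper:

```python
import cmath
import math

def band_power(signal, fs, bands):
    """One-sided spectral power of `signal` (sampled at fs Hz) summed over
    each (lo, hi) frequency band, via a direct DFT (O(N^2); acceptable
    for short EMG epochs)."""
    n = len(signal)
    powers = {b: 0.0 for b in bands}
    for k in range(1, n // 2):  # skip the DC component
        X = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        f = k * fs / n
        p = 2.0 * abs(X) ** 2 / n ** 2  # one-sided power at frequency f
        for lo, hi in bands:
            if lo <= f < hi:
                powers[(lo, hi)] += p
    return powers

# Hypothetical 512-sample epoch at 1 kHz containing a pure 125 Hz component
# (125 Hz falls exactly on DFT bin 64, so there is no spectral leakage):
fs, n = 1000, 512
sig = [math.sin(2 * math.pi * 125 * t / fs) for t in range(n)]
p = band_power(sig, fs, [(15, 150), (150, 500)])
print(p[(15, 150)] > 1000 * p[(150, 500)])  # power concentrates in 15-150 Hz
```

A production implementation would use a fast FFT rather than this direct sum; the band-summing logic is the same.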

  2. Synthesis, photodynamic activity, and quantitative structure-activity relationship modelling of a series of BODIPYs.

    PubMed

    Caruso, Enrico; Gariboldi, Marzia; Sangion, Alessandro; Gramatica, Paola; Banfi, Stefano

    2017-02-01

    Here we report the synthesis of eleven new BODIPYs (14-24) characterized by the presence of an aromatic ring at the 8 (meso) position and of iodine atoms at the pyrrolic 2,6 positions. These molecules, together with twelve BODIPYs we reported previously (1-12), represent a large panel of BODIPYs bearing different atoms or groups as substituents on the aromatic moiety. Two physico-chemical features ((1)O2 generation rate and lipophilicity), which can play a fundamental role in their performance as photosensitizers, have been studied. The in vitro photo-induced cell-killing efficacy of the 23 PSs was studied on the SKOV3 cell line, treating the cells for 24 h in the dark and then irradiating them for 2 h with a green LED device (fluence 25.2 J/cm(2)). The cell-killing efficacy was assessed with the MTT test and compared with that of the meso-unsubstituted compound (13). In order to understand the possible effect of the substituents, a predictive quantitative structure-activity relationship (QSAR) regression model, based on theoretical holistic molecular descriptors, was developed. The results clearly indicate that the presence of an aromatic ring is fundamental for an excellent photodynamic response, whereas the electronic effects and the positions of the substituents on the aromatic ring do not influence the photodynamic efficacy. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Antiproliferative Pt(IV) complexes: synthesis, biological activity, and quantitative structure-activity relationship modeling.

    PubMed

    Gramatica, Paola; Papa, Ester; Luini, Mara; Monti, Elena; Gariboldi, Marzia B; Ravera, Mauro; Gabano, Elisabetta; Gaviglio, Luca; Osella, Domenico

    2010-09-01

    Several Pt(IV) complexes of the general formula [Pt(L)2(L')2(L'')2] [axial ligands L are Cl-, RCOO-, or OH-; equatorial ligands L' are two am(m)ine or one diamine; and equatorial ligands L'' are Cl- or glycolato] were rationally designed and synthesized in the attempt to develop a predictive quantitative structure-activity relationship (QSAR) model. Numerous theoretical molecular descriptors were used alongside physicochemical data (i.e., reduction peak potential, Ep, and partition coefficient, log Po/w) to obtain a validated QSAR between in vitro cytotoxicity (half maximal inhibitory concentrations, IC50, on A2780 ovarian and HCT116 colon carcinoma cell lines) and some features of Pt(IV) complexes. In the resulting best models, a lipophilic descriptor (log Po/w or the number of secondary sp3 carbon atoms) plus an electronic descriptor (Ep, the number of oxygen atoms, or the topological polar surface area expressed as the N,O polar contribution) is necessary for modeling, supporting the general finding that the biological behavior of Pt(IV) complexes can be rationalized on the basis of their cellular uptake, the Pt(IV)-->Pt(II) reduction, and the structure of the corresponding Pt(II) metabolites. Novel compounds were synthesized on the basis of their predicted cytotoxicity in the preliminary QSAR model, and were experimentally tested. A final QSAR model, based solely on theoretical molecular descriptors to ensure its general applicability, is proposed.

  4. Quantitative and qualitative estimates of cross-border tobacco shopping and tobacco smuggling in France.

    PubMed

    Lakhdar, C Ben

    2008-02-01

    In France, cigarette sales have fallen sharply, especially in border areas, since the price increases of 2003 and 2004. It was proposed that these falls were due not to people quitting smoking but rather to increased cross-border sales of tobacco and/or smuggling. This paper aims to test this proposition. Three approaches were used. First, cigarette sales data from French sources for the period 1999-2006 were collected, and a simulation of the changes seen within these sales was carried out in order to estimate what the sales situation would have looked like without the presence of foreign tobacco. Second, the tobacco consumption reported by the French population was compared with registered tobacco sales. Finally, in order to identify the countries of origin of foreign tobacco entering France, we collected a random sample of cigarette packs from a waste collection centre. According to the first method, cross-border shopping and smuggling of tobacco accounted for 8635 tonnes of tobacco in 2004, 9934 in 2005, and 9930 in 2006, i.e., between 14% and 17% of total sales. The second method gave larger results: the difference between registered cigarette sales and cigarettes declared as being smoked was around 12,000 to 13,000 tonnes in 2005, equivalent to 20% of legal sales. The collection of cigarette packs at a waste collection centre showed that foreign cigarettes accounted for 18.6% of our sample in 2005 and 15.5% in 2006. France seems mainly to be a victim of cross-border purchasing of tobacco products, with the contraband market for tobacco remaining modest. In order to limit cross-border purchases, increased harmonization of national policies on the taxation of tobacco products needs to be envisaged by the European Union.

  5. Quantitative estimates of metamorphic equilibria: Tallassee synform, Dadeville belt, Alabama's Inner Piedmont

    SciTech Connect

    Drummond, M.S.; Neilson, M.J. (Dept. of Geology)

    1993-03-01

    The Tallassee synform is the major structural feature in the western part of the Dadeville belt. This megascopic F2 structure folds amphibolite (Ropes Creek Amphibolite) and metasedimentary units (Agricola Schist, AS), as well as tonalitic (Camp Hill Gneiss, CHG), granitic (Chattasofka Creek Gneiss, CCG), and mafic-ultramafic plutons (Doss Mt. and Slaughters suites). Acadian-age prograde regional metamorphism preceded the F2 folding event, producing the pervasive S1 foliation and metamorphic recrystallization. Prograde mineralogy in the metapelites and metagraywackes of the AS includes garnet, biotite, muscovite, plagioclase, kyanite, sillimanite, and epidote. The intrusive rocks, both felsic and mafic-ultramafic, are occasionally garnetiferous and provide suitable mineral assemblages for P-T evaluation. The AS yields a range of T-P from 512-635°C and 5.1-5.5 kb. Muscovite from the AS exhibits an increase in Ti content from 0.07 to 0.15 Ti/22 O formula unit with progressively increasing temperatures from 512 to 635°C. This observation is consistent with other studies that show increasing Ti content with increasing grade. A CHG sample records an average metamorphic T-P of 604°C and 5.79 kb. Hornblende-garnet pairs from a Doss Mt. amphibolite sample provide an average metamorphic T of 607°C. These data are consistent with regional Barrovian-type middle to upper amphibolite facies metamorphism for the Tallassee synform. Peak metamorphism is represented by kyanite-sillimanite zone conditions and localized migmatization of the AS. The lithotectonic belts bounding the Dadeville belt to the NW and SE are the eastern Blue Ridge and Opelika belts. Studies have shown that these belts have also experienced Acadian-age amphibolite facies metamorphism with P-T estimates comparable to those presented here. These data suggest that the eastern Blue Ridge and Inner Piedmont of AL experienced the same pervasive dynamothermal Barrovian-type metamorphic episode during Acadian orogenesis.

  6. Estimating Active Transportation Behaviors to Support Health Impact Assessment in the United States

    PubMed Central

    Mansfield, Theodore J.; Gibson, Jacqueline MacDonald

    2016-01-01

    Health impact assessment (HIA) has been promoted as a means to encourage transportation and city planners to incorporate health considerations into their decision-making. Ideally, HIAs would include quantitative estimates of the population health effects of alternative planning scenarios, such as scenarios with and without infrastructure to support walking and cycling. However, the lack of baseline estimates of time spent walking or biking for transportation (together known as “active transportation”), which are critically related to health, often prevents planners from developing such quantitative estimates. To address this gap, we use data from the 2009 US National Household Travel Survey to develop a statistical model that estimates baseline time spent walking and biking as a function of the type of transportation used to commute to work along with demographic and built environment variables. We validate the model using survey data from the Raleigh–Durham–Chapel Hill, NC, USA, metropolitan area. We illustrate how the validated model could be used to support transportation-related HIAs by estimating the potential health benefits of built environment modifications that support walking and cycling. Our statistical model estimates that on average, individuals who commute on foot spend an additional 19.8 (95% CI 16.9–23.2) minutes per day walking compared to automobile commuters. Public transit riders walk an additional 5.0 (95% CI 3.5–6.4) minutes per day compared to automobile commuters. Bicycle commuters cycle for an additional 28.0 (95% CI 17.5–38.1) minutes per day compared to automobile commuters. The statistical model was able to predict observed transportation physical activity in the Raleigh–Durham–Chapel Hill region to within 0.5 MET-hours per day (equivalent to about 9 min of daily walking time) for 83% of observations. Across the Raleigh–Durham–Chapel Hill region, an estimated 38 (95% CI 15–59) premature deaths potentially could

  7. Estimating Active Transportation Behaviors to Support Health Impact Assessment in the United States.

    PubMed

    Mansfield, Theodore J; Gibson, Jacqueline MacDonald

    2016-01-01

    Health impact assessment (HIA) has been promoted as a means to encourage transportation and city planners to incorporate health considerations into their decision-making. Ideally, HIAs would include quantitative estimates of the population health effects of alternative planning scenarios, such as scenarios with and without infrastructure to support walking and cycling. However, the lack of baseline estimates of time spent walking or biking for transportation (together known as "active transportation"), which are critically related to health, often prevents planners from developing such quantitative estimates. To address this gap, we use data from the 2009 US National Household Travel Survey to develop a statistical model that estimates baseline time spent walking and biking as a function of the type of transportation used to commute to work along with demographic and built environment variables. We validate the model using survey data from the Raleigh-Durham-Chapel Hill, NC, USA, metropolitan area. We illustrate how the validated model could be used to support transportation-related HIAs by estimating the potential health benefits of built environment modifications that support walking and cycling. Our statistical model estimates that on average, individuals who commute on foot spend an additional 19.8 (95% CI 16.9-23.2) minutes per day walking compared to automobile commuters. Public transit riders walk an additional 5.0 (95% CI 3.5-6.4) minutes per day compared to automobile commuters. Bicycle commuters cycle for an additional 28.0 (95% CI 17.5-38.1) minutes per day compared to automobile commuters. The statistical model was able to predict observed transportation physical activity in the Raleigh-Durham-Chapel Hill region to within 0.5 MET-hours per day (equivalent to about 9 min of daily walking time) for 83% of observations. 
Across the Raleigh-Durham-Chapel Hill region, an estimated 38 (95% CI 15-59) premature deaths potentially could be avoided if the entire
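    The 0.5 MET-hour validation tolerance quoted above corresponds to roughly 9 minutes of walking if transport walking is assigned about 3.3 METs. A sketch of that conversion; the MET intensities are assumed for illustration, not taken from the paper:

```python
WALK_MET = 3.3  # assumed MET intensity of transport walking
BIKE_MET = 6.8  # assumed MET intensity of transport cycling

def met_hours(walk_min_per_day, bike_min_per_day):
    """Daily transport physical activity in MET-hours."""
    return (walk_min_per_day * WALK_MET + bike_min_per_day * BIKE_MET) / 60.0

# Extra activity of a pedestrian commuter (19.8 min/day of added walking):
print(round(met_hours(19.8, 0.0), 2))
# Walking time equivalent to the 0.5 MET-hour validation tolerance:
print(round(0.5 * 60 / WALK_MET, 1))  # about 9 minutes
```

Converting predicted active-travel minutes into MET-hours is the step that links the travel-survey regression model to the dose-response functions used in the health impact calculation.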

  8. Validation of Quantitative Structure-Activity Relationship (QSAR) Model for Photosensitizer Activity Prediction

    PubMed Central

    Frimayanti, Neni; Yam, Mun Li; Lee, Hong Boon; Othman, Rozana; Zain, Sharifuddin M.; Rahman, Noorsaadah Abd.

    2011-01-01

    Photodynamic therapy is a relatively new treatment method for cancer which utilizes a combination of oxygen, a photosensitizer and light to generate reactive singlet oxygen that eradicates tumors via direct cell-killing, vasculature damage and engagement of the immune system. Most of the photosensitizers in clinical and pre-clinical assessment, or those already approved for clinical use, are mainly based on cyclic tetrapyrroles. In an attempt to discover new effective photosensitizers, we report the use of the quantitative structure-activity relationship (QSAR) method to develop a model that could correlate the structural features of cyclic tetrapyrrole-based compounds with their photodynamic therapy (PDT) activity. In this study, a set of 36 porphyrin derivatives was used in the model development, with 24 of these compounds in the training set and the remaining 12 compounds in the test set. The development of the QSAR model involved the use of the multiple linear regression analysis (MLRA) method. With this method, r2, r2 (CV) and r2 prediction values of 0.87, 0.71 and 0.70 were obtained. The QSAR model was also employed to predict the experimental compounds in an external test set. This external test set comprises 20 porphyrin-based compounds with experimental IC50 values ranging from 0.39 μM to 7.04 μM. Thus the model showed good correlative and predictive ability, with a predictive correlation coefficient (r2 prediction for the external test set) of 0.52. The developed QSAR model was used to discover some compounds from this external test set as new lead photosensitizers. PMID:22272096
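    The train/test validation scheme behind reported r2 and r2-prediction values can be sketched with an ordinary least-squares fit. Here a single hypothetical descriptor stands in for the study's holistic molecular descriptors, and all data are invented for illustration:

```python
def fit_line(x, y):
    """Least-squares fit of y = a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def r_squared(x, y, a, b):
    """Coefficient of determination of the fit (a, b) on the set (x, y)."""
    my = sum(y) / len(y)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical descriptor values (e.g. a lipophilicity index) vs. activity:
train_x = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
train_y = [4.1, 4.6, 5.0, 5.6, 5.9, 6.5]
test_x = [1.2, 2.2, 3.2]  # external test set, never seen during fitting
test_y = [4.3, 5.2, 6.1]

a, b = fit_line(train_x, train_y)
print(round(r_squared(train_x, train_y, a, b), 3))  # fit on the training set
print(round(r_squared(test_x, test_y, a, b), 3))    # external predictivity
```

Real QSAR practice uses multiple descriptors and cross-validated statistics (the r2(CV) above), but the principle is the same: a model is judged on compounds it was not fitted to.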

  9. Agreement between clinical estimation and a new quantitative analysis by Photoshop software in fundus and angiographic image variables.

    PubMed

    Ramezani, Alireza; Ahmadieh, Hamid; Azarmina, Mohsen; Soheilian, Masoud; Dehghan, Mohammad H; Mohebbi, Mohammad R

    2009-12-01

    To evaluate the validity of a new method for the quantitative analysis of fundus or angiographic images using Photoshop 7.0 (Adobe, USA) software by comparing it with clinical evaluation. Four hundred and eighteen fundus and angiographic images of diabetic patients were evaluated by three retina specialists and then analyzed computationally using Photoshop 7.0 software. Four variables were selected for comparison: amount of hard exudates (HE) on color pictures, amount of HE on red-free pictures, severity of leakage, and the size of the foveal avascular zone (FAZ). The coefficients of agreement (Kappa) between the two methods for the amount of HE on color and red-free photographs were 85% (0.69) and 79% (0.59), respectively. The agreement for severity of leakage was 72% (0.46). For the evaluation of FAZ size using the magic wand and magnetic lasso tools, the agreement was 54% (0.09) and 89% (0.77), respectively. Agreement in the estimation of FAZ size with the magnetic lasso tool was excellent, and almost as good in the quantification of HE on color and red-free images. Considering the agreement of this new technique with clinical evaluation in the measurement of variables on fundus images, this method seems to have sufficient validity to be used for the quantitative analysis of HE, leakage, and FAZ size on the angiograms of diabetic patients.
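    The coefficient of agreement (Kappa) reported between the two methods is Cohen's kappa, which corrects raw percent agreement for chance. A minimal sketch with hypothetical severity grades:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters grading the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    cats = set(ratings_a) | set(ratings_b)
    # observed agreement
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # agreement expected by chance from each rater's marginal frequencies
    pe = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

# Hypothetical severity grades (0-2) from clinicians vs. the software method:
clinical = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]
software = [0, 1, 2, 1, 0, 2, 2, 1, 0, 1]
print(round(cohens_kappa(clinical, software), 2))  # 0.7
```

Note how 80% raw agreement here yields a kappa of about 0.7, the same pattern as the abstract's 85% agreement with kappa 0.69: kappa discounts the matches two raters would produce by chance alone.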

  10. Quantitative analysis of O-isopropyl methylphosphonic acid in serum samples of Japanese citizens allegedly exposed to sarin: estimation of internal dosage.

    PubMed

    Noort, D; Hulst, A G; Platenburg, D H; Polhuijs, M; Benschop, H P

    1998-10-01

    A convenient and rapid micro-anion exchange liquid chromatography (LC) tandem electrospray mass spectrometry (MS) procedure was developed for quantitative analysis in serum of O-isopropyl methylphosphonic acid (IMPA), the hydrolysis product of the nerve agent sarin. The mass spectrometric procedure involves negative or positive ion electrospray ionization and multiple reaction monitoring (MRM) detection. The method could be successfully applied to the analysis of serum samples from victims of the Tokyo subway attack and of an earlier incident at Matsumoto, Japan. IMPA levels ranging from 2 to 135 ng/ml were found. High levels of IMPA appear to correlate with low levels of residual butyrylcholinesterase activity in the samples and vice versa. Based on our analyses, the internal and exposure doses of the victims were estimated. In several cases, the doses appeared to be substantially higher than the assumed lethal doses in man.

  11. Quantitative assessment of the microbial risk of leafy greens from farm to consumption: preliminary framework, data, and risk estimates.

    PubMed

    Danyluk, Michelle D; Schaffner, Donald W

    2011-05-01

    This project was undertaken to relate what is known about the behavior of Escherichia coli O157:H7 under laboratory conditions and integrate this information with what is known regarding the 2006 E. coli O157:H7 spinach outbreak in the context of a quantitative microbial risk assessment. The risk model explicitly assumes that all contamination arises from exposure in the field. Extracted data, models, and user inputs were entered into an Excel spreadsheet, and the modeling software @RISK was used to perform Monte Carlo simulations. The model predicts that cut leafy greens that are temperature abused will support the growth of E. coli O157:H7, and populations of the organism may increase by as much as 1 log CFU/day under optimal temperature conditions. When the risk model used a starting level of -1 log CFU/g, with 0.1% of incoming servings contaminated, the predicted numbers of cells per serving were within the range of best available estimates of pathogen levels during the outbreak. The model predicts that levels in the field of -1 log CFU/g and 0.1% prevalence could have resulted in an outbreak approximately the size of the 2006 E. coli O157:H7 outbreak. This quantitative microbial risk assessment model represents a preliminary framework that identifies available data and provides initial risk estimates for pathogenic E. coli in leafy greens. Data gaps include retail storage times, correlations between storage time and temperature, determining the importance of lag time models for E. coli O157:H7 in leafy greens, and validation of the importance of cross-contamination during the washing process.
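
    A hedged, stdlib-only sketch of the Monte Carlo idea described above: sample storage conditions per serving, allow growth of up to 1 log CFU/day when temperature abused, and collect the distribution of log CFU per serving. All parameter values (storage time range, abuse fraction, serving mass) are illustrative assumptions, not the paper's inputs, which were implemented in @RISK.

```python
import math
import random

def simulate_servings(n, start_log_cfu_per_g=-1.0, prevalence=0.001,
                      serving_g=85.0, max_growth_log_per_day=1.0, seed=0):
    """Return log10 CFU per contaminated serving for n simulated servings."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        if rng.random() >= prevalence:
            continue  # serving was not contaminated in the field
        days = rng.uniform(0.0, 3.0)   # assumed storage time, days
        abused = rng.random() < 0.5    # assumed fraction temperature-abused
        growth = max_growth_log_per_day * days if abused else 0.0
        results.append(start_log_cfu_per_g + growth + math.log10(serving_g))
    return results
```

    With 0.1% prevalence, roughly one serving in a thousand carries the pathogen; the spread of the returned values reflects how strongly temperature abuse drives the final dose.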

  12. Merging Radar Quantitative Precipitation Estimates (QPEs) from the High-resolution NEXRAD Reanalysis over CONUS with Rain-gauge Observations

    NASA Astrophysics Data System (ADS)

    Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Nickl, E.; Seo, D. J.; Kim, B.; Zhang, J.; Qi, Y.

    2015-12-01

    The processing of radar-only precipitation via the reanalysis of the National Mosaic and Multi-Sensor QPE (NMQ/Q2) system, based on the WSR-88D Next-Generation Radar (NEXRAD) network over the Continental United States (CONUS), is complete for the period 2002 to 2011. While this constitutes a unique opportunity to study precipitation processes at higher resolution than conventionally possible (1 km, 5 min), the long-term radar-only product needs to be merged with in-situ information in order to be suitable for hydrological, meteorological and climatological applications. The radar-gauge merging is performed using rain gauge information at daily (Global Historical Climatology Network-Daily: GHCN-D), hourly (Hydrometeorological Automated Data System: HADS), and 5-min (Automated Surface Observing Systems: ASOS; Climate Reference Network: CRN) resolution. The challenges of incorporating networks of differing resolution and quality to generate long-term, large-scale gridded estimates of precipitation are enormous. In that perspective, we are implementing techniques for merging the rain gauge datasets and the radar-only estimates such as Inverse Distance Weighting (IDW), Simple Kriging (SK), Ordinary Kriging (OK), and Conditional Bias-Penalized Kriging (CBPK). An evaluation of the different radar-gauge merging techniques is presented, and we provide an estimate of uncertainty for the gridded estimates. In addition, comparisons with a suite of lower resolution QPEs derived from ground-based radar measurements (Stage IV) are provided in order to give a detailed picture of the improvements and remaining challenges.
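
    The simplest of the merging techniques named above is Inverse Distance Weighting. A minimal sketch (ours, not the study's implementation): interpolate gauge observations to a grid point by weighting each gauge with the inverse square of its distance.

```python
import math

def idw_estimate(x, y, gauges, power=2.0):
    """gauges: list of (gx, gy, value). IDW-interpolated value at (x, y)."""
    num = den = 0.0
    for gx, gy, value in gauges:
        d = math.hypot(x - gx, y - gy)
        if d == 0.0:
            return value  # exactly at a gauge: use its observation
        w = d ** -power   # inverse-distance weight
        num += w * value
        den += w
    return num / den
```

    The kriging variants (SK, OK, CBPK) replace these ad hoc weights with weights derived from a fitted spatial covariance model.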

  13. Quantitative structure-activity relationship analysis of acute toxicity of diverse chemicals to Daphnia magna with whole molecule descriptors.

    PubMed

    Moosus, M; Maran, U

    2011-10-01

    Quantitative structure-activity relationship analysis and estimation of toxicological effects at lower-mid trophic levels provide a first means of understanding the toxicity of chemicals. Daphnia magna serves as a good starting point for such toxicity studies and is also recognized for regulatory use in estimating the risk of chemicals. The ECOTOX database was queried and analysed for available data, and a homogeneous subset of 253 compounds for the endpoint LC50 48 h was established. A four-parameter quantitative structure-activity relationship was derived (coefficient of determination, r² = 0.740) for half of the compounds and internally validated (leave-one-out cross-validated coefficient of determination = 0.714; leave-many-out coefficient of determination = 0.738). External validation was carried out with the remaining half of the compounds (coefficient of determination for external validation = 0.634). Two of the descriptors in the model (log P, average bonding information content) capture the structural characteristics describing penetration through bio-membranes. The other two descriptors (energy of the highest occupied molecular orbital, weighted partial negative surface area) capture the electronic structural characteristics describing the interaction between the chemical and its hypothetical target in the cell. The applicability domain was subsequently analysed and discussed.
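
    A sketch (ours, stdlib only) of the leave-one-out cross-validation statistic reported above, simplified to a one-descriptor linear model: refit with each compound held out in turn and pool the squared prediction errors into a cross-validated coefficient of determination.

```python
def loo_q2(xs, ys):
    """Leave-one-out cross-validated r^2 for a simple linear model y ~ x."""
    press, mean_y = 0.0, sum(ys) / len(ys)
    for i in range(len(ys)):
        tx = [x for j, x in enumerate(xs) if j != i]  # training descriptors
        ty = [y for j, y in enumerate(ys) if j != i]  # training activities
        mx, my = sum(tx) / len(tx), sum(ty) / len(ty)
        b = sum((x - mx) * (y - my) for x, y in zip(tx, ty)) / \
            sum((x - mx) ** 2 for x in tx)
        a = my - b * mx
        press += (ys[i] - (a + b * xs[i])) ** 2  # error on the held-out point
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1.0 - press / ss_tot
```

    The paper's model has four descriptors; the held-out refitting loop is identical, only the regression step grows.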

  14. Modified DTW for a quantitative estimation of the similarity between rainfall time series

    NASA Astrophysics Data System (ADS)

    Djallel Dilmi, Mohamed; Barthès, Laurent; Mallet, Cécile; Chazottes, Aymeric

    2017-04-01

    Precipitation arises from complex meteorological phenomena and can be described as an intermittent process. The spatial and temporal variability of this phenomenon is significant and covers large scales. To analyze and model this variability and/or structure, several studies use a network of rain gauges providing multiple time series of precipitation measurements. To compare these time series, authors compute, for each series, parameters such as the PDF, rain peak intensity, occurrence, amount, duration and intensity. Despite the calculation of these parameters, however, the comparison between two measurement series remains qualitative. Owing to advection processes, when different sensors of an observation network measure precipitation time series that are identical in terms of intermittency or intensity, there is a time lag between the measured series. Analyzing and extracting relevant information on physical phenomena from these precipitation time series implies the development of automatic analytical methods capable of comparing two time series of precipitation measured by different sensors or at two different locations, and thus of quantifying their difference/similarity. The limits of the Euclidean distance for measuring the similarity between precipitation time series have been well demonstrated and explained (the Euclidean distance is indeed very sensitive to phase-shift effects: between two identical but slightly shifted time series, this distance is not negligible). To quantify and analyze these time lags, correlation functions are well established, normalized and commonly used to measure the spatial dependences required by many applications. However, authors generally observe considerable scatter in the inter-rain-gauge correlation coefficients obtained from individual pairs of rain gauges. Because of the substantial dispersion of estimated time lags, the
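
    The phase-shift sensitivity of the Euclidean distance discussed above is exactly what dynamic time warping tolerates. A minimal classic DTW sketch (ours; the paper's modified DTW is not reproduced here):

```python
def dtw_distance(a, b):
    """Classic dynamic time warping with absolute-difference local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match step
    return cost[n][m]
```

    For two identical rain pulses shifted by one sample, the Euclidean distance is nonzero while DTW realigns them to a distance of zero, which is the property motivating the modified-DTW similarity measure above.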

  15. On the precision of automated activation time estimation

    NASA Technical Reports Server (NTRS)

    Kaplan, D. T.; Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.

    1988-01-01

    We examined how the assignment of local activation times in epicardial and endocardial electrograms is affected by sampling rate, ambient signal-to-noise ratio, and sin(x)/x waveform interpolation. Algorithms used for the estimation of fiducial point locations included dV/dt max and a matched filter detection algorithm. Test signals included epicardial and endocardial electrograms overlying both normal and infarcted regions of dog myocardium. Signal-to-noise levels were adjusted by combining known data sets with white noise "colored" to match the spectral characteristics of experimentally recorded noise. For typical signal-to-noise ratios and sampling rates, the template-matching algorithm provided the greatest precision in reproducibly estimating fiducial point location, and sin(x)/x interpolation allowed for an additional significant improvement. With few restrictions, combining these two techniques may allow for use of digitization rates below the Nyquist rate without significant loss of precision.
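
    A sketch (ours, not the authors' code) of the matched-filter idea described above: cross-correlate a template of the expected deflection with the electrogram and take the lag of maximum correlation as the fiducial point.

```python
def matched_filter_activation(signal, template):
    """Return the sample index where the template best matches the signal."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(signal) - len(template) + 1):
        # Dot product of the template with the aligned signal window.
        score = sum(s * t for s, t in
                    zip(signal[lag:lag + len(template)], template))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

    Sub-sample precision, as in the study, would come from interpolating the correlation peak (e.g. with sin(x)/x reconstruction) rather than reporting an integer lag.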

  17. Quantitative structure-activity relationships and docking studies of calcitonin gene-related peptide antagonists.

    PubMed

    Kyani, Anahita; Mehrabian, Mohadeseh; Jenssen, Håvard

    2012-02-01

    Defining the role of calcitonin gene-related peptide in migraine pathogenesis could lead to the application of calcitonin gene-related peptide antagonists as novel migraine therapeutics. In this work, quantitative structure-activity relationship modeling of biological activities of a large range of calcitonin gene-related peptide antagonists was performed using a panel of physicochemical descriptors. The computational studies evaluated different variable selection techniques and demonstrated shuffling stepwise multiple linear regression to be superior to genetic algorithm-multiple linear regression. The linear quantitative structure-activity relationship model revealed better statistical parameters of cross-validation in comparison with the non-linear support vector regression technique. Implementing only five peptide descriptors into this linear quantitative structure-activity relationship model resulted in an extremely robust and highly predictive model with calibration, leave-one-out and leave-20-out validation R² of 0.9194, 0.9103, and 0.9214, respectively. We performed docking of the most potent calcitonin gene-related peptide antagonists with the calcitonin gene-related peptide receptor and demonstrated that peptide antagonists act by blocking access to the peptide-binding cleft. We also demonstrated the direct contact of residues 28-37 of the calcitonin gene-related peptide antagonists with the receptor. These results are in agreement with the conclusions drawn from the quantitative structure-activity relationship model, indicating that both electrostatic and steric factors should be taken into account when designing novel calcitonin gene-related peptide antagonists.

  18. Identification of Human Gustatory Cortex by Activation Likelihood Estimation

    PubMed Central

    Veldhuizen, Maria G.; Albrecht, Jessica; Zelano, Christina; Boesveldt, Sanne; Breslin, Paul; Lundström, Johan N.

    2010-01-01

    Over the last two decades, neuroimaging methods have identified a variety of taste-responsive brain regions. Their precise location, however, remains in dispute. For example, taste stimulation activates areas throughout the insula and overlying operculum, but identification of subregions has been inconsistent. Furthermore, literature reviews and summaries of gustatory brain activations tend to reiterate rather than resolve this ambiguity. Here we used a new meta-analytic method [activation likelihood estimation (ALE)] to obtain a probability map of the location of gustatory brain activation across fourteen studies. The map of activation likelihood values can also serve as a source of independent coordinates for future region-of-interest analyses. We observed significant cortical activation probabilities in: bilateral anterior insula and overlying frontal operculum, bilateral mid dorsal insula and overlying Rolandic operculum, and bilateral posterior insula/parietal operculum/postcentral gyrus, left lateral orbitofrontal cortex (OFC), right medial OFC, pregenual anterior cingulate cortex (prACC) and right mediodorsal thalamus. This analysis confirms the involvement of multiple cortical areas within insula and overlying operculum in gustatory processing and provides a functional “taste map” which can be used as an inclusive mask in the data analyses of future studies. In light of this new analysis, we discuss human central processing of gustatory stimuli and identify topics where increased research effort is warranted. PMID:21305668

  19. Quantitative estimation of foot-flat and stance phase of gait using foot-worn inertial sensors.

    PubMed

    Mariani, Benoit; Rouhani, Hossein; Crevoisier, Xavier; Aminian, Kamiar

    2013-02-01

    Time periods composing the stance phase of gait can be clinically meaningful parameters for revealing differences between normal and pathological gait. This study aimed, first, to describe a novel method for detecting stance and inner-stance temporal events based on foot-worn inertial sensors; second, to extract and validate relevant metrics from those events; and third, to investigate their suitability as clinical outcomes for gait evaluations. 42 subjects, including healthy subjects and patients before and after surgical treatment for ankle osteoarthritis, performed 50-m walking trials while wearing foot-worn inertial sensors and pressure insoles as a reference system. Several hypotheses were evaluated to detect heel-strike, toe-strike, heel-off, and toe-off based on kinematic features. Detected events were compared with the reference system on 3193 gait cycles and showed good accuracy and precision. Absolute and relative stance periods, namely loading response, foot-flat, and push-off, were then estimated, validated, and compared statistically between populations. Besides significant differences observed in stance duration, the analysis revealed differing tendencies, notably a shorter foot-flat in healthy subjects. The results indicated which features of the inertial sensors' signals should be preferred for detecting temporal events precisely and accurately against a reference standard. The system is suitable for clinical evaluations and provides temporal analysis of gait beyond the common swing/stance decomposition, through a quantitative estimation of inner-stance phases such as foot-flat.
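
    An illustrative sketch, not the paper's algorithm: one simple way to quantify a foot-flat period from a foot-worn gyroscope is to find the longest run of stance samples in which angular velocity stays below a stillness threshold, then express it as a fraction of stance. The function and threshold are our own assumptions.

```python
def foot_flat_fraction(gyro_norm, threshold):
    """Fraction of stance spent in the longest low-angular-velocity run."""
    best = run = 0
    for w in gyro_norm:
        run = run + 1 if abs(w) < threshold else 0  # extend or reset the run
        best = max(best, run)
    return best / len(gyro_norm)
```

    The study instead detects explicit heel-strike/toe-strike/heel-off/toe-off events and derives foot-flat from the interval between them; the sketch only conveys why the foot-flat phase is recoverable from inertial data at all.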

  20. Development of combination tapered fiber-optic biosensor dip probe for quantitative estimation of interleukin-6 in serum samples

    NASA Astrophysics Data System (ADS)

    Wang, Chun Wei; Manne, Upender; Reddy, Vishnu B.; Oelschlager, Denise K.; Katkoori, Venkat R.; Grizzle, William E.; Kapoor, Rakesh

    2010-11-01

    A combination tapered fiber-optic biosensor (CTFOB) dip probe for rapid and cost-effective quantification of proteins in serum samples has been developed. This device relies on diode laser excitation and a charge-coupled device spectrometer and functions on the principle of a sandwich immunoassay. As a proof of principle, this technique was applied to the quantitative estimation of interleukin-6 (IL-6). The probes detected IL-6 at picomolar levels in serum samples obtained from a patient with lupus, an autoimmune disease, and a patient with lymphoma. The estimated concentration of IL-6 in the lupus sample was 5.9 ± 0.6 pM, and in the lymphoma sample, it was below the detection limit. These concentrations were verified by a procedure involving bead-based xMAP technology. A similar trend in the concentrations was observed. The specificity of the CTFOB dip probes was assessed by analysis with receiver operating characteristics. This analysis suggests that the dip probes can detect concentrations of 5 pM or higher of IL-6 in these samples with specificities of 100%. The results provide information for guiding further studies in the utilization of these probes to quantify other analytes in body fluids with high specificity and sensitivity.

  1. Development of combination tapered fiber-optic biosensor dip probe for quantitative estimation of interleukin-6 in serum samples

    PubMed Central

    Wang, Chun Wei; Manne, Upender; Reddy, Vishnu B.; Oelschlager, Denise K.; Katkoori, Venkat R.; Grizzle, William E.; Kapoor, Rakesh

    2010-01-01

    A combination tapered fiber-optic biosensor (CTFOB) dip probe for rapid and cost-effective quantification of proteins in serum samples has been developed. This device relies on diode laser excitation and a charge-coupled device spectrometer and functions on the principle of a sandwich immunoassay. As a proof of principle, this technique was applied to the quantitative estimation of interleukin-6 (IL-6). The probes detected IL-6 at picomolar levels in serum samples obtained from a patient with lupus, an autoimmune disease, and a patient with lymphoma. The estimated concentration of IL-6 in the lupus sample was 5.9 ± 0.6 pM, and in the lymphoma sample, it was below the detection limit. These concentrations were verified by a procedure involving bead-based xMAP technology. A similar trend in the concentrations was observed. The specificity of the CTFOB dip probes was assessed by analysis with receiver operating characteristics. This analysis suggests that the dip probes can detect concentrations of 5 pM or higher of IL-6 in these samples with specificities of 100%. The results provide information for guiding further studies in the utilization of these probes to quantify other analytes in body fluids with high specificity and sensitivity. PMID:21198209

  2. Development of combination tapered fiber-optic biosensor dip probe for quantitative estimation of interleukin-6 in serum samples.

    PubMed

    Wang, Chun Wei; Manne, Upender; Reddy, Vishnu B; Oelschlager, Denise K; Katkoori, Venkat R; Grizzle, William E; Kapoor, Rakesh

    2010-01-01

    A combination tapered fiber-optic biosensor (CTFOB) dip probe for rapid and cost-effective quantification of proteins in serum samples has been developed. This device relies on diode laser excitation and a charge-coupled device spectrometer and functions on the principle of a sandwich immunoassay. As a proof of principle, this technique was applied to the quantitative estimation of interleukin-6 (IL-6). The probes detected IL-6 at picomolar levels in serum samples obtained from a patient with lupus, an autoimmune disease, and a patient with lymphoma. The estimated concentration of IL-6 in the lupus sample was 5.9 ± 0.6 pM, and in the lymphoma sample, it was below the detection limit. These concentrations were verified by a procedure involving bead-based xMAP technology. A similar trend in the concentrations was observed. The specificity of the CTFOB dip probes was assessed by analysis with receiver operating characteristics. This analysis suggests that the dip probes can detect concentrations of 5 pM or higher of IL-6 in these samples with specificities of 100%. The results provide information for guiding further studies in the utilization of these probes to quantify other analytes in body fluids with high specificity and sensitivity.
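
    A hedged sketch of the generic quantification step behind any such immunoassay: fit a calibration curve of signal versus known standard concentration, then invert it for an unknown sample. Real immunoassay curves are usually 4-parameter logistic; a straight line is the simplest illustration, and all names here are our own.

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for signal = slope*conc + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def concentration_from_signal(signal, slope, intercept):
    """Invert the calibration line to read a concentration off a signal."""
    return (signal - intercept) / slope
```

    The detection limit quoted above (about 5 pM) corresponds to the lowest concentration whose signal is reliably distinguishable from the blank on such a curve.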

  3. Specularly modified vegetation indices to estimate photosynthetic activity

    NASA Technical Reports Server (NTRS)

    Rondeaux, G.; Vanderbilt, V. C.

    1993-01-01

    The hypothesis tested was that some part of the ecosystem-dependent variability of vegetation indices was attributable to the effects of light specularly reflected by leaves. 'Minus specular' indices were defined excluding effects of specular light which contains no cellular pigment information. Results, both empirical and theoretical, show that the 'minus specular' indices, when compared to the traditional vegetation indices, potentially provide better estimates of the photosynthetic activity within a canopy - and therefore canopy primary production - specifically as a function of sun and view angles.

  4. MSFC solar activity predictions for satellite orbital lifetime estimation

    NASA Technical Reports Server (NTRS)

    Fuler, H. C.; Lundquist, C. A.; Vaughan, W. W.

    1979-01-01

    The procedure to predict solar activity indexes for use in upper atmosphere density models is given together with an example of its performance. The prediction procedure employs a least-squares linear regression model to generate the predicted 13-month smoothed sunspot number R̄13 and geomagnetic index Āp(13) values. Linear regression equations are then employed to compute the corresponding smoothed F̄10.7(13) solar flux values from the predicted R̄13 values. The output is issued principally for satellite orbital lifetime estimations.

  5. Quantitative data for care of patients with systemic lupus erythematosus in usual clinical settings: a patient Multidimensional Health Assessment Questionnaire and physician estimate of noninflammatory symptoms.

    PubMed

    Askanase, Anca Dinu; Castrejón, Isabel; Pincus, Theodore

    2011-07-01

    To analyze quantitative data in patients with systemic lupus erythematosus (SLE), seen in usual care, from a patient Multidimensional Health Assessment Questionnaire (MDHAQ) with routine assessment of patient index data (RAPID3) scores and from a physician global estimate of noninflammatory symptoms; and to compare results to self-report Systemic Lupus Activity Questionnaire (SLAQ) scores and 4 SLE indices: SLE Disease Activity Index-2K (SLEDAI-2K), British Isles Lupus Assessment Group (BILAG), Systemic Lupus Activity Measure (SLAM), and European Consensus Lupus Activity Measurement (ECLAM). Fifty consecutive patients with SLE were studied in usual care of one rheumatologist. All patients completed an MDHAQ/RAPID3 in this setting. Each patient also completed a SLAQ. The rheumatologist scored SLEDAI-2K, BILAG, SLAM, ECLAM, and 2 physician global estimates, one for overall status and one for noninflammatory symptoms. Patients were classified into 2 groups: "few" or "many" noninflammatory symptoms. Scores and indices were compared using correlations, cross-tabulations and t tests. The patients included 45 women and 5 men. MDHAQ/RAPID3 and SLAQ scores were significantly correlated. RAPID3 scores were significantly higher in patients with SLE index scores above median levels, and in 34 patients scored by the rheumatologist as having "few" noninflammatory symptoms. MDHAQ/RAPID3 and SLAQ were significantly higher in 16 patients scored as having many noninflammatory symptoms. MDHAQ/RAPID3 and SLAQ subscale scores appear to reflect disease activity in patients with SLE, but not in patients with many noninflammatory symptoms. A physician scale for noninflammatory symptoms is useful to interpret MDHAQ/RAPID3, SLAQ, and SLE index scores.

  6. Simultaneous quantitative analysis of 12 methoxyflavones with melanogenesis inhibitory activity from the rhizomes of Kaempferia parviflora.

    PubMed

    Ninomiya, Kiyofumi; Matsumoto, Taku; Chaipech, Saowanee; Miyake, Sohachiro; Katsuyama, Yushi; Tsuboyama, Akihiro; Pongpiriyadacha, Yutana; Hayakawa, Takao; Muraoka, Osamu; Morikawa, Toshio

    2016-04-01

    A methanol extract from the rhizomes of Kaempferia parviflora Wall. ex Baker (Zingiberaceae) showed inhibitory effects against melanogenesis in theophylline-stimulated murine B16 melanoma 4A5 cells (IC50 = 9.6 μg/mL). Among 25 flavonoids and three acetophenones isolated previously (1-28), several constituents, including 5-hydroxy-7,3',4'-trimethoxyflavone (6, IC50 = 8.8 μM), 5,7,3',4'-tetramethoxyflavone (7, 8.6 μM), 5,3'-dihydroxy-3,7,4'-trimethoxyflavone (12, 2.9 μM), and 5-hydroxy-3,7,3',4'-tetramethoxyflavone (13, 3.5 μM), showed inhibitory effects without notable cytotoxicity at the effective concentrations. Compounds 6, 7, 12, and 13 inhibited the expression of tyrosinase, tyrosinase-related protein (TRP)-1, and TRP-2 mRNA, which could be the mechanism of their melanogenesis inhibitory activity. In addition, a quantitative analytical method for 12 methoxyflavones (1, 2, 4-11, 13, and 14) in the extract was developed using HPLC. The optimal conditions for separation and detection of these constituents were achieved on an ODS column (3 μm particle size, 2.1 mm i.d. × 100 mm) with MeOH-0.1% aqueous acetic acid solvent systems as the mobile phase, and the detection and quantitation limits of the method were estimated to be 0.08-0.66 ng and 0.22-2.00 ng, respectively. The relative standard deviation values of intra- and interday precision were lower than 0.95% and 1.08%, respectively, overall mean recoveries of all flavonoids were 97.9-102.9%, and the correlation coefficients of all the calibration curves showed good linearity within the test ranges. For validation of the protocol, extracts of three kinds of the plant's rhizomes collected from different regions in Thailand (Loei, Phetchabun, and Chiang Mai provinces) were evaluated. The results indicated that the assay was reproducible and precise, and could be readily utilized for the quality evaluation of the plant materials.
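
    The intra-/inter-day precision figures quoted above are relative standard deviations. A minimal sketch (ours) of the statistic, computed over replicate measurements of one analyte:

```python
import math

def rsd_percent(values):
    """Relative standard deviation (%): sample SD as a share of the mean."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return 100.0 * sd / mean
```

    An RSD below about 1%, as reported, indicates that repeated injections of the same standard scatter very little around their mean.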

  7. QUANTITATIVE STRUCTURE-ACTIVITY RELATIONSHIPS FOR CHEMICAL REDUCTIONS OF ORGANIC CONTAMINANTS

    EPA Science Inventory

    Sufficient kinetic data on abiotic reduction reactions involving organic contaminants are now available that quantitative structure-activity relationships (QSARs) for these reactions can be developed. Over 50 QSARs have been reported, most in just the last few years, and they ar...

  8. 76 FR 9637 - Proposed Information Collection (Veteran Suicide Prevention Online Quantitative Surveys) Activity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-18

    ... AFFAIRS Proposed Information Collection (Veteran Suicide Prevention Online Quantitative Surveys) Activity... outreach efforts on the prevention of suicide among Veterans and their families. DATES: Written comments...). Type of Review: New collection. Abstract: VA's top priority is the prevention of Veterans suicide. It...

  9. 76 FR 27384 - Agency Information Collection Activity (Veteran Suicide Prevention Online Quantitative Surveys...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-11

    ... AFFAIRS Agency Information Collection Activity (Veteran Suicide Prevention Online Quantitative Surveys...). Type of Review: New collection. Abstract: VA's top priority is the prevention of Veterans suicide. It... better understand Veterans and their families' awareness of VA's suicide prevention and mental health...

  10. Design of cinnamaldehyde amino acid Schiff base compounds based on the quantitative structure–activity relationship

    Treesearch

    Hui Wang; Mingyue Jiang; Shujun Li; Chung-Yun Hse; Chunde Jin; Fangli Sun; Zhuo Li

    2017-01-01

    Cinnamaldehyde amino acid Schiff base (CAAS) is a new class of safe, bioactive compounds which could be developed as potential antifungal agents for fungal infections. To design new cinnamaldehyde amino acid Schiff base compounds with high bioactivity, the quantitative structure–activity relationships (QSARs) for CAAS compounds against Aspergillus niger (A. niger) and...

  11. PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies.

    PubMed

    Zhang, Wenchao; Dai, Xinbin; Wang, Qishan; Xu, Shizhong; Zhao, Patrick X

    2016-05-01

    The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the 'missing heritability,' which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step with the original prototype R code, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome-wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publicly available at http://bioinfo.noble.org/PolyGenic_QTL/.
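
    A sketch (ours, not PEPIS code) of why epistatic scans are computationally heavy: with m loci there are m*(m-1)/2 pairwise interaction terms, each typically formed as an element-wise product of two genotype columns added to the model's design matrix.

```python
def interaction_columns(genotypes):
    """genotypes: list of per-locus columns. Return all pairwise product columns."""
    cols = []
    m = len(genotypes)
    for i in range(m):
        for j in range(i + 1, m):
            # Element-wise product encodes the locus-i x locus-j interaction.
            cols.append([a * b for a, b in zip(genotypes[i], genotypes[j])])
    return cols
```

    For tens of thousands of markers this quadratic blow-up is exactly the bottleneck the abstract says PEPIS addresses with C/C++ matrix code and parallel scanning.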

  12. PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies

    PubMed Central

    Dai, Xinbin; Wang, Qishan; Xu, Shizhong; Zhao, Patrick X.

    2016-01-01

    The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the ‘missing heritability,’ which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step with the original prototype R code, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome-wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publicly available at http://bioinfo.noble.org/PolyGenic_QTL/. PMID:27224861

  13. Using Activated Clotting Time to Estimate Intraoperative Aprotinin Concentration

    PubMed Central

    Iwata, Yusuke; Okamura, Toru; Zurakowski, David; Jonas, Richard A.

    2010-01-01

    Background Use of aprotinin during cardiopulmonary bypass may be associated with renal dysfunction due to renal excretion of excess drug. We hypothesized that the difference between standard celite activated clotting time (ACT), which is prolonged by aprotinin, and kaolin ACT could provide an estimate of the aprotinin blood level. Methods Fresh porcine blood was collected from six donor pigs and heparinized. Blood was stored at 4°C, rewarmed, and aprotinin was added at 0, 100, 200, and 400 kallikrein inhibitor units/ml. Specimens were incubated at 37°C. Two pairs of ACT tubes (one celite and one kaolin per pair) were measured at 37°C and 20°C using two HEMOCRON 401 machines. A generalized estimating equation (GEE) statistical approach was used to estimate actual aprotinin concentration from the difference between celite and kaolin ACT. Results There was a significant relationship of the form y = exp(a+bx) between aprotinin concentration and the difference between celite and kaolin ACT at both 37°C (R2 = 0.858) and 20°C (R2 = 0.743). Conclusion The time difference between celite and kaolin ACT may be a simple and inexpensive method for measuring the blood level of aprotinin during cardiopulmonary bypass. This technique may improve patient-specific dosing of aprotinin and reduce the risk of postoperative renal complications. PMID:20093334
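A calibration of the reported form y = exp(a + bx) can be fit by ordinary least squares on ln y. The sketch below uses made-up ACT differences and aprotinin concentrations purely for illustration; the paper's fitted coefficients are not reproduced here:

```python
import numpy as np

# hypothetical calibration points (illustrative, not the paper's data):
# x = celite ACT minus kaolin ACT (s); y = aprotinin (KIU/ml)
x = np.array([20.0, 60.0, 110.0, 170.0])
y = np.array([50.0, 100.0, 200.0, 400.0])

# fit y = exp(a + b*x)  <=>  ln y = a + b*x
b, a = np.polyfit(x, np.log(y), 1)  # polyfit returns highest degree first

def aprotinin_from_act_diff(diff):
    """Estimate aprotinin concentration from an ACT difference."""
    return np.exp(a + b * diff)

print(round(float(aprotinin_from_act_diff(110.0)), 1))
```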

  14. Toxicity Estimation Software Tool (TEST)

    EPA Science Inventory

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...

  15. Toxicity Estimation Software Tool (TEST)

    EPA Science Inventory

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...

  16. Colorimetric assays for quantitative analysis and screening of epoxide hydrolase activity.

    PubMed

    Cedrone, F; Bhatnagar, T; Baratti, Jacques C

    2005-12-01

    Focusing on directed evolution to tailor enzymes as usable biocatalysts for fine chemistry, we have studied in detail several colorimetric assays for quantitative analysis of epoxide hydrolase (EH) activity. In particular, two assays have been optimized to characterize variants resulting from the directed evolution of the EH from Aspergillus niger. The assays described in this paper are sufficiently reliable for quantitative screening of EH activity in microtiter plates and are low-cost alternatives to GC or MS analysis. Moreover, they are applicable to various epoxides and are not restricted to one type of substrate, such as those amenable to assay by UV absorbance. They can be used to assay EH activity on any epoxide and to directly assay enantioselectivity when both (R) and (S) substrates are available. The advantages and drawbacks of these two methods for assaying EH activity in a large number of natural samples are summarized.

  17. Research on the quantitative structure-carcinogenic activity relationship of N-nitroso compounds.

    PubMed

    Li-Jiao, Zhao; Ru-Gang, Zhong; Yan, Zhen; Qian-Huan, Dai

    2005-01-01

    According to the results of quantitative pattern recognition for 153 N-nitroso compounds (NNCs) based on Di-region theory, it is proposed that the esters formed from the metabolism of NNCs at the α- or β-position could be alkylating agents of DNA bases with the anchimeric assistance of the N-nitroso group. This viewpoint, combined with the concept of α-metabolism activation, can reasonably explain the structure-carcinogenic activity relationship. Quantum chemistry ab initio calculations are carried out to study the activity of different metabolites of NNCs in the direct alkylation and anchimeric assistance processes. By the ONIOM method, a QM/MM calculation is carried out to study the crosslinking of a DNA base pair by methylalkylnitrosamines. Based on the computational results, the quantitative structure-carcinogenic activity relationship of 58 N-nitrosoureas for which animal carcinogenicity test results have been reported is studied.

  18. Quantitative description of induced seismic activity before and after the 2011 Tohoku-Oki earthquake by nonstationary ETAS models

    NASA Astrophysics Data System (ADS)

    Kumazawa, Takao; Ogata, Yosihiko

    2013-12-01

    The epidemic-type aftershock sequence (ETAS) model is extended for application to nonstationary seismic activity, including transient swarm activity or seismicity anomalies, in a seismogenic region. The time-dependent rates of both background seismicity and aftershock productivity in the ETAS model are optimally estimated from hypocenter data. These rates can provide quantitative evidence for abrupt or gradual changes in shear stress and/or fault strength due to aseismic transient causes such as triggering by remote earthquakes, slow slips, or fluid intrusions within the region. This extended model is applied to data sets from several seismic events including swarms that were induced by the M9.0 Tohoku-Oki earthquake of 2011.
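For reference, the stationary ETAS conditional intensity that the paper extends with time-dependent background rate and aftershock productivity is λ(t) = μ + Σ_{t_i < t} K·exp(α(M_i − M_c))·(t − t_i + c)^(−p). A sketch of its evaluation, with illustrative (not estimated) parameter values:

```python
import math

def etas_intensity(t, events, mu, K, alpha, c, p, M_c):
    """Conditional intensity of the stationary ETAS model at time t.
    events: iterable of (t_i, M_i) occurrence times and magnitudes."""
    rate = mu  # background seismicity rate
    for t_i, M_i in events:
        if t_i < t:
            # Omori-Utsu decay scaled by magnitude-dependent productivity
            rate += K * math.exp(alpha * (M_i - M_c)) / (t - t_i + c) ** p
    return rate

# illustrative catalog and parameter values (not estimates from the paper)
events = [(1.0, 5.0), (2.5, 4.2), (4.0, 6.1)]
lam = etas_intensity(5.0, events, mu=0.1, K=0.02, alpha=1.2,
                     c=0.01, p=1.1, M_c=4.0)
print(round(lam, 3))
```

The nonstationary extension in the paper replaces μ and K with optimally estimated functions of time, which this sketch does not attempt.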

  19. Quantitative structure-activity relationships (QSARs) for the transformation of organic micropollutants during oxidative water treatment.

    PubMed

    Lee, Yunho; von Gunten, Urs

    2012-12-01

    Various oxidants such as chlorine, chlorine dioxide, ferrate(VI), ozone, and hydroxyl radicals can be applied to eliminate organic micropollutants by oxidative transformation during water treatment in systems such as drinking water, wastewater, and water reuse. Over the last decades, many second-order rate constants (k) have been determined for the reaction of these oxidants with model compounds and micropollutants. Good correlations (quantitative structure-activity relationships or QSARs) are often found between the k-values for an oxidation reaction of closely related compounds (i.e. those having a common organic functional group) and substituent descriptor variables such as Hammett or Taft sigma constants. In this study, we developed QSARs for the oxidative transformation of organic and some inorganic compounds, including organic micropollutants, during oxidative water treatment. A total of 18 QSARs were developed based on 412 k-values for the reaction of chlorine, chlorine dioxide, ferrate, and ozone with organic compounds containing electron-rich moieties such as phenols, anilines, olefins, and amines. On average, 303 out of 412 (74%) k-values were predicted by these QSARs within a factor of 1/3-3 of the measured values. For HO(·) reactions, some principles and estimation methods for k-values (e.g. the Group Contribution Method) are discussed. The developed QSARs and the Group Contribution Method could be used to predict the k-values for various emerging organic micropollutants. As a demonstration, 39 out of 45 (87%) predicted k-values for the selected emerging micropollutants were found within a factor of 1/3-3 of the measured values. Finally, it is discussed how the uncertainty in the k-values predicted using the QSARs affects the accuracy of predictions of micropollutant elimination during oxidative water treatment. Copyright © 2012 Elsevier Ltd. All rights reserved.
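A Hammett-type QSAR of the kind described reduces to a linear regression of log k on substituent sigma constants for a family of closely related compounds. The sketch below uses invented sigma and log k values; note that the paper's factor-of-1/3-3 criterion corresponds to a prediction error |Δlog k| < log10(3) ≈ 0.48:

```python
import numpy as np

# hypothetical substituent sigma constants and rate constants
# (illustrative values, e.g. p-OMe, H, p-Cl, p-NO2 on a phenol core)
sigma = np.array([-0.27, 0.0, 0.23, 0.78])
log_k = np.array([4.1, 3.6, 3.2, 2.1])

# Hammett-type QSAR: log k = rho * sigma + const
rho, intercept = np.polyfit(sigma, log_k, 1)

def predict_log_k(s):
    """Predict log k for a substituent with sigma constant s."""
    return rho * s + intercept

print(round(float(predict_log_k(0.23)), 2))
```

A negative rho, as here, is typical for electrophilic oxidants attacking electron-rich moieties: electron-withdrawing substituents slow the reaction.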

  20. Estimation of atrazine-degrading genetic potential and activity in three French agricultural soils.

    PubMed

    Martin-Laurent, Fabrice; Cornet, Laurent; Ranjard, Lionel; López-Gutiérrez, Juan-Carlos; Philippot, Laurent; Schwartz, Christophe; Chaussod, Rémi; Catroux, Gérard; Soulas, Guy

    2004-06-01

    The impact of organic amendments (sewage sludge or wastewater) used to fertilize agricultural soils was estimated with respect to the atrazine-degrading activity, the atrazine-degrading genetic potential and the bacterial community structure of soils continuously cropped with corn. Long-term application of organic amendments did not modify atrazine-mineralizing activity, which was found to depend essentially on the soil type. It also did not modify the atrazine-degrading genetic potential estimated by quantitative PCR targeting the atzA, B and C genes, which was likewise shown to depend on soil type. The structure of the soil bacterial community, determined by RISA fingerprinting, was significantly affected by organic amendment. These results showed that modification of the structure of the soil bacterial community in response to organic amendment is not necessarily accompanied by a modification of atrazine-degrading genetic potential or activity. In addition, these results revealed that different soils showing similar atrazine-degrading genetic potentials may exhibit different atrazine-degrading activities.

  1. A high-throughput, quantitative cell-based screen for efficient tailoring of RNA device activity

    PubMed Central

    Liang, Joe C.; Chang, Andrew L.; Kennedy, Andrew B.; Smolke, Christina D.

    2012-01-01

    Recent advances have demonstrated the use of RNA-based control devices to program sophisticated cellular functions; however, the efficiency with which these devices can be quantitatively tailored has limited their broader implementation in cellular networks. Here, we developed a high-efficiency, high-throughput and quantitative two-color fluorescence-activated cell sorting-based screening strategy to support the rapid generation of ribozyme-based control devices with user-specified regulatory activities. The high efficiency of this screening strategy enabled the isolation of a single functional sequence from a library of over 10^6 variants within two sorting cycles. We demonstrated the versatility of our approach by screening large libraries generated from randomizing individual components within the ribozyme device platform to efficiently isolate new device sequences that exhibit increased in vitro cleavage rates up to 10.5-fold and increased in vivo activation ratios up to 2-fold. We also identified a titratable window within which in vitro cleavage rates and in vivo gene-regulatory activities are correlated, supporting the importance of optimizing RNA device activity directly in the cellular environment. Our two-color fluorescence-activated cell sorting-based screen provides a generalizable strategy for quantitatively tailoring genetic control elements for broader integration within biological networks. PMID:22810204

  2. A Quantitative Method for Comparing the Brightness of Antibody-dye Reagents and Estimating Antibodies Bound per Cell.

    PubMed

    Kantor, Aaron B; Moore, Wayne A; Meehan, Stephen; Parks, David R

    2016-07-01

    We present a quantitative method for comparing the brightness of antibody-dye reagents and estimating antibodies bound per cell. The method is based on complementary binding of test and fill reagents to antibody capture microspheres. Several aliquots of antibody capture beads are stained with varying amounts of the test conjugate. The remaining binding sites on the beads are then filled with a second conjugate containing a different fluorophore. Finally, the fluorescence of the test conjugate compared to the fill conjugate is used to measure the relative brightness of the test conjugate. The fundamental assumption of the test-fill method is that if it takes X molecules of one test antibody to lower the fill signal by Y units, it will take the same X molecules of any other test antibody to give the same effect. We apply a quadratic fit to evaluate the test-fill signal relationship across different amounts of test reagent. If the fit is close to linear, we consider the test reagent to be suitable for quantitative evaluation of antibody binding. To calibrate the antibodies bound per bead, a PE conjugate with 1 PE molecule per antibody is used as a test reagent and the fluorescence scale is calibrated with Quantibrite PE beads. When the fluorescence per antibody molecule has been determined for a particular conjugate, that conjugate can be used for measurement of antibodies bound per cell. This provides comparisons of the brightness of different conjugates when conducted on an instrument whose statistical photoelectron (Spe) scales are known. © 2016 by John Wiley & Sons, Inc. Copyright © 2016 John Wiley & Sons, Inc.
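The quadratic-fit linearity check described above can be sketched as follows; the bead signal values and the 5% threshold on the quadratic term are illustrative assumptions, not the authors' acceptance criterion:

```python
import numpy as np

# hypothetical bead data (illustrative, not the paper's measurements):
# amount of test conjugate added vs fill-conjugate signal remaining
test_amount = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
fill_signal = np.array([100.0, 81.0, 60.5, 40.0, 21.0])

# quadratic fit of the test-fill signal relationship
c2, c1, c0 = np.polyfit(test_amount, fill_signal, 2)

# treat the reagent as suitable for quantitative evaluation when the
# quadratic term is small relative to the linear term (fit near-linear);
# the 5% threshold here is an assumed cutoff for illustration
near_linear = abs(c2) < 0.05 * abs(c1)
print(near_linear)  # → True
```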

  3. Three-dimensional quantitative structure-activity relationships of steroid aromatase inhibitors

    NASA Astrophysics Data System (ADS)

    Oprea, Tudor I.; García, Angel E.

    1996-06-01

    Inhibition of aromatase, a cytochrome P450 that converts androgens to estrogens, is relevant in the therapeutic control of breast cancer. We investigate this inhibition using a three-dimensional quantitative structure-activity relationship (3D QSAR) method known as Comparative Molecular Field Analysis, CoMFA [Cramer III, R.D. et al., J. Am. Chem. Soc., 110 (1988) 5959]. We analyzed the data for 50 steroid inhibitors [Numazawa, M. et al., J. Med. Chem., 37 (1994) 2198, and references cited therein] assayed against androstenedione on human placental microsomes. An initial CoMFA resulted in a three-component model for log(1/Ki), with an explained variance r2 of 0.885, and a cross-validated q2 of 0.673. Chemometric studies were performed using GOLPE [Baroni, M. et al., Quant. Struct.-Act. Relatsh., 12 (1993) 9]. The CoMFA/GOLPE model is discussed in terms of robustness, predictivity, explanatory power and simplicity. After randomized exclusion of 25 or 10 compounds (repeated 25 times), the q2 for one component was 0.62 and 0.61, respectively, while r2 was 0.674. We demonstrate that the predictive r2 based on the mean activity (Ym) of the training set is misleading, while the test set Ym-based predictive r2 index gives a more accurate estimate of external predictivity. Using CoMFA, the observed differences in aromatase inhibition among C6-substituted steroids are rationalized at the atomic level. The CoMFA fields are consistent with known, potent inhibitors of aromatase, not included in the model. When positioned in the same alignment, these compounds have distinct features that overlap with the steric and electrostatic fields obtained in the CoMFA model. The presence of two hydrophobic binding pockets near the aromatase active site is discussed: a steric bulk tolerant one, common for C4, C6-alpha and C7-alpha substituents, and a smaller one at the C6-beta region.
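The cross-validated q2 reported for the CoMFA model is defined as q2 = 1 − PRESS/SS, where PRESS accumulates squared errors for compounds predicted while left out of the fit and SS is the total sum of squares about the training-set mean. A sketch using leave-one-out cross-validation with a plain least-squares model as a stand-in for the PLS models actually used in CoMFA, on invented descriptor data:

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated q2 = 1 - PRESS/SS for a simple
    linear model (a stand-in for the PLS models used in CoMFA)."""
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i                 # drop compound i
        coef = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
        press += (y[i] - X[i] @ coef) ** 2       # prediction error for i
    ss = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / ss

# illustrative descriptor matrix (intercept column + 2 fields) and activities
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(20), rng.normal(size=(20, 2))])
y = 2.0 + 1.5 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.1, size=20)
print(loo_q2(X, y) > 0.9)  # → True
```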

  4. Improved accuracy of quantitative parameter estimates in dynamic contrast-enhanced CT study with low temporal resolution

    SciTech Connect

    Kim, Sun Mo; Jaffray, David A.

    2016-01-15

    quantitative histogram parameters of volume transfer constant [standard deviation (SD), 98th percentile, and range], rate constant (SD), blood volume fraction (mean, SD, 98th percentile, and range), and blood flow (mean, SD, median, 98th percentile, and range) for sampling intervals between 10 and 15 s. Conclusions: The proposed method of PCA filtering combined with the AIF estimation technique allows low frequency scanning for DCE-CT study to reduce patient radiation dose. The results indicate that the method is useful in pixel-by-pixel kinetic analysis of DCE-CT data for patients with cervical cancer.

  5. Validation of a novel method for retrospectively estimating nutrient intake during pregnancy using a semi-quantitative food frequency questionnaire

    PubMed Central

    Mejía-Rodríguez, Fabiola; Orjuela, Manuela A.; García-Guerra, Armando; Quezada-Sanchez, Amado David; Neufeld, Lynnette M.

    2011-01-01

    Objective Case-control studies evaluating the relationship between dietary intake of specific nutrients and risk of congenital, neonatal or early childhood disease require the ability to rank relative maternal dietary intake during pregnancy. Such studies are limited by the lack of validated instruments for assessing gestational dietary intake several years post-partum. This study aims to validate a semi-quantitative interview-administered food frequency questionnaire (FFQ) for retrospectively estimating nutrient intake at two critical time points during pregnancy. Methods The FFQ was administered to women (N=84) who 4 to 6 years earlier had participated in a prospective study to evaluate dietary intake during pregnancy. The FFQ queried participants about intake during the previous month (FFQ-month). This was then used as a reference point to estimate consumption by trimester (FFQ-pregnancy). The resulting data were compared to data collected during the original study from two 24-hour recalls (24hr-original) using Spearman correlation and the Wilcoxon signed-rank test. Results Total energy intake as estimated by the retrospective and original instruments did not differ and was only weakly correlated in the trimesters (1st and 3rd) as a whole (r=0.18-0.32), though more strongly correlated when restricted to the first half of the 1st trimester (r=0.32) and the latter half of the 3rd trimester (r=0.87). After energy adjustment, correlations between the 24hr-original and FFQ-pregnancy in the 3rd trimester were r=0.25 (p<0.05) for dietary intake of vitamin A and r=0.26 (p<0.05) for folate, and r=0.23-0.77 (p<0.005) for folate and vitamins A, B6 and B12 in the 1st and 3rd trimesters after including vitamin supplement intake. Conclusions The FFQ-pregnancy provides a consistent estimate of maternal intake of key micronutrients during pregnancy and permits accurate ranking of intake 4-6 years post-partum. PMID:22116778
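The instrument comparison rests on Spearman rank correlation, i.e. the Pearson correlation of the ranks, which is what makes the FFQ suitable for *ranking* intake rather than measuring it absolutely. A minimal sketch with invented intake values (the tie-free ranking shortcut is an assumption for brevity):

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Simple version assuming no tied values."""
    ra = np.argsort(np.argsort(a)).astype(float)  # ranks of a
    rb = np.argsort(np.argsort(b)).astype(float)  # ranks of b
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

# hypothetical energy-adjusted intakes from the two instruments
ffq = np.array([1200.0, 1500.0, 900.0, 2000.0, 1700.0])
recall = np.array([1100.0, 1600.0, 1000.0, 1900.0, 1500.0])
print(round(spearman_rho(ffq, recall), 2))  # → 0.9
```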

  6. Quantitative imaging test approval and biomarker qualification: interrelated but distinct activities.

    PubMed

    Buckler, Andrew J; Bresolin, Linda; Dunnick, N Reed; Sullivan, Daniel C; Aerts, Hugo J W L; Bendriem, Bernard; Bendtsen, Claus; Boellaard, Ronald; Boone, John M; Cole, Patricia E; Conklin, James J; Dorfman, Gary S; Douglas, Pamela S; Eidsaunet, Willy; Elsinger, Cathy; Frank, Richard A; Gatsonis, Constantine; Giger, Maryellen L; Gupta, Sandeep N; Gustafson, David; Hoekstra, Otto S; Jackson, Edward F; Karam, Lisa; Kelloff, Gary J; Kinahan, Paul E; McLennan, Geoffrey; Miller, Colin G; Mozley, P David; Muller, Keith E; Patt, Rick; Raunig, David; Rosen, Mark; Rupani, Haren; Schwartz, Lawrence H; Siegel, Barry A; Sorensen, A Gregory; Wahl, Richard L; Waterton, John C; Wolf, Walter; Zahlmann, Gudrun; Zimmerman, Brian

    2011-06-01

    Quantitative imaging biomarkers could speed the development of new treatments for unmet medical needs and improve routine clinical care. However, it is not clear how the various regulatory and nonregulatory (eg, reimbursement) processes (often referred to as pathways) relate, nor is it clear which data need to be collected to support these different pathways most efficiently, given the time- and cost-intensive nature of doing so. The purpose of this article is to describe current thinking regarding these pathways emerging from diverse stakeholders interested and active in the definition, validation, and qualification of quantitative imaging biomarkers and to propose processes to facilitate the development and use of quantitative imaging biomarkers. A flexible framework is described that may be adapted for each imaging application, providing mechanisms that can be used to develop, assess, and evaluate relevant biomarkers. From this framework, processes can be mapped that would be applicable to both imaging product development and to quantitative imaging biomarker development aimed at increasing the effectiveness and availability of quantitative imaging. http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.10100800/-/DC1. RSNA, 2011

  7. Quantitative Imaging Test Approval and Biomarker Qualification: Interrelated but Distinct Activities

    PubMed Central

    Bresolin, Linda; Dunnick, N. Reed; Sullivan, Daniel C.

    2011-01-01

    Quantitative imaging biomarkers could speed the development of new treatments for unmet medical needs and improve routine clinical care. However, it is not clear how the various regulatory and nonregulatory (eg, reimbursement) processes (often referred to as pathways) relate, nor is it clear which data need to be collected to support these different pathways most efficiently, given the time- and cost-intensive nature of doing so. The purpose of this article is to describe current thinking regarding these pathways emerging from diverse stakeholders interested and active in the definition, validation, and qualification of quantitative imaging biomarkers and to propose processes to facilitate the development and use of quantitative imaging biomarkers. A flexible framework is described that may be adapted for each imaging application, providing mechanisms that can be used to develop, assess, and evaluate relevant biomarkers. From this framework, processes can be mapped that would be applicable to both imaging product development and to quantitative imaging biomarker development aimed at increasing the effectiveness and availability of quantitative imaging. © RSNA, 2011 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.10100800/-/DC1 PMID:21325035

  8. Quantitation of Gene Expression in Formaldehyde-Fixed and Fluorescence-Activated Sorted Cells

    PubMed Central

    Russell, Julia N.; Clements, Janice E.; Gama, Lucio

    2013-01-01

    Fluorescence-activated cell sorting (FACS) is a sensitive and valuable technique to characterize cellular subpopulations and great advances have been made using this approach. Cells are often fixed with formaldehyde prior to the sorting process to preserve cell morphology and maintain the expression of surface molecules, as well as to ensure safety in the sorting of infected cells. It is widely recognized that formaldehyde fixation alters RNA and DNA structure and integrity, thus analyzing gene expression in these cells has been difficult. We therefore examined the effects of formaldehyde fixation on the stability and quantitation of nucleic acids in cell lines, primary leukocytes and also cells isolated from SIV-infected pigtailed macaques. We developed a method to extract RNA from fixed cells that yielded the same amount of RNA as our common method of RNA isolation from fresh cells. Quantitation of RNA by RT-qPCR in fixed cells was not always comparable with that in unfixed cells. In comparison, when RNA was measured by the probe-based NanoString system, there was no significant difference in RNA quantitation. In addition, we demonstrated that quantitation of proviral DNA in fixed cells by qPCR is comparable to that in unfixed cells when normalized by a single-copy cellular gene. These results provide a systematic procedure to quantitate gene expression in cells that have been fixed with formaldehyde and sorted by FACS. PMID:24023909
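Quantitation of proviral DNA normalized to a single-copy cellular gene can be sketched with the comparative Ct method; the Ct values, the assumption of a diploid reference (two alleles per cell), and the assumption of perfect amplification efficiency are all illustrative, not the study's data:

```python
def copies_per_cell(ct_target, ct_reference, efficiency=2.0):
    """Proviral DNA copies per cell via the comparative Ct method.
    Normalizes the target to a single-copy cellular gene, assuming a
    diploid reference (2 alleles/cell) and equal amplification
    efficiency (2.0 = perfect doubling per cycle)."""
    return 2.0 * efficiency ** (ct_reference - ct_target)

# illustrative Ct values (not from the paper): target crosses threshold
# 2 cycles later than the reference, i.e. ~4-fold less template
print(round(copies_per_cell(ct_target=30.0, ct_reference=28.0), 3))  # → 0.5
```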

  9. Towards a quantitative, measurement-based estimate of the uncertainty in photon mass attenuation coefficients at radiation therapy energies

    NASA Astrophysics Data System (ADS)

    Ali, E. S. M.; Spencer, B.; McEwen, M. R.; Rogers, D. W. O.

    2015-02-01

    In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy—i.e. 100 keV (orthovoltage) to 25 MeV—using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990-6003) for 10-30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ˜0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. Photon cross section uncertainty is shown to be smaller than the current qualitative ‘envelope of uncertainty’ of the order of 1-2%, as given by Hubbell (1999 Phys. Med. Biol 44 R1-22).
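The "optimum energy-independent cross section scaling factor" amounts, in the simplest reading, to an inverse-variance-weighted mean of measured-to-calculated ratios, whose deviation from unity gauges the cross-section error; the ratios and uncertainties below are invented for illustration:

```python
import numpy as np

# hypothetical ratios of measured to calculated cross sections,
# with 1-sigma relative uncertainties (illustrative values only)
ratio = np.array([1.004, 0.998, 1.006, 1.002])
sigma = np.array([0.005, 0.004, 0.006, 0.005])

# inverse-variance-weighted scale factor minimizing chi-square
w = 1.0 / sigma**2
scale = float(np.sum(w * ratio) / np.sum(w))

# the deviation of `scale` from 1 gauges the cross-section discrepancy
print(round(abs(scale - 1.0) * 100, 2), "%")
```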

  10. Quantitative structure-antifungal activity relationships of some benzohydrazides against Botrytis cinerea.

    PubMed

    Reino, José L; Saiz-Urra, Liane; Hernandez-Galan, Rosario; Aran, Vicente J; Hitchcock, Peter B; Hanson, James R; Gonzalez, Maykel Perez; Collado, Isidro G

    2007-06-27

    Fourteen benzohydrazides have been synthesized and evaluated for their in vitro antifungal activity against the phytopathogenic fungus Botrytis cinerea. The best antifungal activity was observed for the N',N'-dibenzylbenzohydrazides 3b-d and for the N-aminoisoindoline-derived benzohydrazide 5. A quantitative structure-activity relationship (QSAR) study has been developed using a topological substructural molecular design (TOPS-MODE) approach to interpret the antifungal activity of these synthetic compounds. The model described 98.3% of the experimental variance, with a standard deviation of 4.02. The influence of an ortho substituent on the conformation of the benzohydrazides was investigated by X-ray crystallography and supported by QSAR study. Several aspects of the structure-activity relationships are discussed in terms of the contribution of different bonds to the antifungal activity, thereby making the relationships between structure and biological activity more transparent.

  11. Spectral estimators of absorbed photosynthetically active radiation in corn canopies

    NASA Technical Reports Server (NTRS)

    Gallo, K. P.; Daughtry, C. S. T.; Bauer, M. E.

    1985-01-01

    Most models of crop growth and yield require an estimate of canopy leaf area index (LAI) or absorption of radiation. Relationships between photosynthetically active radiation (PAR) absorbed by corn canopies and the spectral reflectance of the canopies were investigated. Reflectance factor data were acquired with a Landsat MSS band radiometer. From planting to silking, the three spectrally predicted vegetation indices examined were associated with more than 95 percent of the variability in absorbed PAR. The relationships developed between absorbed PAR and the three indices were evaluated with reflectance factor data acquired from corn canopies planted in 1979 through 1982. Seasonal cumulations of measured LAI and each of the three indices were associated with greater than 50 percent of the variation in final grain yields from the test years. Seasonal cumulations of daily absorbed PAR were associated with up to 73 percent of the variation in final grain yields. Absorbed PAR, cumulated through the growing season, is a better indicator of yield than cumulated leaf area index. Absorbed PAR may be estimated reliably from spectral reflectance data of crop canopies.

  12. Estimation of photosynthetically active radiation absorbed at the surface

    NASA Astrophysics Data System (ADS)

    Li, Zhanqing; Moreau, Louis; Cihlar, Josef

    1997-12-01

    This paper presents a validation and application of an algorithm by Li and Moreau [1996] for retrieving photosynthetically active radiation (PAR) absorbed at the surface (APARSFC). APARSFC is a key input to estimating PAR absorbed by the green canopy during photosynthesis. Extensive ground-based and space-borne observations collected during the BOREAS experiment in 1994 were processed, colocated, and analyzed. They include downwelling and upwelling PAR observed at three flux towers, aerosol optical depth from ground-based photometers, and satellite reflectance measurements at the top of the atmosphere. The effects of three-dimensional clouds, aerosols, and bidirectional dependence on the retrieval of APARSFC were examined. While the algorithm is simple and has only three input parameters, the comparison between observed and estimated APARSFC shows a small bias error (<10 W m-2) and moderate random error (36 W m-2 for clear, 61 W m-2 for cloudy). Temporal and/or spatial mismatch between satellite and surface observations is a major cause of the random error, especially when broken clouds are present. The algorithm was subsequently employed to map the distribution of monthly mean APARSFC over the 1000×1000 km2 BOREAS region. Considerable spatial variation is found due to variable cloudiness, forest fires, and nonuniform surface albedo.

  13. Spectral estimators of absorbed photosynthetically active radiation in corn canopies

    NASA Technical Reports Server (NTRS)

    Gallo, K. P.; Daughtry, C. S. T.; Bauer, M. E.

    1985-01-01

    Most models of crop growth and yield require an estimate of canopy leaf area index (LAI) or absorption of radiation. Relationships between photosynthetically active radiation (PAR) absorbed by corn canopies and the spectral reflectance of the canopies were investigated. Reflectance factor data were acquired with a Landsat MSS band radiometer. From planting to silking, the three spectrally predicted vegetation indices examined were associated with more than 95 percent of the variability in absorbed PAR. The relationships developed between absorbed PAR and the three indices were evaluated with reflectance factor data acquired from corn canopies planted in 1979 through 1982. Seasonal cumulations of measured LAI and each of the three indices were associated with greater than 50 percent of the variation in final grain yields from the test years. Seasonal cumulations of daily absorbed PAR were associated with up to 73 percent of the variation in final grain yields. Absorbed PAR, cumulated through the growing season, is a better indicator of yield than cumulated leaf area index. Absorbed PAR may be estimated reliably from spectral reflectance data of crop canopies.

  14. Spectral estimators of absorbed photosynthetically active radiation in corn canopies

    NASA Technical Reports Server (NTRS)

    Gallo, K. P.; Daughtry, C. S. T.; Bauer, M. E.

    1984-01-01

    Most models of crop growth and yield require an estimate of canopy leaf area index (LAI) or absorption of radiation. Relationships between photosynthetically active radiation (PAR) absorbed by corn canopies and the spectral reflectance of the canopies were investigated. Reflectance factor data were acquired with a LANDSAT MSS band radiometer. From planting to silking, the three spectrally predicted vegetation indices examined were associated with more than 95% of the variability in absorbed PAR. The relationships developed between absorbed PAR and the three indices were evaluated with reflectance factor data acquired from corn canopies planted in 1979 through 1982. Seasonal cumulations of measured LAI and each of the three indices were associated with greater than 50% of the variation in final grain yields from the test years. Seasonal cumulations of daily absorbed PAR were associated with up to 73% of the variation in final grain yields. Absorbed PAR, cumulated through the growing season, is a better indicator of yield than cumulated leaf area index. Absorbed PAR may be estimated reliably from spectral reflectance data of crop canopies.

  15. Quantitative microbial risk assessment combined with hydrodynamic modelling to estimate the public health risk associated with bathing after rainfall events.

    PubMed

    Eregno, Fasil Ejigu; Tryland, Ingun; Tjomsland, Torulv; Myrmel, Mette; Robertson, Lucy; Heistad, Arve

    2016-04-01

    This study investigated the public health risk from exposure to infectious microorganisms at Sandvika recreational beaches, Norway and dose-response relationships by combining hydrodynamic modelling with Quantitative Microbial Risk Assessment (QMRA). Meteorological and hydrological data were collected to produce a calibrated hydrodynamic model using Escherichia coli as an indicator of faecal contamination. Based on average concentrations of reference pathogens (norovirus, Campylobacter, Salmonella, Giardia and Cryptosporidium) relative to E. coli in Norwegian sewage from previous studies, the hydrodynamic model was used for simulating the concentrations of pathogens at the local beaches during and after a heavy rainfall event, using three different decay rates. The simulated concentrations were used as input for QMRA and the public health risk was estimated as probability of infection from a single exposure of bathers during the three consecutive days after the rainfall event. The level of risk on the first day after the rainfall event was acceptable for the bacterial and parasitic reference pathogens, but high for the viral reference pathogen at all beaches, and severe at Kalvøya-small and Kalvøya-big beaches, supporting the advice of avoiding swimming in the day(s) after heavy rainfall. The study demonstrates the potential of combining discharge-based hydrodynamic modelling with QMRA in the context of bathing water as a tool to evaluate public health risk and support beach management decisions.
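
    QMRA converts a simulated pathogen concentration into a per-exposure infection probability through a dose-response model. A minimal sketch using the standard exponential dose-response form (the r value and ingestion volume below are illustrative assumptions, not values from the study):

```python
import math

def p_infection_exponential(concentration_per_l, ingested_l, r):
    """Exponential dose-response model: P(infection) = 1 - exp(-r * dose),
    where dose is the expected number of organisms ingested per exposure."""
    dose = concentration_per_l * ingested_l
    return 1.0 - math.exp(-r * dose)

# Hypothetical example: 100 organisms/L in bathing water,
# 50 mL of water ingested per bathing event, r = 0.02
p = p_infection_exponential(100, 0.05, r=0.02)
```

    Per-day risks for each reference pathogen would then be compared against an acceptable-risk benchmark, as done at the beach level in the study.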

  16. Quantitative estimate of fs-laser induced refractive index changes in the bulk of various transparent materials

    NASA Astrophysics Data System (ADS)

    Mermillod-Blondin, A.; Seuthe, T.; Eberstein, M.; Grehn, M.; Bonse, J.; Rosenfeld, A.

    2014-05-01

    Over the past years, many applications based on laser-induced refractive index changes in the volume of transparent materials have been demonstrated. Ultrashort pulse lasers offer the possibility to process bulk transparent materials in three dimensions, suggesting that direct laser writing will play a decisive role in the development of integrated micro-optics. At present, applications such as 3D long-term data storage or embedded laser marking are already in the phase of industrial development. However, a quantitative estimate of the laser-induced refractive index change is still very challenging to obtain. On the other hand, several microscopy techniques have recently been developed to characterize bulk refractive index changes in situ; they have mostly been applied for biological purposes. Among those, spatial light interference microscopy (SLIM) offers very good robustness with minimal post-acquisition data processing. In this paper, we report on using SLIM to measure fs-laser induced refractive index changes in different common glassy materials, such as fused silica and borofloat glass (B33). The advantages of SLIM over classical phase-contrast microscopy are discussed.
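
    Quantitative phase microscopy such as SLIM measures an optical path-length (phase) shift, from which an average refractive index change follows once the axial extent of the modification is known. A sketch of that standard conversion (the wavelength, phase shift and thickness below are illustrative, not values from the paper):

```python
import math

def delta_n(phase_shift_rad, wavelength_m, thickness_m):
    """Average refractive index change from a measured phase shift:
        delta_n = (lambda * delta_phi) / (2 * pi * d)
    where d is the axial thickness of the modified region."""
    return wavelength_m * phase_shift_rad / (2 * math.pi * thickness_m)

# Example: a 0.5 rad phase shift at 532 nm over a 10 um deep structure
dn = delta_n(0.5, 532e-9, 10e-6)
```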

  17. Development and validation of the liquid chromatography-tandem mass spectrometry method for quantitative estimation of candesartan from human plasma

    PubMed Central

    Prajapati, Shailesh T.; Patel, Pratik K.; Patel, Marmik; Chauhan, Vijendra B.; Patel, Chhaganbhai N.

    2011-01-01

    Introduction: A simple and sensitive liquid chromatography-tandem mass spectrometry method was developed and validated for estimation of candesartan in human plasma using the protein precipitation technique. Materials and Methods: The chromatographic separation was performed on reverse phase using a Betasil C8 (100 × 2.1 mm) 5-μm column, mobile phase of methanol:ammonium trifluoroacetate buffer with formic acid (60:40 v/v) and flow rate of 0.45 ml/min. The protonated analyte was quantitated in positive ionization by multiple reaction monitoring with a mass spectrometer. The mass transitions m/z 441.2 → 263.2 and 260.2 → 116.1 were used to measure candesartan by using propranolol as an internal standard. Results: The linearity of the developed method was achieved in the range of 1.2–1030 ng/ml (r2 ≥ 0.9996) for candesartan. Conclusion: The developed method is simple, rapid, accurate, cost-effective and specific; hence, it can be applied for routine analysis in pharmaceutical industries. PMID:23781443

  18. Intercepted photosynthetically active radiation estimated by spectral reflectance

    NASA Technical Reports Server (NTRS)

    Hatfield, J. L.; Asrar, G.; Kanemasu, E. T.

    1984-01-01

    Interception of photosynthetically active radiation (PAR) was evaluated relative to greenness and normalized difference (MSS (7-5)/(7+5)) for five planting dates of wheat for 1978-79 and 1979-80 at Phoenix, Arizona. Intercepted PAR was calculated from leaf area index and stage of growth. Linear relationships were found with greenness and normalized difference, with separate relationships describing growth and senescence of the crop. Normalized difference was significantly better than greenness for all planting dates. For the leaf area growth portion of the season the relation between PAR interception and normalized difference was the same over years and planting dates. For the leaf senescence phase the relationships showed more variability due to the lack of data on light interception in sparse and senescing canopies. Normalized difference could be used to estimate PAR interception throughout a growing season.
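
    The normalized difference index named above is computed directly from MSS band 7 (near-infrared) and band 5 (red) reflectance; a minimal sketch (the reflectance factors used are illustrative):

```python
def normalized_difference(mss7, mss5):
    """ND = (MSS7 - MSS5) / (MSS7 + MSS5); bounded in [-1, 1].
    Dense green canopies reflect strongly in the near-infrared (band 7)
    relative to the red (band 5), driving ND toward 1."""
    return (mss7 - mss5) / (mss7 + mss5)

# Illustrative reflectance factors for a well-developed canopy
nd = normalized_difference(0.45, 0.08)
```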

  19. Quantitative Analytical Method for the Determination of Biotinidase Activity in Dried Blood Spot Samples.

    PubMed

    Szabó, Eszter; Szatmári, Ildikó; Szőnyi, László; Takáts, Zoltán

    2015-10-20

    Biotinidase activity assay is included in most newborn screening protocols, and the positive results are confirmed by quantitative enzyme activity measurements. In our study, we describe a new quantitative analytical method for the determination of biotinidase activity using the blood sample deposited onto filter paper as the assay medium, by predepositing N-biotinyl-p-aminobenzoic acid onto the standard sample collection paper. The analysis of the assay mixture requires a simple extraction step from a dried blood spot followed by the quantification of product by LC-MS. The method provides a simple and reliable enzyme assay method that enables the rapid diagnosis of biotinidase deficiency (BD). Out of the measured 36 samples, 13 were healthy with lower enzyme activities, 16 were patients with partial BD, and 7 were patients with profound BD with residual activity below 10%. Expression of enzyme activity in percentage of mean activity of negative controls allows comparison of the different techniques. The obtained results are in good agreement with activity data determined from both dried blood spots and serum samples, giving an informative diagnostic value.

  20. Quantitative purity-activity relationships of natural products: the case of anti-tuberculosis active triterpenes from Oplopanax horridus.

    PubMed

    Qiu, Feng; Cai, Geping; Jaki, Birgit U; Lankin, David C; Franzblau, Scott G; Pauli, Guido F

    2013-03-22

    The present study provides an extension of the previously developed concept of purity-activity relationships (PARs) and enables the quantitative evaluation of the effects of multiple minor components on the bioactivity of residually complex natural products. The anti-tuberculosis active triterpenes from the Alaskan ethnobotanical Oplopanax horridus were selected as a case for the development of the quantitative PAR (QPAR) concept. The residual complexity of the purified triterpenes was initially evaluated by 1D- and 2D-NMR and identified as a combination of structurally related and unrelated impurities. Using a biochemometric approach, the qHNMR purity and anti-TB activity of successive chromatographic fractions of O. horridus triterpenes were correlated by linear regression analysis to generate a mathematical QPAR model. The results demonstrate that impurities, such as widely occurring monoglycerides, can have a profound impact on the observed antimycobacterial activity of triterpene-enriched fractions. The QPAR concept is shown to be capable of providing a quantitative assessment in situations where residually complex constitution contributes toward the biological activity of natural products.

  1. A computational quantitative structure-activity relationship study of carbamate anticonvulsants using quantum pharmacological methods.

    PubMed

    Knight, J L; Weaver, D F

    1998-10-01

    A pattern recognition quantitative structure-activity relationship (QSAR) study has been performed to determine the molecular features of carbamate anticonvulsants which influence biological activity. Although carbamates, such as felbamate, have been used to treat epilepsy, their mechanisms of efficacy and toxicity are not completely understood. Quantum and classical mechanics calculations have been exploited to describe 46 carbamate drugs. Employing a principal component analysis and multiple linear regression calculations, five crucial structural descriptors were identified which directly relate to the bioactivity of the carbamate family. With the resulting mathematical model, the biological activity of carbamate analogues can be predicted with 85-90% accuracy.

  2. Quantitative structure-activity studies of octopaminergic agonists and antagonists against the nervous system of Locusta migratoria.

    PubMed

    Hirashima, A; Pan, C; Shinkai, K; Tomita, J; Kuwano, E; Taniguchi, E; Eto, M

    1998-07-01

    The quantitative structure-activity relationship (QSAR) of octopaminergic agonists and antagonists against the thoracic nerve cord of the migratory locust, Locusta migratoria L., was analyzed using physicochemical parameters and regression analysis. The hydrophobic effect, dipole moment, and shape index were important in terms of Ki: the more hydrophobic, the greater the dipole moment, and the smaller the shape index of the molecules, the greater the activity. A receptor surface model (RSM) was generated using a subset of the most active structures. Three-dimensional energetics descriptors were calculated from the RSM/ligand interaction and these three-dimensional descriptors were used in QSAR analysis. This data set was studied further using molecular shape analysis.

  3. Quantitative and qualitative analysis of the electrical activity of rectus abdominis muscle portions.

    PubMed

    Negrão Filho, R de Faria; Bérzin, F; Souza, G da Cunha

    2003-01-01

    The purpose of this study was to investigate the electrical behavior pattern of the Rectus abdominis muscle by qualitative and quantitative analysis of the electromyographic signal obtained from its superior, medium and inferior portions during dynamic and static activities. Ten male athlete volunteers (mean age 17.8 years, SD = 1.6) with no history of musculoskeletal dysfunction were studied. For the quantitative analysis, the RMS (root mean square) values obtained from the electromyographic signal during the isometric exercises were normalized and expressed as percentages of maximum voluntary isometric contraction. For the qualitative analysis of the dynamic activity, the electromyographic signal was processed by full-wave rectification, linear envelope and normalization (amplitude and time), and the resulting curve of the processed signal was submitted to descriptive graphic analysis. The results of the quantitative study show that there is no statistically significant difference among the portions of the muscle. Qualitative analysis demonstrated two aspects: the presence of a common electrical activation pattern across the portions of the Rectus abdominis muscle, and the absence of a significant difference in the inclination angles of the electrical activity curve during the isotonic exercises.

  4. Methods for Quantitative Detection of Antibody-induced Complement Activation on Red Blood Cells

    PubMed Central

    Meulenbroek, Elisabeth M.; Wouters, Diana; Zeerleder, Sacha

    2014-01-01

    Antibodies against red blood cells (RBCs) can lead to complement activation resulting in an accelerated clearance via complement receptors in the liver (extravascular hemolysis) or leading to intravascular lysis of RBCs. Alloantibodies (e.g. ABO) or autoantibodies to RBC antigens (as seen in autoimmune hemolytic anemia, AIHA) leading to complement activation are potentially harmful and can be - especially when leading to intravascular lysis - fatal1. Currently, complement activation due to (auto)-antibodies on RBCs is assessed in vitro by using the Coombs test reflecting complement deposition on RBC or by a nonquantitative hemolytic assay reflecting RBC lysis1-4. However, to assess the efficacy of complement inhibitors, it is mandatory to have quantitative techniques. Here we describe two such techniques. First, an assay to detect C3 and C4 deposition on red blood cells that is induced by antibodies in patient serum is presented. For this, FACS analysis is used with fluorescently labeled anti-C3 or anti-C4 antibodies. Next, a quantitative hemolytic assay is described. In this assay, complement-mediated hemolysis induced by patient serum is measured making use of spectrophotometric detection of the released hemoglobin. Both of these assays are very reproducible and quantitative, facilitating studies of antibody-induced complement activation. PMID:24514151
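
    The quantitative hemolytic readout described above is typically normalized between a serum-free (spontaneous lysis) blank and a 100% lysis control; a sketch of that standard calculation (the function name and absorbance values are illustrative, not taken from the protocol):

```python
def percent_hemolysis(a_sample, a_blank, a_total_lysis):
    """Percentage of RBCs lysed, from absorbance of released hemoglobin,
    normalized between spontaneous-lysis (blank) and full-lysis controls."""
    return 100.0 * (a_sample - a_blank) / (a_total_lysis - a_blank)

# Illustrative absorbance readings at the hemoglobin detection wavelength
lysis = percent_hemolysis(a_sample=0.55, a_blank=0.10, a_total_lysis=1.00)
```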

  5. Quantitative estimation of landslide risk from rapid debris slides on natural slopes in the Nilgiri hills, India

    NASA Astrophysics Data System (ADS)

    Jaiswal, P.; van Westen, C. J.; Jetten, V.

    2011-06-01

    A quantitative procedure for estimating landslide risk to life and property is presented and applied in a mountainous area in the Nilgiri hills of southern India. Risk is estimated for elements at risk located in both initiation zones and run-out paths of potential landslides. Loss of life is expressed as individual risk and as societal risk using F-N curves, whereas the direct loss of properties is expressed in monetary terms. An inventory of 1084 landslides was prepared from historical records available for the period between 1987 and 2009. A substantially complete inventory was obtained for landslides on cut slopes (1042 landslides), while for natural slopes information on only 42 landslides was available. Most landslides were shallow translational debris slides and debris flowslides triggered by rainfall. On natural slopes most landslides occurred as first-time failures. For landslide hazard assessment the following information was derived: (1) landslides on natural slopes grouped into three landslide magnitude classes, based on landslide volumes, (2) the number of future landslides on natural slopes, obtained by establishing a relationship between the number of landslides on natural slopes and cut slopes for different return periods using a Gumbel distribution model, (3) landslide susceptible zones, obtained using a logistic regression model, and (4) distribution of landslides in the susceptible zones, obtained from the model fitting performance (success rate curve). The run-out distance of landslides was assessed empirically using landslide volumes, and the vulnerability of elements at risk was subjectively assessed based on limited historic incidents. Direct specific risk was estimated individually for tea/coffee and horticulture plantations, transport infrastructures, buildings, and people both in initiation and run-out areas. Risks were calculated by considering the minimum, average, and maximum landslide volumes in each magnitude class and the

  6. Estimating the toxicities of organic chemicals to bioluminescent bacteria and activated sludge.

    PubMed

    Ren, Shijin; Frymier, Paul D

    2002-10-01

    Toxicity assays based on bioluminescent bacteria have several advantages including a quick response and an easily measured signal. The Shk1 assay is a procedure for wastewater toxicity testing based on the bioluminescent bacterium Shk1. Using the Shk1 assay, the toxicities of 98 organic chemicals were measured and EC50 values were obtained. Quantitative structure-activity relationship (QSAR) models based on the logarithm of the octanol-water partition coefficient (log(Kow)) were developed for individual groups of organic chemicals with different functional groups. The correlation coefficients for different groups of organic compounds varied between 0.69 and 0.99. An overall QSAR model without discriminating the functional groups, which can be used for a quick estimate of the toxicities of organic chemicals, was also developed and model predictions were compared to experimental data. Model predictions were found to be within one order of magnitude of the observed values.
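
    A log(Kow)-based QSAR of the kind described is, at its simplest, a linear fit of log(1/EC50) against log(Kow). A sketch with made-up, perfectly linear toy data (the coefficients obtained here are illustrative, not the paper's):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical homologous series: log(1/EC50) vs. log(Kow)
log_kow = [0.5, 1.0, 1.5, 2.0, 2.5]
log_inv_ec50 = [0.9, 1.4, 1.9, 2.4, 2.9]  # toy data on the line y = x + 0.4
a, b = fit_line(log_kow, log_inv_ec50)
```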

  7. Quantitative assessment on soil enzyme activities of heavy metal contaminated soils with various soil properties.

    PubMed

    Xian, Yu; Wang, Meie; Chen, Weiping

    2015-11-01

    Soil enzyme activities are greatly influenced by soil properties and could be significant indicators of heavy metal toxicity in soil for bioavailability assessment. Two groups of experiments were conducted to determine the joint effects of heavy metals and soil properties on soil enzyme activities. Results showed that arylsulfatase was the most sensitive soil enzyme and could be used as an indicator to study the enzymatic toxicity of heavy metals under various soil properties. Soil organic matter (SOM) was the dominant factor affecting the activity of arylsulfatase in soil. A quantitative model was derived to predict the changes of arylsulfatase activity with SOM content. When the soil organic matter content was less than the critical point A (1.05% in our study), the arylsulfatase activity dropped rapidly. When the soil organic matter content was greater than the critical point A, the arylsulfatase activity gradually rose to higher levels, showing that instead of being harmed, the soil microbial activities were enhanced. The SOM content needs to be over the critical point B (2.42% in our study) to protect the soil microbial community from harm due to severe Pb pollution (500 mg kg(-1) in our study). The quantitative model revealed the pattern of variation of enzymatic toxicity due to heavy metals under various SOM contents. The applicability of the model across a wider range of soil properties needs to be tested. The model however may provide a methodological basis for ecological risk assessment of heavy metals in soil.

  8. Quantitative structure activity relationship analysis of canonical inhibitors of serine proteases

    NASA Astrophysics Data System (ADS)

    Dell'Orco, Daniele; De Benedetti, Pier Giuseppe

    2008-06-01

    Correlation analysis was carried out between binding affinity data values from the literature and physicochemical molecular descriptors of two series of single point mutated canonical inhibitors of serine proteases, namely bovine pancreatic trypsin inhibitor (BPTI) and turkey ovomucoid third domain (OMTKY3), toward seven enzymes. Simple quantitative structure-activity relationship (QSAR) models based on either single or double linear regressions (SLR or DLR) were obtained, which highlight the role of hydrophobic and bulk/polarizability features of mutated amino acids of the inhibitors in modulating both affinity and specificity. The utility of the QSAR paradigm applied to the analysis of mutagenesis data was underlined, resulting in a simple tool to quantitatively help deciphering structure-function/activity relationships (SFAR) of different protein systems.

  9. Towards cheminformatics-based estimation of drug therapeutic index: Predicting the protective index of anticonvulsants using a new quantitative structure-index relationship approach.

    PubMed

    Chen, Shangying; Zhang, Peng; Liu, Xin; Qin, Chu; Tao, Lin; Zhang, Cheng; Yang, Sheng Yong; Chen, Yu Zong; Chui, Wai Keung

    2016-06-01

    The overall efficacy and safety profile of a new drug is partially evaluated by the therapeutic index in clinical studies and by the protective index (PI) in preclinical studies. In-silico predictive methods may facilitate the assessment of these indicators. Although QSAR and QSTR models can be used for predicting PI, their predictive capability has not been evaluated. To test this capability, we developed QSAR and QSTR models for predicting the activity and toxicity of anticonvulsants at accuracy levels above the literature-reported threshold (LT) of good QSAR models, as tested by both internal 5-fold cross validation and an external validation method. These models showed significantly compromised PI predictive capability due to the cumulative errors of the QSAR and QSTR models. Therefore, in this investigation a new quantitative structure-index relationship (QSIR) model was devised, and it showed improved PI predictive capability that exceeded the LT of good QSAR models. The QSAR, QSTR and QSIR models were developed using the support vector regression (SVR) method with the parameters optimized by using the greedy search method. The molecular descriptors relevant to the prediction of anticonvulsant activities, toxicities and PIs were analyzed by a recursive feature elimination method. The selected molecular descriptors are primarily associated with the drug-like, pharmacological and toxicological features and those used in the published anticonvulsant QSAR and QSTR models. This study suggested that QSIR is useful for estimating the therapeutic index of drug candidates.
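
    The internal 5-fold cross validation mentioned above partitions the training set into five folds, training on four and validating on the held-out fifth in rotation. A minimal pure-Python sketch of the index split (sample count and fold assignment scheme are illustrative):

```python
def k_fold_indices(n_samples, n_folds=5):
    """Yield (train, validation) index lists for k-fold cross validation.
    Folds are assigned round-robin; each sample appears in exactly one
    validation fold."""
    folds = [list(range(i, n_samples, n_folds)) for i in range(n_folds)]
    for k in range(n_folds):
        val = folds[k]
        train = [i for j, fold in enumerate(folds) if j != k for i in fold]
        yield train, val

splits = list(k_fold_indices(20))  # 5 (train, validation) pairs
```

    Each split would train an SVR model on the train indices and score it on the validation indices; averaging the five scores gives the cross-validated accuracy.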

  10. A Combinational Strategy of Model Disturbance and Outlier Comparison to Define Applicability Domain in Quantitative Structural Activity Relationship.

    PubMed

    Yan, Jun; Zhu, Wei-Wei; Kong, Bo; Lu, Hong-Bing; Yun, Yong-Huan; Huang, Jian-Hua; Liang, Yi-Zeng

    2014-08-01

    In order to define an applicability domain for quantitative structure-activity relationship modeling, a combinational strategy of model disturbance and outlier comparison is developed. An indicator named model disturbance index was defined to estimate the prediction error. Moreover, the information of the outliers in the training set was used to filter the unreliable samples in the test set based on "structural similarity". Chromatographic retention index data were used to investigate this approach. A relationship between model disturbance index and prediction error can be found. Also, the comparison between the outlier set and the test set could provide additional information about which unknown samples require more attention. A novel technique based on model population analysis was used to evaluate the validity of the applicability domain. Finally, three commonly used methods, i.e. Leverage, descriptor range-based and model perturbation method, were compared with the proposed approach. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Promises and pitfalls of quantitative structure-activity relationship approaches for predicting metabolism and toxicity.

    PubMed

    Zvinavashe, Elton; Murk, Albertinka J; Rietjens, Ivonne M C M

    2008-12-01

    The description of quantitative structure-activity relationship (QSAR) models has been a topic for scientific research for more than 40 years and a topic within the regulatory framework for more than 20 years. At present, efforts on QSAR development are increasing because of their promise for supporting reduction, refinement, and/or replacement of animal toxicity experiments. However, their acceptance in risk assessment seems to require a more standardized and scientific underpinning of QSAR technology to avoid possible pitfalls. For this reason, guidelines for QSAR model development recently proposed by the Organization for Economic Cooperation and Development (OECD) [Organization for Economic Cooperation and Development (OECD) (2007) Guidance document on the validation of (quantitative) structure-activity relationships [(Q)SAR] models. OECD Environment Health and Safety Publications: Series on Testing and Assessment No. 69, Paris] are expected to help increase the acceptability of QSAR models for regulatory purposes. The guidelines recommend that QSAR models should be associated with (i) a defined end point, (ii) an unambiguous algorithm, (iii) a defined domain of applicability, (iv) appropriate measures of goodness-of-fit, robustness, and predictivity, and (v) a mechanistic interpretation, if possible [OECD, 2007]. The present perspective provides an overview of these guidelines for QSAR model development and their rationale, as well as the promises and pitfalls of using QSAR approaches and these guidelines for predicting metabolism and toxicity of new and existing chemicals.

  12. Work sampling: a quantitative analysis of nursing activity in a neuro-rehabilitation setting.

    PubMed

    Williams, Heather; Harris, Ruth; Turner-Stokes, Lynne

    2009-10-01

    The aim of this investigation was to establish the distribution and proportion of nursing activity represented by patient-related care activities (direct and indirect), and other nursing activities (unit-related and personal) within one inpatient neurological rehabilitation unit. A set of tools has been developed for estimating the care/nursing hours required for direct hands-on patient care in hospital rehabilitation settings. However, to apply this information to estimate the actual staffing requirements in relation to a given caseload, it is necessary to know the proportion of nursing workload assigned to other activities and how this may vary throughout the day. A work sampling study was conducted during 2004. A snapshot of nursing activity was recorded at 5-minute intervals from 0600 to 2355 spread over 2 weeks, with one session from 0600 to 1525 and the second from 1530 to 2355. A total of 8883 nursing activities were observed and recorded over 126 hours and categorized as follows: 4060 (46%) direct patient care, 2218 (25%) indirect patient care, 874 (10%) unit-related and 1731 (19%) personal time. The proportions of direct care fluctuated throughout the day, with direct care activities mainly concentrated in early mornings and to a lesser extent evenings. Direct patient care accounted for less than half of the nursing activity in a rehabilitation setting. Estimates of staffing requirement must also take account of the time required for indirect care and non-patient related activity.
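
    In work sampling, category proportions are simply observation counts divided by the total number of observations; a sketch reproducing the percentages reported above from the stated counts:

```python
# Observation counts from the work sampling study (8883 total)
counts = {
    "direct patient care": 4060,
    "indirect patient care": 2218,
    "unit-related": 874,
    "personal time": 1731,
}
total = sum(counts.values())
# Rounded percentage of nursing activity in each category
percent = {k: round(100 * v / total) for k, v in counts.items()}
```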

  13. Quantitative telomerase enzyme activity determination using droplet digital PCR with single cell resolution

    PubMed Central

    Ludlow, Andrew T.; Robin, Jerome D.; Sayed, Mohammed; Litterst, Claudia M.; Shelton, Dawne N.; Shay, Jerry W.; Wright, Woodring E.

    2014-01-01

    The telomere repeat amplification protocol (TRAP) for the human reverse transcriptase, telomerase, is a PCR-based assay developed two decades ago and is still used for routine determination of telomerase activity. The TRAP assay can only reproducibly detect ∼2-fold differences and is only quantitative when compared to internal standards and reference cell lines. The method generally involves laborious radioactive gel electrophoresis and is not conducive to high-throughput analyses. Recently droplet digital PCR (ddPCR) technologies have become available that allow for absolute quantification of input deoxyribonucleic acid molecules following PCR. We describe the reproducibility and provide several examples of a droplet digital TRAP (ddTRAP) assay for telomerase activity, including quantitation of telomerase activity in single cells, telomerase activity across several common telomerase positive cancer cell lines and in human primary peripheral blood mononuclear cells following mitogen stimulation. Adaptation of the TRAP assay to digital format allows accurate and reproducible quantification of the number of telomerase-extended products (i.e. telomerase activity; 57.8 ± 7.5) in a single HeLa cell. The tools developed in this study allow changes in telomerase enzyme activity to be monitored on a single cell basis and may have utility in designing novel therapeutic approaches that target telomerase. PMID:24861623
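
    Absolute quantification in ddPCR rests on Poisson statistics over partitions: because a droplet can hold more than one target molecule, the mean number of targets per droplet is recovered from the fraction of negative droplets rather than the positive count alone. A sketch of that standard correction (the droplet counts below are illustrative, not from the paper):

```python
import math

def targets_per_droplet(n_positive, n_total):
    """Poisson-corrected mean target molecules per partition:
        lambda = -ln(fraction of negative droplets)."""
    frac_negative = (n_total - n_positive) / n_total
    return -math.log(frac_negative)

# Hypothetical example: 4000 positive droplets out of 15000 accepted
lam = targets_per_droplet(4000, 15000)
total_copies = lam * 15000  # estimated input molecules across all droplets
```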

  14. Chemometrics-based approach to modeling quantitative composition-activity relationships for Radix Tinosporae.

    PubMed

    Yan, Shi-Kai; Lin, Zhong-Ying; Dai, Wei-Xing; Shi, Qi-Rong; Liu, Xiao-Hua; Jin, Hui-Zi; Zhang, Wei-Dong

    2010-09-01

    Quantitative composition-activity relationship (QCAR) study makes it possible to discover active components in traditional Chinese medicine (TCM) and to predict the integral bioactivity by its chemical composition. In the study, 28 samples of Radix Tinosporae were quantitatively analyzed by high performance liquid chromatography, and their analgesic activities were investigated via abdominal writhing tests on mice. Three genetic algorithms (GA) based approaches including partial least square regression, radial basis function neural network, and support vector regression (SVR) were established to construct QCAR models of R. Tinosporae. The result shows that GA-SVR has the best model performance in the bioactivity prediction of R. Tinosporae; seven major components thereof were discovered to have analgesic activities, and the analgesic activities of these components were partly confirmed by subsequent abdominal writhing test. The proposed approach allows discovering active components in TCM and predicting bioactivity by its chemical composition, and is expected to be utilized as a supplementary tool for the quality control and drug discovery of TCM.

  15. Quantitative network signal combinations downstream of TCR activation can predict IL-2 production response.

    PubMed

    Kemp, Melissa L; Wille, Lucia; Lewis, Christina L; Nicholson, Lindsay B; Lauffenburger, Douglas A

    2007-04-15

    Proximal signaling events activated by TCR-peptide/MHC (TCR-pMHC) binding have been the focus of intense ongoing study, but understanding how the consequent downstream signaling networks integrate to govern ultimate avidity-appropriate TCR-pMHC T cell responses remains a crucial next challenge. We hypothesized that a quantitative combination of key downstream network signals across multiple pathways must encode the information generated by TCR activation, providing the basis for a quantitative model capable of interpreting and predicting T cell functional responses. To this end, we measured 11 protein nodes across six downstream pathways, along five time points from 10 min to 4 h, in a 1B6 T cell hybridoma stimulated by a set of three myelin proteolipid protein 139-151 altered peptide ligands. A multivariate regression model generated from this data compendium successfully comprehends the various IL-2 production responses and moreover successfully predicts a priori the response to an additional peptide treatment, demonstrating that TCR binding information is quantitatively encoded in the downstream network. Individual node and/or time point measurements less effectively accounted for the IL-2 responses, indicating that signals must be integrated dynamically across multiple pathways to adequately represent the encoded TCR signaling information. Of further importance, the model also successfully predicted a priori direct experimental tests of the effects of individual and combined inhibitors of the MEK/ERK and PI3K/Akt pathways on this T cell response. Together, our findings show how multipathway network signals downstream of TCR activation quantitatively integrate to translate pMHC stimuli into functional cell responses.

  16. Quantitative Evaluation of Landsat 7 ETM+ SLC-off Images for Surface Velocity Estimation of Mountain Glaciers

    NASA Astrophysics Data System (ADS)

    Jiang, L.; Sun, Y.; Liu, L.; Wang, S.; Wang, H.

    2014-12-01

    In many cases the Landsat mission series (Landsat 1-5, 7 and 8) provides our only detailed and consistent data source for mapping global glacier changes over the last 40 years. However, the scan-line corrector (SLC) of the ETM+ sensor on board Landsat 7 failed permanently, leaving wedge-shaped data gaps in SLC-off images that cause roughly 22% of the pixels to be missing. The SLC failure poses a serious problem for glaciological applications of ETM+ data, particularly for monitoring long-term glacier dynamics in High Mountain Asia, where few usable data are available because of frequent cloud cover. This study aims to evaluate the potential of Landsat 7 SLC-off images for deriving surface velocities of mountain glaciers. A pair of SLC-off images over the Siachen Glacier, acquired in August 2009 and August 2010, was used for this purpose. First, two typical gap-filling methods, localized linear histogram matching (LLHM) and weighted linear regression (WLR), were used to recover the SLC-off images. The recovered pairs were then used to derive glacier surface velocities with the COSI-Corr feature-tracking procedure. Finally, the velocity results were quantitatively compared with those from a pair of Landsat 5 TM images acquired at nearly the same time as the SLC-off pair. Our results show that (1) the WLR method recovers the gaps better than the LLHM method, (2) the surface velocities estimated from the recovered SLC-off images agree closely with those from the TM images, and (3) the mean annual velocity of the Siachen Glacier was approximately 70 m/yr between 2009 and 2010, with a maximum of 280 m/yr close to the glacier's equilibrium line, similar to results reported in previous studies. Therefore, with a suitable gap-filling method such as WLR, ETM+ SLC-off data can be used to estimate the surface velocities of mountain glaciers.
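    The core of the velocity estimation above is image-to-image feature tracking. As a minimal sketch of the idea (not the actual COSI-Corr algorithm, which performs subpixel matching in the frequency domain), the displacement between two co-registered images can be found by maximizing the normalized cross-correlation over a small search window; the function name and synthetic data are illustrative only:

```python
import numpy as np

def track_displacement(ref, sec, max_shift=5):
    """Estimate the integer (dy, dx) displacement of `sec` relative to `ref`
    by brute-force maximization of normalized cross-correlation."""
    best, best_shift = -2.0, (0, 0)
    h, w = ref.shape
    m = max_shift
    core = ref[m:h - m, m:w - m]
    core_n = (core - core.mean()) / core.std()
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            win = sec[m + dy:h - m + dy, m + dx:w - m + dx]
            win_n = (win - win.mean()) / win.std()
            r = (core_n * win_n).mean()  # Pearson correlation at this shift
            if r > best:
                best, best_shift = r, (dy, dx)
    return best_shift

# synthetic test: a textured surface shifted by (2, 3) pixels
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(np.roll(img, 2, axis=0), 3, axis=1)
print(track_displacement(img, shifted))  # -> (2, 3)
```

    Applied per image chip and divided by the time separation, such offsets yield a surface-velocity field in m/yr.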

  17. Ranking of hair dye substances according to predicted sensitization potency: quantitative structure-activity relationships.

    PubMed

    Søsted, H; Basketter, D A; Estrada, E; Johansen, J D; Patlewicz, G Y

    2004-01-01

    Allergic contact dermatitis following the use of hair dyes is well known. Many chemicals are used in hair dyes and it is unlikely that all cases of hair dye allergy can be diagnosed by means of patch testing with p-phenylenediamine (PPD). The objectives of this study are to identify all hair dye substances registered in Europe and to provide their tonnage data. The sensitization potential of each substance was then estimated by using a quantitative structure-activity relationship (QSAR) model and the substances were ranked according to their predicted potency. A cluster analysis was performed in order to help select a number of chemically diverse hair dye substances that could be used in subsequent clinical work. Various information sources, including the Inventory of Cosmetics Ingredients, new regulations on cosmetics, data on total use and ChemId (the Chemical Search Input website provided by the National Library of Medicine), were used in order to identify the names and structures of the hair dyes. A QSAR model, developed with the help of experimental local lymph node assay data and topological sub-structural molecular descriptors (TOPS-MODE), was used in order to predict the likely sensitization potential. Predictions for sensitization potential were made for the 229 substances that could be identified by means of a chemical structure, the majority of these hair dyes (75%) being predicted to be strong/moderate sensitizers. Only 22% were predicted to be weak sensitizers and 3% were predicted to be extremely weak or non-sensitizing. Eight of the most widely used hair dye substances were predicted to be strong/moderate sensitizers, including PPD - which is the most commonly used hair dye allergy marker in patch testing. A cluster analysis by using TOPS-MODE descriptors as inputs helped us group the hair dye substances according to their chemical similarity. This would facilitate the selection of potential substances for clinical patch testing. 

  18. Finding Biomass Degrading Enzymes Through an Activity-Correlated Quantitative Proteomics Platform (ACPP)

    NASA Astrophysics Data System (ADS)

    Ma, Hongyan; Delafield, Daniel G.; Wang, Zhe; You, Jianlan; Wu, Si

    2017-04-01

    The microbial secretome, a pool of enzymes that degrade biomass (i.e., plant-based materials), can be mined for industrial enzyme candidates for biofuel production. Proteomics approaches have been applied to discover novel enzyme candidates by comparing protein expression profiles with the enzyme activity of the whole secretome under different growth conditions. However, confident assignment of "active" enzymes requires measuring the activity of each candidate individually, which remains challenging. To address this challenge, we developed an Activity-Correlated Quantitative Proteomics Platform (ACPP) that systematically correlates protein-level enzymatic activity patterns with protein elution profiles using a label-free quantitative proteomics approach. The ACPP uses an optimized high-performance anion-exchange separation to fractionate complex protein samples efficiently while preserving enzymatic activities. Enzymatic activity patterns detected in sequential fractions using microplate-based assays are cross-correlated with protein elution profiles using a customized pattern-matching algorithm that yields a correlation R-score. The ACPP was successfully applied to the identification of two types of "active" biomass-degrading enzymes (starch hydrolysis enzymes and cellulose hydrolysis enzymes) from the Aspergillus niger secretome in a multiplexed fashion. By determining the elution profiles of 156 proteins in the A. niger secretome, we confidently identified 1,4-α-glucosidase as the major "active" starch hydrolysis enzyme (R = 0.96) and endoglucanase as the major "active" cellulose hydrolysis enzyme (R = 0.97). These results demonstrate that the ACPP facilitates the discovery of bioactive enzymes from complex protein samples in a high-throughput, multiplexed, untargeted fashion.
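    The pattern-matching step can be illustrated with a toy version of the correlation scoring: each protein's abundance profile across fractions is compared with the fraction-wise activity pattern, and candidates are ranked by Pearson R. The protein names and numbers below are invented for illustration, not taken from the study:

```python
import numpy as np

def rank_by_correlation(activity, elution_profiles):
    """Rank proteins by the Pearson R between their elution profile and
    the enzymatic activity pattern across the same fractions."""
    a = (activity - activity.mean()) / activity.std()
    scores = {}
    for name, prof in elution_profiles.items():
        p = (prof - prof.mean()) / prof.std()
        scores[name] = float((a * p).mean())  # Pearson correlation
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

activity = np.array([0.1, 0.3, 2.5, 8.0, 3.1, 0.4])  # assay signal per fraction
profiles = {
    "glucoamylase": np.array([0.0, 0.2, 2.0, 7.5, 3.0, 0.5]),  # co-elutes with activity
    "protease":     np.array([5.0, 4.0, 1.0, 0.2, 0.1, 0.0]),
}
print(rank_by_correlation(activity, profiles)[0][0])  # -> glucoamylase
```

    The protein whose profile tracks the assay signal most closely gets the top R-score and is called the "active" enzyme for that assay.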

  20. Finding Biomass Degrading Enzymes Through an Activity-Correlated Quantitative Proteomics Platform (ACPP).

    PubMed

    Ma, Hongyan; Delafield, Daniel G; Wang, Zhe; You, Jianlan; Wu, Si

    2017-04-01

    The microbial secretome, a pool of enzymes that degrade biomass (i.e., plant-based materials), can be mined for industrial enzyme candidates for biofuel production. Proteomics approaches have been applied to discover novel enzyme candidates by comparing protein expression profiles with the enzyme activity of the whole secretome under different growth conditions. However, confident assignment of "active" enzymes requires measuring the activity of each candidate individually, which remains challenging. To address this challenge, we developed an Activity-Correlated Quantitative Proteomics Platform (ACPP) that systematically correlates protein-level enzymatic activity patterns with protein elution profiles using a label-free quantitative proteomics approach. The ACPP uses an optimized high-performance anion-exchange separation to fractionate complex protein samples efficiently while preserving enzymatic activities. Enzymatic activity patterns detected in sequential fractions using microplate-based assays are cross-correlated with protein elution profiles using a customized pattern-matching algorithm that yields a correlation R-score. The ACPP was successfully applied to the identification of two types of "active" biomass-degrading enzymes (starch hydrolysis enzymes and cellulose hydrolysis enzymes) from the Aspergillus niger secretome in a multiplexed fashion. By determining the elution profiles of 156 proteins in the A. niger secretome, we confidently identified 1,4-α-glucosidase as the major "active" starch hydrolysis enzyme (R = 0.96) and endoglucanase as the major "active" cellulose hydrolysis enzyme (R = 0.97). These results demonstrate that the ACPP facilitates the discovery of bioactive enzymes from complex protein samples in a high-throughput, multiplexed, untargeted fashion.

  1. Quantitative estimation of Tropical Rainfall Mapping Mission precipitation radar signals from ground-based polarimetric radar observations

    NASA Astrophysics Data System (ADS)

    Bolen, Steven M.; Chandrasekar, V.

    2003-06-01

    The Tropical Rainfall Measuring Mission (TRMM) is the first mission dedicated to measuring rainfall from space using radar. The precipitation radar (PR) is one of several instruments aboard the TRMM satellite, which operates in a nearly circular orbit with a nominal altitude of 350 km, inclination of 35°, and period of 91.5 min. The PR is a single-frequency Ku-band instrument designed to yield information about the vertical storm structure and thereby give insight into the intensity and distribution of rainfall. Attenuation effects on PR measurements, however, can be significant, as high as 10-15 dB, which can seriously impair the accuracy of rain-rate retrieval algorithms derived from PR signal returns. Quantitative estimation of PR attenuation is made along the PR beam via ground-based polarimetric observations to validate the attenuation correction procedures used by the PR. The reflectivity (Zh) at horizontal polarization and the specific differential phase (Kdp) are found along the beam from S-band ground radar measurements, and theoretical modeling is used to determine the expected specific attenuation (k) along the space-Earth path at Ku-band frequency from these measurements. A theoretical k-Kdp relationship is determined for rain when Kdp ≥ 0.5°/km, and a power-law relationship, k = aZh^b, is determined for light rain and other types of hydrometeors encountered along the path. After alignment and resolution-volume matching between ground and PR measurements, the two-way path-integrated attenuation (PIA) is calculated along the PR propagation path by integrating the specific attenuation along the path. The PR reflectivity derived after removing the PIA is also compared against ground radar observations.
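    The attenuation bookkeeping described above can be sketched in a few lines. This is a hedged illustration of the general approach, not the paper's implementation: the coefficients of the k-Kdp relation and the k = aZh^b power law below are placeholders, not the study's fitted values.

```python
import numpy as np

def two_way_pia(zh_linear, kdp, ds_km, c=0.3, a=3e-4, b=0.7):
    """Two-way path-integrated attenuation (dB).

    Per range gate, specific attenuation k (dB/km) comes from a k-Kdp
    relation where Kdp >= 0.5 deg/km (rain) and from a power law
    k = a * Zh**b elsewhere; the one-way integral is doubled because
    the radar signal traverses the path twice.
    """
    k = np.where(kdp >= 0.5, c * kdp, a * zh_linear**b)
    return 2.0 * float(np.sum(k)) * ds_km

# two gates of 1 km, both in rain with Kdp = 1 deg/km
print(two_way_pia(np.array([100.0, 100.0]), np.array([1.0, 1.0]), 1.0))  # -> 1.2
```

    Adding the PIA back to the measured PR reflectivity gives the attenuation-corrected profile that is then compared against the ground radar.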

  2. Quantitative precipitation estimates for the northeastern Qinghai-Tibetan Plateau over the last 18,000 years

    NASA Astrophysics Data System (ADS)

    Li, Jianyong; Dodson, John; Yan, Hong; Cheng, Bo; Zhang, Xiaojian; Xu, Qinghai; Ni, Jian; Lu, Fengyan

    2017-05-01

    Quantitative information on the long-term variability of precipitation and vegetation on the Qinghai-Tibetan Plateau (QTP), covering both the Late Glacial and the Holocene, is scarce. Here we provide new numerical reconstructions of annual mean precipitation (PANN) and vegetation history over the last 18,000 years using high-resolution pollen data from Lakes Dalianhai and Qinghai on the northeastern QTP. Five calibration techniques, weighted averaging, weighted averaging-partial least squares regression, the modern analogue technique, locally weighted weighted-averaging regression, and maximum likelihood, were employed to construct robust inference models and produce reliable PANN estimates for the QTP. The biomization method was applied to reconstruct the vegetation dynamics. The study area was dominated by steppe and characterized by a highly variable, relatively dry climate at 18,000-11,000 cal years B.P. PANN increased from the early Holocene, reached a maximum at 8000-3000 cal years B.P., with coniferous-temperate mixed forest as the dominant biome, and declined thereafter toward the present. The PANN reconstructions are broadly consistent with other proxy-based paleoclimatic records from the northeastern QTP and the northern region of monsoonal China. The mechanisms behind the precipitation changes may tentatively be attributed to internal feedback processes of competing higher-latitude (e.g., North Atlantic) and lower-latitude (e.g., subtropical monsoon) climatic regimes, primarily modulated by solar energy output as the external driving force. These findings may provide important insights into future Asian precipitation dynamics under the projected global warming.
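    Of the calibration techniques listed above, weighted averaging (WA) is the simplest to illustrate: a taxon's precipitation optimum is its abundance-weighted mean PANN across the modern training set, and a fossil sample's PANN estimate is the abundance-weighted mean of the optima of the taxa it contains. The sketch below uses invented numbers purely to show the mechanics:

```python
import numpy as np

def wa_optima(train_abund, train_env):
    """Taxon optima: abundance-weighted mean of the environmental variable.
    train_abund: (n_samples, n_taxa) pollen abundances; train_env: (n_samples,)."""
    return (train_abund * train_env[:, None]).sum(axis=0) / train_abund.sum(axis=0)

def wa_reconstruct(fossil_abund, optima):
    """PANN estimate for one fossil sample: abundance-weighted mean of optima."""
    return (fossil_abund * optima).sum() / fossil_abund.sum()

# hypothetical training set: taxon A grows where PANN ~ 200 mm, taxon B ~ 600 mm
optima = wa_optima(np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([200.0, 600.0]))
print(wa_reconstruct(np.array([0.5, 0.5]), optima))  # -> 400.0
```

    The other techniques (WA-PLS, modern analogues, maximum likelihood) refine this same training-set-to-fossil transfer in different ways.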

  3. Quantitative Estimation of Ising-Type Magnetic Anisotropy in a Family of C3 -Symmetric Co(II) Complexes.

    PubMed

    Mondal, Amit Kumar; Jover, Jesús; Ruiz, Eliseo; Konar, Sanjit

    2017-09-12

    In this paper, the influence of structural and chemical effects on the Ising-type magnetic anisotropy of pentacoordinate Co(II) complexes has been investigated using a combined experimental and theoretical approach. To this end, four pentacoordinate Co(II) complexes, [Co(tpa)Cl]⋅ClO4 (1), [Co(tpa)Br]⋅ClO4 (2), [Co(tbta)Cl]⋅(ClO4)⋅(MeCN)2⋅(H2O) (3) and [Co(tbta)Br]⋅ClO4 (4), were deliberately designed and synthesized using the tripodal ligands tris(2-methylpyridyl)amine (tpa) and tris[(1-benzyl-1H-1,2,3-triazol-4-yl)methyl]amine (tbta). Detailed dc and ac measurements show field-induced slow magnetic relaxation of the Co(II) centers with Ising-type magnetic anisotropy. A quantitative estimate of the zero-field splitting (ZFS) parameters was obtained from detailed ab initio calculations. The computational studies reveal that the wavefunction of all the studied complexes has a very strong multiconfigurational character that stabilizes the largest ms = ±3/2 components of the quartet state and hence produces a large negative contribution to the ZFS parameters. The difference in the magnitudes of the Ising-type anisotropy can be explained through ligand-field considerations: D is larger and negative for weak equatorial σ-donating and strong apical π-donating ligands. To elucidate the role of intermolecular interactions in the magnetic relaxation behavior of adjacent Co(II) centers, a diamagnetic isostructural Zn(II) analog (5) was synthesized and a magnetic dilution experiment was performed. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Acaricidal and quantitative structure activity relationship of monoterpenes against the two-spotted spider mite, Tetranychus urticae.

    PubMed

    Badawy, Mohamed E I; El-Arami, Sailan A A; Abdelgaleil, Samir A M

    2010-11-01

    The acaricidal activity of 12 monoterpenes against the two-spotted spider mite, Tetranychus urticae Koch, was examined using fumigation and direct-contact application methods. Cuminaldehyde and (-)-linalool showed the highest fumigant toxicity, with LC(50) = 0.31 and 0.56 mg/l, respectively. The other monoterpenes also exhibited strong fumigant toxicity, with LC(50) values ranging from 1.28 to 8.09 mg/l, except camphene, which was the least effective (LC(50) = 61.45 mg/l). Based on contact activity, the results were rather different: menthol displayed the highest acaricidal activity (LC(50) = 128.53 mg/l), followed by thymol (172.0 mg/l), geraniol (219.69 mg/l) and (-)-limonene (255.44 mg/l); 1,8-cineole, cuminaldehyde and (-)-linalool showed moderate toxicity. At 125 mg/l, (-)-limonene and (-)-carvone caused the highest egg mortality among the tested compounds (70.6 and 66.9% mortality, respectively). In addition, the effect of molecular descriptors was analyzed using a quantitative structure-activity relationship (QSAR) procedure. The QSAR model showed excellent agreement between the estimated and experimentally measured toxicity parameter (LC(50)) for the tested monoterpenes, and the fumigant activity increased significantly with vapor pressure. Comparing the fumigant and contact toxicity of the monoterpenes against T. urticae with their acetylcholinesterase (AChE) inhibitory effects revealed that some of the tested compounds, such as cuminaldehyde, (-)-linalool, (-)-limonene and menthol, showed both strong acaricidal activity and potent AChE inhibition, whereas others, such as (-)-carvone, showed strong fumigant activity but weak AChE inhibition.
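    The reported link between vapor pressure and fumigant activity amounts to a one-descriptor QSAR, which can be sketched as a log-log regression. The numbers below are invented for illustration and are not the study's measurements:

```python
import numpy as np

# hypothetical descriptor/response pairs for five compounds
vapor_pressure = np.array([0.02, 0.1, 0.5, 1.5, 4.0])  # mmHg (invented)
lc50 = np.array([60.0, 8.0, 2.0, 0.9, 0.3])            # mg/l (invented)

# fit log10(LC50) = slope * log10(VP) + intercept
slope, intercept = np.polyfit(np.log10(vapor_pressure), np.log10(lc50), 1)

# a negative slope encodes the trend in the abstract: higher vapor
# pressure -> lower LC50, i.e., greater fumigant toxicity
print(slope < 0)  # -> True
```

    A model like this predicts LC(50) for untested compounds from the descriptor alone, which is how the "excellent agreement" claim above would be checked.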

  5. An assessment of the reliability of quantitative genetics estimates in study systems with high rate of extra-pair reproduction and low recruitment.

    PubMed

    Bourret, A; Garant, D

    2017-03-01

    Quantitative genetics approaches, and particularly animal models, are widely used to assess the genetic (co)variance of key fitness related traits and infer adaptive potential of wild populations. Despite the importance of precision and accuracy of genetic variance estimates and their potential sensitivity to various ecological and population specific factors, their reliability is rarely tested explicitly. Here, we used simulations and empirical data collected from an 11-year study on tree swallow (Tachycineta bicolor), a species showing a high rate of extra-pair paternity and a low recruitment rate, to assess the importance of identity errors, structure and size of the pedigree on quantitative genetic estimates in our dataset. Our simulations revealed an important lack of precision in heritability and genetic-correlation estimates for most traits, a low power to detect significant effects and important identifiability problems. We also observed a large bias in heritability estimates when using the social pedigree instead of the genetic one (deflated heritabilities) or when not accounting for an important cause of resemblance among individuals (for example, permanent environment or brood effect) in model parameterizations for some traits (inflated heritabilities). We discuss the causes underlying the low reliability observed here and why they are also likely to occur in other study systems. Altogether, our results re-emphasize the difficulties of generalizing quantitative genetic estimates reliably from one study system to another and the importance of reporting simulation analyses to evaluate these important issues.
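    A toy simulation makes the precision point concrete: even the textbook midparent-offspring regression, where the slope estimates h², only recovers the true value within sampling error, and the scatter grows sharply as pedigrees shrink or contain identity errors. This is a generic illustration with synthetic data, not the authors' animal-model analysis:

```python
import numpy as np

rng = np.random.default_rng(42)
h2_true, n = 0.5, 5000

# synthetic midparent values and offspring phenotypes: offspring regress
# on the midparent with slope h2 plus environmental noise
midparent = (rng.normal(size=n) + rng.normal(size=n)) / 2.0
offspring = h2_true * midparent + rng.normal(scale=0.9, size=n)

h2_hat = np.polyfit(midparent, offspring, 1)[0]  # slope estimates h2
```

    With n = 5000 families the slope should land near 0.5; rerunning with n = 50 shows estimates wandering far from the truth, the kind of imprecision the abstract documents, and replacing true parents with mismatched ones (as extra-pair paternity does) biases the slope downward.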

  6. [Research on TLC identification and quantitative anti-coagulant activity methods for Scolopendra subspinipes mutilans].

    PubMed

    Li, Tao; Tan, Xiao-mei; Long, Qun; Chen, Fei-long

    2012-05-01

    To improve the quality standard of Scolopendra subspinipes mutilans by developing methods for TLC identification and quantitative measurement of anti-coagulant activity. Free arginine (Arg) and serine (Ser) in scolopendra were identified by TLC; sample preparation procedures and developing solvent systems were screened. Anti-coagulant activity was determined by titration with thrombin, and pretreatment methods were screened. When the medicinal material was extracted with formic acid-95% ethanol (1:1) under ultrasonication and developed with n-butanol-acetic acid-water (12:5:4), the spots of Arg and Ser were well separated. Ultrasonication was suitable for preparing the anti-coagulant components of Scolopendra subspinipes mutilans, and determination of their anti-coagulant activity by thrombin titration was well reproducible: the anti-thrombin activity of the test sample was (14.00 +/- 1.53) U/g, and those of three different batches were (13.00 +/- 0.58) U/g, (17.00 +/- 1.15) U/g and (15.67 +/- 1.53) U/g, respectively. The TLC identification and quantitative anti-coagulant activity methods can serve as a basis for improving the quality standard of Scolopendra subspinipes mutilans.

  7. Rapid and quantitative measuring of telomerase activity using an electrochemiluminescent sensor

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaoming; Xing, Da; Zhu, Debin; Jia, Li

    2007-11-01

    Telomerase is a ribonucleoprotein enzyme that adds telomeric repeats to the 3' end of chromosomal DNA, maintaining chromosomal integrity and stability. The strong association of telomerase activity with tumors makes it the most widespread cancer marker. A number of assays based on the polymerase chain reaction (PCR) have been developed for evaluating telomerase activity, but these methods require gel electrophoresis and staining procedures. We developed an electrochemiluminescence (ECL) sensor for measuring telomerase activity that overcomes the drawbacks of the conventional methods, such as troublesome post-PCR procedures and merely semi-quantitative assessment. In this assay, a 5'-biotinylated telomerase synthesis (TS) primer serves as the substrate for the extension of telomeric repeats by telomerase. The extension products were amplified with this TS primer and a tris(2,2'-bipyridyl)ruthenium (TBR)-labeled reverse primer. The amplified products were separated and enriched on the electrode surface by streptavidin-coated magnetic beads and detected by measuring the ECL signal of the TBR label. Measuring telomerase activity with this sensor is easy, sensitive, rapid, and quantitative, and should be clinically useful for detecting and monitoring telomerase activity.

  8. Altered resting-state functional activity in posttraumatic stress disorder: A quantitative meta-analysis

    PubMed Central

    Wang, Ting; Liu, Jia; Zhang, Junran; Zhan, Wang; Li, Lei; Wu, Min; Huang, Hua; Zhu, Hongyan; Kemp, Graham J.; Gong, Qiyong

    2016-01-01

    Many functional neuroimaging studies have reported differential patterns of spontaneous brain activity in posttraumatic stress disorder (PTSD), but the findings are inconsistent and have not so far been quantitatively reviewed. The present study set out to determine consistent, specific regional brain activity alterations in PTSD, using the Effect Size Signed Differential Mapping technique to conduct a quantitative meta-analysis of resting-state functional neuroimaging studies of PTSD that used either a non-trauma (NTC) or a trauma-exposed (TEC) comparison control group. Fifteen functional neuroimaging studies were included, comparing 286 PTSDs, 203 TECs and 155 NTCs. Compared with NTC, PTSD patients showed hyperactivity in the right anterior insula and bilateral cerebellum, and hypoactivity in the dorsal medial prefrontal cortex (mPFC); compared with TEC, PTSD showed hyperactivity in the ventral mPFC. The pooled meta-analysis showed hypoactivity in the posterior insula, superior temporal, and Heschl’s gyrus in PTSD. Additionally, subgroup meta-analysis (non-medicated subjects vs. NTC) identified abnormal activation in the prefrontal-limbic system. In meta-regression analyses, mean illness duration was positively associated with activity in the right cerebellum (PTSD vs. NTC), and illness severity was negatively associated with activity in the right lingual gyrus (PTSD vs. TEC). PMID:27251865

  9. Quantitative estimation of IL-6 in serum/plasma samples using a rapid and cost-effective fiber optic dip-probe

    NASA Astrophysics Data System (ADS)

    Wang, Chun-Wei; Manne, Upender; Reddy, Vishnu B.; Kapoor, Rakesh

    2010-02-01

    A rapid and cost-effective combination tapered fiber-optic biosensor (CTFOB) dip-probe was used for quantitative estimation of interleukin (IL)-6 in serum/plasma samples, with a sandwich immunoassay as the detection technique. The probes successfully detected IL-6 in two serum samples: one from a non-neoplastic autoimmune (lupus) patient and one from a lymphoma patient. The estimated amount of IL-6 was 4.8 +/- 0.9 pM in the lupus sample and 2 +/- 1 pM in the lymphoma sample. These results demonstrate that the CTFOB dip-probe is capable of quantitative estimation of proteins in serum/plasma samples with high specificity.
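    Quantitation with such a sensor rests on a calibration curve: fit the sensor response against standards of known concentration, then invert the fit for unknown samples. The sketch below assumes a linear response for simplicity (immunoassay calibrations are often four-parameter logistic in practice); all numbers are invented:

```python
import numpy as np

# hypothetical IL-6 standards and sensor responses
std_conc = np.array([1.0, 2.0, 4.0, 8.0])        # pM (invented)
std_signal = np.array([10.0, 19.0, 41.0, 80.0])  # a.u. (invented)

# linear calibration: signal = slope * conc + intercept
slope, intercept = np.polyfit(std_conc, std_signal, 1)

def estimate_conc(signal):
    """Invert the calibration curve to estimate concentration (pM)."""
    return (signal - intercept) / slope
```

    Reading an unknown sample's signal through `estimate_conc` yields a concentration such as the picomolar values reported above, with uncertainty propagated from the fit.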

  10. Impact of high (131)I-activities on quantitative (124)I-PET.

    PubMed

    Braad, P E N; Hansen, S B; Høilund-Carlsen, P F

    2015-07-07

    Peri-therapeutic (124)I-PET/CT is of interest as guidance for radioiodine therapy. Unfortunately, image quality is complicated by dead time effects and increased random coincidence rates from high (131)I-activities. A series of phantom experiments with clinically relevant (124)I/(131)I-activities were performed on a clinical PET/CT-system.