Sample records for present quantitative estimates

  1. Quantification of Microbial Phenotypes

    PubMed Central

    Martínez, Verónica S.; Krömer, Jens O.

    2016-01-01

Metabolite profiling technologies have improved to the point of generating near-quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and discuss the remaining challenges in generating fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. We explain how to estimate Gibbs energy under physiological conditions, including examples of the estimations, and describe the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) the two methods are complementary to each other, and (2) Gibbs energy errors need to be taken into account. Better estimations of metabolic phenotypes will be obtained as further constraints are included in the analysis. PMID:27941694

  2. The Effect of Pickling on Blue Borscht Gelatin and Other Interesting Diffusive Phenomena.

    ERIC Educational Resources Information Center

    Davis, Lawrence C.; Chou, Nancy C.

    1998-01-01

    Presents some simple demonstrations that students can construct for themselves in class to learn the difference between diffusion and convection rates. Uses cabbage leaves and gelatin and focuses on diffusion in ungelified media, a quantitative diffusion estimate with hydroxyl ions, and a quantitative diffusion estimate with photons. (DDR)

  3. Repeatability Assessment by ISO 11843-7 in Quantitative HPLC for Herbal Medicines.

    PubMed

    Chen, Liangmian; Kotani, Akira; Hakamata, Hideki; Tsutsumi, Risa; Hayashi, Yuzuru; Wang, Zhimin; Kusu, Fumiyo

    2015-01-01

We have proposed an assessment method to estimate the measurement relative standard deviation (RSD) of chromatographic peaks in quantitative HPLC for herbal medicines using the methodology of ISO 11843 Part 7 (ISO 11843-7:2012), which provides detection limits stochastically. In quantitative HPLC with UV detection (HPLC-UV) of Scutellaria Radix for the determination of baicalin, the measurement RSD of baicalin obtained stochastically by ISO 11843-7:2012 fell within the 95% confidence interval of the RSD obtained statistically from repetitive measurements (n = 6). Thus, our findings show that the method is applicable for estimating the repeatability of HPLC-UV for determining baicalin without repeated measurements. In addition, the allowable limit of the "System repeatability" in "Liquid Chromatography" regulated in a pharmacopoeia can be obtained by the present assessment method. Moreover, the present assessment method was also successfully applied to estimate the measurement RSDs of quantitative three-channel liquid chromatography with electrochemical detection (LC-3ECD) of Chrysanthemi Flos for determining caffeoylquinic acids and flavonoids. By the present repeatability assessment method, a reliable measurement RSD was obtained stochastically, and the experimental time was remarkably reduced.
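
    As a point of reference for the comparison described above, here is a minimal sketch of the conventional, repeated-measurement side: the RSD from n = 6 replicate injections together with a 95% confidence interval derived from the chi-square distribution of the sample variance. The peak-area values are hypothetical, and this is not the ISO 11843-7 stochastic estimator itself.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical replicate peak areas for baicalin (n = 6); illustrative only.
    areas = np.array([1520.3, 1498.7, 1511.2, 1505.9, 1493.4, 1517.8])

    n = areas.size
    mean = areas.mean()
    s = areas.std(ddof=1)
    rsd = 100.0 * s / mean  # relative standard deviation, in percent

    # 95% CI for sigma via the chi-square distribution of (n-1)s^2/sigma^2,
    # converted to an RSD interval (the mean is treated as fixed for simplicity).
    alpha = 0.05
    sigma_lo = s * np.sqrt((n - 1) / stats.chi2.ppf(1 - alpha / 2, n - 1))
    sigma_hi = s * np.sqrt((n - 1) / stats.chi2.ppf(alpha / 2, n - 1))
    print(f"RSD = {rsd:.2f}% (95% CI {100*sigma_lo/mean:.2f}%..{100*sigma_hi/mean:.2f}%)")
    ```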

  4. Identification and uncertainty estimation of vertical reflectivity profiles using a Lagrangian approach to support quantitative precipitation measurements by weather radar

    NASA Astrophysics Data System (ADS)

    Hazenberg, P.; Torfs, P. J. J. F.; Leijnse, H.; Delrieu, G.; Uijlenhoet, R.

    2013-09-01

This paper presents a novel approach to estimate the vertical profile of reflectivity (VPR) from volumetric weather radar data using both a traditional Eulerian and a newly proposed Lagrangian implementation. For the latter, the recently developed Rotational Carpenter Square Cluster Algorithm (RoCaSCA) is used to delineate precipitation regions at different reflectivity levels. A piecewise linear VPR is estimated for each precipitation type (stratiform, and neither stratiform nor convective). As a second aspect of this paper, a novel approach is presented which is able to account for the impact of VPR uncertainty on the estimated radar rainfall variability. Results show that implementation of the VPR identification and correction procedure has a positive impact on quantitative precipitation estimates from radar. Unfortunately, visibility problems severely limit the impact of the Lagrangian implementation beyond distances of 100 km. However, by combining this procedure with the global Eulerian VPR estimation procedure for a given rainfall type (stratiform, and neither stratiform nor convective), the quality of the quantitative precipitation estimates increases up to a distance of 150 km. Analyses of the impact of VPR uncertainty show that this aspect accounts for a large fraction of the differences between weather radar rainfall estimates and rain gauge measurements.

  5. On sweat analysis for quantitative estimation of dehydration during physical exercise.

    PubMed

    Ring, Matthias; Lohmueller, Clemens; Rauh, Manfred; Eskofier, Bjoern M

    2015-08-01

Quantitative estimation of water loss during physical exercise is of importance because dehydration can impair both muscular strength and aerobic endurance. A physiological indicator for deficit of total body water (TBW) might be the concentration of electrolytes in sweat. It has been shown that concentrations differ after physical exercise depending on whether water loss was replaced by fluid intake or not. However, to the best of our knowledge, this fact has not been examined for its potential to quantitatively estimate TBW loss. Therefore, we conducted a study in which sweat samples were collected continuously during two hours of physical exercise without fluid intake. A statistical analysis of these sweat samples revealed significant correlations between chloride concentration in sweat and TBW loss (r = 0.41, p < 0.01), and between sweat osmolality and TBW loss (r = 0.43, p < 0.01). A quantitative estimation of TBW loss resulted in a mean absolute error of 0.49 l per estimation. Although the precision has to be improved for practical applications, the present results suggest that TBW loss estimation could be realizable using sweat samples.

  6. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal to noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a-posteriori (MAP) estimator and demonstrated quantitative birefringence imaging [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes the stochastic property of SNR into account. This estimator uses a probability distribution function (PDF) of true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and by in vivo measurements of anterior and posterior eye segments as well as in skin imaging. The new estimator shows superior performance and clearer image contrast.
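
    The core of an estimator of this kind can be sketched in a few lines, assuming the conditional PDF of the measured retardation given each candidate true retardation (at the measured SNR) has already been tabulated, e.g. by Monte-Carlo simulation. The grid, table, and names below are placeholders, not the authors' implementation.

    ```python
    import numpy as np

    # Placeholder likelihood table: pdf_table[i, j] stands for
    # P(measured retardation_j | true retardation_i) at the measured SNR.
    # A real table would come from the Monte-Carlo simulation of the JM-OCT model.
    true_grid = np.linspace(0, np.pi, 256)              # candidate true retardations
    pdf_table = np.random.default_rng(0).random((256, 256))
    pdf_table /= pdf_table.sum(axis=1, keepdims=True)   # normalize each row

    def map_estimate(measured_idx, prior=None):
        """Return the true-retardation grid value maximizing the posterior."""
        likelihood = pdf_table[:, measured_idx]
        posterior = likelihood if prior is None else likelihood * prior
        return true_grid[np.argmax(posterior)]

    print(map_estimate(measured_idx=100))
    ```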

  7. On the Agreement between Manual and Automated Methods for Single-Trial Detection and Estimation of Features from Event-Related Potentials

    PubMed Central

    Biurrun Manresa, José A.; Arguissain, Federico G.; Medina Redondo, David E.; Mørch, Carsten D.; Andersen, Ole K.

    2015-01-01

The agreement between humans and algorithms on whether an event-related potential (ERP) is present or not, and the level of variation in the estimated values of its relevant features, are largely unknown. Thus, the aim of this study was to determine the categorical and quantitative agreement between manual and automated methods for single-trial detection and estimation of ERP features. To this end, ERPs were elicited in sixteen healthy volunteers using electrical stimulation at graded intensities below and above the nociceptive withdrawal reflex threshold. Presence/absence of an ERP peak (categorical outcome) and its amplitude and latency (quantitative outcome) in each single trial were evaluated independently by two human observers and two automated algorithms taken from the existing literature. Categorical agreement was assessed using percentage positive and negative agreement and Cohen’s κ, whereas quantitative agreement was evaluated using Bland-Altman analysis and the coefficient of variation. Typical values for the categorical agreement between manual and automated methods were derived, as well as reference values for the average and maximum differences that can be expected if one method is used instead of the others. Results showed that the human observers presented the highest categorical and quantitative agreement, and there were significant differences among methods in the detection and estimation of quantitative features. In conclusion, substantial care should be taken in the selection of the detection/estimation approach, since factors like stimulation intensity and the expected number of trials with/without response can play a significant role in the outcome of a study. PMID:26258532
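
    For reference, the two agreement measures named above reduce to a few lines of NumPy. The detections and feature values below are simulated, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    det_a = rng.integers(0, 2, 100)                  # observer A: ERP present/absent
    det_b = np.where(rng.random(100) < 0.9, det_a, 1 - det_a)  # observer B, ~90% agree

    # Cohen's kappa for binary ratings: observed vs chance agreement.
    po = np.mean(det_a == det_b)
    pe = (np.mean(det_a) * np.mean(det_b)
          + (1 - np.mean(det_a)) * (1 - np.mean(det_b)))
    kappa = (po - pe) / (1 - pe)

    # Bland-Altman: mean difference (bias) and 95% limits of agreement.
    amp_a = rng.normal(10, 3, 100)                   # amplitude estimates (a.u.)
    amp_b = amp_a + rng.normal(0.5, 1.0, 100)
    diff = amp_b - amp_a
    bias = diff.mean()
    loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
    print(f"kappa = {kappa:.2f}, bias = {bias:.2f}, 95% LoA = {loa}")
    ```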

  8. Improved quantitative analysis of spectra using a new method of obtaining derivative spectra based on a singular perturbation technique.

    PubMed

    Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan

    2015-06-01

Spectroscopy is often applied when a rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise due to non-ideal instrument and sample properties. In this study, to improve the quantitative analysis of near-infrared spectra, derivatives of noisy raw spectral data need to be estimated with high accuracy. A new spectral estimator based on a singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, and a stability analysis of the estimator is given. Theoretical analysis and simulation results confirm that derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated using the analysis of beer spectra. The derivative spectra of the beer and marzipan datasets are used to build calibration models using partial least squares (PLS) modeling. The results show that PLS based on the new estimator achieves better performance than the Savitzky-Golay algorithm and can serve as an alternative for quantitative analytical applications.
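
    The Savitzky-Golay baseline the SPSE is benchmarked against is available in SciPy; a minimal sketch of computing a first-derivative spectrum from noisy synthetic data follows. The window and polynomial order are arbitrary choices here, and the spectrum is synthetic.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    wavelength = np.linspace(1100, 2500, 700)               # nm, synthetic axis
    spectrum = np.exp(-((wavelength - 1700) / 80) ** 2)     # synthetic NIR band
    noisy = spectrum + np.random.default_rng(2).normal(0, 0.01, spectrum.size)

    # First-derivative spectrum via local polynomial smoothing.
    d1 = savgol_filter(noisy, window_length=21, polyorder=3,
                       deriv=1, delta=wavelength[1] - wavelength[0])
    print(d1[:5])
    ```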

  9. Security Events and Vulnerability Data for Cybersecurity Risk Estimation.

    PubMed

    Allodi, Luca; Massacci, Fabio

    2017-08-01

Current industry standards for estimating cybersecurity risk are based on qualitative risk matrices as opposed to quantitative risk estimates. In contrast, risk assessment in most other industry sectors aims at deriving quantitative risk estimations (e.g., Basel II in Finance). This article presents a model and methodology to leverage the large amount of data available from the IT infrastructure of an organization's security operation center to quantitatively estimate the probability of attack. Our methodology specifically addresses untargeted attacks delivered by automatic tools that make up the vast majority of attacks in the wild against users and organizations. We consider two-stage attacks whereby the attacker first breaches an Internet-facing system, and then escalates the attack to internal systems by exploiting local vulnerabilities in the target. Our methodology factors in the power of the attacker as the number of "weaponized" vulnerabilities he/she can exploit, and can be adjusted to match the risk appetite of the organization. We illustrate our methodology using data from a large financial institution, and discuss the significant mismatch between traditional qualitative risk assessments and our quantitative approach. © 2017 Society for Risk Analysis.
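
    As a purely illustrative reading of "attacker power as the number of weaponized vulnerabilities" (not the authors' model), one can ask how likely an attacker holding w weaponized exploits out of N known vulnerabilities is to cover at least one of the v vulnerabilities exposed by the Internet-facing system:

    ```python
    from math import comb

    # Illustrative only; not the authors' model. Chance that at least one of the
    # v exposed vulnerabilities is among the attacker's w weaponized ones,
    # i.e. the complement of a hypergeometric "no overlap" draw.
    def p_breach(N, w, v):
        return 1 - comb(N - w, v) / comb(N, v)

    print(p_breach(N=1000, w=50, v=10))  # ~0.40 under these made-up numbers
    ```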

  10. Quantitative Analysis of Radar Returns from Insects

    NASA Technical Reports Server (NTRS)

    Riley, J. R.

    1979-01-01

When the number of flying insects is low enough to permit their resolution as individual radar targets, quantitative estimates of their aerial density are developed. Accurate measurements of heading distribution, using a rotating polarization radar to enhance the wingbeat frequency method of identification, are also presented.

  11. Estimation of the number of fluorescent end-members for quantitative analysis of multispectral FLIM data.

    PubMed

    Gutierrez-Navarro, Omar; Campos-Delgado, Daniel U; Arce-Santana, Edgar R; Maitland, Kristen C; Cheng, Shuna; Jabbour, Joey; Malik, Bilal; Cuenca, Rodrigo; Jo, Javier A

    2014-05-19

Multispectral fluorescence lifetime imaging (m-FLIM) can potentially allow identifying the endogenous fluorophores present in biological tissue. Quantitative description of such data requires estimating the number of components in the sample, their characteristic fluorescent decays, and their relative contributions or abundances. Unfortunately, this inverse problem usually requires prior knowledge about the data, which is seldom available in biomedical applications. This work presents a new methodology to estimate the number of potential endogenous fluorophores present in biological tissue samples from time-domain m-FLIM data. Furthermore, a completely blind linear unmixing algorithm is proposed. The method was validated using both synthetic and experimental m-FLIM data. The experimental m-FLIM data include in-vivo measurements from healthy and cancerous hamster cheek-pouch epithelial tissue, and ex-vivo measurements from human coronary atherosclerotic plaques. The analysis of m-FLIM data from in-vivo hamster oral mucosa distinguished healthy tissue from precancerous lesions, based on the relative concentrations of their characteristic fluorophores. The algorithm also provided a better description of atherosclerotic plaques in terms of their endogenous fluorophores. These results demonstrate the potential of this methodology to provide a quantitative description of tissue biochemical composition.
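
    Once end-members are known, the abundance-estimation step reduces to constrained least squares; a generic sketch using non-negative least squares on synthetic decays is shown below. The paper's contribution is the blind case, where the end-members themselves must also be estimated, which this sketch does not cover.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Synthetic example: a measured decay as a non-negative mixture of two
    # known end-member decays. All curves and lifetimes are made up.
    t = np.linspace(0, 20, 200)                                # ns, time axis
    E = np.column_stack([np.exp(-t / 2.0), np.exp(-t / 6.0)])  # end-member decays
    y = 0.7 * E[:, 0] + 0.3 * E[:, 1]                          # mixed decay

    abundances, residual = nnls(E, y)   # non-negative abundance estimates
    print(abundances)                    # ~[0.7, 0.3]
    ```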

  12. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection.

    PubMed

    Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C

    2011-09-01

    Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.
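
    A small simulation showing why a correction is needed, under assumed additive genotype effects (this illustrates the bias only; the authors' correction itself is not detailed in the abstract):

    ```python
    import numpy as np

    # Two-tail extreme selection: genotyping only trait extremes inflates the
    # slope from simple linear regression of trait on genotype.
    rng = np.random.default_rng(3)
    g = rng.binomial(2, 0.3, 100_000)           # additive genotype coded 0/1/2
    y = 0.1 * g + rng.normal(0, 1, g.size)      # true genetic effect = 0.1

    full = np.polyfit(g, y, 1)[0]               # unbiased full-sample slope
    tails = (y < np.quantile(y, 0.1)) | (y > np.quantile(y, 0.9))
    extreme = np.polyfit(g[tails], y[tails], 1)[0]
    print(f"full-sample slope {full:.3f}, two-tail slope {extreme:.3f} (biased)")
    ```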

  13. Doctor-patient communication: some quantitative estimates of the role of cognitive factors in non-compliance.

    PubMed

    Ley, P

    1985-04-01

    Patients frequently fail to understand what they are told. Further, they frequently forget the information given to them. These factors have effects on patients' satisfaction with the consultation. All three of these factors--understanding, memory and satisfaction--have effects on the probability that a patient will comply with advice. The levels of failure to understand and remember and levels of dissatisfaction are described. Quantitative estimates of the effects of these factors on non-compliance are presented.

  14. Secular trends of infectious disease mortality in The Netherlands, 1911-1978: quantitative estimates of changes coinciding with the introduction of antibiotics.

    PubMed

    Mackenbach, J P; Looman, C W

    1988-09-01

    Secular trends of mortality from 21 infectious diseases in the Netherlands were studied by inspection of age/sex-standardized mortality curves and by log-linear regression analysis. An attempt was made to obtain quantitative estimates for changes coinciding with the introduction of antibiotics. Two possible types of effect were considered: a sharp reduction of mortality at the moment of the introduction of antibiotics, and a longer lasting (acceleration of) mortality decline after the introduction. Changes resembling the first type of effect were possibly present for many infectious diseases, but were difficult to measure exactly, due to late effects on mortality of World War II. Changes resembling the second type of effect were present in 16 infectious diseases and were sometimes quite large. For example, estimated differences in per cent per annum mortality change were 10% or larger for puerperal fever, scarlet fever, rheumatic fever, erysipelas, otitis media, tuberculosis, and bacillary dysentery. No acceleration of mortality decline after the introduction of antibiotics was present in mortality from 'all other diseases'. Although the exact contribution of antibiotics to the observed changes cannot be inferred from this time trend analysis, the quantitative estimates of the changes show that even a partial contribution would represent a substantial effect of antibiotics on mortality from infectious diseases in the Netherlands.
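
    A sketch of the kind of log-linear contrast described above, with synthetic data: fit log(mortality) on year separately before and after an assumed 1947 introduction point and express both slopes in per cent per annum. All rates and the changepoint are made up for illustration.

    ```python
    import numpy as np

    years = np.arange(1911, 1979)
    rate = 100 * np.exp(-0.02 * (years - 1911))          # slow pre-antibiotic decline
    rate[years >= 1947] *= np.exp(-0.08 * (years[years >= 1947] - 1947))

    pre = years < 1947
    b_pre = np.polyfit(years[pre], np.log(rate[pre]), 1)[0]    # log-linear slopes
    b_post = np.polyfit(years[~pre], np.log(rate[~pre]), 1)[0]
    print(f"decline {100*(1-np.exp(b_pre)):.1f}%/yr before vs "
          f"{100*(1-np.exp(b_post)):.1f}%/yr after introduction")
    ```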

  15. REVIEW OF DRAFT REVISED BLUE BOOK ON ESTIMATING CANCER RISKS FROM EXPOSURE TO IONIZING RADIATION

    EPA Science Inventory

    In 1994, EPA published a report, referred to as the “Blue Book,” which lays out EPA’s current methodology for quantitatively estimating radiogenic cancer risks. A follow-on report made minor adjustments to the previous estimates and presented a partial analysis of the uncertainti...

  16. Audiovisual quality estimation of mobile phone video cameras with interpretation-based quality approach

    NASA Astrophysics Data System (ADS)

    Radun, Jenni E.; Virtanen, Toni; Olives, Jean-Luc; Vaahteranoksa, Mikko; Vuori, Tero; Nyman, Göte

    2007-01-01

We present an effective method for comparing the subjective audiovisual quality of different video cameras and the features related to quality changes. Both quantitative estimation of overall quality and qualitative description of critical quality features are achieved by the method. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation-Based Quality (IBQ) method, in order to see how they complement each other in audiovisual quality estimation tasks. 26 observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of the cameras' visual video quality than to features related to sound. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can complement quantitative quality estimations also with audiovisual material. The IBQ approach is especially valuable when the induced quality changes are multidimensional.

  17. Quantitative estimation of film forming polymer-plasticizer interactions by the Lorentz-Lorenz Law.

    PubMed

    Dredán, J; Zelkó, R; Dávid, A Z; Antal, I

    2006-03-09

Molar refraction, like refractive index, has many uses. Beyond confirming the identity and purity of a compound and determining molecular structure and molecular weight, molar refraction is also used in other estimation schemes, such as for critical properties, surface tension, solubility parameter, molecular polarizability, and dipole moment. In the present study, molar refraction values of polymer dispersions were determined for the quantitative estimation of film-forming polymer-plasticizer interactions. The extent of interaction between the polymer and the plasticizer can be inferred from the calculated molar refraction values of film-forming polymer dispersions containing plasticizer.
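
    For reference, the Lorentz-Lorenz relation named in the title connects the refractive index of a medium to its molar refraction (this is the standard physical relation, not a formula quoted from the abstract):

    ```latex
    R_M \;=\; \frac{n^2 - 1}{n^2 + 2}\cdot\frac{M}{\rho}
    ```

    where n is the refractive index, M the molar mass, and ρ the density. Because R_M is approximately additive over the components of a mixture, deviations from additivity in a plasticized film can be read as a measure of polymer-plasticizer interaction.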

  18. The benefits of improved technologies in agricultural aviation

    NASA Technical Reports Server (NTRS)

    Lietzke, K.; Abram, P.; Braen, C.; Givens, S.; Hazelrigg, G. A., Jr.; Fish, R.; Clyne, F.; Sand, F.

    1977-01-01

    The results are present for a study of the economic benefits attributed to a variety of potential technological improvements in agricultural aviation. Part 1 gives a general description of the ag-air industry and discusses the information used in the data base to estimate the potential benefits from technological improvements. Part 2 presents the benefit estimates and provides a quantitative basis for the estimates in each area study. Part 3 is a bibliography of references relating to this study.

  19. The ACCE method: an approach for obtaining quantitative or qualitative estimates of residual confounding that includes unmeasured confounding

    PubMed Central

    Smith, Eric G.

    2015-01-01

Background: Nonrandomized studies typically cannot account for confounding from unmeasured factors. Method: A method is presented that exploits the recently-identified phenomenon of "confounding amplification" to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors. Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure. Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results: Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method's requirements and assumptions are met. Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations: Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method's estimates (although bootstrapping is one plausible approach). Conclusions: To this author's knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is straightforward. The method's routine usefulness, however, has not yet been established, nor has the method been fully validated. Rapid further investigation of this novel method is clearly indicated, given the potential value of its quantitative or qualitative output. PMID:25580226
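
    On one reading of the description above, the arithmetic is a simple rescaling of the between-model change in the effect estimate. All numbers in this toy sketch are hypothetical, and the "amplification minus one" denominator is an assumption about how the degree of amplification is expressed, not a formula taken from the paper.

    ```python
    # Toy ACCE-style arithmetic; every value below is hypothetical.
    effect_base = 1.30       # effect estimate under the base propensity model
    effect_amplified = 1.42  # estimate after adding a strong exposure predictor
    amplification = 1.5      # assumed amplification factor of residual confounding

    # If amplification multiplies residual confounding C by A, the observed
    # change is (A - 1) * C, so C is recovered by dividing by (A - 1).
    residual_confounding = (effect_amplified - effect_base) / (amplification - 1)
    print(f"estimated residual confounding ~ {residual_confounding:.2f}")
    ```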

  20. Reliability and safety, and the risk of construction damage in mining areas

    NASA Astrophysics Data System (ADS)

    Skrzypczak, Izabela; Kogut, Janusz P.; Kokoszka, Wanda; Oleniacz, Grzegorz

    2018-04-01

This article concerns the reliability and safety of building structures in mining areas, with a particular emphasis on the quantitative risk analysis of buildings. The issues of threat assessment and risk estimation in the design of facilities in mining exploitation areas are presented, indicating the difficulties and ambiguities associated with their quantification and quantitative analysis. The article presents the concept of quantitative risk assessment of the impact of mining exploitation in accordance with ISO 13824 [1]. The risk analysis is illustrated through an example of a construction located within an area affected by mining exploitation.

  21. Conventional liquid chromatography/triple quadrupole mass spectrometer-based metabolite identification and semi-quantitative estimation approach in the investigation of dabigatran etexilate in vitro metabolism

    PubMed Central

    Hu, Zhe-Yi; Parker, Robert B.; Herring, Vanessa L.; Laizure, S. Casey

    2012-01-01

Dabigatran etexilate (DABE) is an oral prodrug that is rapidly converted by esterases to dabigatran (DAB), a direct inhibitor of thrombin. To elucidate the esterase-mediated metabolic pathway of DABE, a high-performance liquid chromatography/tandem mass spectrometry (LC-MS/MS)-based metabolite identification and semi-quantitative estimation approach was developed. To overcome the poor full-scan sensitivity of conventional triple quadrupole mass spectrometry, precursor-product ion pairs were predicted, to search for the potential in vitro metabolites. The detected metabolites were confirmed by the product ion scan. A dilution method was introduced to evaluate the matrix effects of tentatively identified metabolites without chemical standards. Quantitative information on detected metabolites was obtained using ‘metabolite standards’ generated from incubation samples that contain a high concentration of metabolite in combination with a correction factor for mass spectrometry response. Two in vitro metabolites of DABE (M1 and M2) were identified, and quantified by the semi-quantitative estimation approach. It is noteworthy that CES1 converts DABE to M1, while CES2 mediates the conversion of DABE to M2. M1 (or M2) was further metabolized to DAB by CES2 (or CES1). The approach presented here provides a solution to a bioanalytical need for fast identification and semi-quantitative estimation of CES metabolites in preclinical samples. PMID:23239178

  22. Quantitative software models for the estimation of cost, size, and defects

    NASA Technical Reports Server (NTRS)

    Hihn, J.; Bright, L.; Decker, B.; Lum, K.; Mikulski, C.; Powell, J.

    2002-01-01

The presentation will provide a brief overview of the SQI measurement program and describe each of the models for cost, size, and defects, including how they are currently being used to support JPL project, task, and software managers in estimating and planning future software systems and subsystems.

  23. Misconceptions of Astronomical Distances

    ERIC Educational Resources Information Center

    Miller, Brian W.; Brewer, William F.

    2010-01-01

    Previous empirical studies using multiple-choice procedures have suggested that there are misconceptions about the scale of astronomical distances. The present study provides a quantitative estimate of the nature of this misconception among US university students by asking them, in an open-ended response format, to make estimates of the distances…

  24. Toward quantitative estimation of material properties with dynamic mode atomic force microscopy: a comparative study.

    PubMed

    Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti

    2017-08-11

In this article, we explore methods that enable estimation of material properties with dynamic mode atomic force microscopy, suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising a flexure probe interacting with the sample, as an equivalent cantilever system, and compares a steady-state analysis based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement, but slower than the recursive technique. The parameters of the equivalent system are utilized to interpret storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided in the quantitative estimation of material properties.

  25. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation.

    PubMed

    Ribba, B; Grimm, H P; Agoram, B; Davies, M R; Gadkar, K; Niederer, S; van Riel, N; Timmis, J; van der Graaf, P H

    2017-08-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development to focus discussions on two critical methodological aspects of QSP model development: optimal structural granularity and parameter estimation. We here report in a perspective article a summary of presentations and discussions. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  26. A prioritization of generic safety issues. Supplement 19, Revision insertion instructions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1995-11-01

The report presents the safety priority ranking for generic safety issues related to nuclear power plants. The purpose of these rankings is to assist in the timely and efficient allocation of NRC resources for the resolution of those safety issues that have a significant potential for reducing risk. The safety priority rankings are HIGH, MEDIUM, LOW, and DROP, and have been assigned on the basis of risk significance estimates, the ratio of risk to costs and other impacts estimated to result if resolution of the safety issues were implemented, and the consideration of uncertainties and other quantitative or qualitative factors. To the extent practical, estimates are quantitative. This document provides revisions and amendments to the report.

  27. OHD/HL: Presentations

    Science.gov Websites

Presentations page; recoverable content includes hydrology and river forecast topics and the January 2002 AMS Short Course on Quantitative Precipitation Estimation and Forecasting.

  28. Quantifying the impact of community quarantine on SARS transmission in Ontario: estimation of secondary case count difference and number needed to quarantine.

    PubMed

    Bondy, Susan J; Russell, Margaret L; Laflèche, Julie Ml; Rea, Elizabeth

    2009-12-24

Community quarantine is controversial, and the decision to use and prepare for it should be informed by specific quantitative evidence of benefit. Case-study reports on the 2002-2004 SARS outbreaks have discussed the role of quarantine in the community in transmission. However, this literature has not yielded quantitative estimates of the reduction in secondary cases attributable to quarantine as would be seen in other areas of health policy and cost-effectiveness analysis. Using data from the 2003 Ontario, Canada, SARS outbreak, two novel expressions for the impact of quarantine are presented. Secondary Case Count Difference (SCCD) reflects the reduction in the average number of transmissions arising from a SARS case in quarantine, relative to not in quarantine, at onset of symptoms. SCCD was estimated using Poisson and negative binomial regression models (with identity link function) comparing the number of secondary cases to each index case for quarantined relative to non-quarantined index cases. The inverse of this statistic is proposed as the number needed to quarantine (NNQ) to prevent one additional secondary transmission. Our estimated SCCD was 0.133 fewer secondary cases per quarantined versus non-quarantined index case, and a NNQ of 7.5 exposed individuals to be placed in community quarantine to prevent one additional case of transmission in the community. This analysis suggests quarantine can be an effective preventive measure, although these estimates lack statistical precision. Relative to other health policy areas, the literature on quarantine tends to lack quantitative expressions of effectiveness, or agreement on how best to report differences in outcomes attributable to a control measure. We hope to further this discussion through the presentation of means to calculate and express the impact of population control measures. The study of quarantine effectiveness presents several methodological and statistical challenges. Further research and discussion are needed to understand the costs and benefits of enacting quarantine, and this includes a discussion of how quantitative benefit should be communicated to decision-makers and the public, and evaluated.
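
    The NNQ arithmetic mirrors the familiar number-needed-to-treat calculation; using the SCCD reported above:

    ```python
    # Number needed to quarantine as the reciprocal of the secondary case
    # count difference (SCCD), using the estimate reported in the abstract.
    sccd = 0.133              # fewer secondary cases per quarantined index case
    nnq = 1 / sccd
    print(round(nnq, 1))      # ~7.5 exposed persons quarantined per case averted
    ```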

  29. Quantifying the impact of community quarantine on SARS transmission in Ontario: estimation of secondary case count difference and number needed to quarantine

    PubMed Central

    2009-01-01

Background Community quarantine is controversial, and the decision to use and prepare for it should be informed by specific quantitative evidence of benefit. Case-study reports on the 2002-2004 SARS outbreaks have discussed the role of quarantine in the community in transmission. However, this literature has not yielded quantitative estimates of the reduction in secondary cases attributable to quarantine as would be seen in other areas of health policy and cost-effectiveness analysis. Methods Using data from the 2003 Ontario, Canada, SARS outbreak, two novel expressions for the impact of quarantine are presented. Secondary Case Count Difference (SCCD) reflects the reduction in the average number of transmissions arising from a SARS case in quarantine, relative to not in quarantine, at onset of symptoms. SCCD was estimated using Poisson and negative binomial regression models (with identity link function) comparing the number of secondary cases to each index case for quarantined relative to non-quarantined index cases. The inverse of this statistic is proposed as the number needed to quarantine (NNQ) to prevent one additional secondary transmission. Results Our estimated SCCD was 0.133 fewer secondary cases per quarantined versus non-quarantined index case, and a NNQ of 7.5 exposed individuals to be placed in community quarantine to prevent one additional case of transmission in the community. This analysis suggests quarantine can be an effective preventive measure, although these estimates lack statistical precision. Conclusions Relative to other health policy areas, the literature on quarantine tends to lack quantitative expressions of effectiveness, or agreement on how best to report differences in outcomes attributable to a control measure. We hope to further this discussion through the presentation of means to calculate and express the impact of population control measures. The study of quarantine effectiveness presents several methodological and statistical challenges. Further research and discussion are needed to understand the costs and benefits of enacting quarantine, and this includes a discussion of how quantitative benefit should be communicated to decision-makers and the public, and evaluated. PMID:20034405

  30. Estimating Conditional Distributions of Scores on an Alternate Form of a Test. Research Report. ETS RR-15-18

    ERIC Educational Resources Information Center

    Livingston, Samuel A.; Chen, Haiwen H.

    2015-01-01

    Quantitative information about test score reliability can be presented in terms of the distribution of equated scores on an alternate form of the test for test takers with a given score on the form taken. In this paper, we describe a procedure for estimating that distribution, for any specified score on the test form taken, by estimating the joint…

  31. Nonparametric modeling of longitudinal covariance structure in functional mapping of quantitative trait loci.

    PubMed

    Yap, John Stephen; Fan, Jianqing; Wu, Rongling

    2009-12-01

Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L2 penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.
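
    A minimal sketch of the modified Cholesky parametrization invoked above: a covariance matrix is reduced to unit lower-triangular regression coefficients and innovation variances, which are the quantities the penalized likelihood then regularizes. The sketch omits the L2 penalty and the mixture likelihood framework.

    ```python
    import numpy as np

    # Modified Cholesky: T @ Sigma @ T.T = D, with T unit lower triangular.
    # Row t of T holds the negated coefficients of the regression of
    # measurement t on measurements 1..t-1; D holds innovation variances.
    def modified_cholesky(sigma):
        p = sigma.shape[0]
        T = np.eye(p)
        d = np.empty(p)
        d[0] = sigma[0, 0]
        for t in range(1, p):
            phi = np.linalg.solve(sigma[:t, :t], sigma[:t, t])  # regression coefs
            T[t, :t] = -phi
            d[t] = sigma[t, t] - sigma[:t, t] @ phi             # innovation variance
        return T, np.diag(d)

    sigma = np.array([[1.0, 0.5, 0.3],
                      [0.5, 1.2, 0.6],
                      [0.3, 0.6, 1.5]])
    T, D = modified_cholesky(sigma)
    print(np.allclose(T @ sigma @ T.T, D))  # True
    ```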

  32. Stroke onset time estimation from multispectral quantitative magnetic resonance imaging in a rat model of focal permanent cerebral ischemia.

    PubMed

    McGarry, Bryony L; Rogers, Harriet J; Knight, Michael J; Jokivarsi, Kimmo T; Sierra, Alejandra; Gröhn, Olli Hj; Kauppinen, Risto A

    2016-08-01

Quantitative T2 relaxation magnetic resonance imaging allows estimation of stroke onset time. We aimed to examine the accuracy of quantitative T1 and quantitative T2 relaxation times, alone and in combination, in estimating stroke onset time in a rat model of permanent focal cerebral ischemia, and to map the spatial distribution of elevated quantitative T1 and quantitative T2 to assess tissue status. Permanent middle cerebral artery occlusion was induced in Wistar rats. Animals were scanned at 9.4 T for quantitative T1, quantitative T2, and the trace of the diffusion tensor (Dav) up to 4 h post-middle cerebral artery occlusion. Time courses of the differentials of quantitative T1 and quantitative T2 between ischemic and non-ischemic contralateral brain tissue (ΔT1, ΔT2) and the volumes of tissue with elevated T1 and T2 relaxation times (f1, f2) were determined. TTC staining was used to highlight permanent ischemic damage. ΔT1, ΔT2, f1, f2, and the volume of tissue with both elevated quantitative T1 and quantitative T2 (V(Overlap)) increased with time post-middle cerebral artery occlusion, allowing stroke onset time to be estimated. V(Overlap) provided the most accurate estimate, with an uncertainty of ±25 min. At all time-points, regions with elevated relaxation times were smaller than areas with Dav-defined ischemia. Stroke onset time can be determined from quantitative T1 and quantitative T2 relaxation times and tissue volumes. Combining quantitative T1 and quantitative T2 provides the most accurate estimate and potentially identifies irreversibly damaged brain tissue. © 2016 World Stroke Organization.
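
    The onset-time estimate amounts to inverting an empirical growth curve of the relaxation-time differential. A minimal sketch with made-up calibration values (not the study's fitted parameters):

    ```python
    # Hypothetical linear calibration of the T2 differential against time since
    # onset; both parameters below are illustrative, not the study's values.
    slope, intercept = 2.0, 1.0      # ms per hour, ms

    delta_t2_measured = 6.2          # ms, ischemic-minus-contralateral T2
    onset_hours = (delta_t2_measured - intercept) / slope
    print(f"estimated time since onset ~ {onset_hours:.1f} h")
    ```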

  33. Assessment of the risk of introducing foot-and-mouth disease into Panama via a ferry operating between Cartagena, Colombia and Colon, Panama.

    PubMed

    White, W R; Crom, R L; Walker, K D

    1996-07-23

    It should be emphasized that the proposed ferry hazard categorizations do not represent absolute risks for introducing FMD into Panama, but instead provide a systematic method for comparing and estimating risks in the absence of quantitative data. A hazard rating of high may not necessarily represent a high quantitative risk for the introduction of FMD, but is high when compared to other scenarios. A low hazard rating may estimate a low quantitative risk of importing FMD, but economic consequences of a potential outbreak should also be considered. When further data become available, a more complete assessment of the risks of the Crucero Express compared to airplanes, cargo boats, and small boats can be performed. At present, the risk of the Crucero Express is at least as low as the other transport modes described above. Since vehicles are not presently allowed transport from Colombia to Panama, they present no risk to Panama, but with proper cleaning and disinfection procedures, vehicles can be permitted with low risk. However, the Crucero Express can carry 125 vehicles, and thorough cleaning and disinfection of this many cars will require modern and efficient facilities not yet present at either port.

  34. Methods of Measurement the Quality Metrics in a Printing System

    NASA Astrophysics Data System (ADS)

    Varepo, L. G.; Brazhnikov, A. Yu; Nagornova, I. V.; Novoselskaya, O. A.

    2018-04-01

One of the main criteria for choosing an ink as a component of a printing system is the scumming ability of the ink. The realization of an algorithm for estimating the quality metrics in a printing system is shown. Histograms of ink rate for various printing systems are presented. A quantitative estimation of the stability of offset ink emulsifiability is given.

  35. Novel Sessile Drop Software for Quantitative Estimation of Slag Foaming in Carbon/Slag Interactions

    NASA Astrophysics Data System (ADS)

    Khanna, Rita; Rahman, Mahfuzur; Leow, Richard; Sahajwalla, Veena

    2007-08-01

Novel video-processing software has been developed for the sessile drop technique for rapid and quantitative estimation of slag foaming. The data processing was carried out in two stages: the first stage involved the initial transformation of digital video/audio signals into a format compatible with the computing software, and the second stage involved the computation of slag droplet volume and area of contact in a chosen video frame. Experimental results are presented on slag foaming in a synthetic graphite/slag system at 1550 °C. This technique can be used for determining the extent and stability of foam as a function of time.

  36. Comparing Bayesian estimates of genetic differentiation of molecular markers and quantitative traits: an application to Pinus sylvestris.

    PubMed

    Waldmann, P; García-Gil, M R; Sillanpää, M J

    2005-06-01

Comparison of the level of differentiation at neutral molecular markers (estimated as FST or GST) with the level of differentiation at quantitative traits (estimated as QST) has become a standard tool for inferring that there is differential selection between populations. We estimated QST of timing of bud set from a latitudinal cline of Pinus sylvestris with a Bayesian hierarchical variance component method utilizing the information on the pre-estimated population structure from neutral molecular markers. Unfortunately, the between-family variances differed substantially between populations, which resulted in a bimodal posterior of QST that could not be compared in any sensible way with the unimodal posterior of the microsatellite FST. In order to avoid publishing studies with flawed QST estimates, we recommend that future studies present heritability estimates for each trait and population. Moreover, to detect variance heterogeneity in frequentist methods (ANOVA and REML), it is essential to check that the residuals are normally distributed and do not follow any systematically deviating trends.
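
    For reference, the standard definition of QST being compared with FST (a textbook relation for outbred diploid populations, not a formula quoted from the abstract):

    ```latex
    Q_{ST} \;=\; \frac{\sigma^2_{B}}{\sigma^2_{B} + 2\,\sigma^2_{W}}
    ```

    where σ²B is the between-population additive genetic variance and σ²W the within-population additive genetic variance; differential selection is typically inferred when QST differs credibly from the neutral FST.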

  37. Detecting structural heat losses with mobile infrared thermography. Part IV. Estimating quantitative heat loss at Dartmouth College, Hanover, New Hampshire

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munis, R.H.; Marshall, S.J.; Bush, M.A.

    1976-09-01

During the winter of 1973-74 a mobile infrared thermography system was used to survey campus buildings at Dartmouth College, Hanover, New Hampshire. Both qualitative and quantitative data are presented regarding heat flow through a small area of a wall of one brick dormitory building before and after installation of aluminum reflectors between radiators and the wall. These data were used to estimate annual cost savings for 22 buildings of similar construction having aluminum reflectors installed behind 1100 radiators. The data were then compared with the actual savings, which were calculated from condensate meter data. The discrepancy between estimated and actual annual cost savings is explained in detail along with all assumptions required for these calculations.

  38. The linearized multistage model and the future of quantitative risk assessment.

    PubMed

    Crump, K S

    1996-10-01

    The linearized multistage (LMS) model has for over 15 years been the default dose-response model used by the U.S. Environmental Protection Agency (USEPA) and other federal and state regulatory agencies in the United States for calculating quantitative estimates of low-dose carcinogenic risks from animal data. The LMS model is in essence a flexible statistical model that can describe both linear and non-linear dose-response patterns, and that produces an upper confidence bound on the linear low-dose slope of the dose-response curve. Unlike its namesake, the Armitage-Doll multistage model, the parameters of the LMS do not correspond to actual physiological phenomena. Thus the LMS is 'biological' only to the extent that the true biological dose response is linear at low dose and that low-dose slope is reflected in the experimental data. If the true dose response is non-linear the LMS upper bound may overestimate the true risk by many orders of magnitude. However, competing low-dose extrapolation models, including those derived from 'biologically-based models' that are capable of incorporating additional biological information, have not shown evidence to date of being able to produce quantitative estimates of low-dose risks that are any more accurate than those obtained from the LMS model. Further, even if these attempts were successful, the extent to which more accurate estimates of low-dose risks in a test animal species would translate into improved estimates of human risk is questionable. Thus, it does not appear possible at present to develop a quantitative approach that would be generally applicable and that would offer significant improvements upon the crude bounding estimates of the type provided by the LMS model. Draft USEPA guidelines for cancer risk assessment incorporate an approach similar to the LMS for carcinogens having a linear mode of action. However, under these guidelines quantitative estimates of low-dose risks would not be developed for carcinogens having a non-linear mode of action; instead dose-response modelling would be used in the experimental range to calculate an LED10* (a statistical lower bound on the dose corresponding to a 10% increase in risk), and safety factors would be applied to the LED10* to determine acceptable exposure levels for humans. This approach is very similar to the one presently used by USEPA for non-carcinogens. Rather than using one approach for carcinogens believed to have a linear mode of action and a different approach for all other health effects, it is suggested herein that it would be more appropriate to use an approach conceptually similar to the 'LED10*-safety factor' approach for all health effects, and not to routinely develop quantitative risk estimates from animal data.
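
    For reference, the multistage dose-response form underlying the LMS procedure (standard formulation; the asterisked quantities in the abstract are statistical confidence bounds on these parameters):

    ```latex
    P(d) \;=\; 1 - \exp\!\left[-\left(q_0 + q_1 d + q_2 d^2 + \cdots + q_k d^k\right)\right],
    \qquad q_i \ge 0
    ```

    with extra risk over background A(d) = [P(d) - P(0)]/[1 - P(0)]. The LMS reports an upper confidence bound q1* on the linear coefficient, so that low-dose risk is approximated by q1*·d.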

  39. Spring 1991 Meeting outstanding papers

    NASA Astrophysics Data System (ADS)

The Atmospheric Sciences Committee has presented Kaye Brubaker and Jichun Shi with Outstanding Student Paper awards for presentations given at the AGU 1991 Spring Meeting, held in Baltimore May 28-31. Brubaker's paper, “Precipitation Recycling Estimated from Atmospheric Data,” presented quantitative estimates of the contribution of locally evaporated moisture to precipitation over several large continental regions. Recycled precipitation is defined as water that evaporates from the land surface of a specified region and falls again as precipitation within the region. Brubaker applied a control volume analysis based on a model proposed by Budyko.

  40. Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.

    PubMed

    Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K

    2011-01-01

We present a new method for uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Estimation of diffusion parameter uncertainty from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residual rescaling and cannot be utilized directly for body diffusion parameter uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the Unscented transform to compute the residual rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the uncertainty of the body diffusion parameters. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
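
    A plain wild-bootstrap sketch for a mono-exponential diffusion fit is shown below. The paper rescales residuals with an Unscented-transform step suited to the non-linear body-diffusion model; here, residuals are used unscaled with Rademacher weights, purely to illustrate the bootstrap half of the method. All data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def model(b, s0, adc):
        # Mono-exponential diffusion signal model.
        return s0 * np.exp(-b * adc)

    rng = np.random.default_rng(4)
    b = np.array([0., 50., 100., 200., 400., 600., 800.])   # b-values, s/mm^2
    signal = model(b, 1.0, 1.5e-3) + rng.normal(0, 0.01, b.size)

    popt, _ = curve_fit(model, b, signal, p0=(1.0, 1e-3))
    resid = signal - model(b, *popt)

    # Wild bootstrap: sign-flip residuals, refit, collect parameter draws.
    adcs = []
    for _ in range(500):
        w = rng.choice([-1.0, 1.0], size=b.size)             # Rademacher weights
        y_star = model(b, *popt) + w * resid
        p_star, _ = curve_fit(model, b, y_star, p0=popt)
        adcs.append(p_star[1])
    print(f"ADC = {popt[1]:.2e} +/- {np.std(adcs):.2e}")
    ```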

  41. Generalized PSF modeling for optimized quantitation in PET imaging.

    PubMed

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying modeled PSF kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF modeling does not offer optimized PET quantitation, and that PSF overestimation may provide enhanced SUV quantitation. Furthermore, generalized PSF modeling may provide a valuable approach for quantitative tasks such as treatment-response assessment and prognostication.
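
    The figures of merit named above have conventional definitions; the formulas below are common choices rather than ones quoted from the paper, and the ROI values are simulated.

    ```python
    import numpy as np

    def contrast_recovery(roi_tumor, roi_bkg, true_contrast):
        # Measured tumour-to-background contrast divided by the true contrast.
        measured = roi_tumor.mean() / roi_bkg.mean() - 1.0
        return measured / true_contrast

    def image_roughness(roi_bkg):
        # Voxel-level relative noise within a single background ROI.
        return roi_bkg.std(ddof=1) / roi_bkg.mean()

    def coeff_of_variability(roi_means_over_realizations):
        # Ensemble variability of the ROI mean across noise realizations.
        m = np.asarray(roi_means_over_realizations)
        return m.std(ddof=1) / m.mean()

    rng = np.random.default_rng(5)
    crc = contrast_recovery(rng.normal(4, 0.3, 50), rng.normal(1, 0.1, 500), 3.0)
    print(f"CRC = {crc:.2f}")
    ```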

  42. A statistical framework for protein quantitation in bottom-up MS-based proteomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas

    2009-08-15

Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and confidence measures. Challenges include the presence of low-quality or incorrectly identified peptides and widespread, informative, missing data. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model for protein abundance in terms of peptide peak intensities, applicable to both label-based and label-free quantitation experiments. The model allows for both random and censoring missingness mechanisms and provides naturally for protein-level estimates and confidence measures. The model is also used to derive automated filtering and imputation routines. Three LC-MS datasets are used to illustrate the methods. Availability: The software has been made available in the open-source proteomics platform DAnTE (Polpitiya et al. (2008)) (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu

  43. qF-SSOP: real-time optical property corrected fluorescence imaging

    PubMed Central

    Valdes, Pablo A.; Angelo, Joseph P.; Choi, Hak Soo; Gioux, Sylvain

    2017-01-01

    Fluorescence imaging is well suited to provide image guidance during resections in oncologic and vascular surgery. However, the distorting effects of tissue optical properties on the emitted fluorescence are poorly compensated for on even the most advanced fluorescence image guidance systems, leading to subjective and inaccurate estimates of tissue fluorophore concentrations. Here we present a novel fluorescence imaging technique that performs real-time (i.e., video rate) optical property corrected fluorescence imaging. We perform full field of view simultaneous imaging of tissue optical properties using Single Snapshot of Optical Properties (SSOP) and fluorescence detection. The estimated optical properties are used to correct the emitted fluorescence with a quantitative fluorescence model to provide quantitative fluorescence-Single Snapshot of Optical Properties (qF-SSOP) images with less than 5% error. The technique is rigorous, fast, and quantitative, enabling ease of integration into the surgical workflow with the potential to improve molecular guidance intraoperatively. PMID:28856038

  4. Final Report for Dynamic Models for Causal Analysis of Panel Data. Models for Change in Quantitative Variables, Part III: Estimation from Panel Data. Part II, Chapter 5.

    ERIC Educational Resources Information Center

    Hannan, Michael T.

    This document is part of a series of chapters described in SO 011 759. Addressing the problems of studying change and the change process, the report argues that sociologists should study coupled changes in qualitative and quantitative outcomes (e.g., marital status and earnings). The author presents a model for sociological studies of change in…

  5. Apparatus and method for quantitatively evaluating total fissile and total fertile nuclide content in samples

    DOEpatents

    Caldwell, John T.; Kunz, Walter E.; Cates, Michael R.; Franks, Larry A.

    1985-01-01

    Simultaneous photon and neutron interrogation of samples for the quantitative determination of total fissile nuclide and total fertile nuclide material present is made possible by the use of an electron accelerator. Prompt and delayed neutrons produced from resulting induced fissions are counted using a single detection system and allow the resolution of the contributions from each interrogating flux, leading in turn to the quantitative determination sought. Detection limits for ²³⁹Pu are estimated to be about 3 mg using prompt fission neutrons and about 6 mg using delayed neutrons.

  6. Apparatus and method for quantitatively evaluating total fissile and total fertile nuclide content in samples. [Patent application

    DOEpatents

    Caldwell, J.T.; Kunz, W.E.; Cates, M.R.; Franks, L.A.

    1982-07-07

    Simultaneous photon and neutron interrogation of samples for the quantitative determination of total fissile nuclide and total fertile nuclide material present is made possible by the use of an electron accelerator. Prompt and delayed neutrons produced from resulting induced fissions are counted using a single detection system and allow the resolution of the contributions from each interrogating flux, leading in turn to the quantitative determination sought. Detection limits for ²³⁹Pu are estimated to be about 3 mg using prompt fission neutrons and about 6 mg using delayed neutrons.

  7. Growth and yield predictions for upland oak stands. 10 years after initial thinning

    Treesearch

    Martin E. Dale; Martin E. Dale

    1972-01-01

    The purpose of this paper is to furnish part of the needed information, that is, quantitative estimates of growth and yield 10 years after initial thinning of upland oak stands. All estimates are computed from a system of equations. These predictions are presented here in tabular form for convenient visual inspection of growth and yield trends. The tables show growth...

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berryman, J. G.

    While the well-known Voigt and Reuss (VR) bounds and the Voigt-Reuss-Hill (VRH) elastic constant estimators for random polycrystals are all straightforwardly calculated once the elastic constants of anisotropic crystals are known, the Hashin-Shtrikman (HS) bounds and related self-consistent (SC) estimators for the same constants are, by comparison, more difficult to compute. Recent work has shown how to simplify (to some extent) these harder-to-compute HS bounds and SC estimators. An overview and analysis of a subsampling of these results is presented here, the main point being to show whether this extra work (i.e., calculating both the HS bounds and the SC estimates) provides added value since, in particular, the VRH estimators often do not fall within the HS bounds, while the SC estimators (for good reasons) have always been found to do so. The quantitative differences between the SC and the VRH estimators in the eight cases considered are often quite small, however, being on the order of ±1%. These quantitative results hold true even though the polycrystal Voigt-Reuss-Hill estimators more typically (but not always) fall outside the Hashin-Shtrikman bounds, while the self-consistent estimators always fall inside (or on the boundaries of) these same bounds.

  9. Inland surface waters

    EPA Science Inventory

    A critical load is a “quantitative estimate of the exposure to one or more pollutants below which significant harmful effects on specified sensitive elements of the environment do not occur according to present knowledge”. Critical loads can be either modeled, or calculated empi...

  10. Indicators of the environmental impacts of transportation : highway, rail, aviation, and maritime transport

    DOT National Transportation Integrated Search

    1996-10-01

    This document presents quantitative national estimates of the magnitude of transportation's impacts on the natural environment. It is the most comprehensive compilation of environmental and transportation data to date. This document addresses all p...

  11. Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.

    PubMed

    Omer, Travis; Intes, Xavier; Hahn, Juergen

    2015-01-01

    Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and the number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a given number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), as validated in silico and in vivo. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the full set of time sampling points were used.
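
    As a sketch of how a D-optimality criterion can drive time-point selection, the following greedy search maximizes det(JᵀJ) for a bi-exponential decay model; the nominal lifetimes, fractions, and the greedy strategy itself are illustrative assumptions, not the paper's exact algorithm:

    ```python
    # Greedy D-optimal time-point selection for F(t) = A1*exp(-t/tau1) + (1-A1)*exp(-t/tau2).
    import numpy as np

    A1, tau1, tau2 = 0.4, 0.5, 2.5             # nominal quenched fraction, lifetimes (ns)
    t_all = np.linspace(0.05, 10.0, 90)        # full 90-point temporal grid

    def sensitivities(t):
        # partial derivatives of F w.r.t. (A1, tau1, tau2) at the nominal parameters
        e1, e2 = np.exp(-t / tau1), np.exp(-t / tau2)
        return np.column_stack([e1 - e2,
                                A1 * t / tau1**2 * e1,
                                (1 - A1) * t / tau2**2 * e2])

    J_all = sensitivities(t_all)
    chosen = []
    for _ in range(10):                        # pick a reduced set of 10 points
        best, best_det = None, -np.inf
        for i in range(len(t_all)):
            if i in chosen:
                continue
            J = J_all[chosen + [i]]
            det = np.linalg.det(J.T @ J + 1e-12 * np.eye(3))  # regularized early on
            if det > best_det:
                best, best_det = i, det
        chosen.append(best)

    print("selected time points (ns):", np.round(t_all[sorted(chosen)], 2))
    ```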

  12. Quantitative diet reconstruction of a Neolithic population using a Bayesian mixing model (FRUITS): The case study of Ostorf (Germany).

    PubMed

    Fernandes, Ricardo; Grootes, Pieter; Nadeau, Marie-Josée; Nehlich, Olaf

    2015-07-14

    The island cemetery site of Ostorf (Germany) consists of individual human graves containing Funnel Beaker ceramics dating to the Early or Middle Neolithic. However, previous isotope and radiocarbon analysis demonstrated that the Ostorf individuals had a diet rich in freshwater fish. The present study was undertaken to quantitatively reconstruct the diet of the Ostorf population and establish if dietary habits are consistent with the traditional characterization of a Neolithic diet. Quantitative diet reconstruction was achieved through a novel approach consisting of the use of the Bayesian mixing model Food Reconstruction Using Isotopic Transferred Signals (FRUITS) to model isotope measurements from multiple dietary proxies (δ¹³C_collagen, δ¹⁵N_collagen, δ¹³C_bioapatite, δ³⁴S_methionine, ¹⁴C_collagen). The accuracy of model estimates was verified by comparing the agreement between observed and estimated human dietary radiocarbon reservoir effects. Quantitative diet reconstruction estimates confirm that the Ostorf individuals had a high protein intake due to the consumption of fish and terrestrial animal products. However, FRUITS estimates also show that plant foods represented a significant source of calories. Observed and estimated human dietary radiocarbon reservoir effects are in good agreement provided that the aquatic reservoir effect at Lake Ostorf is taken as reference. The Ostorf population apparently adopted elements associated with a Neolithic culture but adapted to available local food resources and implemented a subsistence strategy that involved a large proportion of fish and terrestrial meat consumption. This case study exemplifies the diversity of subsistence strategies followed during the Neolithic. Am J Phys Anthropol, 2015. © 2015 Wiley Periodicals, Inc.
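
    The mixing logic can be caricatured with a rejection-sampling toy model: draw dietary fractions from a Dirichlet prior and keep draws whose mixed signal matches the consumer's isotope values. The real FRUITS model additionally handles diet-to-tissue offsets, concentration dependence, and informative priors; all end-member and consumer values below are invented:

    ```python
    # ABC-style toy diet mixing model (invented isotope values, no trophic offsets).
    import numpy as np

    rng = np.random.default_rng(1)
    #                     d13C   d15N  (permil) -- invented end members
    sources = np.array([[-20.5,  6.0],    # terrestrial animal products
                        [-24.0, 12.0],    # freshwater fish
                        [-26.5,  3.0]])   # plant foods
    observed = np.array([-23.0, 9.0])     # hypothetical consumer collagen values
    tol = 0.5                             # acceptance tolerance (permil)

    draws = rng.dirichlet(np.ones(3), size=200_000)  # uniform prior over fractions
    mixed = draws @ sources                          # linear mixing
    post = draws[np.all(np.abs(mixed - observed) < tol, axis=1)]
    for name, mean, sd in zip(["animal", "fish", "plant"], post.mean(0), post.std(0)):
        print(f"{name:6s} fraction ~ {mean:.2f} +/- {sd:.2f}")
    ```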

  13. Fourier phase in Fourier-domain optical coherence tomography.

    PubMed

    Uttam, Shikhar; Liu, Yang

    2015-12-01

    Phase of an electromagnetic wave propagating through a sample-of-interest is well understood in the context of quantitative phase imaging in transmission-mode microscopy. In the past decade, Fourier-domain optical coherence tomography has been used to extend quantitative phase imaging to the reflection-mode. Unlike transmission-mode electromagnetic phase, however, the origin and characteristics of reflection-mode Fourier phase are poorly understood, especially in samples with a slowly varying refractive index. In this paper, the general theory of Fourier phase from first principles is presented, and it is shown that Fourier phase is a joint estimate of subresolution offset and mean spatial frequency of the coherence-gated sample refractive index. It is also shown that both spectral-domain phase microscopy and depth-resolved spatial-domain low-coherence quantitative phase microscopy are special cases of this general theory. Analytical expressions are provided for both, and simulations are presented to explain and support the theoretical results. These results are further used to show how Fourier phase allows the estimation of an axial mean spatial frequency profile of the sample, along with depth-resolved characterization of localized optical density change and sample heterogeneity. Finally, a Fourier phase-based explanation of Doppler optical coherence tomography is also provided.

  14. A two-factor error model for quantitative steganalysis

    NASA Astrophysics Data System (ADS)

    Böhme, Rainer; Ker, Andrew D.

    2006-02-01

    Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.

  15. QuantFusion: Novel Unified Methodology for Enhanced Coverage and Precision in Quantifying Global Proteomic Changes in Whole Tissues.

    PubMed

    Gunawardena, Harsha P; O'Brien, Jonathon; Wrobel, John A; Xie, Ling; Davies, Sherri R; Li, Shunqiang; Ellis, Matthew J; Qaqish, Bahjat F; Chen, Xian

    2016-02-01

    Single quantitative platforms such as label-based or label-free quantitation (LFQ) present compromises in accuracy, precision, protein sequence coverage, and speed of quantifiable proteomic measurements. To maximize the quantitative precision and the number of quantifiable proteins or the quantifiable coverage of tissue proteomes, we have developed a unified approach, termed QuantFusion, that combines the quantitative ratios of all peptides measured by both LFQ and label-based methodologies. Here, we demonstrate the use of QuantFusion in determining the proteins differentially expressed in a pair of patient-derived tumor xenografts (PDXs) representing two major breast cancer (BC) subtypes, basal and luminal. Label-based in-spectra quantitative peptides derived from amino acid-coded tagging (AACT, also known as SILAC) of a non-malignant mammary cell line were uniformly added to each xenograft with a constant predefined ratio, from which Ratio-of-Ratio estimates were obtained for the label-free peptides paired with AACT peptides in each PDX tumor. A mixed model statistical analysis was used to determine global differential protein expression by combining complementary quantifiable peptide ratios measured by LFQ and Ratio-of-Ratios, respectively. With the minimum number of replicates required to obtain statistically significant ratios, QuantFusion uses distinct mechanisms to "rescue" the missing data inherent to both LFQ and label-based quantitation. Combined quantifiable peptide data from both quantitative schemes increased the overall number of peptide-level measurements and protein-level estimates. In our analysis of the PDX tumor proteomes, QuantFusion increased the number of distinct peptide ratios by 65%, representing differentially expressed proteins between the BC subtypes. This quantifiable coverage improvement, in turn, not only increased the number of measurable protein fold-changes by 8% but also increased the average precision of quantitative estimates by 181%, so that some BC subtype-specific proteins were rescued by QuantFusion. Thus, incorporating data from multiple quantitative approaches while accounting for measurement variability at both the peptide and global protein levels makes QuantFusion unique for obtaining increased coverage and quantitative precision for tissue proteomes. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  16. Iterative optimization method for design of quantitative magnetization transfer imaging experiments.

    PubMed

    Levesque, Ives R; Sled, John G; Pike, G Bruce

    2011-09-01

    Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.

  17. Computational design of a Diels-Alderase from a thermophilic esterase: the importance of dynamics

    NASA Astrophysics Data System (ADS)

    Linder, Mats; Johansson, Adam Johannes; Olsson, Tjelvar S. G.; Liebeschuetz, John; Brinck, Tore

    2012-09-01

    A novel computational Diels-Alderase design, based on a relatively rare form of carboxylesterase from Geobacillus stearothermophilus, is presented and theoretically evaluated. The structure was found by mining the PDB for a suitable oxyanion hole-containing structure, followed by a combinatorial approach to find suitable substrates and rational mutations. Four lead designs were selected and thoroughly modeled to obtain realistic estimates of substrate binding and prearrangement. Molecular dynamics simulations and DFT calculations were used to optimize and estimate binding affinity and activation energies. A large quantum chemical model was used to capture the salient interactions in the crucial transition state (TS). Our quantitative estimation of kinetic parameters was validated against four experimentally characterized Diels-Alderases with good results. The final designs in this work are predicted to have rate enhancements of ≈10³-10⁶ and high predicted proficiencies. This work emphasizes the importance of considering protein dynamics in the design approach, and provides a quantitative estimate of how the TS stabilization observed in most de novo and redesigned enzymes is decreased compared to a minimal, 'ideal' model. The presented design is highly interesting for further optimization and applications since it is based on a thermophilic enzyme (T_opt = 70 °C).

  18. Estimation of sample size and testing power (Part 4).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests under a one-factor, two-level design, including the estimation formulas and their realization through both the formulas themselves and the POWER procedure of SAS software, for quantitative and qualitative data. In addition, this article presents examples for analysis, which will help researchers implement the repetition principle during the research design phase.
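
    For the quantitative-data case under a one-factor, two-level design, the normal-approximation formula reduces to a few lines; the alpha, power, and effect values below are illustrative, not taken from the article:

    ```python
    # Per-group sample size for comparing two means (normal approximation).
    import math
    from scipy.stats import norm

    def n_per_group(delta, sigma, alpha=0.05, power=0.90):
        """n = 2 * sigma^2 * (z_{1-alpha/2} + z_power)^2 / delta^2, rounded up."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return math.ceil(2 * (sigma * z / delta) ** 2)

    # e.g. detect a mean difference of 5 units when the SD is 8:
    print(n_per_group(delta=5, sigma=8))   # -> 54 subjects per group
    ```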

  19. [Quantitative estimation of the sources of urban atmospheric CO2 by carbon isotope composition].

    PubMed

    Liu, Wei; Wei, Nan-Nan; Wang, Guang-Hua; Yao, Jian; Zeng, You-Shi; Fan, Xue-Bo; Geng, Yan-Hong; Li, Yan

    2012-04-01

    To effectively reduce urban carbon emissions and verify the effectiveness of current urban carbon emission reduction projects, correct quantitative estimation of the sources of urban atmospheric CO2 is necessary. Since little carbon isotope fractionation occurs during transport from pollution sources to the receptor, the carbon isotope composition can be used for source apportionment. In the present study, a method was established to quantitatively estimate the sources of urban atmospheric CO2 from the carbon isotope composition. Both diurnal and height variations of concentrations of CO2 derived from biomass, vehicle exhaust and coal burning were further determined for atmospheric CO2 in the Jiading district of Shanghai. Biomass-derived CO2 accounts for the largest portion of atmospheric CO2. The concentrations of CO2 derived from coal burning are larger in the night-time (00:00, 04:00 and 20:00) than in the daytime (08:00, 12:00 and 16:00), and increase with height. Those derived from vehicle exhaust decrease with increasing height. The diurnal and height variations of the sources reflect the emission and transport characteristics of atmospheric CO2 in the Jiading district of Shanghai.
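
    The underlying arithmetic is an isotopic mass balance: with three sources and two tracers plus a closure condition, the source fractions solve a 3×3 linear system. The end-member and observed values below are invented placeholders, not the study's measurements:

    ```python
    # Three-source CO2 apportionment by isotopic mass balance (invented end members).
    import numpy as np

    #                biomass  vehicle   coal
    d13C = np.array([-25.0,   -28.5,   -24.0])     # permil
    D14C = np.array([ 50.0, -1000.0, -1000.0])     # permil; fossil CO2 is 14C-dead

    A = np.vstack([np.ones(3), d13C, D14C])        # closure + two tracer balances
    obs = np.array([1.0, -26.0, -400.0])           # 1, observed d13C, observed D14C
    f = np.linalg.solve(A, obs)
    for name, frac in zip(["biomass", "vehicle", "coal"], f):
        print(f"{name:8s}: {frac:.2f}")
    ```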

  20. Shear-induced aggregation dynamics in a polymer microrod suspension

    NASA Astrophysics Data System (ADS)

    Kumar, Pramukta S.

    A non-Brownian suspension of micron-scale rods is found to exhibit reversible shear-driven formation of disordered aggregates resulting in dramatic viscosity enhancement at low shear rates. Aggregate formation is imaged at low magnification using a combined rheometer and fluorescence microscope system. The size and structure of these aggregates are found to depend on shear rate and concentration, with larger aggregates present at lower shear rates and higher concentrations. Quantitative measurements of the early-stage aggregation process are modeled by collision-driven growth of porous structures, which shows that the aggregate density increases with shear rate. A Krieger-Dougherty type constitutive relation and steady-state viscosity measurements are used to estimate the intrinsic viscosity of complex structures developed under shear. Higher magnification images are collected and used to validate the aggregate size versus density relationship, as well as to obtain particle flow fields via PIV. The flow fields provide a tantalizing view of fluctuations involved in the aggregation process. Interaction strength is estimated via contact force measurements and JKR theory and found to be extremely strong in comparison to the shear forces present in the system, estimated using hydrodynamic arguments. All of the results are then combined to produce a consistent conceptual model of aggregation in the system that features testable consequences. These results represent a direct, quantitative, experimental study of aggregation and viscosity enhancement in a rod suspension, and demonstrate a strategy for inferring inaccessible microscopic geometric properties of a dynamic system through the combination of quantitative imaging and rheology.
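
    A sketch of the Krieger-Dougherty step: fitting the constitutive relation eta(phi) = eta0 (1 - phi/phi_max)^(-[eta] phi_max) to steady-state viscosity data to back out an intrinsic viscosity. The data points, bounds, and initial guesses below are synthetic, not the study's measurements:

    ```python
    # Fit a Krieger-Dougherty relation to (synthetic) steady-state viscosity data.
    import numpy as np
    from scipy.optimize import curve_fit

    def krieger_dougherty(phi, eta0, phi_max, intrinsic):
        return eta0 * (1.0 - phi / phi_max) ** (-intrinsic * phi_max)

    phi = np.array([0.005, 0.01, 0.02, 0.03, 0.05])          # volume fractions
    eta = np.array([1.05, 1.12, 1.28, 1.50, 2.10]) * 1e-3    # Pa*s, made-up data

    p0 = (1e-3, 0.3, 10.0)      # water-like eta0, low phi_max typical of rods
    (eta0, phi_max, intrinsic), _ = curve_fit(
        krieger_dougherty, phi, eta, p0=p0,
        bounds=((0.0, 0.06, 0.0), (1e-2, 1.0, 100.0)))
    print(f"eta0 = {eta0:.2e} Pa*s, phi_max = {phi_max:.2f}, [eta] = {intrinsic:.1f}")
    ```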

  1. A Demonstration on Every Exam.

    ERIC Educational Resources Information Center

    Julian, Glenn M.

    1995-01-01

    Argues that inclusion of demonstrations on examinations increases students' ability to observe carefully the physical world around them, translate from observations to models, and make quantitative estimates and physicist-type "back-of-the-envelope" calculations. Presents demonstration ideas covering the following topics:…

  2. VIII. THE PAST, PRESENT, AND FUTURE OF DEVELOPMENTAL METHODOLOGY.

    PubMed

    Little, Todd D; Wang, Eugene W; Gorrall, Britt K

    2017-06-01

    This chapter selectively reviews the evolution of quantitative practices in the field of developmental methodology. The chapter begins with an overview of the past in developmental methodology, discussing the implementation and dissemination of latent variable modeling and, in particular, longitudinal structural equation modeling. It then turns to the present state of developmental methodology, highlighting current methodological advances in the field. Additionally, this section summarizes ample quantitative resources, ranging from key quantitative methods journal articles to the various quantitative methods training programs and institutes. The chapter concludes with the future of developmental methodology and puts forth seven future innovations in the field. The innovations discussed span the topics of measurement, modeling, temporal design, and planned missing data designs. Lastly, the chapter closes with a brief overview of advanced modeling techniques such as continuous time models, state space models, and the application of Bayesian estimation in the field of developmental methodology. © 2017 The Society for Research in Child Development, Inc.

  3. A Statistical Framework for Protein Quantitation in Bottom-Up MS-Based Proteomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas

    2009-08-15

    Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and associated confidence measures. Challenges include the presence of low-quality or incorrectly identified peptides and informative missingness. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model that carefully accounts for informative missingness in peak intensities and allows unbiased, model-based, protein-level estimation and inference. The model is applicable to both label-based and label-free quantitation experiments. We also provide automated, model-based, algorithms for filtering of proteins and peptides as well as imputation of missing values. Two LC/MS datasets are used to illustrate the methods. In simulation studies, our methods are shown to achieve substantially more discoveries than standard alternatives. Availability: The software has been made available in the open-source proteomics platform DAnTE (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu Supplementary information: Supplementary data are available at Bioinformatics online.

  4. The Mapping Model: A Cognitive Theory of Quantitative Estimation

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2008-01-01

    How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…

  5. A cost-effectiveness comparison of existing and Landsat-aided snow water content estimation systems

    NASA Technical Reports Server (NTRS)

    Sharp, J. M.; Thomas, R. W.

    1975-01-01

    This study describes how Landsat imagery can be cost-effectively employed to augment an operational hydrologic model. Attention is directed toward the estimation of snow water content, a major predictor variable in the volumetric runoff forecasting model presently used by the California Department of Water Resources. A stratified double sampling scheme is supplemented with qualitative and quantitative analyses of existing operations to develop a comparison between the existing and satellite-aided approaches to snow water content estimation. Results show a decided advantage for the Landsat-aided approach.

  6. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivates this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)⁻¹, cardiac output = 3, 5, 8 L min⁻¹). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by 47.5% on average, while the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimation error was comparable at matched dose reductions across all quantitative methods and acquisition techniques evaluated, suggesting no particular advantage of one quantitative estimation method over another, nor of dose reduction via tube current reduction over temporal sampling reduction. These data are important for optimizing implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.
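
    A stripped-down illustration of the model-based versus slope-based contrast: a one-compartment tissue curve is simulated and MBF is recovered both ways. The study's kinetic models are considerably richer; all curves, noise levels, and parameter values here are synthetic assumptions:

    ```python
    # Synthetic one-compartment MBF recovery: model fit vs. maximum-upslope method.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.arange(0.0, 40.0, 1.0)                     # s, 1-s sampling
    c_a = 300.0 * (t / 8.0) * np.exp(1.0 - t / 8.0)   # gamma-variate arterial input (HU)

    def tissue_curve(t, F, Vd):
        # dC_t/dt = F*c_a(t) - (F/Vd)*C_t(t), solved by discrete convolution
        kern = np.exp(-(F / Vd) * t)
        return F * np.convolve(c_a, kern)[: len(t)] * (t[1] - t[0])

    F_true = 1.2 / 60.0                               # 1.2 ml/(min*g) in per-second units
    c_t = tissue_curve(t, F_true, 0.15)
    c_t += np.random.default_rng(2).normal(0.0, 1.0, t.size)

    (F_fit, _), _ = curve_fit(tissue_curve, t, c_t, p0=(0.01, 0.1),
                              bounds=((1e-4, 1e-3), (1.0, 1.0)))
    mbf_slope = np.gradient(c_t, t).max() / c_a.max() # classic upslope estimate
    print(f"model-based MBF ~ {F_fit*60:.2f}, slope-based ~ {mbf_slope*60:.2f} ml/(min*g)")
    ```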

  7. Synchronous precipitation reduction in the American Tropics associated with Heinrich 2.

    PubMed

    Medina-Elizalde, Martín; Burns, Stephen J; Polanco-Martinez, Josué; Lases-Hernández, Fernanda; Bradley, Raymond; Wang, Hao-Cheng; Shen, Chuan-Chou

    2017-09-11

    During the last ice age, temperature in the North Atlantic oscillated in cycles known as Dansgaard-Oeschger (D-O) events. The magnitude of Caribbean hydroclimate change associated with D-O variability, and particularly with stadial intervals, remains poorly constrained by paleoclimate records. We present a 3.3 thousand-year-long stalagmite δ¹⁸O record from the Yucatan Peninsula (YP) that spans the interval between 26.5 and 23.2 thousand years before present. We estimate quantitative precipitation variability, and the high resolution and dating accuracy of this record allow us to investigate how rainfall in the region responds to D-O events. Quantitative precipitation estimates are based on observed regional amount effect variability, last glacial paleotemperature records, and estimates of the last glacial oxygen isotopic composition of precipitation based on global circulation models (GCMs). The new precipitation record suggests significant low-latitude hydrological responses to internal modes of climate variability and supports a role of Caribbean hydroclimate in helping Atlantic Meridional Overturning Circulation recovery during D-O events. Significant in-phase precipitation reduction across the equator in the tropical Americas associated with Heinrich event 2 is suggested by available speleothem oxygen isotope records.

  8. Modeling of process of forming quality parameters for surfaces of parts by diamond burnishing taking into account technological heredity

    NASA Astrophysics Data System (ADS)

    Nagorkin, M. N.; Fyodorov, V. P.; Kovalyova, E. V.

    2018-03-01

    The paper presents a methodology for quantitative assessment of the influence of technological heredity on the formation of quality parameters for surfaces of machine parts. An example is presented that estimates the influence of factors of the technological subsystems for end milling, machining with composite 10, and subsequent diamond burnishing.

  9. A quantitative visual dashboard to explore exposures to ...

    EPA Pesticide Factsheets

    The Exposure Prioritization (Ex Priori) model features a simplified, quantitative visual dashboard to explore exposures across chemical space. Diverse data streams are integrated within the interface such that different exposure scenarios for "individual," "population," or "professional" time-use profiles can be interchanged to tailor exposure and quantitatively explore multi-chemical signatures of exposure, internalized dose (uptake), body burden, and elimination. Ex Priori will quantitatively extrapolate single-point estimates of both exposure and internal dose for multiple exposure scenarios, factors, products, and pathways. Currently, EPA is investigating its usefulness in life cycle analysis, specifically its ability to enhance the exposure factors used in calculating characterization factors for human health. Presented at the 2016 Annual ISES Meeting held in Utrecht, The Netherlands, 9-13 October 2016.

  10. Light scattering application for quantitative estimation of apoptosis

    NASA Astrophysics Data System (ADS)

    Bilyy, Rostyslav O.; Stoika, Rostyslav S.; Getman, Vasyl B.; Bilyi, Olexander I.

    2004-05-01

    Estimation of cell proliferation and apoptosis is a focus of instrumental methods used in modern biomedical sciences. The present study concerns monitoring of the functional state of cells, specifically the development of their programmed death, or apoptosis. The available methods for this purpose are either very expensive or require time-consuming operations. Their specificity and sensitivity are frequently not sufficient for making conclusions which could be used in diagnostics or treatment monitoring. We propose a novel method for apoptosis measurement based on quantitative determination of the cellular functional state, taking into account the cells' physical characteristics. This method uses a patented device, the laser microparticle analyser PRM-6, for analyzing light scattering by microparticles, including cells. The method gives an opportunity for quick, quantitative, simple (without complicated preliminary cell processing) and relatively cheap measurement of apoptosis in a cellular population. The method was used for studying apoptosis expression in murine leukemia cells of the L1210 line and human lymphoblastic leukemia cells of the K562 line. The results obtained by the proposed method permitted measuring the cell number in a tested sample and detecting and quantitatively characterizing the functional state of cells, particularly measuring the fraction of apoptotic cells in suspension.

  11. Quantitative genetic tools for insecticide resistance risk assessment: estimating the heritability of resistance

    Treesearch

    Michael J. Firko; Jane Leslie Hayes

    1990-01-01

    Quantitative genetic studies of resistance can provide estimates of genetic parameters not available with other types of genetic analyses. Three methods are discussed for estimating the amount of additive genetic variation in resistance to individual insecticides and subsequent estimation of heritability (h²) of resistance. Sibling analysis and...
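
    One standard route to h² in such studies is the paternal half-sib ANOVA, where the additive genetic variance V_A is estimated as four times the between-sire variance component. The sketch below on simulated resistance scores illustrates that textbook estimator, not necessarily the authors' exact procedure:

    ```python
    # Half-sib ANOVA heritability on simulated insecticide-resistance scores.
    import numpy as np

    rng = np.random.default_rng(3)
    n_sires, n_off = 50, 20
    h2_true, vp = 0.4, 1.0
    sire = rng.normal(0.0, np.sqrt(h2_true * vp / 4.0), n_sires)       # var = V_A/4
    y = sire[:, None] + rng.normal(0.0, np.sqrt(vp - h2_true * vp / 4.0),
                                   (n_sires, n_off))

    msb = n_off * np.sum((y.mean(1) - y.mean()) ** 2) / (n_sires - 1)  # between sires
    msw = np.sum((y - y.mean(1, keepdims=True)) ** 2) / (n_sires * (n_off - 1))
    var_sire = (msb - msw) / n_off
    h2_hat = 4.0 * var_sire / (var_sire + msw)                         # V_A / V_P
    print(f"estimated h2 ~ {h2_hat:.2f} (simulated truth {h2_true})")
    ```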

  12. Dynamic whole body PET parametric imaging: II. Task-oriented statistical estimation

    PubMed Central

    Karakatsanis, Nicolas A.; Lodge, Martin A.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman

    2013-01-01

    In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15-20 cm) of a single bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole-body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve a superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical FDG patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30 min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole-body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection. PMID:24080994
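
    The Patlak OLS step itself is a straight-line fit: after equilibration, C_t(t)/C_p(t) = Ki·[∫C_p dτ]/C_p(t) + V, so Ki is the slope and V the intercept. A toy voxel-level version with an invented input function and kinetic values:

    ```python
    # Toy voxel-wise Patlak OLS estimation of Ki and V (all values illustrative).
    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    t = np.linspace(1, 60, 30)                         # min, frame mid-times
    c_p = 8.0 * np.exp(-0.08 * t) + 1.0                # toy plasma input (kBq/ml)
    Ki_true, V_true = 0.05, 0.4
    c_t = Ki_true * cumulative_trapezoid(c_p, t, initial=0) + V_true * c_p
    c_t += np.random.default_rng(4).normal(0, 0.05, t.size)   # frame noise

    x = cumulative_trapezoid(c_p, t, initial=0) / c_p  # "Patlak time"
    y = c_t / c_p
    late = t > 20                                      # frames after equilibration
    Ki_hat, V_hat = np.polyfit(x[late], y[late], 1)
    print(f"Ki ~ {Ki_hat:.3f} /min (true {Ki_true}), V ~ {V_hat:.2f} (true {V_true})")
    ```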

  13. Dynamic whole-body PET parametric imaging: II. Task-oriented statistical estimation.

    PubMed

    Karakatsanis, Nicolas A; Lodge, Martin A; Zhou, Y; Wahl, Richard L; Rahmim, Arman

    2013-10-21

    In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15-20 cm) of a single-bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole-body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve a superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical ¹⁸F-deoxyglucose patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30 min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole-body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection.

  14. Bone Health Monitoring in Astronauts: Recommended Use of Quantitative Computed Tomography [QCT] for Clinical and Operational Decisions

    NASA Technical Reports Server (NTRS)

    Sibonga, J. D.; Truskowski, P.

    2010-01-01

    This slide presentation reviews the concern that astronauts on long-duration flights might have a greater risk of bone fracture as they age than the general population. A panel of experts was convened to review the information and recommend mechanisms to monitor the health of bones in astronauts. The use of Quantitative Computed Tomography (QCT) scans for risk surveillance to detect the clinical trigger and to inform countermeasure evaluation is reviewed. An added benefit of QCT is that it facilitates an individualized estimation of bone strength by Finite Element Modeling (FEM), which can inform approaches for bone rehabilitation. The use of FEM is reviewed as a process that arrives at a composite number to estimate bone strength, because it integrates multiple factors.

  15. Quantitative Susceptibility Mapping by Inversion of a Perturbation Field Model: Correlation with Brain Iron in Normal Aging

    PubMed Central

    Poynton, Clare; Jenkinson, Mark; Adalsteinsson, Elfar; Sullivan, Edith V.; Pfefferbaum, Adolf; Wells, William

    2015-01-01

    There is increasing evidence that iron deposition occurs in specific regions of the brain in normal aging and neurodegenerative disorders such as Parkinson's, Huntington's, and Alzheimer's disease. Iron deposition changes the magnetic susceptibility of tissue, which alters the MR signal phase, and allows estimation of susceptibility differences using quantitative susceptibility mapping (QSM). We present a method for quantifying susceptibility by inversion of a perturbation model, or ‘QSIP’. The perturbation model relates phase to susceptibility using a kernel calculated in the spatial domain, in contrast to previous Fourier-based techniques. A tissue/air susceptibility atlas is used to estimate B0 inhomogeneity. QSIP estimates in young and elderly subjects are compared to postmortem iron estimates, maps of the Field-Dependent Relaxation Rate Increase (FDRI), and the L1-QSM method. Results for both groups showed excellent agreement with published postmortem data and in-vivo FDRI: statistically significant Spearman correlations ranging from Rho = 0.905 to Rho = 1.00 were obtained. QSIP also showed improvement over FDRI and L1-QSM: reduced variance in susceptibility estimates and statistically significant group differences were detected in striatal and brainstem nuclei, consistent with age-dependent iron accumulation in these regions. PMID:25248179

  16. A NOVEL TECHNIQUE FOR QUANTITATIVE ESTIMATION OF UPTAKE OF DIESEL EXHAUST PARTICLES BY LUNG CELLS

    EPA Science Inventory

    While airborne particulates like diesel exhaust particulates (DEP) exert significant toxicological effects on lungs, quantitative estimation of accumulation of DEP inside lung cells has not been reported due to a lack of an accurate and quantitative technique for this purpose. I...

  17. Experimental Design for Parameter Estimation of Gene Regulatory Networks

    PubMed Central

    Timmer, Jens

    2012-01-01

    Systems biology aims for building quantitative models to address unresolved issues in molecular biology. In order to describe the behavior of biological cells adequately, gene regulatory networks (GRNs) are intensively investigated. As the validity of models built for GRNs depends crucially on the kinetic rates, various methods have been developed to estimate these parameters from experimental data. For this purpose, it is favorable to choose the experimental conditions yielding maximal information. However, existing experimental design principles often rely on unfulfilled mathematical assumptions or become computationally demanding with growing model complexity. To solve this problem, we combined advanced methods for parameter and uncertainty estimation with experimental design considerations. As a showcase, we optimized three simulated GRNs in one of the challenges from the Dialogue for Reverse Engineering Assessment and Methods (DREAM). This article presents our approach, which was awarded the best performing procedure at the DREAM6 Estimation of Model Parameters challenge. For fast and reliable parameter estimation, local deterministic optimization of the likelihood was applied. We analyzed identifiability and precision of the estimates by calculating the profile likelihood. Furthermore, the profiles provided a way to uncover a selection of most informative experiments, from which the optimal one was chosen using additional criteria at every step of the design process. In conclusion, we provide a strategy for optimal experimental design and show its successful application on three highly nonlinear dynamic models. Although presented in the context of the GRNs to be inferred for the DREAM6 challenge, the approach is generic and applicable to most types of quantitative models in systems biology and other disciplines. PMID:22815723

  18. Applying petrophysical models to radar travel time and electrical resistivity tomograms: Resolution-dependent limitations

    USGS Publications Warehouse

    Day-Lewis, F. D.; Singha, K.; Binley, A.M.

    2005-01-01

    Geophysical imaging has traditionally provided qualitative information about geologic structure; however, there is increasing interest in using petrophysical models to convert tomograms to quantitative estimates of hydrogeologic, mechanical, or geochemical parameters of interest (e.g., permeability, porosity, water content, and salinity). Unfortunately, petrophysical estimation based on tomograms is complicated by limited and variable image resolution, which depends on (1) measurement physics (e.g., electrical conduction or electromagnetic wave propagation), (2) parameterization and regularization, (3) measurement error, and (4) spatial variability. We present a framework to predict how core-scale relations between geophysical properties and hydrologic parameters are altered by the inversion, which produces smoothly varying pixel-scale estimates. We refer to this loss of information as "correlation loss." Our approach upscales the core-scale relation to the pixel scale using the model resolution matrix from the inversion, random field averaging, and spatial statistics of the geophysical property. Synthetic examples evaluate the utility of radar travel time tomography (RTT) and electrical-resistivity tomography (ERT) for estimating water content. This work provides (1) a framework to assess tomograms for geologic parameter estimation and (2) insights into the different patterns of correlation loss for ERT and RTT. Whereas ERT generally performs better near boreholes, RTT performs better in the interwell region. Application of petrophysical models to the tomograms in our examples would yield misleading estimates of water content. Although the examples presented illustrate the problem of correlation loss in the context of near-surface geophysical imaging, our results have clear implications for quantitative analysis of tomograms for diverse geoscience applications. Copyright 2005 by the American Geophysical Union.
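
    A toy numerical illustration of the model resolution matrix at the heart of this framework: for a Tikhonov-regularized linear inversion the noise-free estimate is m_hat = R m_true with R = (GᵀG + λI)⁻¹GᵀG, and the rows of R show the pixel-scale averaging that drives correlation loss. The forward operator below is a trivial 1D stand-in, not a real tomography kernel:

    ```python
    # Model resolution matrix of a Tikhonov-regularized 1D inversion (toy example).
    import numpy as np

    n = 40
    G = np.tril(np.ones((n, n)))            # toy forward operator (cumulative "rays")
    lam = 5.0
    GtG = G.T @ G
    R = np.linalg.solve(GtG + lam * np.eye(n), GtG)   # R = (G^T G + lam I)^-1 G^T G

    row = R[n // 2]                         # averaging kernel for the centre pixel
    width = int((row > 0.5 * row.max()).sum())
    print(f"diag(R) at centre = {R[n//2, n//2]:.2f}, averaging width ~ {width} pixels")
    ```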

  19. Fourier phase in Fourier-domain optical coherence tomography

    PubMed Central

    Uttam, Shikhar; Liu, Yang

    2015-01-01

    Phase of an electromagnetic wave propagating through a sample-of-interest is well understood in the context of quantitative phase imaging in transmission-mode microscopy. In the past decade, Fourier-domain optical coherence tomography has been used to extend quantitative phase imaging to the reflection-mode. Unlike transmission-mode electromagnetic phase, however, the origin and characteristics of reflection-mode Fourier phase are poorly understood, especially in samples with a slowly varying refractive index. In this paper, the general theory of Fourier phase from first principles is presented, and it is shown that Fourier phase is a joint estimate of subresolution offset and mean spatial frequency of the coherence-gated sample refractive index. It is also shown that both spectral-domain phase microscopy and depth-resolved spatial-domain low-coherence quantitative phase microscopy are special cases of this general theory. Analytical expressions are provided for both, and simulations are presented to explain and support the theoretical results. These results are further used to show how Fourier phase allows the estimation of an axial mean spatial frequency profile of the sample, along with depth-resolved characterization of localized optical density change and sample heterogeneity. Finally, a Fourier phase-based explanation of Doppler optical coherence tomography is also provided. PMID:26831383

  20. A quantitative flood risk analysis methodology for urban areas with integration of social research data

    NASA Astrophysics Data System (ADS)

    Escuder-Bueno, I.; Castillo-Rodríguez, J. T.; Zechner, S.; Jöbstl, C.; Perales-Momparler, S.; Petaccia, G.

    2012-09-01

    Risk analysis has become a top priority for authorities and stakeholders in many European countries, with the aim of reducing flooding risk, considering the population's needs and improving risk awareness. Within this context, two methodological pieces were developed in the period 2009-2011 within the SUFRI project (Sustainable Strategies of Urban Flood Risk Management with non-structural measures to cope with the residual risk, 2nd ERA-Net CRUE Funding Initiative). First, the "SUFRI Methodology for pluvial and river flooding risk assessment in urban areas to inform decision-making" provides a comprehensive and quantitative tool for flood risk analysis. Second, the "Methodology for investigation of risk awareness of the population concerned" presents the basis to estimate current risk from a social perspective and identify tendencies in the way floods are understood by citizens. Outcomes of both methods are integrated in this paper with the aim of informing decision making on non-structural protection measures. The results of two case studies are shown to illustrate practical applications of this developed approach. The main advantage of the methodology presented herein is that it provides a quantitative estimate of flooding risk before and after investing in non-structural risk mitigation measures. This can be of great interest for decision makers as it provides rational and solid information.
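
    The quantitative core of such an analysis can be caricatured as expected annual damage (EAD), the integral of damage over annual exceedance probability, compared before and after a measure. All probabilities and damage figures below are invented for illustration:

    ```python
    # Expected annual damage before/after a non-structural measure (invented numbers).
    import numpy as np

    p = np.array([0.5, 0.2, 0.1, 0.02, 0.01, 0.002])          # annual exceedance prob.
    dmg_before = np.array([0.0, 0.5, 2.0, 12.0, 25.0, 60.0])  # M EUR per event
    dmg_after = dmg_before * np.array([0.0, 0.6, 0.6, 0.7, 0.8, 0.9])  # e.g. warnings

    def ead(damage):
        # integrate damage over exceedance probability (p must ascend for trapz)
        return np.trapz(damage[::-1], p[::-1])

    print(f"EAD before ~ {ead(dmg_before):.2f} M EUR/yr, after ~ {ead(dmg_after):.2f}")
    ```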

  1. Quantitating Human Optic Disc Topography

    NASA Astrophysics Data System (ADS)

    Graebel, William P.; Cohan, Bruce E.; Pearch, Andrew C.

    1980-07-01

    A method is presented for quantitatively expressing the topography of the human optic disc, applicable in a clinical setting to the diagnosis and management of glaucoma. Photographs of the disc illuminated by a pattern of fine, high contrast parallel lines are digitized. From the measured deviation of the lines as they traverse the disc surface, disc topography is calculated, using the principles of optical sectioning. The quantitators applied to express this topography have the following advantages: sensitivity to disc shape; objectivity; going beyond the limits of cup-disc ratio estimates and volume calculations; perfect generality in a mathematical sense; and an inherent scheme for determining a non-subjective reference frame to compare different discs or the same disc over time.

  2. Development of quantitative exposure data for a pooled exposure-response analysis of 10 silica cohorts.

    PubMed

    Mannetje, Andrea 't; Steenland, Kyle; Checkoway, Harvey; Koskela, Riitta-Sisko; Koponen, Matti; Attfield, Michael; Chen, Jingqiong; Hnizdo, Eva; DeKlerk, Nicholas; Dosemeci, Mustafa

    2002-08-01

    Comprehensive quantitative silica exposure estimates over time, measured in the same units across a number of cohorts, would make possible a pooled exposure-response analysis for lung cancer. Such an analysis would help clarify the continuing controversy regarding whether silica causes lung cancer. Existing quantitative exposure data for 10 silica-exposed cohorts were retrieved from the original investigators. Occupation- and time-specific exposure estimates were either adopted/adapted or developed for each cohort, and converted to milligrams per cubic meter (mg/m³) respirable crystalline silica. Quantitative exposure assignments were typically based on a large number (thousands) of raw measurements, or otherwise consisted of exposure estimates by experts (for two cohorts). Median exposure level of the cohorts ranged between 0.04 and 0.59 mg/m³ respirable crystalline silica. Exposure estimates were partially validated via their successful prediction of silicosis in these cohorts. Existing data were successfully adopted or modified to create comparable quantitative exposure estimates over time for 10 silica-exposed cohorts, permitting a pooled exposure-response analysis. The difficulties encountered in deriving common exposure estimates across cohorts are discussed. Copyright 2002 Wiley-Liss, Inc.

  3. Measurement Models for Reasoned Action Theory.

    PubMed

    Hennessy, Michael; Bleakley, Amy; Fishbein, Martin

    2012-03-01

    Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are effect indicators that reflect the operation of a latent variable scale. We identify the issues when effect and causal indicators are present in a single analysis and conclude that both types of indicators can be incorporated in the analysis of data based on the reasoned action approach.

  4. Pulsar distances and the galactic distribution of free electrons

    NASA Technical Reports Server (NTRS)

    Taylor, J. H.; Cordes, J. M.

    1993-01-01

    The present quantitative model for Galactic free electron distribution abandons the assumption of axisymmetry and explicitly incorporates spiral arms; their shapes and locations are derived from existing radio and optical observations of H II regions. The Gum Nebula's dispersion-measure contributions are also explicitly modeled. Adjustable quantities are calibrated by reference to three different types of data. The new model is estimated to furnish distance estimates to known pulsars that are accurate to about 25 percent.

  5. Inter- and intra-observer agreement of BI-RADS-based subjective visual estimation of amount of fibroglandular breast tissue with magnetic resonance imaging: comparison to automated quantitative assessment.

    PubMed

    Wengert, G J; Helbich, T H; Woitek, R; Kapetas, P; Clauser, P; Baltzer, P A; Vogl, W-D; Weber, M; Meyer-Baese, A; Pinker, Katja

    2016-11-01

    To evaluate the inter-/intra-observer agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent unenhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intra-observer agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (κ). Inexperienced readers achieved moderate inter-/intra-observer agreement and experienced readers substantial inter- and perfect intra-observer agreement for subjective visual estimation of FGT. Practice and experience reduced observer-dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (κ = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/inter-observer agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation. • Subjective FGT estimation with MRI shows moderate intra-/inter-observer agreement in inexperienced readers. • Inter-observer agreement can be improved by practice and experience. • Automated observer-independent quantitative measurements can provide reliable and standardized assessment of FGT with MRI.
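
    Cohen's kappa, the agreement statistic used here, is simple to compute from the confusion matrix of two readers' categories; the ratings below are fabricated for illustration:

    ```python
    # Cohen's kappa for two readers' categorical (BI-RADS-style) ratings.
    import numpy as np

    r1 = np.array(list("aabbccddabcdbbcc"))        # fabricated reader-1 categories
    r2 = np.array(list("aabbcdddabcdbccc"))        # fabricated reader-2 categories
    cats = np.unique(np.concatenate([r1, r2]))
    conf = np.array([[np.logical_and(r1 == i, r2 == j).sum() for j in cats]
                     for i in cats], dtype=float)
    n = conf.sum()
    po = np.trace(conf) / n                        # observed agreement
    pe = (conf.sum(1) @ conf.sum(0)) / n**2        # chance agreement
    kappa = (po - pe) / (1 - pe)
    print(f"observed = {po:.2f}, chance = {pe:.2f}, kappa = {kappa:.2f}")
    ```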

  6. Precise Estimation of Allele Frequencies of Single-Nucleotide Polymorphisms by a Quantitative SSCP Analysis of Pooled DNA

    PubMed Central

    Sasaki, Tomonari; Tahira, Tomoko; Suzuki, Akari; Higasa, Koichiro; Kukita, Yoji; Baba, Shingo; Hayashi, Kenshi

    2001-01-01

    We show that single-nucleotide polymorphisms (SNPs) of moderate to high heterozygosity (minor allele frequencies >10%) can be efficiently detected, and their allele frequencies accurately estimated, by pooling the DNA samples and applying a capillary-based SSCP analysis. In this method, alleles are separated into peaks, and their frequencies can be reliably and accurately quantified from their peak heights (SD <1.8%). We found that as many as 40% of publicly available SNPs that were analyzed by this method have widely differing allele frequency distributions among groups of different ethnicity (parents of Centre d'Etude Polymorphisme Humaine families vs. Japanese individuals). These results demonstrate the effectiveness of the present pooling method in the reevaluation of candidate SNPs that have been collected by examination of limited numbers of individuals. The method should also serve as a robust quantitative technique for studies in which a precise estimate of SNP allele frequencies is essential—for example, in linkage disequilibrium analysis. PMID:11083945
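    The quantitative step described above reduces to reading each allele's share of the total peak height, after correcting for allele-specific amplification bias. A minimal sketch with invented peak heights (the bias factor would come from a heterozygote control; it is not a value from the paper):

```python
def allele_frequency(h_major, h_minor, bias=1.0):
    """Estimate minor-allele frequency from SSCP peak heights.

    bias is the minor/major height ratio expected for a 50:50 heterozygote
    (1.0 means no allele-specific amplification bias is assumed).
    """
    corrected_minor = h_minor / bias
    return corrected_minor / (h_major + corrected_minor)

print(allele_frequency(h_major=8200.0, h_minor=2100.0, bias=0.95))
```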

  7. Apollo Video Photogrammetry Estimation Of Plume Impingement Effects

    NASA Technical Reports Server (NTRS)

    Immer, Christopher; Lane, John; Metzger, Philip T.; Clements, Sandra

    2008-01-01

    The Constellation Project's planned return to the moon requires numerous landings at the same site. Since the top few centimeters are loosely packed regolith, plume impingement from the lander ejects the granular material at high velocities. Much work is needed to understand the physics of plume impingement during landing in order to protect hardware surrounding the landing sites. While mostly qualitative in nature, the Apollo Lunar Module landing videos can provide a wealth of quantitative information using modern photogrammetry techniques. The authors have used the digitized videos to quantify plume impingement effects of the landing exhaust on the lunar surface. The dust ejection angle from the plume is estimated at 1-3 degrees. The lofted particle density is estimated at 10^8 to 10^13 particles per cubic meter. Additionally, evidence for ejection of large 10-15 cm objects and a dependence of ejection angle on thrust are presented. Further work is ongoing to continue quantitative analysis of the landing videos.

  8. Radar QPE for hydrological design: Intensity-Duration-Frequency curves

    NASA Astrophysics Data System (ADS)

    Marra, Francesco; Morin, Efrat

    2015-04-01

    Intensity-duration-frequency (IDF) curves are widely used in flood risk management since they provide an easy link between the characteristics of a rainfall event and the probability of its occurrence. They are estimated by analyzing the extreme values of rainfall records, usually based on rain gauge data. This point-based approach raises two issues: first, hydrological design applications generally need IDF information for the entire catchment rather than for a single point; second, the representativeness of point measurements decreases with distance from the measurement location, especially in regions characterized by steep climatological gradients. Weather radar, providing high-resolution distributed rainfall estimates over wide areas, has the potential to overcome these issues. Two objections are usually raised against this approach: (i) the short length of data records and (ii) the reliability of quantitative precipitation estimation (QPE) of the extremes. This work explores the potential use of weather radar estimates for the identification of IDF curves by means of a long radar archive and a combined physical and quantitative adjustment of radar estimates. The Shacham weather radar, located in the eastern Mediterranean area (Tel Aviv, Israel), has archived data since 1990, providing rainfall estimates for 23 years over a region characterized by strong climatological gradients. Radar QPE is obtained by correcting the effects of pointing errors, ground echoes, beam blockage, attenuation, and vertical variations of reflectivity. Quantitative accuracy is then ensured with a range-dependent bias adjustment technique, and the reliability of radar QPE is assessed by comparison with gauge measurements. IDF curves are derived from the radar data using the annual extremes method and compared with gauge-based curves. Results from 14 study cases will be presented, focusing on the effects of record length and QPE accuracy, exploring the potential application of radar IDF curves for ungauged locations, and providing insights on the use of radar QPE for hydrological design studies.
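    For one duration, the annual extremes method mentioned above amounts to fitting an extreme-value distribution to the series of annual maximum intensities and inverting it at the desired return period. A minimal sketch using a Gumbel fit on synthetic data (the 23-year series below is random, standing in for radar QPE at one pixel; the paper's exact fitting choices may differ):

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(7)
annual_max = rng.gumbel(loc=25.0, scale=8.0, size=23)  # mm/h, synthetic

loc, scale = gumbel_r.fit(annual_max)
for T in (2, 10, 50):                    # return periods in years
    p = 1.0 - 1.0 / T                    # non-exceedance probability
    print(f"{T}-yr intensity: {gumbel_r.ppf(p, loc, scale):.1f} mm/h")
```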

  9. Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.

    PubMed

    Obuchowski, Nancy A; Bullen, Jennifer

    2017-01-01

    Introduction Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
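    The confidence interval construction described above is, under the no-bias assumption, a direct use of the within-subject precision from a test-retest study. A minimal sketch (the numbers are invented):

```python
def qib_confidence_interval(y, wsd, z=1.96):
    """95% CI for a new patient's true value, given measurement y and the
    within-subject SD (wSD) from a technical performance study; assumes
    negligible bias, per the discussion above."""
    return y - z * wsd, y + z * wsd

print(qib_confidence_interval(y=42.0, wsd=3.1))   # e.g. a tumor volume in mL
```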

  10. A Novel Method of Combining Blood Oxygenation and Blood Flow Sensitive Magnetic Resonance Imaging Techniques to Measure the Cerebral Blood Flow and Oxygen Metabolism Responses to an Unknown Neural Stimulus

    PubMed Central

    Simon, Aaron B.; Griffeth, Valerie E. M.; Wong, Eric C.; Buxton, Richard B.

    2013-01-01

    Simultaneous implementation of magnetic resonance imaging methods for Arterial Spin Labeling (ASL) and Blood Oxygenation Level Dependent (BOLD) imaging makes it possible to quantitatively measure the changes in cerebral blood flow (CBF) and cerebral oxygen metabolism (CMRO2) that occur in response to neural stimuli. To date, however, the range of neural stimuli amenable to quantitative analysis is limited to those that may be presented in a simple block or event related design such that measurements may be repeated and averaged to improve precision. Here we examined the feasibility of using the relationship between cerebral blood flow and the BOLD signal to improve dynamic estimates of blood flow fluctuations as well as to estimate metabolic-hemodynamic coupling under conditions where a stimulus pattern is unknown. We found that by combining the information contained in simultaneously acquired BOLD and ASL signals through a method we term BOLD Constrained Perfusion (BCP) estimation, we could significantly improve the precision of our estimates of the hemodynamic response to a visual stimulus and, under the conditions of a calibrated BOLD experiment, accurately determine the ratio of the oxygen metabolic response to the hemodynamic response. Importantly we were able to accomplish this without utilizing a priori knowledge of the temporal nature of the neural stimulus, suggesting that BOLD Constrained Perfusion estimation may make it feasible to quantitatively study the cerebral metabolic and hemodynamic responses to more natural stimuli that cannot be easily repeated or averaged. PMID:23382977

  11. Uncertainty analysis in vulnerability estimations for elements at risk- a review of concepts and some examples on landslides

    NASA Astrophysics Data System (ADS)

    Ciurean, R. L.; Glade, T.

    2012-04-01

    Decision under uncertainty is a constant of everyday life and an important component of risk management and governance. Recently, experts have emphasized the importance of quantifying uncertainty in all phases of landslide risk analysis. Due to its multi-dimensional and dynamic nature, (physical) vulnerability is inherently complex, and the "degree of loss" estimates are imprecise and to some extent even subjective. Uncertainty analysis introduces quantitative modeling approaches that allow for a more explicitly objective output, improving the risk management process as well as enhancing communication between various stakeholders for better risk governance. This study presents a review of concepts for uncertainty analysis in the vulnerability of elements at risk to landslides. Different semi-quantitative and quantitative methods are compared based on their feasibility in real-world situations, hazard dependency, process stage in vulnerability assessment (i.e., input data, model, output), and applicability within an integrated landslide hazard and risk framework. The resulting observations will help to identify current gaps and future needs in vulnerability assessment, including estimation of uncertainty propagation, transferability of the methods, and development of visualization tools, but they also address basic questions such as what uncertainty is and how it can be quantified or treated in a reliable and reproducible way.

  12. Is There a Safe Level of Exposure to a Carcinogen?

    ERIC Educational Resources Information Center

    Hrudey, Steve E.; Krewski, Daniel

    1995-01-01

    Presents an approach to estimating the "safe" levels of low-dose exposure to carcinogens that involves working upward from the smallest conceivable chronic dose instead of extrapolating downward from high exposures. Discusses expert and public opinion and other issues related to quantitative cancer risk assessment. (LZ)

  13. TOXNET: Toxicology Data Network

    MedlinePlus

    Contents include supporting data for carcinogenicity, a quantitative estimate of carcinogenic risk from oral exposure, statements of confidence (carcinogenicity, oral exposure), and a quantitative estimate of carcinogenic risk from inhalation exposure.

  14. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    PubMed

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
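    For contrast, here is a minimal sketch of the traditional calibration the paper critiques: fit a line of assay signal against log10 density over the calibration standards, then invert it for unknowns. The Bayesian mixed-model approach goes further by pooling replica assays and letting precision vary with density; all numbers below are invented.

```python
import numpy as np

log10_density = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # standards
signal = np.array([38.1, 34.9, 31.6, 28.4, 25.2, 21.9])   # e.g. Ct values

slope, intercept = np.polyfit(log10_density, signal, 1)   # straight-line fit

def estimate_density(observed_signal):
    """Invert the standard curve for an unknown sample."""
    return 10.0 ** ((observed_signal - intercept) / slope)

print(f"{estimate_density(30.0):.0f} gametocytes/uL (hypothetical unknown)")
```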

  15. Monitoring vegetation conditions from LANDSAT for use in range management

    NASA Technical Reports Server (NTRS)

    Haas, R. H.; Deering, D. W.; Rouse, J. W., Jr.; Schell, J. A.

    1975-01-01

    A summary of the LANDSAT Great Plains Corridor projects and the principal results are presented. Emphasis is given to the use of satellite acquired phenological data for range management and agri-business activities. A convenient method of reducing LANDSAT MSS data to provide quantitative estimates of green biomass on rangelands in the Great Plains is explained. Suggestions for the use of this approach for evaluating range feed conditions are presented. A LANDSAT Follow-on project has been initiated which will employ the green biomass estimation method in a quasi-operational monitoring of range readiness and range feed conditions on a regional scale.

  16. Unbiased estimation of oceanic mean rainfall from satellite borne radiometer measurements

    NASA Technical Reports Server (NTRS)

    Mittal, M. C.

    1981-01-01

    The statistical properties of the radar-derived rainfall obtained during the GARP Atlantic Tropical Experiment (GATE) are used to derive quantitative estimates of the spatial and temporal sampling errors associated with estimating rainfall from brightness temperature measurements such as would be obtained from a satellite-borne microwave radiometer employing a practical-size antenna aperture. A basis is provided for a method of correcting the so-called beam-filling problem, i.e., the effect of nonuniform rainfall over the radiometer beamwidth. The method presented employs the statistical properties of the observations themselves, without need for physical assumptions beyond those associated with the radiative transfer model. The simulation results presented offer a validation of the estimated accuracy that can be achieved, and the graphs included permit evaluation of the effect of the antenna resolution on both the temporal and spatial sampling errors.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thangavelu, Pulari U.; Gupta, Vipul; Dixit, Narendra M., E-mail: narendra@chemeng.iisc.ernet.in

    The contest between the host factor APOBEC3G (A3G) and the HIV-1 protein Vif presents an attractive target of intervention. The extent to which the A3G–Vif interaction must be suppressed to tilt the balance in favor of A3G remains unknown. We employed stochastic simulations and mathematical modeling of the within-host dynamics and evolution of HIV-1 to estimate the fraction of progeny virions that must incorporate A3G to render productive infection unsustainable. Using three different approaches, we found consistently that a transition from sustained infection to suppression of productive infection occurred when the latter fraction exceeded ~0.8. The transition was triggered by A3G-induced hypermutations that led to premature stop codons compromising viral production and was consistent with driving the basic reproductive number, R0, below unity. The fraction identified may serve as a quantitative guideline for strategies targeting the A3G–Vif axis. Highlights: • We perform simulations and mathematical modeling of the role of APOBEC3G in suppressing HIV-1 infection. • In three distinct ways, we estimate that when over 80% of progeny virions carry APOBEC3G, productive HIV-1 infection would be suppressed. • Our estimate of this critical fraction presents quantitative guidelines for strategies targeting the APOBEC3G–Vif axis.
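    A back-of-envelope calculation, not taken from the paper, shows why a critical fraction near 0.8 is plausible: if virions carrying A3G are assumed completely non-infectious, the effective reproductive number is scaled by (1 - f), where f is the A3G-bearing fraction.

```latex
% Illustrative sketch only: f = fraction of progeny virions incorporating A3G.
\[
  R_{\mathrm{eff}} = R_0\,(1 - f) < 1
  \quad\Longrightarrow\quad
  f > f^{*} = 1 - \frac{1}{R_0}.
\]
% A hypothetical in vivo R_0 of about 5 then gives f^* = 1 - 1/5 = 0.8,
% consistent with the transition reported above.
```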

  18. Computation of mass-density images from x-ray refraction-angle images.

    PubMed

    Wernick, Miles N; Yang, Yongyi; Mondal, Indrasis; Chapman, Dean; Hasnah, Moumen; Parham, Christopher; Pisano, Etta; Zhong, Zhong

    2006-04-07

    In this paper, we investigate the possibility of computing quantitatively accurate images of mass density variations in soft tissue. This is a challenging task, because density variations in soft tissue, such as the breast, can be very subtle. Beginning from an image of refraction angle created by either diffraction-enhanced imaging (DEI) or multiple-image radiography (MIR), we estimate the mass-density image using a constrained least squares (CLS) method. The CLS algorithm yields accurate density estimates while effectively suppressing noise. Our method improves on an analytical method proposed by Hasnah et al (2005 Med. Phys. 32 549-52), which can produce significant artefacts when even a modest level of noise is present. We present a quantitative evaluation study to determine the accuracy with which mass density can be determined in the presence of noise. Based on computer simulations, we find that the mass-density estimation error can be as low as a few per cent for typical density variations found in the breast. Example images computed from less-noisy real data are also shown to illustrate the feasibility of the technique. We anticipate that density imaging may have application in assessment of water content of cartilage resulting from osteoarthritis, in evaluation of bone density, and in mammographic interpretation.

  19. Estimation of hydrolysis rate constants for carbamates ...

    EPA Pesticide Factsheets

    Cheminformatics-based tools, such as the Chemical Transformation Simulator under development in EPA's Office of Research and Development, are being increasingly used to evaluate chemicals for their potential to degrade in the environment or be transformed through metabolism. Hydrolysis represents a major environmental degradation pathway; unfortunately, hydrolysis rates are in the public domain for only a small fraction of the roughly 85,000 chemicals on the Toxic Substances Control Act (TSCA) inventory, making it critical to develop in silico approaches to estimate hydrolysis rate constants. In this presentation, we compare three complementary approaches to estimating hydrolysis rates for carbamates, an important chemical class widely used in agriculture as pesticides, herbicides, and fungicides. Fragment-based Quantitative Structure Activity Relationships (QSARs) using Hammett-Taft sigma constants are widely published and implemented for relatively simple functional groups such as carboxylic acid esters, phthalate esters, and organophosphate esters, and we extend these to carbamates. We also develop a pKa-based model and a quantitative structure property relationship (QSPR) model, and evaluate them against measured rate constants using R-squared and root-mean-square (RMS) error. Our work shows that for our relatively small sample size of carbamates, a Hammett-Taft based fragment model performs best, followed by the pKa and QSPR models.
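    A minimal sketch of the fragment-based Hammett-Taft idea: the log rate constant is modeled as a linear function of summed substituent constants. Every number below (the intercept, the susceptibility constant rho, and the sigma* values) is a hypothetical placeholder, not a fitted value from the EPA work:

```python
LOG_K0 = -1.2     # hypothetical intercept for the parent carbamate
RHO = 2.1         # hypothetical susceptibility constant
SIGMA_STAR = {"methyl": 0.00, "phenyl": 0.60, "chloroethyl": 0.38}  # assumed

def log_k_hydrolysis(fragments):
    """log k = log k0 + rho * sum(sigma*) over substituent fragments."""
    return LOG_K0 + RHO * sum(SIGMA_STAR[f] for f in fragments)

k = 10.0 ** log_k_hydrolysis(["methyl", "phenyl"])
print(f"estimated k_OH ~ {k:.2e} (units set by the training data)")
```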

  20. Measurement Models for Reasoned Action Theory

    PubMed Central

    Hennessy, Michael; Bleakley, Amy; Fishbein, Martin

    2012-01-01

    Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are effect indicators that reflect the operation of a latent variable scale. We identify the issues when effect and causal indicators are present in a single analysis and conclude that both types of indicators can be incorporated in the analysis of data based on the reasoned action approach. PMID:23243315

  1. Empirical Critical Loads of Atmospheric Nitrogen Deposition for Nutrient Enrichment and Acidification of Sensitive US Lakes

    EPA Science Inventory

    A critical load is a “quantitative estimate of the exposure to one or more pollutants below which significant harmful effects on specified sensitive elements of the environment do not occur according to present knowledge”. Critical loads can be either modeled or calculated empirically.

  2. Modeling Dynamic Functional Neuroimaging Data Using Structural Equation Modeling

    ERIC Educational Resources Information Center

    Price, Larry R.; Laird, Angela R.; Fox, Peter T.; Ingham, Roger J.

    2009-01-01

    The aim of this study was to present a method for developing a path analytic network model using data acquired from positron emission tomography. Regions of interest within the human brain were identified through quantitative activation likelihood estimation meta-analysis. Using this information, a "true" or population path model was then…

  3. Effects of Endogenous Formaldehyde in Nasal Tissues on Inhaled Formaldehyde Dosimetry Predictions in the Rat, Monkey, and Human Nasal Passages

    EPA Science Inventory

    Formaldehyde, a nasal carcinogen, is also an endogenous compound that is present in all living cells. Due to its high solubility and reactivity, quantitative risk estimates for inhaled formaldehyde rely on internal dose calculations in the upper respiratory tract which ...

  4. Large-scale precipitation estimation using Kalpana-1 IR measurements and its validation using GPCP and GPCC data

    NASA Astrophysics Data System (ADS)

    Prakash, Satya; Mahesh, C.; Gairola, Rakesh M.

    2011-12-01

    Large-scale precipitation estimation is very important for climate science because precipitation is a major component of the earth's water and energy cycles. In the present study, the GOES precipitation index technique has been applied to three-hourly Kalpana-1 satellite infrared (IR) images (0000, 0300, 0600, ..., 2100 hours UTC) for rainfall estimation, as a precursor to INSAT-3D. Once the temperatures of all the pixels in a grid are known, they are binned to generate a three-hourly 24-class histogram of brightness temperatures from the IR (10.5-12.5 μm) images for each 1.0° × 1.0° latitude/longitude box. Daily, monthly, and seasonal rainfall were then estimated from these three-hourly rain estimates for the entire south-west monsoon period of 2009. To investigate the potential of these rainfall estimates, the monthly and seasonal estimates were validated against Global Precipitation Climatology Project and Global Precipitation Climatology Centre data. The validation results show that the present technique works very well for large-scale precipitation estimation, both qualitatively and quantitatively. The results also suggest that this simple IR-based technique can be used to estimate rainfall over tropical areas at larger temporal scales for climatological applications.
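    The GOES precipitation index itself is a simple proportionality: accumulated rain is a fixed rate, times the fraction of pixels colder than a threshold, times the accumulation time (commonly 3 mm/h and 235 K). A worked sketch with invented brightness temperatures:

```python
import numpy as np

def gpi_rain_mm(tb_kelvin, hours, threshold_k=235.0, rate_mm_per_h=3.0):
    """GPI: rain = rate x cold-cloud fraction x accumulation time."""
    cold_fraction = np.mean(np.asarray(tb_kelvin) < threshold_k)
    return rate_mm_per_h * cold_fraction * hours

tb = [228.0, 241.0, 233.0, 250.0, 224.0, 236.0]   # one 1x1 deg box, invented
print(gpi_rain_mm(tb, hours=3.0), "mm in 3 h")    # 3 of 6 pixels cold -> 4.5
```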

  5. Blood flow estimation in gastroscopic true-color images

    NASA Astrophysics Data System (ADS)

    Jacoby, Raffael S.; Herpers, Rainer; Zwiebel, Franz M.; Englmeier, Karl-Hans

    1995-05-01

    The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. The quantity of blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by approximately calculating the hemoglobin concentration from a spectrophotometric analysis of endoscopic true-color images, which are recorded during routine examinations. A system model based on the reflectance spectroscopic law of Kubelka-Munk is derived, which enables an estimation of the hemoglobin concentration from the color values of the images. Additionally, a transformation of the color values is developed in order to improve luminance independence. By applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed. The obtained results are largely independent of luminance. An initial validation of the presented method is performed by a quantitative estimation of its reproducibility.

  6. A quantitative estimate of schema abnormality in socially anxious and non-anxious individuals.

    PubMed

    Wenzel, Amy; Brendle, Jennifer R; Kerr, Patrick L; Purath, Donna; Ferraro, F Richard

    2007-01-01

    Although cognitive theories of anxiety suggest that anxious individuals are characterized by abnormal threat-relevant schemas, few empirical studies have estimated the nature of these cognitive structures using quantitative methods that lend themselves to inferential statistical analysis. In the present study, socially anxious (n = 55) and non-anxious (n = 62) participants completed 3 Q-Sort tasks to assess their knowledge of events that commonly occur in social or evaluative scenarios. Participants either sorted events according to how commonly they personally believe the events occur (i.e. "self" condition), or to how commonly they estimate that most people believe they occur (i.e. "other" condition). Participants' individual Q-Sorts were correlated with mean sorts obtained from a normative sample to obtain an estimate of schema abnormality, with lower correlations representing greater levels of abnormality. Relative to non-anxious participants, socially anxious participants' sorts were less strongly associated with sorts of the normative sample, particularly in the "self" condition, although secondary analyses suggest that some significant results might be explained, in part, by depression and experience with the scenarios. These results provide empirical support for the theoretical notion that threat-relevant self-schemas of anxious individuals are characterized by some degree of abnormality.
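    The abnormality estimate described above is just the correlation between an individual's sort and the normative mean sort. A minimal sketch with invented ranks for eight events:

```python
import numpy as np

participant_sort = np.array([5, 3, 4, 1, 2, 6, 7, 8])              # one person
normative_mean = np.array([4.8, 3.2, 4.1, 1.5, 2.2, 5.9, 6.8, 7.5])

r = np.corrcoef(participant_sort, normative_mean)[0, 1]
print(round(r, 3))   # lower r = greater estimated schema abnormality
```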

  7. Identification and Quantitation of Flavanols and Proanthocyanidins in Foods: How Good are the Datas?

    PubMed Central

    Kelm, Mark A.; Hammerstone, John F.; Schmitz, Harold H.

    2005-01-01

    Evidence suggesting that dietary polyphenols, flavanols, and proanthocyanidins in particular offer significant cardiovascular health benefits is rapidly increasing. Accordingly, reliable and accurate methods are needed to provide qualitative and quantitative food composition data necessary for high quality epidemiological and clinical research. Measurements for flavonoids and proanthocyanidins have employed a range of analytical techniques, with various colorimetric assays still being popular for estimating total polyphenolic content in foods and other biological samples despite advances made with more sophisticated analyses. More crudely, estimations of polyphenol content as well as antioxidant activity are also reported with values relating to radical scavenging activity. High-performance liquid chromatography (HPLC) is the method of choice for quantitative analysis of individual polyphenols such as flavanols and proanthocyanidins. Qualitative information regarding proanthocyanidin structure has been determined by chemical methods such as thiolysis and by HPLC-mass spectrometry (MS) techniques at present. The lack of appropriate standards is the single most important factor that limits the aforementioned analyses. However, with ever expanding research in the arena of flavanols, proanthocyanidins, and health and the importance of their future inclusion in food composition databases, the need for standards becomes more critical. At present, sufficiently well-characterized standard material is available for selective flavanols and proanthocyanidins, and construction of at least a limited food composition database is feasible. PMID:15712597

  8. A quantitative risk assessment for the safety of carcase storage systems for scrapie infected farms.

    PubMed

    Adkin, A; Jones, D L; Eckford, R L; Edwards-Jones, G; Williams, A P

    2014-10-01

    To determine the risk associated with the use of carcase storage vessels on a scrapie-infected farm. A stochastic quantitative risk assessment was developed to determine the rate of accumulation and fate of scrapie in a novel low-input storage system. For an example farm infected with classical scrapie, a mean of 10^3.6 ovine oral ID50s was estimated to accumulate annually. Research indicates that the degradation of any prions present may range from insignificant to a magnitude of one or two logs over several months of storage. For infected farms, the likely partitioning of remaining prion into the sludge phase would necessitate the safe operation and removal of resulting materials from these systems. If complete mixing could be assumed, on average, the concentrations of infectivity are estimated to be slightly lower than those measured in placenta from infected sheep at lambing. This is the first quantitative assessment of the scrapie risk associated with fallen stock on farm, and it provides guidance to policy makers on the safety of one type of storage system and the relative risk when compared to other materials present on an infected farm. © 2014 Crown Copyright. Journal of Applied Microbiology © 2014 Society for Applied Microbiology. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.

  9. Reproducibility and Accuracy of Quantitative Myocardial Blood Flow Using 82Rb-PET: Comparison with 13N-Ammonia

    PubMed Central

    Fakhri, Georges El

    2011-01-01

    82Rb cardiac PET allows the assessment of myocardial perfusion using a column generator in clinics that lack a cyclotron. We and others have previously shown that quantitation of myocardial blood flow (MBF) and coronary flow reserve (CFR) is feasible using dynamic 82Rb PET and factor and compartment analyses. The aim of the present work was to determine the intra- and inter-observer variability of MBF estimation using 82Rb PET, as well as the reproducibility of our generalized factor + compartment analyses methodology to estimate MBF, and to assess its accuracy by comparing, in the same subjects, 82Rb estimates of MBF to those obtained using 13N-ammonia. Methods Twenty-two subjects were included in the reproducibility study and twenty subjects in the validation study. Patients were injected with 60 ± 5 mCi of 82Rb and imaged dynamically for 6 minutes at rest and during dipyridamole stress. Left and right ventricular (LV+RV) time-activity curves were estimated by GFADS and used as input to a 2-compartment kinetic analysis that estimates parametric maps of myocardial tissue extraction (K1) and egress (k2), as well as LV+RV contributions (fv,rv). Results Our results show excellent reproducibility of the quantitative dynamic approach itself, with coefficients of repeatability of 1.7% for estimation of MBF at rest, 1.4% for MBF at peak stress, and 2.8% for CFR estimation. The inter-observer reproducibility between the four observers that participated in this study was also very good, with correlation coefficients greater than 0.87 between any two given observers when estimating coronary flow reserve. The reproducibility of MBF in repeated 82Rb studies was good at rest and excellent at peak stress (r2=0.835). Furthermore, the slope of the correlation line was very close to 1 when estimating stress MBF and CFR in repeated 82Rb studies. The correlation between myocardial flow estimates obtained at rest and during peak stress in 82Rb and 13N-ammonia studies was very good at rest (r2=0.843) and stress (r2=0.761). The Bland-Altman plots show no significant presence of proportional error at rest or stress, nor a dependence of the variations on the amplitude of the myocardial blood flow at rest or stress. A small systematic overestimation of 13N-ammonia MBF was observed with 82Rb at rest (0.129 ml/g/min) and the opposite, i.e., underestimation, at stress (0.22 ml/g/min). Conclusions Our results show that absolute quantitation of myocardial blood flow is reproducible and accurate with 82Rb dynamic cardiac PET as compared to 13N-ammonia. The reproducibility of the quantitation approach itself was very good, as was inter-observer reproducibility. PMID:19525467
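    Once the kinetic analysis has produced myocardial blood flow at rest and at peak stress, coronary flow reserve is simply their ratio. A trivial sketch with invented values:

```python
def coronary_flow_reserve(mbf_rest, mbf_stress):
    """CFR = stress MBF / rest MBF (both in ml/g/min)."""
    return mbf_stress / mbf_rest

print(coronary_flow_reserve(mbf_rest=0.9, mbf_stress=2.7))   # -> CFR of 3.0
```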

  10. Quantitative, spectrally-resolved intraoperative fluorescence imaging

    PubMed Central

    Valdés, Pablo A.; Leblond, Frederic; Jacobs, Valerie L.; Wilson, Brian C.; Paulsen, Keith D.; Roberts, David W.

    2012-01-01

    Intraoperative visual fluorescence imaging (vFI) has emerged as a promising aid to surgical guidance, but does not fully exploit the potential of the fluorescent agents that are currently available. Here, we introduce a quantitative fluorescence imaging (qFI) approach that converts spectrally-resolved data into images of absolute fluorophore concentration pixel-by-pixel across the surgical field of view (FOV). The resulting estimates are linear, accurate, and precise relative to true values, and spectral decomposition of multiple fluorophores is also achieved. Experiments with protoporphyrin IX in a glioma rodent model demonstrate in vivo quantitative and spectrally-resolved fluorescence imaging of infiltrating tumor margins for the first time. Moreover, we present images from human surgery which detect residual tumor not evident with state-of-the-art vFI. The wide-field qFI technique has broad implications for intraoperative surgical guidance because it provides near real-time quantitative assessment of multiple fluorescent biomarkers across the operative field. PMID:23152935

  11. Quantitative Doppler Analysis Using Conventional Color Flow Imaging Acquisitions.

    PubMed

    Karabiyik, Yucel; Ekroll, Ingvild Kinn; Eik-Nes, Sturla H; Lovstakken, Lasse

    2018-05-01

    Interleaved acquisitions used in conventional triplex mode result in a tradeoff between the frame rate and the quality of velocity estimates. On the other hand, workflow becomes inefficient when the user has to switch between different modes, and measurement variability is increased. This paper investigates the use of the power spectral Capon estimator in quantitative Doppler analysis using data acquired with conventional color flow imaging (CFI) schemes. To preserve the number of samples used for velocity estimation, only spatial averaging was utilized, and clutter rejection was performed after spectral estimation. The resulting velocity spectra were evaluated in terms of spectral width using a recently proposed spectral envelope estimator. The spectral envelopes were also used for Doppler index calculations using in vivo and string phantom acquisitions. In vivo results demonstrated that the Capon estimator can provide spectral estimates with sufficient quality for quantitative analysis using packet-based CFI acquisitions. The calculated Doppler indices were similar to the values calculated using spectrograms estimated on a commercial ultrasound scanner.
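    The Capon (minimum-variance) spectrum evaluates P(f) = 1 / (a(f)^H R^-1 a(f)) over candidate Doppler frequencies, with R the covariance of the slow-time ensemble. A minimal sketch on a synthetic 16-sample packet; here temporal subaperture averaging and diagonal loading stand in for the spatial averaging and regularization choices discussed above, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, sub = 16, 8                      # packet size, subaperture length
n = np.arange(N)
x = (np.exp(2j * np.pi * 0.2 * n)   # flow signal at normalized f = 0.2
     + 0.3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))

snaps = np.array([x[i:i + sub] for i in range(N - sub + 1)])
R = snaps.conj().T @ snaps / snaps.shape[0]          # sample covariance
R += 1e-2 * np.trace(R).real / sub * np.eye(sub)     # diagonal loading
Rinv = np.linalg.inv(R)

freqs = np.linspace(-0.5, 0.5, 256)
m = np.arange(sub)
spectrum = [1.0 / np.real(np.exp(2j * np.pi * f * m).conj() @ Rinv
                          @ np.exp(2j * np.pi * f * m)) for f in freqs]
print(round(freqs[int(np.argmax(spectrum))], 3))     # peaks near 0.2
```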

  12. Smile line assessment comparing quantitative measurement and visual estimation.

    PubMed

    Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie

    2011-02-01

    Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  13. Quantitative analysis of pyroglutamic acid in peptides.

    PubMed

    Suzuki, Y; Motoi, H; Sato, K

    1999-08-01

    A simplified and rapid procedure for the determination of pyroglutamic acid in peptides was developed. The method involves the enzymatic cleavage of the N-terminal pyroglutamate residue using a thermostable pyroglutamate aminopeptidase and isocratic HPLC separation of the resulting enzymatic hydrolysate using a column-switching technique. Pyroglutamate aminopeptidase from a thermophilic archaebacterium, Pyrococcus furiosus, cleaves the N-terminal pyroglutamic acid residue independently of the molecular weight of the substrate. It cleaves more than 85% of pyroglutamate from peptides whose molecular weights range from 362.4 to 4599.4 Da. Thus, a new method is presented that quantitatively estimates the N-terminal pyroglutamic acid residue in peptides.

  14. Bayesian aggregation versus majority vote in the characterization of non-specific arm pain based on quantitative needle electromyography

    PubMed Central

    2010-01-01

    Background Methods for the calculation and application of quantitative electromyographic (EMG) statistics for the characterization of EMG data detected from forearm muscles of individuals with and without pain associated with repetitive strain injury are presented. Methods A classification procedure using a multi-stage application of Bayesian inference is presented that characterizes a set of motor unit potentials acquired using needle electromyography. The utility of this technique in characterizing EMG data obtained from both normal individuals and those presenting with symptoms of "non-specific arm pain" is explored and validated. The efficacy of the Bayesian technique is compared with simple voting methods. Results The aggregate Bayesian classifier presented is found to perform with accuracy equivalent to that of majority voting on the test data, with an overall accuracy greater than 0.85. Theoretical foundations of the technique are discussed, and are related to the observations found. Conclusions Aggregation of motor unit potential conditional probability distributions estimated using quantitative electromyographic analysis, may be successfully used to perform electrodiagnostic characterization of "non-specific arm pain." It is expected that these techniques will also be able to be applied to other types of electrodiagnostic data. PMID:20156353

  15. FT-Raman spectral analysis of human urinary stones.

    PubMed

    Selvaraju, R; Raja, A; Thiruppathi, G

    2012-12-01

    FT-Raman spectroscopy is a useful tool for bio-medical diagnostics. In the present study, the FT-Raman spectral method is used to investigate the chemical composition of urinary calculi. Urinary calculi multi-components such as calcium oxalate, hydroxyapatite, struvite, and uric acid are studied. FT-Raman spectra were recorded in the range of 3500-400 cm^-1, and the chemical compounds were identified from their Raman bands. Quantitative estimates of calcium oxalate monohydrate (COM, 1463 cm^-1), calcium oxalate dihydrate (COD, 1478 cm^-1), hydroxyapatite (959 cm^-1), struvite (575 cm^-1), uric acid (1283 cm^-1), and oxammite (ammonium oxalate monohydrate, 2129 cm^-1) are calculated using these characteristic peaks of the FT-Raman spectrum. Quantitative estimation of the components of human urinary stones was performed using a single calibration curve. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. An assessment of the risk of foreign animal disease introduction into the United States of America through garbage from Alaskan cruise ships.

    PubMed

    McElvaine, M D; McDowell, R M; Fite, R W; Miller, L

    1993-12-01

    The United States Department of Agriculture, Animal and Plant Health Inspection Service (USDA-APHIS) has been exploring methods of quantitative risk assessment to support decision-making, provide risk management options and identify research needs. With current changes in world trade, regulatory decisions must have a scientific basis which is transparent, consistent, documentable and defensible. These quantitative risk assessment methods are described in an accompanying paper in this issue. In the present article, the authors provide an illustration by presenting an application of these methods. Prior to proposing changes in regulations, USDA officials requested an assessment of the risk of introduction of foreign animal disease to the United States of America through garbage from Alaskan cruise ships. The risk assessment team used a combination of quantitative and qualitative methods to evaluate this question. Quantitative risk assessment methods were used to estimate the amount of materials of foreign origin being sent to Alaskan landfills. This application of quantitative risk assessment illustrates the flexibility of the methods in addressing specific questions. By applying these methods, specific areas were identified where more scientific information and research were needed. Even with limited information, the risk assessment provided APHIS management with a scientific basis for a regulatory decision.

  17. Practical no-gold-standard evaluation framework for quantitative imaging methods: application to lesion segmentation in positron emission tomography

    PubMed Central

    Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.

    2017-01-01

    Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883

  18. Application of real-time PCR for total airborne bacterial assessment: Comparison with epifluorescence microscopy and culture-dependent methods

    NASA Astrophysics Data System (ADS)

    Rinsoz, Thomas; Duquenne, Philippe; Greff-Mirguet, Guylaine; Oppliger, Anne

    Traditional culture-dependent methods to quantify and identify airborne microorganisms are limited by factors such as short-duration sampling times and the inability to count non-culturable or non-viable bacteria. Consequently, the quantitative assessment of bioaerosols is often underestimated. Use of the real-time quantitative polymerase chain reaction (Q-PCR) to quantify bacteria in environmental samples presents an alternative method, which should overcome this problem. The aim of this study was to evaluate the performance of a real-time Q-PCR assay as a simple and reliable way to quantify the airborne bacterial load within poultry houses and sewage treatment plants, in comparison with epifluorescence microscopy and culture-dependent methods. The estimates of bacterial load that we obtained from real-time PCR and epifluorescence methods are comparable; however, our analysis of sewage treatment plants indicates these methods give values 270- to 290-fold greater than those obtained by the "impaction on nutrient agar" method. The culture-dependent method of air impaction on nutrient agar was also inadequate in poultry houses, as was the impinger-culture method, which gave a bacterial load estimate 32-fold lower than that obtained by Q-PCR. Real-time quantitative PCR thus proves to be a reliable, discerning, and simple method that could be used to estimate airborne bacterial load in a broad variety of other environments expected to carry high numbers of airborne bacteria.

  19. Estimation of two-dimensional motion velocity using ultrasonic signals beamformed in Cartesian coordinate for measurement of cardiac dynamics

    NASA Astrophysics Data System (ADS)

    Kaburaki, Kaori; Mozumi, Michiya; Hasegawa, Hideyuki

    2018-07-01

    Methods for the estimation of two-dimensional (2D) velocity and displacement of physiological tissues are necessary for quantitative diagnosis. In echocardiography with a phased array probe, the accuracy in the estimation of the lateral motion is lower than that of the axial motion. To improve the accuracy in the estimation of the lateral motion, in the present study, the coordinate system for ultrasonic beamforming was changed from the conventional polar coordinate to the Cartesian coordinate. In a basic experiment, the motion velocity of a phantom, which was moved at a constant speed, was estimated by the conventional and proposed methods. The proposed method reduced the bias error and standard deviation in the estimated motion velocities. In an in vivo measurement, intracardiac blood flow was analyzed by the proposed method.

  20. Predicting Loss-of-Control Boundaries Toward a Piloting Aid

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan; Stepanyan, Vahram; Krishnakumar, Kalmanje

    2012-01-01

    This work presents an approach to predicting loss-of-control with the goal of providing the pilot a decision aid focused on maintaining the pilot's control action within predicted loss-of-control boundaries. The predictive architecture combines quantitative loss-of-control boundaries, a data-based predictive control boundary estimation algorithm and an adaptive prediction method to estimate Markov model parameters in real-time. The data-based loss-of-control boundary estimation algorithm estimates the boundary of a safe set of control inputs that will keep the aircraft within the loss-of-control boundaries for a specified time horizon. The adaptive prediction model generates estimates of the system Markov Parameters, which are used by the data-based loss-of-control boundary estimation algorithm. The combined algorithm is applied to a nonlinear generic transport aircraft to illustrate the features of the architecture.

  1. Instream flow assessment and economic valuation: a survey of nonmarket benefits research

    USGS Publications Warehouse

    Douglas, Aaron J.; Johnson, Richard L.

    1993-01-01

    Instream flow benefits for United States streams and rivers have recently been investigated by a number of resource economists. These valuation efforts differ in scope, method, and quantitative results. An assessment and review of these valuation efforts is presented. The various sources of differences in non-market values produced by these studies are explored in some detail. The considerable difficulty of producing estimates of instream flow benefit values that consider all of the pertinent policy and technical issues is delineated in various policy contexts. Evidence is presented that indicates that the considerable policy impact of recent research on this topic is justified despite considerable variation in the magnitude of the estimates.

  2. 75 FR 35990 - Endangered and Threatened Wildlife and Plants; Listing the Flying Earwig Hawaiian Damselfly and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-24

    ... this location in 2008. No quantitative estimate of the size of this remaining population is available... observed in 1998. No quantitative estimates of the size of the extant populations are available. Howarth...

  3. A General Model for Estimating Macroevolutionary Landscapes.

    PubMed

    Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef

    2018-03-01

    The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is, however, limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model behaves well both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied, since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.]
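    The equation underlying FPK is the Fokker-Planck (Kolmogorov forward) equation. In a generic one-dimensional parameterization, which may differ in detail from the paper's, the trait density evolves under a deterministic force derived from a potential V(x) plus diffusion, and the stationary density is the landscape itself:

```latex
\[
  \frac{\partial p(x,t)}{\partial t}
    = \frac{\partial}{\partial x}\!\left[ V'(x)\, p(x,t) \right]
    + \frac{\sigma^{2}}{2}\,\frac{\partial^{2} p(x,t)}{\partial x^{2}},
  \qquad
  p_{\infty}(x) \propto \exp\!\left( -\frac{2 V(x)}{\sigma^{2}} \right).
\]
```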

  4. Quantitative estimation of Nipah virus replication kinetics in vitro

    PubMed Central

    Chang, Li-Yen; Ali, AR Mohd; Hassan, Sharifah Syed; AbuBakar, Sazaly

    2006-01-01

    Background Nipah virus is a zoonotic virus isolated from an outbreak in Malaysia in 1998. The virus causes infections in humans, pigs, and several other domestic animals. It has also been isolated from fruit bats. The pathogenesis of Nipah virus infection is still not well described. In the present study, Nipah virus replication kinetics were estimated from infection of African green monkey kidney cells (Vero) using the one-step SYBR® Green I-based quantitative real-time reverse transcriptase-polymerase chain reaction (qRT-PCR) assay. Results The qRT-PCR had a dynamic range of at least seven orders of magnitude and can detect Nipah virus from as low as one PFU/μL. Following initiation of infection, it was estimated that Nipah virus RNA doubles every ~40 minutes and attains a peak intracellular virus RNA level of ~8.4 log PFU/μL at about 32 hours post-infection (PI). Significant extracellular Nipah virus RNA release occurred only after 8 hours PI, and the level peaked at ~7.9 log PFU/μL at 64 hours PI. The estimated rate of Nipah virus RNA release into the cell culture medium was ~0.07 log PFU/μL per hour, and less than 10% of the released Nipah virus RNA was infectious. Conclusion The SYBR® Green I-based qRT-PCR assay enabled quantitative assessment of Nipah virus RNA synthesis in Vero cells. A low rate of Nipah virus extracellular RNA release and a low infectious virus yield, together with extensive syncytial formation during infection, support a cell-to-cell spread mechanism for Nipah virus infection. PMID:16784519

  5. EvolQG - An R package for evolutionary quantitative genetics

    PubMed Central

    Melo, Diogo; Garcia, Guilherme; Hubbe, Alex; Assis, Ana Paula; Marroig, Gabriel

    2016-01-01

    We present an open source package for performing evolutionary quantitative genetics analyses in the R environment for statistical computing. Evolutionary theory shows that evolution depends critically on the available variation in a given population. When dealing with many quantitative traits this variation is expressed in the form of a covariance matrix, particularly the additive genetic covariance matrix or sometimes the phenotypic matrix, when the genetic matrix is unavailable and there is evidence the phenotypic matrix is sufficiently similar to the genetic matrix. Given this mathematical representation of available variation, the EvolQG package provides functions for calculation of relevant evolutionary statistics; estimation of sampling error; corrections for this error; matrix comparison via correlations, distances and matrix decomposition; analysis of modularity patterns; and functions for testing evolutionary hypotheses on taxa diversification. PMID:27785352

  6. Proficiency testing as a basis for estimating uncertainty of measurement: application to forensic alcohol and toxicology quantitations.

    PubMed

    Wallace, Jack

    2010-05-01

    While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
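    For approach (ii), a minimal sketch: the standard uncertainty is the root-mean-square deviation of the laboratory's proficiency results from the participant means, and doubling it gives an expanded uncertainty with roughly 95% coverage. All values below are invented:

```python
import numpy as np

lab_results = np.array([0.081, 0.102, 0.149, 0.078, 0.121])        # g/100 mL
participant_means = np.array([0.080, 0.100, 0.152, 0.080, 0.119])  # consensus

u = np.sqrt(np.mean((lab_results - participant_means) ** 2))  # standard u
print(f"u = {u:.4f}, expanded U = 2u = {2 * u:.4f} g/100 mL")
```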

  7. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.

    PubMed

    Cobbs, Gary

    2012-08-16

    Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give a better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of initial target concentration. Model 1 was found to be slightly more robust than model 2, giving better estimates of initial target concentration when estimation of parameters was done for qPCR curves with very different initial target concentrations. Both models may be used to estimate the initial absolute concentration of target sequence when a standard curve is not available. It is argued that the kinetic approach to modeling and interpreting quantitative PCR data has the potential to give more precise estimates of the true initial target concentrations than other methods currently used for analysis of qPCR data. The two models presented here give a unified model of the qPCR process in that they explain the shape of the qPCR curve for a wide variety of initial target concentrations.
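
    A toy stepwise model illustrates the qualitative behavior described here, with per-cycle efficiency set by competition between primer-target annealing and target reannealing plus primer depletion; it is a caricature for intuition, not Cobbs' actual equilibrium formulae:

        import numpy as np

        def qpcr_curve(t0, p0=5e-7, cycles=40):
            """Stepwise qPCR caricature: efficiency falls as accumulated
            target competes with primers for annealing, and primers are
            consumed, producing the familiar sigmoidal curve."""
            t, p, curve = t0, p0, []
            for _ in range(cycles):
                eff = p / (p + t)          # amplification efficiency this cycle
                new = eff * t              # newly synthesized strands
                t, p = t + new, max(p - new, 0.0)
                curve.append(t)
            return np.array(curve)

        for t0 in (1e-15, 1e-13, 1e-11):   # different initial concentrations
            print(np.argmax(qpcr_curve(t0) > 1e-8))   # threshold-crossing cycle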

  8. Lewis and Clark National Historical Park Elk Monitoring Program Annual Report 2010

    USGS Publications Warehouse

    Cole, Carla; Griffin, Paul; Jenkins, Kurt

    2012-01-01

    Data from FY09, FY10, and FY11 will be useful in the formal analyses of trend. Those three years of data will contribute to the preparation of a four-year analysis and report after only one more year. Quantitative estimates of relative use by elk throughout the Fort Clatsop unit will be provided in the four-year report in 2012. Those estimates will account for detection bias, which comes from an incomplete count of elk pellets that were present in the subplots at the time of survey.

  9. The influence of incubation time on adenovirus quantitation in A549 cells by most probable number

    EPA Science Inventory

    Cell culture based assays used to detect waterborne viruses typically call for incubating the sample for at least two weeks in order to ensure that all the culturable virus present is detected. Historically, this estimate was based, at least in part, on the length of time used fo...

  10. Confidence estimation for quantitative photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena

    2018-02-01

    Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.

  11. Quantitative myocardial perfusion from static cardiac and dynamic arterial CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Branch, Kelley R.; Alessio, Adam M.

    2018-05-01

    Quantitative myocardial blood flow (MBF) estimation by dynamic contrast enhanced cardiac computed tomography (CT) requires multi-frame acquisition of contrast transit through the blood pool and myocardium to inform the arterial input and tissue response functions. Both the input and the tissue response functions for the entire myocardium are sampled with each acquisition. However, the long breath holds and frequent sampling can result in significant motion artifacts and relatively high radiation dose. To address these limitations, we propose and evaluate a new static cardiac and dynamic arterial (SCDA) quantitative MBF approach where (1) the input function is well sampled using either prediction from pre-scan timing bolus data or measured from dynamic thin slice ‘bolus tracking’ acquisitions, and (2) the whole-heart tissue response data is limited to one contrast enhanced CT acquisition. A perfusion model uses the dynamic arterial input function to generate a family of possible myocardial contrast enhancement curves corresponding to a range of MBF values. Combined with the timing of the single whole-heart acquisition, these curves generate a lookup table relating myocardial contrast enhancement to quantitative MBF. We tested the SCDA approach in 28 patients who underwent a full dynamic CT protocol under both rest and vasodilator stress conditions. Using the measured input function plus single (enhanced CT only) or double (enhanced and contrast-free baseline CTs) myocardial acquisitions yielded MBF estimates with root mean square (RMS) errors of 1.2 ml/min/g and 0.35 ml/min/g, and radiation dose reductions of 90% and 83%, respectively. The prediction of the input function based on timing bolus data and the static acquisition had an RMS error, compared to the measured input function, of 26.0%, which led to MBF estimation errors more than threefold higher than those obtained using the measured input function. SCDA presents a new, simplified approach for quantitative perfusion imaging with an acquisition strategy offering substantial radiation dose and computational complexity savings over dynamic CT.
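
    The lookup-table step can be sketched as follows: generate candidate tissue-enhancement values at the acquisition time for a grid of MBF values, then invert the measured enhancement by interpolation. The gamma-variate input function and no-outflow tissue model below are stand-ins for the paper's perfusion model, and all numbers are invented:

        import numpy as np

        t = np.arange(0, 40.0, 0.5)                    # seconds
        aif = 400.0 * (t / 8.0) * np.exp(-t / 8.0)     # assumed gamma-variate AIF (HU)

        def enhancement(mbf_ml_min_g, idx):
            """Tissue enhancement at time index idx under a simple
            no-outflow model: flow times the integrated AIF."""
            flow = mbf_ml_min_g / 60.0                 # per-second units
            return flow * np.cumsum(aif)[idx] * (t[1] - t[0])

        mbf_grid = np.linspace(0.3, 5.0, 200)          # candidate MBF values
        idx = np.searchsorted(t, 25.0)                 # timing of the single scan
        lookup = np.array([enhancement(m, idx) for m in mbf_grid])

        measured_hu = 35.0                             # enhancement in one voxel
        print(np.interp(measured_hu, lookup, mbf_grid))  # MBF in ml/min/g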

  12. Comparison of Maximum Likelihood Estimation Approach and Regression Approach in Detecting Quantitative Trait Loci Using RAPD Markers

    Treesearch

    Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine

    1999-01-01

    Single marker regression and single marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine using a ((longleaf pine x slash pine) x slash pine) BC1 population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could...

  13. A test for selection employing quantitative trait locus and mutation accumulation data.

    PubMed

    Rice, Daniel P; Townsend, Jeffrey P

    2012-04-01

    Evolutionary biologists attribute much of the phenotypic diversity observed in nature to the action of natural selection. However, for many phenotypic traits, especially quantitative phenotypic traits, it has been challenging to test for the historical action of selection. An important challenge for biologists studying quantitative traits, therefore, is to distinguish between traits that have evolved under the influence of strong selection and those that have evolved neutrally. Most existing tests for selection employ molecular data, but selection also leaves a mark on the genetic architecture underlying a trait. In particular, the distribution of quantitative trait locus (QTL) effect sizes and the distribution of mutational effects together provide information regarding the history of selection. Despite the increasing availability of QTL and mutation accumulation data, such data have not yet been effectively exploited for this purpose. We present a model of the evolution of QTL and employ it to formulate a test for historical selection. To provide a baseline for neutral evolution of the trait, we estimate the distribution of mutational effects from mutation accumulation experiments. We then apply a maximum-likelihood-based method of inference to estimate the range of selection strengths under which such a distribution of mutations could generate the observed QTL. Our test thus represents the first integration of population genetic theory and QTL data to measure the historical influence of selection.

  14. Estimating exposures in the asphalt industry for an international epidemiological cohort study of cancer risk.

    PubMed

    Burstyn, Igor; Boffetta, Paolo; Kauppinen, Timo; Heikkilä, Pirjo; Svane, Ole; Partanen, Timo; Stücker, Isabelle; Frentzel-Beyme, Rainer; Ahrens, Wolfgang; Merzenich, Hiltrud; Heederik, Dick; Hooiveld, Mariëtte; Langård, Sverre; Randem, Britt G; Järvholm, Bengt; Bergdahl, Ingvar; Shaham, Judith; Ribak, Joseph; Kromhout, Hans

    2003-01-01

    An exposure matrix (EM) for known and suspected carcinogens was required for a multicenter international cohort study of cancer risk and bitumen among asphalt workers. Production characteristics in companies enrolled in the study were ascertained through use of a company questionnaire (CQ). Exposures to coal tar, bitumen fume, organic vapor, polycyclic aromatic hydrocarbons, diesel fume, silica, and asbestos were assessed semi-quantitatively using information from CQs, expert judgment, and statistical models. Exposures of road paving workers to bitumen fume, organic vapor, and benzo(a)pyrene were estimated quantitatively by applying regression models, based on monitoring data, to exposure scenarios identified by the CQs. Exposure estimates were derived for 217 companies enrolled in the cohort, plus the Swedish asphalt paving industry in general. Most companies were engaged in road paving and asphalt mixing, but some also participated in general construction and roofing. Coal tar use was most common in Denmark and The Netherlands, but the practice is now obsolete. Quantitative estimates of exposure to bitumen fume, organic vapor, and benzo(a)pyrene for pavers, and semi-quantitative estimates of exposure to these agents among all subjects, were strongly correlated. Semi-quantitative estimates of exposure to bitumen fume and coal tar were only moderately correlated. The EM captured a non-monotonic historical decrease in exposures to all agents assessed except silica and diesel exhaust. We produced a data-driven EM using methodology that can be adapted for other multicenter studies. Copyright 2003 Wiley-Liss, Inc.

  15. A quantitative approach to assessing the efficacy of occupant protection programs: A case study from Montana.

    PubMed

    Manlove, Kezia; Stanley, Laura; Peck, Alyssa

    2015-10-01

    Quantitative evaluation of vehicle occupant protection programs is critical for ensuring efficient government resource allocation, but few methods exist for conducting evaluation across multiple programs simultaneously. Here we present an analysis of occupant protection efficacy in the state of Montana. This approach relies on seat belt compliance rates as measured by the National Occupant Protection Usage Survey (NOPUS). A hierarchical logistic regression model is used to estimate the impacts of four Montana Department of Transportation (MDT)-funded occupant protection programs used in the state of Montana, following adjustment for a suite of potential confounders. Activity from two programs, Buckle Up coalitions and media campaigns, is associated with increased seat belt use in Montana, whereas the impact of another program, Selective Traffic Enforcement, is potentially masked by other program activity. A final program, Driver's Education, is not associated with any shift in seat belt use. This method allows for a preliminary quantitative estimation of program impacts without requiring states to obtain any new seat belt use data. This approach provides states with a preliminary look at program impacts, and a means for carefully planning future program allocation and investigation. Copyright © 2015 Elsevier Ltd. All rights reserved.
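
    A non-hierarchical sketch of the regression step (the paper's model adds grouping levels that are omitted here; the data, variable names, and effect sizes below are synthetic):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 2000                                  # synthetic NOPUS-like records
        df = pd.DataFrame({
            "media":     rng.integers(0, 2, n),   # hypothetical program indicators
            "coalition": rng.integers(0, 2, n),
            "rural":     rng.integers(0, 2, n),   # a stand-in confounder
        })
        logit_p = -0.2 + 0.4 * df.media + 0.3 * df.coalition - 0.5 * df.rural
        df["belt"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

        fit = smf.logit("belt ~ media + coalition + rural", data=df).fit(disp=0)
        print(fit.params)                         # log-odds effect estimates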

  16. Update to An Inventory of Sources and Environmental ...

    EPA Pesticide Factsheets

    In 2006, EPA published an inventory of sources and environmental releases of dioxin-like compounds in the United States. This draft report presents an update and revision to that dioxin source inventory. It also presents updated estimates of environmental releases of dioxin-like compounds to the air, water, land and products. The sources are grouped into five broad categories: combustion sources, metals smelting/refining, chemical manufacturing, natural sources, and environmental reservoirs. Estimates of annual releases to land, air, and water are presented for reference years 1987, 1995, and 2000. While the overall decreasing trend in emissions seen in the original report continues, the individual dioxin releases in this draft updated report are generally higher than the values reported in 2006. This is largely due to the inclusion (in all three years) of additional sources in the quantitative inventory that were not included in the 2006 report. The largest new source included in this draft updated inventory was forest fires. In the 2006 report, this was classified as preliminary and not included in the quantitative inventory. The top three air sources of dioxin emissions in 2000 were forest fires, backyard burning of trash, and medical waste incinerators.

  17. Quantitative Assessment of Cancer Risk from Exposure to Diesel Engine Emissions

    EPA Science Inventory

    Quantitative estimates of lung cancer risk from exposure to diesel engine emissions were developed using data from three chronic bioassays with Fischer 344 rats. Human target organ dose was estimated with the aid of a comprehensive dosimetry model. This model accounted for rat-hum...

  18. A QUANTITATIVE APPROACH FOR ESTIMATING EXPOSURE TO PESTICIDES IN THE AGRICULTURAL HEALTH STUDY

    EPA Science Inventory

    We developed a quantitative method to estimate chemical-specific pesticide exposures in a large prospective cohort study of over 58,000 pesticide applicators in North Carolina and Iowa. An enrollment questionnaire was administered to applicators to collect basic time- and inten...

  19. Statistical, economic and other tools for assessing natural aggregate

    USGS Publications Warehouse

    Bliss, J.D.; Moyle, P.R.; Bolm, K.S.

    2003-01-01

    Quantitative aggregate resource assessment provides resource estimates useful for explorationists, land managers and those who make decisions about land allocation, which may have long-term implications concerning cost and the availability of aggregate resources. Aggregate assessment needs to be systematic and consistent, yet flexible enough to allow updating without invalidating other parts of the assessment. Evaluators need to use standard or consistent aggregate classifications and statistical distributions or, in other words, models built on geological, geotechnical and economic variables and the interrelationships between them. These models can be used with subjective estimates, if needed, to estimate how much aggregate may be present in a region or country using distributions generated by Monte Carlo computer simulations.
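
    The Monte Carlo step can be as simple as propagating assumed distributions for the geologic and geotechnical variables through the tonnage relation; every distribution below is invented for illustration:

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000                                     # simulation trials

        area = rng.lognormal(np.log(2.0e6), 0.5, n)     # deposit area, m^2
        thickness = rng.lognormal(np.log(8.0), 0.4, n)  # m
        density = rng.normal(2.1, 0.1, n)               # t/m^3

        tonnage = area * thickness * density            # tonnes of aggregate
        print(np.percentile(tonnage, [10, 50, 90]))     # P10/P50/P90 estimates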

  20. Linear solvation energy relationships: "rule of thumb" for estimation of variable values

    USGS Publications Warehouse

    Hickey, James P.; Passino-Reader, Dora R.

    1991-01-01

    For the linear solvation energy relationship (LSER), values are listed for each of the variables (Vi/100, π*, βm, αm) for fundamental organic structures and functional groups. We give the guidelines to estimate LSER variable values quickly for a vast array of possible organic compounds such as those found in the environment. The difficulty in generating these variables has greatly discouraged the application of this quantitative structure-activity relationship (QSAR) method. This paper presents the first compilation of molecular functional group values together with a utilitarian set of the LSER variable estimation rules. The availability of these variable values and rules should facilitate widespread application of LSER for hazard evaluation of environmental contaminants.

  1. Investment appraisal using quantitative risk analysis.

    PubMed

    Johansson, Henrik

    2002-07-01

    Investment appraisal concerned with investments in fire safety systems is discussed. Particular attention is directed at evaluating, in terms of Bayesian decision theory, the risk reduction that investment in a fire safety system involves. It is shown how the monetary value of the change from a building design without any specific fire protection system to one including such a system can be estimated by use of quantitative risk analysis, the results of which are expressed in terms of a risk-adjusted net present value. This represents the intrinsic monetary value of investing in the fire safety system. The method suggested is exemplified by a case study performed in an Avesta Sheffield factory.
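
    The core computation reduces to discounting the expected (probability-weighted) annual reduction in fire losses against the upfront cost; a minimal sketch with invented figures:

        import numpy as np

        def risk_adjusted_npv(investment, annual_risk_reduction, years, rate):
            """NPV where the yearly benefit is the expected reduction in
            fire losses attributable to the safety system."""
            yrs = np.arange(1, years + 1)
            return (annual_risk_reduction / (1 + rate) ** yrs).sum() - investment

        # E.g. sprinklers cut annual fire probability by 0.002 against a
        # 5,000,000 loss; all numbers are illustrative assumptions.
        print(risk_adjusted_npv(80_000, 0.002 * 5_000_000, years=20, rate=0.05))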

  2. A quantitative model for transforming reflectance spectra into the Munsell color space using cone sensitivity functions and opponent process weights.

    PubMed

    D'Andrade, Roy G; Romney, A Kimball

    2003-05-13

    This article presents a computational model of the process through which the human visual system transforms reflectance spectra into perceptions of color. Using physical reflectance spectra data and standard human cone sensitivity functions we describe the transformations necessary for predicting the location of colors in the Munsell color space. These transformations include quantitative estimates of the opponent process weights needed to transform cone activations into Munsell color space coordinates. Using these opponent process weights, the Munsell position of specific colors can be predicted from their physical spectra with a mean correlation of 0.989.
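
    Structurally, the pipeline is two linear maps (spectra to cone activations, compressed activations to color coordinates), with the opponent weights obtainable by least squares; the random matrices below are placeholders for measured spectra, cone fundamentals, and Munsell coordinates:

        import numpy as np

        rng = np.random.default_rng(1)
        n, wl = 100, 31                     # chips; wavelengths (e.g. 400-700 nm)

        reflectance = rng.random((n, wl))   # placeholder measured spectra
        cones = rng.random((wl, 3))         # placeholder L, M, S sensitivities
        munsell = rng.random((n, 3))        # placeholder color coordinates

        act = np.log(reflectance @ cones)   # compressed cone activations
        A = np.hstack([act, np.ones((n, 1))])
        W, *_ = np.linalg.lstsq(A, munsell, rcond=None)  # opponent weights
        pred = A @ W                        # predicted Munsell positions
        print([round(np.corrcoef(pred[:, k], munsell[:, k])[0, 1], 3)
               for k in range(3)])          # near 0 here; high with real data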

  3. Good practices for quantitative bias analysis.

    PubMed

    Lash, Timothy L; Fox, Matthew P; MacLehose, Richard F; Maldonado, George; McCandless, Lawrence C; Greenland, Sander

    2014-12-01

    Quantitative bias analysis serves several objectives in epidemiological research. First, it provides a quantitative estimate of the direction, magnitude and uncertainty arising from systematic errors. Second, the acts of identifying sources of systematic error, writing down models to quantify them, assigning values to the bias parameters and interpreting the results combat the human tendency towards overconfidence in research results, syntheses and critiques and the inferences that rest upon them. Finally, by suggesting aspects that dominate uncertainty in a particular research result or topic area, bias analysis can guide efficient allocation of sparse research resources. The fundamental methods of bias analyses have been known for decades, and there have been calls for more widespread use for nearly as long. There was a time when some believed that bias analyses were rarely undertaken because the methods were not widely known and because automated computing tools were not readily available to implement the methods. These shortcomings have been largely resolved. We must, therefore, contemplate other barriers to implementation. One possibility is that practitioners avoid the analyses because they lack confidence in the practice of bias analysis. The purpose of this paper is therefore to describe what we view as good practices for applying quantitative bias analysis to epidemiological data, directed towards those familiar with the methods. We focus on answering questions often posed to those of us who advocate incorporation of bias analysis methods into teaching and research. These include the following. When is bias analysis practical and productive? How does one select the biases that ought to be addressed? How does one select a method to model biases? How does one assign values to the parameters of a bias model? How does one present and interpret a bias analysis? We hope that our guide to good practices for conducting and presenting bias analyses will encourage more widespread use of bias analysis to estimate the potential magnitude and direction of biases, as well as the uncertainty in estimates potentially influenced by the biases. © The Author 2014; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
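
    As one concrete instance of the genre, a simple bias analysis corrects a 2x2 table for nondifferential exposure misclassification using assumed sensitivity and specificity of exposure classification; all counts and parameter values below are made up:

        def corrected_2x2(a, b, c, d, se, sp):
            """Back-calculate true exposed counts from observed ones given
            classification sensitivity se and specificity sp."""
            def true_exposed(observed, total):
                return (observed - total * (1 - sp)) / (se - (1 - sp))
            A = true_exposed(a, a + b)       # corrected exposed cases
            C = true_exposed(c, c + d)       # corrected exposed controls
            return A, (a + b) - A, C, (c + d) - C

        a, b, c, d = corrected_2x2(500, 500, 300, 700, se=0.88, sp=0.99)
        print((a / b) / (c / d))             # bias-corrected odds ratio (~2.6)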

  4. Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory

    NASA Astrophysics Data System (ADS)

    Deyi, Feng; Ichikawa, M.

    1989-11-01

    In this paper, the various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of the time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, similarities in temporal variation of seismic activity and seismic gaps can be examined and, on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and another is based on the fuzzy equivalent relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards in different terms can be estimated. This paper mainly deals with medium-short-term precursors observed in Japan and China.

  5. APPLICATION OF RADIOISOTOPES TO THE QUANTITATIVE CHROMATOGRAPHY OF FATTY ACIDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budzynski, A.Z.; Zubrzycki, Z.J.; Campbell, I.G.

    1959-10-31

    The paper reports work done on the use of 131I, 65Zn, 90Sr, 95Zr, and 144Ce for the quantitative estimation of fatty acids on paper chromatograms, and for determination of the degree of unsaturation of components of resolved fatty acid mixtures. 131I is used to iodinate unsaturated fatty acids, and the amount of such acids is determined from the radiochromatogram. The degree of unsaturation of fatty acids is determined by estimation of the specific activity of spots. The other isotopes have been examined from the point of view of their suitability for estimation of total amounts of fatty acids by formation of insoluble radioactive soaps held on the chromatogram. In particular, work is reported on the quantitative estimation of saturated fatty acids by measurement of the activity of their insoluble soaps with radioactive metals. Various quantitative relationships are described between the amount of fatty acid in a spot and such parameters as radiometrically estimated spot length, width, maximum intensity, and integrated spot activity. A convenient detection apparatus for taking radiochromatograms is also described. In conjunction with conventional chromatographic methods for resolving fatty acids, the method permits the estimation of the composition of fatty acid mixtures obtained from biological material.

  6. Reconstruction of Absorbed Doses to Fibroglandular Tissue of the Breast of Women undergoing Mammography (1960 to the Present)

    PubMed Central

    Thierry-Chef, Isabelle; Simon, Steven L.; Weinstock, Robert M.; Kwon, Deukwoo; Linet, Martha S.

    2013-01-01

    The assessment of potential benefits versus harms from mammographic examinations as described in the controversial breast cancer screening recommendations of the U.S. Preventive Task Force included limited consideration of absorbed dose to the fibroglandular tissue of the breast (glandular tissue dose), the tissue at risk for breast cancer. Epidemiological studies on cancer risks associated with diagnostic radiological examinations often lack accurate information on glandular tissue dose, and there is a clear need for better estimates of these doses. Our objective was to develop a quantitative summary of glandular tissue doses from mammography by considering sources of variation over time in key parameters including imaging protocols, x-ray target materials, voltage, filtration, incident air kerma, compressed breast thickness, and breast composition. We estimated the minimum, maximum, and mean values for glandular tissue dose for populations of exposed women within 5-year periods from 1960 to the present, with the minimum to maximum range likely including 90% to 95% of the entirety of the dose range from mammography in North America and Europe. Glandular tissue dose from a single view in mammography is presently about 2 mGy, about one-sixth the dose in the 1960s. The ratio of our estimates of maximum to minimum glandular tissue doses for average-size breasts was about 100 in the 1960s compared to a ratio of about 5 in recent years. Findings from our analysis provide quantitative information on glandular tissue doses from mammographic examinations which can be used in epidemiologic studies of breast cancer. PMID:21988547

  7. A quantitative method for estimating dermal benzene absorption from benzene-containing hydrocarbon liquids.

    PubMed

    Petty, Stephen E; Nicas, Mark; Boiarski, Anthony A

    2011-01-01

    This study examines a method for estimating the dermal absorption of benzene contained in hydrocarbon liquids that contact the skin. This method applies to crude oil, gasoline, organic solvents, penetrants, and oils. The flux of benzene through occluded skin as a function of the percent vol/vol benzene in the liquid is derived by fitting a curve to experimental data; the function is supralinear at benzene concentrations ≤5% vol/vol. When a liquid other than pure benzene is on nonoccluded skin, benzene may preferentially evaporate from the liquid, which thereby decreases the benzene flux. We present a time-averaging method here for estimating the reduced dermal flux during evaporation. Example calculations are presented for benzene at 2% vol/vol in gasoline, and for benzene at 0.1% vol/vol in a less volatile liquid. We also discuss other factors affecting dermal absorption.

  8. Trans-dimensional and hierarchical Bayesian approaches toward rigorous estimation of seismic sources and structures in Northeast Asia

    NASA Astrophysics Data System (ADS)

    Kim, Seongryong; Tkalčić, Hrvoje; Mustać, Marija; Rhie, Junkee; Ford, Sean

    2016-04-01

    A framework is presented within which we provide rigorous estimations for seismic sources and structures in Northeast Asia. We use Bayesian inversion methods, which enable statistical estimations of models and their uncertainties based on data information. Ambiguities in error statistics and model parameterizations are addressed by hierarchical and trans-dimensional (trans-D) techniques, which can be inherently implemented in the Bayesian inversions. Hence reliable estimation of model parameters and their uncertainties is possible, thus avoiding arbitrary regularizations and parameterizations. Hierarchical and trans-D inversions are performed to develop a three-dimensional velocity model using ambient noise data. To further improve the model, we perform joint inversions with receiver function data using a newly developed Bayesian method. For the source estimation, a novel moment tensor inversion method is presented and applied to regional waveform data of the North Korean nuclear explosion tests. By the combination of new Bayesian techniques and the structural model, coupled with meaningful uncertainties related to each of the processes, more quantitative monitoring and discrimination of seismic events is possible.

  9. Quantitative phase and amplitude imaging using Differential-Interference Contrast (DIC) microscopy

    NASA Astrophysics Data System (ADS)

    Preza, Chrysanthe; O'Sullivan, Joseph A.

    2009-02-01

    We present an extension of an alternating minimization (AM) method for the computation of a specimen's complex transmittance function (magnitude and phase) from DIC images. The ability to extract both quantitative phase and amplitude information from two rotationally-diverse DIC images (i.e., acquired by rotating the sample) extends previous efforts in computational DIC microscopy that have focused on quantitative phase imaging only. Simulation results show that the inverse problem at hand is sensitive to noise as well as to the choice of the AM algorithm parameters. The AM framework allows constraints and penalties on the magnitude and phase estimates to be incorporated in a principled manner. Towards this end, Green and De Pierro's "log-cosh" regularization penalty is applied to the magnitude of differences of neighboring values of the complex-valued function of the specimen during the AM iterations. The penalty is shown to be convex in the complex space. A procedure to approximate the penalty within the iterations is presented. In addition, a methodology to pre-compute AM parameters that are optimal with respect to the convergence rate of the AM algorithm is also presented. Both extensions of the AM method are investigated with simulations.
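
    The "log-cosh" penalty mentioned here has a simple closed form; a sketch applied to a 1-D signal (the paper penalizes magnitudes of neighbor differences of a complex 2-D estimate, and the scale parameter delta below is an assumption):

        import numpy as np

        def log_cosh_penalty(x, delta=1.0):
            """Convex roughness penalty on differences of neighboring
            values; approximately quadratic for small differences and
            linear for large ones, so edges are penalized gently."""
            d = np.abs(np.diff(x)) / delta
            return (delta ** 2) * np.sum(np.log(np.cosh(d)))

        x = np.array([0.0, 0.1, 0.2, 2.0, 2.1])   # one sharp jump
        print(log_cosh_penalty(x))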

  10. Development of a quantitative real-time PCR assay for detection and enumeration of methanogenic archaea in stored swine manure

    USDA-ARS?s Scientific Manuscript database

    Storage of swine manure is associated with the microbial production of a variety of odors and emissions which result from anaerobic digestion of materials present in the manure. In the United States, methane emissions from lagoons and manure storage pits are estimated to be over 40 Tg/year, account...

  11. Preliminary ride-quality evaluation of the HM.2 Hoverferry

    NASA Technical Reports Server (NTRS)

    Mcclurken, E. W., Jr.; Jacobson, I. D.; Kuhlthau, A. R.

    1974-01-01

    The results of a forty-minute exposure of the HM.2 Hoverferry are presented. Quantitative evaluations were made from aft seats on the starboard side for a sea state considered calm and visually estimated at one-half to one foot. Since this type of craft is sensitive to sea state, the conclusions are based on ideal conditions. Some drawings are included.

  12. Standardisation of Gymnema sylvestre R.Br. by high-performance thin-layer chromatography: an improved method.

    PubMed

    Raju, Valivarthi S R; Kannababu, S; Subbaraju, Gottumukkala V

    2006-01-01

    An improved high-performance thin-layer chromatographic (HPTLC) method for the standardisation of Gymnema sylvestre is reported. The method involves the initial hydrolysis of gymnemic acids, the active ingredients, to a common aglycone followed by the quantitative estimation of gymnemagenin. The present method rectifies an error found in an HPTLC method reported recently.

  13. The application of remote sensing to the development and formulation of hydrologic planning models: Executive summary

    NASA Technical Reports Server (NTRS)

    Castruccio, P. A.; Loats, H. L., Jr.; Fowler, T. R.

    1977-01-01

    Methods for the reduction of remotely sensed data and its application in hydrologic land use assessment, surface water inventory, and soil property studies are presented. LANDSAT data is used to provide quantitative parameters and coefficients to construct watershed transfer functions for a hydrologic planning model aimed at estimating peak outflow from rainfall inputs.

  14. HPLC analysis and standardization of Brahmi vati - An Ayurvedic poly-herbal formulation.

    PubMed

    Mishra, Amrita; Mishra, Arun K; Tiwari, Om Prakash; Jha, Shivesh

    2013-09-01

    The aim of the present study was to standardize Brahmi vati (BV) by simultaneous quantitative estimation of Bacoside A3 and Piperine using an HPLC-UV method. BV is an important Ayurvedic poly-herbal formulation used to treat epilepsy and mental disorders, containing thirty-eight ingredients including Bacopa monnieri L. and Piper longum L. An HPLC-UV method was developed for the standardization of BV through simultaneous quantitative estimation of Bacoside A3 and Piperine, the major constituents of B. monnieri L. and P. longum L., respectively. The developed method was validated on parameters including linearity, precision, accuracy and robustness. The HPLC analysis showed a significantly higher amount of Bacoside A3 and Piperine in the in-house sample of BV when compared with all three different marketed samples of the same. Results showed variations in the amounts of Bacoside A3 and Piperine in different samples, which indicate non-uniformity in their quality and will lead to differences in their therapeutic effects. The outcome of the present investigation underlines the importance of standardization of Ayurvedic formulations. The developed method may be further used to standardize other samples of BV or other formulations containing Bacoside A3 and Piperine.

  15. Estimation of trace amounts of benzene in solvent-extracted vegetable oils and oil seed cakes.

    PubMed

    Masohan, A; Parsad, G; Khanna, M K; Chopra, S K; Rawat, B S; Garg, M O

    2000-09-01

    A new method is presented for the qualitative and quantitative estimation of trace amounts (up to 0.15 ppm) of benzene in crude as well as refined vegetable oils obtained by extraction with food grade hexane (FGH), and in the oil seed cakes left after extraction. The method involves the selection of two solvents: cyclohexanol, for thinning of the viscous vegetable oil, and heptane, for azeotroping out trace benzene as a concentrate from the resulting mixture. Benzene is then estimated in the resulting azeotrope either by UV spectroscopy or by GC-MS, subject to the availability and cost effectiveness of the latter. Repeatability and reproducibility of the method are within 1-3% error. This method is suitable for estimating benzene in vegetable oils and oil seed cakes.

  16. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…

  17. On the Need for Quantitative Bias Analysis in the Peer-Review Process.

    PubMed

    Fox, Matthew P; Lash, Timothy L

    2017-05-15

    Peer review is central to the process through which epidemiologists generate evidence to inform public health and medical interventions. Reviewers thereby act as critical gatekeepers to high-quality research. They are asked to carefully consider the validity of the proposed work or research findings by paying careful attention to the methodology and critiquing the importance of the insight gained. However, although many have noted problems with the peer-review system for both manuscripts and grant submissions, few solutions have been proposed to improve the process. Quantitative bias analysis encompasses all methods used to quantify the impact of systematic error on estimates of effect in epidemiologic research. Reviewers who insist that quantitative bias analysis be incorporated into the design, conduct, presentation, and interpretation of epidemiologic research could substantially strengthen the process. In the present commentary, we demonstrate how quantitative bias analysis can be used by investigators and authors, reviewers, funding agencies, and editors. By utilizing quantitative bias analysis in the peer-review process, editors can potentially avoid unnecessary rejections, identify key areas for improvement, and improve discussion sections by shifting from speculation on the impact of sources of error to quantification of the impact those sources of bias may have had. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. Quantitative endoscopy: initial accuracy measurements.

    PubMed

    Truitt, T O; Adelman, R A; Kelly, D H; Willging, J P

    2000-02-01

    The geometric optics of an endoscope can be used to determine the absolute size of an object in an endoscopic field without knowing the actual distance from the object. This study explores the accuracy of a technique that estimates absolute object size from endoscopic images. Quantitative endoscopy involves calibrating a rigid endoscope to produce size estimates from 2 images taken with a known traveled distance between the images. The heights of 12 samples, ranging in size from 0.78 to 11.80 mm, were estimated with this calibrated endoscope. Backup distances of 5 mm and 10 mm were used for comparison. The mean percent error for all estimated measurements when compared with the actual object sizes was 1.12%. The mean errors for 5-mm and 10-mm backup distances were 0.76% and 1.65%, respectively. The mean errors for objects <2 mm and ≥2 mm were 0.94% and 1.18%, respectively. Quantitative endoscopy estimates endoscopic image size to within 5% of the actual object size. This method remains promising for quantitatively evaluating object size from endoscopic images. It does not require knowledge of the absolute distance of the endoscope from the object; only the distance traveled by the endoscope between images is required.
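
    Under an idealized pinhole model y = f·H/z, two image heights measured before and after a known backup distance d determine the object height as H = d / (f·(1/y2 - 1/y1)); a sketch (the actual technique calibrates a real endoscope rather than assuming ideal optics, and the numbers below are invented):

        def object_height(y1, y2, backup, f):
            """Object height from image heights y1 (near) and y2 (after
            backing up by `backup`), pinhole model with focal length f.
            All quantities in millimetres."""
            return backup / (f * (1.0 / y2 - 1.0 / y1))

        # A 5 mm object at 20 mm with f = 2 mm: y1 = 0.5, y2 = 0.4 after
        # a 5 mm backup; the formula recovers 5.0 mm.
        print(object_height(0.5, 0.4, 5.0, 2.0))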

  19. The estimation of pointing angle and normalized surface scattering cross section from GEOS-3 radar altimeter measurements

    NASA Technical Reports Server (NTRS)

    Brown, G. S.; Curry, W. J.

    1977-01-01

    The statistical error of the pointing angle estimation technique is determined as a function of the effective receiver signal to noise ratio. Other sources of error are addressed and evaluated, with inadequate calibration being of major concern. The impact of pointing error on the computation of the normalized surface scattering cross section (sigma) from radar data, and on the attitude-induced altitude bias in the waveform, is considered, and quantitative results are presented. Pointing angle and sigma processing algorithms are presented along with some initial data. The intensive mode clean vs. clutter AGC calibration problem is analytically resolved. The use of clutter AGC data in the intensive mode is confirmed as the correct calibration set for the sigma computations.

  20. Low rank magnetic resonance fingerprinting.

    PubMed

    Mazor, Gal; Weizman, Lior; Tal, Assaf; Eldar, Yonina C

    2016-08-01

    Magnetic Resonance Fingerprinting (MRF) is a relatively new approach that provides quantitative MRI using randomized acquisition. Extraction of physical quantitative tissue values is performed off-line, based on acquisition with varying parameters and a dictionary generated according to the Bloch equations. MRF uses hundreds of radio frequency (RF) excitation pulses for acquisition, and therefore a high under-sampling ratio in the sampling domain (k-space) is required. This under-sampling causes spatial artifacts that hamper the ability to accurately estimate the quantitative tissue values. In this work, we introduce a new approach for quantitative MRI using MRF, called Low Rank MRF. We exploit the low rank property of the temporal domain, on top of the well-known sparsity of the MRF signal in the generated dictionary domain. We present an iterative scheme that consists of a gradient step followed by a low rank projection using the singular value decomposition. Experiments on real MRI data demonstrate superior results compared to conventional implementation of compressed sensing for MRF at a 15% sampling ratio.
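
    The iteration described (a gradient step on the data-fit term, then a rank-r projection of the space-time matrix via truncated SVD) can be sketched as follows; the per-column FFT is a stand-in for the real MRF sampling operator, and this is a skeleton rather than the published algorithm:

        import numpy as np

        def low_rank_project(X, r):
            """Nearest rank-r matrix via truncated SVD."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            s[r:] = 0.0
            return (U * s) @ Vt

        def lr_mrf(y, mask, shape, rank=3, iters=50):
            """Gradient step on sampled k-space residuals, then a temporal
            low-rank projection of the space x time image series."""
            X = np.zeros(shape, dtype=complex)
            for _ in range(iters):
                resid = np.where(mask, np.fft.fft(X, axis=0) - y, 0.0)
                X = X - np.fft.ifft(resid, axis=0)   # gradient step
                X = low_rank_project(X, rank)        # low-rank projection
            return X

        rng = np.random.default_rng(0)
        truth = low_rank_project(rng.normal(size=(64, 30)), 3)
        mask = rng.random((64, 30)) < 0.15           # 15% k-space sampling
        y = np.where(mask, np.fft.fft(truth, axis=0), 0.0)
        rec = lr_mrf(y, mask, (64, 30))
        print(np.linalg.norm(rec - truth) / np.linalg.norm(truth))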

  1. Probing myocardium biomechanics using quantitative optical coherence elastography

    NASA Astrophysics Data System (ADS)

    Wang, Shang; Lopez, Andrew L.; Morikawa, Yuka; Tao, Ge; Li, Jiasong; Larina, Irina V.; Martin, James F.; Larin, Kirill V.

    2015-03-01

    We present a quantitative optical coherence elastographic method for noncontact assessment of myocardium elasticity. The method is based on shear wave imaging optical coherence tomography (SWI-OCT), where a focused air-puff system is used to induce localized tissue deformation through a low-pressure short-duration air stream and a phase-sensitive OCT system is utilized to monitor the propagation of the induced tissue displacement with nanoscale sensitivity. The 1-D scanning of M-mode OCT imaging and the application of optical phase retrieval and mapping techniques enable the reconstruction and visualization of 2-D depth-resolved shear wave propagation in tissue with ultra-high frame rate. The feasibility of this method for quantitative elasticity measurement is demonstrated on tissue-mimicking phantoms, with the estimated Young's modulus compared with uniaxial compression tests. We also performed pilot experiments on ex vivo mouse cardiac muscle tissues with normal and genetically altered cardiomyocytes. Our results indicate this noncontact quantitative optical coherence elastographic method can be a useful tool for cardiac muscle research and studies.
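
    The quantitative step in such methods typically fits the shear-wave speed from arrival times at increasing lateral offsets, then converts to Young's modulus with the common soft-tissue approximation E = 3ρc² (valid for an incompressible, isotropic, purely elastic medium); the tracking data below are invented:

        import numpy as np

        def youngs_modulus_kpa(lateral_mm, arrival_ms, rho=1000.0):
            """Fit wave speed c as the slope of lateral distance vs.
            arrival time (mm/ms == m/s), then E = 3*rho*c^2, in kPa."""
            c = np.polyfit(arrival_ms, lateral_mm, 1)[0]
            return 3.0 * rho * c ** 2 / 1000.0

        print(youngs_modulus_kpa([0.5, 1.0, 1.5, 2.0],
                                 [0.33, 0.67, 1.0, 1.33]))   # ~6.8 kPa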

  2. Estimating Hydrologic Fluxes, Crop Water Use, and Agricultural Land Area in China using Data Assimilation

    NASA Astrophysics Data System (ADS)

    Smith, Tiziana; McLaughlin, Dennis B.; Hoisungwan, Piyatida

    2016-04-01

    Crop production has significantly altered the terrestrial environment by changing land use and by altering the water cycle through both co-opted rainfall and surface water withdrawals. As the world's population continues to grow and individual diets become more resource-intensive, the demand for food - and the land and water necessary to produce it - will continue to increase. High-resolution quantitative data about water availability, water use, and agricultural land use are needed to develop sustainable water and agricultural planning and policies. However, existing data covering large areas with high resolution are susceptible to errors and can be physically inconsistent. China is an example of a large area where food demand is expected to increase and a lack of data clouds the resource management dialogue. Some assert that China will have insufficient land and water resources to feed itself, posing a threat to global food security if they seek to increase food imports. Others believe resources are plentiful. Without quantitative data, it is difficult to discern if these concerns are realistic or overly dramatized. This research presents a quantitative approach using data assimilation techniques to characterize hydrologic fluxes, crop water use (defined as crop evapotranspiration), and agricultural land use at 0.5 by 0.5 degree resolution and applies the methodology in China using data from around the year 2000. The approach uses the principles of water balance and of crop water requirements to assimilate existing data with a least-squares estimation technique, producing new estimates of water and land use variables that are physically consistent while minimizing differences from measured data. We argue that this technique for estimating water fluxes and agricultural land use can provide a useful basis for resource management modeling and policy, both in China and around the world.

  3. In vivo estimation of target registration errors during augmented reality laparoscopic surgery.

    PubMed

    Thompson, Stephen; Schneider, Crispin; Bosi, Michele; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J

    2018-06-01

    Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.

  4. [The concept of risk and its estimation].

    PubMed

    Zocchetti, C; Della Foglia, M; Colombi, A

    1996-01-01

    The concept of risk, in relation to human health, is a topic of primary interest for occupational health professionals. New legislation recently established in Italy (626/94) according to European Community directives in the field of Preventive Medicine called attention to this topic, and in particular to risk assessment and evaluation. Motivated by this context and by the impression that the concept of risk is frequently misunderstood, the present paper has two aims: the identification of the different meanings of the term "risk" in the new Italian legislation and the critical discussion of some commonly used definitions; and the proposal of a general definition, with the specification of a mathematical expression for quantitative risk estimation. The term risk (and risk estimation, assessment, or evaluation) has mainly referred to three different contexts: hazard identification, exposure assessment, and adverse health effects occurrence. Unfortunately, there are contexts in the legislation in which it is difficult to identify the true meaning of the term. This might cause equivocal interpretations and erroneous applications of the law because hazard evaluation, exposure assessment, and adverse health effects identification are completely different topics that require integrated but distinct approaches to risk management. As far as a quantitative definition of risk is concerned, we suggest an algorithm which connects the three basic risk elements (hazard, exposure, adverse health effects) by means of their probabilities of occurrence: the probability of being exposed (to a definite dose) given that a specific hazard is present, Pr(e|p), and the probability of occurrence of an adverse health effect as a consequence of that exposure, Pr(d|e). Using these quantitative components, risk can be defined as a sequence of measurable events that starts with hazard identification and terminates with disease occurrence; therefore, the following formal definition of risk is proposed: the probability of occurrence, in a given period of time, of an adverse health effect as a consequence of the existence of a hazard. In formula: R(d|p) = Pr(e|p) × Pr(d|e). While Pr(e|p) (exposure given hazard) must be evaluated in the situation under study, two alternatives exist for the estimation of the occurrence of adverse health effects, Pr(d|e): a "direct" estimation of the damage, Pr(d|e), through formal epidemiologic studies conducted in the situation under observation; and an "indirect" estimation of Pr(d|e) using information taken from the scientific literature (epidemiologic evaluations, dose-response relationships, extrapolations, ...). Both conditions are presented along with their respective advantages, disadvantages, and uncertainties. The usefulness of the proposed algorithm is discussed with respect to commonly used applications of risk assessment in occupational medicine; the relevance of time for risk estimation (in terms of duration of observation, duration of exposure, and latency of effect) is briefly explained; and how the proposed algorithm takes into account (in terms of prevention and public health) both the etiologic relevance of the exposure and the consequences of exposure removal is highlighted. As a last comment, it is suggested that the diffuse application of good work practices (technical, behavioral, organizational, ...), or the exhaustive use of check lists, can be relevant in terms of improvement of prevention efficacy, but does not represent any quantitative procedure of risk assessment which, in any circumstance, must be considered the elective approach to adverse health effect prevention.

  5. A method for the estimation of the significance of cross-correlations in unevenly sampled red-noise time series

    NASA Astrophysics Data System (ADS)

    Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.

    2014-11-01

    We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
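
    The core of such a Monte Carlo significance estimate is simulating many pairs of unrelated light curves with the same power-law PSD and asking how often their correlation exceeds the observed one; a compact, evenly sampled, zero-lag sketch (the paper additionally handles uneven sampling, the full cross-correlation function, and careful normalization):

        import numpy as np

        def sim_powerlaw(n, beta, rng):
            """Timmer & Koenig (1995)-style light curve with PSD ~ f^-beta:
            Gaussian Fourier amplitudes scaled by f^(-beta/2)."""
            f = np.fft.rfftfreq(n)[1:]
            re, im = rng.normal(size=(2, f.size))
            spec = np.concatenate([[0.0], f ** (-beta / 2) * (re + 1j * im)])
            return np.fft.irfft(spec, n)

        rng = np.random.default_rng(0)
        corr = lambda: abs(np.corrcoef(sim_powerlaw(256, 2.0, rng),
                                       sim_powerlaw(256, 2.0, rng))[0, 1])
        observed = corr()                      # stand-in for a real measurement
        null = np.array([corr() for _ in range(1000)])
        print(observed, (null >= observed).mean())   # chance of a value this large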

  6. Dynamical Stochastic Processes of Returns in Financial Markets

    NASA Astrophysics Data System (ADS)

    Kim, Kyungsik; Kim, Soo Yong; Lim, Gyuchang; Zhou, Junyuan; Yoon, Seung-Min

    2006-03-01

    We show how the evolution of probability distribution functions of the returns from the tick data of the Korean treasury bond futures (KTB) and the S&P 500 stock index can be described by means of the Fokker-Planck equation. We derive the Fokker-Planck equation from Kramers-Moyal coefficients estimated directly from the empirical data. By analyzing the statistics of the returns, we present the quantitative deterministic and random influences on both financial time series, for which we can give a simple physical interpretation. Finally, we remark that the diffusion coefficient should be significantly considered to make a portfolio.
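
    For reference, the estimation rests on the standard conditional-moment definition of the Kramers-Moyal coefficients (textbook material, not specific to these datasets),

        D^{(n)}(x) = \lim_{\tau \to 0} \frac{1}{n!\,\tau} \left\langle \left[X(t+\tau) - X(t)\right]^n \right\rangle \Big|_{X(t)=x},

    and retaining the first two coefficients yields the Fokker-Planck equation for the returns distribution:

        \frac{\partial p(x,t)}{\partial t} = -\frac{\partial}{\partial x}\left[D^{(1)}(x)\,p(x,t)\right] + \frac{\partial^2}{\partial x^2}\left[D^{(2)}(x)\,p(x,t)\right].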

  7. Optshrink LR + S: accelerated fMRI reconstruction using non-convex optimal singular value shrinkage.

    PubMed

    Aggarwal, Priya; Shrivastava, Parth; Kabra, Tanay; Gupta, Anubha

    2017-03-01

    This paper presents a new accelerated fMRI reconstruction method, namely, the OptShrink LR + S method, which reconstructs undersampled fMRI data using a linear combination of low-rank and sparse components. The low-rank component is estimated using a non-convex optimal singular value shrinkage algorithm, while the sparse component is estimated using convex l1-norm minimization. The performance of the proposed method is compared with existing state-of-the-art algorithms on a real fMRI dataset. The proposed OptShrink LR + S method yields good qualitative and quantitative results.
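
    The low-rank-plus-sparse split can be sketched with alternating proximal steps; singular-value soft-thresholding below is a convex stand-in for the non-convex OptShrink shrinkage the paper uses, and the fully sampled input sidesteps the undersampled reconstruction step:

        import numpy as np

        def svt(X, tau):
            """Soft-threshold singular values (convex surrogate)."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return (U * np.maximum(s - tau, 0.0)) @ Vt

        def soft(X, tau):
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        def lps(M, lam_l=1.0, lam_s=0.05, iters=100):
            """Alternate proximal updates so M ~ L (background) + S (dynamics)."""
            L = np.zeros_like(M)
            S = np.zeros_like(M)
            for _ in range(iters):
                L = svt(M - S, lam_l)
                S = soft(M - L, lam_s)
            return L, S

        rng = np.random.default_rng(0)
        M = np.outer(rng.normal(size=64), rng.normal(size=40))  # rank-1 background
        M[10:12, 5:8] += 3.0                                    # sparse activations
        L, S = lps(M)
        print(np.linalg.matrix_rank(L), np.count_nonzero(S))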

  8. Short Course Introduction to Quantitative Mineral Resource Assessments

    USGS Publications Warehouse

    Singer, Donald A.

    2007-01-01

    This is an abbreviated text supplementing the content of three sets of slides used in a short course that has been presented by the author at several workshops. The slides should be viewed in the order of (1) Introduction and models, (2) Delineation and estimation, and (3) Combining estimates and summary. References cited in the slides are listed at the end of this text. The purpose of the three-part form of mineral resource assessments discussed in the accompanying slides is to make unbiased quantitative assessments in a format needed in decision-support systems so that consequences of alternative courses of action can be examined. The three-part form of mineral resource assessments was developed to assist policy makers in evaluating the consequences of alternative courses of action with respect to land use and mineral-resource development. The audience for three-part assessments is a governmental or industrial policy maker, a manager of exploration, a planner of regional development, or similar decision-maker. Some of the tools and models presented here will be useful for selection of exploration sites, but that is a side benefit, not the goal. To provide unbiased information, we recommend the three-part form of mineral resource assessments where general locations of undiscovered deposits are delineated from a deposit type's geologic setting, frequency distributions of tonnages and grades of well-explored deposits serve as models of grades and tonnages of undiscovered deposits, and numbers of undiscovered deposits are estimated probabilistically by type. The internally consistent descriptive, grade and tonnage, deposit density, and economic models used in the design of the three-part form of assessments reduce the chances of biased estimates of the undiscovered resources. What and why quantitative resource assessments: The kind of assessment recommended here is founded in decision analysis in order to provide a framework for making decisions concerning mineral resources under conditions of uncertainty. What this means is that we start with the question of what kinds of questions the decision maker is trying to resolve and what forms of information would aid in resolving these questions. Some applications of mineral resource assessments: To plan and guide exploration programs, to assist in land use planning, to plan the location of infrastructure, to estimate mineral endowment, and to identify deposits that present special environmental challenges. Why not just rank prospects / areas? Need for financial analysis, need for comparison with other land uses, need for comparison with distant tracts of land, need to know how uncertain the estimates are, need for consideration of economic and environmental consequences of possible development. Our goal is to provide unbiased information useful to decision-makers.

  9. Uncertainty of quantitative microbiological methods of pharmaceutical analysis.

    PubMed

    Gunar, O V; Sakhno, N G

    2015-12-30

    The total uncertainty of quantitative microbiological methods used in pharmaceutical analysis consists of several components. Analysis of the most important sources of variability in quantitative microbiological methods demonstrated no significant effect of culture media or plate-count technique on the estimated microbial count, whereas the effects of other factors (type of microorganism, pharmaceutical product, and individual reading and interpretation errors) were highly significant. The most appropriate method of statistical analysis for such data was ANOVA, which enabled not only the effects of individual factors to be estimated but also their interactions. By considering all the elements of uncertainty and combining them mathematically, the combined relative uncertainty of the test results was estimated both for the method of quantitative examination of non-sterile pharmaceuticals and for the microbial count technique without any product. These values did not exceed 35%, which is appropriate for traditional plate-count methods. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Comparative Application of PLS and PCR Methods to Simultaneous Quantitative Estimation and Simultaneous Dissolution Test of Zidovudine - Lamivudine Tablets.

    PubMed

    Üstündağ, Özgür; Dinç, Erdal; Özdemir, Nurten; Tilkan, M Günseli

    2015-01-01

    In the development of new drug products and generic drug products, the simultaneous in-vitro dissolution behavior of oral dosage formulations is the most important indicator of the efficiency and biopharmaceutical characteristics of drug substances. This compels scientists in the field to develop powerful analytical methods that deliver more reliable, precise and accurate results in the quantitative analysis and dissolution testing of drug formulations. In this context, two chemometric tools, partial least squares (PLS) and principal component regression (PCR), were developed for the simultaneous quantitative estimation and dissolution testing of zidovudine (ZID) and lamivudine (LAM) in a tablet dosage form. The results obtained in this study strongly encourage their use for quality control, routine analysis and dissolution testing of marketed tablets containing the ZID and LAM drugs.

  11. Semi-quantitative estimation of cellular SiO2 nanoparticles using flow cytometry combined with X-ray fluorescence measurements.

    PubMed

    Choi, Seo Yeon; Yang, Nuri; Jeon, Soo Kyung; Yoon, Tae Hyun

    2014-09-01

    In this study, we have demonstrated the feasibility of a semi-quantitative approach to estimating cellular SiO2 nanoparticles (NPs), based on flow cytometry measurements of their normalized side scattering intensity. To improve our understanding of the quantitative aspects of cell-nanoparticle interactions, flow cytometry, transmission electron microscopy, and X-ray fluorescence experiments were carefully performed on HeLa cells exposed to SiO2 NPs with different core diameters, hydrodynamic sizes, and surface charges. Based on the observed relationships among the experimental data, we propose a semi-quantitative method for estimating cellular SiO2 NPs from their normalized side scattering and core diameters, which can be applied to determine cellular SiO2 NPs within their size-dependent linear ranges. © 2014 International Society for Advancement of Cytometry.

  12. Spatio-temporal models of mental processes from fMRI.

    PubMed

    Janoos, Firdaus; Machiraju, Raghu; Singh, Shantanu; Morocz, Istvan Ákos

    2011-07-15

    Understanding the highly complex, spatially distributed and temporally organized phenomena entailed by mental processes using functional MRI is an important research problem in cognitive and clinical neuroscience. Conventional analysis methods focus on the spatial dimension of the data discarding the information about brain function contained in the temporal dimension. This paper presents a fully spatio-temporal multivariate analysis method using a state-space model (SSM) for brain function that yields not only spatial maps of activity but also its temporal structure along with spatially varying estimates of the hemodynamic response. Efficient algorithms for estimating the parameters along with quantitative validations are given. A novel low-dimensional feature-space for representing the data, based on a formal definition of functional similarity, is derived. Quantitative validation of the model and the estimation algorithms is provided with a simulation study. Using a real fMRI study for mental arithmetic, the ability of this neurophysiologically inspired model to represent the spatio-temporal information corresponding to mental processes is demonstrated. Moreover, by comparing the models across multiple subjects, natural patterns in mental processes organized according to different mental abilities are revealed. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Quantitative assessment of future development of copper/silver resources in the Kootenai National Forest, Idaho/Montana: Part I-Estimation of the copper and silver endowments

    USGS Publications Warehouse

    Spanski, G.T.

    1992-01-01

    Faced with an ever-increasing diversity of demand for the use of public lands, managers and planners are turning more often to a multiple-use approach to meet those demands. This approach requires the uses to be mutually compatible and to utilize the more valuable attributes or resource values of the land. Therefore, it is imperative that planners be provided with all available information on attribute and resource values in a timely fashion and in a format that facilitates a comparative evaluation. The Kootenai National Forest administration enlisted the U.S. Geological Survey and U.S. Bureau of Mines to perform a quantitative assessment of future copper/silver production potential within the forest from sediment-hosted copper deposits in the Revett Formation that are similar to those being mined at the Troy Mine near Spar Lake. The U.S. Geological Survey employed a quantitative assessment technique that compared the favorable host terrane in the Kootenai area with worldwide examples of known sediment-hosted copper deposits. The assessment produced probabilistic estimates of the number of undiscovered deposits that may be present in the area and of the copper and silver endowment that might be contained in them. Results of the assessment suggest that the copper/silver deposit potential is highest in the southwestern one-third of the forest. In this area there is an estimated 50 percent probability of at least 50 additional deposits occurring mostly within approximately 260,000 acres where the Revett Formation is thought to be present in the subsurface at depths of less than 1,500 meters. A Monte Carlo-type simulation using data on the grade and tonnage characteristics of other known silver-rich, sediment-hosted copper deposits predicts a 50 percent probability that these undiscovered deposits will contain at least 19 million tonnes of copper and 100,000 tonnes of silver. Combined with endowments estimated for identified, but not thoroughly explored deposits, and deposits that might also occur in the remaining area of the forest, the endowment potential increases to 23 million tonnes of copper and 190,000 tonnes of silver. © 1992 Oxford University Press.
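
    The Monte Carlo logic of combining a probabilistic deposit count with grade-and-tonnage distributions can be sketched in a few lines. All distribution parameters below are invented placeholders, not the empirical sediment-hosted copper models used in the assessment.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_trials = 20_000

    # Placeholder inputs -- stand-ins for the deposit-density and grade-tonnage
    # models, not the values used in the Kootenai assessment.
    n_deposits = rng.poisson(lam=50, size=n_trials)            # undiscovered deposits per trial
    endowment = np.empty(n_trials)
    for i, n in enumerate(n_deposits):
        ore_tonnes = rng.lognormal(np.log(5e6), 1.2, size=n)   # ore tonnage per deposit
        cu_fraction = rng.lognormal(np.log(0.02), 0.4, size=n) # copper grade per deposit
        endowment[i] = np.sum(ore_tonnes * cu_fraction)        # contained copper, tonnes

    # The assessment quotes the endowment exceeded with 50 percent probability.
    print(f"P50 endowment: {np.median(endowment):.2e} t Cu")
    print(f"P90 endowment: {np.percentile(endowment, 10):.2e} t Cu")
    ```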

  14. The estimation of quantitative parameters of oligonucleotides immobilization on mica surface

    NASA Astrophysics Data System (ADS)

    Sharipov, T. I.; Bakhtizin, R. Z.

    2017-05-01

    Immobilization of nucleic acids on the surfaces of various materials is increasingly being used in research and in some practical applications, and DNA chip technology is currently developing rapidly. The immobilization process can be based on either physical adsorption or chemisorption. A useful way to monitor the immobilization of nucleic acids on a surface is atomic force microscopy, which allows direct, high-resolution imaging of the surface topography. Usually, cations are used to fix DNA on a mica surface; they mediate the interaction between the mica surface and the DNA molecules. In our work we have developed a method for estimating a quantitative parameter of oligonucleotide immobilization, namely the degree of aggregation, as a function of the fixation conditions on the mica surface. We present results on the aggregation of oligonucleotides immobilized on mica: single oligonucleotide molecules were imaged clearly, their surface areas were calculated, and a calibration curve was plotted.

  15. Quantitative aspects of radon daughter exposure and lung cancer in underground miners.

    PubMed Central

    Edling, C; Axelson, O

    1983-01-01

    Epidemiological studies have shown an excessive incidence of lung cancer in miners exposed to radon daughters. Published risk estimates have ranged from 6 to 47 excess cases per 10⁶ person-years per working level month, but the effect of smoking has not been fully evaluated. The present study, among a group of iron ore miners, attempts to obtain quantitative information about the risk of lung cancer due to radon and its daughters among smoking and non-smoking miners. The results show a considerable risk for miners to develop lung cancer; even non-smoking miners seem to be at a rather high risk. An additive effect of smoking and exposure to radon daughters is indicated, and an estimate of about 30-40 excess cases per 10⁶ person-years per working level month seems to apply on a lifetime basis to both smoking and non-smoking miners aged over 50. PMID:6830715

  16. Ocean Heat Content Reveals Secrets of Fish Migrations

    PubMed Central

    Luo, Jiangang; Ault, Jerald S.; Shay, Lynn K.; Hoolihan, John P.; Prince, Eric D.; Brown, Craig A.; Rooker, Jay R.

    2015-01-01

    For centuries, the mechanisms surrounding spatially complex animal migrations have intrigued scientists and the public. We present a new methodology using ocean heat content (OHC), a habitat metric that is normally a fundamental part of hurricane intensity forecasting, to estimate movements and migration of satellite-tagged marine fishes. Previous satellite-tagging research of fishes using archival depth, temperature and light data for geolocation has been too coarse to resolve detailed ocean habitat utilization. We combined tag data with OHC estimated from ocean circulation and transport models in an optimization framework that substantially improved geolocation accuracy over SST-based tracks. The OHC-based movement track provided the first quantitative evidence that many of the tagged highly migratory fishes displayed affinities for ocean fronts and eddies. The OHC method provides a new quantitative tool for studying dynamic use of ocean habitats, migration processes and responses to environmental changes by fishes; further, it improves ocean animal tracking and extends satellite-based animal tracking data for other potential physical, ecological, and fisheries applications. PMID:26484541

  17. A pre-edge analysis of Mn K-edge XANES spectra to help determine the speciation of manganese in minerals and glasses

    NASA Astrophysics Data System (ADS)

    Chalmin, E.; Farges, F.; Brown, G. E.

    2009-01-01

    High-resolution manganese K-edge X-ray absorption near-edge structure spectra were collected on a set of 40 Mn-bearing minerals. The pre-edge feature information (position, area) was investigated to extract as much quantitative valence and symmetry information as possible for manganese in various “test” and “unknown” minerals and glasses. The samples span a range of manganese symmetry environments (tetrahedral, square planar, octahedral, and cubic) and valences (II to VII). The extraction of the pre-edge information is based on previous multiple-scattering and multiplet calculations for model compounds. Using the method described in this study, a robust estimate of the manganese valence could be obtained from the pre-edge region at the 5% accuracy level. Applied to 20 “test” compounds (such as hausmannite and rancieite) and to 15 “unknown” compounds (such as axinite and birnessite), this method provides a quantitative estimate of the average valence of manganese in complex minerals and silicate glasses.

  18. Quantitative evaluation of spatial scale of carrier trapping at grain boundary by GHz-microwave dielectric loss spectroscopy

    NASA Astrophysics Data System (ADS)

    Choi, W.; Tsutsui, Y.; Miyakai, T.; Sakurai, T.; Seki, S.

    2017-11-01

    Charge carrier mobility is an important primary parameter for electronic conductive materials, yet the intrinsic limit of the mobility is hardly accessible by conventional direct-current evaluation methods. In the present study, the intra-grain hole mobility of pentacene thin films was estimated quantitatively using microwave-based dielectric loss spectroscopy (time-resolved microwave conductivity measurement), an alternating-current probe of local charge carrier motion. Metal-insulator-semiconductor devices were prepared with different insulating polymers or substrate temperatures during vacuum deposition of the pentacene layer, affording four different grain-size conditions for the pentacene layers. Under conditions where the local motion was determined by interfacial traps at the pentacene grain boundaries (grain-grain interfaces), the observed hole mobilities were plotted against the grain sizes, giving an excellent correlation that was fit successfully by a parabolic function representative of the border length. Consequently, the intra-grain mobility and trap-release time of holes were estimated as 15 cm² V⁻¹ s⁻¹ and 9.4 ps, respectively.

  19. Software analysis handbook: Software complexity analysis and software reliability estimation and prediction

    NASA Technical Reports Server (NTRS)

    Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron

    1994-01-01

    This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.

  20. A new method to assess the sustainability performance of events: Application to the 2014 World Orienteering Championship

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scrucca, Flavio; Severi, Claudio; Galvan, Nicola

    Nowadays, public and private agencies pay increasing attention to the sustainability performance of events, recognized as a key issue in the context of sustainable development. Assessing the sustainability performance of events involves environmental, social and economic aspects; their impacts are complex and a quantitative assessment is often difficult. This paper presents a new quali-quantitative method developed to measure the sustainability of events, taking into account all their potential impacts. The 2014 World Orienteering Championship, held in Italy, was selected to test the proposed evaluation methodology. The total carbon footprint of the event was 165.34 tCO₂eq and the avoided emissions were estimated at 46 tCO₂eq. The adopted quali-quantitative method proved efficient in assessing the sustainability impacts and can be applied to the evaluation of similar events. - Highlights: • A quali-quantitative method to assess events' sustainability is presented. • All the methodological issues related to the method are explained. • The method is used to evaluate the sustainability of an international sports event. • The method proved valid for assessing the event's sustainability level. • The carbon footprint of the event has been calculated.

  1. Assimilative capacity-based emission load management in a critically polluted industrial cluster.

    PubMed

    Panda, Smaranika; Nagendra, S M Shiva

    2017-12-01

    In the present study, a modified approach was adopted to quantify the assimilative capacity (i.e., the maximum emission an area can take without violating the permissible pollutant standards) of a major industrial cluster (Manali, India) and to assess the effectiveness of the air pollution control measures adopted in the region. Seasonal analysis of assimilative capacity was carried out for critical, high, medium, and low pollution levels to identify the best and worst conditions for industrial operations. A bottom-up approach was employed to quantify sulfur dioxide (SO2), nitrogen dioxide (NO2), and particulate matter (aerodynamic diameter <10 μm; PM10) emissions at a fine spatial resolution of 500 × 500 m² in the Manali industrial cluster. AERMOD (American Meteorological Society/U.S. Environmental Protection Agency Regulatory Model), a U.S. Environmental Protection Agency (EPA) regulatory model, was used for estimating assimilative capacity. Results indicated that 22.8 tonnes/day of SO2, 7.8 tonnes/day of NO2, and 7.1 tonnes/day of PM10 were emitted by the industries of Manali. The estimated assimilative capacities for SO2, NO2, and PM10 were 16.05, 17.36, and 19.78 tonnes/day, respectively. Current SO2 emissions therefore exceeded the estimated safe load by 6.7 tonnes/day, whereas PM10 and NO2 were within the safe limits. Seasonal analysis showed that post-monsoon had the lowest load-carrying capacity, followed by winter, summer, and monsoon; the allowable SO2 emissions during the post-monsoon and winter seasons were 35% and 26% lower, respectively, than during the monsoon season. The authors present a modified approach for quantitative estimation of the assimilative capacity of a critically polluted Indian industrial cluster. They developed a geo-coded fine-resolution PM10, NO2, and SO2 emission inventory for the Manali industrial area and quantitatively estimated its season-wise assimilative capacities for various pollution levels. This quantitative representation of assimilative capacity (in terms of emissions), compared with the routine qualitative representation, provides better data for quantifying the carrying capacity of an area and helps policy makers and regulatory authorities develop effective mitigation plans for air pollution abatement.
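
    The core comparison of inventory emissions against assimilative capacity reduces to simple arithmetic; a minimal sketch using the figures quoted in the abstract:

    ```python
    # Emission inventory vs. estimated assimilative capacity (tonnes/day),
    # using the numbers reported in the abstract above.
    emissions = {"SO2": 22.8, "NO2": 7.8, "PM10": 7.1}
    capacity = {"SO2": 16.05, "NO2": 17.36, "PM10": 19.78}

    for pollutant, load in emissions.items():
        margin = capacity[pollutant] - load
        status = ("within safe load" if margin >= 0
                  else f"exceeds safe load by {-margin:.2f} t/day")
        print(f"{pollutant}: emitted {load} t/day, "
              f"capacity {capacity[pollutant]} t/day -> {status}")
    ```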

  2. Development of an agricultural job-exposure matrix for British Columbia, Canada.

    PubMed

    Wood, David; Astrakianakis, George; Lang, Barbara; Le, Nhu; Bert, Joel

    2002-09-01

    Farmers in British Columbia (BC), Canada have been shown to have unexplained elevated proportional mortality rates for several cancers. Because agricultural exposures have never been documented systematically in BC, a quantitative agricultural job-exposure matrix (JEM) was developed containing exposure assessments from 1950 to 1998. This JEM was developed to document historical exposures and to facilitate future epidemiological studies. Available information regarding BC farming practices was compiled and checklists of potential exposures were produced for each crop. Exposures identified included chemical, biological, and physical agents. Interviews with farmers and agricultural experts were conducted using the checklists as a starting point. This allowed the creation of an initial or 'potential' JEM based on three axes: exposure agent, 'type of work', and time. The 'type of work' axis was determined by combining several variables: region, crop, job title and task. This allowed for a complete description of exposures. Exposure assessments were made quantitatively where data allowed, or by a dichotomous variable (exposed/unexposed). Quantitative calculations were divided into re-entry and application scenarios. 'Re-entry' exposures were quantified using a standard exposure model with some modification, while application exposure estimates were derived using data from the North American Pesticide Handlers Exposure Database (PHED). As expected, exposures differed between crops and job titles both quantitatively and qualitatively. Of the 290 agents included in the exposure axis, 180 were pesticides. Over 3000 estimates of exposure were made; 50% of these were quantitative. Each quantitative estimate was at the level of the daily absorbed dose. Exposure estimates were then rated as high, medium, or low by comparing them with their respective oral chemical reference dose (RfD) or acceptable daily intake (ADI), obtained mainly from the US Environmental Protection Agency (EPA) Integrated Risk Information System database. Of the quantitative estimates, 74% were rated as low (<100%) and only 10% were rated as high (>500%). The JEM resulting from this study fills a void concerning exposures of BC farmers and farm workers. While only limited validation of the assessments was possible, this JEM can serve as a benchmark for future studies. Preliminary analysis at the BC Cancer Agency (BCCA) using the JEM with prostate cancer records from a large cancer and occupation study/survey has already shown promising results. This JEM also provides a useful model for developing historical quantitative exposure estimates where very little documented information is available.
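
    The final rating step can be expressed compactly. In the sketch below, the <100% and >500% cutoffs come from the abstract, while the example agents, doses, and reference-dose values are purely illustrative.

    ```python
    def rate_exposure(daily_dose_mg_kg, rfd_mg_kg):
        """Rate a daily absorbed dose against its oral reference dose (RfD)."""
        pct_of_rfd = 100.0 * daily_dose_mg_kg / rfd_mg_kg
        if pct_of_rfd < 100.0:
            return "low", pct_of_rfd
        elif pct_of_rfd <= 500.0:
            return "medium", pct_of_rfd
        return "high", pct_of_rfd

    # Hypothetical daily absorbed doses (mg/kg body weight per day) and
    # illustrative RfD values; neither is taken from the JEM itself.
    for agent, dose, rfd in [("pesticide A", 0.05, 0.13), ("pesticide B", 0.004, 0.0007)]:
        rating, pct = rate_exposure(dose, rfd)
        print(f"{agent}: {pct:.0f}% of RfD -> {rating}")
    ```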

  3. Optimization of Long Range Major Rehabilitation of Airfield Pavements.

    DTIC Science & Technology

    1983-01-01

    ... the network level, the mathematical representation of choosing those projects that maximize the sum of the user-value-weighted structural performance of ... quantitatively be compared. In addition, an estimate of an appropriate level of funding for the entire system can be made. The simple example shows a ... pavement engineers to only working in the present. The designing and comparing of pavement maintenance and rehabilitation alternatives remain directed ...

  4. [Quantitative estimation of vegetation cover and management factor in USLE and RUSLE models by using remote sensing data: a review].

    PubMed

    Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie

    2012-06-01

    Soil loss prediction models such as the universal soil loss equation (USLE) and the revised universal soil loss equation (RUSLE) are useful tools for risk assessment of soil erosion and planning of soil conservation at the regional scale. A rational estimation of the vegetation cover and management factor, among the most important parameters in USLE or RUSLE, is particularly important for accurate prediction of soil erosion. Traditional estimation based on field survey and measurement is time-consuming, laborious, and costly, and cannot rapidly extract the vegetation cover and management factor at the macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for estimating the vegetation cover and management factor over broad geographic areas. This paper summarizes research findings on the quantitative estimation of the vegetation cover and management factor using remote sensing data and analyzes the advantages and disadvantages of the various methods, with the aim of providing a reference for further research and for quantitative estimation of the vegetation cover and management factor at large scales.

  5. Abort Trigger False Positive and False Negative Analysis Methodology for Threshold-Based Abort Detection

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Cruz, Jose A.; Johnson, Stephen B.; Lo, Yunnhon

    2015-01-01

    This paper describes a quantitative methodology for bounding the false positive (FP) and false negative (FN) probabilities associated with a human-rated launch vehicle abort trigger (AT) that includes sensor data qualification (SDQ). In this context, an AT is a hardware and software mechanism designed to detect the existence of a specific abort condition. SDQ is an algorithmic approach used to identify sensor data suspected of being corrupt, so that suspect data do not adversely affect an AT's detection capability. The FP and FN methodologies presented here were developed to support estimation of the probabilities of loss of crew and loss of mission for the Space Launch System (SLS), which is being developed by the National Aeronautics and Space Administration (NASA). The paper provides a brief overview of system health management as an extension of control theory, and describes how ATs and the calculation of FP and FN probabilities relate to this theory. The discussion leads to a detailed presentation of the FP and FN methodology and an example showing how the FP and FN calculations are performed. This detailed presentation includes a methodology for calculating the change in FP and FN probabilities that results from including SDQ in the AT architecture. To avoid proprietary and sensitive data issues, the example incorporates a mixture of open literature and fictitious reliability data. Results presented in the paper demonstrate the effectiveness of the approach in providing quantitative estimates that bound the probability of a FP or FN abort determination.
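
    For a single threshold-based trigger with Gaussian sensor noise, the FP and FN probabilities are simply tail areas of two distributions, and a voting rule (a crude stand-in for SDQ-style qualification) reshapes them. All numbers below are invented for illustration and are unrelated to SLS data.

    ```python
    from scipy.stats import norm

    # Single-sensor abort trigger: declare abort when the reading exceeds a threshold.
    mu_nominal, mu_abort, sigma = 100.0, 130.0, 8.0   # hypothetical sensor statistics
    threshold = 118.0                                 # hypothetical abort threshold

    p_fp = norm.sf(threshold, loc=mu_nominal, scale=sigma)  # trip with no abort condition
    p_fn = norm.cdf(threshold, loc=mu_abort, scale=sigma)   # miss a true abort condition

    # With 2-of-3 sensor voting (a simple surrogate for data qualification),
    # independent sensors must agree, suppressing single-sensor false trips.
    p_fp_vote = 3 * p_fp**2 * (1 - p_fp) + p_fp**3
    print(f"single sensor: FP={p_fp:.2e}, FN={p_fn:.2e}; 2-of-3 vote: FP={p_fp_vote:.2e}")
    ```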

  6. Mid-Pliocene Planktic Foraminifer Census Data and Alkenone Unsaturation Indices from Ocean Drilling Program Hole 677A

    USGS Publications Warehouse

    Robinson, Marci; Caballero, Rocio; Pohlman, Emily; Herbert, Timothy; Peck, Victoria; Dowsett, Harry

    2008-01-01

    The U.S. Geological Survey is conducting a long-term study of mid-Pliocene climatic and oceanographic conditions. One of the key elements of the study involves the use of quantitative composition of planktic foraminifer assemblages in conjunction with other proxies to constrain estimates of sea-surface temperature (SST) and to identify major oceanographic boundaries and water masses. Raw census data are made available as soon as possible after analysis through a series of reports that provide the basic data for future work. In this report we present raw census data (table 1) for planktic foraminifer assemblages in 14 samples from Ocean Drilling Program (ODP) Hole 677A. We also present alkenone unsaturation index (UK'37) analyses for 89 samples from ODP Hole 677A (table 2). ODP Hole 677A is located in the Panama basin, due west of Ecuador at 1°12.138'N., 83°44.220'W., in 3461.2 meters of water (fig. 1). A variety of statistical methods have been developed to transform foraminiferal census data in Pliocene sequences into quantitative estimates of Pliocene SST. Details of statistical techniques, taxonomic groupings, and oceanographic interpretations are presented in more formal publications (Dowsett and Poore, 1990, 1991; Dowsett, 1991, 2007a,b; Dowsett and Robinson, 1998, 2007; Dowsett and others, 1996, 1999).

  7. Estimating research productivity and quality in assistive technology: a bibliometric analysis spanning four decades.

    PubMed

    Ryan, Cindy; Tewey, Betsy; Newman, Shelia; Turner, Tracy; Jaeger, Robert J

    2004-12-01

    Conduct a quantitative assessment of the number of papers contained in MEDLINE related to selected types of assistive technology (AT), and to identify journals publishing significant numbers of papers related to AT, and evaluate them with quantitative productivity and quality measures. Consecutive sample of all papers in MEDLINE identified by standard medical subject headings for selected types of AT from 1963-2003. Number of journals carrying AT papers, papers per journal (both total number and those specific to AT), journal impact factor, circulation, and number of AT citations per year over time for each area of AT. We present search terms, estimates of the numbers of AT citations in MEDLINE, the journals most likely to contain articles related to AT, journal impact factors, and journal circulations (when available). We also present the number of citations in various areas of AT over time from 1963-2003. Suggestions are presented for possible future modifications of the MEDLINE controlled vocabulary, based on terminology used in existing AT classifications schemes, such as ISO 9999. Research papers in the areas of AT examined showed publication across a wide variety of journals. There are a number of journals publishing articles in AT that have impact factors above the median. Some areas of AT have shown an increase in publications per year over time, while others have shown a more constant level of productivity.

  8. Dual-core mass-balance approach for evaluating mercury and 210Pb atmospheric fallout and focusing to lakes

    USGS Publications Warehouse

    Van Metre, P.C.; Fuller, C.C.

    2009-01-01

    Determining atmospheric deposition rates of mercury and other contaminants using lake sediment cores requires a quantitative understanding of sediment focusing. Here we present a novel approach that solves mass-balance equations for two cores algebraically to estimate contaminant contributions to sediment from direct atmospheric fallout and from watershed and in-lake focusing. The model is applied to excess 210Pb and Hg in cores from Hobbs Lake, a high-altitude lake in Wyoming. Model results for excess 210Pb are consistent with estimates of fallout and focusing factors computed using excess 210Pb burdens in lake cores and soil cores from the watershed, and model results for Hg fallout are consistent with fallout estimated using the soil-core-based 210Pb focusing factors. The lake cores indicate small increases in mercury deposition beginning in the late 1800s and large increases after 1940, with the maximum at the tops of the cores of 16-20 μg/m² per year. These results suggest that global Hg emissions and possibly regional emissions in the western United States are affecting the north-central Rocky Mountains. Hg fallout estimates are generally consistent with fallout reported from an ice core from the nearby Upper Fremont Glacier, but with several notable differences. The model might not work for lakes with complex geometries and multiple sediment inputs, but for lakes with simple geometries, like Hobbs, it can provide a quantitative approach for evaluating sediment focusing and estimating contaminant fallout.
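
    The two-core algebra can be made concrete. Assuming each core's burden is modeled as direct fallout plus a core-specific focusing coefficient times a common watershed/in-lake input (a simplified stand-in for the paper's mass-balance equations), two measured burdens determine the two unknowns:

    ```python
    import numpy as np

    # Two-core mass balance:  burden_i = fallout + f_i * watershed_input,
    # where f_i is a core-specific focusing coefficient. Two cores give two
    # equations in the two unknowns. All numbers are invented for illustration,
    # not taken from the Hobbs Lake cores.
    f1, f2 = 0.6, 1.8            # focusing coefficients (dimensionless)
    b1, b2 = 14.0, 26.0          # measured burdens (ug/m^2 per year)

    A = np.array([[1.0, f1],
                  [1.0, f2]])
    b = np.array([b1, b2])
    fallout, watershed = np.linalg.solve(A, b)
    print(f"direct atmospheric fallout: {fallout:.1f} ug/m^2/yr")
    print(f"watershed/in-lake focusing input: {watershed:.1f} ug/m^2/yr")
    ```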

  9. Assessing non-additive effects in GBLUP model.

    PubMed

    Vieira, I C; Dos Santos, J P R; Pires, L P M; Lima, B M; Gonçalves, F M A; Balestre, M

    2017-05-10

    Understanding non-additive effects in the expression of quantitative traits is very important in genotype selection, especially in species whose commercial products are clones or hybrids. The use of molecular markers has allowed the study of non-additive genetic effects at the genomic level, in addition to a better understanding of their importance for quantitative traits. Thus, the purpose of this study was to evaluate the behavior of the GBLUP model under different genetic models and relationship matrices and their influence on the estimates of genetic parameters. We used real data on circumference at breast height in Eucalyptus spp. and simulated data from an F2 population. Three kinship structures commonly reported in the literature were adopted. The simulation results showed that the inclusion of epistatic kinship improved the prediction of genomic breeding values; however, the non-additive effects were not accurately recovered. The Fisher information matrix for the real dataset showed high collinearity among the estimates of additive, dominance, and epistatic variance, yielding no gain in the prediction of unobserved data and causing convergence problems. Estimates of genetic parameters and correlations differed across the kinship structures. Our results show that the inclusion of non-additive effects can improve predictive ability, or even the prediction of additive effects. However, the high distortions observed in the variance estimates when the Hardy-Weinberg equilibrium assumption is violated, due to the presence of selection or inbreeding, can result in zero gain in models that consider epistasis in genomic kinship.

  10. Merging Radar Quantitative Precipitation Estimates (QPEs) from the High-resolution NEXRAD Reanalysis over CONUS with Rain-gauge Observations

    NASA Astrophysics Data System (ADS)

    Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Nickl, E.; Seo, D. J.; Kim, B.; Zhang, J.; Qi, Y.

    2015-12-01

    The processing of radar-only precipitation via the National Mosaic and Multi-Sensor Quantitative Precipitation Estimation (NMQ/Q2) reanalysis, based on the WSR-88D Next-Generation Radar (NEXRAD) network over the Continental United States (CONUS), is complete for the period 2002-2011. While this constitutes a unique opportunity to study precipitation processes at higher resolution than conventionally possible (1-km, 5-min), the long-term radar-only product needs to be merged with in-situ information in order to be suitable for hydrological, meteorological and climatological applications. The radar-gauge merging is performed using rain gauge information at daily (Global Historical Climatology Network-Daily: GHCN-D), hourly (Hydrometeorological Automated Data System: HADS), and 5-min (Automated Surface Observing Systems: ASOS; Climate Reference Network: CRN) resolution. The challenges of incorporating networks of differing resolution and quality to generate long-term, large-scale gridded estimates of precipitation are enormous. In that perspective, we are implementing techniques for merging the rain gauge datasets and the radar-only estimates, such as Inverse Distance Weighting (IDW), Simple Kriging (SK), Ordinary Kriging (OK), and Conditional Bias-Penalized Kriging (CBPK). An evaluation of the different radar-gauge merging techniques is presented, and we provide an estimate of uncertainty for the gridded estimates. In addition, comparisons with a suite of lower resolution QPEs derived from ground-based radar measurements (Stage IV) are provided to give a detailed picture of the improvements and remaining challenges.
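
    Of the merging techniques listed, inverse distance weighting is the simplest to sketch. The code below spreads gauge/radar bias ratios across a radar QPE grid by IDW and applies them multiplicatively; the grid size, gauge locations, bias values, and distance power are arbitrary illustration choices, not the study's configuration.

    ```python
    import numpy as np

    def idw_bias_field(gauge_xy, bias, grid_x, grid_y, power=2.0, eps=1e-6):
        """Interpolate gauge/radar bias ratios onto a grid by inverse distance weighting."""
        gx, gy = np.meshgrid(grid_x, grid_y)
        num = np.zeros_like(gx, dtype=float)
        den = np.zeros_like(gx, dtype=float)
        for (x, y), b in zip(gauge_xy, bias):
            w = 1.0 / (np.hypot(gx - x, gy - y) ** power + eps)  # inverse-distance weight
            num += w * b
            den += w
        return num / den

    rng = np.random.default_rng(1)
    radar_qpe = rng.gamma(2.0, 2.0, size=(100, 100))  # hypothetical radar field (mm)
    gauge_xy = [(20, 30), (70, 15), (50, 80)]         # hypothetical gauge (col, row) locations
    bias = [1.2, 0.8, 1.1]                            # gauge/radar ratios at those gauges

    merged = radar_qpe * idw_bias_field(gauge_xy, bias, np.arange(100), np.arange(100))
    print(merged[30, 20] / radar_qpe[30, 20])         # ~1.2 at the first gauge, as required
    ```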

  11. Computation of the three-dimensional medial surface dynamics of the vocal folds.

    PubMed

    Döllinger, Michael; Berry, David A

    2006-01-01

    To increase our understanding of pathological and healthy voice production, quantitative measurement of the medial surface dynamics of the vocal folds is significant, albeit rarely performed because of the inaccessibility of the vocal folds. Using an excised hemilarynx methodology, a new calibration technique, herein referred to as the linear approximate (LA) method, was introduced to compute the three-dimensional coordinates of fleshpoints along the entire medial surface of the vocal fold. The results were compared with results from the direct linear transform. An associated error estimation was presented, demonstrating the improved accuracy of the new method. A test on real data was reported, including computation of quantitative measurements of vocal fold dynamics.

  12. Current methods and advances in bone densitometry

    NASA Technical Reports Server (NTRS)

    Guglielmi, G.; Gluer, C. C.; Majumdar, S.; Blunt, B. A.; Genant, H. K.

    1995-01-01

    Bone mass is the primary, although not the only, determinant of fracture. Over the past few years a number of noninvasive techniques have been developed to quantitate bone mass more sensitively. These include single and dual photon absorptiometry (SPA and DPA), single and dual X-ray absorptiometry (SXA and DXA), and quantitative computed tomography (QCT). While differing in the anatomic sites measured and in their estimates of precision, accuracy, and fracture discrimination, all of these methods provide clinically useful measurements of skeletal status. It is the intent of this review to discuss the pros and cons of these techniques and to present new applications of ultrasound (US) and magnetic resonance imaging (MRI) in the detection and management of osteoporosis.

  13. Estimations of BCR-ABL/ABL transcripts by quantitative PCR in chronic myeloid leukaemia after allogeneic bone marrow transplantation and donor lymphocyte infusion.

    PubMed

    Otazú, Ivone B; Tavares, Rita de Cassia B; Hassan, Rocío; Zalcberg, Ilana; Tabak, Daniel G; Seuánez, Héctor N

    2002-02-01

    Serial assays of qualitative (multiplex and nested) and quantitative PCR were carried out to detect and estimate the level of BCR-ABL transcripts in 39 CML patients following bone marrow transplantation (BMT). Seven of these patients, who received donor lymphocyte infusions (DLIs) following relapse, were also monitored. Quantitative estimates of BCR-ABL transcripts were obtained by co-amplification with a competitor sequence. Estimates of ABL transcripts were used as an internal control, and the BCR-ABL/ABL ratio was thus estimated to evaluate the kinetics of residual clones. Twenty-four patients were followed shortly after BMT; two of these patients were in cytogenetic relapse coexisting with very high BCR-ABL levels, while the other 22 were in clinical, haematologic and cytogenetic remission 2-42 months after BMT. In this latter group, seven patients showed a favourable clinical-haematological progression in association with molecular remission, while in 14 patients quantitative PCR assays indicated molecular relapse that was not associated with an early cytogenetic-haematologic relapse. BCR-ABL/ABL levels could not be correlated with the presence of GVHD in the 24 patients followed after BMT. In all seven patients treated with DLI, high levels of transcripts were detected at least 4 months before the appearance of clinical haematological relapse. Following DLI, five of these patients showed transcript levels decreasing by 2 to 5 logs between 4 and 12 months. Among eight other patients studied long after BMT, five showed molecular relapse up to 117 months post-BMT and only one showed cytogenetic relapse. Our findings indicate that quantitative estimates of BCR-ABL transcripts were valuable for monitoring minimal residual disease in each patient.

  14. Reanalysis of the DEMS Nested Case-Control Study of Lung Cancer and Diesel Exhaust: Suitability for Quantitative Risk Assessment

    PubMed Central

    Crump, Kenny S; Van Landingham, Cynthia; Moolgavkar, Suresh H; McClellan, Roger

    2015-01-01

    The International Agency for Research on Cancer (IARC) in 2012 upgraded its hazard characterization of diesel engine exhaust (DEE) to “carcinogenic to humans.” The Diesel Exhaust in Miners Study (DEMS) cohort and nested case-control studies of lung cancer mortality in eight U.S. nonmetal mines were influential in IARC’s determination. We conducted a reanalysis of the DEMS case-control data to evaluate its suitability for quantitative risk assessment (QRA). Our reanalysis used conditional logistic regression and adjusted for cigarette smoking in a manner similar to the original DEMS analysis. However, we included additional estimates of DEE exposure and adjustment for radon exposure. In addition to applying three DEE exposure estimates developed by DEMS, we applied six alternative estimates. Without adjusting for radon, our results were similar to those in the original DEMS analysis: all but one of the nine DEE exposure estimates showed evidence of an association between DEE exposure and lung cancer mortality, with trend slopes differing only by about a factor of two. When exposure to radon was adjusted, the evidence for a DEE effect was greatly diminished, but was still present in some analyses that utilized the three original DEMS DEE exposure estimates. A DEE effect was not observed when the six alternative DEE exposure estimates were utilized and radon was adjusted. No consistent evidence of a DEE effect was found among miners who worked only underground. This article highlights some issues that should be addressed in any use of the DEMS data in developing a QRA for DEE. PMID:25857246

  15. Developing Daily Quantitative Damage Estimates From Geospatial Layers To Support Post Event Recovery

    NASA Astrophysics Data System (ADS)

    Woods, B. K.; Wei, L. H.; Connor, T. C.

    2014-12-01

    With the growth of natural hazard data available in near real-time, it is increasingly feasible to deliver estimates of the damage caused by natural disasters. These estimates can be used in a disaster-management setting or by commercial entities to optimize the deployment of resources and/or the routing of goods and materials. This work outlines an end-to-end, modular process to generate estimates of damage caused by severe weather. The processing stream consists of five generic components: 1) hazard modules that provide quantitative data layers for each peril; 2) standardized methods to map the hazard data to an exposure layer based on atomic geospatial blocks; 3) peril-specific damage functions that compute damage metrics at the atomic geospatial block level; 4) standardized data aggregators, which map damage to user-specific geometries; and 5) data dissemination modules, which provide the resulting damage estimates in a variety of output forms. This presentation provides a description of this generic tool set and an illustrated example using HWRF-based hazard data for Hurricane Arthur (2014). In this example, the Python-based real-time processing ingests GRIB2 output from the HWRF numerical model and dynamically downscales it in conjunction with a land cover database using a multiprocessing pool and a just-in-time compiler (JIT). The resulting wind fields are contoured and ingested into a PostGIS database using OGR. Finally, the damage estimates are calculated at the atomic block level and aggregated to user-defined regions using PostgreSQL queries to construct application-specific tabular and graphics output.

  16. Simulating realistic predator signatures in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2015-01-01

    Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
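
    The pseudo-predator construction at the heart of these simulations can be sketched as follows: bootstrap-sample each prey type's signatures, average them, and mix according to a known diet. The prey data, diet vector, and bootstrap sizes below are invented; the paper's contribution is choosing those sizes objectively rather than arbitrarily.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def pseudo_predator(prey_sigs, diet, boot_sizes):
        """Mix bootstrap means of prey fatty acid signatures into a pseudo-predator signature."""
        parts = []
        for sigs, w, n in zip(prey_sigs, diet, boot_sizes):
            idx = rng.integers(0, len(sigs), size=n)   # bootstrap sample of prey animals
            parts.append(w * sigs[idx].mean(axis=0))   # diet-weighted mean signature
        sig = np.sum(parts, axis=0)
        return sig / sig.sum()                         # renormalize to proportions

    # Three hypothetical prey types, 10 fatty acids, Dirichlet-distributed signatures.
    prey_sigs = [rng.dirichlet(np.full(10, a), size=60) for a in (2.0, 5.0, 9.0)]
    diet = np.array([0.5, 0.3, 0.2])                   # known "true" diet
    print(pseudo_predator(prey_sigs, diet, boot_sizes=[30, 30, 30]).round(3))
    ```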

  17. Constellation Ground Systems Launch Availability Analysis: Enhancing Highly Reliable Launch Systems Design

    NASA Technical Reports Server (NTRS)

    Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.

    2010-01-01

    Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, in a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used for legacy subsystems or comparable components that will support Constellation. The results of the verification analysis will be used to verify compliance with requirements and to highlight design or performance shortcomings for further decision-making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability and maintainability analysis, and present findings and observations based on analysis leading to the Ground Systems Preliminary Design Review milestone.

  18. The hormesis database: the occurrence of hormetic dose responses in the toxicological literature.

    PubMed

    Calabrese, Edward J; Blain, Robyn B

    2011-10-01

    In 2005 we published an assessment of dose responses that satisfied a priori evaluative criteria for inclusion within the relational retrieval hormesis database (Calabrese and Blain, 2005). The database included information on study characteristics (e.g., biological model, gender, age and other relevant aspects, number of doses, dose distribution/range, quantitative features of the dose response, temporal features/repeat measures, and physical/chemical properties of the agents). The 2005 article covered information for about 5000 dose responses; the present article has been expanded to cover approximately 9000 dose responses. This assessment extends and strengthens the conclusion of the 2005 paper that the hormesis concept is broadly generalizable, being independent of biological model, endpoint measured and chemical class/physical agent. It also confirmed the definable quantitative features of hormetic dose responses, in which the strong majority of dose responses display maximum stimulation less than twice that of the control group and a stimulatory width that is within approximately 10- to 20-fold of the estimated toxicological or pharmacological threshold. The remarkable consistency of the quantitative features of the hormetic dose response suggests that hormesis may provide an estimate of biological plasticity that is broadly generalized across plant, microbial and animal (invertebrate and vertebrate) models. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. OPTIMAL EXPERIMENT DESIGN FOR MAGNETIC RESONANCE FINGERPRINTING

    PubMed Central

    Zhao, Bo; Haldar, Justin P.; Setsompop, Kawin; Wald, Lawrence L.

    2017-01-01

    Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance. PMID:28268369

  20. Optimal experiment design for magnetic resonance fingerprinting.

    PubMed

    Bo Zhao; Haldar, Justin P; Setsompop, Kawin; Wald, Lawrence L

    2016-08-01

    Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance.
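
    For a Gaussian-noise acquisition model, the CRB is the inverse of the Fisher information J^T J / sigma^2, where J is the Jacobian of the signal with respect to the parameters. The sketch below computes it for a toy mono-exponential decay model with a numerically differentiated Jacobian, rather than for a full Bloch-simulated MR fingerprinting sequence; all parameter values are illustrative.

    ```python
    import numpy as np

    def signal(theta, te):
        """Toy acquisition model: mono-exponential decay s = M0 * exp(-TE/T2)."""
        m0, t2 = theta
        return m0 * np.exp(-te / t2)

    def crb(theta, te, sigma=0.01, h=1e-6):
        """Cramer-Rao bound via a central-difference Jacobian (Gaussian noise)."""
        J = np.empty((len(te), len(theta)))
        for k in range(len(theta)):
            d = np.zeros(len(theta))
            d[k] = h
            J[:, k] = (signal(theta + d, te) - signal(theta - d, te)) / (2 * h)
        fisher = J.T @ J / sigma**2
        return np.linalg.inv(fisher)      # lower bound on estimator covariance

    theta = np.array([1.0, 80.0])          # true M0 and T2 (ms)
    for tes in (np.linspace(10, 100, 4), np.linspace(10, 300, 16)):
        bound = crb(theta, tes)
        print(f"{len(tes):2d} echoes: sd(T2) >= {np.sqrt(bound[1, 1]):.2f} ms")
    ```

    Comparing the bound across candidate acquisition-parameter sets, as in the last loop, is the essence of the CRB-based experiment design the abstract describes.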

  1. Quantitative imaging of aggregated emulsions.

    PubMed

    Penfold, Robert; Watson, Andrew D; Mackie, Alan R; Hibberd, David J

    2006-02-28

    Noise reduction, restoration, and segmentation methods are developed for the quantitative structural analysis in three dimensions of aggregated oil-in-water emulsion systems imaged by fluorescence confocal laser scanning microscopy. Mindful of typical industrial formulations, the methods are demonstrated for concentrated (30% volume fraction) and polydisperse emulsions. Following a regularized deconvolution step using an analytic optical transfer function and appropriate binary thresholding, novel application of the Euclidean distance map provides effective discrimination of closely clustered emulsion droplets with size variation over at least 1 order of magnitude. The a priori assumption of spherical nonintersecting objects provides crucial information to combat the ill-posed inverse problem presented by locating individual particles. Position coordinates and size estimates are recovered with sufficient precision to permit quantitative study of static geometrical features. In particular, aggregate morphology is characterized by a novel void distribution measure based on the generalized Apollonius problem. This is also compared with conventional Voronoi/Delaunay analysis.
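
    The role of the Euclidean distance map in separating closely clustered droplets can be sketched in 2-D with scipy alone: threshold, distance-transform, take local maxima of the map as droplet centres, and read approximate radii off the distance values (the a priori sphere assumption). A synthetic image stands in for confocal data, and the deconvolution and 3-D steps are omitted.

    ```python
    import numpy as np
    from scipy import ndimage

    # Synthetic binary image: two overlapping "droplets" (discs).
    yy, xx = np.mgrid[0:120, 0:120]
    binary = ((xx - 45) ** 2 + (yy - 60) ** 2 < 20 ** 2) | \
             ((xx - 75) ** 2 + (yy - 60) ** 2 < 16 ** 2)

    # Euclidean distance map: each foreground pixel's distance to the background.
    edm = ndimage.distance_transform_edt(binary)

    # Droplet centres appear as local maxima of the EDM; the EDM value there
    # approximates the droplet radius.
    local_max = (edm == ndimage.maximum_filter(edm, size=15)) & (edm > 5)
    labels, n = ndimage.label(local_max)
    centres = ndimage.center_of_mass(local_max, labels, list(range(1, n + 1)))
    for cy, cx in centres:
        print(f"centre=({cx:.0f}, {cy:.0f}), radius~{edm[int(cy), int(cx)]:.1f} px")
    ```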

  2. Quantitative Estimate of the Relation Between Rolling Resistance on Fuel Consumption of Class 8 Tractor Trailers Using Both New and Retreaded Tires (SAE Paper 2014-01-2425)

    EPA Science Inventory

    Road tests of class 8 tractor trailers were conducted by the US Environmental Protection Agency on new and retreaded tires of varying rolling resistance in order to provide estimates of the quantitative relationship between rolling resistance and fuel consumption.

  3. 77 FR 70727 - Endangered and Threatened Wildlife and Plants; 90-Day Finding on a Petition to List the African...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-27

    ... indirectly through changes in regional climate; and (b) Quantitative research on the relationship of food...). Population Estimates The most quantitative estimate of the historic size of the African lion population... research conducted by Chardonnet et al., three subpopulations were described as consisting of 18 groups...

  4. Quantitative estimate of commercial fish enhancement by seagrass habitat in southern Australia

    NASA Astrophysics Data System (ADS)

    Blandon, Abigayil; zu Ermgassen, Philine S. E.

    2014-03-01

    Seagrass provides many ecosystem services that are of considerable value to humans, including the provision of nursery habitat for commercial fish stock. Yet few studies have sought to quantify these benefits. As seagrass habitat continues to suffer a high rate of loss globally and with the growing emphasis on compensatory restoration, valuation of the ecosystem services associated with seagrass habitat is increasingly important. We undertook a meta-analysis of juvenile fish abundance at seagrass and control sites to derive a quantitative estimate of the enhancement of juvenile fish by seagrass habitats in southern Australia. Thirteen fish of commercial importance were identified as being recruitment enhanced in seagrass habitat, twelve of which were associated with sufficient life history data to allow for estimation of total biomass enhancement. We applied von Bertalanffy growth models and species-specific mortality rates to the determined values of juvenile enhancement to estimate the contribution of seagrass to commercial fish biomass. The identified species were enhanced in seagrass by 0.98 kg m⁻² y⁻¹, equivalent to ~$A230,000 ha⁻¹ y⁻¹. These values represent the stock enhancement where all fish species are present, as opposed to realized catches. Having accounted for the time lag between fish recruiting to a seagrass site and entering the fishery and for a 3% annual discount rate, we find that seagrass restoration efforts costing $A10,000 ha⁻¹ have a potential payback time of less than five years, and that restoration costing $A629,000 ha⁻¹ can be justified on the basis of enhanced commercial fish recruitment where these twelve fish species are present.
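
    The chain from juvenile enhancement to discounted commercial value can be sketched as below, using the Baranov catch equation for annual landings. Growth, mortality, price, and recruitment figures are invented single-species stand-ins, not the paper's species-specific values.

    ```python
    import numpy as np

    # Hypothetical single-species stand-ins for the paper's inputs.
    recruits = 5000                # juveniles enhanced per hectare per year
    M, F = 0.6, 0.4                # natural and fishing mortality (1/yr)
    Linf, K, t0 = 60.0, 0.3, -0.5  # von Bertalanffy growth parameters (cm)
    a, b = 0.01, 3.0               # length-weight relation: W(g) = a * L**b
    age_entry, age_max = 2, 10     # recruits enter the fishery at age 2
    price, discount = 8.0, 0.03    # $A per kg landed; 3% annual discount rate

    N, value = float(recruits), 0.0
    for t in range(1, age_max + 1):
        Z = M + F if t >= age_entry else M                # total mortality this year
        length = Linf * (1 - np.exp(-K * (t - t0)))       # von Bertalanffy length at age
        weight_kg = a * length ** b / 1000.0
        if t >= age_entry:
            catch = N * (F / Z) * (1 - np.exp(-Z))        # Baranov catch equation
            value += price * catch * weight_kg / (1 + discount) ** t
        N *= np.exp(-Z)                                   # cohort survival to next age

    print(f"discounted landed value of one year's recruits: $A{value:,.0f} per ha")
    ```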

  5. Estimation of sample size and testing power (part 5).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduces methods for estimating sample size and testing power for difference tests on quantitative and qualitative data under the single-group, paired, and crossover designs. Specifically, it presents the corresponding formulas for the three designs and their realization based on the formulas and on the POWER procedure of SAS software, and elaborates them with examples, which will help researchers implement the repetition principle.
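
    For quantitative data under the paired design, the standard normal-approximation formula is n = ((z_{1-alpha/2} + z_{1-beta}) * sigma_d / delta)^2, with sigma_d the standard deviation of within-pair differences and delta the difference to detect. A minimal sketch of that formula (the article itself pairs such formulas with the SAS POWER procedure):

    ```python
    from math import ceil
    from scipy.stats import norm

    def paired_sample_size(delta, sd_diff, alpha=0.05, power=0.80):
        """Pairs needed to detect a mean within-pair difference delta (two-sided test)."""
        z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the type I error rate
        z_beta = norm.ppf(power)            # quantile corresponding to the desired power
        return ceil(((z_alpha + z_beta) * sd_diff / delta) ** 2)

    # Example: detect a 5-unit mean difference when differences have SD 12.
    print(paired_sample_size(delta=5, sd_diff=12))   # -> 46 pairs
    ```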

  6. A new mean estimator using auxiliary variables for randomized response models

    NASA Astrophysics Data System (ADS)

    Ozgul, Nilgun; Cingi, Hulya

    2013-10-01

    Randomized response models (RRMs) are commonly used in surveys dealing with sensitive questions, such as abortion, alcoholism, sexual orientation, drug taking, annual income, and tax evasion, to ensure interviewee anonymity and to reduce nonresponse rates and response bias. Starting from the pioneering work of Warner [7], many versions of RRMs have been developed that can deal with quantitative responses. In this study, a new mean estimator using auxiliary variables is suggested for RRMs with quantitative responses. Its mean square error is derived, and a simulation study is performed to show the efficiency of the proposed estimator relative to existing estimators in RRMs.
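
    The basic mechanics of quantitative randomized response can be illustrated with the classical additive scrambling model, which is simpler than (and not the same as) the auxiliary-variable estimator proposed in this paper: each respondent reports Z = Y + S, with scrambling noise S of known mean, and the analyst unscrambles the sample mean. All distributions below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # True sensitive values (e.g., undeclared income), never seen by the interviewer.
    y = rng.lognormal(mean=10.0, sigma=0.5, size=2000)

    # Additive scrambling: each respondent adds noise S with known mean and variance.
    s_mean, s_sd = 50.0, 400.0
    z = y + rng.normal(s_mean, s_sd, size=y.size)      # values actually reported

    y_hat = z.mean() - s_mean                          # unbiased estimator of mean(Y)
    se = z.std(ddof=1) / np.sqrt(z.size)               # its standard error
    print(f"true mean {y.mean():.0f}, estimate {y_hat:.0f} +/- {1.96 * se:.0f}")
    ```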

  7. Estimating the number of animals in wildlife populations

    USGS Publications Warehouse

    Lancia, R.A.; Kendall, W.L.; Pollock, K.H.; Nichols, J.D.; Braun, Clait E.

    2005-01-01

    INTRODUCTION In 1938, Howard M. Wight devoted 9 pages, which was an entire chapter in the first wildlife management techniques manual, to what he termed 'census' methods. As books and chapters such as this attest, the volume of literature on this subject has grown tremendously. Abundance estimation remains an active area of biometrical research, as reflected in the many differences between this chapter and the similar contribution in the previous manual. Our intent in this chapter is to present an overview of the basic and most widely used population estimation techniques and to provide an entree to the relevant literature. Several possible approaches could be taken in writing a chapter dealing with population estimation. For example, we could provide a detailed treatment focusing on statistical models and on derivation of estimators based on these models. Although a chapter using this approach might provide a valuable reference for quantitative biologists and biometricians, it would be of limited use to many field biologists and wildlife managers. Another approach would be to focus on details of actually applying different population estimation techniques. This approach would include both field application (e.g., how to set out a trapping grid or conduct an aerial survey) and detailed instructions on how to use the resulting data with appropriate estimation equations. We are reluctant to attempt such an approach, however, because of the tremendous diversity of real-world field situations defined by factors such as the animal being studied, habitat, available resources, and because of our resultant inability to provide detailed instructions for all possible cases. We believe it is more useful to provide the reader with the conceptual basis underlying estimation methods. Thus, we have tried to provide intuitive explanations for how basic methods work. In doing so, we present relevant estimation equations for many methods and provide citations of more detailed treatments covering both statistical considerations and field applications. We have chosen to present methods that are representative of classes of estimators, rather than address every available method. Our hope is that this chapter will provide the reader with enough background to make an informed decision about what general method(s) will likely perform well in any particular field situation. Readers with a more quantitative background may then be able to consult detailed references and tailor the selected method to suit their particular needs. Less quantitative readers should consult a biometrician, preferably one with experience in wildlife studies, for this 'tailoring,' with the hope they will be able to do so with a basic understanding of the general method, thereby permitting useful interaction and discussion with the biometrician. SUMMARY Estimating the abundance or density of animals in wild populations is not a trivial matter. Virtually all techniques involve the basic problem of estimating the probability of seeing, capturing, or otherwise detecting animals during some type of survey and, in many cases, sampling concerns as well. In the case of indices, the detection probability is assumed to be constant (but unknown). We caution against use of indices unless this assumption can be verified for the comparison(s) of interest. In the case of population estimation, many methods have been developed over the years to estimate the probability of detection associated with various kinds of count statistics. 
Techniques range from complete counts, where sampling concerns often dominate, to incomplete counts where detection probabilities are also important. Some examples of the latter are multiple observers, removal methods, and capture-recapture. Before embarking on a survey to estimate the size of a population, one must understand clearly what information is needed and for what purpose the information will be used. The key to derivin
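
    To make the detection-probability idea concrete, here is a minimal Python sketch of the two-sample capture-recapture (Lincoln-Petersen) estimator in Chapman's bias-corrected form; the counts are hypothetical and the snippet is our illustration, not the chapter's own worked example.

        # Chapman's bias-corrected Lincoln-Petersen estimator: animals marked on
        # a first visit are recaptured on a second, and the recapture fraction
        # estimates the detection probability. All counts are hypothetical.

        def chapman_estimate(n1: int, n2: int, m2: int) -> float:
            """Estimate abundance from marked (n1), caught (n2), recaptured (m2)."""
            return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

        n1, n2, m2 = 120, 95, 23
        print(f"abundance estimate: {chapman_estimate(n1, n2, m2):.0f}")
        print(f"implied detection probability: {m2 / n1:.2f}")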

  8. Filter design for cancellation of baseline-fluctuation in needle EMG recordings.

    PubMed

    Rodríguez-Carreño, I; Malanda-Trigueros, A; Gila-Useros, L; Navallas-Irujo, J; Rodríguez-Falces, J

    2006-01-01

    Appropriate cancellation of the baseline fluctuation (BLF) is an important issue when recording EMG signals, as BLF may degrade signal quality and distort qualitative and quantitative analysis. We present a novel filter-design approach for automatic cancellation of the BLF based on several signal processing techniques used sequentially. The methodology is to estimate the spectral content of the BLF and then to use this estimate to design a high-pass FIR filter that cancels the BLF present in the signal. Two merit figures are devised for measuring the degree of BLF present in an EMG record. These figures are used to compare our method with the conventional approach, which naively treats the baseline as a constant potential shift without any fluctuation. Applications of the technique to real and simulated EMG signals show the superior performance of our approach in terms of both visual inspection and the merit figures.
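
    As a rough illustration of the filtering step only (not the authors' spectral-estimation procedure), the sketch below removes a synthetic low-frequency baseline from a noisy record with a high-pass FIR filter; the sampling rate and the 20 Hz cutoff are assumptions.

        import numpy as np
        from scipy.signal import firwin, filtfilt

        fs = 10_000                                   # assumed sampling rate, Hz
        t = np.arange(0, 1.0, 1 / fs)

        rng = np.random.default_rng(0)
        emg = rng.normal(0.0, 0.1, t.size)            # EMG-like broadband activity
        baseline = 0.5 * np.sin(2 * np.pi * 1.5 * t)  # slow baseline fluctuation
        x = emg + baseline

        # High-pass FIR filter; the cutoff is an illustrative choice, whereas the
        # paper designs it from an estimate of the BLF spectrum. numtaps is odd,
        # as required for a high-pass (type I) linear-phase FIR filter.
        taps = firwin(numtaps=501, cutoff=20.0, pass_zero=False, fs=fs)
        cleaned = filtfilt(taps, 1.0, x)              # zero-phase filtering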

  9. Quantitative microscopy of the lung: a problem-based approach. Part 2: stereological parameters and study designs in various diseases of the respiratory tract.

    PubMed

    Mühlfeld, Christian; Ochs, Matthias

    2013-08-01

    Design-based stereology provides efficient methods to obtain valuable quantitative information of the respiratory tract in various diseases. However, the choice of the most relevant parameters in a specific disease setting has to be deduced from the present pathobiological knowledge. Often it is difficult to express the pathological alterations by interpretable parameters in terms of volume, surface area, length, or number. In the second part of this companion review article, we analyze the present pathophysiological knowledge about acute lung injury, diffuse parenchymal lung diseases, emphysema, pulmonary hypertension, and asthma to come up with recommendations for the disease-specific application of stereological principles for obtaining relevant parameters. Worked examples with illustrative images are used to demonstrate the work flow, estimation procedure, and calculation and to facilitate the practical performance of equivalent analyses.
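
    As a minimal example of a stereological estimator of the kind discussed here, the snippet below computes a volume fraction by point counting (the fraction of test points hitting a structure estimates its volume density within the reference space); all counts are invented.

        # Point-counting estimate of a volume fraction: grid points hitting the
        # structure, divided by points hitting the reference space, estimate the
        # volume density V_V. Counts per section are hypothetical.

        points_on_structure = [18, 25, 22, 30, 19]
        points_on_reference = [200, 210, 205, 215, 198]

        vv = sum(points_on_structure) / sum(points_on_reference)
        print(f"estimated volume fraction V_V = {vv:.3f}")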

  10. Physiological frailty index (PFI): quantitative in-life estimate of individual biological age in mice.

    PubMed

    Antoch, Marina P; Wrobel, Michelle; Kuropatwinski, Karen K; Gitlin, Ilya; Leonova, Katerina I; Toshkov, Ilia; Gleiberman, Anatoli S; Hutson, Alan D; Chernova, Olga B; Gudkov, Andrei V

    2017-03-19

    The development of healthspan-extending pharmaceuticals requires quantitative estimation of age-related progressive physiological decline. In humans, individual health status can be quantitatively assessed by means of a frailty index (FI), a parameter which reflects the scale of accumulation of age-related deficits. However, adaptation of this methodology to animal models is a challenging task, since it includes multiple subjective parameters. Here we report the development of a quantitative non-invasive procedure to estimate the biological age of an individual animal by creating a physiological frailty index (PFI). We demonstrated the dynamics of PFI increase during chronological aging of male and female NIH Swiss mice. We also demonstrated accelerated growth of the PFI in animals placed on a high-fat diet, reflecting the acceleration of aging by obesity, and provide a tool for its quantitative assessment. Additionally, we showed that the PFI could reveal the anti-aging effect of the mTOR inhibitor rapatar (a bioavailable formulation of rapamycin) prior to registration of its effects on longevity. The PFI revealed substantial sex-related differences in normal chronological aging and in the efficacy of detrimental (high-fat diet) or beneficial (rapatar) aging modulatory factors. Together, these data introduce the PFI as a reliable, non-invasive, quantitative tool suitable for testing potential anti-aging pharmaceuticals in pre-clinical studies.
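
    A frailty index of this general type is simply the fraction of assessed items scored as deficits. The sketch below shows the arithmetic on invented items; the published PFI uses a specific panel of physiological readouts, not these names.

        # Deficit-accumulation frailty index: deficits present / items assessed.
        # Item names and binary scores are hypothetical.

        deficits = {
            "grip_strength_low": 1,
            "gait_speed_low": 0,
            "hearing_impaired": 1,
            "body_condition_poor": 0,
        }

        fi = sum(deficits.values()) / len(deficits)
        print(f"frailty index = {fi:.2f}")  # 0 = no deficits, 1 = all deficits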

  11. Soft sensor based composition estimation and controller design for an ideal reactive distillation column.

    PubMed

    Vijaya Raghavan, S R; Radhakrishnan, T K; Srinivasan, K

    2011-01-01

    In this research work, the authors present the design and implementation of a recurrent neural network (RNN) based inferential state estimation scheme for an ideal reactive distillation column. Decentralized PI controllers are designed and implemented. The reactive distillation process is controlled by controlling the composition, which is estimated from the available temperature measurements using a type of RNN called a Time Delayed Neural Network (TDNN). The performance of the RNN-based state estimation scheme under both open-loop and closed-loop conditions has been compared with a standard Extended Kalman Filter (EKF) and a Feedforward Neural Network (FNN). Online training/correction is performed for both the RNN and FNN schemes every ten minutes, whenever new untrained measurements become available from a conventional composition analyzer. The RNN shows better state estimation capability than the other schemes in terms of qualitative and quantitative performance indices. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
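
    To illustrate the tapped-delay idea behind a TDNN soft sensor (without reproducing the authors' column model or network topology), here is a sketch that predicts a fictitious composition from current and delayed temperature readings using a small scikit-learn network; the data, the linear temperature-composition relation, and the 3-lag window are all assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        temp_dev = rng.normal(0.0, 2.0, 2000)   # tray temperature deviations, K
        comp = 0.02 * temp_dev + 0.5 + rng.normal(0.0, 0.002, 2000)  # fictitious

        lags = 3  # tapped-delay window: inputs are T(k-2), T(k-1), T(k)
        X = np.column_stack([temp_dev[i:len(temp_dev) - lags + i]
                             for i in range(lags)])
        y = comp[lags - 1:-1]   # composition at the most recent lag in each row

        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                             random_state=0)
        model.fit(X[:1500], y[:1500])
        print("held-out R^2:", model.score(X[1500:], y[1500:]))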

  12. A geochemical view into continental palaeotemperatures of the end-Permian using oxygen and hydrogen isotope composition of secondary silica in chert rubble breccia: Kaibab Formation, Grand Canyon (USA).

    PubMed

    Kenny, Ray

    2018-01-16

    The upper carbonate member of the Kaibab Formation in northern Arizona (USA) was subaerially exposed during the end-Permian and contains fractured and zoned chert rubble lag deposits typical of karst topography. The karst chert rubble has secondary (authigenic) silica precipitates suitable for estimating continental weathering temperatures during the end-Permian karst event. New oxygen and hydrogen isotope ratios of secondary silica precipitates in the residual rubble breccia: (1) yield continental palaeotemperature estimates between 17 and 22 °C; and (2) indicate that meteoric water played a role in the crystallization history of the secondary silica. The continental palaeotemperatures presented herein are broadly consistent with a global mean temperature estimate of 18.2 °C for the latest Permian derived from published climate system models. Few data sets are presently available that allow even approximate quantitative estimates of regional continental palaeotemperatures. These data provide a basis for better understanding the end-Permian palaeoclimate at a seasonally tropical latitude along the western shoreline of Pangaea.
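
    Palaeotemperatures of this kind come from a mineral-water oxygen-isotope fractionation curve of the form 1000 ln α = A·10⁶/T² + B, inverted for T. The sketch below shows that inversion; the coefficients A and B and the δ¹⁸O values are placeholders, not the calibration or data used in the paper.

        import numpy as np

        # Solve 1000*ln(alpha) = A*1e6/T**2 + B for temperature T (kelvin).
        # A, B and both delta values are illustrative placeholders.

        A, B = 3.09, -3.29            # placeholder calibration constants
        d18O_silica = 30.0            # permil vs VSMOW, hypothetical
        d18O_water = -5.0             # permil, assumed meteoric water

        ln_alpha_x1000 = 1000.0 * np.log((1000.0 + d18O_silica)
                                         / (1000.0 + d18O_water))
        T = np.sqrt(A * 1e6 / (ln_alpha_x1000 - B))
        print(f"T = {T - 273.15:.1f} deg C")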

  13. Average intragranular misorientation trends in polycrystalline materials predicted by a viscoplastic self-consistent approach

    DOE PAGES

    Lebensohn, Ricardo A.; Zecevic, Miroslav; Knezevic, Marko; ...

    2015-12-15

    This work presents estimations of average intragranular fluctuations of lattice rotation rates in polycrystalline materials, obtained by means of the viscoplastic self-consistent (VPSC) model. These fluctuations give a tensorial measure of the trend of misorientation developing inside each single crystal grain representing a polycrystalline aggregate. We first report details of the algorithm implemented in the VPSC code to estimate these fluctuations, which are then validated by comparison with corresponding full-field calculations. Next, we present predictions of average intragranular fluctuations of lattice rotation rates for cubic aggregates, which are rationalized by comparison with experimental evidence on annealing textures of fcc and bcc polycrystals deformed in tension and compression, respectively, as well as with measured intragranular misorientation distributions in a Cu polycrystal deformed in tension. The orientation-dependent and micromechanically based estimations of intragranular misorientations that can be derived from the present implementation are necessary to formulate sound sub-models for the prediction of quantitatively accurate deformation textures, grain fragmentation, and recrystallization textures using the VPSC approach.

  14. Methods to estimate the transfer of contaminants into recycling products - A case study from Austria.

    PubMed

    Knapp, Julika; Allesch, Astrid; Müller, Wolfgang; Bockreis, Anke

    2017-11-01

    Recycling of waste materials is desirable to reduce the consumption of limited primary resources, but it also includes the risk of recycling unwanted, hazardous substances. In Austria, the legal framework demands that secondary products must not present a higher risk than comparable products derived from primary resources. However, the act provides no definition of how to assess this risk potential. This paper describes the development of different quantitative and qualitative methods to estimate the transfer of contaminants in recycling processes. The quantitative methods comprise the comparison of concentrations of harmful substances in recycling products to corresponding primary products and to existing limit values. The developed evaluation matrix, which considers further aspects, allows for the assessment of the qualitative risk potential. The results show that, depending on the assessed waste fraction, particular contaminants can be critical: their concentrations were higher than in comparable primary materials and did not comply with existing limit values. On the other hand, the results show that a long-term, well-established quality control system can assure compliance with the limit values. The results of the qualitative assessment obtained with the evaluation matrix support the results of the quantitative assessment. Therefore, the evaluation matrix can be suitable to quickly screen waste streams used for recycling to estimate their potential environmental and health risks. To prevent the transfer of contaminants into product cycles, improved data on relevant substances in secondary resources are necessary. In addition, regulations for material recycling are required to assure adequate quality control measures, including limit values. Copyright © 2017 Elsevier Ltd. All rights reserved.
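
    The quantitative screening logic described here reduces to comparing each contaminant against two references. A toy version, with invented metals, concentrations and limits:

        # Flag contaminants whose concentration in the recycling product exceeds
        # the comparable primary product or an existing limit value. All numbers
        # are hypothetical (mg/kg).

        limits = {"Cd": 1.5, "Pb": 100.0, "Hg": 0.5}
        primary = {"Cd": 0.3, "Pb": 20.0, "Hg": 0.1}
        secondary = {"Cd": 2.1, "Pb": 35.0, "Hg": 0.2}

        for metal, conc in secondary.items():
            flags = []
            if conc > limits[metal]:
                flags.append("above limit value")
            if conc > primary[metal]:
                flags.append("above primary product")
            print(metal, ", ".join(flags) if flags else "compliant")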

  15. Multiparametric Quantitative Ultrasound Imaging in Assessment of Chronic Kidney Disease.

    PubMed

    Gao, Jing; Perlman, Alan; Kalache, Safa; Berman, Nathaniel; Seshan, Surya; Salvatore, Steven; Smith, Lindsey; Wehrli, Natasha; Waldron, Levi; Kodali, Hanish; Chevalier, James

    2017-11-01

    To evaluate the value of multiparametric quantitative ultrasound imaging in assessing chronic kidney disease (CKD) using kidney biopsy pathologic findings as reference standards. We prospectively measured multiparametric quantitative ultrasound markers with grayscale, spectral Doppler, and acoustic radiation force impulse imaging in 25 patients with CKD before kidney biopsy and 10 healthy volunteers. Based on all pathologic (glomerulosclerosis, interstitial fibrosis/tubular atrophy, arteriosclerosis, and edema) scores, the patients with CKD were classified into mild (no grade 3 and <2 of grade 2) and moderate to severe (at least 2 of grade 2 or 1 of grade 3) CKD groups. Multiparametric quantitative ultrasound parameters included kidney length, cortical thickness, pixel intensity, parenchymal shear wave velocity, intrarenal artery peak systolic velocity (PSV), end-diastolic velocity (EDV), and resistive index. We tested the difference in quantitative ultrasound parameters among mild CKD, moderate to severe CKD, and healthy controls using analysis of variance, analyzed correlations of quantitative ultrasound parameters with pathologic scores and the estimated glomerular filtration rate (GFR) using Pearson correlation coefficients, and examined the diagnostic performance of quantitative ultrasound parameters in determining moderate CKD and an estimated GFR of less than 60 mL/min/1.73 m² using receiver operating characteristic curve analysis. There were significant differences in cortical thickness, pixel intensity, PSV, and EDV among the 3 groups (all P < .01). Among quantitative ultrasound parameters, the top areas under the receiver operating characteristic curves for PSV and EDV were 0.88 and 0.97, respectively, for determining pathologic moderate to severe CKD, and 0.76 and 0.86 for an estimated GFR of less than 60 mL/min/1.73 m². Moderate to good correlations were found for PSV, EDV, and pixel intensity with pathologic scores and estimated GFR. The PSV, EDV, and pixel intensity are valuable in determining moderate to severe CKD. The value of shear wave velocity in assessing CKD needs further investigation. © 2017 by the American Institute of Ultrasound in Medicine.
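
    The diagnostic-performance step is a standard ROC analysis of a single quantitative marker against the biopsy-based grouping. A minimal sketch with invented EDV readings (lower values taken to indicate disease):

        import numpy as np
        from sklearn.metrics import roc_auc_score

        # Hypothetical end-diastolic velocities (cm/s) and CKD group labels
        # (1 = moderate to severe on biopsy). Values are invented.
        edv = np.array([12.1, 10.4, 8.9, 7.2, 6.8, 5.9, 5.1, 4.3])
        moderate = np.array([0, 0, 0, 0, 1, 1, 1, 1])

        # Lower EDV suggests disease, so score with the negated marker.
        print("AUC:", roc_auc_score(moderate, -edv))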

  16. Quantitative sonoelastography for the in vivo assessment of skeletal muscle viscoelasticity

    NASA Astrophysics Data System (ADS)

    Hoyt, Kenneth; Kneezel, Timothy; Castaneda, Benjamin; Parker, Kevin J.

    2008-08-01

    A novel quantitative sonoelastography technique for assessing the viscoelastic properties of skeletal muscle tissue was developed. Slowly propagating shear wave interference patterns (termed crawling waves) were generated using a two-source configuration vibrating normal to the surface. Theoretical models predict crawling wave displacement fields, which were validated through phantom studies. In experiments, a viscoelastic model was fit to dispersive shear wave speed sonoelastographic data using nonlinear least-squares techniques to determine frequency-independent shear modulus and viscosity estimates. Shear modulus estimates derived using the viscoelastic model were in agreement with that obtained by mechanical testing on phantom samples. Preliminary sonoelastographic data acquired in healthy human skeletal muscles confirm that high-quality quantitative elasticity data can be acquired in vivo. Studies on relaxed muscle indicate discernible differences in both shear modulus and viscosity estimates between different skeletal muscle groups. Investigations into the dynamic viscoelastic properties of (healthy) human skeletal muscles revealed that voluntarily contracted muscles exhibit considerable increases in both shear modulus and viscosity estimates as compared to the relaxed state. Overall, preliminary results are encouraging and quantitative sonoelastography may prove clinically feasible for in vivo characterization of the dynamic viscoelastic properties of human skeletal muscle.
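
    The model-fitting step can be sketched as a nonlinear least-squares fit of a Kelvin-Voigt dispersion curve to shear wave speed versus frequency, yielding frequency-independent shear modulus and viscosity estimates; the data points, density and starting values below are synthetic, not the study's measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        rho = 1000.0  # assumed tissue density, kg/m^3

        def voigt_speed(omega, mu, eta):
            """Kelvin-Voigt shear wave speed for modulus mu (Pa), viscosity eta (Pa.s)."""
            g = np.sqrt(mu**2 + (omega * eta) ** 2)
            return np.sqrt(2.0 * g**2 / (rho * (mu + g)))

        freqs = np.array([100.0, 150.0, 200.0, 250.0, 300.0])   # Hz
        omega = 2.0 * np.pi * freqs
        speeds = np.array([2.9, 3.1, 3.3, 3.5, 3.6])            # m/s, synthetic

        (mu_hat, eta_hat), _ = curve_fit(voigt_speed, omega, speeds, p0=(5e3, 2.0))
        print(f"mu = {mu_hat:.0f} Pa, eta = {eta_hat:.2f} Pa.s")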

  17. Extra dimension searches at hadron colliders to next-to-leading order-QCD

    NASA Astrophysics Data System (ADS)

    Kumar, M. C.; Mathews, Prakash; Ravindran, V.

    2007-11-01

    The quantitative impact of NLO-QCD corrections for searches of large and warped extra dimensions at hadron colliders is investigated for the Drell-Yan process. K-factors for various observables at hadron colliders are presented. Factorisation and renormalisation scale dependence and uncertainties due to various parton distribution functions are studied. Uncertainties arising from the error on experimental data are estimated using the MRST parton distribution functions.

  18. An assessment of the reliability of quantitative genetics estimates in study systems with high rate of extra-pair reproduction and low recruitment.

    PubMed

    Bourret, A; Garant, D

    2017-03-01

    Quantitative genetics approaches, and particularly animal models, are widely used to assess the genetic (co)variance of key fitness related traits and infer adaptive potential of wild populations. Despite the importance of precision and accuracy of genetic variance estimates and their potential sensitivity to various ecological and population specific factors, their reliability is rarely tested explicitly. Here, we used simulations and empirical data collected from an 11-year study on tree swallow (Tachycineta bicolor), a species showing a high rate of extra-pair paternity and a low recruitment rate, to assess the importance of identity errors, structure and size of the pedigree on quantitative genetic estimates in our dataset. Our simulations revealed an important lack of precision in heritability and genetic-correlation estimates for most traits, a low power to detect significant effects and important identifiability problems. We also observed a large bias in heritability estimates when using the social pedigree instead of the genetic one (deflated heritabilities) or when not accounting for an important cause of resemblance among individuals (for example, permanent environment or brood effect) in model parameterizations for some traits (inflated heritabilities). We discuss the causes underlying the low reliability observed here and why they are also likely to occur in other study systems. Altogether, our results re-emphasize the difficulties of generalizing quantitative genetic estimates reliably from one study system to another and the importance of reporting simulation analyses to evaluate these important issues.
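
    The precision problem can be demonstrated with a far simpler estimator than an animal model: simulate a trait with known heritability and recover it by midparent-offspring regression. Everything below is synthetic, and the design (150 independent families) is an assumption.

        import numpy as np

        rng = np.random.default_rng(2)
        va, ve, n_fam = 0.4, 0.6, 150          # true h2 = va / (va + ve) = 0.40

        a_sire = rng.normal(0.0, np.sqrt(va), n_fam)   # additive genetic values
        a_dam = rng.normal(0.0, np.sqrt(va), n_fam)
        p_sire = a_sire + rng.normal(0.0, np.sqrt(ve), n_fam)   # phenotypes
        p_dam = a_dam + rng.normal(0.0, np.sqrt(ve), n_fam)

        # Offspring: midparent breeding value plus Mendelian sampling.
        a_off = 0.5 * (a_sire + a_dam) + rng.normal(0.0, np.sqrt(va / 2), n_fam)
        p_off = a_off + rng.normal(0.0, np.sqrt(ve), n_fam)

        midparent = 0.5 * (p_sire + p_dam)
        slope = np.cov(midparent, p_off)[0, 1] / np.var(midparent, ddof=1)
        print(f"h2 estimate = {slope:.2f} (true 0.40)")

    Rerunning with different seeds shows the sampling spread that the paper quantifies with full simulation studies.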

  19. Quantitative subsurface analysis using frequency modulated thermal wave imaging

    NASA Astrophysics Data System (ADS)

    Subhani, S. K.; Suresh, B.; Ghali, V. S.

    2018-01-01

    Quantitative depth analysis of an anomaly with enhanced depth resolution is a challenging task in estimating the depth of subsurface anomalies using thermography. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and then analyzing the subsequent thermal response using a suitable post-processing approach to resolve subsurface details. Conventional Fourier-transform-based post-processing, however, unscrambles the frequencies with limited frequency resolution and therefore yields only finite depth resolution. The spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which can further improve the depth resolution to axially explore the finest subsurface features. Quantitative depth analysis with this augmented depth resolution is proposed to provide a close estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first solution for quantitative depth estimation in frequency modulated thermal wave imaging.
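
    The spectral-zooming step can be illustrated with SciPy's chirp z-transform, which evaluates the spectrum on an arbitrarily fine grid inside a narrow band instead of the FFT's fixed fs/N spacing; the signal, band edges and bin count below are illustrative (scipy.signal.czt requires a recent SciPy).

        import numpy as np
        from scipy.signal import czt

        fs, n = 50.0, 3000                       # sampling rate (Hz), samples
        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * 0.83 * t) + 0.5 * np.sin(2 * np.pi * 0.91 * t)

        f1, f2, m = 0.7, 1.0, 256                # zoom band and number of bins
        w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))   # ratio between points
        a = np.exp(2j * np.pi * f1 / fs)         # starting point on unit circle
        spectrum = czt(x, m=m, w=w, a=a)

        freqs = f1 + np.arange(m) * (f2 - f1) / m
        peak = freqs[np.argmax(np.abs(spectrum))]
        print(f"peak in zoom band at {peak:.3f} Hz")   # near 0.83 Hz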

  20. HPLC analysis and standardization of Brahmi vati – An Ayurvedic poly-herbal formulation

    PubMed Central

    Mishra, Amrita; Mishra, Arun K.; Tiwari, Om Prakash; Jha, Shivesh

    2013-01-01

    Objectives The aim of the present study was to standardize Brahmi vati (BV) by simultaneous quantitative estimation of Bacoside A3 and Piperine using an HPLC–UV method. BV is an important Ayurvedic polyherbal formulation used to treat epilepsy and mental disorders, containing thirty-eight ingredients including Bacopa monnieri L. and Piper longum L. Materials and methods An HPLC–UV method was developed for the standardization of BV by simultaneous quantitative estimation of Bacoside A3 and Piperine, the major constituents of B. monnieri L. and P. longum L., respectively. The developed method was validated on parameters including linearity, precision, accuracy and robustness. Results The HPLC analysis showed significantly higher amounts of Bacoside A3 and Piperine in the in-house sample of BV when compared with all three marketed samples of the same. Results showed variations in the amounts of Bacoside A3 and Piperine in different samples, indicating non-uniformity in their quality, which will lead to differences in their therapeutic effects. Conclusion The outcome of the present investigation underlines the importance of standardization of Ayurvedic formulations. The developed method may be further used to standardize other samples of BV or other formulations containing Bacoside A3 and Piperine. PMID:24396246
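
    Quantitation against external standards of the kind used here reduces to a linear calibration of peak area versus concentration. A sketch with invented numbers:

        import numpy as np

        conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])          # standards, ug/mL
        area = np.array([102.0, 198.0, 405.0, 810.0, 1598.0])   # peak areas

        slope, intercept = np.polyfit(conc, area, 1)   # calibration line

        unknown_area = 620.0
        unknown_conc = (unknown_area - intercept) / slope
        print(f"estimated concentration = {unknown_conc:.1f} ug/mL")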

  1. Investigation of Color in a Fusion Protein Using Advanced Analytical Techniques: Delineating Contributions from Oxidation Products and Process Related Impurities.

    PubMed

    Song, Hangtian; Xu, Jianlin; Jin, Mi; Huang, Chao; Bongers, Jacob; Bai, He; Wu, Wei; Ludwig, Richard; Li, Zhengjian; Tao, Li; Das, Tapan K

    2016-04-01

    Discoloration of protein therapeutics has drawn increased attention recently due to concerns about its potential impact on quality and safety. Investigation of discoloration in protein therapeutics for comparability is particularly challenging for two reasons. First, the description of color or discoloration is to a certain extent a subjective characteristic rather than a quantitative attribute. Second, the species contributing to discoloration may arise from multiple sources and are typically present at trace levels. Our purpose was to develop a systematic approach that allows effective identification of the color-generating species in protein therapeutics. A yellow-brown discoloration event observed in a therapeutic protein was investigated by optical spectroscopy, ultra-performance liquid chromatography, and mass spectrometry (MS). The majority of the color-generating species were identified as oxidatively modified protein. The locations of the oxidized amino acid residues were identified by MS/MS. In addition, the impact on discoloration of process-related impurities co-purified from media was also investigated. Finally, a semi-quantitative scale to estimate the contribution of each color source is presented, which revealed that oxidized peptides are the major contributors. A systematic approach was thus developed for identification of the color-generating species in protein therapeutics and for estimation of the contribution of each color source.

  2. Use of statistical and pharmacokinetic-pharmacodynamic modeling and simulation to improve decision-making: A section summary report of the trends and innovations in clinical trial statistics conference.

    PubMed

    Kimko, Holly; Berry, Seth; O'Kelly, Michael; Mehrotra, Nitin; Hutmacher, Matthew; Sethuraman, Venkat

    2017-01-01

    The application of modeling and simulation (M&S) methods to improve decision-making was discussed during the Trends & Innovations in Clinical Trial Statistics Conference held in Durham, North Carolina, USA on May 1-4, 2016. Uses of both pharmacometric and statistical M&S were presented during the conference, highlighting the diversity of the methods employed by pharmacometricians and statisticians to address a broad range of quantitative issues in drug development. Five presentations are summarized herein, which cover the development strategy of employing M&S to drive decision-making; European initiatives on best practice in M&S; case studies of pharmacokinetic/pharmacodynamics modeling in regulatory decisions; estimation of exposure-response relationships in the presence of confounding; and the utility of estimating the probability of a correct decision for dose selection when prior information is limited. While M&S has been widely used during the last few decades, it is expected to play an essential role as more quantitative assessments are employed in the decision-making process. By integrating M&S as a tool to compile the totality of evidence collected throughout the drug development program, more informed decisions will be made.

  3. Large Crater Clustering tool

    NASA Astrophysics Data System (ADS)

    Laura, Jason; Skinner, James A.; Hunter, Marc A.

    2017-08-01

    In this paper we present the Large Crater Clustering (LCC) tool set, an ArcGIS plugin that supports the quantitative approximation of a primary impact location from user-identified locations of possible secondary impact craters or the long-axes of clustered secondary craters. The identification of primary impact craters directly supports planetary geologic mapping and topical science studies where the chronostratigraphic age of some geologic units may be known, but more distant features have questionable geologic ages. Previous works (e.g., McEwen et al., 2005; Dundas and McEwen, 2007) have shown that the source of secondary impact craters can be estimated from the secondary craters themselves. This work adapts those methods into a statistically robust tool set. We describe the four individual tools within the LCC tool set, which support: (1) processing individually digitized point observations (craters); (2) estimating the directional distribution of a clustered set of craters and back-projecting the potential flight paths (crater clusters or linearly approximated catenae or lineaments); (3) intersecting projected paths; and (4) intersecting back-projected trajectories to approximate the location of potential source primary craters. We present two case studies using secondary impact features mapped in two regions of Mars. We demonstrate that the tool is able to quantitatively identify primary impacts and supports the improved qualitative interpretation of potential secondary crater flight trajectories.
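
    The trajectory-intersection step amounts to finding the point closest to a set of back-projected rays. A least-squares sketch in 2-D map coordinates, with invented origins and directions:

        import numpy as np

        def intersect_rays(origins, directions):
            """Least-squares point closest to all rays (origin + t * direction)."""
            A = np.zeros((2, 2))
            b = np.zeros(2)
            for o, d in zip(origins, directions):
                d = d / np.linalg.norm(d)
                p = np.eye(2) - np.outer(d, d)   # projector perpendicular to ray
                A += p
                b += p @ o
            return np.linalg.solve(A, b)

        # Two secondary-crater clusters, each defining a back-projected ray.
        origins = [np.array([0.0, 0.0]), np.array([40.0, -10.0])]
        directions = [np.array([1.0, 1.0]), np.array([-0.2, 1.0])]
        print("approximate primary location:", intersect_rays(origins, directions))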

  4. Construction of measurement uncertainty profiles for quantitative analysis of genetically modified organisms based on interlaboratory validation data.

    PubMed

    Macarthur, Roy; Feinberg, Max; Bertheau, Yves

    2010-01-01

    A method is presented for estimating the size of uncertainty associated with the measurement of products derived from genetically modified organisms (GMOs). The method is based on the uncertainty profile, which is an extension, for the estimation of uncertainty, of a recent graphical statistical tool called an accuracy profile that was developed for the validation of quantitative analytical methods. The application of uncertainty profiles as an aid to decision making and assessment of fitness for purpose is also presented. Results of the measurement of the quantity of GMOs in flour by PCR-based methods collected through a number of interlaboratory studies followed the log-normal distribution. Uncertainty profiles built using the results generally give an expected range for measurement results of 50-200% of reference concentrations for materials that contain at least 1% GMO. This range is consistent with European Network of GM Laboratories and the European Union (EU) Community Reference Laboratory validation criteria and can be used as a fitness for purpose criterion for measurement methods. The effect on the enforcement of EU labeling regulations is that, in general, an individual analytical result needs to be < 0.45% to demonstrate compliance, and > 1.8% to demonstrate noncompliance with a labeling threshold of 0.9%.
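
    The compliance arithmetic follows directly from the factor-of-two (50-200%) uncertainty range: a result can only demonstrate compliance or noncompliance with the 0.9% threshold once the whole range falls on one side of it.

        # Decision band implied by a 50-200% expected range around the true value.
        threshold = 0.9               # % GMO labeling threshold
        factor = 2.0                  # half-width of the range, as a ratio

        compliant_below = threshold / factor      # < 0.45 % shows compliance
        noncompliant_above = threshold * factor   # > 1.8 % shows noncompliance
        print(compliant_below, noncompliant_above)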

  5. Analysis of conditional genetic effects and variance components in developmental genetics.

    PubMed

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  6. Analysis of Conditional Genetic Effects and Variance Components in Developmental Genetics

    PubMed Central

    Zhu, J.

    1995-01-01

    A genetic model with additive-dominance effects and genotype X environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t - 1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects. PMID:8601500

  7. Lognormal Uncertainty Estimation for Failure Rates

    NASA Technical Reports Server (NTRS)

    Britton, Paul T.; Al Hassan, Mohammad; Ring, Robert W.

    2017-01-01

    "Uncertainty analysis itself is uncertain, therefore, you cannot evaluate it exactly," Source Uncertain. Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This presentation will explain common measures of centrality and dispersion; and-with examples-will provide guidelines for how they may be estimated to ensure effective technical contributions to decision-making.

  8. Some suggested future directions of quantitative resource assessments

    USGS Publications Warehouse

    Singer, D.A.

    2001-01-01

    Future quantitative assessments will be expected to estimate quantities, values, and locations of undiscovered mineral resources in a form that conveys both economic viability and the uncertainty associated with the resources. Historically, declining metal prices point to the need for larger deposits over time. Sensitivity analysis demonstrates that the greatest opportunity for reducing uncertainty in assessments lies in lowering the uncertainty associated with tonnage estimates. Of all errors possible in assessments, those affecting tonnage estimates are by far the most important. Selecting the correct deposit model is the most important way of controlling errors, because deposit models are the best known predictors of tonnage. In many large regions, much of the surface is covered with apparently barren rocks and sediments. Because many exposed mineral deposits are believed to have been found, a prime concern is the presence of possible mineralized rock under cover. Assessments of areas with resources under cover must rely on extrapolation from surrounding areas, new geologic maps of rocks under cover, or analogy with other well-explored areas that can be considered training tracts. Cover has a profound effect on uncertainty and on the methods and procedures of assessments, because the geology is seldom known and geophysical methods typically have attenuated responses. Many earlier assessment methods were based on relationships of geochemical and geophysical variables to deposits learned from deposits exposed at the surface; these will need to be relearned based on covered deposits. Mineral-deposit models are important in quantitative resource assessments for two reasons: (1) grades and tonnages of most deposit types are significantly different, and (2) deposit types are present in different geologic settings that can be identified from geologic maps. Mineral-deposit models are the keystone in combining the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Grade and tonnage models and the development of quantitative descriptive, economic, and deposit density models will help reduce the uncertainty of these new assessments.

  9. Quantitative Temperature Reconstructions from Holocene and Late Glacial Lake Sediments in the Tropical Andes using Chironomidae (non-biting midges)

    NASA Astrophysics Data System (ADS)

    Matthews-Bird, F.; Gosling, W. D.; Brooks, S. J.; Montoya, E.; Coe, A. L.

    2014-12-01

    Chironomidae (non-biting midges) is a family of two-winged aquatic insects of the order Diptera. They are globally distributed and one of the most diverse families within aquatic ecosystems. The insects are stenotopic, and the rapid turnover of species and their ability to quickly colonise favourable habitats mean that chironomids are extremely sensitive to environmental change, notably temperature. Through the development of quantitative temperature inference models, chironomids have become important palaeoecological tools. Proxies capable of generating independent estimates of past climate are crucial to disentangling climate signals and ecosystem response in the palaeoecological record. This project has developed the first modern environmental calibration data set for using chironomids from the Tropical Andes as quantitative climate proxies. Using surface sediments from c. 60 lakes from Bolivia, Peru and Ecuador, we have developed an inference model capable of reconstructing temperatures from fossil assemblages with a prediction error of 1-2°C. Here we present the first Lateglacial and Holocene chironomid-inferred temperature reconstructions from two sites in the tropical Andes. The first record, from a high elevation (4153 m asl) lake in the Bolivian Andes, shows persistently cool temperatures for the past 15 kyr, punctuated by warm episodes in the early Holocene (9-10 kyr BP). The chironomid-inferred Holocene temperature trends from a lake sediment record on the eastern Andean flank of Ecuador (1248 m asl), spanning the last 5 millennia, are synchronous with temperature changes in the NGRIP ice core record. The temperature estimates suggest that along the eastern flank of the Andes, at lower latitudes (~1°S), the climate closely resembled the well-established fluctuations of the Northern Hemisphere for this time period. Late-glacial climate fluctuations across South America are still disputed, with some palaeoecological records suggesting evidence for Younger Dryas-like events. Estimates from quantitative climate proxies such as chironomids will help constrain these patterns and further our understanding of climate teleconnections on Quaternary timescales.
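
    A chironomid-temperature inference model of the simplest kind is a weighted-averaging transfer function: taxon optima are estimated from a modern calibration set, and a fossil assemblage is converted to temperature as the abundance-weighted mean of those optima. The sketch below uses invented data and omits the deshrinking regression of a full implementation.

        import numpy as np

        # Modern calibration set: rows = lakes, columns = taxa (rel. abundance).
        abund = np.array([[0.6, 0.3, 0.1],
                          [0.2, 0.5, 0.3],
                          [0.1, 0.2, 0.7]])
        temp = np.array([4.0, 9.0, 14.0])       # observed lake temperatures, C

        # Taxon optima: abundance-weighted mean temperature of each taxon.
        optima = (abund * temp[:, None]).sum(axis=0) / abund.sum(axis=0)

        fossil = np.array([0.3, 0.4, 0.3])      # fossil assemblage proportions
        t_hat = (fossil * optima).sum() / fossil.sum()
        print(f"inferred temperature = {t_hat:.1f} C")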

  10. Modified HS-SPME for determination of quantitative relations between low-molecular oxygen compounds in various matrices.

    PubMed

    Dawidowicz, Andrzej L; Szewczyk, Joanna; Dybowski, Michal P

    2016-09-07

    Quantitative relations between individual constituents of a liquid sample similar to those established by its direct injection can be obtained by applying a polydimethylsiloxane (PDMS) fiber in a headspace solid-phase microextraction (HS-SPME) system in which the examined sample is suspended in methyl silica oil. This paper shows that the analogous system, composed of a sample suspension/emulsion in polyethylene glycol (PEG) and a Carbowax fiber, yields quantitative relations between components of the mixture similar to those established by direct analysis, but only for polar constituents. This is demonstrated for essential oil (EO) components of savory, sage, mint and thyme, and for an artificial liquid mixture of polar constituents. The observed differences in quantitative relations between polar constituents estimated by the two procedures are insignificant (Fexp < Fcrit). The presented results indicate that the wider applicability of a system composed of a sample suspended in an oil of the same physicochemical character as the SPME fiber coating strongly depends on the character of the interactions between the analytes and the suspending liquid and between the analytes and the fiber coating. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Dual respiratory and cardiac motion estimation in PET imaging: Methods design and quantitative evaluation.

    PubMed

    Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W

    2018-04-01

    The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory-gated-only and cardiac-gated-only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation, while the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVFs using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets at two more noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images, and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case, while Method 1 performed the worst for the noisy cases in terms of quantitative accuracy of the estimated MVF. Methods 4 and 3 showed comparable results and achieved RMSEs up to 35% lower than those of Method 1 for the noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates separate R&C estimation with modeling of RM before CM estimation (Method 3) to be the best option for accurate estimation of dual R&C motion in clinical situations. © 2018 American Association of Physicists in Medicine.
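
    The accuracy metric itself is straightforward: the root mean square length of the vector difference between estimated and ground-truth motion fields. A sketch on synthetic arrays shaped (nz, ny, nx, 3):

        import numpy as np

        rng = np.random.default_rng(3)
        truth = rng.normal(0.0, 2.0, size=(8, 16, 16, 3))   # mm displacements
        estimate = truth + rng.normal(0.0, 0.5, size=truth.shape)

        err = estimate - truth
        rmse = np.sqrt(np.mean(np.sum(err**2, axis=-1)))    # RMS vector error
        print(f"RMSE = {rmse:.2f} mm")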

  12. Providing Quantitative Information and a Nudge to Undergo Stool Testing in a Colorectal Cancer Screening Decision Aid: A Randomized Clinical Trial.

    PubMed

    Schwartz, Peter H; Perkins, Susan M; Schmidt, Karen K; Muriello, Paul F; Althouse, Sandra; Rawl, Susan M

    2017-08-01

    Guidelines recommend that patient decision aids should provide quantitative information about probabilities of potential outcomes, but the impact of this information is unknown. Behavioral economics suggests that patients confused by quantitative information could benefit from a "nudge" towards one option. We conducted a pilot randomized trial to estimate the effect sizes of presenting quantitative information and a nudge. Primary care patients (n = 213) eligible for colorectal cancer screening viewed basic screening information and were randomized to view (a) quantitative information (quantitative module), (b) a nudge towards stool testing with the fecal immunochemical test (FIT) (nudge module), (c) neither a nor b, or (d) both a and b. Outcome measures were perceived colorectal cancer risk, screening intent, preferred test, and decision conflict, measured before and after viewing the decision aid, and screening behavior at 6 months. Patients viewing the quantitative module were more likely to be screened than those who did not ( P = 0.012). Patients viewing the nudge module had a greater increase in perceived colorectal cancer risk than those who did not ( P = 0.041). Those viewing the quantitative module had a smaller increase in perceived risk than those who did not ( P = 0.046), and the effect was moderated by numeracy. Among patients with high numeracy who did not view the nudge module, those who viewed the quantitative module had a greater increase in intent to undergo FIT ( P = 0.028) than did those who did not. The limitations of this study were the limited sample size and single healthcare system. Adding quantitative information to a decision aid increased uptake of colorectal cancer screening, while adding a nudge to undergo FIT did not increase uptake. Further research on quantitative information in decision aids is warranted.

  13. Harnessing quantitative genetics and genomics for understanding and improving complex traits in crops

    USDA-ARS?s Scientific Manuscript database

    Classical quantitative genetics aids crop improvement by providing the means to estimate heritability, genetic correlations, and predicted responses to various selection schemes. Genomics has the potential to aid quantitative genetics and applied crop improvement programs via large-scale, high-thro...

  14. Improved neural network based scene-adaptive nonuniformity correction method for infrared focal plane arrays.

    PubMed

    Lai, Rui; Yang, Yin-tang; Zhou, Duan; Li, Yue-jin

    2008-08-20

    An improved scene-adaptive nonuniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPAs) is proposed. This method simultaneously estimates the infrared detectors' parameters and eliminates the nonuniformity that causes fixed pattern noise (FPN) by using a neural network (NN) approach. In the learning process of neuron parameter estimation, the traditional LMS algorithm is replaced with a newly presented variable step size (VSS) normalized least-mean-square (NLMS) based adaptive filtering algorithm, which yields faster convergence, smaller misadjustment, and lower computational cost. In addition, a new NN structure is designed to estimate the desired target value, which considerably improves the calibration precision. The proposed NUC method achieves high correction performance, which is validated quantitatively by experiments with a simulated test sequence and a real infrared image sequence.
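
    A single-pixel sketch of the NLMS update at the core of such schemes: a gain/offset correction x = g*y + o is driven toward a desired value d (in NN-based NUC, a spatial average of neighbouring pixels; here the true scene stands in for it). The data, the fixed step size, and the mean-centring used to keep the update well conditioned are our assumptions, not the paper's design.

        import numpy as np

        rng = np.random.default_rng(4)
        scene = rng.uniform(20.0, 80.0, 20_000)   # true irradiance per frame
        reading = 1.2 * scene + 8.0               # detector with gain/offset error

        y_mean = reading.mean()                   # centre input (conditioning aid)
        g, o = 1.0, 0.0                           # correction parameters
        mu, eps = 0.5, 1e-6                       # NLMS step size, regularizer

        for y, d in zip(reading, scene):
            u = y - y_mean
            e = d - (g * u + o)                   # error against desired value
            norm = u * u + 1.0 + eps              # input energy (u and bias term)
            g += mu * e * u / norm
            o += mu * e / norm

        print(f"g = {g:.3f} (expect {1 / 1.2:.3f}), "
              f"o = {o:.1f} (expect {(y_mean - 8.0) / 1.2:.1f})")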

  15. Estimating Photosynthetically Available Radiation (PAR) at the Earth's surface from satellite observations

    NASA Technical Reports Server (NTRS)

    Frouin, Robert

    1993-01-01

    Current satellite algorithms to estimate photosynthetically available radiation (PAR) at the earth's surface are reviewed. PAR is deduced either from an insolation estimate or obtained directly from top-of-atmosphere solar radiances. The characteristics of both approaches are contrasted and typical results are presented. The inaccuracies reported, about 10 percent and 6 percent on daily and monthly time scales, respectively, are useful to model oceanic and terrestrial primary productivity. At those time scales, variability due to clouds in the ratio of PAR and insolation is reduced, making it possible to deduce PAR directly from insolation climatologies (satellite or other) that are currently available or being produced. Improvements, however, are needed in conditions of broken cloudiness and over ice/snow. If not addressed properly, calibration/validation issues may prevent quantitative use of the PAR estimates in studies of climatic change. The prospects are good for an accurate, long-term climatology of PAR over the globe.

  16. 3D-quantitative structure-activity relationship studies on benzothiadiazepine hydroxamates as inhibitors of tumor necrosis factor-alpha converting enzyme.

    PubMed

    Murumkar, Prashant R; Giridhar, Rajani; Yadav, Mange Ram

    2008-04-01

    A set of 29 benzothiadiazepine hydroxamates having selective tumor necrosis factor-alpha converting enzyme inhibitory activity were used to compare the quality and predictive power of 3D-quantitative structure-activity relationship, comparative molecular field analysis, and comparative molecular similarity indices models for the atom-based, centroid/atom-based, data-based, and docked conformer-based alignment. Removal of two outliers from the initial training set of molecules improved the predictivity of models. Among the 3D-quantitative structure-activity relationship models developed using the above four alignments, the database alignment provided the optimal predictive comparative molecular field analysis model for the training set with cross-validated r(2) (q(2)) = 0.510, non-cross-validated r(2) = 0.972, standard error of estimates (s) = 0.098, and F = 215.44 and the optimal comparative molecular similarity indices model with cross-validated r(2) (q(2)) = 0.556, non-cross-validated r(2) = 0.946, standard error of estimates (s) = 0.163, and F = 99.785. These models also showed the best test set prediction for six compounds with predictive r(2) values of 0.460 and 0.535, respectively. The contour maps obtained from 3D-quantitative structure-activity relationship studies were appraised for activity trends for the molecules analyzed. The comparative molecular similarity indices models exhibited good external predictivity as compared with that of comparative molecular field analysis models. The data generated from the present study helped us to further design and report some novel and potent tumor necrosis factor-alpha converting enzyme inhibitors.

  17. A quantitative assessment of risks of heavy metal residues in laundered shop towels and their use by workers.

    PubMed

    Connor, Kevin; Magee, Brian

    2014-10-01

    This paper presents a risk assessment of exposure to metal residues in laundered shop towels by workers. The concentrations of 27 metals measured in a synthetic sweat leachate were used to estimate the releasable quantity of metals which could be transferred to workers' skin. Worker exposure was evaluated quantitatively with an exposure model that focused on towel-to-hand transfer and subsequent hand-to-food or -mouth transfers. The exposure model was based on conservative, but reasonable assumptions regarding towel use and default exposure factor values from the published literature or regulatory guidance. Transfer coefficients were derived from studies representative of the exposures to towel users. Contact frequencies were based on assumed high-end use of shop towels, but constrained by a theoretical maximum dermal loading. The risk estimates for workers developed for all metals were below applicable regulatory risk benchmarks. The risk assessment for lead utilized the Adult Lead Model and concluded that predicted lead intakes do not constitute a significant health hazard based on potential worker exposures. Uncertainties are discussed in relation to the overall confidence in the exposure estimates developed for each exposure pathway and the likelihood that the exposure model is under- or overestimating worker exposures and risk. Copyright © 2014 Elsevier Inc. All rights reserved.
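
    The structure of such an exposure model is multiplicative: a releasable quantity times a chain of transfer coefficients and a contact frequency, normalized by body weight. Every number in the sketch below is a placeholder, not a value from the assessment.

        # Hand-to-mouth ingestion sketch:
        # dose (ug/kg-day) = releasable * transfer fractions * frequency / weight

        releasable = 0.5        # ug metal transferable to skin per towel use
        towel_to_hand = 0.3     # fraction transferred to hands
        hand_to_mouth = 0.1     # fraction subsequently ingested
        uses_per_day = 20
        body_weight = 70.0      # kg

        dose = (releasable * towel_to_hand * hand_to_mouth
                * uses_per_day / body_weight)
        print(f"dose = {dose:.4f} ug/kg-day")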

  18. Common Cause Failure Modeling in Space Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Ring, Rob; Novack, Steven D.; Britton, Paul

    2015-01-01

    Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are dependent failures that can be caused, for example, by system environments, manufacturing, transportation, storage, maintenance, and assembly. Since many factors contribute to CCFs, they can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent failures and dependent (common cause) failures. Because common cause failure data are limited in the aerospace industry, the Probabilistic Risk Assessment (PRA) Team at Bastion Technology Inc. is estimating CCF risk using generic data collected by the Nuclear Regulatory Commission (NRC). Consequently, common cause risk estimates based on this database, when applied to other industry applications, are highly uncertain. Therefore, it is important to account for a range of values for independent and CCF risk and to communicate the uncertainty to decision makers. There is an existing methodology for reducing CCF risk during design, which includes a checklist of 40+ factors grouped into eight categories. Using this checklist, an approach is being investigated that produces a beta factor estimate by quantitatively relating these factors. In this example, the checklist is tailored to space launch vehicles, a quantitative approach is described, and an example of the method is presented.
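
    One way to see the beta-factor idea: the CCF rate is a fraction beta of the component failure rate, and a scored defense checklist interpolates beta between assumed bounds. The scoring scheme, bounds and rates below are invented for illustration and are not the methodology under investigation.

        lambda_total = 1e-4            # component failure rate per hour, hypothetical
        beta_lo, beta_hi = 0.01, 0.10  # assumed bounds on the common cause fraction

        # Checklist scores in [0, 1] for eight defense categories (1 = best).
        scores = [0.8, 0.6, 0.9, 0.7, 0.5, 0.8, 0.6, 0.7]
        defense = sum(scores) / len(scores)

        beta = beta_hi - (beta_hi - beta_lo) * defense   # better defense, lower beta
        lambda_ccf = beta * lambda_total
        print(f"beta = {beta:.3f}, CCF rate = {lambda_ccf:.2e} per hour")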

  19. Contrast detection in fluid-saturated media with magnetic resonance poroelastography

    PubMed Central

    Perriñez, Phillip R.; Pattison, Adam J.; Kennedy, Francis E.; Weaver, John B.; Paulsen, Keith D.

    2010-01-01

    Purpose: Recent interest in the poroelastic behavior of tissues has led to the development of magnetic resonance poroelastography (MRPE) as an alternative to single-phase MR elastographic image reconstruction. In addition to the elastic parameters (i.e., Lamé’s constants) commonly associated with magnetic resonance elastography (MRE), MRPE enables estimation of the time-harmonic pore-pressure field induced by external mechanical vibration. Methods: This study presents numerical simulations that demonstrate the sensitivity of the computed displacement and pore-pressure fields to a priori estimates of the experimentally derived model parameters. In addition, experimental data collected in three poroelastic phantoms are used to assess the quantitative accuracy of MR poroelastographic imaging through comparisons with both quasistatic and dynamic mechanical tests. Results: The results indicate hydraulic conductivity to be the dominant parameter influencing the deformation behavior of poroelastic media under conditions applied during MRE. MRPE estimation of the matrix shear modulus was bracketed by the values determined from independent quasistatic and dynamic mechanical measurements as expected, whereas the contrast ratios for embedded inclusions were quantitatively similar (10%–15% difference between the reconstructed images and the mechanical tests). Conclusions: The findings suggest that the addition of hydraulic conductivity and a viscoelastic solid component as parameters in the reconstruction may be warranted. PMID:20831058

  20. Quantitative characterisation of sedimentary grains

    NASA Astrophysics Data System (ADS)

    Tunwal, Mohit; Mulchrone, Kieran F.; Meere, Patrick A.

    2016-04-01

    Analysis of sedimentary texture helps in determining the formation, transportation and deposition processes of sedimentary rocks. Grain size analysis is traditionally quantitative, whereas grain shape analysis is largely qualitative. A semi-automated approach to quantitatively analyse the shape and size of sand-sized sedimentary grains is presented. Grain boundaries are manually traced from thin section microphotographs in the case of lithified samples and are automatically identified in the case of loose sediments. Shape and size parameters can then be estimated using a software package written on the Mathematica platform. While automated methodology already exists for loose sediment analysis, the available techniques for lithified samples are limited to high definition thin section microphotographs showing clear contrast between framework grains and matrix. Along with grain size, shape parameters such as roundness, angularity, circularity, irregularity and fractal dimension are measured. A new grain shape parameter based on Fourier descriptors has also been developed. To test this new approach, theoretical examples were analysed and produced high quality results, supporting the accuracy of the algorithm. Furthermore, sandstone samples from known aeolian and fluvial environments from the Dingle Basin, County Kerry, Ireland were collected and analysed. Modern loose sediments from glacial till from County Cork, Ireland and aeolian sediments from Rajasthan, India have also been collected and analysed. A graphical summary of the data is presented, which allows for quantitative distinction between samples extracted from different sedimentary environments.
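
    Of the shape parameters listed, circularity is the easiest to make concrete: 4*pi*A/P**2, equal to 1 for a circle and smaller for irregular outlines. A sketch on a synthetic wavy grain boundary:

        import numpy as np

        theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
        r = 1.0 + 0.15 * np.sin(5 * theta)        # wavy "grain" outline
        x, y = r * np.cos(theta), r * np.sin(theta)

        # Shoelace area and perimeter of the closed polygon.
        area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
        dx = np.diff(np.append(x, x[0]))
        dy = np.diff(np.append(y, y[0]))
        perimeter = np.hypot(dx, dy).sum()

        circularity = 4.0 * np.pi * area / perimeter**2
        print(f"circularity = {circularity:.3f}")   # 1.0 for a perfect circle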

  1. 76 FR 13018 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ... statistical surveys that yield quantitative results that can be generalized to the population of study. This... information will not be used for quantitative information collections that are designed to yield reliably... generic mechanisms that are designed to yield quantitative results. Total Burden Estimate for the...

  2. Quantitative analysis of low-density SNP data for parentage assignment and estimation of family contributions to pooled samples.

    PubMed

    Henshall, John M; Dierens, Leanne; Sellars, Melony J

    2014-09-02

    While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are sufficiently accurate to provide useful information for a breeding program. Treating genotypes as quantitative values is an alternative to perturbing genotypes using an assumed error distribution, but can produce very different results. An understanding of the distribution of the error is required for SNP genotyping platforms.

  3. Asbestos exposure--quantitative assessment of risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, J.M.; Weill, H.

    Methods for deriving quantitative estimates of asbestos-associated health risks are reviewed and their numerous assumptions and uncertainties described. These methods involve extrapolation of risks observed at past, relatively high asbestos concentration levels down to usually much lower concentration levels of interest today--in some cases, orders of magnitude lower. These models are used to calculate estimates of the potential risk to workers manufacturing asbestos products and to students enrolled in schools containing asbestos products. The potential risk to workers exposed for 40 yr to 0.5 fibers per milliliter (f/ml) of mixed asbestos fiber type (a permissible workplace exposure limit under consideration by the Occupational Safety and Health Administration (OSHA)) is estimated as 82 lifetime excess cancers per 10,000 exposed. The risk to students exposed to an average asbestos concentration of 0.001 f/ml of mixed asbestos fiber types for an average enrollment period of 6 school years is estimated as 5 lifetime excess cancers per one million exposed. If the school exposure is to chrysotile asbestos only, then the estimated risk is 1.5 lifetime excess cancers per million. Risks from other causes are presented for comparison; e.g., annual rates (per million) of 10 deaths from high school football, 14 from bicycling (10-14 yr of age), 5 to 20 for whooping cough vaccination. Decisions concerning asbestos products require participation of all parties involved and should only be made after a scientifically defensible estimate of the associated risk has been obtained. In many cases to date, such decisions have been made without adequate consideration of the level of risk or the cost-effectiveness of attempts to lower the potential risk. 73 references.

  4. Age-Related Differences in Susceptibility to Carcinogenesis. II. Approaches for Application and Uncertainty Analyses for Individual Genetically Acting Carcinogens

    PubMed Central

    Hattis, Dale; Goble, Robert; Chu, Margaret

    2005-01-01

    In an earlier report we developed a quantitative likelihood-based analysis of the differences in sensitivity of rodents to mutagenic carcinogens across three life stages (fetal, birth to weaning, and weaning to 60 days) relative to exposures in adult life. Here we draw implications for assessing human risks for full lifetime exposures, taking into account three types of uncertainties in making projections from the rodent data: uncertainty in the central estimates of the life-stage–specific sensitivity factors estimated earlier, uncertainty from chemical-to-chemical differences in life-stage–specific sensitivities for carcinogenesis, and uncertainty in the mapping of rodent life stages to human ages/exposure periods. Among the uncertainties analyzed, the mapping of rodent life stages to human ages/exposure periods is most important quantitatively (a range of several-fold in estimates of the duration of the human equivalent of the highest sensitivity “birth to weaning” period in rodents). The combined effects of these uncertainties are estimated with Monte Carlo analyses. Overall, the estimated population arithmetic mean risk from lifetime exposures at a constant milligrams per kilogram body weight level to a generic mutagenic carcinogen is about 2.8-fold larger than expected from adult-only exposure, with 5–95% confidence limits of 1.5- to 6-fold. The mean estimates for the 0- to 2-year and 2- to 15-year periods are about 35–55% larger than the 10- and 3-fold sensitivity factor adjustments recently proposed by the U.S. Environmental Protection Agency. The present results are based on data for only nine chemicals, including five mutagens. Risk inferences will be altered as data become available for other chemicals. PMID:15811844
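
    A minimal Monte Carlo sketch of the kind of uncertainty combination described above. The distributions, sensitivity factors, and life-stage durations below are placeholders, not the fitted values from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000

      # Hypothetical lognormal uncertainty in each life-stage sensitivity
      # factor (relative to adult = 1); placeholders, not the paper's fits.
      birth_to_wean = rng.lognormal(mean=np.log(10), sigma=0.5, size=n)
      wean_to_adult = rng.lognormal(mean=np.log(3), sigma=0.4, size=n)

      # Hypothetical human-equivalent durations (years of a 70-yr life);
      # the rodent-to-human mapping was the dominant uncertainty.
      dur_early = rng.uniform(1, 4, size=n)    # "birth to weaning" analogue
      dur_child = rng.uniform(8, 15, size=n)   # "weaning to adult" analogue

      lifetime = 70.0
      dur_adult = lifetime - dur_early - dur_child

      # Lifetime risk relative to an adult-sensitivity-only lifetime.
      ratio = (birth_to_wean * dur_early
               + wean_to_adult * dur_child
               + 1.0 * dur_adult) / lifetime

      print(f"mean ratio {ratio.mean():.2f}, "
            f"5-95% CI {np.percentile(ratio, [5, 95])}")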

  5. Using Extended Genealogy to Estimate Components of Heritability for 23 Quantitative and Dichotomous Traits

    PubMed Central

    Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L.

    2013-01-01

    Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays. PMID:23737753
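
    The core idea, that phenotypic covariance between relatives scales with their relatedness, can be demonstrated with a simple Haseman-Elston-style regression. The sketch below is a generic illustration on simulated pairs, not the joint estimator developed in the study.

      import numpy as np

      def he_regression_h2(phenotypes, relatedness_pairs):
          """Haseman-Elston-style h2 estimate: regress the product of
          standardized phenotypes for each pair on their relatedness.

          relatedness_pairs: iterable of (i, j, r_ij).
          """
          z = (phenotypes - phenotypes.mean()) / phenotypes.std()
          r = np.array([r_ij for _, _, r_ij in relatedness_pairs])
          prod = np.array([z[i] * z[j] for i, j, _ in relatedness_pairs])
          slope, _ = np.polyfit(r, prod, 1)  # slope estimates h2
          return slope

      # Tiny demonstration with simulated pairs (true h2 = 0.5).
      rng = np.random.default_rng(1)
      h2, n_pairs = 0.5, 5_000
      r = rng.choice([0.5, 0.25, 0.125, 0.0625], size=n_pairs)  # sibs..cousins
      shared = rng.normal(size=n_pairs)
      z1 = np.sqrt(h2 * r) * shared + np.sqrt(1 - h2 * r) * rng.normal(size=n_pairs)
      z2 = np.sqrt(h2 * r) * shared + np.sqrt(1 - h2 * r) * rng.normal(size=n_pairs)
      pheno = np.concatenate([z1, z2])
      pairs = [(i, i + n_pairs, r[i]) for i in range(n_pairs)]
      print(he_regression_h2(pheno, pairs))  # approximately 0.5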

  6. Using extended genealogy to estimate components of heritability for 23 quantitative and dichotomous traits.

    PubMed

    Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L

    2013-05-01

    Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays.

  7. Estimates of Radiation Effects on Cancer Risks in the Mayak Worker, Techa River and Atomic Bomb Survivor Studies.

    PubMed

    Preston, Dale L; Sokolnikov, Mikhail E; Krestinina, Lyudmila Yu; Stram, Daniel O

    2017-04-01

    For almost 50 y, the Life Span Study cohort of atomic bomb survivor studies has been the primary source of the quantitative estimates of cancer and non-cancer risks that form the basis of international radiation protection standards. However, the long-term follow-up and extensive individual dose reconstruction for the Russian Mayak worker cohort (MWC) and Techa River cohort (TRC) are providing quantitative information about radiation effects on cancer risks that complement the atomic bomb survivor-based risk estimates. The MWC, which includes ~26 000 men and women who began working at Mayak between 1948 and 1982, is the primary source for estimates of the effects of plutonium on cancer risks and also provides information on the effects of low-dose rate external gamma exposures. The TRC consists of ~30 000 men and women of all ages who received low-dose-rate, low-dose exposures as a consequence of Mayak's release of radioactive material into the Techa River. The TRC data are of interest because the exposures are broadly similar to those experienced by populations exposed as a consequence of nuclear accidents such as Chernobyl. This presentation describes the strengths and limitations of these three cohorts, outlines and compares recent solid cancer and leukemia risk estimates, and discusses why information from the Mayak and Techa River studies might play a role in the development and refinement of the radiation risk estimates that form the basis for radiation protection standards. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Average effect estimates remain similar as evidence evolves from single trials to high-quality bodies of evidence: a meta-epidemiologic study.

    PubMed

    Gartlehner, Gerald; Dobrescu, Andreea; Evans, Tammeka Swinson; Thaler, Kylie; Nussbaumer, Barbara; Sommer, Isolde; Lohr, Kathleen N

    2016-01-01

    The objective of our study was to use a diverse sample of medical interventions to assess empirically whether first trials rendered substantially different treatment effect estimates than reliable, high-quality bodies of evidence. We used a meta-epidemiologic study design using 100 randomly selected bodies of evidence from Cochrane reports that had been graded as high quality of evidence. To determine the concordance of effect estimates between first and subsequent trials, we applied both quantitative and qualitative approaches. For quantitative assessment, we used Lin's concordance correlation and calculated z-scores; to determine the magnitude of differences of treatment effects, we calculated standardized mean differences (SMDs) and ratios of relative risks. We determined qualitative concordance based on a two-tiered approach incorporating changes in statistical significance and magnitude of effect. First trials both overestimated and underestimated the true treatment effects in no discernible pattern. Nevertheless, depending on the definition of concordance, effect estimates of first trials were concordant with pooled subsequent studies in at least 33% but up to 50% of comparisons. The pooled magnitude of change as bodies of evidence advanced from single trials to high-quality bodies of evidence was 0.16 SMD [95% confidence interval (CI): 0.12, 0.21]. In 80% of comparisons, the difference in effect estimates was smaller than 0.5 SMDs. In first trials with large treatment effects (>0.5 SMD), however, estimates of effect substantially changed as new evidence accrued (mean change 0.68 SMD; 95% CI: 0.50, 0.86). Results of first trials often change, but the magnitude of change, on average, is small. Exceptions are first trials that present large treatment effects, which often dissipate as new evidence accrues. Copyright © 2016 Elsevier Inc. All rights reserved.
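
    For readers wanting to reproduce the quantitative concordance measure, a minimal implementation of Lin's concordance correlation coefficient follows; the effect estimates in the example are invented.

      import numpy as np

      def lins_ccc(x, y):
          """Lin's concordance correlation coefficient between two sets
          of effect estimates (e.g., first-trial vs pooled SMDs)."""
          x, y = np.asarray(x, float), np.asarray(y, float)
          sxy = np.cov(x, y, bias=True)[0, 1]
          return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

      first  = [0.30, 0.55, -0.10, 0.80, 0.20]   # hypothetical first-trial SMDs
      pooled = [0.25, 0.40, -0.05, 0.45, 0.22]   # hypothetical pooled SMDs
      print(f"CCC = {lins_ccc(first, pooled):.2f}")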

  9. Measurement of lung expansion with computed tomography and comparison with quantitative histology.

    PubMed

    Coxson, H O; Mayo, J R; Behzad, H; Moore, B J; Verburgt, L M; Staples, C A; Paré, P D; Hogg, J C

    1995-11-01

    The total and regional lung volumes were estimated from computed tomography (CT), and the pleural pressure gradient was determined by using the milliliters of gas per gram of tissue estimated from the X-ray attenuation values and the pressure-volume curve of the lung. The data show that CT accurately estimated the volume of the resected lobe but overestimated its weight by 24 +/- 19%. The volume of gas per gram of tissue was less in the gravity-dependent regions due to a pleural pressure gradient of 0.24 +/- 0.08 cmH2O/cm of descent in the thorax. The proportion of tissue to air obtained with CT was similar to that obtained by quantitative histology. We conclude that the CT scan can be used to estimate total and regional lung volumes and that measurements of the proportions of tissue and air within the thorax by CT can be used in conjunction with quantitative histology to evaluate lung structure.
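
    The air/tissue partitioning step can be sketched from standard CT attenuation assumptions (air at -1000 HU, tissue near 0 HU, an assumed tissue density); the calibration used in the paper may differ.

      import numpy as np

      def ml_gas_per_g_tissue(hu, tissue_density=1.065):
          """Approximate ml of gas per gram of tissue in a CT voxel.

          Assumes a two-component voxel (air at -1000 HU, tissue at
          ~0 HU), so air fraction = -HU/1000 and tissue fraction =
          1 + HU/1000. tissue_density (g/ml) is an assumed value.
          """
          hu = np.asarray(hu, float)
          air_frac = np.clip(-hu / 1000.0, 0.0, 1.0)
          tissue_frac = 1.0 - air_frac
          return air_frac / (tissue_frac * tissue_density)

      # A gravity-dependent gradient: less gas per gram in dependent lung.
      print(ml_gas_per_g_tissue([-900, -850, -700]))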

  10. Long-term changes in lower tropospheric baseline ozone concentrations: Comparing chemistry-climate models and observations at northern midlatitudes

    NASA Astrophysics Data System (ADS)

    Parrish, D. D.; Lamarque, J.-F.; Naik, V.; Horowitz, L.; Shindell, D. T.; Staehelin, J.; Derwent, R.; Cooper, O. R.; Tanimoto, H.; Volz-Thomas, A.; Gilge, S.; Scheel, H.-E.; Steinbacher, M.; Fröhlich, M.

    2014-05-01

    Two recent papers have quantified long-term ozone (O3) changes observed at northern midlatitude sites that are believed to represent baseline (here understood as representative of continental to hemispheric scales) conditions. Three chemistry-climate models (NCAR CAM-chem, GFDL-CM3, and GISS-E2-R) have calculated retrospective tropospheric O3 concentrations as part of the Atmospheric Chemistry and Climate Model Intercomparison Project and Coupled Model Intercomparison Project Phase 5 model intercomparisons. We present an approach for quantitative comparisons of model results with measurements for seasonally averaged O3 concentrations. There is considerable qualitative agreement between the measurements and the models, but there are also substantial and consistent quantitative disagreements. Most notably, models (1) overestimate absolute O3 mixing ratios, on average by 5 to 17 ppbv in the year 2000, (2) capture only 50% of O3 changes observed over the past five to six decades, and little of observed seasonal differences, and (3) capture 25 to 45% of the rate of change of the long-term changes. These disagreements are significant enough to indicate that only limited confidence can be placed on estimates of present-day radiative forcing of tropospheric O3 derived from modeled historic concentration changes and on predicted future O3 concentrations. Evidently our understanding of tropospheric O3, or the incorporation of chemistry and transport processes into current chemical climate models, is incomplete. Modeled O3 trends approximately parallel estimated trends in anthropogenic emissions of NOx, an important O3 precursor, while measured O3 changes increase more rapidly than these emission estimates.

  11. Time-of-flight PET time calibration using data consistency

    NASA Astrophysics Data System (ADS)

    Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan

    2018-05-01

    This paper presents new data driven methods for the time of flight (TOF) calibration of positron emission tomography (PET) scanners. These methods are derived from the consistency condition for TOF PET, they can be applied to data measured with an arbitrary tracer distribution, and they are numerically efficient because they do not require a preliminary image reconstruction from the non-TOF data. Two-dimensional simulations are presented for one of the methods, which only involves the first two moments of the data with respect to the TOF variable. The numerical results show that this method estimates the detector timing offsets with errors that are larger than those obtained via an initial non-TOF reconstruction, but remain smaller than the TOF resolution and thereby have a limited impact on the quantitative accuracy of the activity image estimated with standard maximum likelihood reconstruction algorithms.
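
    The idea that per-detector timing offsets can be recovered directly from timing residuals can be illustrated with a generic least-squares toy; note this is a simplified analogue, not the moment-based consistency method of the paper.

      import numpy as np

      def solve_offsets(pairs, residuals, n_det):
          """Recover per-detector timing offsets o from pairwise TOF
          residuals r_ij ~ o_i - o_j by least squares, with the
          constraint sum(o) = 0 to anchor the overall time origin."""
          A = np.zeros((len(pairs) + 1, n_det))
          b = np.zeros(len(pairs) + 1)
          for row, ((i, j), r) in enumerate(zip(pairs, residuals)):
              A[row, i], A[row, j] = 1.0, -1.0
              b[row] = r
          A[-1, :] = 1.0  # constraint row: offsets sum to zero
          offsets, *_ = np.linalg.lstsq(A, b, rcond=None)
          return offsets

      # Toy check: 4 detectors with known offsets, noisy pairwise residuals.
      rng = np.random.default_rng(2)
      true = np.array([0.1, -0.05, 0.02, -0.07])
      pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
      res = [true[i] - true[j] + rng.normal(0, 0.01) for i, j in pairs]
      print(solve_offsets(pairs, res, 4))  # close to `true`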

  12. Estimation of 3D reconstruction errors in a stereo-vision system

    NASA Astrophysics Data System (ADS)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure of manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., CAD object model) in order to evaluate quantitatively the object. To ensure efficient quality control, the aim is to be able to state if reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we particularly analyze the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the shown experimental results.


  13. Quantitative estimation of pesticide-likeness for agrochemical discovery.

    PubMed

    Avram, Sorin; Funar-Timofei, Simona; Borota, Ana; Chennamaneni, Sridhar Rao; Manchala, Anil Kumar; Muresan, Sorel

    2014-12-01

    The design of chemical libraries, an early step in agrochemical discovery programs, is frequently addressed by means of qualitative physicochemical and/or topological rule-based methods. The aim of this study is to develop quantitative estimates of herbicide- (QEH), insecticide- (QEI), fungicide- (QEF), and, finally, pesticide-likeness (QEP). In the assessment of these definitions, we relied on the concept of desirability functions. We found a simple function, shared by the three classes of pesticides, parameterized for six easy-to-compute, independent, and interpretable molecular properties: molecular weight, logP, number of hydrogen bond acceptors, number of hydrogen bond donors, number of rotatable bonds, and number of aromatic rings. Subsequently, we describe the scoring of each pesticide class by the corresponding quantitative estimate. In a comparative study, we assessed the performance of the scoring functions using extensive datasets of patented pesticides. The hereby-established quantitative assessment has the ability to rank compounds whether they fail well-established pesticide-likeness rules or not, and offers an efficient way to prioritize (class-specific) pesticides. These findings are valuable for the efficient estimation of pesticide-likeness of vast chemical libraries in the field of agrochemical discovery.
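
    A hedged sketch of the desirability-function construction: each property gets a desirability in [0, 1] and the quantitative estimate is their geometric mean. The Gaussian form and all parameter values below are placeholders, not the functions fitted in the paper.

      import math

      # Hypothetical Gaussian desirability parameters (optimum, width)
      # for the six properties named above; the paper fits its own
      # functions per pesticide class, so these numbers are placeholders.
      PARAMS = {
          "mw":   (330.0, 120.0),   # molecular weight
          "logp": (3.0, 1.5),
          "hba":  (3.0, 2.0),       # H-bond acceptors
          "hbd":  (1.0, 1.0),       # H-bond donors
          "rotb": (4.0, 3.0),       # rotatable bonds
          "arom": (2.0, 1.0),       # aromatic rings
      }

      def desirability(value, optimum, width):
          return math.exp(-0.5 * ((value - optimum) / width) ** 2)

      def qep(props):
          """Quantitative estimate of pesticide-likeness: geometric mean
          of the six per-property desirabilities."""
          ds = [desirability(props[k], *PARAMS[k]) for k in PARAMS]
          return math.prod(ds) ** (1.0 / len(ds))

      print(qep({"mw": 310, "logp": 2.7, "hba": 4, "hbd": 1,
                 "rotb": 5, "arom": 2}))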

  14. Estimation of the parameters of disturbances on long-range radio-communication paths

    NASA Astrophysics Data System (ADS)

    Gerasimov, Iu. S.; Gordeev, V. A.; Kristal, V. S.

    1982-09-01

    Radio propagation on long-range paths is disturbed by such phenomena as ionospheric density fluctuations, meteor trails, and the Faraday effect. In the present paper, the determination of the characteristics of such disturbances on the basis of received-signal parameters is considered as an inverse and ill-posed problem. A method for investigating the indeterminacy which arises in such determinations is proposed, and a quantitative analysis of this indeterminacy is made.

  15. Effect of Stress State on Fracture Features

    NASA Astrophysics Data System (ADS)

    Das, Arpan

    2018-02-01

    The present article comprehensively explores the influence of specimen thickness on two-dimensional quantitative estimates of different ductile fractographic features, correlating them with the tensile properties of a reactor pressure vessel steel tested at ambient temperature, where the initial crystallographic texture, inclusion content, and inclusion distribution are kept unaltered. The changes in tensile fracture morphology of these steels are shown to be directly attributable to the stress-state history that develops under tension for the given specimen dimensions.

  16. Model of Market Share Affected by Social Media Reputation

    NASA Astrophysics Data System (ADS)

    Ishii, Akira; Kawahata, Yasuko; Goto, Ujo

    This paper proposes a market theory that takes the effect of social media into account. The standard market share model from economics is employed, and the effect of social media is treated quantitatively using a mathematical model for hit phenomena. With this model, the effect of social media on market share can be estimated through a simple market model simulation using the proposed method.

  17. Quantitative methods for estimating the anisotropy of the strength properties and the phase composition of Mg-Al alloys

    NASA Astrophysics Data System (ADS)

    Betsofen, S. Ya.; Kolobov, Yu. R.; Volkova, E. F.; Bozhko, S. A.; Voskresenskaya, I. I.

    2015-04-01

    Quantitative methods have been developed to estimate the anisotropy of the strength properties and to determine the phase composition of Mg-Al alloys. The efficiency of the methods is confirmed for MA5 alloy subjected to severe plastic deformation. It is shown that the Taylor factors calculated for basal slip averaged over all orientations of a polycrystalline aggregate with allowance for texture can be used for a quantitative estimation of the contribution of the texture of semifinished magnesium alloy products to the anisotropy of their strength properties. A technique of determining the composition of a solid solution and the intermetallic phase Al12Mg17 content is developed using the measurement of the lattice parameters of the solid solution and the known dependence of these lattice parameters on the composition.

  18. Correcting power and p-value calculations for bias in diffusion tensor imaging.

    PubMed

    Lauzon, Carolyn B; Landman, Bennett A

    2013-07-01

    Diffusion tensor imaging (DTI) provides quantitative parametric maps sensitive to tissue microarchitecture (e.g., fractional anisotropy, FA). These maps are estimated through computational processes and subject to random distortions including variance and bias. Traditional statistical procedures commonly used for study planning (including power analyses and p-value/alpha-rate thresholds) specifically model variability, but neglect potential impacts of bias. Herein, we quantitatively investigate the impacts of bias in DTI on hypothesis test properties (power and alpha-rate) using a two-sided hypothesis testing framework. We present theoretical evaluation of bias on hypothesis test properties, evaluate the bias estimation technique SIMEX for DTI hypothesis testing using simulated data, and evaluate the impacts of bias on spatially varying power and alpha rates in an empirical study of 21 subjects. Bias is shown to inflate alpha rates, distort the power curve, and cause significant power loss even in empirical settings where the expected difference in bias between groups is zero. These adverse effects can be attenuated by properly accounting for bias in the calculation of power and p-values. Copyright © 2013 Elsevier Inc. All rights reserved.
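
    The alpha-inflation mechanism is easy to quantify for a two-sided z-test: a deterministic bias shifts the test statistic's distribution. A small sketch follows, with invented effect sizes and standard errors.

      from scipy.stats import norm

      def effective_alpha(bias, se, alpha=0.05):
          """Actual type-I error of a two-sided z-test when the estimate
          carries a deterministic bias (same units as the effect)."""
          z = norm.ppf(1 - alpha / 2)
          shift = bias / se
          return norm.sf(z - shift) + norm.cdf(-z - shift)

      def effective_power(delta, bias, se, alpha=0.05):
          """Power for a true group difference `delta` under the same bias."""
          z = norm.ppf(1 - alpha / 2)
          shift = (delta + bias) / se
          return norm.sf(z - shift) + norm.cdf(-z - shift)

      # Even a bias of half a standard error noticeably inflates alpha:
      print(effective_alpha(bias=0.5 * 0.02, se=0.02))   # ~0.079 > 0.05
      print(effective_power(delta=0.03, bias=-0.01, se=0.02))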

  19. An improved level set method for brain MR images segmentation and bias correction.

    PubMed

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on an observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term used in the level set framework. Our method is able to capture bias of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.
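
    A plausible reconstruction of the localized clustering energy described above, with assumed notation (I the image, b the bias field, c_i the cluster centers, M_i the memberships, K a localizing kernel); the paper's exact formulation may differ in its details:

      % Localized K-means-type clustering energy (assumed notation):
      % I = image, b = bias field, c_i = cluster centers, M_i = membership
      % of tissue class i, K = localizing kernel centered at x.
      \[
        \mathcal{E}(b, c, M) =
        \int_\Omega \sum_{i=1}^{N}
          \left( \int_\Omega K(x-y)\,\bigl| I(y) - b(x)\,c_i \bigr|^{2} M_i(y)\,dy \right) dx
      \]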

  20. Tracer kinetics of forearm endothelial function: comparison of an empirical method and a quantitative modeling technique.

    PubMed

    Zhao, Xueli; Arsenault, Andre; Lavoie, Kim L; Meloche, Bernard; Bacon, Simon L

    2007-01-01

    Forearm Endothelial Function (FEF) is a marker that has been shown to discriminate patients with cardiovascular disease (CVD). FEF has been assessed using several parameters: the Rate of Uptake Ratio (RUR), EWUR (Elbow-to-Wrist Uptake Ratio) and EWRUR (Elbow-to-Wrist Relative Uptake Ratio). However, more robust models of the underlying kinetics are needed. The present study was designed to compare an empirical method with quantitative modeling techniques to better estimate the physiological parameters and understand the complex dynamic processes. The fitted time activity curves of the forearms, estimating blood and muscle components, were assessed using both an empirical method and a two-compartment model. Although correlational analyses suggested a good correlation between the methods for RUR (r=.90) and EWUR (r=.79), though not for EWRUR (r=.34), Bland-Altman plots found poor agreement between the methods for all 3 parameters. These results indicate that there is a large discrepancy between the empirical and computational methods for FEF. Further work is needed to establish the physiological and mathematical validity of the 2 modeling methods.

  1. Quantitative Aging Pattern in Mouse Urine Vapor as Measured by Gas-Liquid Chromatography

    NASA Technical Reports Server (NTRS)

    Robinson, Arthur B.; Dirren, Henri; Sheets, Alan; Miquel, Jaime; Lundgren, Paul R.

    1975-01-01

    We have discovered a quantitative aging pattern in mouse urine vapor. The diagnostic power of the pattern has been found to be high. We hope that this pattern will eventually allow quantitative estimates of physiological age and some insight into the biochemistry of aging.

  2. Genomic Quantitative Genetics to Study Evolution in the Wild.

    PubMed

    Gienapp, Phillip; Fior, Simone; Guillaume, Frédéric; Lasky, Jesse R; Sork, Victoria L; Csilléry, Katalin

    2017-12-01

    Quantitative genetic theory provides a means of estimating the evolutionary potential of natural populations. However, this approach was previously only feasible in systems where the genetic relatedness between individuals could be inferred from pedigrees or experimental crosses. The genomic revolution opened up the possibility of obtaining the realized proportion of genome shared among individuals in natural populations of virtually any species, which could promise (more) accurate estimates of quantitative genetic parameters in virtually any species. Such a 'genomic' quantitative genetics approach relies on fewer assumptions, offers a greater methodological flexibility, and is thus expected to greatly enhance our understanding of evolution in natural populations, for example, in the context of adaptation to environmental change, eco-evolutionary dynamics, and biodiversity conservation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Food Consumption and Handling Survey for Quantitative Microbiological Consumer Phase Risk Assessments.

    PubMed

    Chardon, Jurgen; Swart, Arno

    2016-07-01

    In the consumer phase of a typical quantitative microbiological risk assessment (QMRA), mathematical equations identify data gaps. To acquire useful data we designed a food consumption and food handling survey (2,226 respondents) for QMRA applications that is especially aimed at obtaining quantitative data. For a broad spectrum of food products, the survey covered the following topics: processing status at retail, consumer storage, preparation, and consumption. Questions were designed to facilitate distribution fitting. In the statistical analysis, special attention was given to the selection of the most adequate distribution to describe the data. Bootstrap procedures were used to describe uncertainty. The final result was a coherent quantitative consumer phase food survey and parameter estimates for food handling and consumption practices in The Netherlands, including variation over individuals and uncertainty estimates.
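
    A minimal sketch of the distribution-fitting and bootstrap steps described above, using generic scipy distributions; the candidate families, the fixed location, and the synthetic data are assumptions for illustration.

      import numpy as np
      from scipy import stats

      def best_fit(data, candidates=(stats.lognorm, stats.gamma,
                                     stats.weibull_min)):
          """Fit candidate distributions by maximum likelihood and pick
          the one with the lowest AIC."""
          best = None
          for dist in candidates:
              params = dist.fit(data, floc=0)  # fix location for stability
              ll = np.sum(dist.logpdf(data, *params))
              aic = 2 * len(params) - 2 * ll
              if best is None or aic < best[0]:
                  best = (aic, dist, params)
          return best

      def bootstrap_mean_ci(data, n_boot=2000, seed=0):
          """Percentile bootstrap CI for the mean, to express uncertainty."""
          rng = np.random.default_rng(seed)
          means = [rng.choice(data, size=len(data), replace=True).mean()
                   for _ in range(n_boot)]
          return np.percentile(means, [2.5, 97.5])

      # Hypothetical storage-time responses (days) from one survey item.
      storage_days = np.random.default_rng(3).gamma(2.0, 1.5, size=500)
      aic, dist, params = best_fit(storage_days)
      print(dist.name, params, bootstrap_mean_ci(storage_days))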

  4. Constellation Ground Systems Launch Availability Analysis: Enhancing Highly Reliable Launch Systems Design

    NASA Technical Reports Server (NTRS)

    Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.

    2010-01-01

    Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, within a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability, and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used to calculate failure rates for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to assess compliance with requirements and to highlight design or performance shortcomings for further decision making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability, and maintainability analysis, and present findings and observation based on analysis leading to the Ground Operations Project Preliminary Design Review milestone.

  5. Multi-model approach to petroleum resource appraisal using analytic methodologies for probabilistic systems

    USGS Publications Warehouse

    Crovelli, R.A.

    1988-01-01

    The geologic appraisal model that is selected for a petroleum resource assessment depends upon the purpose of the assessment, basic geologic assumptions of the area, type of available data, time available before deadlines, available human and financial resources, available computer facilities, and, most importantly, the available quantitative methodology with corresponding computer software and any new quantitative methodology that would have to be developed. Therefore, different resource assessment projects usually require different geologic models. Also, more than one geologic model might be needed in a single project for assessing different regions of the study area or for cross-checking resource estimates of the area. Some geologic analyses used in the past for petroleum resource appraisal involved play analysis. The corresponding quantitative methodologies of these analyses usually consisted of Monte Carlo simulation techniques. A probabilistic system of petroleum resource appraisal for play analysis has been designed to meet the following requirements: (1) includes a variety of geologic models, (2) uses an analytic methodology instead of Monte Carlo simulation, (3) possesses the capacity to aggregate estimates from many areas that have been assessed by different geologic models, and (4) runs quickly on a microcomputer. Geologic models consist of four basic types: reservoir engineering, volumetric yield, field size, and direct assessment. Several case histories and present studies by the U.S. Geological Survey are discussed. ?? 1988 International Association for Mathematical Geology.

  6. A Novel Method of Quantitative Anterior Chamber Depth Estimation Using Temporal Perpendicular Digital Photography

    PubMed Central

    Zamir, Ehud; Kong, George Y.X.; Kowalski, Tanya; Coote, Michael; Ang, Ghee Soon

    2016-01-01

    Purpose We hypothesize that: (1) Anterior chamber depth (ACD) is correlated with the relative anteroposterior position of the pupillary image, as viewed from the temporal side. (2) Such a correlation may be used as a simple quantitative tool for estimation of ACD. Methods Two hundred sixty-six phakic eyes had lateral digital photographs taken from the temporal side, perpendicular to the visual axis, and underwent optical biometry (Nidek AL scanner). The relative anteroposterior position of the pupillary image was expressed using the ratio between: (1) lateral photographic temporal limbus to pupil distance (“E”) and (2) lateral photographic temporal limbus to cornea distance (“Z”). In the first chronological half of patients (Correlation Series), E:Z ratio (EZR) was correlated with optical biometric ACD. The correlation equation was then used to predict ACD in the second half of patients (Prediction Series) and compared to their biometric ACD for agreement analysis. Results A strong linear correlation was found between EZR and ACD, R = −0.91, R2 = 0.81. Bland-Altman analysis showed good agreement between predicted ACD using this method and the optical biometric ACD. The mean error was −0.013 mm (range −0.377 to 0.336 mm), standard deviation 0.166 mm. The 95% limits of agreement were ±0.33 mm. Conclusions Lateral digital photography and EZR calculation is a novel method to quantitatively estimate ACD, requiring minimal equipment and training. Translational Relevance EZ ratio may be employed in screening for angle closure glaucoma. It may also be helpful in outpatient medical clinic settings, where doctors need to judge the safety of topical or systemic pupil-dilating medications versus their risk of triggering acute angle closure glaucoma. Similarly, non-ophthalmologists may use it to estimate the likelihood of acute angle closure glaucoma in emergency presentations. PMID:27540496
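
    The correlation-then-prediction workflow can be sketched in a few lines: fit a linear model of ACD on EZR in one series, predict in another, and summarize agreement Bland-Altman style. All values below are synthetic.

      import numpy as np

      # Correlation series: fit ACD ~ EZR, then predict in a new series
      # and check agreement (values are made up for illustration).
      rng = np.random.default_rng(4)
      ezr = rng.uniform(0.2, 0.8, size=100)
      acd = 4.2 - 2.5 * ezr + rng.normal(0, 0.15, size=100)  # synthetic truth

      slope, intercept = np.polyfit(ezr, acd, 1)             # correlation series

      ezr_new = rng.uniform(0.2, 0.8, size=50)               # prediction series
      acd_true = 4.2 - 2.5 * ezr_new + rng.normal(0, 0.15, size=50)
      acd_pred = intercept + slope * ezr_new

      diff = acd_pred - acd_true                             # Bland-Altman
      print(f"mean error {diff.mean():.3f} mm, "
            f"95% limits of agreement ±{1.96 * diff.std():.2f} mm")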

  7. Global estimates of country health indicators: useful, unnecessary, inevitable?

    PubMed Central

    AbouZahr, Carla; Boerma, Ties; Hogan, Daniel

    2017-01-01

    ABSTRACT Background: The MDG era relied on global health estimates to fill data gaps and ensure temporal and cross-country comparability in reporting progress. Monitoring the Sustainable Development Goals will present new challenges, requiring enhanced capacities to generate, analyse, interpret and use country produced data. Objective: To summarize the development of global health estimates and discuss their utility and limitations from global and country perspectives. Design: Descriptive paper based on findings of intercountry workshops, reviews of literature and synthesis of experiences. Results: Producers of global health estimates focus on the technical soundness of estimation methods and comparability of the results across countries and over time. By contrast, country users are more concerned about the extent of their involvement in the estimation process and hesitate to buy into estimates derived using methods their technical staff cannot explain and that differ from national data sources. Quantitative summaries of uncertainty may be of limited practical use in policy discussions where decisions need to be made about what to do next. Conclusions: Greater transparency and involvement of country partners in the development of global estimates will help improve ownership, strengthen country capacities for data production and use, and reduce reliance on externally produced estimates. PMID:28532307

  8. Linearization improves the repeatability of quantitative dynamic contrast-enhanced MRI.

    PubMed

    Jones, Kyle M; Pagel, Mark D; Cárdenas-Rodríguez, Julio

    2018-04-01

    The purpose of this study was to compare the repeatabilities of the linear and nonlinear Tofts and reference region models (RRM) for dynamic contrast-enhanced MRI (DCE-MRI). Simulated and experimental DCE-MRI data from 12 rats with a flank tumor of C6 glioma acquired over three consecutive days were analyzed using four quantitative and semi-quantitative DCE-MRI metrics. The quantitative methods used were: 1) linear Tofts model (LTM), 2) non-linear Tofts model (NTM), 3) linear RRM (LRRM), and 4) non-linear RRM (NRRM). The following semi-quantitative metrics were used: 1) maximum enhancement ratio (MER), 2) time to peak (TTP), 3) initial area under the curve (iauc64), and 4) slope. LTM and NTM were used to estimate Ktrans, while LRRM and NRRM were used to estimate Ktrans relative to muscle (RKtrans). Repeatability was assessed by calculating the within-subject coefficient of variation (wSCV) and the percent intra-subject variation (iSV) determined with the Gage R&R analysis. The iSV for RKtrans using LRRM was two-fold lower compared to NRRM at all simulated and experimental conditions. A similar trend was observed for the Tofts model, where LTM was at least 50% more repeatable than NTM under all experimental and simulated conditions. The semi-quantitative metrics iauc64 and MER were as repeatable as Ktrans and RKtrans estimated by LTM and LRRM, respectively. The iSV for iauc64 and MER was significantly lower than the iSV for slope and TTP. In simulations and experimental results, linearization improves the repeatability of quantitative DCE-MRI by at least 30%, making it as repeatable as semi-quantitative metrics. Copyright © 2017 Elsevier Inc. All rights reserved.
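
    To make the linearization concrete, here is a minimal linear Tofts fit on synthetic data: integrating dCt/dt = Ktrans*Cp - kep*Ct turns the fit into ordinary least squares. The toy arterial input function and settings are assumptions, not the study's protocol.

      import numpy as np
      from scipy.integrate import cumulative_trapezoid

      def linear_tofts(t, ct, cp):
          """Linearized Tofts fit: Ct(t) = Ktrans*int(Cp) - kep*int(Ct),
          which is linear in the two parameters."""
          int_cp = cumulative_trapezoid(cp, t, initial=0)
          int_ct = cumulative_trapezoid(ct, t, initial=0)
          A = np.column_stack([int_cp, -int_ct])
          (ktrans, kep), *_ = np.linalg.lstsq(A, ct, rcond=None)
          return ktrans, kep

      # Synthetic check with known parameters.
      t = np.linspace(0, 5, 200)                       # minutes
      cp = 5 * t * np.exp(-2 * t)                      # toy AIF
      kt_true, kep_true = 0.25, 0.6
      ct = np.zeros_like(t)
      for k in range(1, len(t)):                       # simple forward Euler
          dt = t[k] - t[k - 1]
          ct[k] = ct[k - 1] + dt * (kt_true * cp[k - 1] - kep_true * ct[k - 1])
      print(linear_tofts(t, ct, cp))                   # ~ (0.25, 0.6)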

  9. Quantitative Estimates of the Social Benefits of Learning, 1: Crime. Wider Benefits of Learning Research Report.

    ERIC Educational Resources Information Center

    Feinstein, Leon

    The cost benefits of lifelong learning in the United Kingdom were estimated, based on quantitative evidence. Between 1975-1996, 43 police force areas in England and Wales were studied to determine the effect of wages on crime. It was found that a 10 percent rise in the average pay of those on low pay reduces the overall area property crime rate by…

  10. Generalized paired-agent kinetic model for in vivo quantification of cancer cell-surface receptors under receptor saturation conditions

    NASA Astrophysics Data System (ADS)

    Sadeghipour, N.; Davis, S. C.; Tichauer, K. M.

    2017-01-01

    New precision medicine drugs oftentimes act through binding to specific cell-surface cancer receptors, and thus their efficacy is highly dependent on the availability of those receptors and the receptor concentration per cell. Paired-agent molecular imaging can provide quantitative information on receptor status in vivo, especially in tumor tissue; however, to date, published approaches to paired-agent quantitative imaging require that only ‘trace’ levels of imaging agent exist compared to receptor concentration. This strict requirement may limit applicability, particularly in drug binding studies, which seek to report on a biological effect in response to saturating receptors with a drug moiety. To extend the regime over which paired-agent imaging may be used, this work presents a generalized simplified reference tissue model (GSRTM) for paired-agent imaging developed to approximate receptor concentration in both non-receptor-saturated and receptor-saturated conditions. Extensive simulation studies show that tumor receptor concentration estimates recovered using the GSRTM are more accurate in receptor-saturation conditions than the standard simple reference tissue model (SRTM) (% error (mean ± sd): GSRTM 0 ± 1 and SRTM 50 ± 1) and match the SRTM accuracy in non-saturated conditions (% error (mean ± sd): GSRTM 5 ± 5 and SRTM 0 ± 5). To further test the approach, GSRTM-estimated receptor concentration was compared to SRTM-estimated values extracted from tumor xenograft in vivo mouse model data. The GSRTM estimates were observed to deviate from the SRTM in tumors with low receptor expression (which are likely in a saturated regime). Finally, a general ‘rule-of-thumb’ algorithm is presented to estimate the expected level of receptor saturation that would be achieved in a given tissue, provided dose and pharmacokinetic information about the drug or imaging agent being used and physiological information about the tissue. These studies suggest that the GSRTM is necessary when receptor saturation exceeds 20% and highlight the potential for GSRTM to accurately measure receptor concentrations under saturation conditions, such as might be required during high-dose drug studies, or for imaging applications where high concentrations of imaging agent are required to optimize signal-to-noise conditions. This model can also be applied to PET and SPECT imaging studies that tend to suffer from noisier data, but require one less parameter to fit if images are converted to imaging agent concentration (quantitative PET/SPECT).

  11. NEXRAD quantitative precipitation estimates, data acquisition, and processing for the DuPage County, Illinois, streamflow-simulation modeling system

    USGS Publications Warehouse

    Ortel, Terry W.; Spies, Ryan R.

    2015-11-19

    Next-Generation Radar (NEXRAD) has become an integral component in the estimation of precipitation (Kitzmiller and others, 2013). The high spatial and temporal resolution of NEXRAD has revolutionized the ability to estimate precipitation across vast regions, which is especially beneficial in areas without a dense rain-gage network. With the improved precipitation estimates, hydrologic models can produce reliable streamflow forecasts for areas across the United States. NEXRAD data from the National Weather Service (NWS) has been an invaluable tool used by the U.S. Geological Survey (USGS) for numerous projects and studies; NEXRAD data processing techniques similar to those discussed in this Fact Sheet have been developed within the USGS, including the NWS Quantitative Precipitation Estimates archive developed by Blodgett (2013).

  12. Rapid Quantitation of Ascorbic and Folic Acids in SRM 3280 Multivitamin/Multielement Tablets using Flow-Injection Tandem Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhandari, Deepak; Kertesz, Vilmos; Van Berkel, Gary J

    RATIONALE: Ascorbic acid (AA) and folic acid (FA) are water-soluble vitamins and are usually fortified in food and dietary supplements. For the safety of human health, proper intake of these vitamins is recommended. Improvement in the analysis time required for the quantitative determination of these vitamins in food and nutritional formulations is desired. METHODS: A simple and fast (~5 min) in-tube sample preparation was performed, independently for FA and AA, by mixing extraction solvent with a powdered sample aliquot followed by agitation, centrifugation, and filtration to recover an extract for analysis. Quantitative detection was achieved by flow-injection (1 µL injection volume) electrospray ionization tandem mass spectrometry (ESI-MS/MS) in negative ion mode using the method of standard addition. RESULTS: The method of standard addition was employed for the quantitative estimation of each vitamin in a sample extract. At least 2 spiked and 1 non-spiked sample extracts were injected in triplicate for each quantitative analysis. Given an injection-to-injection interval of approximately 2 min, about 18 min was required to complete the quantitative estimation of each vitamin. The concentration values obtained for the respective vitamins in the standard reference material (SRM) 3280 using this approach were within the statistical range of the certified values provided in the NIST Certificate of Analysis. The estimated limits of detection of FA and AA were 13 and 5.9 ng/g, respectively. CONCLUSIONS: Flow-injection ESI-MS/MS was successfully applied for the rapid quantitation of FA and AA in SRM 3280 multivitamin/multielement tablets.
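
    The standard-addition arithmetic is compact enough to sketch: fit response versus added concentration and read the native concentration off the x-intercept. The responses below are invented.

      import numpy as np

      def standard_addition(added_conc, signal):
          """Method-of-standard-addition estimate: fit signal vs added
          concentration; the native concentration is the distance from
          the origin to the x-intercept, i.e. intercept/slope."""
          slope, intercept = np.polyfit(added_conc, signal, 1)
          return intercept / slope

      # Hypothetical triplicate-averaged responses: one non-spiked and
      # two spiked extracts, as in the workflow described above.
      added = np.array([0.0, 50.0, 100.0])      # ng/g added
      resp = np.array([120.0, 245.0, 372.0])    # MS/MS peak area (arbitrary)
      print(f"native concentration ~ {standard_addition(added, resp):.1f} ng/g")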

  13. Missing Value Monitoring Enhances the Robustness in Proteomics Quantitation.

    PubMed

    Matafora, Vittoria; Corno, Andrea; Ciliberto, Andrea; Bachi, Angela

    2017-04-07

    In global proteomic analysis, it is estimated that proteins span from millions to less than 100 copies per cell. The challenge of protein quantitation by classic shotgun proteomic techniques relies on the presence of missing values in peptides belonging to low-abundance proteins that lowers intraruns reproducibility affecting postdata statistical analysis. Here, we present a new analytical workflow MvM (missing value monitoring) able to recover quantitation of missing values generated by shotgun analysis. In particular, we used confident data-dependent acquisition (DDA) quantitation only for proteins measured in all the runs, while we filled the missing values with data-independent acquisition analysis using the library previously generated in DDA. We analyzed cell cycle regulated proteins, as they are low abundance proteins with highly dynamic expression levels. Indeed, we found that cell cycle related proteins are the major components of the missing values-rich proteome. Using the MvM workflow, we doubled the number of robustly quantified cell cycle related proteins, and we reduced the number of missing values achieving robust quantitation for proteins over ∼50 molecules per cell. MvM allows lower quantification variance among replicates for low abundance proteins with respect to DDA analysis, which demonstrates the potential of this novel workflow to measure low abundance, dynamically regulated proteins.

  14. Quantitative analysis of Al-Si alloy using calibration free laser induced breakdown spectroscopy (CF-LIBS)

    NASA Astrophysics Data System (ADS)

    Shakeel, Hira; Haq, S. U.; Aisha, Ghulam; Nadeem, Ali

    2017-06-01

    The quantitative analysis of a standard aluminum-silicon alloy has been performed using calibration free laser induced breakdown spectroscopy (CF-LIBS). The plasma was produced using the fundamental harmonic (1064 nm) of an Nd:YAG laser, and the emission spectra were recorded at a detector gate delay of 3.5 μs. The qualitative analysis of the emission spectra confirms the presence of Mg, Al, Si, Ti, Mn, Fe, Ni, Cu, Zn, Sn, and Pb in the alloy. The background-subtracted and self-absorption-corrected emission spectra were used to estimate the plasma temperature as 10,100 ± 300 K. The plasma temperature and the self-absorption-corrected emission lines of each element were then used to determine the concentration of each species present in the alloy. The use of corrected emission intensities and accurate evaluation of the plasma temperature yields reliable quantitative analysis, with a maximum deviation of 2.2% from the reference sample concentration.
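
    Temperature estimation of this kind is commonly done with a Boltzmann plot; a minimal sketch follows, with invented line data, and is not necessarily the exact procedure used in the paper.

      import numpy as np

      K_B = 8.617333262e-5  # Boltzmann constant, eV/K

      def boltzmann_temperature(intensity, wavelength, g_upper, a_ul, e_upper_ev):
          """Excitation temperature from a Boltzmann plot: for lines of
          one species, ln(I*lambda/(g*A)) vs upper-level energy E has
          slope -1/(k_B*T)."""
          y = np.log(intensity * wavelength / (g_upper * a_ul))
          slope, _ = np.polyfit(e_upper_ev, y, 1)
          return -1.0 / (K_B * slope)

      # Hypothetical, self-absorption-corrected line data (all values
      # made up for the demonstration).
      wl = np.array([396.1, 394.4, 390.5, 389.0])      # nm
      e_up = np.array([3.14, 4.02, 4.83, 5.62])        # eV
      g = np.array([4, 6, 2, 4])
      a = np.array([9.8e7, 7.2e7, 4.5e7, 6.1e7])       # s^-1
      T_true = 10_100.0
      intens = (g * a / wl) * np.exp(-e_up / (K_B * T_true))
      print(boltzmann_temperature(intens, wl, g, a, e_up))  # ~10100 K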

  15. Fitness to work of astronauts in conditions of action of the extreme emotional factors

    NASA Astrophysics Data System (ADS)

    Prisniakova, L. M.

    2004-01-01

    A theoretical model for the quantitative determination of the influence of the level of emotional exertion on the success of human activity is presented. The learning curves of fixed words in groups with different levels of emotional exertion are analyzed. The magnitudes obtained for the time constant T, which depend on the type of emotional exertion, provide a quantitative measure of that exertion. These time constants could also be used to predict the fitness to work of an astronaut under extreme conditions. A reversal of the sign of the influence on the efficiency of human activity is also detected. The paper offers a mathematical model of the relation between successful activity and motivation or emotional exertion (the Yerkes-Dodson law). The proposed models can serve as a theoretical basis for the quantitative assessment of astronaut performance under emotional stress at the selection phase.

  16. Dynamic inundation mapping of Hurricane Harvey flooding in the Houston metro area using hyper-resolution modeling and quantitative image reanalysis

    NASA Astrophysics Data System (ADS)

    Noh, S. J.; Lee, J. H.; Lee, S.; Zhang, Y.; Seo, D. J.

    2017-12-01

    Hurricane Harvey was one of the most extreme weather events in Texas history and left significant damages in the Houston and adjoining coastal areas. To understand better the relative impact to urban flooding of extreme amount and spatial extent of rainfall, unique geography, land use and storm surge, high-resolution water modeling is necessary such that natural and man-made components are fully resolved. In this presentation, we reconstruct spatiotemporal evolution of inundation during Hurricane Harvey using hyper-resolution modeling and quantitative image reanalysis. The two-dimensional urban flood model used is based on dynamic wave approximation and 10 m-resolution terrain data, and is forced by the radar-based multisensor quantitative precipitation estimates. The model domain includes Buffalo, Brays, Greens and White Oak Bayous in Houston. The model is simulated using hybrid parallel computing. To evaluate dynamic inundation mapping, we combine various qualitative crowdsourced images and video footages with LiDAR-based terrain data.

  17. Fitness to work of astronauts in conditions of action of the extreme emotional factors.

    PubMed

    Prisniakova, L M

    2004-01-01

    A theoretical model for the quantitative determination of the influence of the level of emotional exertion on the success of human activity is presented. The learning curves of fixed words in groups with different levels of emotional exertion are analyzed. The magnitudes obtained for the time constant T, which depend on the type of emotional exertion, provide a quantitative measure of that exertion. These time constants could also be used to predict the fitness to work of an astronaut under extreme conditions. A reversal of the sign of the influence on the efficiency of human activity is also detected. The paper offers a mathematical model of the relation between successful activity and motivation or emotional exertion (the Yerkes-Dodson law). The proposed models can serve as a theoretical basis for the quantitative assessment of astronaut performance under emotional stress at the selection phase. Published by Elsevier Ltd on behalf of COSPAR.

  18. Dynamical stochastic processes of returns in financial markets

    NASA Astrophysics Data System (ADS)

    Lim, Gyuchang; Kim, SooYong; Yoon, Seong-Min; Jung, Jae-Won; Kim, Kyungsik

    2007-03-01

    We study the evolution of probability distribution functions of returns, from the tick data of the Korean treasury bond (KTB) futures and the S&P 500 stock index, which can be described by means of the Fokker-Planck equation. We show that the Fokker-Planck and Langevin equations can be reconstructed directly from the empirical data via the estimated Kramers-Moyal coefficients. By analyzing the statistics of the returns, we quantitatively characterize the deterministic and random influences on financial time series for both markets, for which we can give a simple physical interpretation. We particularly focus on the diffusion coefficient, which may be important for the creation of a portfolio.
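
    A minimal sketch of estimating the first two Kramers-Moyal coefficients (drift and diffusion) from a time series by binned conditional moments, verified on a synthetic Ornstein-Uhlenbeck process; the bin counts and thresholds are arbitrary choices.

      import numpy as np

      def kramers_moyal(x, dt, n_bins=40):
          """Estimate drift D1 and diffusion D2 by binning conditional
          moments: D_k(x) = <(x(t+dt) - x(t))^k | x(t)=x> / (k! * dt)."""
          dx = np.diff(x)
          bins = np.linspace(x.min(), x.max(), n_bins + 1)
          idx = np.digitize(x[:-1], bins) - 1
          centers, d1, d2 = [], [], []
          for b in range(n_bins):
              sel = idx == b
              if sel.sum() < 50:          # skip poorly populated bins
                  continue
              centers.append(0.5 * (bins[b] + bins[b + 1]))
              d1.append(dx[sel].mean() / dt)
              d2.append((dx[sel] ** 2).mean() / (2 * dt))
          return np.array(centers), np.array(d1), np.array(d2)

      # Check on a synthetic Ornstein-Uhlenbeck process (D1 = -x, D2 = 0.5).
      rng = np.random.default_rng(5)
      dt, n = 1e-3, 500_000
      x = np.empty(n)
      x[0] = 0.0
      noise = rng.normal(0, np.sqrt(dt), n - 1)
      for k in range(n - 1):
          x[k + 1] = x[k] - x[k] * dt + noise[k]
      centers, d1, d2 = kramers_moyal(x, dt)
      print(np.polyfit(centers, d1, 1)[0])   # slope ~ -1
      print(d2.mean())                       # ~0.5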

  19. SAS program for quantitative stratigraphic correlation by principal components

    USGS Publications Warehouse

    Hohn, M.E.

    1985-01-01

    A SAS program is presented which constructs a composite section of stratigraphic events through principal components analysis. The variables in the analysis are stratigraphic sections and the observational units are range limits of taxa. The program standardizes data in each section, extracts eigenvectors, estimates missing range limits, and computes the composite section from scores of events on the first principal component. Provided is an option of several types of diagnostic plots; these help one to determine conservative range limits or unrealistic estimates of missing values. Inspection of the graphs and eigenvalues allow one to evaluate goodness of fit between the composite and measured data. The program is extended easily to the creation of a rank-order composite. ?? 1985.
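
    The core computation translates readily out of SAS; below is a compact Python analogue (standardize sections, fill missing range limits crudely, order events by first-principal-component scores). The one-pass missing-value fill is a simplification of the program's iterative estimation.

      import numpy as np

      def composite_section(events):
          """Order stratigraphic events along a composite via the first
          principal component. `events` is an (n_events, n_sections)
          array of event levels, NaN where an event is missing."""
          x = events.copy()
          # standardize within each section (column), ignoring missing values
          x = (x - np.nanmean(x, axis=0)) / np.nanstd(x, axis=0)
          # crude fill with the event's mean standardized level
          row_mean = np.nanmean(x, axis=1, keepdims=True)
          x = np.where(np.isnan(x), row_mean, x)
          # first principal component via SVD of the centered matrix
          xc = x - x.mean(axis=0)
          _, _, vt = np.linalg.svd(xc, full_matrices=False)
          scores = xc @ vt[0]
          return np.argsort(scores), scores

      # Toy data: 6 events measured in 3 sections (levels in meters).
      ev = np.array([[ 2.0,  1.5, np.nan],
                     [ 5.0,  4.0,  6.0],
                     [ 8.0, np.nan, 9.5],
                     [12.0, 10.0, 13.0],
                     [15.0, 13.5, 16.0],
                     [18.0, 17.0, 20.0]])
      order, scores = composite_section(ev)
      print(order)  # events sorted along the composite (sign of PC arbitrary)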

  20. Sandstone copper assessment of the Teniz Basin, Kazakhstan: Chapter R in Global mineral resource assessment

    USGS Publications Warehouse

    Cossette, Pamela M.; Bookstrom, Arthur A.; Hayes, Timothy S.; Robinson, Gilpin R.; Wallis, John C.; Zientek, Michael L.

    2014-01-01

    A quantitative mineral resource assessment has been completed that (1) delineates one 49,714 km2 tract permissive for undiscovered, sandstone subtype, sediment-hosted stratabound copper deposits, and (2) provides probabilistic estimates of the numbers of undiscovered deposits and the probable amounts of copper resource contained in those deposits. The permissive tract delineated in this assessment encompasses no previously known sandstone subtype, sediment-hosted stratabound copper deposits. However, this assessment estimates (with 30 percent probability) that a mean of nine undiscovered sandstone subtype copper deposits may be present in the Teniz Basin and could contain a mean total of 8.9 million metric tons of copper and 7,500 metric tons of silver.

  1. Influence of thermal anisotropy on best-fit estimates of shock normals

    NASA Technical Reports Server (NTRS)

    Lepping, R. P.

    1971-01-01

    The influence of thermal anisotropy on the estimates of interplanetary shock parameters and the associated normals is discussed. A practical theorem is presented for quantitatively correcting for anisotropic effects by weighting the before and after magnetic fields by the same anisotropy parameter h. The quantity h depends only on the thermal anisotropies before and after the shock and on the angles between the magnetic fields and the shock normal. The theorem can be applied to most slow shocks, but in those cases h usually should be lower, and sometimes markedly lower, than unity. For the extreme values of h, little change results in the shock parameters or in the shock normal.

  2. Atmospheric Effects of Subsonic Aircraft: Interim Assessment Report of the Advanced Subsonic Technology Program

    NASA Technical Reports Server (NTRS)

    Friedl, Randall R. (Editor)

    1997-01-01

    This first interim assessment of the subsonic assessment (SASS) project attempts to summarize concisely the status of our knowledge concerning the impacts of present and future subsonic aircraft fleets. It also highlights the major areas of scientific uncertainty, through review of existing data bases and model-based sensitivity studies. In view of the need for substantial improvements in both model formulations and experimental databases, this interim assessment cannot provide confident numerical predictions of aviation impacts. However, a number of quantitative estimates are presented, which provide some guidance to policy makers.

  3. Downwind hazard calculations for space shuttle launches at Kennedy Space Center and Vandenberg Air Force Base

    NASA Technical Reports Server (NTRS)

    Susko, M.; Hill, C. K.; Kaufman, J. W.

    1974-01-01

    Quantitative estimates are presented of the pollutant concentrations associated with the emission of the major combustion products (HCl, CO, and Al2O3) into the lower atmosphere during normal launches of the space shuttle. The NASA/MSFC Multilayer Diffusion Model was used to obtain these calculations. Results are presented for nine sets of typical meteorological conditions at Kennedy Space Center, including fall, spring, and a sea-breeze condition, and six sets at Vandenberg AFB. In none of the selected typical meteorological regimes studied was a 10-min limit of 4 ppm exceeded.

  4. Acquisition and extinction in autoshaping.

    PubMed

    Kakade, Sham; Dayan, Peter

    2002-07-01

    C. R. Gallistel and J. Gibbon (2000) presented quantitative data on the speed with which animals acquire behavioral responses during autoshaping, together with a statistical model of learning intended to account for them. Although this model captures the form of the dependencies among critical variables, its detailed predictions are substantially at variance with the data. In the present article, further key data on the speed of acquisition are used to motivate an alternative model of learning, in which animals can be interpreted as paying different amounts of attention to stimuli according to estimates of their differential reliabilities as predictors.

  5. A method to characterize the roughness of 2-D line features: recrystallization boundaries.

    PubMed

    Sun, J; Zhang, Y B; Dahl, A B; Conradsen, K; Juul Jensen, D

    2017-03-01

    A method is presented that allows quantification of the roughness of nonplanar boundaries of objects for which the neutral plane is not known. The method provides quantitative descriptions of both the local and global characteristics. How the method can be used to estimate the sizes of rough features and local curvatures is also presented. The potential of the method is illustrated by quantification of the roughness of two recrystallization boundaries in a pure Al specimen characterized by scanning electron microscopy. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  6. Quantitative analysis of 18F-NaF dynamic PET/CT cannot differentiate malignant from benign lesions in multiple myeloma

    PubMed Central

    Sachpekidis, Christos; Hillengass, Jens; Goldschmidt, Hartmut; Anwar, Hoda; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-01-01

    Renewed interest has recently developed in the highly sensitive bone-seeking radiopharmaceutical 18F-NaF. The aim of the present study is to evaluate the potential utility of quantitative analysis of 18F-NaF dynamic PET/CT data in differentiating malignant from benign degenerative lesions in multiple myeloma (MM). 80 MM patients underwent whole-body PET/CT and dynamic PET/CT scanning of the pelvis with 18F-NaF. PET/CT data evaluation was based on visual (qualitative) assessment, semi-quantitative (SUV) calculations, and absolute quantitative estimations after application of a 2-tissue compartment model and a non-compartmental approach leading to the extraction of fractal dimension (FD). In total 263 MM lesions were demonstrated on 18F-NaF PET/CT. Semi-quantitative and quantitative evaluations were performed for 25 MM lesions as well as for 25 benign, degenerative and traumatic lesions. Mean SUVaverage for MM lesions was 11.9 and mean SUVmax was 23.2. The respective SUVaverage and SUVmax for degenerative lesions were 13.5 and 20.2. Kinetic analysis of 18F-NaF revealed the following mean values for MM lesions: K1 = 0.248 (1/min), k3 = 0.359 (1/min), influx (Ki) = 0.107 (1/min), FD = 1.382, while the respective values for degenerative lesions were: K1 = 0.169 (1/min), k3 = 0.422 (1/min), influx (Ki) = 0.095 (1/min), FD = 1.411. No statistically significant differences between MM and benign degenerative disease regarding SUVaverage, SUVmax, K1, k3 and influx (Ki) were demonstrated. FD was significantly higher in degenerative than in malignant lesions. The present findings show that quantitative analysis of 18F-NaF PET data cannot differentiate malignant from benign degenerative lesions in MM patients, supporting previously published results, which reflect the limited role of 18F-NaF PET/CT in the diagnostic workup of MM. PMID:28913153
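
    For reference, the influx constant reported above follows from the fitted rate constants of the irreversible two-tissue compartment model; the standard relation (not spelled out in this record, but consistent with the quoted values) is

```latex
K_i = \frac{K_1 k_3}{k_2 + k_3},
```

    where K1 describes tracer delivery from plasma to tissue, k2 the efflux back to plasma, and k3 the incorporation into the bound (mineral) compartment.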

  7. An open tool for input function estimation and quantification of dynamic PET FDG brain scans.

    PubMed

    Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro

    2016-08-01

    Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient, so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with small relative error in the estimated influx versus ground truth on phantoms. The main contribution of this article is the development of an open-source, free-to-use tool that encapsulates several well-known methods for the estimation of the input function and the quantification of dynamic PET FDG studies. Some alternative strategies are also proposed and implemented in the tool for the segmentation of blood pools and parameter estimation. The tool was tested on phantoms with encouraging results that suggest that even bloodless estimators could provide a viable alternative to blood sampling for quantification using graphical analysis. The open tool is a promising opportunity for collaboration among investigators and further validation on real studies.
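
    As an illustration of the graphical analysis mentioned above, the following is a minimal Patlak-style sketch for estimating the influx constant from a dynamic study; the array names, sampling, and toy data are hypothetical, and this is not the tool's actual implementation.

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y(t); same length as t, starts at 0."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

def patlak_influx(t, c_tissue, c_plasma, t_star=10.0):
    """Slope of Ct/Cp versus int_0^t Cp dtau / Cp over the quasi-linear tail."""
    x = cumtrapz(c_plasma, t) / c_plasma
    y = c_tissue / c_plasma
    mask = t >= t_star                       # keep only the quasi-linear portion
    ki, _intercept = np.polyfit(x[mask], y[mask], 1)
    return ki

# Hypothetical dynamic FDG data (minutes, arbitrary activity units)
t = np.linspace(0.25, 60.0, 120)
c_plasma = 10.0 * np.exp(-0.3 * t) + 1.0                   # toy input function
c_tissue = 0.05 * cumtrapz(c_plasma, t) + 0.3 * c_plasma   # Ki = 0.05, V0 = 0.3
print(f"estimated Ki ~ {patlak_influx(t, c_tissue, c_plasma):.3f} 1/min")
```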

  8. Toxicity Estimation Software Tool (TEST)

    EPA Science Inventory

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical characteristics of a chemical's structure.

  9. Voronovskaja's theorem revisited

    NASA Astrophysics Data System (ADS)

    Tachev, Gancho T.

    2008-07-01

    We present a new quantitative variant of Voronovskaja's theorem for the Bernstein operator. This estimate improves the recent quantitative versions of Voronovskaja's theorem for certain Bernstein-type operators, obtained by H. Gonska, P. Pitul and I. Rasa in 2006.
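
    For context, the classical theorem being refined states that, for f twice differentiable at x in [0, 1], the Bernstein operators B_n satisfy

```latex
\lim_{n\to\infty} n\,\bigl[B_n(f;x) - f(x)\bigr] = \frac{x(1-x)}{2}\, f''(x),
```

    and quantitative variants of the kind presented here bound the rate at which this limit is attained.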

  10. A novel approach to molecular similarity

    NASA Astrophysics Data System (ADS)

    Cooper, David L.; Allan, Neil L.

    1989-09-01

    We review briefly the general problem of assessing the similarity between one molecule and another. We propose a novel approach to the quantitative estimation of the similarity of two electron distributions. The procedure is based on momentum space concepts, and avoids many of the difficulties associated with the usual position space definitions. Results are presented for the model systems CH3CH2CH3, CH3OCH3, CH3SCH3, H2O and H2S.
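
    The record does not reproduce the authors' definition, but a common quantitative index of this general kind (a Carbó-type overlap index, written here for momentum-space electron densities) is

```latex
R_{AB} = \frac{\int \rho_A(\mathbf{p})\,\rho_B(\mathbf{p})\,\mathrm{d}^3p}
{\Bigl(\int \rho_A(\mathbf{p})^2\,\mathrm{d}^3p\Bigr)^{1/2}\Bigl(\int \rho_B(\mathbf{p})^2\,\mathrm{d}^3p\Bigr)^{1/2}},
```

    which lies between 0 and 1 and equals 1 only for proportional distributions.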

  11. Noninvasive oxygen partial pressure measurement of human body fluids in vivo using magnetic resonance imaging.

    PubMed

    Zaharchuk, Greg; Busse, Reed F; Rosenthal, Guy; Manley, Geoffery T; Glenn, Orit A; Dillon, William P

    2006-08-01

    The oxygen partial pressure (pO2) of human body fluids reflects the oxygenation status of surrounding tissues. All existing fluid pO2 measurements are invasive, requiring either microelectrode/optode placement or fluid removal. The purpose of this study is to develop a noninvasive magnetic resonance imaging method to measure the pO2 of human body fluids. We developed an imaging paradigm that exploits the paramagnetism of molecular oxygen to create quantitative images of fluid oxygenation. A single-shot fast spin echo pulse sequence was modified to minimize artifacts from motion, fluid flow, and partial volume. Longitudinal relaxation rate (R1 = 1/T1) was measured with a time-efficient nonequilibrium saturation recovery method and correlated with pO2 measured in phantoms. pO2 images of human and fetal cerebrospinal fluid, bladder urine, and vitreous humor are presented and quantitative oxygenation levels are compared with prior literature estimates, where available. Significant pO2 increases are shown in cerebrospinal fluid and vitreous following 100% oxygen inhalation. Potential errors due to temperature, fluid flow, and partial volume are discussed. Noninvasive measurements of human body fluid pO2 in vivo are presented, which yield reasonable values based on prior literature estimates. This rapid imaging-based measurement of fluid oxygenation may provide insight into normal physiology as well as changes due to disease or during treatment.
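
    The physical basis of the method is that dissolved molecular oxygen is paramagnetic and shortens T1, so to first order the longitudinal relaxation rate varies linearly with oxygen tension; the phantom calibration described above presumably establishes the two constants (the relaxivity r1 depends on field strength and temperature and is not given in this record):

```latex
R_1 = \frac{1}{T_1} = R_{1,0} + r_1\,\mathrm{pO_2}.
```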

  12. Use of Dual Polarization Radar in Validation of Satellite Precipitation Measurements: Rationale and Opportunities

    NASA Technical Reports Server (NTRS)

    Chandrasekar, V.; Hou, Arthur; Smith, Eric; Bringi, V. N.; Rutledge, S. A.; Gorgucci, E.; Petersen, W. A.; Skofronick-Jackson, Gail

    2008-01-01

    Dual-polarization weather radars have evolved significantly in the last three decades culminating in the operational deployment by the National Weather Service. In addition to operational applications in the weather service, dual-polarization radars have shown significant potential in contributing to the research fields of ground based remote sensing of rainfall microphysics, study of precipitation evolution and hydrometeor classification. Furthermore the dual-polarization radars have also raised the awareness of radar system aspects such as calibration. Microphysical characterization of precipitation and quantitative precipitation estimation are important applications that are critical in the validation of satellite borne precipitation measurements and also serve as a valuable tool in algorithm development. This paper presents the important role played by dual-polarization radar in validating space borne precipitation measurements. Starting from a historical evolution, the various configurations of dual-polarization radar are presented. Examples of raindrop size distribution retrievals and hydrometeor type classification are discussed. The quantitative precipitation estimation is a product of direct relevance to space borne observations. During the TRMM program, substantial advances were made with ground-based polarization radars, which collected unique observations in the tropics; these are noted here. The scientific accomplishments of relevance to space borne measurements of precipitation are summarized. The potential of dual-polarization radars and the opportunities they present in the era of the Global Precipitation Measurement mission are also discussed.

  13. Randomized trial of computerized quantitative pretest probability in low-risk chest pain patients: effect on safety and resource use.

    PubMed

    Kline, Jeffrey A; Zeitouni, Raghid A; Hernandez-Nino, Jackeline; Jones, Alan E

    2009-06-01

    We hypothesize that the presentation of a quantitative pretest probability of acute coronary syndrome would safely reduce unnecessary resource use in low-risk emergency department (ED) chest pain patients. This was a randomized controlled trial of adult patients with chest pain paired with their clinicians. Patients had neither obvious evidence of acute coronary syndrome nor obvious other reason for admission. Clinicians provided their unstructured point estimate for pretest probability before randomization. Clinicians and patients in the intervention group received a printout of the pretest probability of acute coronary syndrome, displayed numerically and graphically. Controls received no printout. Patients were followed for 45 days for predefined criteria of acute coronary syndrome and efficacy endpoints. Endpoints were compared between groups, with 95% confidence intervals (CIs) for differences. Four hundred patients were enrolled, and 31 were excluded for cocaine use or elopement from care. The mean pretest probability estimates of acute coronary syndrome were 4% (SD 5%) from clinicians and 4% (SD 6%) from the computer. Safety and efficacy endpoints for controls (n=185) versus intervention patients (n=184) were as follows: (1) delayed or missed diagnosis of acute coronary syndrome: 1 of 185 versus 0 of 184 (95% CI for difference = -2.8% to 15.0%); (2) hospital admission with no significant cardiovascular diagnosis, 11% versus 5% (95% CI for difference = -0.2% to 11.0%); (3) thoracic imaging imparting greater than 5 mSv radiation with a negative result, 20% versus 9% (95% CI for difference = 3.8% to 18.0%); (4) median length of stay, 11.4 hours versus 9.2 hours (95% CI for difference = -2.9 to 7.6 hours); (5) reported feeling "very satisfied" with clinician explanation of problem on follow-up survey, 38% versus 49% (95% CI for difference = 0.9% to 21.0%); (6) readmitted within 7 days, 11% versus 4% (95% CI for difference = 2.5% to 13.2%). Presentation of a quantitative estimate of the pretest probability of acute coronary syndrome to clinicians and low-risk ED chest pain patients was associated with reduced resource use, without evidence of an increased rate of premature discharge of patients with acute coronary syndrome.

  14. Improving Radar Quantitative Precipitation Estimation over Complex Terrain in the San Francisco Bay Area

    NASA Astrophysics Data System (ADS)

    Cifelli, R.; Chen, H.; Chandrasekar, V.

    2017-12-01

    A recent study by the State of California's Department of Water Resources has emphasized that the San Francisco Bay Area is at risk of catastrophic flooding. Therefore, accurate quantitative precipitation estimation (QPE) and forecast (QPF) are critical for protecting life and property in this region. Compared to rain gauges and meteorological satellites, ground-based radar has shown great advantages for high-resolution precipitation observations in both space and time. In addition, the polarization diversity shows great potential to characterize precipitation microphysics through identification of different hydrometeor types and their size and shape information. Currently, all the radars comprising the U.S. National Weather Service (NWS) Weather Surveillance Radar-1988 Doppler (WSR-88D) network are operating in dual-polarization mode. Enhancement of QPE is one of the main considerations of the dual-polarization upgrade. The San Francisco Bay Area is covered by two S-band WSR-88D radars, namely, KMUX and KDAX. However, in complex terrain like the Bay Area, it is still challenging to obtain an optimal rainfall algorithm for a given set of dual-polarization measurements. In addition, the accuracy of rain rate estimates is contingent on additional factors such as bright band contamination, vertical profile of reflectivity (VPR) correction, and partial beam blockages. This presentation aims to improve radar QPE for the Bay Area using advanced dual-polarization rainfall methodologies. The benefit brought by the dual-polarization upgrade of the operational radar network is assessed. In addition, a pilot study of gap-filling X-band radar performance is conducted in support of regional QPE system development. This paper also presents a detailed comparison of the dual-polarization radar-derived rainfall products with various operational products, including the NSSL's Multi-Radar/Multi-Sensor (MRMS) system. Quantitative evaluation of various rainfall products is achieved using rainfall measurements from a validation gauge network, which shows that new dual-polarization methods can produce better QPE, and the X-band radar has excellent potential to augment WSR-88D for rainfall monitoring in this region.
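
    As a rough illustration of the rainfall algorithms being compared, the sketch below blends a classical reflectivity-rain-rate relation with a specific-differential-phase estimator. The Marshall-Palmer Z-R form is standard; the R(Kdp) coefficients and the blending threshold are illustrative placeholders, since operational values depend on wavelength and drop-size-distribution assumptions.

```python
import numpy as np

def rain_rate_zr(z_dbz, a=200.0, b=1.6):
    """Invert the Z = a * R**b relation (Marshall-Palmer defaults); Z in dBZ."""
    z_linear = 10.0 ** (z_dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)          # rain rate in mm/h

def rain_rate_kdp(kdp, c=40.5, d=0.85):
    """Power-law R(Kdp) estimator; c and d are illustrative S-band values."""
    return np.sign(kdp) * c * np.abs(kdp) ** d  # rain rate in mm/h

def blended_rain_rate(z_dbz, kdp, kdp_floor=0.3):
    """Prefer R(Kdp) where Kdp is large enough to be reliable, else use R(Z)."""
    return np.where(kdp > kdp_floor, rain_rate_kdp(kdp), rain_rate_zr(z_dbz))

z_dbz = np.array([25.0, 40.0, 50.0])   # reflectivity per gate, dBZ
kdp = np.array([0.05, 0.8, 2.5])       # specific differential phase, deg/km
print(blended_rain_rate(z_dbz, kdp))   # gate-by-gate rain-rate estimates, mm/h
```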

  15. Application of the correlation constrained multivariate curve resolution alternating least-squares method for analyte quantitation in the presence of unexpected interferences using first-order instrumental data.

    PubMed

    Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà

    2010-03-01

    Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieve analyte quantitation in the presence of unexpected interferences. Both for simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates the achievement of the so-called second-order advantage for the analyte of interest, which is known to be present for more complex higher-order richer instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.

  16. Estimation of the Scatterer Distribution of the Cirrhotic Liver using Ultrasonic Image

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Tadashi; Hachiya, Hiroyuki

    1998-05-01

    In the B-mode image of the liver obtained by an ultrasonic imaging system, the speckled pattern changes with the progression of the disease such as liver cirrhosis. In this paper we present the statistical characteristics of the echo envelope of the liver, and the technique to extract information of the scatterer distribution from the normal and cirrhotic liver images using constant false alarm rate (CFAR) processing. We analyze the relationship between the extracted scatterer distribution and the stage of liver cirrhosis. The ratio of the area in which the amplitude of the processing signal is more than the threshold to the entire processed image area is related quantitatively to the stage of liver cirrhosis. It is found that the proposed technique is valid for the quantitative diagnosis of liver cirrhosis.
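
    The record does not specify the CFAR variant used; below is a minimal one-dimensional cell-averaging CFAR sketch of the kind commonly applied to envelope data, with hypothetical window sizes and scale factor.

```python
import numpy as np

def ca_cfar(signal, n_train=16, n_guard=4, scale=3.0):
    """Cell-averaging CFAR: flag cells exceeding scale * local background mean.

    The background for each cell under test is the mean of n_train training
    cells on each side, with n_guard guard cells excluded around the test cell.
    """
    n = len(signal)
    half = n_train + n_guard
    detections = np.zeros(n, dtype=bool)
    for i in range(half, n - half):
        lead = signal[i - half : i - n_guard]
        lag = signal[i + n_guard + 1 : i + half + 1]
        background = np.mean(np.concatenate([lead, lag]))
        detections[i] = signal[i] > scale * background
    return detections

rng = np.random.default_rng(0)
envelope = rng.rayleigh(1.0, 512)   # speckle-like echo envelope
envelope[200] += 12.0               # one strong scatterer
print(np.flatnonzero(ca_cfar(envelope)))   # index 200 should be flagged
```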

  17. Analysis of Membrane Lipids of Airborne Micro-Organisms

    NASA Technical Reports Server (NTRS)

    MacNaughton, Sarah

    2006-01-01

    A method of characterization of airborne micro-organisms in a given location involves (1) large-volume filtration of air onto glass-fiber filters; (2) accelerated extraction of membrane lipids of the collected micro-organisms by use of pressurized hot liquid; and (3) identification and quantitation of the lipids by use of gas chromatography and mass spectrometry. This method is suitable for use in both outdoor and indoor environments; for example, it can be used to measure airborne microbial contamination in buildings ("sick-building syndrome"). The classical approach to analysis of airborne micro-organisms is based on the growth of culturable micro-organisms and does not provide an account of viable but nonculturable micro-organisms, which typically amount to more than 90 percent of the micro-organisms present. In contrast, the present method provides an account of all micro-organisms, including culturable, nonculturable, aerobic, and anaerobic ones. The analysis of lipids according to this method makes it possible to estimate the number of viable airborne micro-organisms present in the sampled air and to obtain a quantitative profile of the general types of micro-organisms present along with some information about their physiological statuses.

  18. Quantified Energy Dissipation Rates in the Terrestrial Bow Shock: 1. Analysis Techniques and Methodology

    NASA Technical Reports Server (NTRS)

    Wilson, L. B., III; Sibeck, D. G.; Breneman, A.W.; Le Contel, O.; Cully, C.; Turner, D. L.; Angelopoulos, V.; Malaspina, D. M.

    2014-01-01

    We present a detailed outline and discussion of the analysis techniques used to compare the relevance of different energy dissipation mechanisms at collisionless shock waves. We show that the low-frequency, quasi-static fields contribute less to ohmic energy dissipation, -j · E (minus the current density dotted with the measured electric field), than their high-frequency counterparts. In fact, we found that high-frequency, large-amplitude (greater than 100 millivolts per meter and/or greater than 1 nanotesla) waves are ubiquitous in the transition region of collisionless shocks. We quantitatively show that their fields, through wave-particle interactions, cause enough energy dissipation to regulate the global structure of collisionless shocks. The purpose of this paper, part one of two, is to outline and describe in detail the background, analysis techniques, and theoretical motivation for our new results presented in the companion paper. The companion paper presents the results of our quantitative energy dissipation rate estimates and discusses the implications. Together, the two manuscripts present the first study quantifying the contribution that high-frequency waves provide, through wave-particle interactions, to the total energy dissipation budget of collisionless shock waves.

  19. Quantitative Tomography for Continuous Variable Quantum Systems

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    2018-03-01

    We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two-dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.
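
    For readers unfamiliar with them, the Padua points admit a simple explicit description; the sketch below generates one of the standard families for degree n (the even-sum Chebyshev-Lobatto construction), whose cardinality matches the dimension of bivariate polynomials of total degree at most n.

```python
import numpy as np

def padua_points(n):
    """One family of Padua points of degree n on the square [-1, 1]^2.

    Pairs (cos(j*pi/n), cos(k*pi/(n+1))) with 0 <= j <= n, 0 <= k <= n+1 and
    j + k even; there are (n+1)(n+2)/2 of them.
    """
    return np.array([
        (np.cos(j * np.pi / n), np.cos(k * np.pi / (n + 1)))
        for j in range(n + 1)
        for k in range(n + 2)
        if (j + k) % 2 == 0
    ])

pts = padua_points(6)
assert len(pts) == (6 + 1) * (6 + 2) // 2   # 28 interpolation nodes for degree 6
print(pts[:3])
```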

  20. Future climate scenarios and rainfall-runoff modelling in the Upper Gallego catchment (Spain).

    PubMed

    Bürger, C M; Kolditz, O; Fowler, H J; Blenkinsop, S

    2007-08-01

    Global climate change may have large impacts on water supplies, drought or flood frequencies and magnitudes in local and regional hydrologic systems. Water authorities therefore rely on computer models for quantitative impact prediction. In this study we present kernel-based learning machine river flow models for the Upper Gallego catchment of the Ebro basin. Different learning machines were calibrated using daily gauge data. The models posed two major challenges: (1) estimation of the rainfall-runoff transfer function from the available time series is complicated by anthropogenic regulation and mountainous terrain and (2) the river flow model is weak when only climate data are used, but additional antecedent flow data seemed to lead to delayed peak flow estimation. These types of models, together with the presented downscaled climate scenarios, can be used for climate change impact assessment in the Gallego, which is important for the future management of the system.

  1. Assessing the risk posed by natural hazards to infrastructures

    NASA Astrophysics Data System (ADS)

    Eidsvig, Unni; Kristensen, Krister; Vidar Vangelsten, Bjørn

    2015-04-01

    The modern society is increasingly dependent on infrastructures to maintain its function, and disruption in one of the infrastructure systems may have severe consequences. The Norwegian municipalities have, according to legislation, a duty to carry out a risk and vulnerability analysis and plan and prepare for emergencies in a short- and long-term perspective. Vulnerability analysis of the infrastructures and their interdependencies is an important part of this analysis. This paper proposes a model for assessing the risk posed by natural hazards to infrastructures. The model prescribes a three level analysis with increasing level of detail, moving from qualitative to quantitative analysis. This paper focuses on the second level, which consists of a semi-quantitative analysis. The purpose of this analysis is to perform a screening of the scenarios of natural hazards threatening the infrastructures identified in the level 1 analysis and investigate the need for further analyses, i.e. level 3 quantitative analyses. The proposed level 2 analysis considers the frequency of the natural hazard, different aspects of vulnerability including the physical vulnerability of the infrastructure itself and the societal dependency on the infrastructure. An indicator-based approach is applied, ranking the indicators on a relative scale. The proposed indicators characterize the robustness of the infrastructure, the importance of the infrastructure as well as interdependencies between society and infrastructure affecting the potential for cascading effects. Each indicator is ranked on a 1-5 scale based on pre-defined ranking criteria. The aggregated risk estimate is a combination of the semi-quantitative vulnerability indicators, as well as quantitative estimates of the frequency of the natural hazard and the number of users of the infrastructure. Case studies for two Norwegian municipalities are presented, where the risk to primary roads, water supply and power networks from storms and landslides is assessed. The application examples show that the proposed model provides a useful tool for screening of undesirable events, with the ultimate goal of reducing societal vulnerability.

  2. Direct Estimation of Optical Parameters From Photoacoustic Time Series in Quantitative Photoacoustic Tomography.

    PubMed

    Pulkkinen, Aki; Cox, Ben T; Arridge, Simon R; Goh, Hwan; Kaipio, Jari P; Tarvainen, Tanja

    2016-11-01

    Estimation of optical absorption and scattering of a target is an inverse problem associated with quantitative photoacoustic tomography. Conventionally, the problem is posed in two stages. First, images of the initial pressure distribution created by absorption of a light pulse are formed based on acoustic boundary measurements. Then, the optical properties are determined based on these photoacoustic images. The optical stage of the inverse problem can thus suffer from, for example, artefacts caused by the acoustic stage. These could be caused by imperfections in the acoustic measurement setting, of which an example is a limited-view acoustic measurement geometry. In this work, the forward model of quantitative photoacoustic tomography is treated as a coupled acoustic and optical model and the inverse problem is solved by using a Bayesian approach. The spatial distribution of the optical properties of the imaged target is estimated directly from the photoacoustic time series in varying acoustic detection and optical illumination configurations. It is numerically demonstrated that estimation of the optical properties of the imaged target is feasible in a limited-view acoustic detection setting.

  3. Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.

    PubMed

    Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia

    2017-06-01

    Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best linear unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates in the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error in using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data with different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies indicated that: (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra, and methane/toluene gas mixture spectra, measured using FT-IR spectrometry and CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of the SWLS has been presented to tackle the bias error from other components. The SWLS without modification presents the lowest SEP in all cases but not the lowest bias and RSS. The modification of SWLS reduced the bias and showed a lower RSS than CLS, especially for small components.
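
    To make the CLS/WLS contrast concrete, the following is a minimal sketch of both estimators for a linear mixture model y = Xc + e with heteroscedastic noise; the synthetic spectra and noise model are illustrative, not the paper's data.

```python
import numpy as np

def cls_fit(X, y):
    """Classical (ordinary) least squares: minimizes ||y - Xc||^2."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def wls_fit(X, y, sigma):
    """Weighted least squares: c = (X^T W X)^-1 X^T W y, W = diag(1/sigma^2)."""
    Xw = X / sigma[:, None] ** 2          # rows scaled by their weights
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

rng = np.random.default_rng(1)
nu = np.linspace(0.0, 1.0, 200)           # normalized wavenumber axis
X = np.column_stack([np.exp(-((nu - m) / 0.05) ** 2) for m in (0.3, 0.6)])
c_true = np.array([1.0, 0.5])             # component concentrations
sigma = 0.01 + 0.05 * nu                  # heteroscedastic noise level
y = X @ c_true + rng.normal(0.0, sigma)
print("CLS:", cls_fit(X, y), " WLS:", wls_fit(X, y, sigma))
```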

  4. Permeability estimations and frictional flow features passing through porous media comprised of structured microbeads

    NASA Astrophysics Data System (ADS)

    Shin, C.

    2017-12-01

    Permeability estimation has been extensively researched in diverse fields; however, methods that suitably consider varying geometries and changes within the flow region, for example, hydraulic fractures closing over several years, are yet to be developed. Therefore, in the present study a new permeability estimation method is presented based on the generalized Darcy friction-flow relation, in particular by examining frictional flow parameters and the characteristics of their variations. For this examination, computational fluid dynamics (CFD) simulations of simple hydraulic fractures filled with five layers of structured microbeads and accompanied by geometry changes and flow transitions are performed. It was confirmed that the main structures and shapes of each flow path are preserved, even under geometry variations within the porous media. However, the scarcity and discontinuity of streamlines increase dramatically in the transient- and turbulent-flow regions. Quantitative and analytic examinations of the frictional flow features were also performed. Accordingly, the modified frictional flow parameters were successfully presented as similarity parameters of porous flows. In conclusion, the generalized Darcy friction-flow relation and the friction equivalent permeability (FEP) equation were both modified using the similarity parameters. For verification, the FEP values of the other aperture models were estimated and confirmed to agree well with the original permeability values. Ultimately, the proposed and verified method is expected to efficiently estimate permeability variations in porous media with changing geometric factors and flow regions, including such instances as hydraulic fracture closings.
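
    For reference, the elementary one-dimensional Darcy inversion that the friction-equivalent-permeability approach generalizes is sketched below; the numerical values are hypothetical.

```python
def darcy_permeability(q, mu, length, area, dp):
    """Invert 1-D Darcy's law, q = k * A * dp / (mu * L), for permeability k [m^2]."""
    return q * mu * length / (area * dp)

# Hypothetical laminar water flow through a bead-packed fracture segment
k = darcy_permeability(
    q=2.0e-9,     # volumetric flow rate, m^3/s
    mu=1.0e-3,    # dynamic viscosity of water, Pa*s
    length=0.05,  # sample length along the flow, m
    area=1.0e-4,  # cross-sectional area, m^2
    dp=5.0e3,     # pressure drop across the sample, Pa
)
print(f"k ~ {k:.1e} m^2")   # 2.0e-13 m^2, roughly 0.2 darcy
```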

  5. Regional and longitudinal estimation of product lifespan distribution: a case study for automobiles and a simplified estimation method.

    PubMed

    Oguchi, Masahiro; Fuse, Masaaki

    2015-02-03

    Product lifespan estimates are important information for understanding progress toward sustainable consumption and estimating the stocks and end-of-life flows of products. Published studies have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents a regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions of average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
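
    The simplification described, holding the shape parameter of the lifespan distribution constant, has a convenient closed form when lifespans are modelled as Weibull-distributed (a common choice in lifespan studies): with shape k fixed, the maximum-likelihood scale is (mean of x_i^k)^(1/k). A minimal sketch follows; the shape value and synthetic data are hypothetical, and the paper's own estimation works from age profiles rather than observed retirement ages.

```python
import math
import numpy as np

def weibull_scale_mle(lifespans, shape):
    """MLE of the Weibull scale when the shape parameter is held fixed."""
    x = np.asarray(lifespans, dtype=float)
    return float(np.mean(x ** shape) ** (1.0 / shape))

def weibull_mean(scale, shape):
    """Mean lifespan of Weibull(shape, scale): scale * Gamma(1 + 1/shape)."""
    return scale * math.gamma(1.0 + 1.0 / shape)

rng = np.random.default_rng(2)
ages = 15.0 * rng.weibull(2.5, size=1000)   # synthetic retirement ages, years
lam = weibull_scale_mle(ages, shape=2.5)    # shape fixed across countries/years
print(f"scale ~ {lam:.1f} y, mean lifespan ~ {weibull_mean(lam, 2.5):.1f} y")
```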

  6. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors—Air Gap Effect

    PubMed Central

    Bore, Thierry; Wagner, Norman; Delepine Lesoille, Sylvie; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-01-01

    Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, knowledge of the impact of the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content on the frequency or time domain sensor response is required. For this purpose, dielectric properties of intact clay-rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response in terms of the reflection factor was analyzed in time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset-method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to reach a perfect probe-to-clay-rock coupling. PMID:27096865

  7. Performance of the Bio-Rad Geenius HIV1/2 Supplemental Assay in Detecting "Recent" HIV Infection and Calculating Population Incidence.

    PubMed

    Keating, Sheila M; Kassanjee, Reshma; Lebedeva, Mila; Facente, Shelley N; MacArthur, Jeffrey C; Grebe, Eduard; Murphy, Gary; Welte, Alex; Martin, Jeffrey N; Little, Susan; Price, Matthew A; Kallas, Esper G; Busch, Michael P; Pilcher, Christopher D

    2016-12-15

    HIV seroconversion biomarkers are being used in cross-sectional studies for HIV incidence estimation. The Bio-Rad Geenius HIV-1/2 Supplemental Assay is an immunochromatographic single-use assay that measures antibodies (Ab) against multiple HIV-1/2 antigens. The objective of this study was to determine whether the Geenius assay could additionally be used for recency estimation. This assay was developed for HIV-1/2 confirmation; however, the quantitative data acquired give information on the increasing concentration and diversity of antibody responses over time during seroconversion. A quantitative threshold was proposed to classify HIV infections as "recent" or "nonrecent"; performance using this cutoff was evaluated. We tested 2500 highly characterized specimens from research subjects in the United States, Brazil, and Africa with well-defined durations of HIV infection. Regression and frequency estimation were used to estimate assay properties relevant to HIV incidence measurement: mean duration of recent infection (MDRI), false-recent rate, and assay reproducibility and robustness. Using the manufacturer's proposed cutoff index of 1.5 to identify "recent" infection, the assay has an estimated false-recent rate of 4.1% (95% CI: 2.2 to 7.0) and MDRI of 179 days (155 to 201) in specimens from treatment-naive subjects, presenting performance challenges similar to other incidence assays. Lower index cutoffs associated with lower MDRI gave a lower rate of false-recent results. These data suggest that with additional interpretive analysis of the band intensities using an algorithm and cutoff, the Geenius HIV-1/2 Supplemental Assay can be used to identify recent HIV infection in addition to confirming the presence of HIV-1 and HIV-2 antibodies.

  8. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors--Air Gap Effect.

    PubMed

    Bore, Thierry; Wagner, Norman; Lesoille, Sylvie Delepine; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-04-18

    Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, knowledge of the impact of the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content on the frequency or time domain sensor response is required. For this purpose, dielectric properties of intact clay-rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response in terms of the reflection factor was analyzed in time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset-method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to reach a perfect probe-to-clay-rock coupling.

  9. Quantitative Story Telling: Initial steps towards bridging perspectives and tools for a robust nexus assessment

    NASA Astrophysics Data System (ADS)

    Cabello, Violeta

    2017-04-01

    This communication will present the advancement of an innovative analytical framework for the analysis of the Water-Energy-Food-Climate Nexus, termed Quantitative Story Telling (QST). The methodology is currently under development within the H2020 project MAGIC - Moving Towards Adaptive Governance in Complexity: Informing Nexus Security (www.magic-nexus.eu). The key innovation of QST is that it bridges qualitative and quantitative analytical tools into an iterative research process in which each step is built and validated in interaction with stakeholders. The qualitative analysis focusses on the identification of the narratives behind the development of relevant WEFC-Nexus policies and innovations. The quantitative engine is the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM), a resource accounting toolkit capable of integrating multiple analytical dimensions at different scales through relational analysis. Although QST may be labelled a story-driven rather than a data-driven approach, I will argue that improving models per se may not lead to an improved understanding of WEF-Nexus problems unless we are capable of generating more robust narratives to frame them. The communication will cover an introduction to the MAGIC project, the basic concepts of QST and a case study focussed on agricultural production in a semi-arid region in Southern Spain. Data requirements for this case study and the limitations in finding, accessing or estimating them will be presented alongside a reflection on the relation between analytical scales and data availability.

  10. Quantitative evaluation of dual-flip-angle T1 mapping on DCE-MRI kinetic parameter estimation in head and neck

    PubMed Central

    Chow, Steven Kwok Keung; Yeung, David Ka Wai; Ahuja, Anil T; King, Ann D

    2012-01-01

    Purpose To quantitatively evaluate the kinetic parameter estimation for head and neck (HN) dynamic contrast-enhanced (DCE) MRI with dual-flip-angle (DFA) T1 mapping. Materials and methods Clinical DCE-MRI datasets of 23 patients with HN tumors were included in this study. T1 maps were generated based on the multiple-flip-angle (MFA) method and different DFA combinations. Tofts model parameter maps of kep, Ktrans and vp based on MFA and DFAs were calculated and compared. Fitted parameters from MFA and DFAs were quantitatively evaluated in primary tumor, salivary gland and muscle. Results T1 mapping deviations from DFAs produced substantial deviations in kinetic parameter estimates in head and neck tissues. In particular, the DFA of [2º, 7º] overestimated, while [7º, 12º] and [7º, 15º] underestimated Ktrans and vp, significantly (P<0.01). [2º, 15º] achieved the smallest but still statistically significant overestimation for Ktrans and vp in primary tumors, 32.1% and 16.2% respectively. kep fitting results by DFAs were relatively close to the MFA reference compared to Ktrans and vp. Conclusions T1 deviations induced by DFA could result in significant errors in kinetic parameter estimation, particularly Ktrans and vp, through Tofts model fitting. The MFA method should be more reliable and robust for accurate quantitative pharmacokinetic analysis in head and neck. PMID:23289084
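
    For background, dual-flip-angle T1 mapping conventionally linearizes the spoiled gradient-echo signal equation and reads E1 = exp(-TR/T1) off the slope; a minimal sketch of that standard (DESPOT1-style) estimator follows, using an example TR and the [2º, 15º] pair noted above. This is not the study's code.

```python
import numpy as np

def spgr_signal(m0, t1, tr, alpha):
    """Spoiled gradient-echo signal: S = M0 sin(a) (1 - E1) / (1 - E1 cos(a))."""
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(alpha) * (1.0 - e1) / (1.0 - e1 * np.cos(alpha))

def dfa_t1(s1, s2, a1, a2, tr):
    """Two-point T1: the slope of S/sin(a) versus S/tan(a) equals E1."""
    y1, y2 = s1 / np.sin(a1), s2 / np.sin(a2)
    x1, x2 = s1 / np.tan(a1), s2 / np.tan(a2)
    e1 = (y2 - y1) / (x2 - x1)
    return -tr / np.log(e1)

tr = 5.0                                      # repetition time, ms (example)
a1, a2 = np.deg2rad(2.0), np.deg2rad(15.0)    # flip angle pair, radians
s1 = spgr_signal(1.0, t1=1000.0, tr=tr, alpha=a1)
s2 = spgr_signal(1.0, t1=1000.0, tr=tr, alpha=a2)
print(f"recovered T1 ~ {dfa_t1(s1, s2, a1, a2, tr):.0f} ms")   # -> 1000 ms
```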

  11. A Calibration-Free Laser-Induced Breakdown Spectroscopy (CF-LIBS) Quantitative Analysis Method Based on the Auto-Selection of an Internal Reference Line and Optimized Estimation of Plasma Temperature.

    PubMed

    Yang, Jianhong; Li, Xiaomeng; Xu, Jinwu; Ma, Xianghong

    2018-01-01

    The quantitative analysis accuracy of calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is severely affected by the self-absorption effect and estimation of plasma temperature. Herein, a CF-LIBS quantitative analysis method based on the auto-selection of internal reference line and the optimized estimation of plasma temperature is proposed. The internal reference line of each species is automatically selected from analytical lines by a programmable procedure through easily accessible parameters. Furthermore, the self-absorption effect of the internal reference line is considered during the correction procedure. To improve the analysis accuracy of CF-LIBS, the particle swarm optimization (PSO) algorithm is introduced to estimate the plasma temperature based on the calculation results from the Boltzmann plot. Thereafter, the species concentrations of a sample can be calculated according to the classical CF-LIBS method. A total of 15 certified alloy steel standard samples of known compositions and elemental weight percentages were used in the experiment. Using the proposed method, the average relative errors of Cr, Ni, and Fe calculated concentrations were 4.40%, 6.81%, and 2.29%, respectively. The quantitative results demonstrated an improvement compared with the classical CF-LIBS method and the promising potential of in situ and real-time application.
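
    For reference, the Boltzmann plot underlying the temperature estimate uses the standard CF-LIBS line relation, in which each measured line of species s, with wavelength λ, transition probability A, upper-level degeneracy g, and upper-level energy E, contributes one point:

```latex
\ln\!\left(\frac{I\,\lambda}{g\,A}\right) = -\frac{E}{k_B T} + \ln\!\left(\frac{C_s\,F}{U_s(T)}\right),
```

    so the slope of the left-hand side plotted against E yields the plasma temperature (here refined by the PSO step), while the intercept carries the species concentration C_s through the experimental factor F and the partition function U_s(T).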

  12. Information-Driven Active Audio-Visual Source Localization

    PubMed Central

    Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph

    2015-01-01

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619

  13. Long-Term Changes in Lower Tropospheric Baseline Ozone Concentrations: Comparing Chemistry-Climate Models and Observations at Northern Mid-Latitudes

    NASA Technical Reports Server (NTRS)

    Parrish, D. D.; Lamarque, J.-F.; Naik, V.; Horowitz, L.; Shindell, D. T.; Staehelin, J.; Derwent, R.; Cooper, O. R.; Tanimoto, H.; Volz-Thomas, A.; et al.

    2014-01-01

    Two recent papers have quantified long-term ozone (O3) changes observed at northern midlatitude sites that are believed to represent baseline (here understood as representative of continental to hemispheric scales) conditions. Three chemistry-climate models (NCAR CAM-chem, GFDL-CM3, and GISS-E2-R) have calculated retrospective tropospheric O3 concentrations as part of the Atmospheric Chemistry and Climate Model Intercomparison Project and Coupled Model Intercomparison Project Phase 5 model intercomparisons. We present an approach for quantitative comparisons of model results with measurements for seasonally averaged O3 concentrations. There is considerable qualitative agreement between the measurements and the models, but there are also substantial and consistent quantitative disagreements. Most notably, models (1) overestimate absolute O3 mixing ratios, on average by approximately 5 to 17 ppbv in the year 2000, (2) capture only approximately 50% of O3 changes observed over the past five to six decades, and little of the observed seasonal differences, and (3) capture approximately 25 to 45% of the rate of change of the long-term changes. These disagreements are significant enough to indicate that only limited confidence can be placed on estimates of present-day radiative forcing of tropospheric O3 derived from modeled historic concentration changes and on predicted future O3 concentrations. Evidently our understanding of tropospheric O3, or the incorporation of chemistry and transport processes into current chemical climate models, is incomplete. Modeled O3 trends approximately parallel estimated trends in anthropogenic emissions of NOx, an important O3 precursor, while measured O3 changes increase more rapidly than these emission estimates.

  14. Towards a Middle Pleistocene terrestrial climate reconstruction based on herpetofaunal assemblages from the Iberian Peninsula: State of the art and perspectives

    NASA Astrophysics Data System (ADS)

    Blain, Hugues-Alexandre; Cruz Silva, José Alberto; Jiménez Arenas, Juan Manuel; Margari, Vasiliki; Roucoux, Katherine

    2018-07-01

    The pattern of the varying climatic conditions in southern Europe over the last million years is well known from isotope studies on deep-ocean sediment cores and the long pollen records that have been produced for lacustrine and marine sedimentary sequences from Greece, Italy and the Iberian margin. However, although relative glacial and interglacial intensities are well studied, there are still few proxies that permit quantitative terrestrial temperature and precipitation reconstruction. In this context, fauna-based climate reconstructions based on evidence preserved in archaeological or palaeontological sites are of great interest, even if they only document short windows of that climate variability, because (a) they provide a range of temperature and precipitation estimates that are understandable in comparison with present climate; (b) they may allow the testing of predicted temperature changes under scenarios of future climate change; and (c) quantitative temperature and precipitation estimates for past glacials and interglacials for specific regions/latitudes can help to understand their effects on flora, fauna and hominids, as they are directly associated with those cultural and/or biological events. Moreover, such reconstructions can bring further arguments to the discussion about important climatic events like the Mid-Brunhes Event, a climatic transition between interglacials of moderate warmth and those of greater warmth. In this paper we review a decade of amphibian- and reptile-based climate reconstructions carried out for the Iberian Peninsula using the Mutual Ecogeographic Range method in order to present a regional synthesis from MIS 22 to MIS 6, discuss the climate pattern in relation to the Mid-Brunhes Event and the thermal amplitude suggested by these estimates, and finally identify the chronological gaps that still have to be investigated.

  15. Dynamic soft tissue deformation estimation based on energy analysis

    NASA Astrophysics Data System (ADS)

    Gao, Dedong; Lei, Yong; Yao, Bin

    2016-10-01

    Needle placement accuracy of millimeters is required in many needle-based surgeries. Tissue deformation, especially that occurring on the surface of organ tissue, affects the needle-targeting accuracy of both manual and robotic needle insertions. It is necessary to understand the mechanism of tissue deformation during needle insertion into soft tissue. In this paper, soft tissue surface deformation is investigated on the basis of continuum mechanics, where a geometry model is presented to quantitatively approximate the volume of tissue deformation. An energy-based method is applied to the dynamic process of needle insertion into soft tissue, based on continuum mechanics, and the volume of a cone is used to quantitatively approximate the deformation on the surface of the soft tissue. The external work is converted into potential, kinetic, dissipated, and strain energies during the dynamic rigid needle-tissue interactive process. The needle insertion experimental setup, consisting of a linear actuator, force sensor, needle, tissue container, and a light, is constructed, while an image-based method for measuring the depth and radius of the soft tissue surface deformations is introduced to obtain the experimental data. The relationship between the changed volume of tissue deformation and the insertion parameters is created based on the law of conservation of energy, with the volume of tissue deformation having been obtained using image-based measurements. The experiments are performed on phantom specimens, and an energy-based analytical fitted model is presented to estimate the volume of tissue deformation. The experimental results show that the energy-based analytical fitted model can predict the volume of soft tissue deformation, and the root mean squared errors between the fitted model and experimental data are 0.61 and 0.25 at insertion velocities of 2.50 mm/s and 5.00 mm/s, respectively. The estimated parameters of the soft tissue surface deformations are shown to be useful for compensating the needle-targeting error in the rigid needle insertion procedure, especially for percutaneous needle insertion into organs.

  16. A quantitative approach to combine sources in stable isotope mixing models

    EPA Science Inventory

    Stable isotope mixing models, used to estimate source contributions to a mixture, typically yield highly uncertain estimates when there are many sources and relatively few isotope elements. Previously, ecologists have either accepted the uncertain contribution estimates for indiv...

  17. The ultraviolet environment of Mars: biological implications past, present, and future.

    PubMed

    Cockell, C S; Catling, D C; Davis, W L; Snook, K; Kepner, R L; Lee, P; McKay, C P

    2000-08-01

    A radiative transfer model is used to quantitatively investigate aspects of the martian ultraviolet radiation environment, past and present. Biological action spectra for DNA inactivation and chloroplast (photosystem) inhibition are used to estimate biologically effective irradiances for the martian surface under cloudless skies. Over time Mars has probably experienced an increasingly inhospitable photobiological environment, with present instantaneous DNA weighted irradiances 3.5-fold higher than they may have been on early Mars. This is in contrast to the surface of Earth, which experienced an ozone amelioration of the photobiological environment during the Proterozoic and now has DNA weighted irradiances almost three orders of magnitude lower than early Earth. Although the present-day martian UV flux is similar to that of early Earth and thus may not be a critical limitation to life in the evolutionary context, it is a constraint to an unadapted biota and will rapidly kill spacecraft-borne microbes not covered by a martian dust layer. Microbial strategies for protection against UV radiation are considered in the light of martian photobiological calculations, past and present. Data are also presented for the effects of hypothetical planetary atmospheric manipulations on the martian UV radiation environment with estimates of the biological consequences of such manipulations.

  18. The ultraviolet environment of Mars: biological implications past, present, and future

    NASA Technical Reports Server (NTRS)

    Cockell, C. S.; Catling, D. C.; Davis, W. L.; Snook, K.; Kepner, R. L.; Lee, P.; McKay, C. P.

    2000-01-01

    A radiative transfer model is used to quantitatively investigate aspects of the martian ultraviolet radiation environment, past and present. Biological action spectra for DNA inactivation and chloroplast (photosystem) inhibition are used to estimate biologically effective irradiances for the martian surface under cloudless skies. Over time Mars has probably experienced an increasingly inhospitable photobiological environment, with present instantaneous DNA weighted irradiances 3.5-fold higher than they may have been on early Mars. This is in contrast to the surface of Earth, which experienced an ozone amelioration of the photobiological environment during the Proterozoic and now has DNA weighted irradiances almost three orders of magnitude lower than early Earth. Although the present-day martian UV flux is similar to that of early Earth and thus may not be a critical limitation to life in the evolutionary context, it is a constraint to an unadapted biota and will rapidly kill spacecraft-borne microbes not covered by a martian dust layer. Microbial strategies for protection against UV radiation are considered in the light of martian photobiological calculations, past and present. Data are also presented for the effects of hypothetical planetary atmospheric manipulations on the martian UV radiation environment with estimates of the biological consequences of such manipulations.

  19. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006.

    PubMed

    Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina

    2016-09-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76-0.92, P < .05). Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. © RSNA, 2016. Online supplemental material is available for this article.

  20. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006

    PubMed Central

    Chen, Lin; Ray, Shonket; Keller, Brad M.; Pertuz, Said; McDonald, Elizabeth S.; Conant, Emily F.

    2016-01-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88–0.95; weighted κ = 0.83–0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76–0.92, P < .05). Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. © RSNA, 2016. Online supplemental material is available for this article. PMID:27002418

  1. Quantitative method of medication system interface evaluation.

    PubMed

    Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F

    2007-01-01

    The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of estimated failure rates provided quantitative data for fault analysis. The authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.

  2. Effects of radiobiological uncertainty on vehicle and habitat shield design for missions to the moon and Mars

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Nealy, John E.; Schimmerling, Walter; Cucinotta, Francis A.; Wood, James S.

    1993-01-01

    Some consequences of uncertainties in radiobiological risk due to galactic cosmic ray (GCR) exposure are analyzed for their effect on engineering designs for the first lunar outpost and a mission to explore Mars. This report presents the plausible effect of biological uncertainties, the design changes necessary to reduce the uncertainties to acceptable levels for a safe mission, and an evaluation of the mission redesign cost. Estimates of the amount of shield mass required to compensate for radiobiological uncertainty are given for a simplified vehicle and habitat. The additional amount of shield mass required to provide a safety factor for uncertainty compensation is calculated from the expected response to GCR exposure. The amount of shield mass greatly increases over the estimated range of biological uncertainty, thus escalating the estimated cost of the mission. The estimates are used as a quantitative example for the cost-effectiveness of research in radiation biophysics and radiation physics.

  3. Salmonella risk to consumers via pork is related to the Salmonella prevalence in pig feed.

    PubMed

    Rönnqvist, M; Välttilä, V; Ranta, J; Tuominen, P

    2018-05-01

    Pigs are an important source of human infections with Salmonella, one of the most common causes of sporadic gastrointestinal infections and foodborne outbreaks in the European region. Feed has been estimated to be a significant source of Salmonella in piggeries in countries with a low Salmonella prevalence. To estimate Salmonella risk to consumers via the pork production chain, including feed production, a quantitative risk assessment model was constructed. The Salmonella prevalence in feeds and in animals was estimated to be generally low in Finland, but the relative importance of feed as a source of Salmonella in pigs was estimated as potentially high. Discontinuation of the present strict Salmonella control could increase the risk of Salmonella in slaughter pigs and consequent infections in consumers. The increased use of low risk and controlled feed ingredients could result in a consistently lower residual contamination in pigs and help the tracing and control of the sources of infections. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Modulation transfer function estimation of optical lens system by adaptive neuro-fuzzy methodology

    NASA Astrophysics Data System (ADS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Pavlović, Nenad T.; Anuar, Nor Badrul; Kiah, Miss Laiha Mat

    2014-07-01

    The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components. The MTF is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size, and is determined by the inherent optical properties of the system. In this study, an adaptive neuro-fuzzy inference system (ANFIS) estimator is designed and adapted to estimate the MTF value of the actual optical system. The neural network in ANFIS adjusts the membership-function parameters of the fuzzy inference system. The backpropagation learning algorithm is used to train this network. This intelligent estimator is implemented using Matlab/Simulink and its performance is investigated. The simulation results presented in this paper show the effectiveness of the developed method.

  5. Topics in Extrasolar Planet Characterization

    NASA Astrophysics Data System (ADS)

    Howe, Alex Ryan

    I present four papers exploring different topics in the area of characterizing the atmospheric and bulk properties of extrasolar planets. In these papers, I present two new codes, in various forms, for modeling these objects. A code to generate theoretical models of transit spectra of exoplanets is featured in the first paper and is refined and expanded into the APOLLO code for spectral modeling and parameter retrieval in the fourth paper. Another code to model the internal structure and evolution of planets is featured in the second and third papers. The first paper presents transit spectra models of GJ 1214b and other super-Earth and mini-Neptune type planets--planets with a "solid", terrestrial composition and relatively small planets with a thick hydrogen-helium atmosphere, respectively--and fits them to observational data to estimate the atmospheric compositions and cloud properties of these planets. The second paper presents structural models of super-Earth and mini-Neptune type planets and estimates their bulk compositions from mass and radius estimates. The third paper refines these models with evolutionary calculations of thermal contraction and ultraviolet-driven mass loss. Here, we estimate the boundaries of the parameter space in which planets lose their initial hydrogen-helium atmospheres completely, and we also present formation and evolution scenarios for the planets in the Kepler-11 system. The fourth paper uses more refined transit spectra models, this time for hot Jupiter type planets, to explore methods for designing optimal observing programs for the James Webb Space Telescope to quantitatively measure the atmospheric compositions and other properties of these planets.

  6. Quantitative Comparison of PET and Bremsstrahlung SPECT for Imaging the In Vivo Yttrium-90 Microsphere Distribution after Liver Radioembolization

    PubMed Central

    Elschot, Mattijs; Vermolen, Bart J.; Lam, Marnix G. E. H.; de Keizer, Bart; van den Bosch, Maurice A. A. J.; de Jong, Hugo W. A. M.

    2013-01-01

    Background After yttrium-90 (90Y) microsphere radioembolization (RE), evaluation of extrahepatic activity and liver dosimetry is typically performed on 90Y Bremsstrahlung SPECT images. Since these images demonstrate a low quantitative accuracy, 90Y PET has been suggested as an alternative. The aim of this study is to quantitatively compare SPECT and state-of-the-art PET on the ability to detect small accumulations of 90Y and on the accuracy of liver dosimetry. Methodology/Principal Findings SPECT/CT and PET/CT phantom data were acquired using several acquisition and reconstruction protocols, including resolution recovery and Time-Of-Flight (TOF) PET. Image contrast and noise were compared using a torso-shaped phantom containing six hot spheres of various sizes. The ability to detect extra- and intrahepatic accumulations of activity was tested by quantitative evaluation of the visibility and unique detectability of the phantom hot spheres. Image-based dose estimates of the phantom were compared to the true dose. For clinical illustration, the SPECT and PET-based estimated liver dose distributions of five RE patients were compared. At equal noise level, PET showed higher contrast recovery coefficients than SPECT. The highest contrast recovery coefficients were obtained with TOF PET reconstruction including resolution recovery. All six spheres were consistently visible on SPECT and PET images, but PET was able to uniquely detect smaller spheres than SPECT. TOF PET-based estimates of the dose in the phantom spheres were more accurate than SPECT-based dose estimates, with underestimations ranging from 45% (10-mm sphere) to 11% (37-mm sphere) for PET, and 75% to 58% for SPECT, respectively. The differences between TOF PET and SPECT dose-estimates were supported by the patient data. Conclusions/Significance In this study we quantitatively demonstrated that the image quality of state-of-the-art PET is superior to Bremsstrahlung SPECT for the assessment of the 90Y microsphere distribution after radioembolization. PMID:23405207
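
    For context, a contrast recovery coefficient (CRC) compares measured sphere-to-background contrast with the known phantom contrast. A minimal sketch with invented numbers, noting that exact CRC definitions vary between phantom studies:

    ```python
    # Contrast recovery coefficient (CRC) for a hot sphere in a warm background.
    # Values are illustrative; the paper's exact definition may differ.
    measured_sphere = 42.0   # kBq/mL, mean in the sphere ROI of the reconstructed image
    measured_bkg = 10.0      # kBq/mL, mean in the background ROI
    true_ratio = 8.0         # known sphere-to-background activity ratio in the phantom

    crc = (measured_sphere / measured_bkg - 1.0) / (true_ratio - 1.0)
    print(f"CRC = {crc:.2f}")   # 1.0 would mean fully recovered contrast
    ```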

  7. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    PubMed Central

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-01-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction (STIR) platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D vs. the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10–20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were observed for gPatlak vs. sPatlak. Finally, validation on clinical WB dynamic data demonstrated the clinical feasibility and superior Ki CNR performance for the proposed 4D framework compared to indirect Patlak and SUV imaging. PMID:27383991
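
    For readers unfamiliar with the sPatlak step: post-reconstruction standard Patlak analysis reduces to a late-time linear fit in which the slope of Ct(t)/Cp(t) against the normalized integral of the plasma input gives the influx rate Ki and the intercept gives the distribution volume V. A minimal sketch with synthetic curves; the plasma input and all parameter values are invented:

    ```python
    import numpy as np

    # Synthetic plasma input Cp(t) and irreversible-uptake tissue TAC Ct(t).
    t = np.linspace(1, 60, 30)                       # minutes
    Cp = 10 * np.exp(-0.1 * t) + 1.0                 # toy plasma input function
    Ki_true, V_true = 0.05, 0.4
    int_Cp = np.concatenate(([0], np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))))
    Ct = Ki_true * int_Cp + V_true * Cp              # Patlak model behaviour

    # Patlak plot: y = Ct/Cp vs. x = integral(Cp)/Cp becomes linear at late times.
    x, y = int_Cp / Cp, Ct / Cp
    late = t > 20                                    # fit only the linear portion
    Ki_est, V_est = np.polyfit(x[late], y[late], 1)
    print(f"Ki = {Ki_est:.4f}/min, V = {V_est:.3f}")
    ```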

  8. Modeling Cape- and Ridge-Associated Marine Sand Deposits; A Focus on the U.S. Atlantic Continental Shelf

    USGS Publications Warehouse

    Bliss, James D.; Williams, S. Jeffress; Bolm, Karen S.

    2009-01-01

    Cape- and ridge-associated marine sand deposits, which accumulate on storm-dominated continental shelves that are undergoing Holocene marine transgression, are particularly notable in a segment of the U.S. Atlantic Continental Shelf that extends southward from the east tip of Long Island, N.Y., and eastward from Cape May at the south end of the New Jersey shoreline. These sand deposits commonly contain sand suitable for shore protection in the form of beach nourishment. Increasing demand for marine sand raises questions about both short- and long-term potential supply and the sustainability of beach nourishment with the prospects of accelerating sea-level rise and increasing storm activity. To address these important issues, quantitative assessments of the volume of marine sand resources are needed. Currently, the U.S. Geological Survey is undertaking these assessments through its national Marine Aggregates and Resources Program (URL http://woodshole.er.usgs.gov/project-pages/aggregates/). In this chapter, we present a hypothetical example of a quantitative assessment of cape-and ridge-associated marine sand deposits in the study area, using proven tools of mineral-resource assessment. Applying these tools requires new models that summarize essential data on the quantity and quality of these deposits. Two representative types of model are descriptive models, which consist of a narrative that allows for a consistent recognition of cape-and ridge-associated marine sand deposits, and quantitative models, which consist of empirical statistical distributions that describe significant deposit characteristics, such as volume and grain-size distribution. Variables of the marine sand deposits considered for quantitative modeling in this study include area, thickness, mean grain size, grain sorting, volume, proportion of sand-dominated facies, and spatial density, of which spatial density is particularly helpful in estimating the number of undiscovered deposits within an assessment area. A Monte Carlo simulation that combines the volume of sand-dominated-facies models with estimates of the hypothetical probable number of undiscovered deposits provides a probabilistic approach to estimating marine sand resources within parts of the U.S. Atlantic Continental Shelf and other comparable marine shelves worldwide.
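
    A minimal sketch of the kind of Monte Carlo combination described above: draw the number of undiscovered deposits from one distribution and each deposit's volume from another, then summarize the total as percentiles. The distributions and parameters here are placeholders, not the USGS models.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_trials = 10_000

    # Placeholder inputs: deposit count ~ Poisson, per-deposit volume ~ lognormal.
    n_deposits = rng.poisson(lam=8, size=n_trials)
    totals = np.array([
        rng.lognormal(mean=np.log(20.0), sigma=0.8, size=n).sum()   # million m^3
        for n in n_deposits
    ])

    # Probabilistic resource estimate: P90 = volume exceeded in 90% of trials.
    p90, p50, p10 = np.percentile(totals, [10, 50, 90])
    print(f"Total volume (million m^3): P90={p90:.0f}, P50={p50:.0f}, P10={p10:.0f}")
    ```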

  9. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source software for tomographic image reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were observed for gPatlak versus sPatlak. Finally, validation on clinical WB dynamic data demonstrated the clinical feasibility and superior Ki CNR performance for the proposed 4D framework compared to indirect Patlak and SUV imaging.

  10. Models and metrics for software management and engineering

    NASA Technical Reports Server (NTRS)

    Basili, V. R.

    1988-01-01

    This paper attempts to characterize and present a state-of-the-art view of several quantitative models and metrics of the software life cycle. These models and metrics can be used to aid in managing and engineering software projects. They deal with various aspects of the software process and product, including resource allocation and estimation, changes and errors, size, complexity and reliability. Some indication is given of the extent to which the various models have been used and the success they have achieved.

  11. Electronic structure and microscopic model of CoNb2O6

    NASA Astrophysics Data System (ADS)

    Molla, Kaimujjaman; Rahaman, Badiur

    2018-05-01

    We present first-principles density functional calculations to determine the underlying spin model of CoNb2O6. The first-principles calculations identify the main superexchange interaction paths between Co spins in this compound. We discuss the nature of the exchange paths and provide quantitative estimates of magnetic exchange couplings. A microscopic modeling based on analysis of the electronic structure of this system places it in the interesting class of weakly coupled, geometrically frustrated isosceles triangular Ising antiferromagnets.

  12. Quantitative food intake in the EPIC-Germany cohorts. European Investigation into Cancer and Nutrition.

    PubMed

    Schulze, M B; Brandstetter, B R; Kroke, A; Wahrendorf, J; Boeing, H

    1999-01-01

    The EPIC-Heidelberg and the EPIC-Potsdam studies with about 53,000 study participants represent the German contribution to the EPIC (European Investigation into Cancer and Nutrition) cohort study. Within the EPIC study, standardized 24-hour dietary recalls were applied as a quantitative calibration method in order to estimate the amount of scaling bias introduced by the varying center-specific dietary assessment methods. This article presents intake of food items and food groups in the two German cohorts estimated by 24-hour quantitative dietary recalls. Recalls from 1,013 men and 1,078 women in Heidelberg and 1,032 men and 898 women in Potsdam were included in the analysis. The intake of recorded food items or recipe ingredients as well as fat used for cooking was summarized into 16 main food groups and a variety of different subgroups stratified by sex and weighted for the day of the week and age. In more than 90% of the recalls, consumption of dairy products, cereals and cereal products, bread, fat, and non-alcoholic beverages, particularly coffee/tea, was reported. Inter-cohort evaluations revealed that bread, potatoes, fruit and fat were consumed in higher amounts in the Potsdam cohort while the opposite was found for pasta/rice, non-alcoholic, and alcoholic beverages. It was concluded that the exposure variation was increased by having two instead of one EPIC study centers in Germany. Copyright 1999 S. Karger AG, Basel

  13. Quantitative morphology of the vascularisation of organs: A stereological approach illustrated using the cardiac circulation.

    PubMed

    Mühlfeld, Christian

    2014-01-01

    The vasculature of the heart is able to adapt to various physiological and pathological stimuli and its failure to do so is well-reflected by the great impact of ischaemic heart disease on personal morbidity and mortality and on the health care systems of industrial countries. Studies on physiological or genetic interventions as well as therapeutic angiogenesis rely on quantitative data to characterize the effects in a statistically robust way. The gold standard for obtaining quantitative morphological data is design-based stereology which allows the estimation of volume, surface area, length and number of blood vessels as well as their thickness, diameter or wall composition. Unfortunately, the use of stereological methods for this purpose is still rare. One of the reasons for this is the fact that the transfer of the theoretical foundations into laboratory practice requires a remarkable amount of considerations before touching the first piece of tissue. These considerations, however, are often based on already acquired experience and are usually not dealt with in stereological review articles. The present article therefore delineates the procedures for estimating the most important characteristics of the cardiac vasculature and highlights potential problems and their practical solutions. Worked examples are used to illustrate the methods and provide examples of the calculations. Hopefully, the considerations and examples contained herein will provide researchers in this field with the necessary equipment to add stereological methods to their study designs. Copyright © 2012 Elsevier GmbH. All rights reserved.

  14. High temporal resolution dynamic contrast-enhanced MRI using compressed sensing-combined sequence in quantitative renal perfusion measurement.

    PubMed

    Chen, Bin; Zhao, Kai; Li, Bo; Cai, Wenchao; Wang, Xiaoying; Zhang, Jue; Fang, Jing

    2015-10-01

    To demonstrate the feasibility of improved temporal resolution using a compressed sensing (CS)-combined imaging sequence in dynamic contrast-enhanced MRI (DCE-MRI) of the kidney, and to investigate its quantitative effects on renal perfusion measurements. Ten rabbits were included in the accelerated scans with a CS-combined 3D pulse sequence. To evaluate the image quality, the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were compared between the proposed CS strategy and the conventional full sampling method. Moreover, renal perfusion was estimated by using the separable compartmental model in both CS simulation and realistic CS acquisitions. The CS method showed DCE-MRI images with improved temporal resolution and acceptable image contrast, while presenting significantly higher SNR than the fully sampled images (p<.01) at 2-, 3- and 4-X acceleration. In quantitative measurements, renal perfusion results were in good agreement with the fully sampled results (concordance correlation coefficient=0.95, 0.91, 0.88) at 2-, 3- and 4-X acceleration in CS simulation. Moreover, in realistic acquisitions, the estimated perfusion by the separable compartmental model exhibited no significant differences (p>.05) between each CS-accelerated acquisition and the full sampling method. The CS-combined 3D sequence could improve the temporal resolution for DCE-MRI in the kidney while yielding diagnostically acceptable image quality, and it could provide effective measurements of renal perfusion. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Inter-rater reliability of motor unit number estimates and quantitative motor unit analysis in the tibialis anterior muscle.

    PubMed

    Boe, S G; Dalton, B H; Harwood, B; Doherty, T J; Rice, C L

    2009-05-01

    To establish the inter-rater reliability of decomposition-based quantitative electromyography (DQEMG) derived motor unit number estimates (MUNEs) and quantitative motor unit (MU) analysis. Using DQEMG, two examiners independently obtained a sample of needle and surface-detected motor unit potentials (MUPs) from the tibialis anterior muscle from 10 subjects. Coupled with a maximal M wave, surface-detected MUPs were used to derive a MUNE for each subject and each examiner. Additionally, size-related parameters of the individual MUs were obtained following quantitative MUP analysis. Test-retest MUNE values were similar with high reliability observed between examiners (ICC=0.87). Additionally, MUNE variability from test-retest as quantified by a 95% confidence interval was relatively low (+/-28 MUs). Lastly, quantitative data pertaining to MU size, complexity and firing rate were similar between examiners. MUNEs and quantitative MU data can be obtained with high reliability by two independent examiners using DQEMG. Establishing the inter-rater reliability of MUNEs and quantitative MU analysis using DQEMG is central to the clinical applicability of the technique. In addition to assessing response to treatments over time, multiple clinicians may be involved in the longitudinal assessment of the MU pool of individuals with disorders of the central or peripheral nervous system.
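
    The arithmetic underlying a surface-based MUNE is simple: divide the size of the maximal compound M wave by the mean size of the surface-detected MUPs. A minimal sketch with invented amplitudes (DQEMG itself applies considerably more signal processing than this):

    ```python
    import numpy as np

    # MUNE = maximal M-wave size / mean surface-detected MUP (S-MUP) size.
    # Amplitudes below are illustrative placeholders, in microvolts.
    m_wave_amplitude = 6200.0
    smup_amplitudes = np.array([95.0, 140.0, 60.0, 210.0, 120.0, 85.0, 175.0])

    mune = m_wave_amplitude / smup_amplitudes.mean()
    print(f"Mean S-MUP: {smup_amplitudes.mean():.1f} uV, MUNE ~ {mune:.0f} motor units")
    ```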

  16. Development of quantitative screen for 1550 chemicals with GC-MS.

    PubMed

    Bergmann, Alan J; Points, Gary L; Scott, Richard P; Wilson, Glenn; Anderson, Kim A

    2018-05-01

    With hundreds of thousands of chemicals in the environment, effective monitoring requires high-throughput analytical techniques. This paper presents a quantitative screening method for 1550 chemicals based on statistical modeling of responses with identification and integration performed using deconvolution reporting software. The method was evaluated with representative environmental samples. We tested biological extracts, low-density polyethylene, and silicone passive sampling devices spiked with known concentrations of 196 representative chemicals. A multiple linear regression (R² = 0.80) was developed with molecular weight, logP, polar surface area, and fractional ion abundance to predict chemical responses within a factor of 2.5. Linearity beyond the calibration had R² > 0.97 for three orders of magnitude. Median limits of quantitation were estimated to be 201 pg/μL (1.9× standard deviation). The number of detected chemicals and the accuracy of quantitation were similar for environmental samples and standard solutions. To our knowledge, this is the most precise method for the largest number of semi-volatile organic chemicals lacking authentic standards. Accessible instrumentation and software make this method cost effective in quantifying a large, customizable list of chemicals. When paired with silicone wristband passive samplers, this quantitative screen will be very useful for epidemiology where binning of concentrations is common. Graphical abstract: A multiple linear regression of chemical responses measured with GC-MS allowed quantitation of 1550 chemicals in samples such as silicone wristbands.
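
    A hedged sketch of the model form described above: an ordinary least-squares fit of instrument response on molecular weight, logP, polar surface area, and fractional ion abundance. The data and coefficients are synthetic; only the predictor set follows the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    X = np.column_stack([
        rng.uniform(100, 500, n),    # molecular weight
        rng.uniform(-1, 8, n),       # logP
        rng.uniform(0, 150, n),      # polar surface area
        rng.uniform(0, 1, n),        # fractional ion abundance
    ])
    # Toy response with noise; real responses would come from the GC-MS calibration.
    y = 0.002 * X[:, 0] + 0.3 * X[:, 1] - 0.01 * X[:, 2] + 2.0 * X[:, 3] + rng.normal(0, 0.5, n)

    # Ordinary least squares with an intercept column.
    A = np.column_stack([np.ones(n), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1 - np.sum((y - A @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"Intercept/coefficients: {np.round(coef, 4)}, R^2 = {r2:.2f}")
    ```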

  17. Correction for FDG PET dose extravasations: Monte Carlo validation and quantitative evaluation of patient studies.

    PubMed

    Silva-Rodríguez, Jesús; Aguiar, Pablo; Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor; Cortés, Julia; Garrido, Miguel; Pombar, Miguel; Ruibal, Alvaro

    2014-05-01

    Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) values for this effect in clinical routine. One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI based method. In addition, the 50 patients with higher extravasated doses were also evaluated using a threshold-based method. Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%) with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify this fraction of patients that might be corrected for paravenous injection effect. The authors propose the use of a manual ROI based method for estimating the effectively administered FDG dose and then correct SUV quantification in those patients fulfilling the proposed criterion.

  18. Correction for FDG PET dose extravasations: Monte Carlo validation and quantitative evaluation of patient studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva-Rodríguez, Jesús, E-mail: jesus.silva.rodriguez@sergas.es; Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es; Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela

    Purpose: Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) values for this effect in clinical routine. Methods: One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI based method. In addition, the 50 patients with higher extravasated doses were also evaluated using a threshold-based method. Results: Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%) with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify this fraction of patients that might be corrected for paravenous injection effect. Conclusions: The authors propose the use of a manual ROI based method for estimating the effectively administered FDG dose and then correct SUV quantification in those patients fulfilling the proposed criterion.

  19. Incorporating spatial context into statistical classification of multidimensional image data

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Tilton, J. C.; Swain, P. H.

    1981-01-01

    Compound decision theory is employed to develop a general statistical model for classifying image data using spatial context. The classification algorithm developed from this model exploits the tendency of certain ground-cover classes to occur more frequently in some spatial contexts than in others. A key input to this contextual classifier is a quantitative characterization of this tendency: the context function. Several methods for estimating the context function are explored, and two complementary methods are recommended. The contextual classifier is shown to produce substantial improvements in classification accuracy compared to the accuracy produced by a non-contextual uniform-priors maximum likelihood classifier when these methods of estimating the context function are used. An approximate algorithm, which cuts computational requirements by over one-half, is presented. The search for an optimal implementation is furthered by an exploration of the relative merits of using spectral classes or information classes for classification and/or context function estimation.
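
    One simple way to estimate a context function, sketched below under the assumption that a training label map is available: tabulate how often each ground-cover class occurs next to each neighbouring class and normalize the counts into conditional frequencies. The label map here is random noise, purely for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    label_map = rng.integers(0, 3, size=(50, 50))          # toy classified image, 3 classes

    counts = np.zeros((3, 3))
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:       # 4-neighbourhood offsets
        a = label_map[max(dr, 0):label_map.shape[0] + min(dr, 0),
                      max(dc, 0):label_map.shape[1] + min(dc, 0)]
        b = label_map[max(-dr, 0):label_map.shape[0] + min(-dr, 0),
                      max(-dc, 0):label_map.shape[1] + min(-dc, 0)]
        np.add.at(counts, (a.ravel(), b.ravel()), 1)        # co-occurrence tally

    context = counts / counts.sum(axis=1, keepdims=True)    # P(neighbour class | class)
    print(np.round(context, 2))
    ```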

  20. A quantitative investigation of the fracture pump-in/flowback test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plahn, S.V.; Nolte, K.G.; Thompson, L.G.

    1997-02-01

    Fracture-closure pressure is an important parameter for fracture treatment design and evaluation. The pump-in/flowback (PIFB) test is frequently used to estimate its magnitude. The test is attractive because bottomhole pressures (BHPs) during flowback develop a distinct and repeatable signature. This is in contrast to the pump-in/shut-in test, where strong indications of fracture closure are rarely seen. Various techniques are used to extract closure pressure from the flowback-pressure response. Unfortunately, these techniques give different estimates for closure pressure, and their theoretical bases are not well established. The authors present results that place the PIFB test on a firmer foundation. A numerical model is used to simulate the PIFB test and glean physical mechanisms contributing to the response. On the basis of their simulation results, they propose interpretation techniques that give better estimates of closure pressure than existing techniques.

  1. A quantitative investigation of the fracture pump-in/flowback test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plahn, S.V.; Nolte, K.G.; Miska, S.

    1995-12-31

    Fracture closure pressure is an important parameter for fracture treatment design and evaluation. The pump-in/flowback (PIFB) test is frequently used to estimate its magnitude. The test is attractive because bottomhole pressures during flowback develop a distinct and repeatable signature. This is in contrast to the pump-in/shut-in test where strong indications of fracture closure are rarely seen. Various techniques exist for extracting closure pressure from the flowback pressure response. Unfortunately, these procedures give different estimates for closure pressure and their theoretical bases are not well established. We present results that place the PIFB test on a more solid foundation. A numerical model is used to simulate the PIFB test and glean physical mechanisms contributing to the response. Based on our simulation results, we propose an interpretation procedure which gives better estimates for closure pressure than existing techniques.

  2. Estimation of Transformation Temperatures in Ti-Ni-Pd Shape Memory Alloys

    NASA Astrophysics Data System (ADS)

    Narayana, P. L.; Kim, Seong-Woong; Hong, Jae-Keun; Reddy, N. S.; Yeom, Jong-Taek

    2018-03-01

    The present study focused on estimating the complex nonlinear relationship between the composition and phase transformation temperatures of Ti-Ni-Pd shape memory alloys by artificial neural networks (ANN). The ANN models were developed by using the experimental data of Ti-Ni-Pd alloys. It was found that the predictions are in good agreement with the trained and unseen test data of existing alloys. The developed model was able to simulate new virtual alloys to quantitatively estimate the effect of Ti, Ni, and Pd on transformation temperatures. The transformation temperature behavior of these virtual alloys is validated by conducting new experiments on a Ti-rich thin film deposited using multi-target sputtering equipment. The transformation behavior of the film was measured by varying the composition with the help of aging treatment. The predicted trend of transformation temperatures was explained with the help of experimental results.
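
    A minimal sketch of a composition-to-property ANN in the spirit of the abstract, using scikit-learn's MLPRegressor; the compositions, the toy martensite-start relation, and the queried "virtual alloy" are all invented for illustration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    pd_frac = rng.uniform(0.0, 0.3, 100)
    ni_frac = 0.5 - pd_frac                 # assume Pd substitutes for Ni at fixed Ti
    ti_frac = np.full(100, 0.5)
    X = np.column_stack([ti_frac, ni_frac, pd_frac])
    Ms = 60 + 900 * pd_frac + rng.normal(0, 5, 100)   # toy martensite-start temp, degC

    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, Ms)

    # Query a "virtual alloy" to estimate a compositional effect, as in the study.
    print(model.predict([[0.5, 0.30, 0.20]]))   # Ti50 Ni30 Pd20 (atomic fractions)
    ```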

  3. Nighttime image dehazing using local atmospheric selection rule and weighted entropy for visible-light systems

    NASA Astrophysics Data System (ADS)

    Park, Dubok; Han, David K.; Ko, Hanseok

    2017-05-01

    Optical imaging systems are often degraded by scattering due to atmospheric particles, such as haze, fog, and mist. Imaging under nighttime haze conditions may suffer especially from the glows near active light sources as well as scattering. We present a methodology for nighttime image dehazing based on an optical imaging model which accounts for varying light sources and their glow. First, glow effects are decomposed using relative smoothness. Atmospheric light is then estimated by assessing global and local atmospheric light using a local atmospheric selection rule. The transmission of light is then estimated by maximizing an objective function designed on the basis of weighted entropy. Finally, haze is removed using two estimated parameters, namely, atmospheric light and transmission. The visual and quantitative comparison of the experimental results with the results of existing state-of-the-art methods demonstrates the significance of the proposed approach.
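
    Once atmospheric light A and the transmission map t(x) have been estimated (by the selection rule and weighted-entropy steps described above), the final step inverts the standard haze model I = J·t + A·(1 - t). A minimal sketch in which the estimation steps are replaced by assumed values:

    ```python
    import numpy as np

    def remove_haze(I, A, t, t_min=0.1):
        """Invert I = J*t + A*(1 - t) to recover the scene radiance J."""
        t = np.clip(t, t_min, 1.0)            # avoid amplifying noise where t is tiny
        J = (I - A) / t[..., None] + A
        return np.clip(J, 0.0, 1.0)

    I = np.random.default_rng(2).uniform(0.3, 0.9, (4, 4, 3))   # toy hazy patch in [0, 1]
    A = np.array([0.85, 0.85, 0.9])                              # assumed atmospheric light
    t = np.full((4, 4), 0.6)                                     # assumed transmission map
    print(remove_haze(I, A, t).shape)
    ```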

  4. Food intake and growth of Sarsia tubulosa (SARS, 1835), with quantitative estimates of predation on copepod populations

    NASA Astrophysics Data System (ADS)

    Daan, Rogier

    In laboratory tests, food intake by the hydromedusa Sarsia tubulosa, which feeds on copepods, was quantified. Estimates of maximum predation are presented for 10 size classes of Sarsia. Growth rates, too, were determined in the laboratory, at 12°C under ad libitum food conditions. Mean gross food conversion for all size classes averaged 12%. From the results of a frequent sampling programme, carried out in the Texelstroom (a tidal inlet of the Dutch Wadden Sea) in 1983, growth rates of Sarsia in the field equalled maximum growth under experimental conditions, which suggests that Sarsia in situ can feed at an optimum level. Two estimates of predation pressure in the field matched very closely and led to the conclusion that the impact of Sarsia predation on copepod standing stocks in the Dutch coastal area, including the Wadden Sea, is generally negligible.

  5. RAD-ADAPT: Software for modelling clonogenic assay data in radiation biology.

    PubMed

    Zhang, Yaping; Hu, Kaiqiang; Beumer, Jan H; Bakkenist, Christopher J; D'Argenio, David Z

    2017-04-01

    We present a comprehensive software program, RAD-ADAPT, for the quantitative analysis of clonogenic assays in radiation biology. Two commonly used models for clonogenic assay analysis, the linear-quadratic model and single-hit multi-target model, are included in the software. RAD-ADAPT uses maximum likelihood estimation to obtain parameter estimates, with the assumption that cell colony count data follow a Poisson distribution. The program has an intuitive interface, generates model prediction plots, tabulates model parameter estimates, and allows automatic statistical comparison of parameters between different groups. The RAD-ADAPT interface is written using the statistical software R and the underlying computations are accomplished by the ADAPT software system for pharmacokinetic/pharmacodynamic systems analysis. The use of RAD-ADAPT is demonstrated using an example that examines the impact of pharmacologic ATM and ATR kinase inhibition on the human lung cancer cell line A549 after ionizing radiation. Copyright © 2017 Elsevier B.V. All rights reserved.
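
    A hedged sketch of the statistical core: fit the linear-quadratic survival model S(D) = exp(-(αD + βD²)) to colony counts by maximum likelihood under a Poisson assumption. The doses, seeding numbers, counts, and plating efficiency below are invented, and this is not the RAD-ADAPT implementation itself.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])          # Gy
    plated = np.array([100, 200, 400, 2000, 10000, 50000])   # cells seeded per dish
    counts = np.array([45, 70, 90, 240, 310, 280])           # colonies observed
    pe = 0.45                                                # plating efficiency at 0 Gy

    def neg_log_lik(params):
        alpha, beta = params
        mu = plated * pe * np.exp(-(alpha * dose + beta * dose ** 2))  # expected colonies
        return np.sum(mu - counts * np.log(mu))              # Poisson NLL up to a constant

    fit = minimize(neg_log_lik, x0=[0.2, 0.02], bounds=[(0, None), (0, None)])
    print(f"alpha = {fit.x[0]:.3f} /Gy, beta = {fit.x[1]:.4f} /Gy^2")
    ```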

  6. An information measure for class discrimination [in remote sensing of crop observation]

    NASA Technical Reports Server (NTRS)

    Shen, S. S.; Badhwar, G. D.

    1986-01-01

    This article describes a separability measure for class discrimination. This measure is based on the Fisher information measure for estimating the mixing proportion of two classes. The Fisher information measure not only provides a means to assess quantitatively the information content in the features for separating classes, but also gives the lower bound for the variance of any unbiased estimate of the mixing proportion based on observations of the features. Unlike most commonly used separability measures, this measure is not dependent on the form of the probability distribution of the features and does not imply a specific estimation procedure. This is important because the probability distribution function that describes the data for a given class does not have simple analytic forms, such as a Gaussian. Results of applying this measure to compare the information content provided by three Landsat-derived feature vectors for the purpose of separating small grains from other crops are presented.

  7. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tshawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
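
    A minimal sketch of the parametric-HSC idea (in Python rather than the authors' R): fit several candidate probability density functions to habitat-use observations by maximum likelihood and compare them with AIC. The candidate set and the synthetic depth data are placeholders.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    depths = rng.gamma(shape=4.0, scale=0.25, size=300)      # toy depth-use data (m)

    candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "weibull": stats.weibull_min}
    for name, dist in candidates.items():
        params = dist.fit(depths, floc=0)                    # ML fit, location fixed at 0
        loglik = np.sum(dist.logpdf(depths, *params))
        k = len(params) - 1                                  # free parameters (loc fixed)
        print(f"{name:8s} AIC = {2 * k - 2 * loglik:.1f}")   # smaller is better
    ```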

  8. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting

    PubMed Central

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.

    2017-01-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
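
    As background on the baseline being improved upon: conventional MRF reconstruction matches each voxel's measured signal evolution to a precomputed dictionary by maximal normalized inner product. A minimal sketch, with a random placeholder dictionary standing in for Bloch-simulated signals:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n_entries, n_frames = 1000, 500
    dictionary = rng.normal(size=(n_entries, n_frames))             # placeholder evolutions
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
    t1_t2 = rng.uniform([100, 10], [3000, 300], size=(n_entries, 2))  # parameter grid (ms)

    voxel_signal = dictionary[123] * 4.7 + rng.normal(0, 0.01, n_frames)  # noisy test voxel
    best = np.argmax(np.abs(dictionary @ voxel_signal))             # inner-product match
    print("Matched entry:", best, "T1/T2 (ms):", np.round(t1_t2[best], 1))
    ```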

  9. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    PubMed

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  10. [Quantitative determination of 7-phenoxyacetamidodesacetoxycephalosporanic acid].

    PubMed

    Dzegilenko, N B; Riabova, N M; Zinchenko, E Ia; Korchagin, V B

    1976-11-01

    7-Phenoxyacetamidodesacetoxycephalosporanic acid, an intermediate product in the synthesis of cephalexin, was prepared by oxidation of phenoxymethylpenicillin into the respective sulphoxide and transformation of the latter. The UV-spectra of the reaction products were studied. A quantitative method is proposed for determination of 7-phenoxyacetamidodesacetoxycephalosporanic acid in the finished products based on estimation of the coefficient of specific extinction of the ethanol solutions at a wavelength of 268 nm in the UV-spectrum region, in combination with semiquantitative estimation of the admixtures by thin-layer chromatography.
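
    Quantitation from a specific extinction coefficient follows directly from the Beer-Lambert law: with E(1%, 1 cm) known at 268 nm, a measured absorbance converts to concentration. The numbers below are illustrative assumptions, not the paper's values.

    ```python
    # Beer-Lambert assay: A = E(1%, 1 cm) * c(g/100 mL) * path, so c = A / (E * path).
    E_1pct_1cm = 240.0      # assumed absorbance of a 1 g/100 mL solution in a 1 cm cell
    absorbance = 0.52       # measured at 268 nm
    path_cm = 1.0
    dilution = 100.0        # sample diluted 1:100 in ethanol before measurement

    conc_g_per_100mL = absorbance / (E_1pct_1cm * path_cm) * dilution
    print(f"Concentration: {conc_g_per_100mL:.3f} g/100 mL")
    ```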

  11. Estimating the Post-Mortem Interval of skeletonized remains: The use of Infrared spectroscopy and Raman spectro-microscopy

    NASA Astrophysics Data System (ADS)

    Creagh, Dudley; Cameron, Alyce

    2017-08-01

    When skeletonized remains are found, it becomes a police task to identify the body and establish the cause of death. It assists investigators if the Post-Mortem Interval (PMI) can be established. Hitherto no reliable qualitative method of estimating the PMI has been found. A quantitative method has yet to be developed. This paper shows that IR spectroscopy and Raman microscopy have the potential to form the basis of a quantitative method.

  12. Transcript copy number estimation using a mouse whole-genome oligonucleotide microarray

    PubMed Central

    Carter, Mark G; Sharov, Alexei A; VanBuren, Vincent; Dudekula, Dawood B; Carmack, Condie E; Nelson, Charlie; Ko, Minoru SH

    2005-01-01

    The ability to quantitatively measure the expression of all genes in a given tissue or cell with a single assay is an exciting promise of gene-expression profiling technology. An in situ-synthesized 60-mer oligonucleotide microarray designed to detect transcripts from all mouse genes was validated, as well as a set of exogenous RNA controls derived from the yeast genome (made freely available without restriction), which allow quantitative estimation of absolute endogenous transcript abundance. PMID:15998450

  13. Estimation of genetic parameters and their sampling variances of quantitative traits in the type 2 modified augmented design

    USDA-ARS's Scientific Manuscript database

    We proposed a method to estimate the error variance among non-replicated genotypes, thus to estimate the genetic parameters by using replicated controls. We derived formulas to estimate sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...

  14. The effect of respiratory induced density variations on non-TOF PET quantitation in the lung.

    PubMed

    Holman, Beverley F; Cuplov, Vesna; Hutton, Brian F; Groves, Ashley M; Thielemans, Kris

    2016-04-21

    Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant (18)F-FDG and (18)F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.

  15. The effect of respiratory induced density variations on non-TOF PET quantitation in the lung

    NASA Astrophysics Data System (ADS)

    Holman, Beverley F.; Cuplov, Vesna; Hutton, Brian F.; Groves, Ashley M.; Thielemans, Kris

    2016-04-01

    Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant 18F-FDG and 18F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.

  16. A subagging regression method for estimating the qualitative and quantitative state of groundwater

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Park, E.; Choi, J.; Han, W. S.; Yun, S. T.

    2016-12-01

    A subagging regression (SBR) method for the analysis of groundwater data pertaining to the estimation of trend and the associated uncertainty is proposed. The SBR method is validated against synthetic data competitively with other conventional robust and non-robust methods. From the results, it is verified that the estimation accuracies of the SBR method are consistent and superior to those of the other methods and the uncertainties are reasonably estimated where the others have no uncertainty analysis option. To validate further, real quantitative and qualitative data are employed and analyzed comparatively with Gaussian process regression (GPR). For all cases, the trend and the associated uncertainties are reasonably estimated by SBR, whereas the GPR has limitations in representing the variability of non-Gaussian skewed data. From the implementations, it is determined that the SBR method has the potential to be further developed as an effective tool for anomaly detection or outlier identification in groundwater state data.
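
    Subagging (subsample aggregating) fits the same estimator to many random subsamples drawn without replacement and aggregates the results; the spread across subsamples provides the uncertainty the abstract refers to. A minimal sketch for a linear trend on a synthetic water-level record, not the authors' exact procedure:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    t = np.arange(120.0)                                  # months
    level = 10 - 0.01 * t + rng.normal(0, 0.3, t.size)    # toy water-level record (m)

    m = t.size // 2                                       # subsample size (half the data)
    slopes = []
    for _ in range(500):
        idx = rng.choice(t.size, size=m, replace=False)   # subsample WITHOUT replacement
        slopes.append(np.polyfit(t[idx], level[idx], 1)[0])

    lo, med, hi = np.percentile(slopes, [2.5, 50, 97.5])
    print(f"trend = {med:.4f} m/month (95% band {lo:.4f} to {hi:.4f})")
    ```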

  17. Comparison of Dynamic Contrast Enhanced MRI and Quantitative SPECT in a Rat Glioma Model

    PubMed Central

    Skinner, Jack T.; Yankeelov, Thomas E.; Peterson, Todd E.; Does, Mark D.

    2012-01-01

    Pharmacokinetic modeling of dynamic contrast enhanced (DCE)-MRI data provides measures of the extracellular volume fraction (ve) and the volume transfer constant (Ktrans) in a given tissue. These parameter estimates may be biased, however, by confounding issues such as contrast agent and tissue water dynamics, or assumptions of vascularization and perfusion made by the commonly used model. In contrast to MRI, radiotracer imaging with SPECT is insensitive to water dynamics. A quantitative dual-isotope SPECT technique was developed to obtain an estimate of ve in a rat glioma model for comparison to the corresponding estimates obtained using DCE-MRI with a vascular input function (VIF) and reference region model (RR). Both DCE-MRI methods produced consistently larger estimates of ve in comparison to the SPECT estimates, and several experimental sources were postulated to contribute to these differences. PMID:22991315
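
    For context, the abstract does not give the pharmacokinetic equations; Ktrans and ve are commonly obtained by fitting the standard Tofts model, in which the tissue concentration is the arterial input convolved with an exponential kernel. A minimal fitting sketch under that assumption (the input function and noise level are invented):

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.linspace(0, 5, 120)             # minutes
        Cp = 3.0 * t * np.exp(-t / 0.8)        # toy arterial input function

        def tofts(t, Ktrans, ve):
            """Standard Tofts: Ct(t) = Ktrans * Cp(t) convolved with exp(-Ktrans t / ve)."""
            dt = t[1] - t[0]
            kernel = np.exp(-Ktrans * t / ve)
            return Ktrans * dt * np.convolve(Cp, kernel)[: t.size]

        # Simulate a tissue curve, then recover the parameters.
        Ct = tofts(t, 0.25, 0.40) + np.random.default_rng(1).normal(0, 0.005, t.size)
        (Ktrans, ve), _ = curve_fit(tofts, t, Ct, p0=(0.1, 0.2), bounds=(1e-4, 2.0))
        print(f"Ktrans ~ {Ktrans:.3f} /min, ve ~ {ve:.3f}")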

  18. 3D motion and strain estimation of the heart: initial clinical findings

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan

    2010-03-01

    The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. To this end, tissue Doppler imaging and speckle-tracking-based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, allowing the 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of the existing methods. This method was optimized on simulated data sets in previous work and is currently being tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patients with myocardial infarction and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with those estimated with 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to those of the reference methods, the relationship between the different patient groups was similar.

  19. Space Radiation and Exploration - Information for the Augustine Committee Review

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis; Semones, Edward; Kim, Myung-Hee; Jackson, Lori

    2009-01-01

    Space radiation presents significant health risks, including mortality, for Exploration missions: a) Galactic cosmic ray (GCR) heavy ions are distinct from radiation that occurs on Earth, leading to different biological impacts. b) Large uncertainties in GCR risk projections impact the ability to design and assess mitigation approaches and select crews. c) Solar Proton Events (SPEs) require new operational and shielding approaches and new biological data on risks. Risk estimates are changing as new scientific knowledge is gained: a) Research on the biological effects of space radiation shows qualitative and quantitative differences from X- or gamma-rays. b) Expert recommendations and regulatory policy are changing. c) New knowledge leads to changes in estimates of the number of days in space allowed to stay below Permissible Exposure Limits (PELs).

  20. On the scaling of the distribution of daily price fluctuations in the Mexican financial market index

    NASA Astrophysics Data System (ADS)

    Alfonso, Léster; Mansilla, Ricardo; Terrero-Escalante, César A.

    2012-05-01

    In this paper, a statistical analysis of log-return fluctuations of the IPC, the Mexican Stock Market Index, is presented. A sample of daily data covering the period from 04/09/2000 to 04/09/2010 was analyzed and fitted to different distributions. Goodness-of-fit tests were performed in order to quantitatively assess the quality of the estimation. Special attention was paid to the impact of the sample size on the estimated decay of the distribution's tail. In this study a forceful rejection of normality was obtained. On the other hand, the null hypothesis that the log-fluctuations follow an α-stable Lévy distribution cannot be rejected at the 5% significance level.
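
    A sketch of that kind of analysis with SciPy's levy_stable distribution (the series below is synthetic, standing in for the IPC log-returns, and levy_stable's generic maximum-likelihood fit can be quite slow):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        # Stand-in for daily log-returns log(P_t / P_{t-1}).
        returns = stats.levy_stable.rvs(1.7, 0.0, loc=0.0, scale=0.01,
                                        size=2500, random_state=rng)

        # Normality is strongly rejected for heavy-tailed returns...
        print(stats.kstest(returns, "norm", args=(returns.mean(), returns.std())))

        # ...while the alpha-stable hypothesis typically is not.
        params = stats.levy_stable.fit(returns)   # (alpha, beta, loc, scale)
        print(stats.kstest(returns, "levy_stable", args=params))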

  1. Vortex energy landscape from real space imaging analysis of YBa2Cu3O7 with different defect structures

    NASA Astrophysics Data System (ADS)

    Luccas, R. F.; Granados, X.; Obradors, X.; Puig, T.

    2014-10-01

    A methodology based on real-space vortex image analysis is presented that is able to estimate semi-quantitatively the relevant energy densities of an arbitrary array of vortices, map the interaction energy distributions and evaluate the pinning energy associated with particular defects. The combined use of nanostructuration tools, a vortex visualization technique and the energy method is seen as an opportunity to estimate the strength of vortex pinning potentials. In particular, spatial distributions of vortex energy densities induced by surface nanoindented scratches are evaluated and compared to those of twin boundaries. This comparative study underlines the remarkable role of surface nanoscratches in pinning vortices and their potential in the design of novel devices for pinning and guiding vortex motion.

  2. A quantitative risk analysis approach to port hydrocarbon logistics.

    PubMed

    Ronza, A; Carol, S; Espejo, V; Vílchez, J A; Arnaldos, J

    2006-01-16

    A method is presented that allows quantitative risk analysis to be performed on marine hydrocarbon terminals sited in ports. A significant gap was identified in the technical literature on QRA for the handling of hazardous materials in harbours published prior to this work. The analysis extends to tanker navigation through port waters and to loading and unloading facilities. The steps of the method are discussed, beginning with data collection. For accident scenario identification, an approach is proposed that takes into account minor and massive spills due to loading arm failures and tank rupture. Frequency estimation is thoroughly reviewed and a shortcut approach is proposed for frequency calculation, allowing for the two-fold possibility of a tanker colliding or grounding at or near the berth, or while navigating to or from the berth. Probability data defining the likelihood of a cargo spill after an external impact on a tanker are discussed. For consequence and vulnerability estimation, a scheme is proposed for the use of ratios between the numbers of fatalities, injured and evacuated people. Finally, an example application is given, based on a pilot study conducted in the Port of Barcelona, where the method was tested.
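
    The shortcut frequency calculation reduces, in spirit, to multiplying exposure by conditional probabilities; a toy version of such a chain (the rates below are assumptions for illustration, not the paper's values):

        # Expected frequency of a cargo spill at or near the berth.
        movements_per_year = 1200        # tanker calls at the terminal (assumed)
        accident_rate = 2e-4             # collisions/groundings per movement (assumed)
        p_spill_given_accident = 0.25    # conditional spill probability (assumed)

        spill_frequency = movements_per_year * accident_rate * p_spill_given_accident
        print(f"expected spills per year: {spill_frequency:.2e}")   # 6.00e-02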

  3. Automatic detection and quantitative analysis of cells in the mouse primary motor cortex

    NASA Astrophysics Data System (ADS)

    Meng, Yunlong; He, Yong; Wu, Jingpeng; Chen, Shangbin; Li, Anan; Gong, Hui

    2014-09-01

    Neuronal cells play a very important role in metabolic regulation and mechanism control, so cell number is a fundamental determinant of brain function. By combining suitable cell-labeling approaches with recently proposed three-dimensional optical imaging techniques, whole mouse brain coronal sections can be acquired with 1-μm voxel resolution. We have developed a completely automatic pipeline to perform cell centroid detection, providing three-dimensional quantitative information on cells in the primary motor cortex of the C57BL/6 mouse. It involves four principal steps: i) preprocessing; ii) image binarization; iii) cell centroid extraction and contour segmentation; iv) laminar density estimation. Investigations of the presented method reveal promising detection accuracy in terms of recall and precision, with an average recall rate of 92.1% and an average precision rate of 86.2%. We also analyze the laminar density distribution of cells from the pial surface to the corpus callosum from the output vectorizations of detected cell centroids in the mouse primary motor cortex, and find significant variations in cellular density distribution across layers. This automatic cell centroid detection approach will be beneficial for fast cell counting and accurate density estimation, as time-consuming and error-prone manual identification is avoided.
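
    The four-step pipeline can be mocked up with standard image-processing primitives; the sketch below (smoothing scale, threshold rule and the synthetic volume are placeholders, not the authors' actual algorithm) traces the preprocessing, binarization, centroid-extraction and laminar-density steps:

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(0)

        # Synthetic stack: dark background plus bright blob-like "cells".
        volume = rng.normal(10.0, 2.0, (64, 64, 64))
        for z, y, x in rng.integers(8, 56, (20, 3)):
            volume[z-2:z+3, y-2:y+3, x-2:x+3] += 40.0

        smoothed = ndimage.gaussian_filter(volume, sigma=1.5)       # i) preprocessing
        binary = smoothed > smoothed.mean() + 2 * smoothed.std()    # ii) binarization
        labels, n_cells = ndimage.label(binary)                     # iii) segmentation
        centroids = ndimage.center_of_mass(binary, labels, range(1, n_cells + 1))

        # iv) laminar density: count centroids in slabs along the depth axis.
        depths = np.array([c[0] for c in centroids])
        counts, _ = np.histogram(depths, bins=8, range=(0, 64))
        print(n_cells, "cells; per-slab counts:", counts)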

  4. A spatial Bayesian network model to assess the benefits of early warning for urban flood risk to people

    NASA Astrophysics Data System (ADS)

    Balbi, Stefano; Villa, Ferdinando; Mojtahed, Vahid; Hegetschweiler, Karin Tessa; Giupponi, Carlo

    2016-06-01

    This article presents a novel methodology to assess flood risk to people by integrating people's vulnerability and ability to cushion hazards through coping and adapting. The proposed approach extends traditional risk assessments beyond material damages; complements quantitative and semi-quantitative data with subjective and local knowledge, improving the use of commonly available information; and produces estimates of model uncertainty by providing probability distributions for all of its outputs. Flood risk to people is modeled using a spatially explicit Bayesian network model calibrated on expert opinion. Risk is assessed in terms of (1) likelihood of non-fatal physical injury, (2) likelihood of post-traumatic stress disorder and (3) likelihood of death. The study area covers the lower part of the Sihl valley (Switzerland) including the city of Zurich. The model is used to estimate the effect of improving an existing early warning system, taking into account the reliability, lead time and scope (i.e., coverage of people reached by the warning). Model results indicate that the potential benefits of an improved early warning in terms of avoided human impacts are particularly relevant in case of a major flood event.
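
    As a toy illustration of the conditional structure such a network encodes (the actual variables and expert-calibrated probabilities are not reproduced here; both numbers below are invented), a two-node fragment relating warning receipt to injury can be marginalized directly:

        # Hypothetical fragment: P(non-fatal injury) given an early warning node.
        p_warned = 0.7                          # scope x reliability of warning (assumed)
        p_injury = {True: 0.02, False: 0.08}    # P(injury | warned) (assumed)

        p_total = p_warned * p_injury[True] + (1 - p_warned) * p_injury[False]
        print(f"P(non-fatal injury) = {p_total:.3f}")   # 0.038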

  5. Estimating background-subtracted fluorescence transients in calcium imaging experiments: a quantitative approach.

    PubMed

    Joucla, Sébastien; Franconville, Romain; Pippow, Andreas; Kloppenburg, Peter; Pouzat, Christophe

    2013-08-01

    Calcium imaging has become a routine technique in neuroscience for subcellular to network level investigations. The fast progress in the development of new indicators and imaging techniques calls for dedicated, reliable analysis methods. In particular, efficient and quantitative background fluorescence subtraction routines would benefit most of the calcium imaging research field. A background-subtracted fluorescence transient estimation method that does not require any independent background measurement is therefore developed. This method is based on a fluorescence model fitted to single-trial data using a classical nonlinear regression approach. The model includes an appropriate probabilistic description of the acquisition system's noise, leading to accurate confidence intervals on all quantities of interest (background fluorescence, normalized background-subtracted fluorescence time course) when background fluorescence is homogeneous. An automatic procedure detecting background inhomogeneities inside the region of interest is also developed and is shown to be efficient on simulated data. The implementation and performance of the proposed method on experimental recordings from the mouse hypothalamus are presented in detail. This method, which applies to recordings of both single cells and bulk-stained tissues, should help improve the statistical comparison of fluorescence calcium signals between experiments and studies. Copyright © 2013 Elsevier Ltd. All rights reserved.
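
    The fluorescence model itself is not given in the abstract; a generic single-trial analogue, a constant background plus a decaying transient fitted by nonlinear regression with signal-dependent noise weights, might look like this:

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(3)
        t = np.linspace(0, 10, 200)      # seconds

        def model(t, B, A, tau):
            """Background B plus a decaying calcium transient."""
            return B + A * np.exp(-t / tau)

        # Simulated single trial with Poisson-like (variance ~ mean) noise.
        truth = model(t, 100.0, 30.0, 2.0)
        trial = truth + rng.normal(0, np.sqrt(truth))

        (B, A, tau), cov = curve_fit(model, t, trial, p0=(80, 10, 1),
                                     sigma=np.sqrt(trial), absolute_sigma=True)
        B_err = np.sqrt(cov[0, 0])
        print(f"background B = {B:.1f} +/- {B_err:.1f}")
        print(f"normalized transient amplitude A/B = {A / B:.3f}")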

  6. Predicting low-temperature free energy landscapes with flat-histogram Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Mahynski, Nathan A.; Blanco, Marco A.; Errington, Jeffrey R.; Shen, Vincent K.

    2017-02-01

    We present a method for predicting the free energy landscape of fluids at low temperatures from flat-histogram grand canonical Monte Carlo simulations performed at higher ones. We illustrate our approach for both pure and multicomponent systems using two different sampling methods as a demonstration. This allows us to predict the thermodynamic behavior of systems which undergo both first order and continuous phase transitions upon cooling using simulations performed only at higher temperatures. After surveying a variety of different systems, we identify a range of temperature differences over which the extrapolation of high temperature simulations tends to quantitatively predict the thermodynamic properties of fluids at lower ones. Beyond this range, extrapolation still provides a reasonably well-informed estimate of the free energy landscape; this prediction then requires less computational effort to refine with an additional simulation at the desired temperature than reconstruction of the surface without any initial estimate. In either case, this method significantly increases the computational efficiency of these flat-histogram methods when investigating thermodynamic properties of fluids over a wide range of temperatures. For example, we demonstrate how a binary fluid phase diagram may be quantitatively predicted for many temperatures using only information obtained from a single supercritical state.
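
    The paper's own expansion is not reproduced here, but the generic first-order version of such an extrapolation, for a grand canonical macrostate distribution ln Pi(N) at fixed chemical potential, uses d ln Pi(N)/d beta ~ mu*N - <U>_N up to an N-independent shift. A sketch under that assumption:

        import numpy as np

        def extrapolate_lnpi(lnpi, u_mean, n, mu, beta0, beta1):
            """First-order Taylor extrapolation of ln Pi(N) in inverse temperature."""
            out = lnpi + (beta1 - beta0) * (mu * n - u_mean)
            # Renormalize so the probabilities sum to one (log-sum-exp).
            m = out.max()
            return out - (m + np.log(np.sum(np.exp(out - m))))

        n = np.arange(200.0)
        lnpi = -((n - 80.0) ** 2) / 400.0    # toy high-temperature distribution
        u_mean = -2.0 * n                    # toy mean energy per macrostate
        lnpi_cold = extrapolate_lnpi(lnpi, u_mean, n, mu=-4.0, beta0=1.0, beta1=1.1)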

  7. Analysis of Sequence Data Under Multivariate Trait-Dependent Sampling.

    PubMed

    Tao, Ran; Zeng, Donglin; Franceschini, Nora; North, Kari E; Boerwinkle, Eric; Lin, Dan-Yu

    2015-06-01

    High-throughput DNA sequencing allows for the genotyping of common and rare variants for genetic association studies. At the present time and for the foreseeable future, it is not economically feasible to sequence all individuals in a large cohort. A cost-effective strategy is to sequence those individuals with extreme values of a quantitative trait. We consider the design under which the sampling depends on multiple quantitative traits. Under such trait-dependent sampling, standard linear regression analysis can result in bias of parameter estimation, inflation of type I error, and loss of power. We construct a likelihood function that properly reflects the sampling mechanism and utilizes all available data. We implement a computationally efficient EM algorithm and establish the theoretical properties of the resulting maximum likelihood estimators. Our methods can be used to perform separate inference on each trait or simultaneous inference on multiple traits. We pay special attention to gene-level association tests for rare variants. We demonstrate the superiority of the proposed methods over standard linear regression through extensive simulation studies. We provide applications to the Cohorts for Heart and Aging Research in Genomic Epidemiology Targeted Sequencing Study and the National Heart, Lung, and Blood Institute Exome Sequencing Project.

  8. [Medical and ecological assessment of climate effects on urolithiasis morbidity in population of Primorsky territory].

    PubMed

    Koval'chuk, V K

    2004-01-01

    The article presents a medico-ecological estimation of the quantitative relations between the monsoon climate and primary urolithiasis morbidity in the Primorsky Territory. Quantitative characterization of the climate was performed using the classification of V. I. Rusanov (1973), applied to daily meteorological data for 1 p.m. throughout 1991-1999. Primary urolithiasis morbidity for this period was provided by the regional health department. The data were processed by methods of medical mapping and paired correlation analysis. Mapping revealed that the zones with a high frequency of discomfortable weather of classes V and VI, which cause chilblain at positive air temperatures, coincided with the zones of elevated primary urolithiasis morbidity in children and adults. Correlation analysis confirmed the mapping results, showing significant negative correlations between the frequency of relatively comfortable moment weather classes II-IV and the morbidity of children and adults, and a positive correlation between the frequency of discomfortable class VI and adult morbidity. Thus, a high frequency of days per year with discomfortable classes of moment weather at low positive air temperatures may be one of the risk factors for urolithiasis in the population of the Primorsky Territory. Climatic factors should be taken into consideration when planning primary prophylaxis of this disease in the Primorsky Territory.

  9. A novel Raman spectrophotometric method for quantitative measurement of nucleoside triphosphate hydrolysis.

    PubMed

    Jenkins, R H; Tuma, R; Juuti, J T; Bamford, D H; Thomas, G J

    1999-01-01

    A novel spectrophotometric method, based upon Raman spectroscopy, has been developed for accurate quantitative determination of nucleoside triphosphate phosphohydrolase (NTPase) activity. The method relies upon simultaneous measurement in real time of the intensities of Raman marker bands diagnostic of the triphosphate (1115 cm(-1)) and diphosphate (1085 cm(-1)) moieties of the NTPase substrate and product, respectively. The reliability of the method is demonstrated for the NTPase-active RNA-packaging enzyme (protein P4) of bacteriophage phi6, for which comparative NTPase activities have been estimated independently by radiolabeling assays. The Raman-determined rate for adenosine triphosphate substrate (8.6 +/- 1.3 micromol x mg(-1) x min(-1) at 40 degrees C) is in good agreement with previous estimates. The versatility of the Raman method is demonstrated by its applicability to a variety of nucleotide substrates of P4, including the natural ribonucleoside triphosphates (ATP, GTP) and dideoxynucleoside triphosphates (ddATP, ddGTP). Advantages of the present protocol include conservative sample requirements (approximately 10(-6) g enzyme/protocol) and relative ease of data collection and analysis. The latter conveniences are particularly advantageous for the measurement of activation energies of phosphohydrolase activity.

  10. A method for energy window optimization for quantitative tasks that includes the effects of model-mismatch on bias: application to Y-90 bremsstrahlung SPECT imaging.

    PubMed

    Rong, Xing; Du, Yong; Frey, Eric C

    2012-06-21

    Quantitative Yttrium-90 ((90)Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging has shown great potential to provide reliable estimates of (90)Y activity distribution for targeted radionuclide therapy dosimetry applications. One factor that potentially affects the reliability of the activity estimates is the choice of the acquisition energy window. In contrast to imaging conventional gamma photon emitters, where the acquisition energy windows are usually placed around photopeaks, there has been great variation in the choice of the acquisition energy window for (90)Y imaging due to the continuous and broad energy distribution of the bremsstrahlung photons. In quantitative imaging of conventional gamma photon emitters, previous methods for optimizing the acquisition energy window assumed unbiased estimators and used the variance of the estimates as a figure of merit (FOM). However, in situations such as (90)Y imaging, where there are errors in the modeling of the image formation process used in the reconstruction, there will be bias in the activity estimates. In (90)Y bremsstrahlung imaging this is especially important due to the high levels of scatter, multiple scatter, and collimator septal penetration and scatter. Variance alone is therefore not a complete measure of the reliability of the estimates, and hence not a complete FOM. To address this, we first aimed to develop a new method to optimize the energy window that accounts for both the bias due to model-mismatch and the variance of the activity estimates. We applied this method to optimize the acquisition energy window for quantitative (90)Y bremsstrahlung SPECT imaging in microsphere brachytherapy. Since absorbed dose is defined as the absorbed energy from the radiation per unit mass of tissue, in this new method we proposed a mass-weighted root mean squared error of the volume of interest (VOI) activity estimates as the FOM. To calculate this FOM, two analytical expressions were derived for calculating the bias due to model-mismatch and the variance of the VOI activity estimates, respectively. To obtain the optimal acquisition energy window for general situations of interest in clinical (90)Y microsphere imaging, we generated phantoms with multiple tumors of various sizes and various tumor-to-normal activity concentration ratios using a digital phantom that realistically simulates human anatomy, simulated (90)Y microsphere imaging with a clinical SPECT system and typical imaging parameters using a previously validated Monte Carlo simulation code, and used a previously proposed method for modeling the image degrading effects in quantitative SPECT reconstruction. The obtained optimal acquisition energy window was 100-160 keV. The values of the proposed FOM were much larger than those of the FOM taking into account only the variance of the activity estimates, demonstrating in our experiment that the bias of the activity estimates due to model-mismatch was a more important factor than the variance in limiting the reliability of the activity estimates.
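
    The paper's analytical bias and variance expressions are not reproduced here, but the figure of merit itself is simple to state: a mass-weighted root mean squared error combining per-VOI bias and variance. A sketch of that combination (all numbers illustrative):

        import numpy as np

        # Per-VOI tissue mass (g), bias and standard deviation of activity estimates.
        mass = np.array([1200.0, 35.0, 8.0])   # e.g. normal liver, tumor 1, tumor 2 (assumed)
        bias = np.array([0.5, -1.2, 0.4])      # model-mismatch bias, MBq (assumed)
        std = np.array([0.8, 0.9, 0.6])        # estimate variability, MBq (assumed)

        # FOM = sqrt( sum_i m_i * (bias_i^2 + var_i) / sum_i m_i ); lower is better.
        fom = np.sqrt(np.sum(mass * (bias**2 + std**2)) / np.sum(mass))
        print(f"mass-weighted RMSE = {fom:.3f} MBq")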

  11. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China*

    PubMed Central

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-01-01

    Ecological compensation is becoming one of the key multidisciplinary issues in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among all the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicting benefits of the economic, social, and ecological sectors. PMID:19353749

  12. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China.

    PubMed

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-04-01

    Ecological compensation is becoming one of the key multidisciplinary issues in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among all the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicting benefits of the economic, social, and ecological sectors.

  13. Connecting the Dots: Linking Environmental Justice Indicators to Daily Dose Model Estimates

    EPA Science Inventory

    Many different quantitative techniques have been developed to either assess Environmental Justice (EJ) issues or estimate exposure and dose for risk assessment. However, very few approaches have been applied to link EJ factors to exposure dose estimate and identify potential impa...

  14. RosettaHoles: rapid assessment of protein core packing for structure prediction, refinement, design, and validation.

    PubMed

    Sheffler, Will; Baker, David

    2009-01-01

    We present a novel method called RosettaHoles for visual and quantitative assessment of underpacking in the protein core. RosettaHoles generates a set of spherical cavity balls that fill the empty volume between atoms in the protein interior. For visualization, the cavity balls are aggregated into contiguous overlapping clusters and small cavities are discarded, leaving an uncluttered representation of the unfilled regions of space in a structure. For quantitative analysis, the cavity ball data are used to estimate the probability of observing a given cavity in a high-resolution crystal structure. RosettaHoles provides excellent discrimination between real and computationally generated structures, is predictive of incorrect regions in models, identifies problematic structures in the Protein Data Bank, and promises to be a useful validation tool for newly solved experimental structures.

  15. RosettaHoles: Rapid assessment of protein core packing for structure prediction, refinement, design, and validation

    PubMed Central

    Sheffler, Will; Baker, David

    2009-01-01

    We present a novel method called RosettaHoles for visual and quantitative assessment of underpacking in the protein core. RosettaHoles generates a set of spherical cavity balls that fill the empty volume between atoms in the protein interior. For visualization, the cavity balls are aggregated into contiguous overlapping clusters and small cavities are discarded, leaving an uncluttered representation of the unfilled regions of space in a structure. For quantitative analysis, the cavity ball data are used to estimate the probability of observing a given cavity in a high-resolution crystal structure. RosettaHoles provides excellent discrimination between real and computationally generated structures, is predictive of incorrect regions in models, identifies problematic structures in the Protein Data Bank, and promises to be a useful validation tool for newly solved experimental structures. PMID:19177366

  16. Density functional study of molecular interactions in secondary structures of proteins.

    PubMed

    Takano, Yu; Kusaka, Ayumi; Nakamura, Haruki

    2016-01-01

    Proteins play diverse and vital roles in biology, and these roles are dominated by their three-dimensional structures. The three-dimensional structure of a protein determines its functions and chemical properties. Protein secondary structures, including α-helices and β-sheets, are key components of the protein architecture. Molecular interactions, in particular hydrogen bonds, play significant roles in the formation of protein secondary structures. Precise and quantitative estimation of these interactions is required to understand the principles underlying the formation of three-dimensional protein structures. In the present study, we have investigated the molecular interactions in α-helices and β-sheets using ab initio wave-function-based methods (the Hartree-Fock method (HF) and second-order Møller-Plesset perturbation theory (MP2)), density functional theory, and molecular mechanics. The characteristic interactions essential for forming the secondary structures are discussed quantitatively.

  17. A modified NARMAX model-based self-tuner with fault tolerance for unknown nonlinear stochastic hybrid systems with an input-output direct feed-through term.

    PubMed

    Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W

    2014-01-01

    A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for unknown nonlinear stochastic hybrid systems with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one obtains a good initial guess of the modified NARMAX model, reducing the on-line system identification time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. In addition, an effective state-space self-tuner with a fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested based on comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting-matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter estimation algorithm, can be utilized to achieve parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures through fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Guidelines for a graph-theoretic implementation of structural equation modeling

    USGS Publications Warehouse

    Grace, James B.; Schoolmaster, Donald R.; Guntenspergen, Glenn R.; Little, Amanda M.; Mitchell, Brian R.; Miller, Kathryn M.; Schweiger, E. William

    2012-01-01

    Structural equation modeling (SEM) is increasingly being chosen by researchers as a framework for gaining scientific insights from the quantitative analyses of data. New ideas and methods emerging from the study of causality, influences from the field of graphical modeling, and advances in statistics are expanding the rigor, capability, and even purpose of SEM. Guidelines for implementing the expanded capabilities of SEM are currently lacking. In this paper we describe new developments in SEM that we believe constitute a third-generation of the methodology. Most characteristic of this new approach is the generalization of the structural equation model as a causal graph. In this generalization, analyses are based on graph theoretic principles rather than analyses of matrices. Also, new devices such as metamodels and causal diagrams, as well as an increased emphasis on queries and probabilistic reasoning, are now included. Estimation under a graph theory framework permits the use of Bayesian or likelihood methods. The guidelines presented start from a declaration of the goals of the analysis. We then discuss how theory frames the modeling process, requirements for causal interpretation, model specification choices, selection of estimation method, model evaluation options, and use of queries, both to summarize retrospective results and for prospective analyses. The illustrative example presented involves monitoring data from wetlands on Mount Desert Island, home of Acadia National Park. Our presentation walks through the decision process involved in developing and evaluating models, as well as drawing inferences from the resulting prediction equations. In addition to evaluating hypotheses about the connections between human activities and biotic responses, we illustrate how the structural equation (SE) model can be queried to understand how interventions might take advantage of an environmental threshold to limit Typha invasions. The guidelines presented provide for an updated definition of the SEM process that subsumes the historical matrix approach under a graph-theory implementation. The implementation is also designed to permit complex specifications and to be compatible with various estimation methods. Finally, they are meant to foster the use of probabilistic reasoning in both retrospective and prospective considerations of the quantitative implications of the results.

  19. SU-F-207-16: CT Protocols Optimization Using Model Observer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tseng, H; Fan, J; Kupinski, M

    2015-06-15

    Purpose: To quantitatively evaluate the performance of different CT protocols using task-based measures of image quality. This work studies the task of estimating the size and contrast of rods with different iodine concentrations inserted in head- and body-sized phantoms using different imaging protocols. These protocols are designed to have the same dose level (CTDIvol) but use different X-ray tube voltage settings (kVp). Methods: Different concentrations of iodine objects inserted in a head-size phantom and a body-size phantom are imaged on a 64-slice commercial CT scanner. Scanning protocols with various tube voltages (80, 100, and 120 kVp) and current settings are selected, all of which deliver the same absorbed dose level (CTDIvol). Because the phantom design (the size of the iodine objects, the air gap between the inserted objects and the phantom) is not ideal for a model observer study, the acquired CT images are used to generate simulated images with four different sizes and five different contrasts of iodine objects. For each type of object, 500 images (100 x 100 pixels) are generated for the observer study. The observer selected in this study is the channelized scanning linear observer, which can be applied to estimate both size and contrast. The figure of merit used is the correct estimation ratio. The mean and the variance are estimated by the shuffle method. Results: The results indicate that the protocols with the 100 kVp tube voltage setting provide the best performance for iodine insert size and contrast estimation for both the head and body phantom cases. Conclusion: This work presents a practical and robust quantitative approach using the channelized scanning linear observer to study contrast and size estimation performance for different CT protocols. Different protocols at the same CTDIvol setting can result in different image quality performance, and the relationship between the absorbed dose and diagnostic image quality is not linear.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.

    Across a set of ecological communities connected to each other through organismal dispersal (a 'meta-community'), turnover in composition is governed by (ecological) Drift, Selection, and Dispersal Limitation. Quantitative estimates of these processes remain elusive, but would represent a common currency needed to unify community ecology. Using a novel analytical framework we quantitatively estimate the relative influences of Drift, Selection, and Dispersal Limitation on subsurface, sediment-associated microbial meta-communities. The communities we study are distributed across two geologic formations encompassing ~12,500 m3 of uranium-contaminated sediments within the Hanford Site in eastern Washington State. We find that Drift consistently governs ~25% of spatial turnover in community composition; Selection dominates (governing ~60% of turnover) across spatially-structured habitats associated with fine-grained, low permeability sediments; and Dispersal Limitation is most influential (governing ~40% of turnover) across spatially-unstructured habitats associated with coarse-grained, highly-permeable sediments. Quantitative influences of Selection and Dispersal Limitation may therefore be predictable from knowledge of environmental structure. To develop a system-level conceptual model we extend our analytical framework to compare process estimates across formations, characterize measured and unmeasured environmental variables that impose Selection, and identify abiotic features that limit dispersal. Insights gained here suggest that community ecology can benefit from a shift in perspective; the quantitative approach developed here goes beyond the 'niche vs. neutral' dichotomy by moving towards a style of natural history in which estimates of Selection, Dispersal Limitation and Drift can be described, mapped and compared across ecological systems.

  1. Bayesian parameter estimation in spectral quantitative photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Pulkkinen, Aki; Cox, Ben T.; Arridge, Simon R.; Kaipio, Jari P.; Tarvainen, Tanja

    2016-03-01

    Photoacoustic tomography (PAT) is an imaging technique combining the strong contrast of optical imaging with the high spatial resolution of ultrasound imaging. These strengths are achieved via the photoacoustic effect, where the spatial absorption of a light pulse is converted into a measurable propagating ultrasound wave. The method is seen as a potential tool for small animal imaging, pre-clinical investigations, the study of blood vessels and vasculature, as well as for cancer imaging. The goal in PAT is to form an image of the absorbed optical energy density field via acoustic inverse problem approaches from the measured ultrasound data. Quantitative PAT (QPAT) proceeds from these images and forms quantitative estimates of the optical properties of the target. This optical inverse problem of QPAT is ill-posed. To alleviate the issue, spectral QPAT (SQPAT) utilizes PAT data acquired at multiple optical wavelengths simultaneously with optical parameter models of tissue to form quantitative estimates of the parameters of interest. In this work, the inverse problem of SQPAT is investigated. Light propagation is modelled using the diffusion equation. Optical absorption is described with a chromophore-concentration-weighted sum of known chromophore absorption spectra. Scattering is described by Mie scattering theory with an exponential power law. In the inverse problem, the spatially varying unknown parameters of interest are the chromophore concentrations, the Mie scattering parameters (the power law factor and the exponent), and the Grüneisen parameter. The inverse problem is approached with a Bayesian method. It is numerically demonstrated that estimation of all parameters of interest is possible with the approach.
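
    The optical parameter models named in the abstract are easy to write down; a sketch of the forward pieces follows (the spectra and coefficients below are placeholders, where real work would use tabulated chromophore spectra such as HbO2/Hb):

        import numpy as np

        wavelengths = np.array([700.0, 750.0, 800.0, 850.0])   # nm

        # Hypothetical absorption spectra of two chromophores (rows), per unit
        # concentration, sampled at the wavelengths above.
        eps = np.array([[0.30, 0.25, 0.20, 0.18],
                        [0.15, 0.18, 0.22, 0.27]])

        def absorption(c):
            """mu_a(lambda) as a concentration-weighted sum of known spectra."""
            return c @ eps

        def reduced_scattering(a, b):
            """Mie-type exponential power law: mu_s'(lambda) = a * (lambda/500)^-b."""
            return a * (wavelengths / 500.0) ** (-b)

        c = np.array([0.02, 0.01])    # chromophore concentrations (assumed)
        print("mu_a :", absorption(c))
        print("mu_s':", reduced_scattering(a=1.2, b=1.3))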

  2. Rapid impact testing for quantitative assessment of large populations of bridges

    NASA Astrophysics Data System (ADS)

    Zhou, Yun; Prader, John; DeVitis, John; Deal, Adrienne; Zhang, Jian; Moon, Franklin; Aktan, A. Emin

    2011-04-01

    Although the widely acknowledged shortcomings of visual inspection have fueled significant advances in the areas of non-destructive evaluation and structural health monitoring (SHM) over the last several decades, the actual practice of bridge assessment has remained largely unchanged. The authors believe the lack of adoption, especially of SHM technologies, is related to the 'single structure' scenarios that drive most research. To overcome this, the authors have developed a concept for a rapid single-input, multiple-output (SIMO) impact testing device that will be capable of capturing modal parameters and estimating flexibility/deflection basins of common highway bridges during routine inspections. The device is composed of a trailer-mounted impact source (capable of delivering a 50 kip impact) and retractable sensor arms, and will be controlled by automated data acquisition, processing and modal parameter estimation software. The research presented in this paper covers (a) the theoretical basis for SISO, SIMO and MIMO impact testing to estimate flexibility, (b) proof-of-concept numerical studies using a finite element model, and (c) a pilot implementation on an operating highway bridge. Results indicate that the proposed approach can estimate modal flexibility within a few percent of static flexibility; however, the estimated modal flexibility matrix is only reliable for the substructures associated with the various SIMO tests. To overcome this shortcoming, a modal 'stitching' approach for substructure integration to estimate the full eigenvector matrix is developed, and preliminary results of these methods are also presented.

  3. Defining Tsunami Magnitude as Measure of Potential Impact

    NASA Astrophysics Data System (ADS)

    Titov, V. V.; Tang, L.

    2016-12-01

    The goal of tsunami forecasting, as a system for predicting the potential impact of a tsunami on coastlines, requires a quick estimate of tsunami magnitude. This goal has been recognized since the beginning of tsunami research. The work of Kajiura, Soloviev, Abe, Murty, and many others discussed several scales for tsunami magnitude based on estimates of tsunami energy. However, the difficulty of estimating tsunami energy from available measurements at coastal sea-level stations has carried significant uncertainties, and doing so in real time, before a tsunami impacts coastlines, has been virtually impossible. The slow process of tsunami magnitude estimation, including the collection of a vast amount of coastal sea-level data from affected coastlines, made it impractical to use any tsunami magnitude scale in tsunami warning operations. The uncertainties of the estimates made tsunami magnitudes difficult to use as a universal scale for tsunami analysis. Historically, earthquake magnitude has been used as a proxy for tsunami impact estimates, since real-time seismic data are available for real-time processing and an ample amount of seismic data is available for elaborate post-event analysis. This measure of tsunami impact carries significant uncertainties in quantitative tsunami impact estimates, since the relation between earthquake energy and generated tsunami energy varies from case to case. In this work, we argue that current tsunami measurement capabilities and real-time modeling tools allow for establishing a robust tsunami magnitude that will be useful in tsunami warning as a quick estimate of tsunami impact, and in post-event analysis as a universal scale for tsunami inter-comparison. We present a method for estimating tsunami magnitude based on tsunami energy, and present application of the magnitude analysis to several historical events for inter-comparison with existing methods.

  4. HEALTH AND ENVIRONMENTAL EFFECTS DOCUMENT ...

    EPA Pesticide Factsheets

    Health and Environmental Effects Documents (HEEDS) are prepared for the Office of Solid Waste and Emergency Response (OSWER). This document series is intended to support listings under the Resource Conservation and Recovery Act (RCRA) as well as to provide health-related limits and goals for emergency and remedial actions under the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA). Both published literature and information obtained from Agency Program Office files are evaluated as they pertain to potential human health, aquatic life and environmental effects of hazardous waste constituents. Several quantitative estimates are presented provided sufficient data are available. For systemic toxicants, these include Reference Doses (RfDs) for chronic and subchronic exposures for both the inhalation and oral exposures. In the case of suspected carcinogens, RfDs may not be estimated. Instead, a carcinogenic potency factor, or q1*, is provided. These potency estimates are derived for both oral and inhalation exposures where possible. In addition, unit risk estimates for air and drinking water are presented based on inhalation and oral data, respectively. Reportable quantities (RQs) based on both chronic toxicity and carcinogenicity are derived. The RQ is used to determine the quantity of a hazardous substance for which notification is required in the event of a release as specified under CERCLA.

  5. Methods for Derivation of Inhalation Reference Concentrations and Application of Inhalation Dosimetry

    EPA Pesticide Factsheets

    EPA's methodology for estimation of inhalation reference concentrations (RfCs) as benchmark estimates of the quantitative dose-response assessment of chronic noncancer toxicity for individual inhaled chemicals.

  6. 50 CFR 600.340 - National Standard 7-Costs and Benefits.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... regulation are real and substantial relative to the added research, administrative, and enforcement costs, as..., including the status quo, is adequate. If quantitative estimates are not possible, qualitative estimates...

  7. 50 CFR 600.340 - National Standard 7-Costs and Benefits.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... regulation are real and substantial relative to the added research, administrative, and enforcement costs, as..., including the status quo, is adequate. If quantitative estimates are not possible, qualitative estimates...

  8. 50 CFR 600.340 - National Standard 7-Costs and Benefits.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... regulation are real and substantial relative to the added research, administrative, and enforcement costs, as..., including the status quo, is adequate. If quantitative estimates are not possible, qualitative estimates...

  9. 50 CFR 600.340 - National Standard 7-Costs and Benefits.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... regulation are real and substantial relative to the added research, administrative, and enforcement costs, as..., including the status quo, is adequate. If quantitative estimates are not possible, qualitative estimates...

  10. 50 CFR 600.340 - National Standard 7-Costs and Benefits.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... regulation are real and substantial relative to the added research, administrative, and enforcement costs, as..., including the status quo, is adequate. If quantitative estimates are not possible, qualitative estimates...

  11. Measuring Aircraft Capability for Military and Political Analysis

    DTIC Science & Technology

    1976-03-01

    challenged in 1932 when a panel of distinguished British scientists discussed the feasibility of quantitatively estimating sensory events... Quantitative Analysis of Social Problems, E.R. Tufte (ed.), p. 407, Addison-Wesley, 1970. "artificial" boundaries are imposed on the data. Less...of arms transfers in various parts of the world as well. Quantitative research (and hence measurement) contributes to theoretical development by

  12. Development of retrospective quantitative and qualitative job-exposure matrices for exposures at a beryllium processing facility.

    PubMed

    Couch, James R; Petersen, Martin; Rice, Carol; Schubauer-Berigan, Mary K

    2011-05-01

    To construct a job-exposure matrix (JEM) for an Ohio beryllium processing facility between 1953 and 2006 and to evaluate temporal changes in airborne beryllium exposures. Quantitative area- and breathing-zone-based exposure measurements of airborne beryllium were made between 1953 and 2006 and used by plant personnel to estimate daily weighted average (DWA) exposure concentrations for sampled departments and operations. These DWA measurements were used to create a JEM with 18 exposure metrics, which was linked to the plant cohort consisting of 18,568 unique job, department and year combinations. The exposure metrics ranged from quantitative metrics (annual arithmetic/geometric average DWA exposures, maximum DWA and peak exposures) to descriptive qualitative metrics (chemical beryllium species and physical form) to qualitative assignment of exposure to other risk factors (yes/no). Twelve collapsed job titles with long-term consistent industrial hygiene samples were evaluated using regression analysis for time trends in DWA estimates. Annual arithmetic mean DWA estimates (overall plant-wide exposures including administration, non-production, and production estimates) for the data by decade ranged from a high of 1.39 μg/m(3) in the 1950s to a low of 0.33 μg/m(3) in the 2000s. Of the 12 jobs evaluated for temporal trend, the average arithmetic DWA mean was 2.46 μg/m(3) and the average geometric mean DWA was 1.53 μg/m(3). After the DWA calculations were log-transformed, 11 of the 12 had a statistically significant (p < 0.05) decrease in reported exposure over time. The constructed JEM successfully differentiated beryllium exposures across jobs and over time. This is the only quantitative JEM containing exposure estimates (average and peak) for the entire plant history.

  13. Systematic Anomalies in Rainfall Intensity Estimates Over the Continental U.S.

    NASA Technical Reports Server (NTRS)

    Amitai, Eyal; Petersen, Walter Arthur; Llort, Xavier; Vasiloff, Steve

    2010-01-01

    Rainfall intensities during extreme events over the continental U.S. are compared for several advanced radar products. These products include: 1) TRMM spaceborne radar (PR) near-surface estimates; 2) NOAA Next-Generation Quantitative Precipitation Estimation (QPE) very high-resolution (1 km) radar-only national mosaics (Q2); 3) very high-resolution instantaneous gauge-adjusted radar national mosaics, which we have developed by applying gauge correction to the Q2 instantaneous radar-only products; and 4) several independent C-band dual-polarimetric radar-estimated rainfall samples collected with the ARMOR radar in northern Alabama. Though accumulated rainfall amounts are often similar, we find the satellite and ground radar rain rate pdfs to be quite different. PR pdfs are shifted towards lower rain rates, implying a much larger stratiform/convective rain ratio than the ground radar products do. The shift becomes more evident during strong continental convective storms and much less so during tropical storms. Resolving the continental/maritime regime behavior and other large discrepancies between the products presents an important challenge: to improve our understanding of the sources of the discrepancies, to determine the uncertainties of the estimates, and to improve remote-sensing estimates of precipitation in general.

  14. Mass properties survey of solar array technologies

    NASA Technical Reports Server (NTRS)

    Kraus, Robert

    1991-01-01

    An overview of the technologies, electrical performance, and mass characteristics of many of the presently available and the more advanced developmental space solar array technologies is presented. Qualitative trends and quantitative mass estimates as total array output power is increased from 1 kW to 5 kW at End of Life (EOL) from a single wing are shown. The array technologies are part of a database supporting an ongoing solar power subsystem model development for top level subsystem and technology analyses. The model is used to estimate the overall electrical and thermal performance of the complete subsystem, and then calculate the mass and volume of the array, batteries, power management, and thermal control elements as an initial sizing. The array types considered here include planar rigid panel designs, flexible and rigid fold-out planar arrays, and two concentrator designs, one with one critical axis and the other with two critical axes. Solar cell technologies of Si, GaAs, and InP were included in the analyses. Comparisons were made at the array level; hinges, booms, harnesses, support structures, power transfer, and launch retention mountings were included. It is important to note that the results presented are approximations, and in some cases revised or modified performance and mass estimates of specific designs.

  15. Lower reference limits of quantitative cord glucose-6-phosphate dehydrogenase estimated from healthy term neonates according to the clinical and laboratory standards institute guidelines: a cross sectional retrospective study

    PubMed Central

    2013-01-01

    Background: Previous studies have reported the lower reference limit (LRL) of quantitative cord glucose-6-phosphate dehydrogenase (G6PD), but they have not used approved international statistical methodology. Using common standards is expected to yield more valid findings. We therefore aimed to estimate the LRL of quantitative G6PD detection in healthy term neonates using the statistical analyses endorsed by the International Federation of Clinical Chemistry (IFCC) and the Clinical and Laboratory Standards Institute (CLSI) for reference interval estimation. Methods: This cross-sectional retrospective study was performed at King Abdulaziz Hospital, Saudi Arabia, between March 2010 and June 2012. The study monitored consecutive neonates born to mothers from one Arab Muslim tribe that was assumed to have a low prevalence of G6PD deficiency. Neonates who satisfied the following criteria were included: full-term birth (37 weeks); no admission to the special care nursery; no phototherapy treatment; negative direct antiglobulin test; and, for female neonates, fathers from the same tribe as the mothers. G6PD activity (Units/gram hemoglobin) was measured spectrophotometrically by an automated kit. The 2.5th percentiles and the corresponding 95% confidence intervals (CI) were estimated as LRLs, both in the presence and in the absence of outliers. Results: 207 male and 188 female term neonates who had cord blood quantitative G6PD testing met the inclusion criteria. Horn's method detected 20 G6PD values as outliers (8 males and 12 females). The distributions of quantitative cord G6PD values were normal only in the absence of the outliers. The Harris-Boyd method and proportion criteria revealed that combined-gender LRLs were reliable. The combined bootstrap LRL in the presence of the outliers was 10.0 (95% CI: 7.5-10.7) and the combined parametric LRL in the absence of the outliers was 11.0 (95% CI: 10.5-11.3). Conclusion: These results contribute to the LRL of quantitative cord G6PD detection in full-term neonates. They are transferable to another laboratory when pre-analytical factors and testing methods are comparable and the IFCC-CLSI requirements of transference are satisfied. We suggest using the LRL estimated in the absence of the outliers, since mislabeling G6PD-deficient neonates as normal is intolerable whereas mislabeling G6PD-normal neonates as deficient is tolerable. PMID:24016342
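
    The core CLSI-style computation, a 2.5th percentile with a bootstrap confidence interval, can be sketched in a few lines (the values below are synthetic, not the study's data):

        import numpy as np

        rng = np.random.default_rng(7)
        g6pd = rng.normal(15.0, 2.0, 395)    # stand-in for cord G6PD, U/g Hb

        lrl = np.percentile(g6pd, 2.5)       # lower reference limit

        # Bootstrap 95% CI for the 2.5th percentile.
        boots = [np.percentile(rng.choice(g6pd, g6pd.size, replace=True), 2.5)
                 for _ in range(2000)]
        lo, hi = np.percentile(boots, [2.5, 97.5])
        print(f"LRL = {lrl:.1f} U/g Hb (95% CI {lo:.1f}-{hi:.1f})")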

  16. Development of a Multi-Point Quantitation Method to Simultaneously Measure Enzymatic and Structural Components of the Clostridium thermocellum Cellulosome Protein Complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, Andrew B; St. Brice, Lois; Rodriguez, Jr., Miguel

    2014-01-01

    Clostridium thermocellum has emerged as a leading bioenergy-relevant microbe due to its ability to solubilize cellulose into carbohydrates, mediated by multi-component membrane-attached complexes termed cellulosomes. To probe microbial cellulose utilization rates, it is desirable to be able to measure the concentrations of saccharolytic enzymes and estimate the total amount of cellulosome present on a mass basis. Current cellulase determination methodologies involve labor-intensive purification procedures and only allow for indirect determination of abundance. We have developed a method using multiple reaction monitoring (MRM-MS) to simultaneously quantitate both enzymatic and structural components of the cellulosome protein complex in samples ranging in complexity from purified cellulosomes to whole cell lysates, as an alternative to a previously developed enzyme-linked immunosorbent assay (ELISA) method of cellulosome quantitation. The precision of the cellulosome mass concentration in technical replicates is better than 5% relative standard deviation for all samples, indicating high precision for determination of the mass concentration of cellulosome components.

  17. A Method to Measure and Estimate Normalized Contrast in Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2016-01-01

    The paper presents further development of the normalized contrast processing used in the flash infrared thermography method. Methods of computing normalized image (pixel intensity) contrast and normalized temperature contrast are provided, along with methods of converting image contrast to temperature contrast and vice versa. Normalized contrast processing in flash thermography is useful in quantitative analysis of flash thermography data, including flaw characterization and comparison of experimental results with simulation. Computation of normalized temperature contrast involves a flash thermography data acquisition set-up with a high-reflectivity foil and high-emissivity tape such that the foil, the tape and the test object are imaged simultaneously. Methods of assessing other quantitative parameters, such as the emissivity of the object, afterglow heat flux, reflection temperature change and surface temperature during flash thermography, are also provided. Temperature imaging and normalized temperature contrast processing provide certain advantages over normalized image contrast processing by reducing the effect of reflected energy in images and measurements, therefore providing better quantitative data. Examples of incorporating afterglow heat flux and reflection temperature evolution in flash thermography simulation are also discussed.
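
    The paper's exact contrast definitions are not reproduced here, but one common form of normalized image contrast divides the defect-minus-reference signal by the reference signal; a toy computation on synthetic cooling curves:

        import numpy as np

        rng = np.random.default_rng(5)
        t = np.linspace(0.05, 5.0, 100)      # seconds after the flash

        # Toy cooling curves: sound region ~ 1/sqrt(t) (semi-infinite body);
        # the defect region deviates once the heat front reaches the flaw.
        I_ref = 50.0 / np.sqrt(t) + rng.normal(0, 0.2, t.size)
        I_def = I_ref + 6.0 * (t > 1.0) * (1.0 - np.exp(-(t - 1.0)))

        # One common normalized image contrast: (defect - reference) / reference.
        c_norm = (I_def - I_ref) / I_ref
        k = c_norm.argmax()
        print(f"peak normalized contrast {c_norm[k]:.3f} at t = {t[k]:.2f} s")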

  18. A Novel Method for Tracking Individuals of Fruit Fly Swarms Flying in a Laboratory Flight Arena.

    PubMed

    Cheng, Xi En; Qian, Zhi-Ming; Wang, Shuo Hong; Jiang, Nan; Guo, Aike; Chen, Yan Qiu

    2015-01-01

    The growing interest in studying social behaviours of swarming fruit flies, Drosophila melanogaster, has heightened the need for tools that provide quantitative motion data. To achieve such a goal, multi-camera three-dimensional tracking technology is the key experimental gateway. We have developed a novel tracking system for tracking hundreds of fruit flies flying in a confined cubic flight arena. In addition to the proposed tracking algorithm, this work offers additional contributions in three aspects: body detection, orientation estimation, and data validation. To demonstrate the opportunities that the proposed system offers for generating high-throughput quantitative motion data, we conducted experiments on five experimental configurations. We also performed quantitative analysis on the kinematics, spatial structure, and motion patterns of fruit fly swarms. We found that there exists an asymptotic distance between fruit flies in swarms as the population density increases. Further, we discovered evidence of a repulsive response when the distance between fruit flies approached the asymptotic distance. Overall, the proposed tracking system presents a powerful method for studying flight behaviours of fruit flies in a three-dimensional environment.

  19. Robust estimation of adaptive tensors of curvature by tensor voting.

    PubMed

    Tong, Wai-Shun; Tang, Chi-Keung

    2005-03-01

    Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (the sign of Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision, and also rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm has been validated with detailed quantitative experiments, performing better in a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.
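
    Once a curvature tensor has been estimated at a point, extracting principal curvatures and directions is a standard eigendecomposition, as the following sketch shows for a hypothetical 2x2 tensor expressed in the local tangent plane (the tensor voting step that produces the tensor is not reproduced here).

    ```python
    import numpy as np

    # Sketch: the principal curvatures/directions of an estimated
    # curvature tensor are its eigenvalues/eigenvectors. The 2x2 tensor
    # below is an illustrative placeholder, not output of tensor voting.

    def principal_curvatures(tensor):
        vals, vecs = np.linalg.eigh(tensor)   # eigenvalues in ascending order
        return (vals[1], vecs[:, 1]), (vals[0], vecs[:, 0])  # (k1,d1),(k2,d2)

    T = np.array([[0.9, 0.1],
                  [0.1, 0.2]])                # illustrative tensor
    (k1, d1), (k2, d2) = principal_curvatures(T)
    print("k1, k2:", round(k1, 3), round(k2, 3))
    print("Gaussian K:", round(k1 * k2, 3), "mean H:", round(0.5 * (k1 + k2), 3))
    ```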

  20. Reference-material system for estimating health and environmental risks of selected material cycles and energy systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowther, M.A.; Moskowitz, P.D.

    1981-07-01

    Sample analyses and detailed documentation are presented for a Reference Material System (RMS) to estimate health and environmental risks of different material cycles and energy systems. Data inputs described include: end-use material demands, efficiency coefficients, environmental emission coefficients, fuel demand coefficients, labor productivity estimates, and occupational health and safety coefficients. Application of this model permits analysts to estimate fuel use (e.g., Btu), occupational risk (e.g., fatalities), and environmental emissions (e.g., sulfur oxide) for specific material trajectories or complete energy systems. Model uncertainty is quantitatively defined by presenting a range of estimates for each data input. Systematic uncertainty not quantified relates to the boundaries chosen for analysis and reference system specification. Although the RMS can be used to analyze material system impacts for many different energy technologies, it was specifically used to examine the health and environmental risks of producing the following four types of photovoltaic devices: silicon n/p single-crystal cells produced by a Czochralski process; silicon metal/insulator/semiconductor (MIS) cells produced by a ribbon-growing process; cadmium sulfide/copper sulfide backwall cells produced by a spray deposition process; and gallium arsenide cells with 500X concentrator produced by a modified Czochralski process. Emission coefficients for particulates, sulfur dioxide and nitrogen dioxide; solid waste; total suspended solids in water; and, where applicable, air and solid waste residuals for arsenic, cadmium, gallium, and silicon are examined and presented. Where data are available the coefficients for particulates, sulfur oxides, and nitrogen oxides include both process and on-site fuel-burning emissions.

  1. Fate of Earth Microbes on Mars: UV Radiation Effects

    NASA Technical Reports Server (NTRS)

    Cockell, Charles

    2000-01-01

    A radiative transfer model is used to quantitatively investigate aspects of the martian ultraviolet radiation environment. Biological action spectra for DNA inactivation are used to estimate biologically effective irradiances for the martian surface under cloudless skies. Although the present-day martian UV flux is similar to early earth and thus may not be a limitation to life in the evolutionary context, it is a constraint to an unadapted biota and will rapidly kill spacecraft-borne microbes not covered by a martian dust layer. Here calculations for loss of microbial viability on the Pathfinder and Polar lander spacecraft are presented and the effects of martian dust on loss of viability are discussed. Details of the radiative transfer model are presented.

  3. Estimation of Qualitative and Quantitative Parameters of Air Cleaning by a Pulsed Corona Discharge Using Multicomponent Standard Mixtures

    NASA Astrophysics Data System (ADS)

    Filatov, I. E.; Uvarin, V. V.; Kuznetsov, D. L.

    2018-05-01

    The efficiency of removal of volatile organic impurities in air by a pulsed corona discharge is investigated using model mixtures. Based on the method of competing reactions, an approach to estimating the qualitative and quantitative parameters of the employed electrophysical technique is proposed. The concept of the "toluene coefficient", characterizing the relative reactivity of a component as compared to toluene, is introduced. It is proposed that the energy efficiency of the electrophysical method be estimated using the concept of diversified yield of the removal process. Such an approach makes it possible to substantially expedite the determination of the energy parameters of impurity removal and can also serve as a criterion for estimating the effectiveness of various methods in which a nonequilibrium plasma is used to clean air of volatile impurities.
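
    The competing-reactions idea can be made concrete with the classical result that, for two species removed in the same reactive mixture, the ratio of their logarithmic conversions equals the ratio of their rate constants. The sketch below assumes the "toluene coefficient" follows this form; whether it matches the authors' exact definition is an assumption, and the concentrations are illustrative.

    ```python
    import math

    # Sketch of competitive kinetics, assuming the classical relation
    #   ln(C0_x / C_x) / ln(C0_tol / C_tol) = k_x / k_tol
    # for two species removed in the same mixture. Whether this matches
    # the paper's exact "toluene coefficient" is an assumption.

    def toluene_coefficient(c0_x, c_x, c0_tol, c_tol):
        return math.log(c0_x / c_x) / math.log(c0_tol / c_tol)

    # component removed from 100 to 60 ppm while toluene goes 100 -> 50 ppm
    print(round(toluene_coefficient(100.0, 60.0, 100.0, 50.0), 2))  # 0.74
    ```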

  4. QFASAR: Quantitative fatty acid signature analysis with R

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2017-01-01

    Knowledge of predator diets provides essential insights into their ecology, yet diet estimation is challenging and remains an active area of research. Quantitative fatty acid signature analysis (QFASA) is a popular method of estimating diet composition that continues to be investigated and extended. However, software to implement QFASA has only recently become publicly available. I summarize a new R package, qfasar, for diet estimation using QFASA methods. The package also provides functionality to evaluate and potentially improve the performance of a library of prey signature data, compute goodness-of-fit diagnostics, and support simulation-based research. Several procedures in the package have not previously been published. qfasar makes traditional and recently published QFASA diet estimation methods accessible to ecologists for the first time. Use of the package is illustrated with signature data from Chukchi Sea polar bears and potential prey species.

  5. Assessing the performance of the generalized propensity score for estimating the effect of quantitative or continuous exposures on survival or time-to-event outcomes.

    PubMed

    Austin, Peter C

    2018-01-01

    Propensity score methods are frequently used to estimate the effects of interventions using observational data. The propensity score was originally developed for use with binary exposures. The generalized propensity score (GPS) is an extension of the propensity score for use with quantitative or continuous exposures (e.g., pack-years of cigarettes smoked, dose of medication, or years of education). We describe how the GPS can be used to estimate the effect of continuous exposures on survival or time-to-event outcomes. To do so, we modified the concept of the dose-response function for use with time-to-event outcomes. We used Monte Carlo simulations to examine the performance of different methods of using the GPS to estimate the effect of quantitative exposures on survival or time-to-event outcomes. We examined covariate adjustment using the GPS and weighting using weights based on the inverse of the GPS. The use of methods based on the GPS was compared with the use of conventional G-computation and weighted G-computation. Conventional G-computation resulted in estimates of the dose-response function that displayed the lowest bias and the lowest variability. Of the two GPS-based methods, covariate adjustment using the GPS tended to have the better performance. We illustrate the application of these methods by estimating the effect of average neighbourhood income on the probability of survival following hospitalization for an acute myocardial infarction.
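
    As a rough illustration of the weighting approach, the sketch below estimates a GPS for a continuous exposure under a normal linear exposure model and forms stabilized inverse-GPS weights. The data are simulated and the modeling choices are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np
    from scipy import stats

    # Hedged sketch of a generalized propensity score (GPS) for a
    # continuous exposure, assuming a normal linear exposure model.

    rng = np.random.default_rng(0)
    n = 1000
    x = rng.normal(size=(n, 2))                               # confounders
    a = 1.0 + x @ np.array([0.5, -0.3]) + rng.normal(size=n)  # exposure

    # Fit the exposure model a ~ x by ordinary least squares
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, a, rcond=None)
    resid = a - X @ beta
    sigma = resid.std(ddof=X.shape[1])

    gps = stats.norm.pdf(a, loc=X @ beta, scale=sigma)         # conditional density
    marginal = stats.norm.pdf(a, loc=a.mean(), scale=a.std())  # numerator model
    weights = marginal / gps                                   # stabilized weights
    print("mean weight:", weights.mean())                      # should be near 1
    ```

    The resulting weights could then be passed to a weighted survival estimator (for example, a weighted Cox model) to trace out the dose-response for a time-to-event outcome.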

  6. Objective evaluation of reconstruction methods for quantitative SPECT imaging in the absence of ground truth.

    PubMed

    Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C

    2015-04-13

    Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective way to evaluate these systems is to compare their performance on the end task required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e., we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.

  7. Characterization of articular cartilage by combining microscopic analysis with a fibril-reinforced finite-element model.

    PubMed

    Julkunen, Petro; Kiviranta, Panu; Wilson, Wouter; Jurvelin, Jukka S; Korhonen, Rami K

    2007-01-01

    Load-bearing characteristics of articular cartilage are impaired during tissue degeneration. Quantitative microscopy enables in vitro investigation of cartilage structure but determination of tissue functional properties necessitates experimental mechanical testing. The fibril-reinforced poroviscoelastic (FRPVE) model has been used successfully for estimation of cartilage mechanical properties. The model includes realistic collagen network architecture, as shown by microscopic imaging techniques. The aim of the present study was to investigate the relationships between the cartilage proteoglycan (PG) and collagen content as assessed by quantitative microscopic findings, and model-based mechanical parameters of the tissue. Site-specific variation of the collagen network moduli, PG matrix modulus and permeability was analyzed. Cylindrical cartilage samples (n=22) were harvested from various sites of the bovine knee and shoulder joints. Collagen orientation, as quantitated by polarized light microscopy, was incorporated into the finite-element model. Stepwise stress-relaxation experiments in unconfined compression were conducted for the samples, and sample-specific models were fitted to the experimental data in order to determine values of the model parameters. For comparison, Fourier transform infrared imaging and digital densitometry were used for the determination of collagen and PG content in the same samples, respectively. The initial and strain-dependent fibril network moduli as well as the initial permeability correlated significantly with the tissue collagen content. The equilibrium Young's modulus of the nonfibrillar matrix and the strain dependency of permeability were significantly associated with the tissue PG content. The present study demonstrates that modern quantitative microscopic methods in combination with the FRPVE model are feasible methods to characterize the structure-function relationships of articular cartilage.

  8. Quantitative analysis of 18F-NaF dynamic PET/CT cannot differentiate malignant from benign lesions in multiple myeloma.

    PubMed

    Sachpekidis, Christos; Hillengass, Jens; Goldschmidt, Hartmut; Anwar, Hoda; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-01-01

    Renewed interest has recently developed in the highly sensitive bone-seeking radiopharmaceutical 18F-NaF. The aim of the present study was to evaluate the potential utility of quantitative analysis of 18F-NaF dynamic PET/CT data in differentiating malignant from benign degenerative lesions in multiple myeloma (MM). 80 MM patients underwent whole-body PET/CT and dynamic PET/CT scanning of the pelvis with 18F-NaF. PET/CT data evaluation was based on visual (qualitative) assessment, semi-quantitative (SUV) calculations, and absolute quantitative estimations after application of a 2-tissue compartment model and a non-compartmental approach leading to the extraction of fractal dimension (FD). In total, 263 MM lesions were demonstrated on 18F-NaF PET/CT. Semi-quantitative and quantitative evaluations were performed for 25 MM lesions as well as for 25 benign, degenerative and traumatic lesions. Mean SUVaverage for MM lesions was 11.9 and mean SUVmax was 23.2. Respectively, SUVaverage and SUVmax for degenerative lesions were 13.5 and 20.2. Kinetic analysis of 18F-NaF revealed the following mean values for MM lesions: K1 = 0.248 (1/min), k3 = 0.359 (1/min), influx (Ki) = 0.107 (1/min), FD = 1.382, while the respective values for degenerative lesions were: K1 = 0.169 (1/min), k3 = 0.422 (1/min), influx (Ki) = 0.095 (1/min), FD = 1.411. No statistically significant differences between MM and benign degenerative disease regarding SUVaverage, SUVmax, K1, k3 and influx (Ki) were demonstrated. FD was significantly higher in degenerative than in malignant lesions. The present findings show that quantitative analysis of 18F-NaF PET data cannot differentiate malignant from benign degenerative lesions in MM patients, supporting previously published results, which reflect the limited role of 18F-NaF PET/CT in the diagnostic workup of MM.

  9. The Quantitative-MFG Test: A Linear Mixed Effect Model to Detect Maternal-Offspring Gene Interactions.

    PubMed

    Clark, Michelle M; Blangero, John; Dyer, Thomas D; Sobel, Eric M; Sinsheimer, Janet S

    2016-01-01

    Maternal-offspring gene interactions, also known as maternal-fetal genotype (MFG) incompatibilities, are neglected in complex disease and quantitative trait studies. They are implicated in diseases with onset from birth through adulthood, but there are limited ways to investigate their influence on quantitative traits. We present the quantitative-MFG (QMFG) test, a linear mixed model in which maternal and offspring genotypes are fixed effects and residual correlations between family members are random effects. The QMFG handles families of any size, common or general scenarios of MFG incompatibility, and additional covariates. We develop likelihood ratio tests (LRTs) and rapid score tests and show that they provide correct inference. In addition, the LRT's alternative model provides unbiased parameter estimates. We show that testing the association of SNPs by fitting a standard model, which only considers the offspring genotypes, has very low power or can lead to incorrect conclusions. We also show that offspring genetic effects are missed if the MFG modeling assumptions are too restrictive. With genome-wide association study data from the San Antonio Family Heart Study, we demonstrate that the QMFG score test is an effective and rapid screening tool. The QMFG test therefore has important potential to identify pathways of complex diseases for which the genetic etiology remains to be discovered. © 2015 John Wiley & Sons Ltd/University College London.
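
    A stripped-down model in the same spirit can be fit with off-the-shelf mixed-model software. The sketch below uses a family-level random intercept in place of the full residual correlation structure of the QMFG test, with simulated data; the column names, genotype coding, and effect sizes are hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hedged sketch: maternal and offspring genotypes as fixed effects
    # with a family random intercept. The real QMFG test models richer
    # residual correlation among relatives; all data here are simulated.

    rng = np.random.default_rng(1)
    fam = np.repeat(np.arange(100), 3)          # 100 families, 3 offspring each
    g_mat = rng.integers(0, 3, 100)[fam]        # maternal genotype (0/1/2)
    g_off = rng.integers(0, 3, fam.size)        # offspring genotype (0/1/2)
    u = rng.normal(scale=0.5, size=100)[fam]    # shared family effect
    trait = 0.3 * g_off + 0.2 * g_mat + u + rng.normal(size=fam.size)

    data = pd.DataFrame({"trait": trait, "g_off": g_off,
                         "g_mat": g_mat, "family": fam})
    fit = smf.mixedlm("trait ~ g_off + g_mat", data, groups=data["family"]).fit()
    print(fit.params)
    ```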

  10. Power Analysis of Artificial Selection Experiments Using Efficient Whole Genome Simulation of Quantitative Traits

    PubMed Central

    Kessner, Darren; Novembre, John

    2015-01-01

    Evolve and resequence studies combine artificial selection experiments with massively parallel sequencing technology to study the genetic basis for complex traits. In these experiments, individuals are selected for extreme values of a trait, causing alleles at quantitative trait loci (QTL) to increase or decrease in frequency in the experimental population. We present a new analysis of the power of artificial selection experiments to detect and localize quantitative trait loci. This analysis uses a simulation framework that explicitly models whole genomes of individuals, quantitative traits, and selection based on individual trait values. We find that explicitly modeling QTL provides qualitatively different insights than considering independent loci with constant selection coefficients. Specifically, we observe how interference between QTL under selection affects the trajectories and lengthens the fixation times of selected alleles. We also show that a substantial portion of the genetic variance of the trait (50–100%) can be explained by detected QTL in as little as 20 generations of selection, depending on the trait architecture and experimental design. Furthermore, we show that power depends crucially on the opportunity for recombination during the experiment. Finally, we show that an increase in power is obtained by leveraging founder haplotype information to obtain allele frequency estimates. PMID:25672748
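
    A toy forward simulation conveys the core of such power analyses: simulate an additive trait from QTL genotypes, truncate on trait value each generation, and watch allele frequencies move. The sketch below assumes unlinked QTL with free recombination, so it omits the linkage and interference effects that the authors' whole-genome framework captures; all parameters are illustrative.

    ```python
    import numpy as np

    # Toy evolve-and-resequence style simulation: additive trait,
    # truncation selection, unlinked diploid QTL. A sketch in the spirit
    # of the paper, not the authors' framework.

    rng = np.random.default_rng(42)
    n_ind, n_qtl, n_gen, top_frac = 500, 20, 20, 0.2
    effects = rng.normal(size=n_qtl)                  # QTL effect sizes
    geno = rng.integers(0, 2, (n_ind, n_qtl, 2))      # two haplotypes per ind

    for gen in range(n_gen):
        trait = (geno.sum(axis=2) * effects).sum(axis=1) + rng.normal(size=n_ind)
        parents = geno[np.argsort(trait)[-int(top_frac * n_ind):]]
        mothers = parents[rng.integers(0, len(parents), n_ind)]
        fathers = parents[rng.integers(0, len(parents), n_ind)]
        rows, cols = np.arange(n_ind)[:, None], np.arange(n_qtl)
        gam_m = mothers[rows, cols, rng.integers(0, 2, (n_ind, n_qtl))]
        gam_f = fathers[rows, cols, rng.integers(0, 2, (n_ind, n_qtl))]
        geno = np.stack([gam_m, gam_f], axis=2)       # next generation

    print("allele frequencies after selection:", geno.mean(axis=(0, 2)).round(2))
    ```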

  11. Insights into Spray Development from Metered-Dose Inhalers Through Quantitative X-ray Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mason-Smith, Nicholas; Duke, Daniel J.; Kastengren, Alan L.

    Typical methods to study pMDI sprays employ particle sizing or visible light diagnostics, which suffer in regions of high spray density. X-ray techniques can be applied to pharmaceutical sprays to obtain information unattainable by conventional particle sizing and light-based techniques. We present a technique for obtaining quantitative measurements of spray density in pMDI sprays. A monochromatic focused X-ray beam was used to perform quantitative radiography measurements in the near-nozzle region and plume of HFA-propelled sprays. Measurements were obtained with a temporal resolution of 0.184 ms and a spatial resolution of 5 μm. Steady flow conditions were reached after around 30 ms for the formulations examined with the spray device used. Spray evolution was affected by the inclusion of ethanol in the formulation and unaffected by the inclusion of 0.1% drug by weight. Estimation of the nozzle exit density showed that vapour is likely to dominate the flow leaving the inhaler nozzle during steady flow. Quantitative measurements in pMDI sprays allow the determination of nozzle exit conditions that are difficult to obtain experimentally by other means. Measurements of these nozzle exit conditions can improve understanding of the atomization mechanisms responsible for pMDI spray droplet and particle formation.

  12. Quantitative Assessment of CYP2C9 Genetic Polymorphisms Effect on the Oral Clearance of S-Warfarin in Healthy Subjects.

    PubMed

    Shaul, Chanan; Blotnick, Simcha; Muszkat, Mordechai; Bialer, Meir; Caraco, Yoseph

    2017-02-01

    Genetic polymorphisms in CYP2C9 account for 10-20% of the variability in warfarin dose requirement. As such, CYP2C9 genetic polymorphisms are commonly included in algorithms aimed at optimizing warfarin therapy, as a way to account for variability in warfarin responsiveness that is due to altered pharmacokinetics. However, most of the currently available pharmacokinetic data were derived from studies among patients on chronic warfarin therapy and therefore suffer from the confounding effects of disease states and drug interactions. The purpose of the present study was to provide an accurate quantitative estimate of S-warfarin oral clearance (CLS) among healthy subjects carrying different CYP2C9 genotypes. A single dose of warfarin was administered to 150 healthy, non-obese, non-smoking subjects (age (mean ± SD) 23.3 ± 4.5 years; 60% male). Blood samples were taken for up to 168 h and urine was collected over the entire study period. Compared with carriers of the wild-type CYP2C9*1/*1 genotype (n = 69), CLS was reduced by 25, 39 and 47% among heterozygotes for CYP2C9*2 (n = 41), heterozygotes for CYP2C9*3 (n = 26), and carriers of 2 variant alleles (n = 14), respectively (p < 0.001). The corresponding decreases in the formation clearance of 6- and 7-hydroxy-S-warfarin were 45, 65 and 75%, respectively (p < 0.001). The current study provides an estimate of the effect of CYP2C9 polymorphisms on S-warfarin pharmacokinetics among healthy subjects. As such, it is free of the confounding effects of disease states and drug interactions. Further research is needed to evaluate whether incorporating the quantitative data obtained in the present study into a pharmacogenetic warfarin algorithm may enhance its precision. Clinicaltrials.gov Identifier NCT00162474.

  13. Estrogens in seminal plasma of human and animal species: identification and quantitative estimation by gas chromatography-mass spectrometry associated with stable isotope dilution.

    PubMed

    Reiffsteck, A; Dehennin, L; Scholler, R

    1982-11-01

    Estrone, 2-methoxyestrone and estradiol-17 beta have been definitively identified in the seminal plasma of man, bull, boar and stallion by high-resolution gas chromatography associated with selective monitoring of characteristic ions of suitable derivatives. Quantitative estimations were performed by isotope dilution with deuterated analogues and by monitoring the molecular ions of the trimethylsilyl ethers of labelled and unlabelled compounds. Concentrations of unconjugated and total estrogens are reported together with a statistical evaluation of accuracy and precision.

  14. Estimated stocks of circumpolar permafrost carbon with quantified uncertainty ranges and identified data gaps

    DOE PAGES

    Hugelius, Gustaf; Strauss, J.; Zubrzycki, S.; ...

    2014-12-01

    Soils and other unconsolidated deposits in the northern circumpolar permafrost region store large amounts of soil organic carbon (SOC). This SOC is potentially vulnerable to remobilization following soil warming and permafrost thaw, but SOC stock estimates were poorly constrained and quantitative error estimates were lacking. This study presents revised estimates of permafrost SOC stocks, including quantitative uncertainty estimates, in the 0–3 m depth range in soils as well as for sediments deeper than 3 m in deltaic deposits of major rivers and in the Yedoma region of Siberia and Alaska. Revised estimates are based on significantly larger databases compared to previous studies. Despite this there is evidence of significant remaining regional data gaps. Estimates remain particularly poorly constrained for soils in the High Arctic region and physiographic regions with thin sedimentary overburden (mountains, highlands and plateaus) as well as for deposits below 3 m depth in deltas and the Yedoma region. While some components of the revised SOC stocks are similar in magnitude to those previously reported for this region, there are substantial differences in other components, including the fraction of perennially frozen SOC. Upscaled based on regional soil maps, estimated permafrost region SOC stocks are 217 ± 12 and 472 ± 27 Pg for the 0–0.3 and 0–1 m soil depths, respectively (±95% confidence intervals). Storage of SOC in 0–3 m of soils is estimated to 1035 ± 150 Pg. Of this, 34 ± 16 Pg C is stored in poorly developed soils of the High Arctic. Based on generalized calculations, storage of SOC below 3 m of surface soils in deltaic alluvium of major Arctic rivers is estimated as 91 ± 52 Pg. In the Yedoma region, estimated SOC stocks below 3 m depth are 181 ± 54 Pg, of which 74 ± 20 Pg is stored in intact Yedoma (late Pleistocene ice- and organic-rich silty sediments) with the remainder in refrozen thermokarst deposits. Total estimated SOC storage for the permafrost region is ∼1300 Pg with an uncertainty range of ∼1100 to 1500 Pg. Of this, ∼500 Pg is in non-permafrost soils, seasonally thawed in the active layer or in deeper taliks, while ∼800 Pg is perennially frozen. In conclusion, this represents a substantial ∼300 Pg lowering of the estimated perennially frozen SOC stock compared to previous estimates.

  15. A quantitative microbial risk assessment for center pivot irrigation of dairy wastewaters

    USDA-ARS?s Scientific Manuscript database

    In the western United States where livestock wastewaters are commonly land applied, there are concerns over individuals being exposed to airborne pathogens. In response, a quantitative microbial risk assessment (QMRA) was performed to estimate infectious risks from inhaling pathogens aerosolized dur...

  16. 76 FR 50904 - Thiamethoxam; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-17

    ... exposure and risk. A separate assessment was done for clothianidin. i. Acute exposure. Quantitative acute... not expected to pose a cancer risk, a quantitative dietary exposure assessment for the purposes of...-dietary sources of post application exposure to obtain an estimate of potential combined exposure. These...

  17. Bound Pool Fractions Complement Diffusion Measures to Describe White Matter Micro and Macrostructure

    PubMed Central

    Stikov, Nikola; Perry, Lee M.; Mezer, Aviv; Rykhlevskaia, Elena; Wandell, Brian A.; Pauly, John M.; Dougherty, Robert F.

    2010-01-01

    Diffusion imaging and bound pool fraction (BPF) mapping are two quantitative magnetic resonance imaging techniques that measure microstructural features of the white matter of the brain. Diffusion imaging provides a quantitative measure of the diffusivity of water in tissue. BPF mapping is a quantitative magnetization transfer (qMT) technique that estimates the proportion of exchanging protons bound to macromolecules, such as those found in myelin, and is thus a more direct measure of myelin content than diffusion. In this work, we combine BPF estimates of macromolecular content with measurements of diffusivity within human white matter tracts. Within the white matter, the correlation between BPFs and diffusivity measures such as fractional anisotropy and radial diffusivity was modest, suggesting that diffusion tensor imaging and bound pool fractions are complementary techniques. We found that several major tracts have high BPF, suggesting a higher density of myelin in these tracts. We interpret these results in the context of a quantitative tissue model. PMID:20828622

  18. Re-assessing the relationship between sporozoite dose and incubation period in Plasmodium vivax malaria: a systematic re-analysis.

    PubMed

    Lover, Andrew A; Coker, Richard J

    2014-05-01

    Infections with the malaria parasite Plasmodium vivax are noteworthy for potentially very long incubation periods (6-9 months), which present a major barrier to disease elimination. Increased sporozoite challenge has been reported to be associated with both shorter incubation and pre-patent periods in a range of human challenge studies. However, this evidence base has scant empirical foundation, as these historical analyses were limited by available analytic methods, and it provides no quantitative estimates of effect size. Following a comprehensive literature search, we re-analysed all identified studies using survival and/or logistic models plus contingency tables. We found very weak evidence for dose-dependence at entomologically plausible inocula levels. These results strongly suggest that sporozoite dosage is not an important driver of long-latency. The evidence presented suggests that parasite strain and vector species have quantitatively greater impacts, and points to the potential existence of a dose threshold in the human dose-response to sporozoites. Greater consideration of the complex interplay between these aspects of vectors and parasites is important for human challenge experiments, vaccine trials, and epidemiology towards global malaria elimination.

  19. Development of a new quantitative gas permeability method for dental implant-abutment connection tightness assessment

    PubMed Central

    2011-01-01

    Background Most dental implant systems are presently made of two pieces: the implant itself and the abutment. The connection tightness between those two pieces is a key point to prevent bacterial proliferation, tissue inflammation and bone loss. Leakage has previously been estimated by microbial, color-tracer and endotoxin percolation. Methods A new nitrogen flow technique was developed for implant-abutment connection leakage measurement, adapted from a recent, sensitive, reproducible and quantitative method used to assess endodontic sealing. Results The results show very significant differences between various sealing and screwing conditions. The remaining flow was lower after key screwing compared to hand screwing (p = 0.03) and remained different from the negative test (p = 0.0004). The method reproducibility was very good, with a coefficient of variation of 1.29%. Conclusions The presented gas flow method therefore appears to be a simple and robust way to compare different implant systems. It allows successive measures without disconnecting the abutment from the implant and should in particular be used to assess the behavior of the connection before and after mechanical stress. PMID:21492459

  20. Quantifying the chemical composition of soil organic carbon with solid-state 13C NMR

    NASA Astrophysics Data System (ADS)

    Baldock, J. A.; Sanderman, J.

    2011-12-01

    The vulnerability of soil organic carbon (SOC) to biological decomposition and mineralisation to CO2 is defined at least partially by its chemical composition. Highly aromatic charcoal-like SOC components are more stable to biological decomposition than other forms of carbon, including cellulose. Solid-state 13C NMR has gained wide acceptance as a method capable of defining SOC chemical composition, and mathematical fitting processes have been developed to estimate biochemical composition. Obtaining accurate estimates depends on an ability to quantitatively detect all carbon present in a sample. Often little attention has been paid to defining the proportion of organic carbon present in a soil that is observable in solid-state 13C NMR analyses of soil samples. However, if such data are to be used to inform carbon cycling studies, it is critical that quantitative assessments of SOC observability be undertaken. For example, it is now well established that a significant discrimination exists against the detection of the low proton content polyaromatic structures typical of charcoal using cross polarisation 13C NMR analyses. Such discrimination does not exist where direct polarisation analyses are completed. In this study, the chemical composition of SOC as defined by cross polarisation and direct polarisation 13C NMR analyses will be compared for Australian soils collected from under a diverse range of agricultural managements and climatic conditions. Results indicate that where significant charcoal C contents exist, charcoal is highly under-represented in the acquired CP spectra. For some soils, a discrimination against alkyl carbon was also evident. The ability to derive correction factors to compensate for such discriminations will be assessed and presented.

  1. Relation between modern pollen rain, vegetation and climate in northern China: Implications for quantitative vegetation reconstruction in a steppe environment.

    PubMed

    Ge, Yawen; Li, Yuecong; Bunting, M Jane; Li, Bing; Li, Zetao; Wang, Junting

    2017-05-15

    Vegetation reconstructions from palaeoecological records depend on adequate understanding of the relationships between modern pollen, vegetation and climate. A key parameter for quantitative vegetation reconstructions is the Relative Pollen Productivity (RPP). Differences in both environmental and methodological factors are known to alter estimated RPPs significantly, making it difficult to determine whether the underlying pollen productivity does actually vary, and if so, why. In this paper, we present the results of a replication study for the Bashang steppe region, a typical steppe area in northern China, carried out in 2013 and 2014. In each year, 30 surface samples were collected for pollen analysis, with accompanying vegetation survey using the "Crackles Bequest Project" methodology. Sampling designs differed slightly between the two years: in 2013, sites were located completely randomly, whilst in 2014 sampling locations were constrained to be within a few km of roads. There is strong inter-annual variability in both the pollen and vegetation spectra, and therefore in RPPs, and annual precipitation may be a key influence on these variations. The pollen assemblages in both years are dominated by herbaceous taxa such as Artemisia, Amaranthaceae, Poaceae, Asteraceae, Cyperaceae, Fabaceae and Allium. Artemisia and Amaranthaceae pollen are significantly over-represented relative to their vegetation abundance. Poaceae, Cyperaceae and Fabaceae pollen seem to be under-represented relative to vegetation, with correspondingly lower RPPs. Asteraceae seems to be well-represented, with moderate RPPs and less annual variation. The estimated Relevant Source Area of Pollen (RSAP) ranges from 2000 to 3000 m. Different sampling designs have an effect on both RSAP and RPPs, and random sample selection may be the best strategy for obtaining robust estimates. Our results have implications for further pollen-vegetation relationship and quantitative vegetation reconstruction research in typical steppe areas and in other open habitats with strong inter-annual variation. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Systematic feasibility analysis of a quantitative elasticity estimation for breast anatomy using supine/prone patient postures.

    PubMed

    Hasse, Katelyn; Neylon, John; Sheng, Ke; Santhanam, Anand P

    2016-03-01

    Breast elastography is a critical tool for improving the targeted radiotherapy treatment of breast tumors. Current breast radiotherapy imaging protocols only involve prone and supine CT scans. There is a lack of knowledge on the quantitative accuracy with which breast elasticity can be systematically measured using only prone and supine CT datasets. The purpose of this paper is to describe a quantitative elasticity estimation technique for breast anatomy using only these supine/prone patient postures. Using biomechanical, high-resolution breast geometry obtained from CT scans, a systematic assessment was performed in order to determine the feasibility of this methodology for clinically relevant elasticity distributions. A model-guided inverse analysis approach is presented in this paper. A graphics processing unit (GPU)-based linear elastic biomechanical model was employed as a forward model for the inverse analysis, with the breast geometry in a prone position. The elasticity estimation was performed using a gradient-based iterative optimization scheme and a fast-simulated annealing (FSA) algorithm. Numerical studies were conducted to systematically analyze the feasibility of elasticity estimation. For simulating gravity-induced breast deformation, the breast geometry was anchored at its base, resembling the chest-wall/breast tissue interface. Ground-truth elasticity distributions were assigned to the model, representing tumor presence within breast tissue. Model geometry resolution was varied to estimate its influence on convergence of the system. A priori information was approximated and utilized to record the effect on time and accuracy of convergence. The role of the FSA process was also recorded. A novel error metric that combined elasticity and displacement error was used to quantify the systematic feasibility study. For the authors' purposes, convergence was set to be obtained when each voxel of tissue was within 1 mm of ground-truth deformation. The authors' analyses showed that ∼97% model convergence was systematically observed with no a priori information. Varying the model geometry resolution showed no significant accuracy improvements. The GPU-based forward model enabled the inverse analysis to be completed within 10-70 min. Using a priori information about the underlying anatomy, the computation time decreased by as much as 50%, while accuracy improved from 96.81% to 98.26%. The use of FSA was observed to allow the iterative estimation methodology to converge more precisely. By utilizing a forward iterative approach to solve the inverse elasticity problem, this work indicates the feasibility and potential of the fast reconstruction of breast tissue elasticity using supine/prone patient postures.
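
    The inverse-analysis loop itself can be illustrated on a much smaller problem. The sketch below recovers the stiffnesses of a one-dimensional chain of springs from observed displacements by gradient-based optimization; it is a stand-in for the paper's GPU biomechanical model and omits the fast-simulated-annealing step.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy model-guided inverse analysis: recover spring stiffnesses in a
    # 1-D chain from "observed" displacements under a known end load.
    # Illustrative only; not the paper's finite-element model.

    n, force = 5, 1.0
    k_true = np.array([2.0, 1.5, 3.0, 2.5, 1.0])          # ground truth

    def displacements(k):
        # springs in series: node i displacement = force * sum_{j<=i} 1/k_j
        return force * np.cumsum(1.0 / k)

    observed = displacements(k_true)

    def objective(log_k):                                  # optimize log-stiffness
        return np.sum((displacements(np.exp(log_k)) - observed) ** 2)

    res = minimize(objective, x0=np.zeros(n), method="L-BFGS-B")
    print("recovered stiffness:", np.exp(res.x).round(2))  # ~= k_true
    ```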

  3. Quantitative Susceptibility Mapping after Sports-Related Concussion.

    PubMed

    Koch, K M; Meier, T B; Karr, R; Nencka, A S; Muftuler, L T; McCrea, M

    2018-06-07

    Quantitative susceptibility mapping using MR imaging can assess changes in brain tissue structure and composition. This report presents preliminary results demonstrating changes in tissue magnetic susceptibility after sports-related concussion. Longitudinal quantitative susceptibility mapping metrics were produced from imaging data acquired from cohorts of concussed and control football athletes. One hundred thirty-six quantitative susceptibility mapping datasets were analyzed across 3 separate visits (24 hours after injury, 8 days postinjury, and 6 months postinjury). Longitudinal quantitative susceptibility mapping group analyses were performed on stability-thresholded brain tissue compartments and selected subregions. Clinical concussion metrics were also measured longitudinally in both cohorts and compared with the measured quantitative susceptibility mapping. Statistically significant increases in white matter susceptibility were identified in the concussed athlete group during the acute (24 hour) and subacute (day 8) period. These effects were most prominent at the 8-day visit but recovered and showed no significant difference from controls at the 6-month visit. The subcortical gray matter showed no statistically significant group differences. Observed susceptibility changes after concussion appeared to outlast self-reported clinical recovery metrics at a group level. At an individual subject level, susceptibility increases within the white matter showed statistically significant correlations with return-to-play durations. The results of this preliminary investigation suggest that sports-related concussion can induce physiologic changes to brain tissue that can be detected using MR imaging-based magnetic susceptibility estimates. In group analyses, the observed tissue changes appear to persist beyond those detected on clinical outcome assessments and were associated with return-to-play duration after sports-related concussion. © 2018 by American Journal of Neuroradiology.

  4. Quantitative Estimation of Plasma Free Drug Fraction in Patients With Varying Degrees of Hepatic Impairment: A Methodological Evaluation.

    PubMed

    Li, Guo-Fu; Yu, Guo; Li, Yanfei; Zheng, Yi; Zheng, Qing-Shan; Derendorf, Hartmut

    2018-07-01

    Quantitative prediction of the unbound drug fraction (fu) is essential for scaling pharmacokinetics through physiologically based approaches. However, few attempts have been made to evaluate the projection of fu values under pathological conditions. The primary objective of this study was to predict fu values (n = 105) of 56 compounds, with or without information on the predominant binding protein, in patients with varying degrees of hepatic insufficiency, by accounting for quantitative changes in the molar concentrations of either the major binding protein or albumin plus α1-acid glycoprotein associated with differing levels of hepatic dysfunction. For the purpose of scaling, data pertaining to albumin and α1-acid glycoprotein levels in response to differing degrees of hepatic impairment were systematically collected from 919 adult donors. The results of the present study demonstrate for the first time the feasibility of physiologically based scaling of fu in hepatic dysfunction, verified with experimentally measured data for a wide variety of compounds from individuals with varying degrees of hepatic insufficiency. Furthermore, the high level of predictive accuracy indicates that the inter-relation between the severity of hepatic impairment and these plasma protein levels is physiologically accurate. The present study enhances confidence in predicting fu in hepatic insufficiency, particularly for albumin-bound drugs. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
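
    One common physiologically based scaling, assuming a single binding protein with unchanged affinity, follows from fu = 1/(1 + Ka[P]): only the ratio of protein concentrations is needed to project fu into disease. Whether this is precisely the authors' formulation is an assumption; the numbers below are illustrative.

    ```python
    # Hedged sketch of projecting the unbound fraction (fu) from healthy
    # to hepatically impaired subjects, assuming one binding protein and
    # unchanged binding affinity, so fu = 1 / (1 + Ka*[P]).

    def scale_fu(fu_healthy, protein_ratio):
        """protein_ratio = [P]_impaired / [P]_healthy (molar)."""
        bound_term = (1.0 - fu_healthy) / fu_healthy   # equals Ka*[P]_healthy
        return 1.0 / (1.0 + bound_term * protein_ratio)

    # e.g. an albumin-bound drug with fu = 0.05 when albumin falls to 60%
    print(round(scale_fu(0.05, 0.6), 3))               # 0.081: fu increases
    ```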

  5. Quantitative Mineral Resource Assessment of Copper, Molybdenum, Gold, and Silver in Undiscovered Porphyry Copper Deposits in the Andes Mountains of South America

    USGS Publications Warehouse

    Cunningham, Charles G.; Zappettini, Eduardo O.; Vivallo S., Waldo; Celada, Carlos Mario; Quispe, Jorge; Singer, Donald A.; Briskey, Joseph A.; Sutphin, David M.; Gajardo M., Mariano; Diaz, Alejandro; Portigliati, Carlos; Berger, Vladimir I.; Carrasco, Rodrigo; Schulz, Klaus J.

    2008-01-01

    Quantitative information on the general locations and amounts of undiscovered porphyry copper resources of the world is important to exploration managers, land-use and environmental planners, economists, and policy makers. This publication contains the results of probabilistic estimates of the amounts of copper (Cu), molybdenum (Mo), gold (Au), and silver (Ag) in undiscovered porphyry copper deposits in the Andes Mountains of South America. The methodology used to make these estimates is called the 'Three-Part Form'. It was developed to explicitly express estimates of undiscovered resources and associated uncertainty in a form that allows economic analysis and is useful to decisionmakers. The three-part form of assessment includes: (1) delineation of tracts of land where the geology is permissive for porphyry copper deposits to form; (2) selection of grade and tonnage models appropriate for estimating grades and tonnages of the undiscovered porphyry copper deposits in each tract; and (3) estimation of the number of undiscovered porphyry copper deposits in each tract consistent with the grade and tonnage model. A Monte Carlo simulation computer program (EMINERS) was used to combine the probability distributions of the estimated number of undiscovered deposits, the grades, and the tonnages of the selected model to obtain the probability distributions for undiscovered metals in each tract. These distributions of grades and tonnages then can be used to conduct economic evaluations of undiscovered resources in a format usable by decisionmakers. Economic evaluations are not part of this report. The results of this assessment are presented in two principal parts. The first part identifies 26 regional tracts of land where the geology is permissive for the occurrence of undiscovered porphyry copper deposits of Phanerozoic age to a depth of 1 km below the Earth's surface. These tracts are believed to contain most of South America's undiscovered resources of copper. The second part presents probabilistic estimates of the amounts of copper, molybdenum, gold, and silver in undiscovered porphyry copper deposits in each tract. The study also provides tables showing the location, tract number, and age (if available) of discovered deposits and prospects. For each of the 26 permissive tracts delineated in this study, summary information is provided on: (1) the rationale for delineating the tract; (2) the rationale for choosing the mineral deposit model used to assess the tract; (3) discovered deposits and prospects; (4) exploration history; and (5) the distribution of undiscovered deposits in the tract. The scale used to evaluate geologic information and draw tracts is 1:1,000,000.
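
    The Monte Carlo combination step can be sketched in a few lines: draw a number of undiscovered deposits from the elicited distribution, then draw a tonnage and grade for each from lognormal models and accumulate contained metal. This is written in the spirit of EMINERS, not from its code, and every number below is an illustrative placeholder rather than a value from the assessment.

    ```python
    import numpy as np

    # Sketch of combining deposit-number, tonnage, and grade distributions
    # by Monte Carlo simulation. All distributions and parameters are
    # illustrative placeholders, not values from the assessment.

    rng = np.random.default_rng(7)
    n_trials = 10_000

    # elicited probabilities for 0..4 undiscovered deposits in a tract
    n_deposits = rng.choice([0, 1, 2, 3, 4], size=n_trials,
                            p=[0.20, 0.35, 0.25, 0.15, 0.05])

    contained_cu = np.zeros(n_trials)
    for i, n in enumerate(n_deposits):
        if n == 0:
            continue
        tonnage = rng.lognormal(np.log(2e8), 1.0, n)   # tonnes of ore
        grade = rng.lognormal(np.log(0.005), 0.4, n)   # Cu mass fraction
        contained_cu[i] = (tonnage * grade).sum()

    print("median contained Cu (t):", f"{np.median(contained_cu):.3g}")
    print("90th percentile (t):", f"{np.percentile(contained_cu, 90):.3g}")
    ```

    The resulting distribution of contained metal is exactly the kind of probabilistic output that can feed downstream economic evaluation.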

  6. Cost-utility analysis of the housing and health intervention for homeless and unstably housed persons living with HIV.

    PubMed

    Holtgrave, David R; Wolitski, Richard J; Pals, Sherri L; Aidala, Angela; Kidder, Daniel P; Vos, David; Royal, Scott; Iruka, Nkemdiri; Briddell, Kate; Stall, Ron; Bendixen, Arturo Valdivia

    2013-06-01

    We present a cost-utility analysis based on data from the Housing and Health (H&H) Study of rental assistance for homeless and unstably housed persons living with HIV in Baltimore, Chicago and Los Angeles. As-treated analyses found favorable associations of housing with HIV viral load, emergency room use, and perceived stress (an outcome that can be quantitatively linked to quality of life). We combined these outcome data with information on intervention costs to estimate the cost-per-quality-adjusted-life-year (QALY) saved. We estimate that the cost-per-QALY-saved by the HIV-related housing services is $62,493. These services compare favorably (in terms of cost-effectiveness) to other well-accepted medical and public health services.

  7. Estimation of Metabolism Characteristics for Heat-Injured Bacteria Using Dielectrophoretic Impedance Measurement Method

    NASA Astrophysics Data System (ADS)

    Amako, Eri; Enjoji, Takaharu; Uchida, Satoshi; Tochikubo, Fumiyoshi

    Constant monitoring and immediate control of fermentation processes are required for advanced quality preservation in the food industry. In the present work, simple estimation of the metabolic states of heat-injured Escherichia coli (E. coli) in a micro-cell was investigated using the dielectrophoretic impedance measurement (DEPIM) method. The temporal change in the conductance across the micro-gap (ΔG) was measured for various heat-treatment temperatures. In addition, the dependence of enzyme activity, growth capacity and membrane condition of E. coli on heat-treatment temperature was analyzed with conventional biological methods. Consequently, a quantitative correlation between ΔG and those biological properties was obtained. This result suggests that the DEPIM method can serve as an effective technique for monitoring complex changes in the various biological states of microorganisms.

  8. Estimating weak ratiometric signals in imaging data. II. Meta-analysis with multiple, dual-channel datasets.

    PubMed

    Sornborger, Andrew; Broder, Josef; Majumder, Anirban; Srinivasamoorthy, Ganesh; Porter, Erika; Reagin, Sean S; Keith, Charles; Lauderdale, James D

    2008-09-01

    Ratiometric fluorescent indicators are used for making quantitative measurements of a variety of physiological variables. Their utility is often limited by noise. This is the second in a series of papers describing statistical methods for denoising ratiometric data with the aim of obtaining improved quantitative estimates of variables of interest. Here, we outline a statistical optimization method that is designed for the analysis of ratiometric imaging data in which multiple measurements have been taken of systems responding to the same stimulation protocol. This method takes advantage of correlated information across multiple datasets for objectively detecting and estimating ratiometric signals. We demonstrate our method by showing results of its application on multiple, ratiometric calcium imaging experiments.

  9. PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies.

    PubMed

    Zhang, Wenchao; Dai, Xinbin; Wang, Qishan; Xu, Shizhong; Zhao, Patrick X

    2016-05-01

    The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the 'missing heritability,' which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, which significantly increases computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculation and for main- and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, significantly improving computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned results identical to those of the original prototype R code at each analysis step, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome-wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publicly available at http://bioinfo.noble.org/PolyGenic_QTL/.

  10. A quantitative analysis of the local connectivity between pyramidal neurons in layers 2/3 of the rat visual cortex.

    PubMed

    Hellwig, B

    2000-02-01

    This study provides a detailed quantitative estimate of local synaptic connectivity between neocortical pyramidal neurons, and presents a new way of obtaining such an estimate. In acute slices of the rat visual cortex, four layer 2 and four layer 3 pyramidal neurons were intracellularly injected with biocytin. Axonal and dendritic arborizations were three-dimensionally reconstructed with the aid of a computer-based camera lucida system. In a computer experiment, pairs of pre- and postsynaptic neurons were formed and potential synaptic contacts were calculated. For each pair, the calculations were carried out for a whole range of distances (0 to 500 μm) between the presynaptic and the postsynaptic neuron, in order to estimate cortical connectivity as a function of the spatial separation of neurons. It was also differentiated whether neurons were situated in the same or in different cortical layers. The data thus obtained were used to compute connection probabilities, the average number of contacts between neurons, the frequency of specific numbers of contacts and the total number of contacts a dendritic tree receives from the surrounding cortical volume. Connection probabilities ranged from 50% to 80% for directly adjacent neurons and from 0% to 15% for neurons 500 μm apart. In many cases, connections were mediated by one contact only. However, close neighbors made on average up to 3 contacts with each other. The question as to whether the method employed in this study yields a realistic estimate of synaptic connectivity is discussed. It is argued that the results can be used as a detailed blueprint for building artificial neural networks with a cortex-like architecture.

  11. Molecular Quantification of Zooplankton Gut Content: The Case For qPCR

    NASA Astrophysics Data System (ADS)

    Frischer, M. E.; Walters, T. L.; Gibson, D. M.; Nejstgaard, J. C.; Troedsson, C.

    2016-02-01

    The ability to obtain in situ information about zooplankton feeding selectivity and rates is vital for understanding the mechanisms structuring marine ecosystems. However, directly estimating feeding selection and rates of zooplankton without the biases associated with culturing conditions has been notoriously difficult. A potential approach for addressing this problem is to target prey-specific DNA as a marker for prey ingestion and selection. In this study we report the development of a differential length amplification quantitative PCR (dla-qPCR) assay targeting the 18S rRNA gene to validate the use of a DNA-based approach to quantify consumption of specific plankton prey by the pelagic tunicate (doliolid) Dolioletta gegenbauri. Compared to copepods and other marine animals, the digestion of prey genomic DNA inside the gut of doliolids is low. This minimizes potential underestimation and therefore allows prey DNA to be used as an effective indicator of prey consumption. We also present an initial application of a qPCR assay to estimate consumption of specific prey species on the southeastern continental shelf of the U.S., where doliolids stochastically bloom in response to upwelling events. Estimated feeding rates based on qPCR were in the same range as those estimated from clearance rates in laboratory feeding studies. In the field, consumption of specific prey, including the centric diatom Thalassiosira spp., was detected in the gut of wild-caught D. gegenbauri at levels consistent with their abundance in the water column at the time of collection. Thus, both experimental and field investigations support the hypothesis that a qPCR approach will be useful for the quantitative investigation of the in situ diet of D. gegenbauri without the biases introduced by cultivation.
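
    The quantitative step of any qPCR assay rests on a standard curve in which Ct is linear in log10 of template copies. The sketch below fits such a curve and inverts it for an unknown sample; the dilution series and Ct values are illustrative, not data from the study.

    ```python
    import numpy as np

    # Sketch of absolute qPCR quantification via a standard curve: serial
    # dilutions of a known standard define the Ct vs log10(copies) line,
    # which is then inverted for unknown samples. Values are illustrative.

    std_copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])    # known standards
    std_ct = np.array([30.1, 26.8, 23.4, 20.0, 16.7])   # measured Ct values

    slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
    efficiency = 10.0 ** (-1.0 / slope) - 1.0           # ~1.0 means 100%

    def copies_from_ct(ct):
        return 10.0 ** ((ct - intercept) / slope)

    print(f"amplification efficiency: {efficiency:.1%}")
    print(f"sample at Ct 24.5 -> {copies_from_ct(24.5):.3g} copies")
    ```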

  12. Oligomeric cationic polymethacrylates: a comparison of methods for determining molecular weight.

    PubMed

    Locock, Katherine E S; Meagher, Laurence; Haeussler, Matthias

    2014-02-18

    This study compares three common laboratory methods, size-exclusion chromatography (SEC), 1H nuclear magnetic resonance (NMR), and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF), for determining the molecular weight of oligomeric cationic copolymers. The potential bias of each method was examined across a series of polymers that varied in molecular weight and cationic character (both choice of cation (amine versus guanidine) and relative proportion present). SEC was found to be the least accurate, overestimating Mn by an average of 140%, owing to the lack of appropriate cationic standards and the complexity involved in estimating the hydrodynamic volume of copolymers. MALDI-TOF approximated Mn well for the highly monodisperse (Đ < 1.1), low molecular weight (degree of polymerization (DP) < 50) species but appeared unsuitable for the largest polymers in the series due to the mass bias associated with the technique. 1H NMR was found to estimate Mn most accurately in this study, differing from theoretical values by only 5.2%. 1H NMR end-group analysis is therefore an inexpensive and facile primary quantitative method to estimate the molecular weight of oligomeric cationic polymethacrylates, provided suitably distinct end-group signals are present in the spectrum.
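
    The end-group arithmetic behind the NMR estimate is short enough to show directly: normalize a repeat-unit integral and an end-group integral by their proton counts, take the ratio as the degree of polymerization, and convert to Mn. The integrals, proton counts, and masses below are hypothetical.

    ```python
    # Sketch of Mn from 1H NMR end-group analysis: DP is the ratio of the
    # per-proton repeat-unit integral to the per-proton end-group integral,
    # and Mn = DP * M_repeat + M_end_groups. All inputs are hypothetical.

    def mn_from_nmr(repeat_integral, repeat_protons,
                    end_integral, end_protons,
                    m_repeat, m_end_groups):
        dp = (repeat_integral / repeat_protons) / (end_integral / end_protons)
        return dp * m_repeat + m_end_groups

    # e.g. a 2H backbone signal vs a 3H end-group signal on a methacrylate
    print(mn_from_nmr(repeat_integral=40.0, repeat_protons=2,
                      end_integral=3.0, end_protons=3,
                      m_repeat=142.2, m_end_groups=160.0))   # ~3004 g/mol
    ```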

  13. A two-thermocouple probe technique for estimating thermocouple time constants in flows with combustion: In situ parameter identification of a first-order lag system

    NASA Astrophysics Data System (ADS)

    Tagawa, M.; Shimoji, T.; Ohta, Y.

    1998-09-01

    A two-thermocouple probe, composed of two fine-wire thermocouples of unequal diameters, is a novel technique for estimating thermocouple time constants without any dynamic calibration of the thermocouple response. This technique is most suitable for measuring fluctuating temperatures in turbulent combustion. In the present study, the reliability and applicability of this technique are appraised in a turbulent wake of a heated cylinder (without combustion). A fine-wire resistance thermometer (cold wire) of fast response is simultaneously used to provide a reference temperature. A quantitative and detailed comparison between the cold-wire measurements and the compensated thermocouple measurements shows that a previous estimation scheme gives thermocouple time constants smaller than appropriate values, unless the noise in the thermocouple signals is negligible and/or the spatial resolution of the two-thermocouple probe is sufficiently high. The scheme has been improved so as to maximize the correlation coefficient between the two compensated-thermocouple outputs. The improved scheme offers better compensation of the thermocouple response. The present approach is generally applicable to in situ parameter identification of a first-order lag system.
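    A rough numerical sketch of the scheme's central idea: first-order lag compensation followed by a search for the time-constant pair that maximizes the correlation between the two compensated signals. The synthetic signal, lag model, and grid search are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def compensate(tm, tau, dt):
    """First-order lag compensation: gas-temperature estimate
    T_g ~ T_m + tau * dT_m/dt from a measured signal tm."""
    return tm + tau * np.gradient(tm, dt)

def estimate_time_constants(t1, t2, dt, taus):
    """Grid search for the (tau1, tau2) pair maximizing the correlation
    coefficient between the two compensated signals, in the spirit of the
    improved scheme; the paper's actual estimator may differ in detail."""
    best = (None, None, -np.inf)
    for a in taus:
        g1 = compensate(t1, a, dt)
        for b in taus:
            r = np.corrcoef(g1, compensate(t2, b, dt))[0, 1]
            if r > best[2]:
                best = (a, b, r)
    return best

# Synthetic check: one fluctuating gas temperature seen through two
# first-order lags of unequal (assumed) time constants 10 ms and 25 ms.
rng = np.random.default_rng(0)
dt = 1e-3
time = np.arange(0.0, 2.0, dt)
tg = 300 + 20 * np.sin(2 * np.pi * 5 * time) + rng.normal(0, 0.2, time.size)

def lag(sig, tau):
    out = np.empty_like(sig)
    out[0] = sig[0]
    for i in range(1, sig.size):
        out[i] = out[i - 1] + dt / tau * (sig[i - 1] - out[i - 1])
    return out

t1, t2 = lag(tg, 0.010), lag(tg, 0.025)
print(estimate_time_constants(t1, t2, dt, np.linspace(0.005, 0.04, 36)))
```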

  14. Simple estimate of entrainment rate of pollutants from a coastal discharge into the surf zone.

    PubMed

    Wong, Simon H C; Monismith, Stephen G; Boehm, Alexandria B

    2013-10-15

    Microbial pollutants from coastal discharges can increase illness risks for swimmers and cause beach advisories. There is presently no predictive model for estimating the entrainment of pollution from coastal discharges into the surf zone. We present a novel, quantitative framework for estimating surf zone entrainment of pollution at a wave-dominated open beach. Using physical arguments, we identify a dimensionless parameter equal to the quotient of the surf zone width l_sz and the cross-flow length scale of the discharge, l_a = M_j^(1/2)/U_sz, where M_j is the discharge's momentum flux and U_sz is a representative alongshore velocity in the surf zone. We conducted numerical modeling of a nonbuoyant discharge at an alongshore-uniform beach with constant slope using a wave-resolving hydrodynamic model. Using results from 144 numerical experiments, we develop an empirical relationship between the surf zone entrainment rate α and l_sz/l_a. The empirical relationship reasonably explains seven measurements of surf zone entrainment at three diverse coastal discharges. This predictive relationship can be a useful tool in coastal water quality management and can be used to develop predictive beach water quality models.
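    The dimensionless parameter itself is simple to evaluate; a small sketch with assumed (not measured) discharge and surf zone values:

```python
import math

def crossflow_length(Mj, Usz):
    """Cross-flow length scale l_a = sqrt(M_j) / U_sz, where M_j is the
    discharge's kinematic momentum flux (m^4/s^2) and U_sz a
    representative alongshore surf zone velocity (m/s)."""
    return math.sqrt(Mj) / Usz

# Illustrative (assumed) numbers, not values from the paper:
l_sz = 80.0   # surf zone width, m
Mj = 0.5      # kinematic momentum flux of the discharge, m^4/s^2
Usz = 0.3     # alongshore velocity, m/s
ratio = l_sz / crossflow_length(Mj, Usz)
print(f"l_sz/l_a = {ratio:.2f}")  # input to the empirical alpha relationship
```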

  15. Absolute measures of the completeness of the fossil record

    NASA Technical Reports Server (NTRS)

    Foote, M.; Sepkoski, J. J., Jr. (Principal Investigator)

    1999-01-01

    Measuring the completeness of the fossil record is essential to understanding evolution over long timescales, particularly when comparing evolutionary patterns among biological groups with different preservational properties. Completeness measures have been presented for various groups based on gaps in the stratigraphic ranges of fossil taxa and on hypothetical lineages implied by estimated evolutionary trees. Here we present and compare quantitative, widely applicable absolute measures of completeness at two taxonomic levels for a broader sample of higher taxa of marine animals than has previously been available. We provide an estimate of the probability of genus preservation per stratigraphic interval, and determine the proportion of living families with some fossil record. The two completeness measures use very different data and calculations. The probability of genus preservation depends almost entirely on the Palaeozoic and Mesozoic records, whereas the proportion of living families with a fossil record is influenced largely by Cenozoic data. These measurements are nonetheless highly correlated, with outliers quite explicable, and we find that completeness is rather high for many animal groups.

  16. URANS simulations of the tip-leakage cavitating flow with verification and validation procedures

    NASA Astrophysics Data System (ADS)

    Cheng, Huai-yu; Long, Xin-ping; Liang, Yun-zhi; Long, Yun; Ji, Bin

    2018-04-01

    In the present paper, the Vortex Identified Zwart-Gerber-Belamri (VIZGB) cavitation model coupled with the SST-CC turbulence model is used to investigate the unsteady tip-leakage cavitating flow induced by a NACA0009 hydrofoil. A qualitative comparison between the numerical and experimental results is made. In order to quantitatively evaluate the reliability of the numerical data, the verification and validation (V&V) procedures are used in the present paper. Errors of numerical results are estimated with seven error estimators based on the Richardson extrapolation method. It is shown that though a strict validation cannot be achieved, a reasonable prediction of the gross characteristics of the tip-leakage cavitating flow can be obtained. Based on the numerical results, the influence of the cavitation on the tip-leakage vortex (TLV) is discussed, which indicates that the cavitation accelerates the fusion of the TLV and the tip-separation vortex (TSV). Moreover, the trajectory of the TLV, when the cavitation occurs, is close to the side wall.
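    The abstract names error estimators based on Richardson extrapolation; a minimal sketch of two standard ones, the observed order of accuracy and the grid convergence index (GCI), using made-up solution values on three grids. These are generic V&V formulas, not necessarily the seven estimators used in the paper:

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of accuracy p from solutions on three systematically
    refined grids (f1 finest), constant refinement ratio r, via
    Richardson extrapolation."""
    return math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)

def gci_fine(f1, f2, r, p, Fs=1.25):
    """Grid convergence index on the fine grid, a common V&V error
    estimator with safety factor Fs."""
    return Fs * abs((f2 - f1) / f1) / (r**p - 1)

# Assumed sample values for, e.g., a time-averaged force coefficient:
f1, f2, f3, r = 0.512, 0.509, 0.498, 2.0
p = observed_order(f1, f2, f3, r)
print(f"p = {p:.2f}, GCI_fine = {gci_fine(f1, f2, r, p):.3%}")
```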

  17. Epidemiological studies on radiation carcinogenesis in human populations following acute exposure: nuclear explosions and medical radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fabrikant, J.I.

    1982-08-01

    The present review provides an understanding of our current knowledge of the carcinogenic effect of low-dose radiation in man, and surveys the epidemiological studies of human populations exposed to nuclear explosions and medical radiation. Discussion centers on the contributions of quantitative epidemiology to present knowledge, the reliability of the dose-incidence data, and those relevant epidemiological studies that provide the most useful information for risk estimation of cancer-induction in man. Reference is made to dose-incidence relationships from laboratory animal experiments where they may obtain, for problems and difficulties in extrapolation from data obtained at high doses to low doses, and from animal data to the human situation. The paper describes the methods of application of such epidemiological data for estimation of excess risk of radiation-induced cancer in exposed human populations, and discusses the strengths and limitations of epidemiology in guiding radiation protection philosophy and public health policy.

  18. Epidemiological studies on radiation carcinogenesis in human populations following acute exposure: nuclear explosions and medical radiation.

    PubMed Central

    Fabrikant, J. I.

    1981-01-01

    The present review provides an understanding of our current knowledge of the carcinogenic effect of low-dose radiation in man, and surveys the epidemiological studies of human populations exposed to nuclear explosions and medical radiation. Discussion centers on the contributions of quantitative epidemiology to present knowledge, the reliability of the dose-incidence data, and those relevant epidemiological studies that provide the most useful information for risk estimation of cancer induction in man. Reference is made to dose-incidence relationships from laboratory animal experiments where they may obtain, for problems and difficulties in extrapolation from data obtained at high doses to low doses, and from animal data to the human situation. The paper describes the methods of application of such epidemiological data for estimation of excess risk of radiation-induced cancer in exposed human populations and discusses the strengths and limitations of epidemiology in guiding radiation protection philosophy and public health policy. PMID:7043913

  19. A framework for nowcasting and forecasting of rainfall-triggered landslide activity using remotely sensed data

    NASA Astrophysics Data System (ADS)

    Kirschbaum, Dalia; Stanley, Thomas

    2016-04-01

    Remote sensing data offer a unique perspective for situational awareness of hydrometeorological hazards over large areas in a way that is impossible to achieve with in situ data. Recent work has shown that rainfall-triggered landslides, while typically local hazards that occupy small spatial areas, can be approximated over regional or global scales in near real-time. This work presents a regional and global approach to approximating potential landslide activity using the Landslide Hazard Assessment for Situational Awareness (LHASA) model. This system couples remote sensing data, including Global Precipitation Measurement rainfall data, Shuttle Radar Topography Mission topography, and other surface variables, to estimate where and when landslide activity may be likely. The system also evaluates the effectiveness of quantitative precipitation estimates from the Goddard Earth Observing System Model, Version 5, in providing a 24 h forecast of potential landslide activity. Preliminary results of the LHASA model, and their implications, are presented for a regional version of the system in Central America as well as for a prototype global approach.
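    A toy sketch of the kind of rainfall-plus-susceptibility decision rule such a system embodies. The antecedent rainfall index, decay factor, thresholds, and hazard categories below are illustrative assumptions, not the operational LHASA algorithm:

```python
import numpy as np

def antecedent_rainfall_index(rain, lam=0.8):
    """Exponentially weighted antecedent rainfall index over the last
    len(rain) days (most recent last); lam is an assumed decay factor."""
    w = lam ** np.arange(len(rain))[::-1]
    return float(np.sum(w * np.asarray(rain)))

def nowcast(ari, susceptibility, ari_threshold=25.0):
    """Toy decision rule: issue a nowcast only where rainfall is
    anomalously high AND the terrain is susceptible. Thresholds are
    illustrative, not the operational values."""
    if ari > ari_threshold and susceptibility == "high":
        return "high hazard"
    if ari > ari_threshold:
        return "moderate hazard"
    return "no nowcast"

print(nowcast(antecedent_rainfall_index([5, 0, 12, 40, 30]), "high"))
```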

  20. A Generalized Multi-Phase Framework for Modeling Cavitation in Cryogenic Fluids

    NASA Technical Reports Server (NTRS)

    Dorney, Dan (Technical Monitor); Hosangadi, Ashvin; Ahuja, Vineet

    2003-01-01

    A generalized multi-phase formulation for cavitation in fluids operating at temperatures close to their critical temperatures is presented. The thermal effects and the accompanying property variations due to phase change are modeled rigorously. Thermal equilibrium is assumed and fluid thermodynamic properties are specified along the saturation line using the NIST-12 databank. Fundamental changes in the physical characteristics of the cavity when thermal effects become pronounced are identified; the cavity becomes more porous, the interface becomes less distinct, and entrainment increases when temperature variations are present. Quantitative estimates of temperature and pressure depressions in both liquid nitrogen and liquid hydrogen were computed and compared with the experimental data of Hord for hydrofoils. Excellent estimates of the leading-edge temperature and pressure depression were obtained, while the comparisons in the cavity closure region were reasonable. Liquid nitrogen cavities were consistently found to be in thermal equilibrium, while liquid hydrogen cavities exhibited small, but distinct, non-equilibrium effects.

  1. A quantitative assessment of the risk for highly pathogenic avian influenza introduction into Spain via legal trade of live poultry.

    PubMed

    Sánchez-Vizcaíno, Fernando; Perez, Andrés; Lainez, Manuel; Sánchez-Vizcaíno, José Manuel

    2010-05-01

    Highly pathogenic avian influenza (HPAI) is considered one of the most important diseases of poultry. During the last 9 years, HPAI epidemics have been reported in Asia, the Americas, Africa, and in 18 countries of the European Union (EU). For that reason, it is possible that the risk for HPAI virus (HPAIV) introduction into Spain may have recently increased. Because of the EU free-trade policy and because legal trade of live poultry was considered an important route for HPAI spread in certain regions of the world, there are fears that Spain may become HPAIV-infected as a consequence of the legal introduction of live poultry. However, no quantitative assessment of the risk for HPAIV introduction into Spain or into any other EU member state via the trade of poultry has been published in the peer-reviewed literature. This article presents the results of the first quantitative assessment of the risk for HPAIV introduction into a free country via legal trade of live poultry, along with estimates of the geographical variation of the risk and of the relative contribution of exporting countries and susceptible poultry species to the risk. The annual mean risk for HPAI introduction into Spain was estimated to be as low as 1.36 × 10⁻³, suggesting that under prevailing conditions, introduction of HPAIV into Spain through the trade of live poultry is unlikely to occur. Moreover, these results support the hypothesis that legal trade of live poultry does not impose a significant risk for the spread of HPAI into EU member states.
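    A hedged sketch of how such an import-risk figure can be assembled by Monte Carlo simulation: per-country probabilities that a consignment carries the virus are sampled from assumed distributions and combined over the annual number of consignments. All inputs are invented placeholders, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000  # Monte Carlo iterations

# Illustrative (assumed) inputs per exporting country: number of live-bird
# consignments per year and a uniform range for the probability that a
# consignment carries HPAIV.
consignments = {"A": 120, "B": 45}
p_infected = {"A": (1e-6, 5e-6), "B": (5e-6, 2e-5)}

risk = np.zeros(N)
for country, n in consignments.items():
    lo, hi = p_infected[country]
    p = rng.uniform(lo, hi, N)
    # probability that at least one of the n consignments is infected,
    # combined across countries as 1 - product of escape probabilities
    risk = 1 - (1 - risk) * (1 - p) ** n

print(f"annual mean risk of introduction: {risk.mean():.2e}")
```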

  2. Connecting Taxon-Specific Microbial Activities to Carbon Cycling in the Rhizosphere

    NASA Astrophysics Data System (ADS)

    Hungate, B. A.; Morrissey, E.; Schwartz, E.; Dijkstra, P.; Blazewicz, S.; Pett-Ridge, J.; Koch, G. W.; Marks, J.; Koch, B.; McHugh, T. A.; Mau, R. L.; Hayer, M.

    2016-12-01

    Plant carbon inputs influence microbial growth in the rhizosphere, but the quantitative details of these effects are not well understood, nor are their consequences for carbon cycling in the rhizosphere. With a new pulse of carbon input to soil, which microbial taxa increase their growth rates, and by how much? Do any microbial taxa respond negatively? And how does the extra carbon addition alter the utilization of other resources, including other carbon sources as well as inorganic nitrogen? This talk will present new research using quantitative stable isotope probing that reveals the distribution of growth responses among microbial taxa, from positive to neutral to negative, and how these growth responses are associated with various substrates. For example, decomposition of soil C in response to added labile carbon occurred as a phylogenetically diverse majority of taxa shifted toward soil C use for growth. In contrast, bacteria with suppressed growth or that relied directly on glucose for growth clustered strongly by phylogeny. These results suggest that priming is a prototypical response of bacteria to sustained labile C addition, consistent with the widespread occurrence of the priming effect in nature. These results also illustrate the potential power of molecular tools and models that seek to estimate metrics directly relevant to quantitative ecology and biogeochemistry, more so than is currently standard in microbial ecology. Tools that estimate growth rate, mortality rate, and rates of substrate use, all quantified with the taxonomic precision afforded by modern sequencing, provide a foundation for quantifying the biogeochemical significance of microbial biodiversity, and a more complete understanding of the rich ecosystem of the rhizosphere.

  3. A fast signal subspace approach for the determination of absolute levels from phased microphone array measurements

    NASA Astrophysics Data System (ADS)

    Sarradj, Ennes

    2010-04-01

    Phased microphone arrays are used in a variety of applications for the estimation of acoustic source location and spectra. The popular conventional delay-and-sum beamforming methods used with such arrays suffer from inaccurate estimation of absolute source levels and, in some cases, also from low resolution. Deconvolution approaches such as DAMAS have better performance, but require high computational effort. A fast beamforming method is proposed that can be used in conjunction with a phased microphone array in applications focused on the correct quantitative estimation of acoustic source spectra. This method is based on an eigenvalue decomposition of the cross spectral matrix of microphone signals and uses the eigenvalues from the signal subspace to estimate absolute source levels. The theoretical basis of the method is discussed together with an assessment of the quality of the estimation. Experimental tests using a loudspeaker setup and an airfoil trailing-edge noise setup in an aeroacoustic wind tunnel show that the proposed method is robust and leads to reliable quantitative results.
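    A simplified numerical sketch of the idea, an eigendecomposition of the cross spectral matrix with levels taken from the signal-subspace eigenvalues; this illustrates the principle only, not Sarradj's exact estimator:

```python
import numpy as np

def subspace_source_levels(csm, steering, n_sources):
    """Keep the n_sources largest eigenvalues of the cross spectral
    matrix (CSM) as the signal subspace and attribute each eigen-component
    a level by beamforming with unit-norm steering vectors
    (steering: grid_points x mics)."""
    w, v = np.linalg.eigh(csm)            # eigenvalues in ascending order
    w, v = w[::-1], v[:, ::-1]            # reorder to descending
    levels = np.zeros(steering.shape[0])
    for k in range(n_sources):
        b = np.abs(steering.conj() @ v[:, k]) ** 2 * w[k]
        levels[np.argmax(b)] += b.max()   # assign to best-matching point
    return levels

# Tiny demo: one unit-power source at grid point 3 of 5, with 4 mics.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 4)) + 1j * rng.normal(size=(5, 4))
H /= np.linalg.norm(H, axis=1, keepdims=True)
csm = np.outer(H[3], H[3].conj())         # rank-one CSM of that source
print(subspace_source_levels(csm, H, n_sources=1))
```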

  4. Development of advanced methods for analysis of experimental data in diffusion

    NASA Astrophysics Data System (ADS)

    Jaques, Alonso V.

    There are numerous experimental configurations and data analysis techniques for the characterization of diffusion phenomena. However, the mathematical methods for estimating diffusivities traditionally do not take into account the effects of experimental errors in the data, and often require smooth, noiseless data sets to perform the necessary analysis steps. The current methods used for data smoothing require strong assumptions which can introduce numerical "artifacts" into the data, affecting confidence in the estimated parameters. The Boltzmann-Matano method is used extensively in the determination of concentration-dependent diffusivities, D(C), in alloys. In the course of analyzing experimental data, numerical integrations and differentiations of the concentration profile are performed. These methods require smoothing of the data prior to analysis. We present here an approach to the Boltzmann-Matano method that is based on a regularization method for estimating the differentiation of the data, i.e., the concentration gradient term, which is central to determining the diffusivity. This approach, therefore, has the potential to be less subjective, and in numerical simulations it shows an increased accuracy in the estimated diffusion coefficients. We present a regression approach to estimate linear multicomponent diffusion coefficients that eliminates the need to pre-treat or pre-condition the concentration profile. This approach fits the data to a functional form of the mathematical expression for the concentration profile, and allows us to determine the diffusivity matrix directly from the fitted parameters. The equation for the analytical solution is reformulated in order to reduce the size of the problem and accelerate the convergence. The objective function for the regression can incorporate point estimations of the error in the concentration, improving the statistical confidence in the estimated diffusivity matrix. Case studies are presented to demonstrate the reliability and stability of the method. To the best of our knowledge, there is no published analysis of the effects of experimental errors on the reliability of the estimates for the diffusivities. For the case of linear multicomponent diffusion, we analyze the effects of the instrument's analytical spot size, positioning uncertainty, and concentration uncertainty on the resulting values of the diffusivities. These effects are studied using the Monte Carlo method on simulated experimental data. Several useful scaling relationships were identified which allow more rigorous and quantitative estimates of the errors in the measured data, and are valuable for experimental design. To further analyze anomalous diffusion processes, where traditional diffusional transport equations do not hold, we explore the use of fractional calculus to represent these processes analytically. We use the fractional-calculus approach for anomalous diffusion processes occurring through a finite plane sheet with one face held at a fixed concentration, the other held at zero, and the initial concentration within the sheet equal to zero. This problem is related to cases in nature where diffusion is enhanced relative to the classical process, and the governing equation is not necessarily of second order; that is, differentiation is of fractional order alpha, where 1 ≤ alpha < 2. For alpha = 2, the presented solutions reduce to the classical second-order diffusion solution for the conditions studied. The solution obtained allows the analysis of permeation experiments. Frequently, hydrogen diffusion is analyzed using electrochemical permeation methods and the traditional, Fickian-based theory. Experimental evidence shows the latter analytical approach is not always appropriate, because reported data show qualitative (and quantitative) deviations from its theoretical scaling predictions. Preliminary analysis of data shows better agreement with fractional diffusion analysis than with traditional square-root scaling. Although there is a large amount of work on the estimation of the diffusivity from experimental data, reported studies typically present only the analytical description of the diffusivity, without its scatter. However, because these studies do not consider effects produced by the instrument analysis, their direct applicability is limited. We propose alternatives to address these effects and to evaluate their influence on the final diffusivity values.
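    For reference, the classical Boltzmann-Matano evaluation that the regularized gradient estimate feeds into is compact enough to sketch directly; the fragile step is exactly the dC/dx term discussed above. A plain (unregularized) version, assuming the Matano plane is already located at x = 0:

```python
import numpy as np
from scipy.special import erfc

def boltzmann_matano(x, c, t):
    """Classical Boltzmann-Matano evaluation of D(C) from a profile c(x)
    at time t: D(C*) = -(1/2t) * (dx/dC) * integral of x dC up to C*.
    The numerical dC/dx term is what regularization is meant to
    stabilize on noisy data."""
    dcdx = np.gradient(c, x)
    D = np.full_like(c, np.nan)
    for i in range(1, len(c)):
        integral = np.trapz(x[: i + 1], c[: i + 1])   # integral of x dC
        if dcdx[i] != 0:
            D[i] = -integral / (2.0 * t * dcdx[i])
    return D

# Synthetic check: a constant-D erfc profile should return ~D0 mid-profile.
t, D0 = 3600.0, 1e-14                      # s, m^2/s
x = np.linspace(-50e-6, 50e-6, 201)        # m
c = 0.5 * erfc(x / (2 * np.sqrt(D0 * t)))
print(np.nanmean(boltzmann_matano(x, c, t)[90:110]))  # ~1e-14
```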

  5. New Performance Metrics for Quantitative Polymerase Chain Reaction-Based Microbial Source Tracking Methods

    EPA Science Inventory

    Binary sensitivity and specificity metrics are not adequate to describe the performance of quantitative microbial source tracking methods because the estimates depend on the amount of material tested and limit of detection. We introduce a new framework to compare the performance ...

  6. 40 CFR 125.95 - As an owner or operator of a Phase II existing facility, what must I collect and submit when I...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... appropriate for a quantitative survey and include consideration of the methods used in other studies performed... basis for any assumptions and quantitative estimates. If you plan to use an entrainment survival rate...

  7. 40 CFR 125.95 - As an owner or operator of a Phase II existing facility, what must I collect and submit when I...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... appropriate for a quantitative survey and include consideration of the methods used in other studies performed... basis for any assumptions and quantitative estimates. If you plan to use an entrainment survival rate...

  8. 40 CFR 125.95 - As an owner or operator of a Phase II existing facility, what must I collect and submit when I...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... appropriate for a quantitative survey and include consideration of the methods used in other studies performed... basis for any assumptions and quantitative estimates. If you plan to use an entrainment survival rate...

  9. 40 CFR 125.95 - As an owner or operator of a Phase II existing facility, what must I collect and submit when I...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... appropriate for a quantitative survey and include consideration of the methods used in other studies performed... basis for any assumptions and quantitative estimates. If you plan to use an entrainment survival rate...

  10. 40 CFR 125.95 - As an owner or operator of a Phase II existing facility, what must I collect and submit when I...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... appropriate for a quantitative survey and include consideration of the methods used in other studies performed... basis for any assumptions and quantitative estimates. If you plan to use an entrainment survival rate...

  11. A theoretical/experimental program to develop active optical pollution sensors: Quantitative remote Raman lidar measurements of pollutants from stationary sources

    NASA Technical Reports Server (NTRS)

    Poultney, S. K.; Brumfield, M. L.; Siviter, J. S.

    1975-01-01

    Typical pollutant gas concentrations at the stack exits of stationary sources can be estimated to be about 500 ppm under the present emission standards. Raman lidar has a number of advantages which make it a valuable tool for remote measurements of these stack emissions. Tests of the Langley Research Center Raman lidar at a calibration tank indicate that night measurements of SO2 concentrations and stack opacity are possible. Accuracies of 10 percent are shown to be achievable from a distance of 300 m within 30 min integration times for 500 ppm SO2 at the stack exits. All possible interferences were examined quantitatively (except for the fluorescence of aerosols in actual stack emissions) and found to have a negligible effect on the measurements. An early test at an instrumented stack is strongly recommended.

  12. Natural extension of fast-slow decomposition for dynamical systems

    NASA Astrophysics Data System (ADS)

    Rubin, J. E.; Krauskopf, B.; Osinga, H. M.

    2018-01-01

    Modeling and parameter estimation to capture the dynamics of physical systems are often challenging because many parameters can range over orders of magnitude and are difficult to measure experimentally. Moreover, selecting a suitable model complexity requires a sufficient understanding of the model's potential use, such as highlighting essential mechanisms underlying qualitative behavior or precisely quantifying realistic dynamics. We present an approach that can guide model development and tuning to achieve desired qualitative and quantitative solution properties. It relies on the presence of disparate time scales and employs techniques of separating the dynamics of fast and slow variables, which are well known in the analysis of qualitative solution features. We build on these methods to show how it is also possible to obtain quantitative solution features by imposing designed dynamics for the slow variables in the form of specified two-dimensional paths in a bifurcation-parameter landscape.

  13. Recent concepts in missions to Mars - Extraterrestrial processes

    NASA Technical Reports Server (NTRS)

    Ramohalli, K. N.; Ash, R. L.; Lawton, E. A.; French, J. R.; Frisbee, R. H.

    1986-01-01

    This paper presents some recent concepts for Mars Sample Return (MSR) missions that utilize extraterrestrial resources. The concepts examined include the power and energy needs of the mission. It is shown that solar energy is not especially attractive. A radioisotope power generator and the use of a Rankine cycle are seen to be viable options. Quantitative estimates, taking into consideration state-of-the-art and projected technologies, indicate that the power/energy per se is not critical to the mission, but reliability is. Hence, various modern options for the components of power generation and utilization are discussed. The dramatic savings in Shuttle (or other) vehicle launches are quantitatively plotted. The basic system discussed here is the production of hydrocarbon (methane) fuel and oxygen from the Martian atmosphere. For the simplest mission, it is seen that earth-carried methane burned with oxygen produced on site provides the best system.

  14. Can genetics help psychometrics? Improving dimensionality assessment through genetic factor modeling.

    PubMed

    Franić, Sanja; Dolan, Conor V; Borsboom, Denny; Hudziak, James J; van Beijsterveldt, Catherina E M; Boomsma, Dorret I

    2013-09-01

    In the present article, we discuss the role that quantitative genetic methodology may play in assessing and understanding the dimensionality of psychological (psychometric) instruments. Specifically, we study the relationship between the observed covariance structures, on the one hand, and the underlying genetic and environmental influences giving rise to such structures, on the other. We note that this relationship may be such that it hampers obtaining a clear estimate of dimensionality using standard tools for dimensionality assessment alone. One situation in which dimensionality assessment may be impeded is that in which genetic and environmental influences, of which the observed covariance structure is a function, differ from each other in structure and dimensionality. We demonstrate that in such situations settling dimensionality issues may be problematic, and propose using quantitative genetic modeling to uncover the (possibly different) dimensionalities of the underlying genetic and environmental structures. We illustrate using simulations and an empirical example on childhood internalizing problems.

  15. A practical guide to value of information analysis.

    PubMed

    Wilson, Edward C F

    2015-02-01

    Value of information analysis is a quantitative method to estimate the return on investment in proposed research projects. It can be used in a number of ways. Funders of research may find it useful to rank projects in terms of the expected return on investment from a variety of competing projects. Alternatively, trialists can use the principles to identify the efficient sample size of a proposed study as an alternative to traditional power calculations, and finally, a value of information analysis can be conducted alongside an economic evaluation as a quantitative adjunct to the 'future research' or 'next steps' section of a study write-up. The purpose of this paper is to present a brief introduction to the methods, a step-by-step guide to calculation and a discussion of issues that arise in their application to healthcare decision making. Worked examples are provided in the accompanying online appendices as Microsoft Excel spreadsheets.
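    The per-person EVPI computation at the heart of such an analysis is a two-line Monte Carlo identity: the expected net benefit under perfect information minus the expected net benefit of the best decision under current information. A sketch with invented cost/QALY distributions (the accompanying spreadsheets, not this code, are the paper's worked examples):

```python
import numpy as np

rng = np.random.default_rng(42)
K = 20_000  # willingness-to-pay threshold per QALY; an assumed value

# Monte Carlo samples of cost and effect for two strategies
# (illustrative distributions, not data from any real trial):
n = 10_000
cost = np.column_stack([rng.normal(1000, 200, n), rng.normal(1500, 300, n)])
qaly = np.column_stack([rng.normal(0.70, 0.05, n), rng.normal(0.73, 0.05, n)])
nb = K * qaly - cost  # net monetary benefit per strategy

# EVPI = E[max over decisions] - max over decisions of E[NB]
evpi_per_person = nb.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"per-person EVPI: {evpi_per_person:.0f}")
```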

  16. Mechanism on brain information processing: Energy coding

    NASA Astrophysics Data System (ADS)

    Wang, Rubin; Zhang, Zhikang; Jiao, Xianfa

    2006-09-01

    According to the experimental result that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, the authors present a brand new scientific theory that offers a unique mechanism for brain information processing. They demonstrate that the neural coding produced by the activity of the brain is well described by the theory of energy coding. Because the energy coding model can reveal mechanisms of brain information processing based upon known biophysical properties, they can not only reproduce various experimental results of neuroelectrophysiology but also quantitatively explain the recent experimental results from neuroscientists at Yale University by means of the principle of energy coding. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, they estimate that the theory has very important consequences for quantitative research on cognitive function.

  17. Quantitative structure-toxicity relationship (QSTR) studies on the organophosphate insecticides.

    PubMed

    Can, Alper

    2014-11-04

    Organophosphate insecticides are the most commonly used pesticides in the world. In this study, quantitative structure-toxicity relationship (QSTR) models were derived for estimating the acute oral toxicity of organophosphate insecticides to male rats. The 20 chemicals of the training set and the seven compounds of the external test set were described by means of molecular descriptors. Descriptors for lipophilicity, polarity and molecular geometry, as well as quantum chemical descriptors for energy, were calculated. Model development to predict the toxicity of organophosphate insecticides in different matrices was carried out using multiple linear regression. The model was validated internally and externally. In the present study, a QSTR model was used for the first time to understand the inherent relationships between organophosphate insecticide molecules and their toxicity behavior. Such studies provide mechanistic insight into structure-toxicity relationships and help in the design of less toxic insecticides. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
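    The multiple-linear-regression step is standard; a minimal sketch with random placeholder descriptors and toxicity values (so the reported scores are meaningless), just to show the train/external-validation mechanics:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical descriptor matrix (e.g., logP, dipole moment, E_HOMO, ...)
# and log(1/LD50) toxicity values; all numbers are made up for illustration.
rng = np.random.default_rng(7)
X_train, y_train = rng.normal(size=(20, 4)), rng.normal(size=20)
X_test, y_test = rng.normal(size=(7, 4)), rng.normal(size=7)

model = LinearRegression().fit(X_train, y_train)     # the MLR step
print("internal R2:", r2_score(y_train, model.predict(X_train)))
print("external R2:", r2_score(y_test, model.predict(X_test)))  # validation
```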

  18. Speckle noise reduction in quantitative optical metrology techniques by application of the discrete wavelet transformation

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2002-06-01

    Effective suppression of speckle noise content in interferometric data images can help in improving accuracy and resolution of the results obtained with interferometric optical metrology techniques. In this paper, novel speckle noise reduction algorithms based on the discrete wavelet transformation are presented. The algorithms proceed by: (a) estimating the noise level contained in the interferograms of interest, (b) selecting wavelet families, (c) applying the wavelet transformation using the selected families, (d) wavelet thresholding, and (e) applying the inverse wavelet transformation, producing denoised interferograms. The algorithms are applied to the different stages of the processing procedures utilized for generation of quantitative speckle correlation interferometry data of fiber-optic based opto-electronic holography (FOBOEH) techniques, allowing identification of optimal processing conditions. It is shown that wavelet algorithms are effective for speckle noise reduction while preserving image features otherwise faded with other algorithms.
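    A compact sketch of the (a)-(e) pipeline using the common MAD noise estimate and universal soft threshold; the paper's wavelet-family and threshold selection may differ:

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=3):
    """Wavelet shrinkage: estimate the noise level from the finest
    diagonal detail band (MAD estimator), soft-threshold all detail
    coefficients with the universal threshold, and invert."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # (a) noise level
    uthresh = sigma * np.sqrt(2 * np.log(img.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, uthresh, mode="soft") for d in detail)
        for detail in coeffs[1:]                          # (d) thresholding
    ]
    return pywt.waverec2(denoised, wavelet)               # (e) inverse

noisy = np.random.default_rng(1).normal(size=(128, 128))
print(wavelet_denoise(noisy).std())  # reduced relative to the input
```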

  19. Energy coding in biological neural networks

    PubMed Central

    Zhang, Zhikang

    2007-01-01

    According to the experimental result that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, we present a brand new scientific theory that offers a unique mechanism for brain information processing. We demonstrate that the neural coding produced by the activity of the brain is well described by our theory of energy coding. Because the energy coding model can reveal mechanisms of brain information processing based upon known biophysical properties, we can not only reproduce various experimental results of neuro-electrophysiology, but also quantitatively explain the recent experimental results from neuroscientists at Yale University by means of the principle of energy coding. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, we estimate that the theory has very important consequences for quantitative research of cognitive function. PMID:19003513

  20. Simultaneous determination of effective carrier lifetime and resistivity of Si wafers using the nonlinear nature of photocarrier radiometric signals

    NASA Astrophysics Data System (ADS)

    Sun, Qiming; Melnikov, Alexander; Wang, Jing; Mandelis, Andreas

    2018-04-01

    A rigorous treatment of the nonlinear behavior of photocarrier radiometric (PCR) signals is presented theoretically and experimentally for the quantitative characterization of semiconductor photocarrier recombination and transport properties. A frequency-domain model based on the carrier rate equation and the classical carrier radiative recombination theory was developed. The derived concise expression reveals different functionalities of the PCR amplitude and phase channels: the phase bears a direct quantitative correlation with the carrier effective lifetime, while the dependence of the amplitude on the estimated photocarrier density can be used to extract the equilibrium majority carrier density and thus the resistivity. An experimental ‘ripple’ optical excitation mode (small modulation depth compared to the dc level) was introduced to bypass the complicated ‘modulated lifetime’ problem so as to simplify theoretical interpretation and guarantee measurement self-consistency and reliability. Two Si wafers with known resistivity values were tested to validate the method.

  1. The effect of the water on the curcumin tautomerism: A quantitative approach

    NASA Astrophysics Data System (ADS)

    Manolova, Yana; Deneva, Vera; Antonov, Liudmil; Drakalska, Elena; Momekova, Denitsa; Lambov, Nikolay

    2014-11-01

    The tautomerism of curcumin has been investigated in ethanol/water binary mixtures by using UV-Vis spectroscopy and advanced quantum-chemical calculations. The spectral changes were processed by using an advanced chemometric procedure based on a resolution-of-overlapping-bands technique. As a result, the molar fractions of the tautomers and their individual spectra have been estimated. It has been shown that in ethanol only the enol-keto tautomer is present. The addition of water leads to the appearance of a new spectral band, which was assigned to the diketo tautomeric form. The results show that in 90% water/10% ethanol the diketo form dominates. The observed shift in the equilibrium is explained by the quantum-chemical calculations, which show that water molecules stabilize the diketo tautomer through the formation of stable complexes. To the best of our knowledge, we report for the first time quantitative data on the tautomerism of curcumin and the effect of water.

  2. Performance prediction of electrohydrodynamic thrusters by the perturbation method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shibata, H., E-mail: shibata@daedalus.k.u-tokyo.ac.jp; Watanabe, Y.; Suzuki, K.

    2016-05-15

    In this paper, we present a novel method for analyzing electrohydrodynamic (EHD) thrusters. The method is based on a perturbation technique applied to a set of drift-diffusion equations, similar to the one introduced in our previous study on estimating breakdown voltage. The thrust-to-current ratio is generalized to represent the performance of EHD thrusters. We have compared the thrust-to-current ratio obtained theoretically with that obtained from the proposed method under atmospheric air conditions, and we have obtained good quantitative agreement. Also, we have conducted numerical simulations in more complex thruster geometries, such as the dual-stage thruster developed by Masuyama and Barrett [Proc. R. Soc. A 469, 20120623 (2013)]. We quantitatively clarify the fact that if the magnitude of a third-electrode voltage is low, the effective gap distance shortens, whereas if the magnitude of the third-electrode voltage is sufficiently high, the effective gap distance lengthens.
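    For context, the ideal one-dimensional relation behind the thrust-to-current ratio is T/I = d/μ (electrode gap over ion mobility); changes in the "effective gap distance" shift this baseline. A one-liner with an assumed mobility for air ions:

```python
def thrust_to_current(gap_m, mobility=2e-4):
    """Ideal 1-D EHD relation T/I = d/mu: thrust per unit current equals
    electrode gap over ion mobility. mu ~ 2e-4 m^2/(V s) is an assumed
    value for positive ions in atmospheric air."""
    return gap_m / mobility

print(thrust_to_current(0.05), "N/A for a 5 cm gap")  # ~250 N per ampere
```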

  3. Semi-quantitative estimation by IR of framework, extraframework and defect Al species of HBEA zeolites.

    PubMed

    Marques, João P; Gener, Isabelle; Ayrault, Philippe; Lopes, José M; Ribeiro, F Ramôa; Guisnet, Michel

    2004-10-21

    A simple method based on the characterization (composition, Brønsted and Lewis acidities) of acid-treated HBEA zeolites was developed for estimating the concentrations of framework, extraframework and defect Al species.

  4. Quantitative Compactness Estimates for Hamilton-Jacobi Equations

    NASA Astrophysics Data System (ADS)

    Ancona, Fabio; Cannarsa, Piermarco; Nguyen, Khai T.

    2016-02-01

    We study quantitative compactness estimates in $W^{1,1}_{\mathrm{loc}}$ for the map $S_t$, $t > 0$, that associates with given initial data $u_0 \in \mathrm{Lip}(\mathbb{R}^N)$ the corresponding solution $S_t u_0$ of a Hamilton-Jacobi equation $u_t + H(\nabla_x u) = 0$, $t \ge 0$, $x \in \mathbb{R}^N$, with a uniformly convex Hamiltonian $H = H(p)$. We provide upper and lower estimates of order $1/\varepsilon^N$ on the Kolmogorov $\varepsilon$-entropy in $W^{1,1}$ of the image through the map $S_t$ of sets of bounded, compactly supported initial data. Estimates of this type are inspired by a question posed by Lax (Course on Hyperbolic Systems of Conservation Laws. XXVII Scuola Estiva di Fisica Matematica, Ravello, 2002) within the context of conservation laws, and could provide a measure of the order of "resolution" of a numerical method implemented for this equation.

  5. Reliability and precision of pellet-group counts for estimating landscape-level deer density

    Treesearch

    David S. deCalesta

    2013-01-01

    This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...

  6. Robust human machine interface based on head movements applied to assistive robotics.

    PubMed

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate its performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.
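    The minimum-variance fusion of two independent orientation estimates reduces to inverse-variance weighting; a generic sketch of the fusion idea (the paper's estimator may additionally model head dynamics):

```python
def fuse(theta_imu, var_imu, theta_vision, var_vision):
    """Minimum-variance (inverse-variance weighted) fusion of two
    independent estimates of head orientation; the fused variance is
    never worse than either input."""
    w1, w2 = 1.0 / var_imu, 1.0 / var_vision
    theta = (w1 * theta_imu + w2 * theta_vision) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return theta, var

# Illustrative numbers: a noisier inertial estimate and a sharper visual one.
print(fuse(theta_imu=12.0, var_imu=4.0, theta_vision=10.0, var_vision=1.0))
```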

  7. Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics

    PubMed Central

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate its performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877

  8. Assessing the risk posed by natural hazards to infrastructures

    NASA Astrophysics Data System (ADS)

    Eidsvig, Unni Marie K.; Kristensen, Krister; Vidar Vangelsten, Bjørn

    2017-03-01

    This paper proposes a model for assessing the risk posed by natural hazards to infrastructures, with a focus on the indirect losses and loss of stability for the population relying on the infrastructure. The model prescribes a three-level analysis with increasing level of detail, moving from qualitative to quantitative analysis. The focus is on a methodology for semi-quantitative analyses to be performed at the second level. The purpose of this type of analysis is to perform a screening of the scenarios of natural hazards threatening the infrastructures, identifying the most critical scenarios and investigating the need for further analyses (third level). The proposed semi-quantitative methodology considers the frequency of the natural hazard, different aspects of vulnerability, including the physical vulnerability of the infrastructure itself, and the societal dependency on the infrastructure. An indicator-based approach is applied, ranking the indicators on a relative scale according to pre-defined ranking criteria. The proposed indicators, which characterise conditions that influence the probability of an infrastructure malfunctioning caused by a natural event, are defined as (1) robustness and buffer capacity, (2) level of protection, (3) quality/level of maintenance and renewal, (4) adaptability and quality of operational procedures and (5) transparency/complexity/degree of coupling. Further indicators describe conditions influencing the socio-economic consequences of the infrastructure malfunctioning, such as (1) redundancy and/or substitution, (2) cascading effects and dependencies, (3) preparedness and (4) early warning, emergency response and measures. The aggregated risk estimate is a combination of the semi-quantitative vulnerability indicators, as well as quantitative estimates of the frequency of the natural hazard, the potential duration of the infrastructure malfunctioning (e.g. depending on the required restoration effort) and the number of users of the infrastructure. Case studies for two Norwegian municipalities are presented for demonstration purposes, where risk posed by adverse weather and natural hazards to primary road, water supply and power networks is assessed. The application examples show that the proposed model provides a useful tool for screening of potential undesirable events, contributing to a targeted reduction of the risk.
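    As a toy illustration of how ordinal indicator scores could be rolled up with a frequency class into a screening-level risk figure; the additive/multiplicative form and the equal weighting are invented for illustration, not the paper's aggregation rule:

```python
def semi_quantitative_risk(freq_score, vulnerability_scores,
                           duration_score, users_score):
    """Toy second-level screening aggregation: combine a frequency class
    with averaged vulnerability indicators and two consequence factors.
    All inputs are ordinal scores (e.g. 1-5); the functional form is an
    assumption made for this sketch."""
    v = sum(vulnerability_scores) / len(vulnerability_scores)
    return freq_score * v * (duration_score + users_score) / 2

# robustness, protection, maintenance, adaptability, complexity (assumed):
print(semi_quantitative_risk(3, [2, 4, 3, 2, 3], duration_score=4,
                             users_score=5))
```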

  9. Occupational exposure decisions: can limited data interpretation training help improve accuracy?

    PubMed

    Logan, Perry; Ramachandran, Gurumurthy; Mulhausen, John; Hewett, Paul

    2009-06-01

    Accurate exposure assessments are critical for ensuring that potentially hazardous exposures are properly identified and controlled. The availability and accuracy of exposure assessments can determine whether resources are appropriately allocated to engineering and administrative controls, medical surveillance, personal protective equipment and other programs designed to protect workers. A desktop study was performed using videos, task information and sampling data to evaluate the accuracy and potential bias of participants' exposure judgments. Desktop exposure judgments were obtained from occupational hygienists for material handling jobs with small air sampling data sets (0-8 samples) and without the aid of computers. In addition, data interpretation tests (DITs) were administered to participants, in which they were asked to estimate the 95th percentile of an underlying log-normal exposure distribution from small data sets. Participants were presented with exposure data interpretation, or 'rule of thumb', training, which included a simple set of rules for estimating 95th percentiles from small data sets drawn from a log-normal population. The DIT was given to each participant before and after the rule of thumb training. Results of each DIT and of the qualitative and quantitative exposure judgments were compared with a reference judgment obtained through a Bayesian probabilistic analysis of the sampling data to investigate overall judgment accuracy and bias. There were a total of 4386 participant-task-chemical judgments for all data collections: 552 qualitative judgments made without sampling data and 3834 quantitative judgments with sampling data. The DITs and quantitative judgments were significantly better than random chance and much improved by the rule of thumb training. In addition, the rule of thumb training reduced the amount of bias in the DITs and quantitative judgments. The mean DIT percent-correct scores increased from 47 to 64% after the rule of thumb training (P < 0.001). The accuracy of quantitative desktop judgments increased from 43 to 63% correct after the rule of thumb training (P < 0.001). The rule of thumb training did not significantly impact accuracy for qualitative desktop judgments. The finding that even simple statistical rules of thumb significantly improve judgment accuracy suggests that hygienists should routinely use statistical tools when making exposure judgments from monitoring data.
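    The quantity participants were asked to estimate, the 95th percentile of a log-normal exposure distribution, has a compact parametric estimate; a sketch with invented sampling results:

```python
import numpy as np

def x95_lognormal(samples):
    """Parametric estimate of the 95th percentile of a log-normal
    exposure distribution from a small data set:
    exp(mean + 1.645 * sd) of the log-transformed concentrations
    (z = 1.645 is the standard-normal 95th percentile)."""
    logs = np.log(samples)
    return float(np.exp(logs.mean() + 1.645 * logs.std(ddof=1)))

# Illustrative air-sampling results (assumed values, mg/m^3):
data = np.array([0.12, 0.30, 0.08, 0.21, 0.15])
print(f"estimated X95: {x95_lognormal(data):.2f} mg/m^3")
# The judgment step then compares X95 to the OEL, e.g. X95/OEL < 1.
```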

  10. Simultaneous estimation of diet composition and calibration coefficients with fatty acid signature data

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; Budge, Suzanne M.; Thiemann, Gregory W.; Rode, Karyn D.

    2017-01-01

    Knowledge of animal diets provides essential insights into their life history and ecology, although diet estimation is challenging and remains an active area of research. Quantitative fatty acid signature analysis (QFASA) has become a popular method of estimating diet composition, especially for marine species. A primary assumption of QFASA is that constants called calibration coefficients, which account for the differential metabolism of individual fatty acids, are known. In practice, however, calibration coefficients are not known, but rather have been estimated in feeding trials with captive animals of a limited number of model species. The impossibility of verifying the accuracy of feeding trial derived calibration coefficients to estimate the diets of wild animals is a foundational problem with QFASA that has generated considerable criticism. We present a new model that allows simultaneous estimation of diet composition and calibration coefficients based only on fatty acid signature samples from wild predators and potential prey. Our model performed almost flawlessly in four tests with constructed examples, estimating both diet proportions and calibration coefficients with essentially no error. We also applied the model to data from Chukchi Sea polar bears, obtaining diet estimates that were more diverse than estimates conditioned on feeding trial calibration coefficients. Our model avoids bias in diet estimates caused by conditioning on inaccurate calibration coefficients, invalidates the primary criticism of QFASA, eliminates the need to conduct feeding trials solely for diet estimation, and consequently expands the utility of fatty acid data to investigate aspects of ecology linked to animal diets.
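    A minimal sketch of the conventional QFASA step the new model generalizes: find simplex-constrained diet proportions whose calibration-adjusted prey mixture best matches the predator signature. Least squares stands in for the distance measure, and calibration coefficients are fixed here, whereas the model described above estimates them jointly:

```python
import numpy as np
from scipy.optimize import minimize

def qfasa_diet(predator_sig, prey_sigs, cal_coeffs):
    """QFASA-style diet estimate: diet proportions p on the simplex whose
    prey mixture best matches the calibration-adjusted predator fatty
    acid signature. Published QFASA typically uses other distance
    measures; plain least squares is used here for brevity."""
    adj = predator_sig / cal_coeffs
    adj /= adj.sum()
    n = prey_sigs.shape[0]

    def loss(p):
        mix = p @ prey_sigs
        return np.sum((adj - mix / mix.sum()) ** 2)

    cons = ({"type": "eq", "fun": lambda p: p.sum() - 1.0},)
    res = minimize(loss, np.full(n, 1.0 / n), bounds=[(0, 1)] * n,
                   constraints=cons, method="SLSQP")
    return res.x

# Toy data: 2 prey types, 4 fatty acids (rows sum to 1); assumed numbers.
prey = np.array([[0.4, 0.3, 0.2, 0.1],
                 [0.1, 0.2, 0.3, 0.4]])
predator = np.array([0.25, 0.25, 0.25, 0.25])
print(qfasa_diet(predator, prey, cal_coeffs=np.ones(4)))  # ~[0.5, 0.5]
```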

  11. Evaluating a multi-criteria model for hazard assessment in urban design. The Porto Marghera case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luria, Paolo; Aspinall, Peter A

    2003-08-01

    The aim of this paper is to describe a new approach to major industrial hazard assessment, which has been recently studied by the authors in conjunction with the Italian Environmental Protection Agency ('ARPAV'). The real opportunity for developing a different approach arose from the need of the Italian EPA to provide the Venice Port Authority with an appropriate estimation of major industrial hazards in Porto Marghera, an industrial estate near Venice (Italy). However, the standard model, quantitative risk analysis (QRA), only provided a list of individual quantitative risk values related to single locations. The experimental model is based on a multi-criteria approach, the Analytic Hierarchy Process, which introduces the use of expert opinions, complementary skills and expertise from different disciplines in conjunction with traditional quantitative analysis. This permitted the generation of quantitative data on risk assessment from a series of qualitative assessments, on the present situation and on three future scenarios, and the use of this information as indirect quantitative measures that could be aggregated to obtain the global risk rate. This approach is in line with the main concepts proposed by the latest European directive on major hazard accidents, which recommends increasing the participation of operators, taking the other players into account and, moreover, paying more attention to the concepts of 'urban control', 'subjective risk' (risk perception) and intangible factors (factors not directly quantifiable).
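    The AHP machinery referred to above derives priority weights as the principal eigenvector of a reciprocal pairwise-comparison matrix; a generic sketch with illustrative judgments:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix via its
    principal eigenvector (Saaty's method). Entries use the usual 1-9
    judgment scale; also returns lambda_max for consistency checking."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

# Example 3x3 reciprocal matrix (illustrative expert judgments):
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, lam = ahp_weights(A)
ci = (lam - len(A)) / (len(A) - 1)  # consistency index
print(np.round(w, 3), f"lambda_max={lam:.3f}, CI={ci:.3f}")
```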

  12. Autocorrelation structure of convective rainfall in semiarid-arid climate derived from high-resolution X-Band radar estimates

    NASA Astrophysics Data System (ADS)

    Marra, Francesco; Morin, Efrat

    2018-02-01

    Small scale rainfall variability is a key factor driving runoff response in fast responding systems, such as mountainous, urban and arid catchments. In this paper, the spatial-temporal autocorrelation structure of convective rainfall is derived with extremely high resolutions (60 m, 1 min) using estimates from an X-Band weather radar recently installed in a semiarid-arid area. The 2-dimensional spatial autocorrelation of convective rainfall fields and the temporal autocorrelation of point-wise and distributed rainfall fields are examined. The autocorrelation structures are characterized by spatial anisotropy, correlation distances 1.5-2.8 km and rarely exceeding 5 km, and time-correlation distances 1.8-6.4 min and rarely exceeding 10 min. The observed spatial variability is expected to negatively affect estimates from rain gauges and microwave links rather than satellite and C-/S-Band radars; conversely, the temporal variability is expected to negatively affect remote sensing estimates rather than rain gauges. The presented results provide quantitative information for stochastic weather generators, cloud-resolving models, dryland hydrologic and agricultural models, and multi-sensor merging techniques.

  13. Estimation of prediagnostic duration of type 2 diabetes mellitus by lens autofluorometry

    NASA Astrophysics Data System (ADS)

    Kessel, Line; Glumer, Charlotte; Larsen, Michael

    2003-10-01

    Type 2 diabetes mellitus is a global epidemic, with the number of affected subjects exceeding 4% of the adult population world-wide. Undiagnosed and untreated, the disease results in long-term complications such as myocardial infarction, stroke, and blindness. Treatment reduces the number and severity of long-term complications, but treatment is often delayed by a time-lag of 10 years or more from the onset of disease to diagnosis. Earlier diagnosis can be achieved by systematic screening programs, but the potential time gained is unknown. The aim of the present study was to develop a mathematical model estimating the prediagnostic duration of type 2 diabetes mellitus using lens autofluorescence as an indicator of lifetime glycemic load. Fluorometry of the human lens is a quantitative measurement that is attractive because of the ease with which it can be performed. It is our hope that lens fluorometry will prove useful in estimating the prediagnostic duration of type 2 diabetes mellitus in population studies, a property of profound clinical relevance that is difficult to estimate by any other currently available method.

  14. Estimation of 3-D conduction velocity vector fields from cardiac mapping data.

    PubMed

    Barnette, A R; Bayly, P V; Zhang, S; Walcott, G P; Ideker, R E; Smith, W M

    2000-08-01

    A method to estimate three-dimensional (3-D) conduction velocity vector fields in cardiac tissue is presented. The speed and direction of propagation are found from polynomial "surfaces" fitted to space-time (x, y, z, t) coordinates of cardiac activity. The technique is applied to sinus rhythm and paced rhythm mapped with plunge needles at 396-466 sites in the canine myocardium. The method was validated on simulated 3-D plane and spherical waves. For simulated data, conduction velocities were estimated with an accuracy of 1%-2%. In experimental data, estimates of conduction speeds during paced rhythm were slower than those found during normal sinus rhythm. Vector directions were also found to differ between different types of beats. The technique was able to distinguish between premature ventricular contractions and sinus beats and between sinus and paced beats. The proposed approach to computing velocity vector fields provides an automated, physiological, and quantitative description of local electrical activity in 3-D tissue. This method may provide insight into abnormal conduction associated with fatal ventricular arrhythmias.
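    A sketch of the fitting idea: least-squares fit a quadratic activation-time surface t(x, y, z) and convert its gradient to a velocity via v = ∇t/|∇t|², so the speed is 1/|∇t| and the direction follows propagation. The polynomial order and the synthetic test are assumptions for illustration:

```python
import numpy as np

def velocity_from_activation(points, times, at):
    """Fit a quadratic polynomial surface t(x, y, z) to activation
    coordinates (points: n x 3, times: n) and return the conduction
    velocity vector at location `at` via v = grad(t)/|grad(t)|^2."""
    x, y, z = points.T
    A = np.column_stack([np.ones_like(x), x, y, z,
                         x*y, x*z, y*z, x**2, y**2, z**2])
    c, *_ = np.linalg.lstsq(A, times, rcond=None)
    x0, y0, z0 = at
    grad = np.array([
        c[1] + c[4]*y0 + c[5]*z0 + 2*c[7]*x0,
        c[2] + c[4]*x0 + c[6]*z0 + 2*c[8]*y0,
        c[3] + c[5]*x0 + c[6]*y0 + 2*c[9]*z0,
    ])
    return grad / np.dot(grad, grad)

# Synthetic plane wave travelling at 0.5 mm/ms along +x: t = x / 0.5
rng = np.random.default_rng(3)
pts = rng.uniform(0, 10, (60, 3))
t = pts[:, 0] / 0.5 + rng.normal(0, 0.05, 60)
print(velocity_from_activation(pts, t, np.array([5.0, 5.0, 5.0])))
```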

  15. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    NASA Technical Reports Server (NTRS)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-01-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability would provide a needed input for estimating the success rate of any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, and on a user-provided fault tree and modes of failure. This paper will briefly describe the capabilities of NESTEM, QRAS and the interface. Also, in this presentation we will describe, using an example, the stepwise process the interface uses.

  16. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    NASA Astrophysics Data System (ADS)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-10-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability would provide a needed input for estimating the success rate of any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, and on a user-provided fault tree and modes of failure. This paper will briefly describe the capabilities of NESTEM, QRAS and the interface. Also, in this presentation we will describe, using an example, the stepwise process the interface uses.

  17. Precipitation Estimation from the ARM Distributed Radar Network During the MC3E Campaign

    NASA Astrophysics Data System (ADS)

    Theisen, A. K.; Giangrande, S. E.; Collis, S. M.

    2012-12-01

    The DOE - NASA Midlatitude Continental Convective Cloud Experiment (MC3E) was the first demonstration of the Atmospheric Radiation Measurement (ARM) Climate Research Facility scanning precipitation radar platforms. A goal for the MC3E field campaign over the Southern Great Plains (SGP) facility was to demonstrate the capabilities of ARM polarimetric radar systems for providing unique insights into deep convective storm evolution and microphysics. One practical application of interest for climate studies and the forcing of cloud resolving models is improved Quantitative Precipitation Estimates (QPE) from ARM radar systems positioned at SGP. This study presents the results of ARM radar-based precipitation estimates during the 2-month MC3E campaign. Emphasis is on the usefulness of polarimetric C-band radar observations (CSAPR) for rainfall estimation to distances within 100 km of the Oklahoma SGP facility. Collocated ground disdrometer resources, precipitation profiling radars and nearby surface Oklahoma Mesonet gauge records are consulted to evaluate potential ARM radar-based rainfall products and optimal methods. Rainfall products are also evaluated against the regional NEXRAD-standard observations.
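
    The abstract does not give the CSAPR rainfall relations, so the sketch below only illustrates the general form of the estimators typically compared in such studies: a conventional Z-R power law and a polarimetric R(Kdp) power law. The coefficients are common illustrative values, not those used in the MC3E analysis.

```python
import numpy as np

def rain_zr(dbz, a=300.0, b=1.4):
    """Conventional Z-R power law, Z = a * R**b; the a, b here are the
    common NEXRAD convective defaults, not the MC3E/CSAPR values."""
    z_lin = 10.0 ** (np.asarray(dbz) / 10.0)
    return (z_lin / a) ** (1.0 / b)              # rain rate, mm/h

def rain_kdp(kdp, a=25.0, b=0.78):
    """Polarimetric R(Kdp) power law; a, b are illustrative C-band values."""
    kdp = np.asarray(kdp)
    return a * np.sign(kdp) * np.abs(kdp) ** b   # mm/h, sign-preserving

print(rain_zr([30.0, 45.0, 55.0]))     # light to heavy convective rain
print(rain_kdp([0.1, 0.8, 2.5]))       # deg/km -> mm/h
```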

  18. Oxidative DNA damage background estimated by a system model of base excision repair

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokhansanj, B A; Wilson, III, D M

    Human DNA can be damaged by natural metabolism through free radical production. It has been suggested that the equilibrium between innate damage and cellular DNA repair results in an oxidative DNA damage background that potentially contributes to disease and aging. Efforts to quantitatively characterize the human oxidative DNA damage background level based on measuring 8-oxoguanine lesions as a biomarker have led to estimates varying over 3-4 orders of magnitude, depending on the method of measurement. We applied a previously developed and validated quantitative pathway model of human DNA base excision repair, integrating experimentally determined endogenous damage rates and model parameters from multiple sources. Our estimates of at most 100 8-oxoguanine lesions per cell are consistent with the low end of data from biochemical and cell biology experiments, a result robust to model limitations and parameter variation. Our results show the power of quantitative system modeling to interpret composite experimental data and make biologically and physiologically relevant predictions for complex human DNA repair pathway mechanisms and capacity.
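
    The pathway model itself is far richer than this, but the equilibrium idea in the abstract can be illustrated with a first-order sketch: if lesions are created at a constant rate and repaired with first-order kinetics, the standing background is the ratio of the two rates. Both rate constants below are invented for illustration.

```python
# Both rates invented for illustration -- not the paper's fitted parameters
k_dmg = 2000.0    # 8-oxoguanine lesions created per cell per day
k_rep = 40.0      # first-order repair rate constant (1/day)

L_ss = k_dmg / k_rep                 # analytic steady state: 50 lesions/cell

# Forward-Euler check that the dynamics settle at the same level
t_end, dt = 0.5, 1e-4                # days
L = 0.0
for _ in range(int(t_end / dt)):
    L += dt * (k_dmg - k_rep * L)
print(f"analytic: {L_ss:.1f}, simulated: {L:.1f} lesions per cell")
```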

  19. Effects of finite spatial resolution on quantitative CBF images from dynamic PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phelps, M.E.; Huang, S.C.; Mahoney, D.K.

    1985-05-01

    The finite spatial resolution of PET causes the time-activity responses of pixels around the boundaries between gray and white matter regions to contain kinetic components from tissues with different CBFs. CBF values estimated from the kinetics of such mixtures are underestimated because of the nonlinear relationship between the time-activity response and the estimated CBF. Computer simulation is used to investigate these effects on phantoms of circular structures and a realistic brain slice in terms of object size and quantitative CBF values. The calculated CBF image is compared to the case of resolution loss alone. Results show that the size of a high flow region in the CBF image is decreased while that of a low flow region is increased. For brain phantoms, the qualitative appearance of CBF images is not seriously affected, but the estimated CBFs are underestimated by 11 to 16 percent in local gray matter regions (of size 1 cm²), with about a 14 percent reduction in global CBF over the whole slice. It is concluded that the combined effect of finite spatial resolution and the nonlinearity in estimating CBF from dynamic PET is quite significant and must be considered in processing and interpreting quantitative CBF images.
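
    A minimal sketch of the underestimation mechanism, assuming a Kety-type uptake model with a constant arterial input: because tissue activity is concave in flow, inverting the mixed (partial-volume averaged) signal yields a flow below the true area-weighted mean. The flows, partition coefficient, and mixing fraction are illustrative, not the paper's simulation parameters.

```python
import numpy as np
from scipy.optimize import brentq

lam, T = 0.9, 1.0          # partition coefficient (ml/g), scan time (min)

def uptake(f):
    """Tissue activity for a constant, unit arterial input (Kety model);
    monotonically increasing but concave in flow f."""
    return lam * (1.0 - np.exp(-f * T / lam))

f_gray, f_white = 0.80, 0.20    # ml/g/min, illustrative flows
w = 0.5                         # gray-matter fraction in a boundary pixel

mixed = w * uptake(f_gray) + (1.0 - w) * uptake(f_white)
f_apparent = brentq(lambda f: uptake(f) - mixed, 1e-6, 5.0)
f_true = w * f_gray + (1.0 - w) * f_white
print(f"true mean flow {f_true:.2f}, apparent {f_apparent:.2f}")  # ~10% low
```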

  20. Intraoperative perception and estimates on extent of resection during awake glioma surgery: overcoming the learning curve.

    PubMed

    Lau, Darryl; Hervey-Jumper, Shawn L; Han, Seunggu J; Berger, Mitchel S

    2018-05-01

    OBJECTIVE There is ample evidence that extent of resection (EOR) is associated with improved outcomes for glioma surgery. However, it is often difficult to accurately estimate EOR intraoperatively, and surgeon accuracy has yet to be reviewed. In this study, the authors quantitatively assessed the accuracy of intraoperative perception of EOR during awake craniotomy for tumor resection. METHODS A single-surgeon experience of performing awake craniotomies for tumor resection over a 17-year period was examined. Operative reports were retrospectively reviewed for the surgeon's quantitative estimate of EOR. Definitive EOR was based on postoperative MRI. Accuracy of EOR estimation was examined both as a general outcome (gross-total resection [GTR] or subtotal resection [STR]) and quantitatively (within 5% of the EOR on postoperative MRI). Patient demographics, tumor characteristics, and surgeon experience were examined. The effects of accuracy on motor and language outcomes were assessed. RESULTS A total of 451 patients were included in the study. Overall accuracy of intraoperative perception of whether GTR or STR was achieved was 79.6%, and overall accuracy of quantitative perception of resection (within 5% of postoperative MRI) was 81.4%. There was a significant difference (p = 0.049) in accuracy of gross perception over the 17-year period, with improvement in the later years: 1997-2000 (72.6%), 2001-2004 (78.5%), 2005-2008 (80.7%), and 2009-2013 (84.4%). Similarly, there was a significant improvement (p = 0.015) in accuracy of quantitative perception of EOR over the 17-year period: 1997-2000 (72.2%), 2001-2004 (69.8%), 2005-2008 (84.8%), and 2009-2013 (93.4%). This improvement in accuracy is demonstrated by the significantly higher odds of correctly estimating quantitative EOR in the later years of the series on multivariate logistic regression. Insular tumors were associated with the highest accuracy of gross perception (89.3%; p = 0.034) but the lowest accuracy of quantitative perception (61.1% correct; p < 0.001) compared with tumors in other locations. Even after adjusting for surgeon experience, this trend for insular tumors remained. The absence of 1p19q co-deletion was associated with higher quantitative perception accuracy (96.9% vs 81.5%; p = 0.051). Tumor grade, recurrence, diagnosis, and isocitrate dehydrogenase-1 (IDH-1) status were not associated with accurate perception of EOR. Overall, new neurological deficits occurred in 8.4% of cases, and 42.1% of those new neurological deficits persisted beyond the 3-month follow-up. Correct quantitative perception was associated with a lower rate of postoperative motor deficits (2.4%) compared with incorrect perceptions (8.0%; p = 0.029). There were no detectable differences in language outcomes based on perception of EOR. CONCLUSIONS The findings from this study suggest that there is a learning curve associated with the ability to accurately assess intraoperative EOR during glioma surgery, and that it may take more than a decade to become truly proficient. Understanding the factors associated with this ability will allow safer surgeries while maximizing tumor resection.

  1. Visually estimated ejection fraction by two dimensional and triplane echocardiography is closely correlated with quantitative ejection fraction by real-time three dimensional echocardiography.

    PubMed

    Shahgaldi, Kambiz; Gudmundsson, Petri; Manouras, Aristomenis; Brodin, Lars-Ake; Winter, Reidar

    2009-08-25

    Visual assessment of left ventricular ejection fraction (LVEF) is often used in clinical routine despite general recommendations to use quantitative biplane Simpson's (BPS) measurements. Even though quantitative methods are well validated and for many reasons preferable, the feasibility of visual assessment (eyeballing) is superior. To date, only sparse data are available comparing visual EF assessment with quantitative methods. The aim of this study was to compare visual EF assessment by two-dimensional echocardiography (2DE) and triplane echocardiography (TPE) using quantitative real-time three-dimensional echocardiography (RT3DE) as the reference method. Thirty patients were enrolled in the study. Eyeballing EF was assessed using apical 4- and 2-chamber views and the TP mode by two experienced readers blinded to all clinical data. The measurements were compared to quantitative RT3DE. There was an excellent correlation between eyeballing EF by 2D and TP vs 3DE (r = 0.91 and 0.95, respectively) without any significant bias (-0.5 +/- 3.7% and -0.2 +/- 2.9%, respectively). Intraobserver variability was 3.8% for eyeballing 2DE, 3.2% for eyeballing TP, and 2.3% for quantitative 3D-EF. Interobserver variability was 7.5% for eyeballing 2D and 8.4% for eyeballing TP. Visual estimation of LVEF both by 2D and TP by an experienced reader correlates well with quantitative EF determined by RT3DE. There was an apparent trend toward smaller variability using TP in comparison to 2D; this was, however, not statistically significant.
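
    A minimal sketch of the comparison statistics reported above (correlation plus a Bland-Altman-style bias), computed on invented paired EF readings; the study's actual data are not reproduced here.

```python
import numpy as np

# Invented paired readings (%): eyeballed triplane EF vs quantitative RT3DE
ef_eye = np.array([25, 62, 48, 55, 35, 60, 41, 58, 30, 52], dtype=float)
ef_3d  = np.array([27, 60, 49, 56, 37, 61, 43, 57, 33, 52], dtype=float)

r = np.corrcoef(ef_eye, ef_3d)[0, 1]       # Pearson correlation
diff = ef_eye - ef_3d
bias = diff.mean()                         # Bland-Altman-style bias
sd   = diff.std(ddof=1)
print(f"r = {r:.2f}, bias = {bias:+.1f} +/- {sd:.1f} %")
```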

  2. Detection of sea otters in boat-based surveys of Prince William Sound, Alaska

    USGS Publications Warehouse

    Udevitz, Mark S.; Bodkin, James L.; Costa, Daniel P.

    1995-01-01

    Boat-based surveys have been commonly used to monitor sea otter populations, but there has been little quantitative work to evaluate detection biases that may affect these surveys. We used ground-based observers to investigate sea otter detection probabilities in a boat-based survey of Prince William Sound, Alaska. We estimated that 30% of the otters present on surveyed transects were not detected by boat crews. Approximately half (53%) of the undetected otters were missed because the otters left the transects, apparently in response to the approaching boat. Unbiased estimates of detection probabilities will be required for obtaining unbiased population estimates from boat-based surveys of sea otters. Therefore, boat-based surveys should include methods to estimate sea otter detection probabilities under the conditions specific to each survey. Unbiased estimation of detection probabilities with ground-based observers requires either that the ground crews detect all of the otters in observed subunits, or that there are no errors in determining which crews saw each detected otter. Ground-based observer methods may be appropriate in areas where nearly all of the sea otter habitat is potentially visible from ground-based vantage points.
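
    A minimal sketch of the double-observer correction implied above, under the stated assumption that ground crews detect all otters in the observed subunits; all counts are invented.

```python
# All counts invented; assumes ground observers see every otter present
ground_total  = 240    # otters confirmed on transects by ground crews
boat_detected = 168    # of those, also recorded by the boat crew

p_detect = boat_detected / ground_total    # 0.70 -> 30% missed, as reported
raw_boat_count = 1050                      # uncorrected count from a survey
n_hat = raw_boat_count / p_detect          # bias-corrected abundance
print(f"p = {p_detect:.2f}, corrected count = {n_hat:.0f}")
```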

  3. A Spatial Method to Calculate Small-Scale Fisheries Extent

    NASA Astrophysics Data System (ADS)

    Johnson, A. F.; Moreno-Báez, M.; Giron-Nava, A.; Corominas, J.; Erisman, B.; Ezcurra, E.; Aburto-Oropeza, O.

    2016-02-01

    Despite global catch per unit effort having redoubled since the 1950s, the global fishing fleet is estimated to be twice the size that the oceans can sustainably support. In order to gauge the collateral impacts of fishing intensity, we must be able to estimate the spatial extent and the number of fishing vessels in the oceans. The methods that currently exist are built around electronic tracking and logbook systems and generally focus on industrial fisheries. Spatial extent therefore remains elusive for many small-scale fishing fleets, even though these fisheries land the same biomass for human consumption as industrial fisheries. Current methods are data-intensive and require extensive extrapolation when estimates are made across large spatial scales. We present an accessible, spatial method of calculating the extent of small-scale fisheries based on two simple measures that are available, or at least easily estimable, in even the most data-poor fisheries: the number of boats and the local coastal human population. We demonstrate that this method is fishery-type independent and can be used to quantitatively evaluate the efficacy of growth in small-scale fisheries. This method provides an important first step towards estimating the fishing extent of the small-scale fleet, globally.
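
    The paper's exact equations are not given in the abstract, so the sketch below is a hypothetical stand-in for the idea: regress boat numbers on coastal population in log-log space, then scale the predicted fleet by an assumed per-boat operating footprint to obtain a fished extent. All numbers are invented.

```python
import numpy as np

# Invented census pairs: coastal population vs counted small-scale boats
coastal_pop = np.array([1200.0, 5400.0, 20000.0, 88000.0])   # people
boats       = np.array([15.0, 60.0, 190.0, 700.0])           # skiffs

slope, intercept = np.polyfit(np.log10(coastal_pop), np.log10(boats), 1)

def predict_boats(pop):
    """Predict fleet size for a settlement with no boat census."""
    return 10.0 ** (intercept + slope * np.log10(pop))

area_per_boat_km2 = 25.0            # assumed mean operating footprint
pop_new = 12000.0
extent_km2 = predict_boats(pop_new) * area_per_boat_km2
print(f"~{predict_boats(pop_new):.0f} boats, ~{extent_km2:.0f} km2 fished")
```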

  4. Estimating the Mg# and AlVI content of biotite and chlorite from shortwave infrared reflectance spectroscopy: Predictive equations and recommendations for their use

    NASA Astrophysics Data System (ADS)

    Lypaczewski, Philip; Rivard, Benoit

    2018-06-01

    Shortwave infrared (SWIR, 1000-2500 nm) reflectance spectra of biotite and chlorite were investigated to establish quantitative relationships between spectral metrics and mineral chemistry, as determined by electron microprobe analysis (EMPA). Samples spanning a broad range of mineral compositions were used to establish regression equations for Mg#, which can be estimated to within ±3 and ±5 Mg# units, and for AlVI content, which can be estimated to within ±0.044 AlVI (11 O) and ±0.09 AlVI (14 O), for biotite and chlorite respectively. Both minerals have absorptions at common positions (1400, 2250, and 2330 nm), and spectral interference may occur in mineral mixtures. For an equivalent Mg#, the absorptions of chlorite are offset 1-15 nm toward longer wavelengths relative to those of biotite, so if the incorrect mineral is identified, errors in the estimated composition may occur. Additionally, the 2250 nm absorption, which is related to Al(Mg,Fe)-OH in both minerals, is strongly affected by both the AlVI content and the Mg#; this can lead to erroneous Mg# estimates in low-AlVI samples. Recommendations to mitigate these issues are presented.
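
    A minimal sketch of how such a predictive equation could be built, assuming the position of the 2250 nm absorption is the spectral metric regressed against EMPA Mg#; the calibration pairs and the resulting coefficients below are invented, not the published equations.

```python
import numpy as np

# Invented calibration pairs: 2250 nm feature position (nm) vs EMPA Mg#
wl  = np.array([2246.0, 2248.5, 2251.0, 2254.0, 2257.5])
mgn = np.array([78.0, 65.0, 52.0, 36.0, 20.0])

slope, intercept = np.polyfit(wl, mgn, 1)

def mg_from_wavelength(w_nm):
    """Hypothetical biotite-style predictive equation (+/- a few Mg# units)."""
    return slope * w_nm + intercept

print(f"Mg# ~= {mg_from_wavelength(2250.0):.0f}")
```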

  5. A BAYESIAN METHOD FOR CALCULATING REAL-TIME QUANTITATIVE PCR CALIBRATION CURVES USING ABSOLUTE PLASMID DNA STANDARDS

    EPA Science Inventory

    In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignore...
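
    The abstract is truncated, but the general technique can be sketched: a Bayesian fit of the calibration line Ct = a + b·log10(copies) to a plasmid dilution series, here via a minimal random-walk Metropolis sampler, followed by inversion of the posterior to interval-estimate an unknown concentration. All Ct values, priors, and tuning constants are illustrative, not the EPA method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Plasmid standard dilution series (copies) and measured Ct values (invented)
q  = np.array([1e2, 1e3, 1e4, 1e5, 1e6, 1e7])
ct = np.array([33.1, 29.8, 26.4, 23.0, 19.7, 16.3])
x = np.log10(q)

def log_post(theta):
    """Log posterior: flat priors on a, b; prior ~ 1/sigma via flat log_s."""
    a, b, log_s = theta
    s = np.exp(log_s)
    resid = ct - (a + b * x)
    return -len(ct) * np.log(s) - 0.5 * np.sum(resid ** 2) / s ** 2

# Random-walk Metropolis, initialized at the least-squares solution
b0, a0 = np.polyfit(x, ct, 1)
theta = np.array([a0, b0, np.log(0.1)])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.05, 0.012, 0.2])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
post = np.array(samples[5000:])          # discard burn-in

# Invert the calibration curve for an unknown sample with Ct = 24.2
log_q = (24.2 - post[:, 0]) / post[:, 1]
lo, mid, hi = np.percentile(log_q, [2.5, 50.0, 97.5])
print(f"log10(copies) = {mid:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```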

  6. A Comparative Assessment of Greek Universities' Efficiency Using Quantitative Analysis

    ERIC Educational Resources Information Center

    Katharaki, Maria; Katharakis, George

    2010-01-01

    In part due to the increased demand for higher education, typical evaluation frameworks for universities often address the key issue of available resource utilisation. This study seeks to estimate the efficiency of 20 public universities in Greece through quantitative analysis (including performance indicators, data envelopment analysis (DEA) and…

  7. Employment from Solar Energy: A Bright but Partly Cloudy Future.

    ERIC Educational Resources Information Center

    Smeltzer, K. K.; Santini, D. J.

    A comparison of quantitative and qualitative employment effects of solar and conventional systems can prove the increased employment postulated as one of the significant secondary benefits of a shift from conventional to solar energy use. Current quantitative employment estimates show solar technology-induced employment to be generally greater…

  8. 78 FR 53336 - List of Fisheries for 2013

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-29

    ... provided on the LOF are solely used for descriptive purposes and will not be used in determining future... this information to determine whether the fishery can be classified on the LOF based on quantitative... does not have a quantitative estimate of the number of mortalities and serious injuries of pantropical...

  9. The calibration of video cameras for quantitative measurements

    NASA Technical Reports Server (NTRS)

    Snow, Walter L.; Childers, Brooks A.; Shortis, Mark R.

    1993-01-01

    Several different recent applications of velocimetry at Langley Research Center are described in order to show the need for video camera calibration for quantitative measurements. Problems peculiar to video sensing are discussed, including synchronization and timing, targeting, and lighting. The extension of the measurements to include radiometric estimates is addressed.

  10. 40 CFR 260.11 - References.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...). (7) ASTM E 168-88, “Standard Practices for General Techniques of Infrared Quantitative Analysis,” IBR...-Visible Quantitative Analysis,” IBR approved for § 264.1063. (9) ASTM E 260-85, “Standard Practice for..., Research Triangle Park, NC. (1) “Screening Procedures for Estimating the Air Quality Impact of Stationary...

  11. 40 CFR 260.11 - References.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...). (7) ASTM E 168-88, “Standard Practices for General Techniques of Infrared Quantitative Analysis,” IBR...-Visible Quantitative Analysis,” IBR approved for § 264.1063. (9) ASTM E 260-85, “Standard Practice for..., Research Triangle Park, NC. (1) “Screening Procedures for Estimating the Air Quality Impact of Stationary...

  12. 40 CFR 260.11 - References.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...). (7) ASTM E 168-88, “Standard Practices for General Techniques of Infrared Quantitative Analysis,” IBR...-Visible Quantitative Analysis,” IBR approved for § 264.1063. (9) ASTM E 260-85, “Standard Practice for..., Research Triangle Park, NC. (1) “Screening Procedures for Estimating the Air Quality Impact of Stationary...

  13. ESTIMATION OF MICROBIAL REDUCTIVE TRANSFORMATION RATES FOR CHLORINATED BENZENES AND PHENOLS USING A QUANTITATIVE STRUCTURE-ACTIVITY RELATIONSHIP APPROACH

    EPA Science Inventory

    A set of literature data was used to derive several quantitative structure-activity relationships (QSARs) to predict the rate constants for the microbial reductive dehalogenation of chlorinated aromatics. Dechlorination rate constants for 25 chloroaromatics were corrected for th...

  14. QUANTITATIVE EVALUATION OF BROMODICHLOROMETHANE METABOLISM BY RECOMBINANT RAT AND HUMAN CYTOCHROME P450S

    EPA Science Inventory

    ABSTRACT
    We report quantitative estimates of the parameters for metabolism of bromodichloromethane (BDCM) by recombinant preparations of hepatic cytochrome P450s (CYPs) from rat and human. BDCM is a drinking water disinfectant byproduct that has been implicated in liver, kidn...

  15. Analysis of high accuracy, quantitative proteomics data in the MaxQB database.

    PubMed

    Schaab, Christoph; Geiger, Tamar; Stoehr, Gabriele; Cox, Juergen; Mann, Matthias

    2012-03-01

    MS-based proteomics generates rapidly increasing amounts of precise and quantitative information. Analysis of individual proteomic experiments has made great strides, but the crucial ability to compare and store information across different proteome measurements still presents many challenges. For example, it has been difficult to avoid contamination of databases with low quality peptide identifications, to control for the inflation of false positive identifications when combining data sets, and to integrate quantitative data. Although the contamination with low quality identifications has been addressed by joint analysis of deposited raw data in some public repositories, we reasoned that there should be a role for a database specifically designed for high resolution and quantitative data. Here we describe a novel database, termed MaxQB, that stores and displays collections of large proteomics projects and allows joint analysis and comparison. We demonstrate the analysis tools of MaxQB using proteome data of 11 different human cell lines and 28 mouse tissues. The database-wide false discovery rate is controlled by adjusting the project-specific cutoff scores for the combined data sets. The 11 cell line proteomes together identify proteins expressed from more than half of all human genes. For each protein of interest, expression levels estimated by label-free quantification can be visualized across the cell lines. Similarly, the expression rank order and estimated amount of each protein within each proteome are plotted. We used MaxQB to calculate the signal reproducibility of the detected peptides for the same proteins across different proteomes. The Spearman rank correlation between peptide intensity and detection probability of identified proteins was greater than 0.8 for 64% of the proteome, whereas a minority of proteins had a negative correlation. This information can be used to pinpoint false protein identifications, independently of peptide database scores. The information contained in MaxQB, including high resolution fragment spectra, is accessible to the community via a user-friendly web interface at http://www.biochem.mpg.de/maxqb.
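
    A minimal sketch of the reproducibility measure described above: the Spearman rank correlation between peptide intensity and detection frequency for one protein, on invented numbers (scipy's spearmanr stands in for whatever MaxQB computes internally).

```python
import numpy as np
from scipy.stats import spearmanr

# Invented per-peptide summaries for one protein across many proteomes
intensity   = np.array([2.1e7, 8.3e6, 5.5e6, 1.9e6, 7.0e5, 2.2e5])
detect_freq = np.array([1.00, 0.95, 0.90, 0.70, 0.45, 0.20])  # share of runs

rho, p = spearmanr(intensity, detect_freq)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
# rho near +1 is the well-behaved case; a negative rho flags peptides whose
# detection does not track abundance, i.e. candidate false identifications
```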

  16. Evaluation of precipitation estimates over CONUS derived from satellite, radar, and rain gauge datasets (2002-2012)

    NASA Astrophysics Data System (ADS)

    Prat, O. P.; Nelson, B. R.

    2014-10-01

    We use a suite of quantitative precipitation estimates (QPEs) derived from satellite, radar, and surface observations to derive precipitation characteristics over CONUS for the period 2002-2012. This comparison effort includes satellite multi-sensor datasets (bias-adjusted TMPA 3B42 and near-real-time 3B42RT), radar estimates (NCEP Stage IV), and rain gauge observations. Remotely sensed precipitation datasets are compared with surface observations from the Global Historical Climatology Network (GHCN-Daily) and from PRISM (Parameter-elevation Regressions on Independent Slopes Model). The comparisons are performed at the annual, seasonal, and daily scales over the River Forecast Centers (RFCs) for CONUS. Annual average rain rates show satisfactory agreement with GHCN-D for all products over CONUS (± 6%). However, differences at the RFC scale are larger, in particular for the near-real-time 3B42RT precipitation estimates (-33 to +49%). At annual and seasonal scales, the bias-adjusted 3B42 showed marked improvement over its near-real-time counterpart 3B42RT. However, large biases remained for 3B42 over the western US at higher average accumulations (≥ 5 mm day-1) with respect to GHCN-D surface observations. At the daily scale, 3B42RT performed poorly in capturing extreme daily precipitation (> 4 in day-1) over the Northwest. Furthermore, the conditional and contingency analyses conducted illustrate the challenge of retrieving extreme precipitation from remote sensing estimates.
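
    A minimal sketch of the kind of bias statistic quoted above, assuming bias is expressed as the percent difference of total QPE accumulation against collocated gauge totals; the accumulations are invented.

```python
import numpy as np

def percent_bias(qpe, gauge):
    """Percent bias of QPE accumulations against collocated gauge totals."""
    qpe, gauge = np.asarray(qpe, float), np.asarray(gauge, float)
    return 100.0 * (qpe.sum() - gauge.sum()) / gauge.sum()

# Invented seasonal accumulations (mm) over one RFC
gauge = [410.0, 388.0, 295.0, 501.0]
qpe   = [455.0, 402.0, 340.0, 560.0]
print(f"bias = {percent_bias(qpe, gauge):+.1f}%")
```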

  17. Estimation of aquifer scale proportion using equal area grids: assessment of regional scale groundwater quality

    USGS Publications Warehouse

    Belitz, Kenneth; Jurgens, Bryant C.; Landon, Matthew K.; Fram, Miranda S.; Johnson, Tyler D.

    2010-01-01

    The proportion of an aquifer with constituent concentrations above a specified threshold (high concentrations) is taken as a nondimensional measure of regional scale water quality. If computed on the basis of area, it can be referred to as the aquifer scale proportion. A spatially unbiased estimate of aquifer scale proportion and a confidence interval for that estimate are obtained through the use of equal area grids and the binomial distribution. Traditionally, the confidence interval for a binomial proportion is computed using either the standard interval or the exact interval. Research from the statistics literature has shown that the standard interval should not be used and that the exact interval is overly conservative. On the basis of coverage probability and interval width, the Jeffreys interval is preferred. If more than one sample per cell is available, cell declustering is used to estimate the aquifer scale proportion, and Kish's design effect may be useful for estimating an effective number of samples. The binomial distribution is also used to quantify the adequacy of a grid with a given number of cells for identifying a small target, defined as a constituent that is present at high concentrations in a small proportion of the aquifer. Case studies illustrate a consistency between approaches that use one well per grid cell and many wells per cell. The methods presented in this paper provide a quantitative basis for designing a sampling program and for utilizing existing data.
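
    A minimal sketch of the recommended Jeffreys interval: the equal-tailed credible interval of the Beta(k + 1/2, n - k + 1/2) posterior arising from the Jeffreys prior, with the usual endpoint conventions when k = 0 or k = n.

```python
from scipy.stats import beta

def jeffreys_interval(k, n, conf=0.95):
    """Equal-tailed Jeffreys interval for a binomial proportion k/n:
    quantiles of the Beta(k + 1/2, n - k + 1/2) posterior, with the
    endpoints pinned at 0 and 1 for the k = 0 and k = n edge cases."""
    tail = (1.0 - conf) / 2.0
    lo = 0.0 if k == 0 else beta.ppf(tail, k + 0.5, n - k + 0.5)
    hi = 1.0 if k == n else beta.ppf(1.0 - tail, k + 0.5, n - k + 0.5)
    return lo, hi

# e.g. 7 of 60 equal-area cells containing a high-concentration sample
lo, hi = jeffreys_interval(7, 60)
print(f"aquifer scale proportion = {7/60:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```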

  18. Antibodies against toluene diisocyanate protein conjugates. Three methods of measurement.

    PubMed

    Patterson, R; Harris, K E; Zeiss, C R

    1983-12-01

    With the use of canine antisera against toluene diisocyanate (TDI)-dog serum albumin (DSA), techniques for measuring antibody against TDI-DSA were evaluated. An ammonium sulfate precipitation assay showed suggestive evidence of antibody binding, but high levels of TDI-DSA precipitation in the absence of antibody limit the usefulness of this technique. Double-antibody co-precipitation techniques measure total antibody or Ig-class antibody against 125I-TDI-DSA; these techniques are quantitative. The polystyrene tube radioimmunoassay is a highly sensitive method for detecting and quantitatively estimating IgG antibody. The enzyme-linked immunosorbent assay is a readily adaptable method for the quantitative estimation of IgG, IgA, and IgM against TDI-homologous proteins. All these techniques were compared, and the results are demonstrated using the same serum sample for analysis.

  19. Effects of a 20 year rain event: a quantitative microbial risk assessment of a case of contaminated bathing water in Copenhagen, Denmark.

    PubMed

    Andersen, S T; Erichsen, A C; Mark, O; Albrechtsen, H-J

    2013-12-01

    Quantitative microbial risk assessments (QMRAs) often lack data on water quality, and the many assumptions this forces lead to great uncertainty in the QMRA. Here, the quantity of waste water contamination was estimated and included in a QMRA of an extreme rain event that led to combined sewer overflow (CSO) into bathing water where an ironman competition later took place. Two dynamic models, (1) a drainage model and (2) a 3D hydrodynamic model, estimated the dilution of waste water from source to recipient. The drainage model estimated that 2.6% of the waste water was left in the system before the CSO, and the hydrodynamic model estimated that 4.8% of the recipient bathing water came from the CSO, so on average there was 0.13% waste water in the bathing water during the ironman competition. The total estimated incidence rate from a conservative estimate of the pathogenic load of five reference pathogens was 42%, comparable to the 55% found in an epidemiological study of the case. The combination of applying dynamic models and exposure data led to an improved QMRA that included an estimate of the dilution factor. This approach has not been described previously.
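
    The dilution chain in the abstract composes multiplicatively, which can be checked directly (the product is roughly 0.12-0.13%, depending on rounding of the two model outputs):

```python
# Composing the two model outputs reported in the abstract
frac_ww_before_cso = 0.026   # drainage model: waste water left in the system
frac_cso_in_bay    = 0.048   # hydrodynamic model: CSO share of bathing water

frac_ww_in_bay = frac_ww_before_cso * frac_cso_in_bay
print(f"waste water in bathing water: {frac_ww_in_bay:.2%}")  # ~0.12-0.13%
```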

  20. Benefits of dynamic mobility applications : preliminary estimates from the literature.

    DOT National Transportation Integrated Search

    2012-12-01

    This white paper examines the available quantitative information on the potential mobility benefits of the connected vehicle Dynamic Mobility Applications (DMA). This work will be refined as more and better estimates of benefits from mobility applica...
